CN106228200B - Action identification method independent of action information acquisition equipment - Google Patents
Action identification method independent of action information acquisition equipment
- Publication number
- CN106228200B (Application CN201610903076.4A)
- Authority
- CN
- China
- Prior art keywords
- motion information
- motion
- different
- information acquisition
- acquisition equipment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
Abstract
The invention relates to a motion recognition method independent of motion information acquisition equipment, which is applicable to different motion information acquisition devices. The method comprises two stages, namely a model training stage and a model prediction stage: the model training stage establishes a mapping relation between motion information and actions, and the model prediction stage computes the corresponding action category from the collected motion information. The invention addresses the compatibility problem of deploying one action recognition method on different motion information acquisition terminal devices, and specifically considers the influence of factors such as different sampling frequencies, different wearing positions, and differences in sensor accuracy and sensitivity on the action recognition result. The invention can be applied to terminal devices such as smart phones, tablet computers, wristbands and wristwatches with embedded inertial sensor units such as accelerometers, gyroscopes or magnetometers.
Description
Technical Field
The invention relates to a motion identification method independent of motion information acquisition equipment, which is applicable to different motion information acquisition devices and can be applied to terminal devices such as smart phones, tablet computers, wristbands and wristwatches with embedded inertial sensor units such as accelerometers, gyroscopes or magnetometers.
Background
In recent years, with the development of MEMS technology, various types of sensors (such as an accelerometer, a gyroscope, a magnetometer, an infrared camera, and the like) are embedded in more and more terminal devices (such as a smart phone, a tablet computer, a wrist strap, a wristwatch, and the like). Accordingly, various applications surrounding these sensors have emerged, such as in the health care field, including: limb action recognition, fall detection and alarm, heart rate monitoring, abnormal gait analysis and quantitative evaluation and the like.
However, most of the applications on the market are only suitable for specific models (brands) of terminal devices and cannot be effectively compatible with all types of terminal devices, essentially because the information collected by different terminal devices differs. The main reasons include the following aspects:
(1) the models of the sensors embedded in different terminal devices are different, and the indexes of the sensors, such as sensitivity, precision, detection limit and the like, are different;
(2) the sampling frequencies of the sensors set in different terminal devices are different;
(3) if the terminal equipment runs other application programs while acquiring the sensor information, the sampling frequency of the sensor fluctuates;
(4) after abnormal events such as the terminal device being dropped, the embedded sensor may exhibit drift.
In particular, for motion recognition applications, the existing literature explicitly reports that when existing motion recognition algorithms are deployed on different terminal devices, the accuracy of motion recognition decreases. Therefore, how to design an action recognition method that does not depend on the information acquisition equipment and is applicable to various types of terminal devices is an urgent problem to be solved.
Disclosure of Invention
Aiming at the problem that existing action recognition methods generally depend on the information acquisition equipment, the invention provides an action recognition method independent of motion information acquisition equipment. The method first constructs a standard sampler that normalizes motion information collected at different sampling rates from different terminal devices to a unified standard sampling frequency; it then constructs a clusterer that distinguishes the wearing positions of different terminal devices; finally, it constructs an integrated action recognition framework consisting of multiple weak classifiers, so as to eliminate the influence of differences in the precision, sensitivity and other specifications of the sensors built into different terminal devices.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
A motion recognition method independent of motion information acquisition equipment comprises two stages, namely a model training stage and a model prediction stage: the model training stage establishes a mapping relation between motion information and actions, and the model prediction stage computes the corresponding action category from the acquired motion information.
Preferably, the model training phase specifically comprises the following steps:
1) acquiring motion information, namely wearing different motion information acquisition equipment at different positions of a human body, and then recording the motion information of the human body when different actions are executed;
2) the step of sampling frequency standardization, namely, carrying out frequency normalization on original motion information from different motion information acquisition equipment by using a down-sampling method;
3) a step of feature extraction and feature selection, namely extracting features from the original motion information by a time domain method, a frequency domain method or a nonlinear analysis method, and screening the extracted features by a mutual information correlation method, a genetic algorithm, a sparse optimization method or a principal component analysis method, so as to select the features that best characterize the motion information;
4) a step of terminal device wearing-position recognition and clustering, namely constructing a terminal device wearing-position recognition clusterer by a supervised or unsupervised learning method according to the features extracted and screened in step 3);
5) a step of random forest action recognition modeling, namely establishing a corresponding action recognition model for each wearing position by a random forest method.
Preferably, in step 1), the motion information acquisition equipment includes, but is not limited to, a smart phone, a tablet computer, a wristwatch, and a bracelet; the wearing position of the motion information acquisition equipment includes, but is not limited to, the wrist, forearm, upper arm, waist, thigh, and lower leg; the actions performed by the human body include, but are not limited to: sitting still, lying, standing, walking slowly, going upstairs, going downstairs, and running; the acquired motion information includes, but is not limited to, acceleration, angular velocity, and magnetic field strength along the X, Y and Z axes of three-dimensional space.
Preferably, the frequency normalization in step 2) is to resample (down-sample) the motion information whose sampling frequency is higher than 25 Hz, so that the new sampling frequency of the motion information is 25 Hz.
Preferably, in step 3), the features extracted by using the time domain method include, but are not limited to, motion amplitude, angle, speed; the features extracted by the frequency domain method include, but are not limited to, motion frequency and energy; the features extracted by the nonlinear analysis method include but are not limited to approximate entropy and multi-scale entropy.
Preferably, the supervised learning method in step 4) includes, but is not limited to, neural networks, support vector machines, and decision trees.
Preferably, the unsupervised learning method in step 4) includes, but is not limited to, self-organizing map neural networks and distance discriminant methods.
Preferably, the clusterer in step 4) first identifies the wearing position of the terminal device during action recognition, and action recognition models are then established separately for the different wearing positions.
Preferably, the model prediction stage is specifically as follows: the motion information acquisition equipment is worn on a part of the human body, the motion information of the human body while performing the action to be recognized is acquired, the raw information is then passed in sequence through the standard sampler, the feature extraction and feature selection module, the wearing-position clusterer and the action recognition model, and the final action recognition result is output.
Compared with the prior art, the invention has the beneficial effects that:
the method provided by the invention focuses on the compatibility problem of deployment of one action recognition method on different motion information acquisition terminal devices (including but not limited to smart phones, tablet computers, wristwatches, bracelets and the like), and specifically considers the influence of factors such as different sampling frequencies, different wearing positions, different accuracy and sensitivity of sensors and the like on action recognition results; the method has the advantages of strong compatibility, high accuracy and the like, so that the applicability of the motion recognition technology in the wide specific application field can be greatly improved.
Drawings
FIG. 1 is a block diagram of the system of the present invention;
FIG. 2 is a built-in acceleration sensor sampling frequency table of a typical motion information collection device;
FIG. 3 shows the same motion acceleration signals collected by different terminal devices;
FIG. 4 is a diagram of different motion acceleration signals collected by the same terminal device;
fig. 5 is a table of the result of identifying the wearing position of the terminal device;
fig. 6 is a comparison table of motion recognition accuracy.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, the motion recognition method independent of a motion information acquisition device is a supervised learning method and includes two stages: model building and model prediction.
The model building phase mainly comprises the following steps:
(1) Motion information acquisition. Different motion information acquisition devices (such as a smart phone, a tablet computer, a wristwatch, a bracelet and the like) are worn at different positions of the human body (such as the wrist, forearm, upper arm, waist, thigh, calf and the like), and the motion information of the human body while performing different actions (such as sitting still, lying, standing, walking, going upstairs, going downstairs, running and the like) is recorded; the sensors built into different terminal devices differ, and may include accelerometers, gyroscopes, magnetometers and the like.
(2) Sampling frequency standardization. The sampling frequency of the built-in sensor differs between terminal devices. Fig. 2 lists the maximum acceleration sensor sampling frequencies supported by several typical terminal devices; the differences between models are very large, ranging from 25 to 200 Hz. According to the Nyquist-Shannon sampling theorem, motion information acquired at different sampling frequencies contains different frequency components, so the sampling frequencies of information from different terminal devices must first be normalized. Common methods include interpolation (up-sampling) and down-sampling. Considering that interpolation artificially introduces new errors, and that for motion recognition the human body does not normally produce motion components above 10 Hz, the invention adopts the down-sampling method and unifies the sampling frequency to 25 Hz; that is, for terminal devices whose sampling frequency is higher than 25 Hz, the acquired motion information is resampled.
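For illustration only, a minimal Python sketch of this down-sampling step follows (assuming NumPy and SciPy are available; the function name standardize_sampling_rate and the dummy data are illustrative and not part of the patent):

```python
from fractions import Fraction

import numpy as np
from scipy.signal import resample_poly

STANDARD_FS = 25  # Hz, the unified target sampling frequency


def standardize_sampling_rate(signal: np.ndarray, device_fs: float) -> np.ndarray:
    """Down-sample a (n_samples, n_channels) motion signal to 25 Hz.

    Signals already at or below 25 Hz are returned unchanged, mirroring the
    down-sampling-only policy described above.
    """
    if device_fs <= STANDARD_FS:
        return signal
    # Express the rate change as a rational factor; polyphase resampling
    # low-pass filters before decimating and so avoids aliasing.
    ratio = Fraction(STANDARD_FS, int(device_fs)).limit_denominator(1000)
    return resample_poly(signal, up=ratio.numerator, down=ratio.denominator, axis=0)


# Example: 10 s of tri-axial acceleration recorded at 100 Hz (e.g. a smartwatch)
raw = np.random.randn(1000, 3)
std = standardize_sampling_rate(raw, 100)  # -> about 250 samples at 25 Hz
print(std.shape)
```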
(3) Feature extraction and feature selection. Since the motion information collected while the human body performs a complete action usually contains many data points, it is difficult to analyze the raw data directly. The raw information therefore needs feature extraction; common feature extraction methods include, but are not limited to, time domain methods (motion amplitude, angle, velocity, etc.), frequency domain methods (motion frequency, energy, etc.) and nonlinear analysis methods (approximate entropy, multi-scale entropy, etc.). Meanwhile, because many terminal devices have several types of built-in sensors (accelerometers, gyroscopes, magnetometers, etc.) and these sensors are multi-axis (two-axis or three-axis), the dimension of the extractable features is usually high. In practical applications, if it cannot be determined which features best characterize each action, feature selection (feature dimension reduction) is therefore required. Common feature selection methods include, but are not limited to, the mutual information correlation method, genetic algorithms, sparse optimization methods and principal component analysis.
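A minimal sketch of feature extraction and mutual-information-based screening, assuming NumPy and scikit-learn; the particular statistics, window length and value of k are illustrative stand-ins for the time- and frequency-domain features named above:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

FS = 25  # Hz, the standardized sampling frequency


def extract_features(window: np.ndarray) -> np.ndarray:
    """Simple time- and frequency-domain features for one
    (n_samples, n_channels) window of standardized motion data."""
    feats = []
    for ch in range(window.shape[1]):
        x = window[:, ch]
        # Time domain: amplitude-related statistics
        feats += [x.mean(), x.std(), x.max() - x.min()]
        # Frequency domain: dominant frequency and total spectral energy
        spectrum = np.abs(np.fft.rfft(x - x.mean()))
        freqs = np.fft.rfftfreq(len(x), d=1.0 / FS)
        feats += [freqs[np.argmax(spectrum)], float(np.sum(spectrum ** 2))]
    return np.asarray(feats)


# windows: list of (n_samples, n_channels) arrays; labels: action class per window
windows = [np.random.randn(75, 3) for _ in range(60)]   # 3 s windows, dummy data
labels = np.random.randint(0, 5, size=60)

X = np.vstack([extract_features(w) for w in windows])
selector = SelectKBest(mutual_info_classif, k=8)          # mutual-information screening
X_selected = selector.fit_transform(X, labels)
print(X.shape, "->", X_selected.shape)
```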
(4) Terminal device wearing-position recognition and clustering. Most conventional motion recognition methods only address the situation in which the terminal device is worn at a single position, so when the terminal device is worn elsewhere, the accuracy of motion recognition drops sharply. Therefore, to make the motion recognition method compatible with different wearing positions, the invention constructs a clusterer that first recognizes the wearing position of the terminal device before action recognition; action recognition models are then established separately for the different wearing positions (wrist, forearm, upper arm, waist, thigh, calf, etc.). Common clusterer construction methods include, but are not limited to, two classes: supervised learning methods (neural networks, support vector machines, decision trees, etc.) and unsupervised learning methods (self-organizing map neural networks, distance discriminant methods, etc.).
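A minimal sketch of a supervised wearing-position clusterer built with a support vector machine (scikit-learn assumed; the position labels, feature dimension and 80/20 split are illustrative; an unsupervised method such as a self-organizing map could be substituted):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# X_selected: screened feature vectors (one row per window).
# positions: wearing-position label per window, e.g. 0=wrist, 1=upper arm, 2=waist.
X_selected = np.random.randn(150, 8)
positions = np.repeat([0, 1, 2], 50)

X_tr, X_te, y_tr, y_te = train_test_split(
    X_selected, positions, test_size=0.2, stratify=positions, random_state=0)

position_clf = SVC(kernel="rbf", gamma="scale")   # supervised clusterer
position_clf.fit(X_tr, y_tr)
print("wearing-position accuracy:", position_clf.score(X_te, y_te))
```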
(5) Random forest action recognition model. A corresponding action recognition model is established for each wearing position. To eliminate the influence of differences in the precision, sensitivity and other specifications of the sensors built into different terminal devices, the invention constructs a random forest action recognition model that integrates multiple weak classifiers.
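A minimal sketch of building one random forest per wearing position (scikit-learn assumed; the feature matrix, labels and tree count are placeholders, not values fixed by the patent):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# X: screened feature matrix; actions: action labels; positions: wearing-position labels.
X = np.random.randn(150, 8)
actions = np.random.randint(0, 5, size=150)
positions = np.repeat([0, 1, 2], 50)

# One ensemble of weak decision trees per wearing position.
action_models = {}
for pos in np.unique(positions):
    mask = positions == pos
    forest = RandomForestClassifier(n_estimators=100, random_state=0)
    forest.fit(X[mask], actions[mask])
    action_models[pos] = forest
```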
In the model prediction stage, the motion information acquisition equipment is first worn on a part of the human body, the motion information of the human body while performing the action to be recognized is then acquired, the raw information is passed in sequence through the standard sampler, feature extraction and feature selection, the wearing-position clusterer and the action recognition model, and the final action recognition result is output.
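A minimal sketch of this prediction pipeline, reusing the illustrative functions and fitted objects from the sketches above (standardize_sampling_rate, extract_features, selector, position_clf, action_models); none of these names appear in the patent itself:

```python
import numpy as np

def recognize_action(raw_signal: np.ndarray, device_fs: float) -> int:
    """Standard sampler -> features -> wearing position -> action label."""
    std_signal = standardize_sampling_rate(raw_signal, device_fs)   # unify to 25 Hz
    feats = extract_features(std_signal).reshape(1, -1)             # extract features
    feats = selector.transform(feats)                               # feature screening
    position = int(position_clf.predict(feats)[0])                  # wearing position
    return int(action_models[position].predict(feats)[0])           # action category

# Example: 4 s of raw tri-axial acceleration from a 100 Hz device
predicted = recognize_action(np.random.randn(400, 3), device_fs=100)
print("predicted action class:", predicted)
```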
The invention is further analyzed by way of example with reference to fig. 3 to 6.
This embodiment uses 3 different motion information acquisition devices: a Samsung Galaxy Gear, an HTC Desire, and an Xsens inertial sensor unit, whose built-in acceleration sensors have maximum sampling frequencies of 100 Hz, 50 Hz and 200 Hz, respectively. The embodiment includes 5 different actions: sitting still, standing, lying, going upstairs, and going downstairs. The wearing positions of the 3 motion information acquisition devices also differ: the Samsung Galaxy Gear is worn on the wrist, the HTC Desire on the upper arm, and the Xsens inertial sensor unit on the lower back.
First, let the human subject wear each device at the corresponding position, and then complete the above 5 different actions in sequence, each action of each device being repeated 10 times. Some pieces of motion information recorded in the experiment process are shown in fig. 3 and 4, and it can be seen from the figures that the motion information corresponding to the same motion acquired when different devices are worn at different positions has a large difference, and meanwhile, the motion information corresponding to different motions acquired by the same device also has a significant difference.
Secondly, the acquired motion information is resampled using the down-sampling method, so that the information acquired by all terminal devices has a uniform sampling frequency of 25 Hz.
Then, the features corresponding to each piece of motion information are extracted using the time domain method, specifically including the motion amplitude, velocity and angle along the X, Y and Z axes of three-dimensional space.
Next, a terminal device wearing-position recognition clusterer is constructed using a support vector machine. In this embodiment, 50 motion signals are collected at each wearing position, of which 40 samples are randomly selected for training and the remaining 10 samples are used for testing; that is, the entire data set contains 150 samples, with 120 samples in the training set and 30 samples in the test set. The recognition results on the test set are shown in fig. 5; it can be seen that the constructed recognition clusterer identifies the position at which the terminal device is worn well.
Finally, an action recognition model is established for each wearing position using the random forest method; each random forest contains 50-100 decision trees, and the final recognition result is summarized by a voting method. Fig. 6 compares the recognition result of this embodiment with the recognition accuracy obtained by an action recognition method built using data from only a single motion information device. If an action recognition model is built using only the data acquired from a single device, it is suitable only for that same terminal device; if the model is applied to other terminal devices, the accuracy of motion recognition drops significantly. In contrast, the action recognition model established by the method of the invention is well compatible with different terminal devices. The reason is that the method integrates sensor information from all terminal devices during modeling, and adds the standard sampler, terminal device wearing-position recognition clustering, and random forest weak classifier integration modules on top of the conventional action recognition method, thereby effectively eliminating the influence of differences in sampling frequency, built-in sensor precision, sensitivity and the like between terminal devices.
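A minimal sketch of the voting step that summarizes per-window forest predictions into a single trial-level action label (illustrative only; the patent does not prescribe this exact aggregation code):

```python
from collections import Counter

import numpy as np


def vote(window_predictions: np.ndarray) -> int:
    """Majority vote over the per-window predictions of one action trial."""
    return Counter(window_predictions.tolist()).most_common(1)[0][0]


# Example: a trial segmented into 7 windows, mostly labeled as class 3
print(vote(np.array([3, 3, 1, 3, 3, 4, 3])))  # -> 3
```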
While the invention has been described in detail with reference to the preferred embodiments thereof, it will be apparent to one skilled in the art that the invention is not limited thereto, and that various changes may be made therein without departing from the spirit and scope thereof.
Claims (6)
1. A motion recognition method independent of motion information acquisition equipment is characterized by comprising the following steps: the method comprises two stages, namely a model training stage and a model prediction stage, wherein the model training stage is used for establishing a mapping relation between motion information and motions, and the model prediction stage is used for calculating corresponding motion categories according to the collected motion information;
the model training phase specifically comprises the following steps:
1) acquiring motion information, namely wearing different motion information acquisition equipment at different positions of a human body, and then recording the motion information of the human body when different actions are executed;
2) the step of sampling frequency standardization, namely, carrying out frequency normalization on original motion information from different motion information acquisition equipment by using a down-sampling method; the frequency normalization is to perform sampling frequency reduction resampling on the motion information with the sampling frequency higher than 25Hz so that the sampling frequency of the new motion information is 25 Hz;
3) a step of feature extraction and feature selection, namely extracting features from the original motion information by a time domain method, a frequency domain method or a nonlinear analysis method, and screening the extracted features by a mutual information correlation method, a genetic algorithm, a sparse optimization method or a principal component analysis method, so as to select the features that best characterize the motion information;
4) a step of terminal device wearing-position recognition and clustering, namely constructing a terminal device wearing-position recognition clusterer by a supervised or unsupervised learning method according to the features extracted and screened in step 3); the clusterer first identifies the wearing position of the terminal device during action recognition, and action recognition models are then established separately for the different wearing positions;
5) a step of random forest action recognition modeling, namely establishing a corresponding action recognition model for each wearing position by a random forest method.
2. The motion recognition method independent of a motion information collection device according to claim 1, characterized in that: in the step 1), the motion information acquisition equipment comprises a smart phone, a tablet personal computer, a wristwatch and a bracelet; the wearing position of the motion information acquisition equipment comprises a wrist, a forearm, an upper arm, a waist, a thigh and a shank; the actions performed by the human body include: sitting still, lying, standing, walking slowly, going upstairs, going downstairs, running; the acquired motion information includes acceleration, angular velocity, magnetic field strength of the three-dimensional space X, Y, Z axis.
3. The motion recognition method independent of a motion information collection device according to claim 1, characterized in that: in the step 3), the characteristics extracted by using the time domain method comprise motion amplitude, angle and speed; the features extracted by the frequency domain method comprise motion frequency and energy; the features extracted by the nonlinear analysis method comprise approximate entropy and multi-scale entropy.
4. The motion recognition method independent of a motion information collection device according to claim 1, characterized in that: the supervised learning method in step 4) includes neural networks, support vector machines, and decision trees.
5. The motion recognition method independent of a motion information collection device according to claim 1, characterized in that: the unsupervised learning method in step 4) includes self-organizing map neural networks and distance discriminant methods.
6. The motion recognition method independent of motion information collection equipment according to claim 1, wherein the model prediction stage is specifically: first wearing the motion information acquisition equipment on a part of the human body, then acquiring the motion information of the human body while performing the action to be recognized, then passing the motion information in sequence through the standard sampler, feature extraction and feature selection, the wearing-position clusterer and the action recognition model, and finally outputting the final action recognition result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610903076.4A CN106228200B (en) | 2016-10-17 | 2016-10-17 | Action identification method independent of action information acquisition equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610903076.4A CN106228200B (en) | 2016-10-17 | 2016-10-17 | Action identification method independent of action information acquisition equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106228200A CN106228200A (en) | 2016-12-14 |
CN106228200B true CN106228200B (en) | 2020-01-14 |
Family
ID=58077158
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610903076.4A Active CN106228200B (en) | 2016-10-17 | 2016-10-17 | Action identification method independent of action information acquisition equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106228200B (en) |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106874874A (en) * | 2017-02-16 | 2017-06-20 | 南方科技大学 | Motion state identification method and device |
CN107016686A (en) * | 2017-04-05 | 2017-08-04 | 江苏德长医疗科技有限公司 | Three-dimensional gait and motion analysis system |
CN108734055B (en) * | 2017-04-17 | 2021-03-26 | 杭州海康威视数字技术股份有限公司 | Abnormal person detection method, device and system |
CN107316052A (en) * | 2017-05-24 | 2017-11-03 | 中国科学院计算技术研究所 | A kind of robust Activity recognition method and system based on inexpensive sensor |
CN107742070B (en) * | 2017-06-23 | 2020-11-24 | 中南大学 | Method and system for motion recognition and privacy protection based on acceleration data |
CN107358210B (en) * | 2017-07-17 | 2020-05-15 | 广州中医药大学 | Human body action recognition method and device |
CN108710822B (en) * | 2018-04-04 | 2022-05-13 | 燕山大学 | Personnel falling detection system based on infrared array sensor |
CN108550385B (en) * | 2018-04-13 | 2021-03-09 | 北京健康有益科技有限公司 | Exercise scheme recommendation method and device and storage medium |
CN108968918A (en) * | 2018-06-28 | 2018-12-11 | 北京航空航天大学 | The wearable auxiliary screening equipment of early stage Parkinson |
CN109100537B (en) * | 2018-07-19 | 2021-04-20 | 百度在线网络技术(北京)有限公司 | Motion detection method, apparatus, device, and medium |
CN109190762B (en) * | 2018-07-26 | 2022-06-07 | 北京工业大学 | Mobile terminal information acquisition system |
CN109635638B (en) * | 2018-10-31 | 2021-03-09 | 中国科学院计算技术研究所 | Feature extraction method and system and recognition method and system for human body motion |
CN110689041A (en) * | 2019-08-20 | 2020-01-14 | 陈羽旻 | Multi-target behavior action recognition and prediction method, electronic equipment and storage medium |
CN111221419A (en) * | 2020-01-13 | 2020-06-02 | 武汉大学 | Array type flexible capacitor electronic skin for sensing human motion intention |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104434119A (en) * | 2013-09-20 | 2015-03-25 | 卡西欧计算机株式会社 | Body information obtaining device and body information obtaining method |
CN105046215A (en) * | 2015-07-07 | 2015-11-11 | 中国科学院上海高等研究院 | Posture and behavior identification method without influences of individual wearing positions and wearing modes |
- 2016-10-17: CN application CN201610903076.4A filed; granted as CN106228200B (status: active)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104434119A (en) * | 2013-09-20 | 2015-03-25 | 卡西欧计算机株式会社 | Body information obtaining device and body information obtaining method |
CN105046215A (en) * | 2015-07-07 | 2015-11-11 | 中国科学院上海高等研究院 | Posture and behavior identification method without influences of individual wearing positions and wearing modes |
Non-Patent Citations (2)
Title |
---|
Research on User Activity Recognition Based on Mobile Phones and Wearable Devices; Sun Zehao; China Doctoral Dissertations Full-text Database, Information Science and Technology; 2016-09-15; full text *
Wearing Position Recognition Method for Mobile Devices Based on Rotation Patterns; Shi Yue et al.; Journal of Software; 2013-08-15; pp. 1898-1907 *
Also Published As
Publication number | Publication date |
---|---|
CN106228200A (en) | 2016-12-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106228200B (en) | Action identification method independent of action information acquisition equipment | |
Huang et al. | TSE-CNN: A two-stage end-to-end CNN for human activity recognition | |
Yen et al. | Human daily activity recognition performed using wearable inertial sensors combined with deep learning algorithms | |
Kwon et al. | Unsupervised learning for human activity recognition using smartphone sensors | |
Zhang et al. | A comprehensive study of smartphone-based indoor activity recognition via Xgboost | |
CN108958482B (en) | Similarity action recognition device and method based on convolutional neural network | |
Mohammed et al. | Unsupervised deep representation learning to remove motion artifacts in free-mode body sensor networks | |
Figueira et al. | Body location independent activity monitoring | |
CN111178155B (en) | Gait feature extraction and gait recognition method based on inertial sensor | |
CN107358248B (en) | Method for improving falling detection system precision | |
Hung et al. | Activity recognition with sensors on mobile devices | |
Al-Naffakh et al. | Activity recognition using wearable computing | |
Pham | MobiRAR: Real-time human activity recognition using mobile devices | |
Sheng et al. | An adaptive time window method for human activity recognition | |
Thu et al. | Real-time wearable-device based activity recognition using machine learning methods | |
Minh et al. | Evaluation of smartphone and smartwatch accelerometer data in activity classification | |
Cola et al. | Personalized gait detection using a wrist-worn accelerometer | |
Kao et al. | GA-SVM applied to the fall detection system | |
Tarekegn et al. | Enhancing human activity recognition through sensor fusion and hybrid deep learning model | |
Nguyen et al. | The internet-of-things based fall detection using fusion feature | |
Al-Naffakh | A comprehensive evaluation of feature selection for gait recognition using smartwatches | |
Prasertsung et al. | A classification of accelerometer data to differentiate pedestrian state | |
CN105147249A (en) | Wearable or implantable device evaluation system and method | |
Kongsil et al. | Physical activity recognition using streaming data from wrist-worn sensors | |
Kau et al. | A smart phone-based pocket fall accident detection system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |