CN110664412A - Human activity recognition method facing wearable sensor - Google Patents

Human activity recognition method facing wearable sensor

Info

Publication number
CN110664412A
CN110664412A
Authority
CN
China
Prior art keywords
data
grained
wearable
sensor
activity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910887761.6A
Other languages
Chinese (zh)
Inventor
马春梅 (Ma Chunmei)
孙华志 (Sun Huazhi)
姜丽芬 (Jiang Lifen)
梁研 (Liang Yan)
宿通通 (Su Tongtong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Tianjin Normal University
Original Assignee
Tianjin Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin Normal University filed Critical Tianjin Normal University
Priority to CN201910887761.6A priority Critical patent/CN110664412A/en
Publication of CN110664412A publication Critical patent/CN110664412A/en
Pending legal-status Critical Current


Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1126Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb using a particular sensing technique
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Dentistry (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Physiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses a human activity recognition method oriented to wearable sensors. First, the perceived time-series heterogeneous data is formed into a fingerprint matrix, and the data, segmented according to the size of a sliding window, is taken as the model input. The input data is then processed through a bidirectional LSTM layer composed of forward and backward long short-term memory to obtain coarse-grained features of the source data. An attention mechanism layer then performs an importance calculation on these coarse-grained features to obtain fine-grained features that reflect the activity characteristics. Finally, the fine-grained features are processed by multi-class logistic regression to obtain the probability distribution over the labels of the current data, from which the activity type is finally judged. The invention improves the wearable sensor's cognition of user activity, can accurately identify user activities, and improves human-computer interaction.

Description

Human activity recognition method facing wearable sensor
Technical Field
The invention relates to the fields of intelligent perception, mobile computing, and pattern recognition, and in particular to a wearable-sensor-oriented human activity recognition method.
Background
Human activity recognition refers to sensing human behavior data through various sensors and then using automatic computer detection techniques to analyze and understand the various motions and behaviors of the human body. The technology has broad application scenarios such as intelligent monitoring, human-computer interaction, and robotics. In recent years, with the popularization of wearable devices with various built-in sensors, contact-based human activity recognition using wearable sensors can relate directly to people's daily lives, for example in medical health monitoring or fitness monitoring. Activity recognition for wearable sensors has therefore become a research focus in recent years.
Generally, wearable sensors are multi-channel, so the data they sense is heterogeneous and time-sequential and can reflect the characteristics of a person's multi-dimensional movement. Wearable-sensor-oriented human activity recognition is therefore generally treated as a classification problem over heterogeneous time-series data. To solve this problem, some early scholars proposed recognition methods based on data fusion: the physical characteristics of the multi-channel sensing data are analyzed, and the multi-source data is fused by methods such as weighted averaging to obtain a comprehensive feature; for example, a comprehensive acceleration value can be obtained by fusing triaxial acceleration information. The fused information is finally classified by methods such as Support Vector Machines (SVM), Random Forests (RF), and Hidden Markov Models (HMM). However, methods of this type rely on manual feature extraction, which is difficult to apply in complex real-world environments because different people may perform the same activity very differently. In addition, such methods neither reflect the temporal continuity of the data nor extract the internal features among the heterogeneous data.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a wearable-sensor-oriented human activity recognition method, which improves the wearable sensor's cognition of user activities, can serve as an auxiliary capability for augmented reality, and improves the user experience.
The invention is realized by the following technical scheme:
A wearable-sensor-oriented human activity recognition method, the method comprising the steps of:
(1) forming a context fingerprint matrix from the time-series heterogeneous data sensed by the wearable sensors, segmenting and labeling the data using a sliding window overlapping mechanism, and tagging the sensed data with activity-type labels;
(2) processing the input data through a bidirectional LSTM layer consisting of forward and backward long short-term memory to obtain coarse-grained features of the source data;
(3) calculating the importance of the coarse-grained features using an Attention mechanism to obtain fine-grained features that reflect the activity characteristics;
(4) processing the fine-grained features through a multi-class logistic regression method to obtain the probability distribution over the labels of the current data, where the label with the maximum probability is the activity type of the current sensing data;
(5) training the network models of steps (2) to (4) with the labeled data set of step (1) to obtain the final hierarchical deep learning model.
Further, the context fingerprint in step (1) refers to the human behavior information perceived by the wearable sensors, integrated so that it becomes a context-invariant feature usable for subsequent data processing. The context fingerprint matrix F = (f1, f2, …, fn) is the expression of the time-series heterogeneous data, where fi = (Accxi, Accyi, Acczi, Gyrxi, Gyryi, Gyrzi, Magxi, Magyi, Magzi, Comi, …)T, the elements of fi are the values of the various wearable sensors, and i is a data acquisition point.
Further, in step (1), the data is segmented by a sliding window, a window overlapping mechanism is used to increase the redundancy of the data, and each window is labeled with the activity type of its last data frame; further, the optimal sliding window size is 1500 ms.
Further, the hidden state h = (h0, h1, …, ht) obtained by the bidirectional LSTM model in step (2) is the extracted coarse-grained feature of the data:
ht = →ht ⊕ ←ht,
where →ht and ←ht are the coarse-grained features of the data extracted by the forward LSTM and backward LSTM models, respectively.
Further, obtaining fine-grained features with activity-discriminating characteristics via the Attention mechanism in step (3) means that the weights of the coarse-grained features extracted in step (2) are learned through the Attention mechanism, yielding fine-grained features with feature preference that can reflect the unique characteristics presented when the activity changes.
Further, in step (3), an implicit expression value u of the coarse-grained feature h is first obtained through a nonlinear transformation, which can be expressed as:
u=tanh(Wu·h+bu),
On the basis of the implicit expression, a normalized weight coefficient vector α reflecting the importance of each element in u is learned through the Attention mechanism, so that the features in the coarse-grained feature that better reflect the activity characteristics receive larger weights, yielding the finer-grained feature. The weight coefficient α is calculated as:
αi=exp(ui·uw)/Σj exp(uj·uw),
where uw is a learned context vector. Thus, the fine-grained feature s can be expressed as:
s=Σi αi·hi.
further, in step (4), the activity type result is calculated as:
y=softmax(wls+bl)。
further, in the step (5), in the model training, a cross entropy loss function is used for evaluation, and when the cross entropy loss function tends to converge in the training process, an optimal model is obtained.
The invention has the advantages and beneficial effects that:
(1) The invention takes the time-series heterogeneous data sensed by the sensors as raw data and, for activity feature expression, emphasizes extracting more discriminative fine-grained features through a hierarchical deep learning model, so that the features better reflect the unique characteristics presented when the activity changes and the accuracy of activity recognition is improved. To this end, a context fingerprint matrix is first constructed as the model input; second, a bidirectional LSTM model extracts coarse-grained features of the raw data; then, fine-grained feature expressions of the raw data are obtained through the Attention mechanism; finally, the activity recognition result is obtained through the multi-class classifier. User activities can be accurately identified and human-computer interaction is improved.
(2) The wearable sensor's cognition of user activities is improved; it can serve as an auxiliary capability for augmented reality and improves the user experience.
(3) The activity recognition method provided by the invention is robust to real environments: even in complex environments, the model maintains high recognition accuracy and stable recognition speed, and has strong portability.
Drawings
FIG. 1 is a diagram of a layered deep learning model architecture for a wearable sensor oriented human activity recognition method of the present invention;
FIG. 2 is a diagram of a bi-directional LSTM model architecture;
FIG. 3 is a schematic diagram of data annotation based on a sliding window overlapping mechanism;
FIG. 4 is a diagram illustrating the classification results of activities under different sliding windows based on the OPPORTUNITY data set hierarchical deep learning model.
For a person skilled in the art, other relevant figures can be obtained from the above figures without inventive effort.
Detailed Description
In order to make the technical solution of the present invention better understood, the technical solution of the present invention is further described below with reference to specific examples.
Example one
A human activity recognition method facing a wearable sensor obtains fine-grained characteristics of sensing data through a layered deep learning model, and realizes activity recognition in an end-to-end mode, wherein the model structure is shown in figure 1 and comprises an input layer, a coarse-grained characteristic extraction layer, a fine-grained characteristic extraction layer and an activity recognition output layer. The method comprises the following steps:
(1) First, the time-series heterogeneous data perceived by the wearable sensors is formed into a context fingerprint matrix, and the data is segmented and labeled using a sliding window overlapping mechanism to form the model input;
(2) then, the input data is processed through a bidirectional LSTM layer composed of forward and backward long short-term memory to obtain the coarse-grained features of the source data;
(3) next, the Attention mechanism performs an importance calculation on the coarse-grained features to obtain fine-grained features that reflect the activity characteristics;
(4) finally, the fine-grained features are processed by a multi-class logistic regression method to obtain the probability distribution over the labels of the current data, from which the activity type is finally judged.
Embodiments of the invention are further illustrated below.
Wherein:
Step (1): the context fingerprint refers to the human behavior information perceived by the wearable sensors, integrated so that it becomes a context-invariant feature usable by the hierarchical deep learning model. The data sensed by the multi-channel sensors is typically multi-granular; for example, acceleration sensor data reflects changes in the speed of movement of an object, while gyroscope sensor data reflects changes in its direction of motion. The perceived data therefore has heterogeneous characteristics. In addition, the perceived data is time dependent and changes over time, so it also has the characteristic of being continuous in time. To this end, the context fingerprint matrix F = (f1, f2, …, fn) is used as the expression of the heterogeneous time-series data, where fi = (Accxi, Accyi, Acczi, Gyrxi, Gyryi, Gyrzi, Magxi, Magyi, Magzi, Comi, …)T, the elements of fi are the various sensor values, and i is a data acquisition point.
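As an illustrative sketch of the fingerprint matrix just described (the function name, channel set, and array shapes are our assumptions, not the patent's), per-instant readings from accelerometer, gyroscope, magnetometer, and compass channels can be stacked so that each column is one fi:

```python
import numpy as np

def build_fingerprint_matrix(acc, gyr, mag, com):
    """Assemble F = (f_1, ..., f_n): acc, gyr, mag have shape (n, 3),
    com has shape (n, 1). Each row of the concatenation is one f_i^T,
    so transposing makes the columns the acquisition points f_i."""
    return np.concatenate([acc, gyr, mag, com], axis=1).T

# Dummy streams for n = 4 acquisition points.
n = 4
F = build_fingerprint_matrix(np.zeros((n, 3)), np.zeros((n, 3)),
                             np.zeros((n, 3)), np.zeros((n, 1)))
print(F.shape)  # (10, 4): 10 sensor channels by n acquisition points
```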
The perceived activity data is then labeled by category, i.e., each piece of sensed data is tagged with its activity class, yielding an activity category data set. Because the perception data is continuous, it must be segmented and labeled with activity categories before it can be processed by the hierarchical deep learning model and used for network training. The invention adopts a sliding window overlapping mechanism for data segmentation: the data is segmented with window size T, and, to avoid missing key features of the data during model training and testing, the window slides so that the current data segment overlaps the previous one by 50%. Each window is labeled with the activity category of its last data frame; a schematic diagram of the data annotation is shown in Fig. 3, where the sensor channels are the different types of sensors. The perceived data can thus be represented as {Fk, y(k)}, k = 1, 2, 3, …, N, where Fk is the fingerprint matrix of the k-th window with dimension m×T, m is the number of sensor channels, and y(k) is the activity category of that window. To determine the window size T, we tested the classification accuracy of the model under different window sizes on the OPPORTUNITY data set; the results are shown in Fig. 4. As can be seen, a sliding window size of 1500 ms is optimal for high activity classification accuracy.
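The sliding-window segmentation with 50% overlap and last-frame labeling described above can be sketched as follows (function name and shapes are illustrative assumptions):

```python
import numpy as np

def segment_with_overlap(F, labels, T, overlap=0.5):
    """Slide a window of T samples over F (channels x samples) with the
    given fractional overlap; each segment is labeled with the activity
    class of its last data frame, as in the patent's annotation scheme."""
    step = int(T * (1 - overlap))
    segments, seg_labels = [], []
    for start in range(0, F.shape[1] - T + 1, step):
        segments.append(F[:, start:start + T])
        seg_labels.append(labels[start + T - 1])  # label of the last frame
    return segments, seg_labels

F = np.zeros((9, 100))      # 9 sensor channels, 100 samples
labels = np.arange(100)     # dummy per-frame activity labels
segs, ys = segment_with_overlap(F, labels, T=20)
print(len(segs), int(ys[0]))  # 9 19: windows start at 0, 10, ..., 80
```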
Step (2): to extract the coarse-grained features of the source data, a bidirectional LSTM model is adopted. The hidden state h = (h0, h1, …, ht) obtained by the model is the coarse-grained feature of the data to be extracted. The model is controlled by the cell state Ct, the temporary cell state C̃t, the forget gate ft, the memory gate it, and the output gate ot; by forgetting information in the cell state and memorizing new information, useful information is transmitted and useless information is discarded. Specifically, for the one-way LSTM model, let xt be the data input at time t. The value after the forget gate is:
ft=sigmoid(Wf·[ht-1,xt]+bf), (1)
For xt, the information to be memorized is:
it=sigmoid(Wi·[ht-1,xt]+bi), (2)
The cell state of the LSTM model at this time is:
Ct=ft·Ct-1+it·C̃t, (3)
wherein
C̃t=tanh(WC·[ht-1,xt]+bC),
For xt, the output gate value is:
ot=sigmoid(Wo·[ht-1,xt]+bo), (4)
Therefore, the hidden state at the current moment, namely the coarse-grained feature ht to be extracted, is:
ht=ot·tanh(Ct), (5)
Through the above process, when the input data is x = (x1, x2, …, xt), the one-way hidden state is h = (h1, h2, …, ht). Thus, through the bidirectional LSTM model, the extracted coarse-grained feature of the data x is:
ht=→ht⊕←ht, (6)
where →ht and ←ht are the coarse-grained features about the data x extracted by the forward LSTM and backward LSTM models, respectively.
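Equations (1) to (5) for one forward LSTM step can be sketched in NumPy as follows (the parameter layout, dictionary keys, and dimensions are our assumptions for illustration, not the patent's specification):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, C_prev, W, b):
    """One forward LSTM step following equations (1)-(5). W and b hold the
    weights and biases of the forget (f), memory (i), candidate (C), and
    output (o) gates, each acting on the concatenation [h_{t-1}, x_t]."""
    z = np.concatenate([h_prev, x_t])        # [h_{t-1}, x_t]
    f_t = sigmoid(W['f'] @ z + b['f'])       # (1) forget gate
    i_t = sigmoid(W['i'] @ z + b['i'])       # (2) memory gate
    C_tilde = np.tanh(W['C'] @ z + b['C'])   # temporary cell state
    C_t = f_t * C_prev + i_t * C_tilde       # (3) cell state
    o_t = sigmoid(W['o'] @ z + b['o'])       # (4) output gate
    h_t = o_t * np.tanh(C_t)                 # (5) hidden state, i.e. the coarse-grained feature
    return h_t, C_t

d_in, d_h = 9, 4                             # e.g. 9 sensor channels, 4 hidden units
rng = np.random.default_rng(0)
W = {k: rng.standard_normal((d_h, d_h + d_in)) * 0.1 for k in 'fiCo'}
b = {k: np.zeros(d_h) for k in 'fiCo'}
h, C = lstm_step(rng.standard_normal(d_in), np.zeros(d_h), np.zeros(d_h), W, b)
print(h.shape)  # (4,)
```

A bidirectional layer, as in equation (6), would run a second copy of this step over the reversed sequence and concatenate the two hidden states.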
Step (3): after the coarse-grained feature extraction process, the coarse-grained features h of the data x are obtained, but these features are all of equal importance for activity identification and cannot reflect the change characteristics of the data when the activity changes, which affects the activity recognition accuracy. Therefore, a fine-grained feature extraction process is introduced on the basis of the coarse-grained feature extraction, so that features attending more to specific activities are obtained and the activity recognition accuracy is improved. To achieve this, the Attention mechanism is used, enabling the model to learn the more important feature expressions among the coarse-grained features. Specifically, a nonlinear transformation is first applied to the coarse-grained feature h to obtain an implicit expression value u; this process can be expressed as:
u=tanh(Wu·h+bu), (7)
On the basis of the implicit expression, a normalized weight coefficient vector α reflecting the importance of each element in u is learned through the Attention mechanism, so that the features in the coarse-grained feature that better reflect the activity characteristics receive larger weights, yielding the finer-grained feature. The weight coefficient α is calculated as:
αi=exp(ui·uw)/Σj exp(uj·uw), (8)
where uw is a learned context vector. Thus, the fine-grained feature s can be expressed as:
s=Σi αi·hi, (9)
After the fine-grained feature is obtained, a multi-class logistic regression method is used to predict the probability of the data x corresponding to each type of activity; the class with the maximum probability value is the identified activity result, and the activity type prediction result is calculated as:
y=softmax(wls+bl), (10)
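Equations (7) to (10), attention pooling over the coarse-grained features followed by multi-class logistic regression, can be sketched as follows (the context-vector parameterization of the attention weights is a common formulation and an assumption on our part; all names and shapes are illustrative):

```python
import numpy as np

def attention_pool(h, W_u, b_u, u_w):
    """Attention over coarse-grained features h (T x d):
    u = tanh(W_u h + b_u)      (7)
    alpha = softmax(u . u_w)   (8), with learned context vector u_w
    s = sum_i alpha_i * h_i    (9)"""
    u = np.tanh(h @ W_u.T + b_u)         # implicit expression, T x d
    scores = u @ u_w                     # importance of each time step
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                 # normalized weight coefficients
    s = alpha @ h                        # fine-grained feature
    return s, alpha

def classify(s, w_l, b_l):
    """(10) multi-class logistic regression over the fine-grained feature."""
    z = w_l @ s + b_l
    e = np.exp(z - z.max())
    return e / e.sum()                   # probability of each activity label

rng = np.random.default_rng(1)
T, d, n_classes = 6, 4, 5
h = rng.standard_normal((T, d))
s, alpha = attention_pool(h, rng.standard_normal((d, d)), np.zeros(d),
                          rng.standard_normal(d))
p = classify(s, rng.standard_normal((n_classes, d)), np.zeros(n_classes))
print(round(float(alpha.sum()), 6), round(float(p.sum()), 6))  # both sum to 1
```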
further, W and b relating to the parameters in the formulas (1) to (10) are variables to be determined, and need to pass through the steps
(1) And training the network model determination by the labeled data set, and further obtaining a final layered deep learning model.
To determine the optimal parameter values in the model, the network must be trained with the labeled data. In this process an index must be introduced to evaluate the error of the model's classification results, and the model parameters are updated by minimizing this error to obtain the optimal result. In the invention, a cross-entropy objective function is used as the error evaluation index, which can be expressed as:
L=−Σi Σj yij·log(pij),
where i is the index of the i-th group of perceived data, j denotes the j-th activity class, yij is the true label, and pij is the predicted probability that sample i belongs to class j. In the model training process, labeled data is input at the input layer, the backpropagation through time (BPTT) algorithm is used to obtain the derivatives of the objective function with respect to all parameters, and the objective function is minimized by stochastic gradient descent, thereby determining the optimal parameters.
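The cross-entropy objective can be sketched as follows (averaging over the N samples is our convention for illustration; the patent does not fix a scaling):

```python
import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Cross-entropy over samples i and activity classes j:
    L = -(1/N) * sum_i sum_j y_ij * log(p_ij).
    eps guards against log(0) for confident wrong predictions."""
    return -np.mean(np.sum(y_true * np.log(y_pred + eps), axis=1))

# One-hot labels for 3 samples over 4 classes, and uniform predictions.
y_true = np.eye(4)[[0, 2, 1]]
y_pred = np.full((3, 4), 0.25)
print(round(float(cross_entropy(y_true, y_pred)), 4))  # 1.3863, i.e. log 4
```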
To verify the validity of the final model, the model must also be tested; in this process we use a portion of the labeled data set to test the accuracy of the model's activity classification. During model training and testing, the data set was split at a ratio of 7:3 (training to testing). When the error on the test data is less than a given threshold, the model is considered a valid model.
Example two
The wearable sensor-oriented human activity recognition method is applied to the smart phone:
the smart phone is equipped with various sensors such as an accelerometer, a magnetometer, a GPS, a compass, etc., and has strong calculation, storage and communication capabilities. Therefore, the behavior of the person can be sensed by the smart phone carried with the person, and the behavior of the person can be monitored by the wearable sensor-oriented human activity recognition method. For example, user A is an elderly person who is cared for without fail because of a busy work. For the old, the behaviors of falling, sedentary and the like are the primary factors damaging the body health of the old, so that the family can be provided with a smart phone A, the daily behavior of the family A is monitored in real time, and the family and an emergency department can be contacted in time by means of a default contact when the family falls is judged; or when the user A is judged to be sedentary, the user A is reminded to do proper exercise, so that the quality of life of the old can be greatly improved in the later year.
The invention has been described in an illustrative manner, and it is to be understood that any simple variations, modifications or other equivalent changes which can be made by one skilled in the art without departing from the spirit of the invention fall within the scope of the invention.

Claims (9)

1. A wearable-sensor-oriented human activity recognition method based on a hierarchical deep learning model, characterized in that the method comprises the following steps:
(1) forming a context fingerprint matrix from the time-series heterogeneous data sensed by the wearable sensors, segmenting and labeling the data using a sliding window overlapping mechanism, and tagging the sensed data with activity-type labels;
(2) processing the input data through a bidirectional LSTM layer consisting of forward and backward long short-term memory to obtain coarse-grained features of the source data;
(3) calculating the importance of the coarse-grained features using an Attention mechanism to obtain fine-grained features that reflect the activity characteristics;
(4) processing the fine-grained features through a multi-class logistic regression method to obtain the probability distribution over the labels of the current data, where the label with the maximum probability is the activity type of the current sensing data;
(5) training the network models of steps (2) to (4) with the labeled data set of step (1) to obtain the final hierarchical deep learning model.
2. The wearable-sensor-oriented human activity recognition method of claim 1, wherein: the context fingerprint in step (1) means that the human behavior information perceived by the wearable sensors is integrated into a context-invariant feature usable for subsequent data processing, and the context fingerprint matrix F = (f1, f2, …, fn) is the expression of the time-series heterogeneous data, where fi = (Accxi, Accyi, Acczi, Gyrxi, Gyryi, Gyrzi, Magxi, Magyi, Magzi, Comi, …)T, the elements of fi are the values of the various wearable sensors, and i is a data acquisition point.
3. The wearable-sensor-oriented human activity recognition method of claim 1, wherein: in step (1), the data is segmented by a sliding window, a window overlapping mechanism is used to increase the redundancy of the data, and each window is labeled with the activity type of its last data frame.
4. The wearable-sensor-oriented human activity recognition method of claim 3, wherein: the sliding window size is set to 1500 ms.
5. The wearable-sensor-oriented human activity recognition method of claim 1, wherein: the hidden state h = (h0, h1, …, ht) obtained through the bidirectional LSTM model in step (2) is the extracted coarse-grained feature of the data,
ht = →ht ⊕ ←ht,
where →ht and ←ht are the coarse-grained features of the data extracted by the forward LSTM and backward LSTM models, respectively.
6. The wearable-sensor-oriented human activity recognition method of claim 1, wherein: the fine-grained features with activity-discriminating characteristics in step (3) are obtained by learning, through the Attention mechanism, the weights of the coarse-grained features extracted in step (2), thereby obtaining fine-grained features with feature preference that reflect the unique characteristics presented when the activity changes.
7. The wearable-sensor-oriented human activity recognition method of claim 5, wherein: in step (3), an implicit expression value u of the coarse-grained feature h is first obtained through a nonlinear transformation, which can be expressed as:
u=tanh(Wu·h+bu),
on the basis of the implicit expression, a normalized weight coefficient vector α reflecting the importance of each element in u is learned through the Attention mechanism, so that the features in the coarse-grained feature that better reflect the activity characteristics receive larger weights, yielding the finer-grained feature; the weight coefficient α is calculated as:
αi=exp(ui·uw)/Σj exp(uj·uw),
where uw is a learned context vector; thus, the fine-grained feature s can be expressed as:
s=Σi αi·hi.
8. The wearable-sensor-oriented human activity recognition method of claim 7, wherein: in step (4), the activity type result is calculated as:
y=softmax(wls+bl)。
9. The wearable-sensor-oriented human activity recognition method of claim 1, wherein: in the model training, a cross-entropy loss function is used for evaluation, and when the cross-entropy loss function tends to converge during training, the optimal model is obtained.
CN201910887761.6A 2019-09-19 2019-09-19 Human activity recognition method facing wearable sensor Pending CN110664412A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910887761.6A CN110664412A (en) 2019-09-19 2019-09-19 Human activity recognition method facing wearable sensor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910887761.6A CN110664412A (en) 2019-09-19 2019-09-19 Human activity recognition method facing wearable sensor

Publications (1)

Publication Number Publication Date
CN110664412A true CN110664412A (en) 2020-01-10

Family

ID=69076903

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910887761.6A Pending CN110664412A (en) 2019-09-19 2019-09-19 Human activity recognition method facing wearable sensor

Country Status (1)

Country Link
CN (1) CN110664412A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111444983A (en) * 2020-04-22 2020-07-24 中国科学院上海微系统与信息技术研究所 Risk event identification method and system based on sensing data information fingerprints
CN111652361A (en) * 2020-06-04 2020-09-11 南京博芯电子技术有限公司 Composite granularity near-storage approximate acceleration structure and method of long-time memory network
CN115438705A (en) * 2022-11-09 2022-12-06 武昌理工学院 Human body action prediction method based on wearable equipment
CN115964678A (en) * 2023-03-16 2023-04-14 微云智能科技有限公司 Intelligent identification method and system based on multi-sensor data

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105956558A (en) * 2016-04-26 2016-09-21 Tao Dapeng Human movement recognition method based on a triaxial acceleration sensor
CN106845351A (en) * 2016-05-13 2017-06-13 Soochow University Video activity recognition method based on bidirectional long short-term memory units
CN107092894A (en) * 2017-04-28 2017-08-25 Sun Enze Motor behavior recognition method based on LSTM models
CN107609460A (en) * 2017-05-24 2018-01-19 Nanjing University of Posts and Telecommunications Human activity recognition method fusing spatio-temporal dual-stream networks and an attention mechanism
WO2018191555A1 (en) * 2017-04-14 2018-10-18 Drishti Technologies, Inc. Deep learning system for real time analysis of manufacturing operations
CN108960337A (en) * 2018-07-18 2018-12-07 Zhejiang University Multimodal complex activity recognition method based on deep learning models
CN109670548A (en) * 2018-12-20 2019-04-23 University of Electronic Science and Technology of China Multi-size input HAR algorithm based on an improved LSTM-CNN
CN109740419A (en) * 2018-11-22 2019-05-10 Southeast University Video behavior recognition method based on an Attention-LSTM network
CN109784280A (en) * 2019-01-18 2019-05-21 Jiangnan University Human activity recognition method based on a Bi-LSTM-Attention model
CN110188637A (en) * 2019-05-17 2019-08-30 Xidian University Activity recognition method based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Su Tongtong, Sun Huazhi, Ma Chunmei, Jiang Lifen, Xu Tongtong: "HDL: Hierarchical Deep Learning Model based", International Joint Conference on Neural Networks *
Su Tongtong, Sun Huazhi, Ma Chunmei, Jiang Lifen: "Human activity recognition based on recurrent neural networks", Journal of Tianjin Normal University (Natural Science Edition) *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111444983A (en) * 2020-04-22 2020-07-24 Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences Risk event identification method and system based on sensing data information fingerprints
CN111444983B (en) * 2020-04-22 2023-10-24 Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences Risk event identification method and system based on sensing data information fingerprints
CN111652361A (en) * 2020-06-04 2020-09-11 Nanjing Boxin Electronic Technology Co., Ltd. Composite-granularity near-memory approximate acceleration structure and method for long short-term memory networks
CN111652361B (en) * 2020-06-04 2023-09-26 Nanjing Boxin Electronic Technology Co., Ltd. Composite-granularity near-memory approximate acceleration structure and method for long short-term memory networks
CN115438705A (en) * 2022-11-09 2022-12-06 Wuchang University of Technology Human motion prediction method based on wearable devices
CN115964678A (en) * 2023-03-16 2023-04-14 Weiyun Intelligent Technology Co., Ltd. Intelligent recognition method and system based on multi-sensor data
CN115964678B (en) * 2023-03-16 2023-10-03 Weiyun Intelligent Technology Co., Ltd. Intelligent recognition method and system based on multi-sensor data

Similar Documents

Publication Publication Date Title
CN110664412A (en) Human activity recognition method facing wearable sensor
CN108764059B (en) Human behavior recognition method and system based on neural network
CN109101938B (en) Multi-label age estimation method based on convolutional neural network
CN108171209A (en) A face age estimation method using metric learning with convolutional neural networks
CN105488456B (en) Face detection method based on adaptive-threshold rejection subspace learning
CN110575663B (en) Physical education auxiliary training method based on artificial intelligence
CN109101876A (en) Human activity recognition method based on long short-term memory networks
CN111199202B (en) Human body action recognition method and recognition device based on a recurrent attention network
Benalcázar et al. Real-time hand gesture recognition based on artificial feed-forward neural networks and EMG
CN110232395A (en) A power system fault diagnosis method based on Chinese fault text
CN112597921B (en) Human behavior recognition method based on attention mechanism GRU deep learning
CN112148128B (en) Real-time gesture recognition method and device and man-machine interaction system
CN109978870A (en) Method and apparatus for output information
CN103106394A (en) Human body action recognition method in video surveillance
CN108762503A (en) A human-computer interaction system based on multimodal data acquisition
CN113435335B (en) Micro-expression recognition method and device, electronic equipment and storage medium
JP2022120775A (en) On-device activity recognition
CN115937975A (en) Action recognition method and system based on multi-modal sequence fusion
Iyer et al. Sign Language Detection using Action Recognition
CN108154199B (en) High-precision rapid single-class target detection method based on deep learning
CN117198468A (en) Intervention scheme intelligent management system based on behavior recognition and data analysis
CN115689981A (en) Lung image detection method and device based on information fusion and storage medium
Tao et al. Attention-based convolutional neural network and bidirectional gated recurrent unit for human activity recognition
CN111767402B (en) Limited-domain event detection method based on adversarial learning
CN113223018A (en) Fine-grained image analysis processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200110