CN108875836B - Simple-complex activity collaborative recognition method based on deep multitask learning - Google Patents
Simple-complex activity collaborative recognition method based on deep multitask learning Download PDFInfo
- Publication number
- CN108875836B CN108875836B CN201810678316.4A CN201810678316A CN108875836B CN 108875836 B CN108875836 B CN 108875836B CN 201810678316 A CN201810678316 A CN 201810678316A CN 108875836 B CN108875836 B CN 108875836B
- Authority
- CN
- China
- Prior art keywords
- activity
- complex
- classifier
- network
- recognition
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06F18/214 — Pattern recognition; analysing; generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F18/2411 — Classification techniques based on the proximity to a decision surface, e.g. support vector machines
- G06N3/045 — Neural networks; combinations of networks
- G06N3/084 — Neural network learning methods; backpropagation, e.g. using gradient descent
Abstract
The invention discloses a simple-complex activity collaborative recognition method based on deep multitask learning, comprising the following steps: 1) dividing raw activity data into time windows to obtain simple and complex activity samples, where one complex activity sample consists of several simple activity samples; 2) extracting simple activity features with a CNN network and building a simple activity classifier; 3) extracting the temporal features of complex activities with an LSTM network and building a complex activity classifier; 4) sharing the CNN layers and the simple activity feature layer between the two classification tasks, and training the simple and complex activity classifiers cooperatively through this shared structure; 5) using the trained simple and complex activity classifiers to predict the simple activity probability and the complex activity probability of the activity data to be recognized. The method applies deep learning and multitask learning to simple-complex activity collaborative recognition, and has broad application prospects in health care, industrial assistance, skill assessment, and other fields.
Description
Technical Field
The invention relates to the field of activity recognition, in particular to a simple-complex activity collaborative recognition method based on deep multitask learning.
Background
With the rapid evolution of smart devices (e.g., smartphones and smart watches) and wearable devices (e.g., chest straps and wristbands), pervasive computing keeps advancing and yields increasingly intelligent applications that greatly facilitate people's daily lives. Human activity recognition is an influential and valuable direction in pervasive computing: it enables perception of user activity and has broad application prospects in health care, industrial assistance, skill assessment, and other fields.
Human activities can be divided into simple and complex activities. Simple activities usually consist of periodic movements or body postures, such as standing, sitting, walking, and running. Complex activities are usually composed of simple activities, last longer, and carry high-level semantics, such as eating, working, and shopping. Traditional activity recognition methods based on smart and wearable devices generally preprocess and segment the raw activity data, extract features, and train a classification model on the feature data. Common features fall into two categories: statistical features, including time-domain features such as mean and variance and frequency-domain features such as coefficients at different frequencies; and structural features, such as polynomial features describing the trend of time-series data. However, these features are manually defined and generic to time-series data: they are not tailored to the activity recognition task, and manual definition can cause information loss.
Deep learning is a machine learning method that learns representations of data, and it has been successfully applied to computer vision, natural language processing, and other fields. With the development of deep learning, activity recognition research based on deep learning has emerged: deep networks such as convolutional neural networks (CNN), recurrent neural networks (RNN), and long short-term memory networks (LSTM) are used to extract deep features from activity data and train classification models, and studies show that such deep features outperform traditional hand-crafted features for activity recognition.
However, the above methods treat activity recognition as a single classification task and do not exploit the information shared between related tasks. Simple activities are the components of complex activities, so simple and complex activity recognition form a pair of closely connected tasks; multitask learning can discover the commonalities and differences between the two tasks and improve the accuracy of both kinds of recognition simultaneously.
Disclosure of Invention
The invention aims to provide a simple-complex activity cooperative identification method based on deep multitask learning by effectively utilizing the correlation between simple and complex activity identification.
In order to achieve the purpose, the invention provides the following technical scheme:
a simple-complex activity collaborative recognition method based on deep multitask learning comprises the following steps:
(1) acquiring activity data comprising two layers of labels of simple activities and complex activities, and constructing a training set after carrying out abnormal value removal, activity division and normalization processing on the activity data;
(2) constructing an activity recognition network, wherein the activity recognition network comprises an improved CNN network and a simple activity classifier, the improved CNN network is used for extracting the depth characteristics of an input simple activity sample, and the simple activity classifier is used for predicting the probability of outputting simple activity according to the depth characteristics output by the CNN; the system also comprises a multilayer LSTM network and a complex activity classifier, wherein the LSTM network is used for carrying out feature extraction on the depth features output by the CNN to obtain the time sequence features of the complex activity, and the complex activity classifier is used for predicting the probability of outputting the complex activity according to the time sequence features output by the LSTM network;
(3) training the constructed activity recognition network by using a training set until the training is finished, and determining activity recognition network parameters to obtain an activity recognition model;
(4) carrying out abnormal value removal, activity division and normalization processing on the activity data to be detected by using the method in the step (1) to obtain simple activity data;
(5) and inputting the simple activity data into an activity recognition model, and obtaining the probability of the simple activity and the probability of the complex activity through calculation, namely, recognizing to obtain the simple activity and the complex activity.
Wherein, step (1) includes:
(1-1) acquiring activity data simultaneously containing two layers of labels of simple activity and complex activity;
(1-2) carrying out abnormal value elimination processing on the activity data, and dividing the processed activity data into complex activity samples with the time window length of lc;
(1-3) dividing each complex activity sample into simple activity samples of length ls, wherein ls < lc;
and (1-4) after normalization processing is carried out on each simple activity sample, each simple activity sample and the corresponding activity label are used as a training sample, and a training set is constructed.
Wherein the improved CNN network comprises:
a convolutional layer for extracting a feature map of input data;
the pooling layer is used for performing down-sampling on the feature mapping by maximum pooling operation;
the full connection layer is used for weighting the features obtained by convolution and pooling;
a residual unit accumulating input data from the convolutional layer and output data of the convolutional layer, and activating by using a ReLU function;
the improved CNN network takes a convolutional layer as its first layer and comprises a plurality of convolutional layers; residual processing is applied across every two consecutive convolutional layers, pooling layers are arranged between convolutional layers, and the fully connected layer is the last layer of the CNN network.
Specifically, each of the simple activity classifier and the complex activity classifier is a Softmax classifier or an SVM classifier.
And when the activity recognition network is trained, calculating the classification losses of the simple activity classifier and the complex activity classifier by adopting a negative log-likelihood algorithm, and taking the sum of the classification losses of the simple activity classifier and the complex activity classifier as the total loss of the activity recognition network.
When the activity recognition network is trained, with reduction of the total loss of the activity recognition network as the training target, the parameters of the improved CNN network in the activity recognition network are updated by gradient descent and back propagation until training ends, yielding the activity recognition model. The training end condition is: the activity recognition network converges or reaches a preset number of iterations.
Compared with the prior art, the invention has the following beneficial effects:
through simple activity and complex activity multitask learning, the commonality between simple and complex activity recognition tasks is discovered, and the accuracy of simple and complex activity recognition is improved at the same time by utilizing a sharing structure (improved CNN).
The deep learning methods (CNN and LSTM) extract the deep features of simple and complex activities; these features are obtained automatically in a data-driven manner, represent the activities accurately, and avoid the information loss caused by manually defined features.
Drawings
To more clearly illustrate the embodiments of the present invention and the technical solutions in the prior art, the drawings used in describing the embodiments are briefly introduced below. Obviously, the drawings described below cover only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is an overall framework diagram of the simple-complex activity collaborative recognition method based on deep multitask learning provided by the present invention;
FIG. 2 is a flow diagram of simple activity recognition provided by the present invention;
FIG. 3 is a flow chart of complex activity recognition provided by the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the detailed description and specific examples, while indicating the scope of the invention, are intended for purposes of illustration only and are not intended to limit the scope of the invention.
As shown in fig. 1, the simple-complex activity collaborative recognition method based on deep multitask learning provided by this embodiment includes three stages, which are a data processing stage, an activity recognition model construction and training stage, and a simple-complex activity recognition stage.
Data processing stage:
and in the data processing stage, abnormal value removal, activity division and other processing are mainly carried out on the activity data so as to meet the size requirement of the input data of the activity recognition model and improve the speed and the precision of the activity recognition model.
Specifically, the data processing comprises the following specific steps:
and S101, acquiring a data set with two layers of tags of simple and complex activities.
Because the method identifies simple and complex activities cooperatively, activity data carrying both layers of labels (simple and complex) is required to train the activity recognition model and guide the two recognition tasks. Among existing public datasets, both the Opportunity and Ubicomp 08 datasets meet this requirement.
S102, carrying out abnormal value elimination processing on the activity data, and dividing the processed activity data into complex activity samples with the time window length lc.
In this step, outlier detection is first performed on the activity data, and invalid values (e.g., values outside the normal range, or zero values) are removed or replaced by mean filling; then, the processed activity data is divided into time windows to obtain complex activity samples of length lc.
And S103, dividing each complex activity sample into simple activity samples with the length ls, wherein ls is less than lc.
Since complex activities last longer than simple activities, and simple activities are the components of complex activities, each complex activity sample is divided into simple activity samples of length ls (ls < lc), so that each complex activity sample contains lc/ls simple activity samples.
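The two-level windowing above can be sketched as follows. This is an illustrative numpy sketch, not the patented implementation; the window lengths, the 3-axis channel layout, and the helper name `split_windows` are assumptions for demonstration.

```python
import numpy as np

def split_windows(data, lc, ls):
    """Cut (T, channels) data into complex windows of length lc, then
    split each complex window into lc // ls simple windows of length ls."""
    assert lc % ls == 0, "ls must evenly divide lc"
    n_complex = data.shape[0] // lc
    trimmed = data[: n_complex * lc]                       # drop the remainder
    complex_samples = trimmed.reshape(n_complex, lc, -1)
    simple_samples = complex_samples.reshape(n_complex, lc // ls, ls, -1)
    return complex_samples, simple_samples

raw = np.random.randn(1000, 3)             # e.g. a 3-axis accelerometer stream
cx, sx = split_windows(raw, lc=200, ls=50)
# cx.shape == (5, 200, 3); sx.shape == (5, 4, 50, 3)
```

With lc = 200 and ls = 50, each complex sample decomposes into 4 simple samples, matching the lc/ls relationship described above.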
And S104, performing normalization processing on each simple activity sample.
Specifically, the data of each simple activity sample is subjected to Z-score normalization column by column, so that each column of the processed data is scaled to a common range and follows the standard normal distribution. The formula is:

x' = (x - μ) / σ

where x is the original value, μ is the mean of the column containing the value, σ is the standard deviation of that column, and x' is the normalized value.
And taking each simple activity sample subjected to normalization processing and the corresponding activity label as a training sample to construct a training set.
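The column-wise Z-score step can be sketched in numpy as below; the sample values are illustrative and the helper name `z_score` is an assumption.

```python
import numpy as np

def z_score(sample):
    """Z-score normalize each column (sensor channel) independently."""
    mu = sample.mean(axis=0)       # per-column mean
    sigma = sample.std(axis=0)     # per-column standard deviation
    return (sample - mu) / sigma

sample = np.array([[1.0, 10.0],
                   [2.0, 20.0],
                   [3.0, 30.0]])
normed = z_score(sample)
# after normalization each column has mean ~0 and standard deviation ~1
```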
Activity recognition model construction and training stage:
the activity recognition model building and training stage mainly comprises the steps of building a proper activity recognition network, training the activity recognition network by using a training sample, and storing network parameters to obtain an activity recognition model after the training is finished.
The constructed activity recognition network comprises two parts. One part is an improved convolutional neural network (CNN) and a simple activity classifier: the improved CNN extracts the deep features of an input simple activity sample, and the simple activity classifier predicts and outputs the probability of the simple activity from the deep features output by the CNN. The other part is a multilayer long short-term memory (LSTM) network and a complex activity classifier: the LSTM network performs further feature extraction on the deep features output by the CNN to obtain the temporal features of the complex activity, and the complex activity classifier predicts and outputs the probability of the complex activity from the temporal features output by the LSTM network.
Specifically, as shown in fig. 2, the improved CNN network includes convolutional layers, pooling layers, fully-connected layers, and residual units, each layer containing a plurality of neural units.
For the convolutional layer: the simple activity sample a is taken as input data and convolved with kernels, outputting the extracted feature maps:

a_j^(l+1) = ReLU( Σ_f k_(j,f)^l * a_f^l + b_j^(l+1) )

where l denotes the layer index, a_j^(l+1) denotes the j-th feature map at layer l+1, k_(j,f)^l denotes the convolution kernel that generates the j-th feature map of layer l+1 from the f-th feature map of layer l, a_f^l denotes the f-th feature map at layer l, b_j^(l+1) denotes a bias term, and ReLU(·) is the activation function.
For a pooling layer: the feature maps are down-sampled by a max pooling operation, which keeps the maximum value within each pooling window.
For a fully connected layer: the features obtained by convolution and pooling are weighted:

t_s = Σ_i w_i u_i^l

where w is the weight, t_s is the weighted feature obtained after full connection, and u_i^l is the value of the i-th neural unit in layer l.
For the residual unit: the input of each residual unit consists of two parts, the output data after passing through the convolutional layers and the input data that bypasses them; the two parts are added and activated with a ReLU function, reducing the compression loss incurred during convolution.
The data propagation process is shown in fig. 2. For a convolutional layer, the notation a×b@c denotes a convolution kernel of size a×b with c kernels; for a pooling layer, a×b@c denotes a pooling window of size a×b with c pooled feature maps. The input data passes through a series of convolutional layers, pooling layers, fully connected layers, and residual units to yield the simple activity deep feature t_s.
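A minimal numpy sketch of the residual unit and max pooling described above, under the simplifying assumptions of a single channel and 1-D 'same'-padded convolution (the actual network operates on 2-D feature maps):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def conv1d_same(x, kernel):
    # single-channel convolution with 'same' padding (illustrative)
    return np.convolve(x, kernel, mode="same")

def residual_unit(x, k1, k2):
    # two stacked convolutions, shortcut addition, then ReLU:
    # y = ReLU(F(x) + x)
    out = relu(conv1d_same(x, k1))
    out = conv1d_same(out, k2)
    return relu(out + x)

def max_pool1d(x, window):
    # non-overlapping max pooling over the time axis
    n = len(x) // window
    return x[: n * window].reshape(n, window).max(axis=1)

x = np.array([0.5, -1.0, 2.0, 0.0, 1.5, 0.25])
k = np.array([0.2, 0.5, 0.2])          # toy 1x3 kernel
y = residual_unit(x, k, k)             # same length as x, all values >= 0
p = max_pool1d(y, 2)                   # halves the temporal resolution
```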
The simple activity classifier adopts a Softmax classifier: it performs prediction on the input simple activity deep feature t_s and outputs the predicted probabilities of the simple activities.
Each layer of the LSTM network contains lc/ls LSTM units. Each LSTM unit comprises a memory cell c_t and three gates, the input gate i_t, the output gate o_t, and the forget gate f_t, which control the input, output, and update of data respectively. With x_t the input at time t, and h_{t-1} and c_{t-1} the hidden state and memory cell state at the previous time step, the calculation formulas are:

i_t = sigm(W_xi x_t + W_hi h_{t-1} + b_i) (5)

f_t = sigm(W_xf x_t + W_hf h_{t-1} + b_f) (6)

c_t = f_t ⊙ c_{t-1} + i_t ⊙ tanh(W_xc x_t + W_hc h_{t-1} + b_c) (7)

o_t = sigm(W_xo x_t + W_ho h_{t-1} + W_co c_{t-1} + b_o) (8)

h_t = o_t ⊙ tanh(c_t) (9)

where the operator ⊙ denotes element-wise multiplication, W and b denote weight matrices and bias vectors respectively, and sigm and tanh denote the sigmoid and hyperbolic tangent functions respectively.
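One LSTM time step following the gate equations above can be sketched in numpy as below. The dictionary-based weight layout and the dimensions are assumptions for illustration; the memory-cell update c_t = f_t ⊙ c_{t-1} + i_t ⊙ tanh(·) and the output h_t = o_t ⊙ tanh(c_t) follow the standard LSTM formulation, and the W_co term is the peephole connection of the output gate in Eq. (8).

```python
import numpy as np

def sigm(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, b):
    # input, forget, and output gates per Eqs. (5), (6), (8)
    i = sigm(W["xi"] @ x_t + W["hi"] @ h_prev + b["i"])
    f = sigm(W["xf"] @ x_t + W["hf"] @ h_prev + b["f"])
    o = sigm(W["xo"] @ x_t + W["ho"] @ h_prev + W["co"] @ c_prev + b["o"])
    g = np.tanh(W["xc"] @ x_t + W["hc"] @ h_prev + b["c"])
    c = f * c_prev + i * g        # memory-cell update
    h = o * np.tanh(c)            # hidden-state output
    return h, c

rng = np.random.default_rng(0)
d_in, d_h = 4, 3
W = {k: 0.1 * rng.standard_normal((d_h, d_in if k.startswith("x") else d_h))
     for k in ["xi", "hi", "xf", "hf", "xo", "ho", "co", "xc", "hc"]}
b = {k: np.zeros(d_h) for k in ["i", "f", "o", "c"]}
h, c = lstm_step(rng.standard_normal(d_in), np.zeros(d_h), np.zeros(d_h), W, b)
# h and c each have shape (d_h,); |h| < 1 because of the tanh squashing
```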
The simple activity deep features t_s within a complex activity sample are each input into the corresponding LSTM unit, and the temporal feature t_c of the complex activity is obtained through the LSTM network.

As shown in fig. 3, each complex activity sample contains lc/ls simple activity samples, and the lc/ls simple activity deep features t_s obtained through the CNN are input into the corresponding LSTM units. Within the LSTM network, the state at each time step is passed into the next LSTM unit, so the temporal information of the data is retained.

The complex activity classifier adopts a Softmax classifier: it performs prediction on the input complex activity temporal feature t_c and outputs the predicted probability of the complex activity.
After the activity recognition network is built, the activity recognition network is trained by using the training samples to obtain an activity recognition model.
And calculating the classification loss of the simple activity classifier and the complex activity classifier by adopting a negative log-likelihood algorithm, wherein the loss is calculated as follows:
L(fs)=-logfs(sa) (10)
L(fc)=-logfc(ca) (11)
wherein f_s is the simple activity classifier, f_c is the complex activity classifier, and sa and ca denote a simple activity sample and a complex activity sample, respectively.
Adding the losses of the simple activity classifier and the complex activity classifier to obtain the overall loss of the two tasks, namely the total loss of the activity recognition network, wherein the formula is as follows:
Ltotal=L(fs)+L(fc) (12)
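The loss computation of Eqs. (10)-(12) amounts to summing two negative log-likelihood terms, one per classifier, as in this small numeric sketch (the probability vectors and class indices are made up for illustration):

```python
import numpy as np

def nll(probs, true_idx):
    # negative log-likelihood of the probability assigned to the true class
    return -np.log(probs[true_idx])

p_simple = np.array([0.7, 0.2, 0.1])    # f_s output over simple classes
p_complex = np.array([0.1, 0.8, 0.1])   # f_c output over complex classes

L_s = nll(p_simple, 0)                  # Eq. (10), true simple label = 0
L_c = nll(p_complex, 1)                 # Eq. (11), true complex label = 1
L_total = L_s + L_c                     # Eq. (12): total network loss
```

Minimizing L_total propagates gradients from both tasks back through the shared CNN layers, which is what couples the two recognition tasks during training.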
The training samples are input into the activity recognition network; with reduction of the total loss L_total as the training target, the parameters of the improved CNN network in the activity recognition network are updated by gradient descent and back propagation until training ends, yielding the activity recognition model.
Simple-complex activity recognition phase:
firstly, removing abnormal values, dividing activities and carrying out normalization processing on activity data to be detected according to the process of a data processing stage to obtain simple activity data;
then, the simple activity data is input into the activity recognition model, and the probability of the simple activity and the probability of the complex activity are obtained through calculation, namely, the simple activity and the complex activity are obtained through recognition.
In the simple-complex activity collaborative recognition method based on deep multitask learning provided by the embodiment, the commonality between the simple and complex activity recognition tasks is found through the multitask learning, and the accuracy of the simple and complex activity recognition is improved simultaneously by using the shared structure (i.e. the improved CNN network).
The above embodiments are intended to illustrate the technical solutions and advantages of the present invention. It should be understood that they are only preferred embodiments and are not intended to limit the invention; any modifications, additions, or equivalents made within the scope of the principles of the present invention shall fall within the protection scope of the invention.
Claims (8)
1. A simple-complex activity collaborative recognition method based on deep multitask learning comprises the following steps:
(1) acquiring activity data comprising two layers of labels of simple activities and complex activities, and constructing a training set after carrying out abnormal value removal, activity division and normalization processing on the activity data;
(2) constructing an activity recognition network, wherein the activity recognition network comprises an improved CNN network and a simple activity classifier, the improved CNN network is used for extracting the depth characteristics of an input simple activity sample, and the simple activity classifier is used for predicting the probability of outputting simple activity according to the depth characteristics output by the CNN; the system also comprises a multilayer LSTM network and a complex activity classifier, wherein the LSTM network is used for carrying out feature extraction on the depth features output by the CNN to obtain the time sequence features of the complex activity, and the complex activity classifier is used for predicting the probability of outputting the complex activity according to the time sequence features output by the LSTM network;
(3) training the constructed activity recognition network by using a training set until the training is finished, and determining activity recognition network parameters to obtain an activity recognition model;
(4) removing abnormal values, dividing activities and carrying out normalization processing on the activity data to be detected to obtain simple activity data;
(5) and inputting the simple activity data into an activity recognition model, and obtaining the probability of the simple activity and the probability of the complex activity through calculation, namely, recognizing to obtain the simple activity and the complex activity.
2. The simple-complex activity collaborative recognition method based on deep multitask learning according to claim 1, characterized in that the step (1) comprises:
(1-1) acquiring activity data simultaneously containing two layers of labels of simple activity and complex activity;
(1-2) carrying out abnormal value elimination processing on the activity data, and dividing the processed activity data into complex activity samples with the time window length of lc;
(1-3) dividing each complex activity sample into simple activity samples of length ls, wherein ls < lc;
and (1-4) after normalization processing is carried out on each simple activity sample, each simple activity sample and the corresponding activity label are used as a training sample, and a training set is constructed.
3. The method for deep multitask learning based simple-complex activity collaborative recognition according to claim 1, characterized in that said improved CNN network comprises:
a convolutional layer for extracting a feature map of input data;
the pooling layer is used for performing down-sampling on the feature mapping by maximum pooling operation;
the full connection layer is used for weighting the features obtained by convolution and pooling;
a residual unit accumulating input data from the convolutional layer and output data of the convolutional layer, and activating by using a ReLU function;
the improved CNN network takes the convolution layer as a first layer and comprises a plurality of convolution layers, every two continuous convolution layers are used for residual processing, the pooling layer is arranged between the convolution layers, and the full-connection layer is the last layer of the CNN network.
4. The deep multitask learning based simple-complex activity collaborative recognition method according to claim 1, wherein the simple activity classifier is a Softmax classifier or an SVM classifier.
5. The deep multitask learning based simple-complex activity collaborative recognition method according to claim 1, wherein the complex activity classifier is a Softmax classifier or an SVM classifier.
6. The method for the cooperative simple-complex activity recognition based on deep multitask learning as claimed in claim 1, characterized in that in the training of the activity recognition network, the negative log-likelihood algorithm is used to calculate the classification loss of the simple activity classifier and the complex activity classifier, and the sum of the classification losses of the simple activity classifier and the complex activity classifier is used as the total loss of the activity recognition network.
7. The method as claimed in claim 6, wherein in training the activity recognition network, the gradient descent and back propagation algorithm is used to update the parameters of the improved CNN network in the activity recognition network with the total loss of the activity recognition network reduced as a training target, until the training is finished, so as to obtain the activity recognition model.
8. The simple-complex activity collaborative recognition method based on deep multitask learning according to claim 7, wherein the training end condition is that the activity recognition network converges or a preset number of iterations is reached.
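The training procedure of claims 7 and 8 amounts to iterating gradient descent until either convergence or an iteration cap. The minimal sketch below demonstrates that stopping rule on a toy quadratic loss; the convergence test (gradient norm below a tolerance) and the toy loss are assumptions for illustration, not the patent's exact criterion.

```python
import numpy as np

def train(grad_fn, w, lr=0.1, tol=1e-6, max_iters=1000):
    # Update parameters by gradient descent until the loss converges
    # (gradient norm below tol) or a preset number of iterations is reached,
    # mirroring the training-end condition of claim 8.
    it = 0
    for it in range(max_iters):
        g = grad_fn(w)
        if np.linalg.norm(g) < tol:  # convergence reached
            break
        w = w - lr * g
    return w, it

# Toy quadratic loss L(w) = ||w - 1||^2 with gradient 2(w - 1);
# in the patented method grad_fn would come from back-propagation
# through the improved CNN network.
w_opt, iters = train(lambda w: 2.0 * (w - 1.0), np.array([5.0, -3.0]))
```

Here the optimum w = (1, 1) is reached well before the iteration cap, so the convergence branch of the stopping rule fires first.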
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810678316.4A CN108875836B (en) | 2018-06-27 | 2018-06-27 | Simple-complex activity collaborative recognition method based on deep multitask learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108875836A CN108875836A (en) | 2018-11-23 |
CN108875836B true CN108875836B (en) | 2020-08-11 |
Family
ID=64295940
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810678316.4A Active CN108875836B (en) | 2018-06-27 | 2018-06-27 | Simple-complex activity collaborative recognition method based on deep multitask learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108875836B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109670172A (en) * | 2018-12-06 | 2019-04-23 | 桂林电子科技大学 | A kind of scenic spot anomalous event abstracting method based on complex neural network |
CN109886105B (en) * | 2019-01-15 | 2021-12-14 | 广州图匠数据科技有限公司 | Price tag identification method, system and storage medium based on multi-task learning |
CN110046409B (en) * | 2019-03-29 | 2020-10-27 | 西安交通大学 | ResNet-based steam turbine component health state evaluation method |
CN110276380B (en) * | 2019-05-22 | 2021-08-17 | 杭州电子科技大学 | Real-time motion on-line guidance system based on depth model framework |
CN111160443B (en) * | 2019-12-25 | 2023-05-23 | 浙江大学 | Activity and user identification method based on deep multitasking learning |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2464991A1 (en) * | 2009-08-10 | 2012-06-20 | Robert Bosch GmbH | A method for human only activity detection based on radar signals |
CN103970271A (en) * | 2014-04-04 | 2014-08-06 | 浙江大学 | Daily activity identifying method with exercising and physiology sensing data fused |
CN106599869A (en) * | 2016-12-22 | 2017-04-26 | 安徽大学 | Vehicle attribute identification method based on multi-task convolutional neural network |
CN107609460A (en) * | 2017-05-24 | 2018-01-19 | 南京邮电大学 | A kind of Human bodys' response method for merging space-time dual-network stream and attention mechanism |
Worldwide applications

- 2018-06-27: CN application CN201810678316.4A, granted as CN108875836B, status Active
Non-Patent Citations (2)
Title |
---|
Juan C. Núñez et al. Convolutional Neural Networks and Long Short-Term Memory for skeleton-based human activity and hand gesture recognition. Pattern Recognition. 2018, Vol. 76. * |
Liu Jiaying et al. 3DCNN human action recognition fusing spatio-temporal motion information of video. Electronic Measurement Technology. 2018-04-30, Vol. 41, No. 7, full text. * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108875836B (en) | Simple-complex activity collaborative recognition method based on deep multitask learning | |
Karim et al. | Multivariate LSTM-FCNs for time series classification | |
Wason | Deep learning: Evolution and expansion | |
WO2022007823A1 (en) | Text data processing method and device | |
CN108960337B (en) | Multi-modal complex activity recognition method based on deep learning model | |
CN112257449B (en) | Named entity recognition method and device, computer equipment and storage medium | |
Thakur et al. | Convae-lstm: Convolutional autoencoder long short-term memory network for smartphone-based human activity recognition | |
CN112784778B (en) | Method, apparatus, device and medium for generating model and identifying age and sex | |
Liew et al. | An optimized second order stochastic learning algorithm for neural network training | |
Fang et al. | Gait neural network for human-exoskeleton interaction | |
CN111898636B (en) | Data processing method and device | |
CN113326852A (en) | Model training method, device, equipment, storage medium and program product | |
Wang et al. | Adaptive feature fusion for time series classification | |
CN111178288B (en) | Human body posture recognition method and device based on local error layer-by-layer training | |
Li et al. | Nuclear norm regularized convolutional Max Pos@ Top machine | |
EP4273754A1 (en) | Neural network training method and related device | |
Islam et al. | Prediction of stock market using recurrent neural network | |
CN113065633A (en) | Model training method and associated equipment | |
Shojaedini et al. | Mobile sensor based human activity recognition: distinguishing of challenging activities by applying long short-term memory deep learning modified by residual network concept | |
WO2023159756A1 (en) | Price data processing method and apparatus, electronic device, and storage medium | |
Li et al. | Smartphone-sensors based activity recognition using IndRNN | |
Butt et al. | Fall detection using LSTM and transfer learning | |
Bhat et al. | Evaluation of deep learning model for human activity recognition | |
Wang et al. | A Multidimensional Parallel Convolutional Connected Network Based on Multisource and Multimodal Sensor Data for Human Activity Recognition | |
CN117056589A (en) | Article recommendation method and related equipment thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||