CN108062170A - Multi-class human posture recognition method based on convolutional neural networks and intelligent terminal - Google Patents
Multi-class human posture recognition method based on convolutional neural networks and intelligent terminal
- Publication number
- CN108062170A (application number CN201711346910.5A)
- Authority
- CN
- China
- Prior art keywords: convolutional neural networks, intelligent terminal, data
- Prior art date: 2017-12-15
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/0346—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/014—Hand-worn input/output arrangements, e.g. data gloves
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Abstract
The present invention discloses a multi-class human posture recognition method based on a convolutional neural network and an intelligent terminal, which includes the following steps: Step 1, collecting the three-axis acceleration sensor data of a mobile intelligent terminal device and recording the corresponding action categories; Step 2, preprocessing the three-axis acceleration sensor data and dividing the data into two classes, one class being training samples and the other test samples; Step 3, training the convolutional neural network with the training samples, testing its accuracy with the test samples, and continually adjusting it according to requirements; Step 4, porting the trained convolutional neural network model to the mobile intelligent terminal; Step 5, collecting three-axis acceleration sensor data with the mobile intelligent terminal, preprocessing them, and inputting them into the trained convolutional neural network model to obtain the human posture recognition result. The method has high recognition accuracy and can recognize a large number of posture types.
Description
Technical field
The invention belongs to the field of artificial intelligence research and relates to wearable intelligent monitoring, and in particular to a method for recognizing human posture by means of sensors.
Background technology
Human posture recognition technology is widely used in fields such as virtual reality, motion gaming, health care, human-computer interaction, and image recognition. Posture recognition technology is generally divided into two kinds: non-wearable and wearable. Non-wearable technology, as the name implies, refers to human posture recognition in which the recognition equipment is not in direct contact with the body, for example image-based recognition. Compared with non-wearable technology, wearable human posture recognition has the advantage of not being restricted by space and offers greater room for development in research and application. Owing to the diversity of human postures and the differences between individuals' actions, how to build a posture recognition model with high recognition accuracy remains a research topic of continuing interest.
In general, to maintain high recognition accuracy, multiple sensor devices are deployed on the body, mostly at the joints. Although this approach makes it easy to observe the acceleration signatures of various actions, requiring the user to carry multiple sensors is very inconvenient in practical applications. How to achieve high-accuracy human posture recognition with fewer sensors, or even a single set of sensors, is therefore a very practical research question.
Using the built-in sensors of smartphones or smartwatches for human posture recognition has already been widely studied and applied at home and abroad, and most smart bracelets, watches, and phones on the market today offer posture recognition apps. Most of these recognition methods are threshold detection methods, i.e., action types are classified by judging whether the raw or processed sensor data exceed or fall below preset thresholds. Such methods are computationally simple and occupy little memory on the smart mobile device, but their shortcomings are equally obvious: accuracy varies widely between products, and the action categories that can be recognized are extremely limited. This is partly due to the technical gaps between vendors' development teams, but more importantly it is due to the limitations of the method itself: the more action categories need to be recognized, the more complicated such an algorithm becomes to construct.
Deep learning has good prospects for development in pattern recognition. Deep learning originates from research on artificial neural networks (Artificial Neural Network, ANN). A convolutional neural network is a neural network containing convolutional layers (Convolutional Layer). Convolutional neural networks have attracted great interest in computer vision; they can process not only one-dimensional data (e.g., text) but are also particularly suitable for two-dimensional data (e.g., images) and three-dimensional data (e.g., video, or the three-axis acceleration data referred to in this patent). Convolutional neural networks belong to the field of artificial intelligence; compared with conventional methods they are more efficient for constructing pattern recognition classifiers, are easy to extend, and can realize recognition models covering more action types than conventional methods.
Content of the invention
The purpose of the present invention is to provide a multi-class human posture recognition method based on a convolutional neural network and an intelligent terminal that has high recognition accuracy and can recognize a large number of posture types.
In order to achieve the above objectives, the solution of the invention is as follows:
A multi-class human posture recognition method based on a convolutional neural network and an intelligent terminal includes the following steps:
Step 1, collect the three-axis acceleration sensor data of a mobile intelligent terminal device and record the corresponding action categories;
Step 2, preprocess the three-axis acceleration sensor data and divide the data into two classes, one class being training samples and the other test samples;
Step 3, train the convolutional neural network with the training samples, test its accuracy with the test samples, and adjust it continually according to requirements;
Step 4, port the trained convolutional neural network model to the mobile intelligent terminal;
Step 5, collect three-axis acceleration sensor data with the mobile intelligent terminal, preprocess them, and input them into the trained convolutional neural network model to obtain the human posture recognition result.
In the above step 1, the sampling frequency is set to 25 Hz.
In the above step 2, preprocessing the data includes filtering and normalizing the data and adjusting it into the input format of the convolutional neural network.
In the above step 2, 75% of the data is used as training samples and 25% of the data as test samples.
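As an illustration of the preprocessing and data split described above, the following Python sketch filters and normalizes the three-axis acceleration data and performs the 75%/25% split. The Butterworth low-pass filter, its cutoff frequency, and all function names are illustrative assumptions, since the patent does not name a particular filter or library.

```python
# Minimal preprocessing sketch (assumed details: 4th-order Butterworth low-pass
# filter, per-axis min-max normalization, random 75%/25% split).
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(raw, fs=25.0, cutoff=10.0):
    """raw: array of shape (num_readings, 3) holding x, y, z acceleration at 25 Hz."""
    b, a = butter(4, cutoff / (fs / 2.0), btype="low")  # assumed filter design
    filtered = filtfilt(b, a, raw, axis=0)              # zero-phase filtering per axis
    mins, maxs = filtered.min(axis=0), filtered.max(axis=0)
    return (filtered - mins) / (maxs - mins + 1e-8)     # normalize each axis to [0, 1]

def split_samples(samples, labels, train_ratio=0.75, seed=0):
    """Shuffle windowed samples and split them into 75% training / 25% test sets."""
    idx = np.random.RandomState(seed).permutation(len(samples))
    cut = int(train_ratio * len(samples))
    return (samples[idx[:cut]], labels[idx[:cut]],
            samples[idx[cut:]], labels[idx[cut:]])
```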
The above step 3 specifically comprises:
a. establishing a multi-layer convolutional neural network model;
b. importing the training samples and adjusting the parameters of the convolutional neural network model to obtain a model with high accuracy.
In the above step b, adjusting the parameters of the convolutional neural network model includes adjusting the number of neurons in each layer, the loss function, and the convolution kernels.
In the above step a, the structure of the convolutional neural network model includes: an input layer, two convolutional and max-pooling layers, a fully connected layer, and an output layer.
In the above convolutional neural network model, the convolution kernel size is 3×3 and the numbers of neurons in the two convolutional layers are 96 and 198 respectively; the data size of the first-layer convolution kernel is (5, 5, 3), with 96 kernels in total; the pooling kernels in all experiments are (2, 2) with a pooling stride of 2, using the max-pooling strategy throughout; the convolution kernel data size of the second convolutional layer is (3, 3, 96), with 198 kernels in total; the fully connected layer contains 1024 hidden nodes; the learning rate is 0.0001; the drop-out parameter is 1.
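For illustration, a minimal sketch of this network in Keras is given below (TensorFlow is assumed; the patent does not name a framework). The 'same' padding, ReLU activations, Adam optimizer, and softmax output over seven classes are assumptions, and the drop-out value of 1 is read as a keep probability of 1, i.e. no dropout layer is inserted.

```python
# Sketch of the described CNN: input (8, 8, 3), two convolution + max-pooling
# stages with 96 and 198 kernels, a 1024-node fully connected layer, 7 outputs.
import tensorflow as tf

def build_model(num_classes=7):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(8, 8, 3)),                  # folded 3-axis window
        tf.keras.layers.Conv2D(96, (5, 5), padding="same",
                               activation="relu"),               # 96 kernels of size (5, 5, 3)
        tf.keras.layers.MaxPooling2D((2, 2), strides=2),         # (2, 2) max pooling, stride 2
        tf.keras.layers.Conv2D(198, (3, 3), padding="same",
                               activation="relu"),               # 198 kernels of size (3, 3, 96)
        tf.keras.layers.MaxPooling2D((2, 2), strides=2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1024, activation="relu"),          # 1024 hidden nodes
        # drop-out parameter of 1 is taken to mean "keep everything": no Dropout layer
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```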
With the above scheme, owing to the advantages of convolutional neural networks, the action categories that the present invention can classify can be extended further simply by adjusting parameters, provided the sample size is sufficient. The present invention therefore has important practical significance for intelligent monitoring, human posture recognition, and similar applications.
The invention has the following advantages:
(1) the present invention uses an artificial-intelligence method, a convolutional neural network, for recognition, so the recognition accuracy is high and many posture types can be recognized;
(2) the number of actions recognized by the method of the present invention is scalable, and extending it is simple and easy for developers to carry out;
(3) compared with video- or image-based recognition methods, the present invention effectively protects user privacy;
(4) the present invention can be applied to common Android smartphones and smartwatches and has good generality.
Description of the drawings
Fig. 1 is the flow chart of the present invention;
Fig. 2 is the schematic diagram of the principle of the present invention;
Fig. 3 is a schematic diagram of the axis directions of a mobile phone's three-axis acceleration sensor;
Fig. 4 shows part of the acceleration data waveforms corresponding to different actions;
Fig. 5 shows the variation of the cross entropy (cross_entropy) with the number of training iterations.
Specific embodiment
The technical scheme and beneficial effects of the present invention are described in detail below with reference to the accompanying drawings.
The present invention provides a multi-class human posture recognition method based on a convolutional neural network and an intelligent terminal, which includes the following steps:
Step 1, collect the three-axis acceleration sensor data of the mobile intelligent terminal device under third-party supervision and recording, attaching action category labels in advance, for use as samples when training the human posture recognition model;
Step 2, preprocess the three-axis acceleration sensor data, including filtering and normalizing the data and adjusting it into the input format of the convolutional neural network, and divide the data into two classes, one class being training samples and the other test samples;
Step 3, train the convolutional neural network with the training samples, test its accuracy with the test samples, and adjust it continually according to requirements; this specifically includes:
a. establishing a multi-layer convolutional neural network model;
b. importing the training samples and adjusting the parameters of the convolutional neural network model to obtain a model with high accuracy, where adjusting the parameters of the convolutional neural network model includes adjusting the number of neurons in each layer, the loss function, and the convolution kernels.
Step 4, port the trained convolutional neural network model (the human posture recognition model) to the mobile intelligent terminal to realize real-time posture recognition on the terminal;
Step 5, collect three-axis acceleration sensor data with the mobile intelligent terminal, preprocess them, and input them into the trained convolutional neural network model to obtain the human posture recognition result.
The present invention obtains the human posture recognition model by training a convolutional neural network on a preset training set; it can recognize seven movement postures: walking, running, going upstairs, going downstairs, sit-ups, sweeping, and wiping.
Fig. 1 is the processing flow chart: after the three-axis acceleration time series of human motion is collected from the intelligent mobile terminal and consolidated, it is input into the initial convolutional neural network for model training, and the trained model that meets the design requirements is then exported to the mobile terminal so that human actions can be recognized offline on the mobile intelligent terminal.
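The patent does not specify how the trained model is exported to the terminal. One possible sketch, assuming the network was trained in TensorFlow/Keras and the mobile terminal runs a TensorFlow Lite interpreter, is the following; the file names are illustrative.

```python
# Hypothetical export path: convert the trained Keras model to TensorFlow Lite
# so it can run offline on an Android smartphone or smartwatch.
import tensorflow as tf

model = tf.keras.models.load_model("posture_cnn.h5")        # illustrative path
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]        # optional size/latency optimization
tflite_model = converter.convert()
with open("posture_cnn.tflite", "wb") as f:
    f.write(tflite_model)                                    # bundle this file with the mobile app
```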
Fig. 2 is the structure chart of the convolutional neural network, which mainly includes: an input layer, two convolutional and max-pooling layers, a fully connected layer, and an output layer. The input is the preprocessed three-axis acceleration data x, y, z; the acceleration directions of a smartphone, for example, are shown in Fig. 3.
Optionally, the sampling frequency of the intelligent terminal may be set to 25 Hz. Part of the action acceleration data waveforms collected at this frequency are shown in Fig. 4. Optionally, the example of the present invention defines every 2.56 seconds as one action sample, i.e., every 64 groups of readings form one sample. Of course, the sampling frequency can be set to any suitable value according to actual demand and is not limited here.
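A minimal sketch of this windowing scheme (25 Hz sampling, 2.56 s windows of 64 readings) is given below; the non-overlapping segmentation is an assumption, since the patent does not state whether windows overlap.

```python
import numpy as np

FS = 25          # sampling frequency in Hz
WINDOW = 64      # 2.56 s x 25 Hz = 64 readings per action sample

def make_windows(stream):
    """stream: (num_readings, 3) preprocessed acceleration; returns (n, 64, 3) windows."""
    n = len(stream) // WINDOW                   # non-overlapping windows (assumed)
    return stream[:n * WINDOW].reshape(n, WINDOW, 3)
```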
To train the convolutional neural network, the present invention divides the collected samples into two classes: training samples and test samples. The training samples are used as the input of the convolutional neural network for model training, and the test samples serve as the basis for assessing recognition accuracy. By default, 75% of the data set is used as the training set and 25% as the test set.
As the input of the convolutional neural network, the acceleration data is folded. The example of the present invention sets the three-axis acceleration data size to (8, 8, 3), representing length, width, and depth respectively; the data of each axis is arranged as an 8×8 matrix. In this way, the three-axis acceleration data within each short period can be shaped like a pixel image, so as to suit the training of the convolutional neural network. Of course, a suitable value can be set according to actual demand and is not limited here.
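A minimal sketch of this folding step is given below: the 64 readings of each axis are laid out row by row as an 8×8 plane and the three axes are stacked as depth channels, so one window becomes an (8, 8, 3) image-like input; the row-by-row ordering is an assumption.

```python
import numpy as np

def fold_window(window):
    """window: (64, 3) acceleration readings -> (8, 8, 3) image-like input."""
    # For each axis, reshape the 64 consecutive readings into an 8x8 matrix,
    # then stack the three axes as channels (depth), mirroring an RGB image.
    return np.stack([window[:, axis].reshape(8, 8) for axis in range(3)], axis=-1)

# A batch of n windows of shape (n, 64, 3) then becomes the network input (n, 8, 8, 3):
# inputs = np.stack([fold_window(w) for w in windows])
```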
The formula of the neuron, the elementary unit of a neural network, is as follows:

h_{W,b}(x) = f( Σ_{i=1}^{n} W_i x_i + b )

where x is the neuron input, n is the number of input parameters, W_i are the shared weights, b is the bias, f is the activation function, and h_{W,b}(x) is the neuron output.
A convolutional neural network differs from a general neural network in that it contains a feature extractor composed of convolutional layers and sub-sampling layers. In a convolutional layer of a convolutional neural network, a neuron is connected only to some of the neurons of the adjacent layer. A convolutional layer of a CNN generally contains several feature maps (featureMap); each feature map is composed of neurons arranged in a rectangle, and the neurons of the same feature map share weights; these shared weights are the convolution kernel. A convolution kernel is generally initialized as a matrix of small random values, and during training of the network the kernel learns reasonable weights. The direct benefit of sharing weights (the convolution kernel) is that the connections between the layers of the network are reduced, while the risk of over-fitting is also reduced.
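To make the weight-sharing idea concrete, the sketch below slides a single shared 3×3 kernel over an 8×8 single-channel input to produce one feature map. It is a plain 'valid' convolution for illustration only, not the exact layer configuration of the model described above.

```python
import numpy as np

def conv2d_single(x, kernel):
    """x: (8, 8) input plane, kernel: (3, 3) shared weights -> (6, 6) feature map."""
    kh, kw = kernel.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # every output neuron reuses the same 3x3 weights (the shared kernel)
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

kernel = np.random.randn(3, 3) * 0.01   # kernels start as small random values and are then learned
feature_map = conv2d_single(np.random.randn(8, 8), kernel)
```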
In this part of the present invention, only the size of the convolution kernels and the number of neurons need to be set. The values of the kernel size and the number of neurons are empirical and there is no fixed method of choosing them; in the example of the present invention the kernel size is 3×3 and the numbers of neurons of the two convolutional layers are 96 and 198 respectively, and these figures are for reference only.
Sub-sampling is also called pooling and usually takes two forms: mean pooling and max pooling. Sub-sampling can be regarded as a special convolution process. Convolution and sub-sampling greatly simplify the model, reducing its complexity and the number of its parameters.
The final specific experimental parameters of the model are listed below: the data size of the first-layer convolution kernel is (5, 5, 3), with 96 kernels in total; the pooling kernels in all experiments are (2, 2) with a pooling stride of 2, using the max-pooling strategy throughout; the convolution kernel data size of the second convolutional layer is (3, 3, 96), with 198 kernels in total; the fully connected layer contains 1024 hidden nodes; the learning rate is 0.0001; the drop-out parameter is 1.
If the amount of training data is not large enough, the data needs to be reused. Fifty data samples are input to the neural network for training at a time, and the recognition accuracy and cross entropy are measured every 50 iterations; the resulting variation of the cross entropy is shown in Fig. 5.
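A sketch of this training procedure is given below, assuming the build_model() function from the architecture sketch above and folded inputs of shape (n, 8, 8, 3) with integer labels 0-6. Batches of 50 samples are fed to the network, the data is reused by cycling over the training set, and the recognition accuracy and cross entropy are reported every 50 steps (measured on the training data here for simplicity; the test set could equally be used).

```python
import numpy as np
import tensorflow as tf

model = build_model(num_classes=7)                 # hypothetical helper from the sketch above
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

def train(x_train, y_train, steps=2000, batch_size=50):
    n = len(x_train)
    for step in range(steps):
        idx = np.arange(step * batch_size, (step + 1) * batch_size) % n   # reuse data by cycling
        model.train_on_batch(x_train[idx], y_train[idx])
        if (step + 1) % 50 == 0:                                          # measure every 50 steps
            probs = model.predict(x_train, verbose=0)
            acc = np.mean(np.argmax(probs, axis=1) == y_train)
            ce = loss_fn(y_train, probs).numpy()
            print(f"step {step + 1}: accuracy={acc:.3f}, cross_entropy={ce:.4f}")
```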
When the trained convolutional neural network meets the design requirements, the model can be extracted to the mobile intelligent terminal for use. If the trained convolutional neural network does not meet the design requirements, the number of neurons of each hidden layer needs to be changed; which value the number of neurons should be changed to must be determined by repeated experiments. If changing the number of neurons of each hidden layer has little effect on the recognition accuracy, it is suggested to add hidden layers or to increase the number of training samples.
It should be noted that the human posture recognition apparatus in the embodiments of the present invention can be integrated into an intelligent mobile terminal; the above intelligent terminal can specifically be a terminal such as a smartphone or smartwatch, which is not limited here.
It will be understood that the human posture recognition apparatus in the embodiments of the present invention can implement all of the technical solutions in the above method embodiments, and the function of each functional module can be implemented according to the methods in the above method embodiments; for the specific implementation process, reference may be made to the relevant descriptions in the above embodiments, and details are not repeated here.
Therefore, the human posture recognition apparatus in the embodiments of the present invention collects the acceleration data of the intelligent terminal, preprocesses the collected acceleration data, and inputs the preprocessed data into the trained human posture recognition model to obtain the human posture recognition result. Since the human posture recognition model is obtained by training a convolutional neural network on a preset training set, recognition of human posture is realized simply by preprocessing the acceleration data and inputting it into the trained model, thereby achieving human posture recognition by non-visual means based on acceleration data.
In the above-described embodiments, the description of each embodiment has its own emphasis; for parts not described in detail in one embodiment, reference may be made to the relevant descriptions of the other embodiments.
The above embodiments merely illustrate the technical idea of the present invention and do not limit its protection scope; any change made on the basis of the technical solution according to the technical idea proposed by the present invention falls within the protection scope of the present invention.
Claims (8)
1. A multi-class human posture recognition method based on a convolutional neural network and an intelligent terminal, characterized by including the following steps:
Step 1, collecting the three-axis acceleration sensor data of a mobile intelligent terminal device and recording the corresponding action categories;
Step 2, preprocessing the three-axis acceleration sensor data and dividing the data into two classes, one class being training samples and the other test samples;
Step 3, training the convolutional neural network with the training samples, testing its accuracy with the test samples, and continually adjusting it according to requirements;
Step 4, porting the trained convolutional neural network model to the mobile intelligent terminal;
Step 5, collecting three-axis acceleration sensor data with the mobile intelligent terminal, preprocessing them, and inputting them into the trained convolutional neural network model to obtain the human posture recognition result.
2. The multi-class human posture recognition method based on a convolutional neural network and an intelligent terminal according to claim 1, characterized in that: in step 1, the sampling frequency is set to 25 Hz.
3. The multi-class human posture recognition method based on a convolutional neural network and an intelligent terminal according to claim 1, characterized in that: in step 2, preprocessing the data includes filtering and normalizing the data and adjusting it into the input format of the convolutional neural network.
4. The multi-class human posture recognition method based on a convolutional neural network and an intelligent terminal according to claim 1, characterized in that: in step 2, 75% of the data is used as training samples and 25% of the data as test samples.
5. The multi-class human posture recognition method based on a convolutional neural network and an intelligent terminal according to claim 1, characterized in that step 3 specifically comprises:
a. establishing a multi-layer convolutional neural network model;
b. importing the training samples and adjusting the parameters of the convolutional neural network model to obtain a model with high accuracy.
6. The multi-class human posture recognition method based on a convolutional neural network and an intelligent terminal according to claim 5, characterized in that: in step b, adjusting the parameters of the convolutional neural network model includes adjusting the number of neurons in each layer, the loss function, and the convolution kernels.
7. The multi-class human posture recognition method based on a convolutional neural network and an intelligent terminal according to claim 5, characterized in that: in step a, the structure of the convolutional neural network model includes: an input layer, two convolutional and max-pooling layers, a fully connected layer, and an output layer.
8. The multi-class human posture recognition method based on a convolutional neural network and an intelligent terminal according to claim 7, characterized in that: in the convolutional neural network model, the convolution kernel size is 3×3 and the numbers of neurons in the two convolutional layers are 96 and 198 respectively; the data size of the first-layer convolution kernel is (5, 5, 3), with 96 kernels in total; the pooling kernels in all experiments are (2, 2) with a pooling stride of 2, using the max-pooling strategy throughout; the convolution kernel data size of the second convolutional layer is (3, 3, 96), with 198 kernels in total; the fully connected layer contains 1024 hidden nodes; the learning rate is 0.0001; the drop-out parameter is 1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711346910.5A CN108062170A (en) | 2017-12-15 | 2017-12-15 | Multi-class human posture recognition method based on convolutional neural networks and intelligent terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108062170A (en) | 2018-05-22
Family
ID=62139047
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711346910.5A Pending CN108062170A (en) | 2017-12-15 | 2017-12-15 | Multi-class human posture recognition method based on convolutional neural networks and intelligent terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108062170A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106022285A (en) * | 2016-05-30 | 2016-10-12 | 北京智芯原动科技有限公司 | Vehicle type identification method and vehicle type identification device based on convolutional neural network |
CN106228177A (en) * | 2016-06-30 | 2016-12-14 | 浙江大学 | Daily life subject image recognition methods based on convolutional neural networks |
CN106388831A (en) * | 2016-11-04 | 2017-02-15 | 郑州航空工业管理学院 | Method for detecting falling actions based on sample weighting algorithm |
CN107180225A (en) * | 2017-04-19 | 2017-09-19 | 华南理工大学 | A kind of recognition methods for cartoon figure's facial expression |
CN107153871A (en) * | 2017-05-09 | 2017-09-12 | 浙江农林大学 | Fall detection method based on convolutional neural networks and mobile phone sensor data |
Cited By (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3582196A1 (en) * | 2018-06-11 | 2019-12-18 | Verisure Sàrl | Shock sensor in an alarm system |
WO2019238256A1 (en) * | 2018-06-11 | 2019-12-19 | Verisure Sàrl | Shock sensor in an alarm system |
CN109033995A (en) * | 2018-06-29 | 2018-12-18 | 出门问问信息科技有限公司 | Identify the method, apparatus and intelligence wearable device of user behavior |
CN109346166A (en) * | 2018-11-22 | 2019-02-15 | 深圳市云护宝计算机技术有限公司 | A kind of inpatient department intelligent medical care bracelet and its deep learning modeling method |
CN109670548A (en) * | 2018-12-20 | 2019-04-23 | 电子科技大学 | HAR algorithm is inputted based on the more sizes for improving LSTM-CNN |
CN109670548B (en) * | 2018-12-20 | 2023-01-06 | 电子科技大学 | Multi-size input HAR algorithm based on improved LSTM-CNN |
CN109711324A (en) * | 2018-12-24 | 2019-05-03 | 南京师范大学 | Human posture recognition method based on Fourier transformation and convolutional neural networks |
CN109726662A (en) * | 2018-12-24 | 2019-05-07 | 南京师范大学 | Multi-class human posture recognition method based on convolution sum circulation combination neural net |
CN109740651A (en) * | 2018-12-24 | 2019-05-10 | 南京师范大学 | Limbs recognition methods based on 1- norm data processing transformation and convolutional neural networks |
CN109685148A (en) * | 2018-12-28 | 2019-04-26 | 南京师范大学 | Multi-class human motion recognition method and identifying system |
CN109770913A (en) * | 2019-01-23 | 2019-05-21 | 复旦大学 | A kind of abnormal gait recognition methods based on reverse transmittance nerve network |
CN109770912A (en) * | 2019-01-23 | 2019-05-21 | 复旦大学 | A kind of abnormal gait classification method based on depth convolutional neural networks |
CN110275161A (en) * | 2019-06-28 | 2019-09-24 | 台州睿联科技有限公司 | A kind of wireless human body gesture recognition method applied to Intelligent bathroom |
CN110610158A (en) * | 2019-09-16 | 2019-12-24 | 南京师范大学 | Human body posture identification method and system based on convolution and gated cyclic neural network |
CN111178288A (en) * | 2019-12-31 | 2020-05-19 | 南京师范大学 | Human body posture recognition method and device based on local error layer-by-layer training |
CN111178288B (en) * | 2019-12-31 | 2024-03-01 | 南京师范大学 | Human body posture recognition method and device based on local error layer-by-layer training |
CN111222459A (en) * | 2020-01-06 | 2020-06-02 | 上海交通大学 | Visual angle-independent video three-dimensional human body posture identification method |
CN111222459B (en) * | 2020-01-06 | 2023-05-12 | 上海交通大学 | Visual angle independent video three-dimensional human body gesture recognition method |
CN111723662A (en) * | 2020-05-18 | 2020-09-29 | 南京师范大学 | Human body posture recognition method based on convolutional neural network |
CN111753683A (en) * | 2020-06-11 | 2020-10-09 | 南京师范大学 | Human body posture identification method based on multi-expert convolutional neural network |
CN111860191A (en) * | 2020-06-24 | 2020-10-30 | 南京师范大学 | Human body posture identification method based on channel selection convolutional neural network |
CN111860188A (en) * | 2020-06-24 | 2020-10-30 | 南京师范大学 | Human body posture recognition method based on time and channel double attention |
CN111700624A (en) * | 2020-07-27 | 2020-09-25 | 中国科学院合肥物质科学研究院 | Mode recognition method and system for detecting motion gesture of smart bracelet |
CN111700624B (en) * | 2020-07-27 | 2024-03-12 | 中国科学院合肥物质科学研究院 | Pattern recognition method and system for detecting motion gesture by intelligent bracelet |
CN112115964A (en) * | 2020-08-04 | 2020-12-22 | 深圳市联合视觉创新科技有限公司 | Acceleration labeling model generation method, acceleration labeling method, device and medium |
CN112287810A (en) * | 2020-10-27 | 2021-01-29 | 南京大学 | Device and method capable of dynamically increasing motion recognition gestures |
CN114167984A (en) * | 2021-01-28 | 2022-03-11 | Oppo广东移动通信有限公司 | Device control method, device, storage medium and electronic device |
CN114167984B (en) * | 2021-01-28 | 2024-03-12 | Oppo广东移动通信有限公司 | Equipment control method and device, storage medium and electronic equipment |
CN113705507A (en) * | 2021-09-02 | 2021-11-26 | 上海交通大学 | Mixed reality open set human body posture recognition method based on deep learning |
CN113705507B (en) * | 2021-09-02 | 2023-09-19 | 上海交通大学 | Mixed reality open set human body gesture recognition method based on deep learning |
CN114916928B (en) * | 2022-05-12 | 2023-08-04 | 电子科技大学 | Human body posture multichannel convolutional neural network detection method |
CN114916928A (en) * | 2022-05-12 | 2022-08-19 | 电子科技大学 | Human body posture multi-channel convolution neural network detection method |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | |

Application publication date: 20180522