CN107609501A - Human body similar action recognition method and device, storage medium, and electronic equipment - Google Patents

Human body similar action recognition method and device, storage medium, and electronic equipment

Info

Publication number
CN107609501A
CN107609501A
Authority
CN
China
Prior art keywords
data
action
human body
recognition model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710792386.8A
Other languages
Chinese (zh)
Inventor
王晓婷
栾欣泽
何光宇
孟健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Neusoft Corp
Original Assignee
Neusoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Neusoft Corp filed Critical Neusoft Corp
Priority to CN201710792386.8A priority Critical patent/CN107609501A/en
Publication of CN107609501A publication Critical patent/CN107609501A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The present disclosure relates to a human body similar action recognition method and device, a storage medium, and an electronic device. The method includes: acquiring raw motion data of a human action collected by an inertial sensor; preprocessing the raw motion data and generating a feature map corresponding to the human action; acquiring a pre-built similar action recognition model whose topological structure is a convolutional neural network; and taking the feature map as the input of the similar action recognition model, which identifies the action category of the human action. This scheme helps improve the accuracy of action recognition, and the recognition accuracy is especially high for similar actions.

Description

Human body similar action recognition method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of behavior recognition technologies, and in particular, to a method and an apparatus for recognizing human body similar actions, a computer-readable storage medium, and an electronic device.
Background
Human action recognition has wide application in many fields, such as disease diagnosis, exercise training, and human-computer interaction. Current human action recognition methods mainly fall into the following two categories:
Computer-vision-based recognition methods use a computer to process and analyze the original image or image-sequence data acquired by a camera. Such methods are highly dependent on the quality of the acquired data, the position of the photographed user, the presence of occlusion, the background environment, and other factors.
Inertial-sensor-based recognition methods sense action behavior through an inertial sensor worn on the human body to obtain corresponding action data; a feature vector describing the action behavior is then extracted from the action data and used as the input of a classifier, which identifies the user's action category. Commonly used classifiers include K-Nearest Neighbor (KNN) classifiers, Support Vector Machines (SVM), Multi-Layer Perceptrons (MLP), K-means classifiers, and the like. Such classifier-based recognition methods suffer from low recognition accuracy, poor action discrimination, and other problems; in particular, they cannot recognize similar human actions well, for example two similar actions such as staggering and falling.
Disclosure of Invention
The purpose of the present disclosure is to provide a method and an apparatus for recognizing human body similar actions, a computer-readable storage medium, and an electronic device, which are helpful for improving the accuracy of action recognition, especially for recognizing similar actions with high accuracy.
In order to achieve the above object, in a first aspect, the present disclosure provides a method for recognizing human body similar actions, including:
acquiring original motion data of human body motion acquired by an inertial sensor;
preprocessing the original action data to generate a characteristic diagram corresponding to the human body action;
acquiring a pre-constructed similar action recognition model, wherein the topological structure of the similar action recognition model is a convolutional neural network;
and the characteristic diagram is used as the input of the similar action recognition model, and the action type of the human body action is recognized by the similar action recognition model.
Optionally, the generating a feature map corresponding to the human body motion includes:
vectorizing the original motion data by using a sliding window to obtain a vector combination;
preprocessing the vector combination, and extracting time domain characteristic data, frequency domain characteristic data and amplitude data from the preprocessed vector combination;
and generating an image containing the preprocessed vector combination, the time domain characteristic data, the frequency domain characteristic data and the amplitude data to obtain a characteristic diagram corresponding to the human body action.
Optionally, the similar action recognition model is constructed as follows:
acquiring a training data set according to historical original action data acquired by the inertial sensor, wherein the training data set comprises a plurality of training characteristic graphs;
determining the topological structure of the similar action recognition model as a convolutional neural network;
and training to obtain the similar action recognition model by utilizing the training characteristic diagram and the convolutional neural network.
Optionally, the hidden layers of the convolutional neural network comprise 5 convolutional layers and 3 fully-connected layers.
Optionally, the activation function of the convolutional neural network is a ReLU activation function;
the calculation formula of the local response normalized LRN of the convolutional neural network is as follows:
$$b_{x,y}^{i} = a_{x,y}^{i} \Big/ \Bigg(k + \alpha \sum_{j=\max(0,\,i-n/2)}^{\min(N-1,\,i+n/2)} \big(a_{x,y}^{j}\big)^{2}\Bigg)^{\beta}$$
where $a_{x,y}^{i}$ represents the value of the i-th feature map at position (x, y) before normalization, $b_{x,y}^{i}$ represents the value of the i-th feature map at position (x, y) after normalization, N represents the total number of feature maps, n represents the number of adjacent feature maps considered at the same position, and k, n, α, β are hyper-parameters.
In a second aspect, the present disclosure provides a human body similar action recognition device, including:
the data acquisition module is used for acquiring original motion data of the human body motion acquired by the inertial sensor;
the characteristic diagram generating module is used for generating a characteristic diagram corresponding to the human body action after preprocessing the original action data;
the model acquisition module is used for acquiring a pre-constructed similar action recognition model, and the topological structure of the similar action recognition model is a convolutional neural network;
and the action type identification module is used for taking the characteristic diagram as the input of the similar action identification model, and identifying the action type of the human body action by the similar action identification model.
Optionally, the feature map generating module includes:
the vector combination obtaining module is used for carrying out vectorization processing on the original motion data by utilizing a sliding window to obtain a vector combination;
the data extraction module is used for preprocessing the vector combination and extracting time domain characteristic data, frequency domain characteristic data and amplitude data from the preprocessed vector combination;
and the image generation module is used for generating an image containing the preprocessed vector combination, the time domain characteristic data, the frequency domain characteristic data and the amplitude data to obtain a characteristic diagram corresponding to the human body action.
Optionally, the apparatus further comprises:
the training data set acquisition module is used for acquiring a training data set according to historical original motion data acquired by the inertial sensor, and the training data set comprises a plurality of training characteristic graphs;
the topological structure determining module is used for determining that the topological structure of the similar action recognition model is a convolutional neural network;
and the model training module is used for training to obtain the similar action recognition model by utilizing the training characteristic diagram and the convolutional neural network.
In a third aspect, the present disclosure provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of any one of the first aspect.
In a fourth aspect, the present disclosure provides an electronic device comprising:
the computer-readable storage medium of the third aspect; and
one or more processors to execute the program in the computer-readable storage medium.
According to the above scheme, the human body similar action recognition model can be constructed in advance based on a convolutional neural network. After the raw motion data of the human action collected by the inertial sensor is preprocessed and the feature map corresponding to the human action is generated, the feature map can be used as the input of the similar action recognition model, which then recognizes the action category of the human action. The scheme of the present disclosure helps improve the accuracy of action recognition, and in particular achieves high recognition accuracy for similar actions.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure without limiting the disclosure. In the drawings:
FIG. 1 is a schematic flow chart of a human body similar action recognition method according to the present disclosure;
FIG. 2 is a schematic illustration of a feature map in the present disclosure;
FIG. 3 is a schematic flow chart of a method for constructing a similar action recognition model in the present disclosure;
FIG. 4 is a schematic diagram of the training of a convolutional neural network in the present disclosure;
FIG. 5 is a graphical illustration of similar action recognition accuracy in the present disclosure;
FIG. 6 is a graphical illustration of the predicted results of the model testing in the present disclosure;
FIG. 7 is a schematic structural diagram of a human body similar action recognition device according to the present disclosure;
fig. 8 is a block diagram of an electronic device for human body similar action recognition according to the present disclosure.
Detailed Description
The following detailed description of specific embodiments of the present disclosure is provided in connection with the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present disclosure, are given by way of illustration and explanation only, not limitation.
Referring to fig. 1, a flow chart of the method for recognizing human body similar actions according to the present disclosure is shown. The method may comprise the steps of:
step 101, acquiring original motion data of human motion acquired by an inertial sensor.
In the scheme of the present disclosure, the inertial sensor may be fixed on one or more parts of the surface of the human body and used to measure various physical quantities representing the motion state of the human body, that is, to acquire the raw motion data of the human body motion.
As one example, the inertial sensors may be embodied as accelerometers, magnetometers, gyroscopes, and the like, which may not be particularly limited by the present disclosure.
And 102, preprocessing the original motion data to generate a characteristic diagram corresponding to the human body motion.
In general, the raw motion data collected by the inertial sensor may contain some noise; for this reason, the raw motion data may be filtered to remove the noise. In addition, considering that the inertial sensor may be a 3-axis, 6-axis, or 9-axis sensor, the acquired raw motion data is not necessarily absolutely accurate, so the filtered data may also be subjected to a verification process. The present disclosure does not limit the preprocessing process, which may be determined according to the actual application.
After the original motion data is preprocessed, the preprocessed motion data can be utilized to generate a characteristic diagram corresponding to the human motion. As an example, the feature map may be generated as follows.
Firstly, vectorization processing can be carried out on the raw motion data by using a sliding window to obtain a vector combination; after the vector combination is preprocessed, time domain characteristic data, frequency domain characteristic data and amplitude data are extracted from the preprocessed vector combination; finally, an image containing the preprocessed vector combination, the time domain characteristic data, the frequency domain characteristic data and the amplitude data is generated to obtain a feature map corresponding to the human body motion.
Taking a 3-axis sensor as an example, assume that the sampling frequency of the sensor is 10 Hz and the size of the sliding window is set to 5 s. The raw motion data is output as triaxial data, e.g., ax for the x-axis, ay for the y-axis, and az for the z-axis, so a single sample may be represented as a vector A1 = [ax1, ay1, az1]. Thus, using the sliding window technique, the raw motion data output by the inertial sensor can be transformed into the following vector combination: A = [A1, A2, ..., A50].
After the raw motion data is converted into the vector combination, the vector combination can be preprocessed by filtering, verification and the like to obtain a preprocessed vector combination. As an example, the filtering may be performed with a Butterworth filter.
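To make the windowing and filtering concrete, here is a minimal Python sketch (not part of the patent text). The 10 Hz sampling rate and 5 s window follow the example above; the sliding step, filter order, and cutoff frequency are assumptions, since the disclosure only states that a Butterworth filter may be used.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 10                    # sampling frequency in Hz (from the example above)
WINDOW_SEC = 5             # sliding-window length in seconds
WINDOW = FS * WINDOW_SEC   # 50 samples per window

def sliding_windows(samples, window=WINDOW, step=WINDOW):
    """Turn a stream of [ax, ay, az] samples into vector combinations
    A = [A1, A2, ..., A50], one (50, 3) array per window."""
    samples = np.asarray(samples, dtype=float)
    return [samples[i:i + window]
            for i in range(0, len(samples) - window + 1, step)]

def butterworth_lowpass(window_data, cutoff_hz=3.0, order=4):
    """Low-pass filter each axis of one window; cutoff and order are
    assumed values, the disclosure only names the Butterworth filter."""
    b, a = butter(order, cutoff_hz / (FS / 2), btype="low")
    return filtfilt(b, a, window_data, axis=0)

# Example: 20 seconds of synthetic 3-axis accelerometer data.
raw = np.random.randn(200, 3)
windows = [butterworth_lowpass(w) for w in sliding_windows(raw)]
print(len(windows), windows[0].shape)   # 4 windows of shape (50, 3)
```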
As an example, the time domain feature data in the present disclosure may be at least embodied as a mean, a variance, a standard deviation, a skewness, a kurtosis, and the like.
As an example, the frequency domain characteristic data in the present disclosure may be obtained at least by a Fast Fourier Transform (FFT) or the like.
And finally, the preprocessed vector combination, the time domain characteristic data, the frequency domain characteristic data and the amplitude data can be drawn in the same image to generate a characteristic diagram corresponding to the human body action. As an example, reference may be made to the characteristic diagram shown in fig. 2.
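Building on that, the following hedged sketch extracts the time-domain statistics, FFT-based frequency-domain data, and amplitude data from a preprocessed window and draws them into a single image; the specific plot layout and the 128 × 128 output size are illustrative assumptions, not the exact format of the feature map in fig. 2.

```python
import numpy as np
from scipy.stats import skew, kurtosis
import matplotlib
matplotlib.use("Agg")            # render off-screen
import matplotlib.pyplot as plt

def extract_features(window_data):
    """window_data: one preprocessed vector combination of shape (50, 3)."""
    time_domain = {
        "mean": window_data.mean(axis=0),
        "variance": window_data.var(axis=0),
        "std": window_data.std(axis=0),
        "skewness": skew(window_data, axis=0),
        "kurtosis": kurtosis(window_data, axis=0),
    }
    freq_domain = np.abs(np.fft.rfft(window_data, axis=0))   # FFT magnitudes
    amplitude = np.linalg.norm(window_data, axis=1)          # per-sample magnitude
    return time_domain, freq_domain, amplitude

def render_feature_map(window_data, path="feature_map.png"):
    """Draw the window, its statistics, FFT and amplitude into one image."""
    td, fd, amp = extract_features(window_data)
    fig, axes = plt.subplots(4, 1, figsize=(1.28, 1.28), dpi=100)  # ~128x128 px
    axes[0].plot(window_data)                                  # preprocessed vectors
    axes[1].bar(range(3 * len(td)), np.concatenate(list(td.values())))  # time domain
    axes[2].plot(fd)                                           # frequency domain
    axes[3].plot(amp)                                          # amplitude
    for ax in axes:
        ax.axis("off")
    fig.savefig(path)
    plt.close(fig)

render_feature_map(np.random.randn(50, 3))
```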
Step 103, acquiring a pre-constructed similar action recognition model, wherein the topological structure of the similar action recognition model is a convolutional neural network.
And 104, taking the feature map as an input of the similar action recognition model, and recognizing the action type of the human body action by the similar action recognition model.
After the characteristic diagram corresponding to the human body action is generated, a pre-constructed similar action recognition model can be obtained for action recognition. Specifically, the input of the model may be a feature map corresponding to the human motion, and the output of the model may be a motion category corresponding to the human motion. For example, the motion category may be human motions with large differences such as running, walking, jumping, squatting, etc., or similar motions with small differences such as staggering, tumbling, etc.
As an example, the topological structure of the similar action recognition model may be a Convolutional Neural Network (CNN), so that the feature map may be used directly as the model input, avoiding complex pre-processing of the feature map and improving the robustness and recognition accuracy of the scheme.
In general, a convolutional neural network may include an input layer, a hidden layer, and an output layer, and the hidden layer in the present disclosure may include 5 convolutional layers and 3 fully-connected layers. In the application process, the depth of the hidden layer may be set according to actual requirements, and the scheme of the present disclosure may not be specifically limited to this.
Referring to fig. 3, a flow chart of a method for constructing a similar action recognition model in the present disclosure is shown. The method may comprise the steps of:
step 201, obtaining a training data set according to historical raw motion data acquired by the inertial sensor, wherein the training data set comprises a plurality of training feature maps.
Step 202, determining that the topological structure of the similar action recognition model is a convolutional neural network.
And 203, training to obtain the similar action recognition model by using the training feature map and the convolutional neural network.
As an example, in the present disclosure, a ReLU (Rectified Linear Unit) activation function may be used as the activation function of the convolutional neural network, for selecting the neurons of each layer involved in model training.
As an example, in order to improve the generalization ability of the model, normalization processing may be performed on the feature map input to the convolutional layer. For example, the feature map may be normalized by Local Response Normalization (LRN). The LRN calculation may be:
$$b_{x,y}^{i} = a_{x,y}^{i} \Big/ \Bigg(k + \alpha \sum_{j=\max(0,\,i-n/2)}^{\min(N-1,\,i+n/2)} \big(a_{x,y}^{j}\big)^{2}\Bigg)^{\beta}$$
where $a_{x,y}^{i}$ represents the value of the i-th feature map at position (x, y) before normalization and $b_{x,y}^{i}$ represents the value of the i-th feature map at position (x, y) after normalization.
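As a sanity check on the formula, the following NumPy sketch computes local response normalization across feature maps exactly as written above; the default values of k, n, α, β are the commonly used AlexNet settings and are assumptions here, since the disclosure treats them as hyper-parameters.

```python
import numpy as np

def local_response_norm(feature_maps, k=2.0, n=5, alpha=1e-4, beta=0.75):
    """feature_maps: array of shape (N, H, W), one 2-D map per index i.
    Computes b[i, x, y] = a[i, x, y] / (k + alpha * sum_j a[j, x, y]^2)^beta
    with j running from max(0, i - n//2) to min(N - 1, i + n//2)."""
    a = np.asarray(feature_maps, dtype=float)
    N = a.shape[0]
    b = np.empty_like(a)
    for i in range(N):
        lo, hi = max(0, i - n // 2), min(N - 1, i + n // 2)
        denom = (k + alpha * np.sum(a[lo:hi + 1] ** 2, axis=0)) ** beta
        b[i] = a[i] / denom
    return b

maps = np.random.rand(96, 55, 55)        # e.g. the 96 conv1 outputs of size 55 x 55
print(local_response_norm(maps).shape)   # (96, 55, 55)
```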
The following explains the model building process of the present disclosure by taking an example that the hidden layer includes 5 convolutional layers and 3 fully-connected layers, and specifically, refer to the schematic diagram shown in fig. 4.
1. First convolution layer conv1
Assume conv1 uses 96 filters of size 11 × 11 with a stride of 4 pixels. At convolution time, the output feature map size follows the formula ⌊(img_size − filter_size)/stride⌋ + 1 = new_feature_size; in this example the feature map size is ⌊(227 − 11)/4⌋ + 1 = 55, where ⌊·⌋ denotes rounding down. Since the input is a color image with RGB channels, conv1 produces 96 feature maps of size 55 × 55.
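The output-size rule above can be captured in a small helper; the sketch below just reproduces the ⌊(img_size − filter_size)/stride⌋ + 1 arithmetic (with an optional padding term, which the example does not use).

```python
def conv_output_size(img_size: int, filter_size: int, stride: int, padding: int = 0) -> int:
    """Side length of the output feature map:
    floor((img_size + 2*padding - filter_size) / stride) + 1."""
    return (img_size + 2 * padding - filter_size) // stride + 1

print(conv_output_size(227, 11, 4))   # 55, matching the conv1 example above
```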
2. Second convolution layer conv2
The output of conv1 is local-response normalized and pooled and then used as the input of conv2; the number of neurons in conv2 is 27 × 27 × 256.
3. Third convolution layer conv3
conv3 has 384 kernels of size 3 × 3 × 256 connected to the output of conv2.
4. The fourth convolution layer conv4
conv4 is obtained by applying a ReLU (i.e., ReLU3) to conv3 and then convolving with 384 kernels of size 3 × 3; the number of neurons in conv4 is 13 × 13 × 384.
5. Fifth convolution layer conv5
conv5 is similar to conv4, except that its input is conv4 after a ReLU (i.e., ReLU4); the number of neurons in conv5 is 13 × 13 × 256.
6. First full connection layer fc6
fc6 is a full connection applied after the output of conv5 is pooled; the full connection yields 4096 nodes, so fc6 has 4096 nodes.
7. Second full connection layer fc7
fc7 is the result of applying a ReLU (i.e., ReLU6) to fc6 and then sequentially performing dropout (i.e., drop6) and a full connection; the number of fc7 nodes is 4096.
8. Third full connection layer fc8
fc8 is the result of a full connection after fc7 has gone through one more ReLU (i.e., ReLU7) and dropout (i.e., drop7).
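Putting the eight hidden layers together, the following is a minimal PyTorch sketch of a network with 5 convolutional layers and 3 fully connected layers in the AlexNet-style layout described above; the pooling positions, padding values, dropout rate, LRN hyper-parameters, the 227 × 227 input (the training feature maps are said elsewhere to be 128 × 128), and the two-class output (e.g., stagger vs. slip) are assumptions rather than details fixed by the disclosure.

```python
import torch
import torch.nn as nn

class SimilarActionCNN(nn.Module):
    """5 convolutional layers + 3 fully connected layers, AlexNet-style."""
    def __init__(self, num_classes: int = 2):   # e.g. stagger vs. slip
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=11, stride=4),              # conv1 -> 96 x 55 x 55
            nn.ReLU(inplace=True),
            nn.LocalResponseNorm(size=5, alpha=1e-4, beta=0.75, k=2.0),
            nn.MaxPool2d(kernel_size=3, stride=2),                    # 96 x 27 x 27
            nn.Conv2d(96, 256, kernel_size=5, padding=2),             # conv2 -> 256 x 27 x 27
            nn.ReLU(inplace=True),
            nn.LocalResponseNorm(size=5, alpha=1e-4, beta=0.75, k=2.0),
            nn.MaxPool2d(kernel_size=3, stride=2),                    # 256 x 13 x 13
            nn.Conv2d(256, 384, kernel_size=3, padding=1),            # conv3 -> 384 x 13 x 13
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 384, kernel_size=3, padding=1),            # conv4 -> 384 x 13 x 13
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1),            # conv5 -> 256 x 13 x 13
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),                    # 256 x 6 x 6
        )
        self.classifier = nn.Sequential(
            nn.Linear(256 * 6 * 6, 4096),                             # fc6
            nn.ReLU(inplace=True),
            nn.Dropout(p=0.5),
            nn.Linear(4096, 4096),                                    # fc7
            nn.ReLU(inplace=True),
            nn.Dropout(p=0.5),
            nn.Linear(4096, num_classes),                             # fc8
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)

model = SimilarActionCNN()
print(model(torch.randn(1, 3, 227, 227)).shape)   # torch.Size([1, 2])
```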
As can be seen from the above description, the convolutional neural network CNN includes two basic calculations:
one is feature extraction, in which the input of each neuron is connected to a local acceptance domain of the previous layer, and the features of the local acceptance domain are extracted. Once the feature of the local acceptance domain is extracted, the position relationship between the feature and other features is determined. The calculation may correspond to a convolutional layer process.
The second is feature mapping: the feature mapping structure adopts the ReLU response function as the activation function of the convolutional network, and using dropout at the fully connected layers may reduce overfitting. This calculation corresponds to the processing of the fully connected layers.
As an example, the scheme of the present disclosure may use stochastic gradient descent for model training. The image size of a training feature map used as a sample may be 128 × 128, the momentum may be 0.9, and the weight decay may be 0.0005. The update rule of the weight w may be embodied as:
$$v_{i+1} := 0.9\,v_i - 0.0005\,\epsilon\,w_i - \epsilon\left\langle\frac{\partial L}{\partial w}\Big|_{w_i}\right\rangle_{D_i}$$
$$w_{i+1} := w_i + v_{i+1}$$
where i denotes the iteration index, v the momentum variable, ε the learning rate, and $\langle\partial L/\partial w|_{w_i}\rangle_{D_i}$ the derivative of the objective with respect to the weights w, evaluated at $w_i$ and averaged over the i-th batch of samples $D_i$.
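A minimal NumPy rendering of this update rule with momentum 0.9 and weight decay 0.0005 is shown below; the averaged batch gradient is passed in by the caller, and the learning rate value and function interface are illustrative assumptions.

```python
import numpy as np

def sgd_momentum_step(w, v, grad, lr=0.01, momentum=0.9, weight_decay=0.0005):
    """One update: v <- momentum*v - weight_decay*lr*w - lr*grad; w <- w + v.
    `grad` stands for the gradient of the objective w.r.t. w averaged over
    the current batch D_i."""
    v = momentum * v - weight_decay * lr * w - lr * grad
    w = w + v
    return w, v

# Toy example with a single weight vector.
w = np.zeros(4)
v = np.zeros(4)
for _ in range(3):
    grad = np.random.randn(4)      # stand-in for the averaged batch gradient
    w, v = sgd_momentum_step(w, v, grad)
print(w)
```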
The following explains the advantageous effects of the present disclosure with reference to specific examples.
Referring to fig. 5, when the scheme of the present disclosure is used to recognize actions such as falling and staggering, the recognition accuracy is 88.67%. As the number of training samples increases, the model is continuously optimized and the recognition accuracy gradually improves.
The scheme of the present disclosure can further be tested with samples that did not participate in model training, where staggering is identified by the label stagger and falling by the label slip. In the prediction results shown in fig. 6, the feature map stagger_0223.jpg is predicted as staggering (stagger) with a probability of 98.65%, and the feature map slip_0223.jpg is predicted as falling (slip) with a probability of 98.6%. These experimental results show that the scheme of the present disclosure achieves high accuracy and an ideal recognition effect when recognizing similar human actions.
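As an illustration of such a test, the hedged sketch below runs a trained model of the kind outlined earlier on a held-out feature map image and reads off the class probabilities with a softmax; the class ordering, the 227 × 227 resizing, and the scaling of pixel values are assumptions.

```python
import numpy as np
import torch
from PIL import Image

CLASSES = ["stagger", "slip"]   # assumed class ordering

def predict(model: torch.nn.Module, image_path: str) -> dict:
    """Run one held-out feature map image through the trained CNN."""
    img = Image.open(image_path).convert("RGB").resize((227, 227))
    x = torch.from_numpy(np.asarray(img, dtype=np.float32) / 255.0)
    x = x.permute(2, 0, 1).unsqueeze(0)          # HWC -> NCHW
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)[0]
    return {c: float(p) for c, p in zip(CLASSES, probs)}

# e.g. predict(model, "stagger_0223.jpg") should report a high probability
# for the "stagger" class if the model is trained as described above.
```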
As an example, the present disclosure may be applied to daily monitoring of elderly people living alone: their action data is collected in real time, it is accurately identified whether an elderly person has fallen or staggered, and relevant subsequent processing is then performed according to the recognition result; for example, when it is determined that the person has fallen, an emergency contact may be called.
Referring to fig. 7, a schematic structural diagram of the human body similar action recognition device of the present disclosure is shown. The device comprises:
the data acquisition module 301 is configured to acquire original motion data of human body motion acquired by the inertial sensor;
a feature map generation module 302, configured to generate a feature map corresponding to the human body action after preprocessing the original action data;
the model obtaining module 303 is configured to obtain a pre-constructed similar action recognition model, where a topological structure of the similar action recognition model is a convolutional neural network;
and the action type identification module 304 is used for taking the feature map as the input of the similar action identification model, and identifying the action type of the human body action by the similar action identification model.
Optionally, the feature map generating module includes:
the vector combination obtaining module is used for carrying out vectorization processing on the original motion data by utilizing a sliding window to obtain a vector combination;
the data extraction module is used for preprocessing the vector combination and extracting time domain characteristic data, frequency domain characteristic data and amplitude data from the preprocessed vector combination;
and the image generation module is used for generating an image containing the preprocessed vector combination, the time domain characteristic data, the frequency domain characteristic data and the amplitude data to obtain a characteristic diagram corresponding to the human body action.
Optionally, the apparatus further comprises:
the training data set acquisition module is used for acquiring a training data set according to historical original motion data acquired by the inertial sensor, and the training data set comprises a plurality of training characteristic graphs;
the topological structure determining module is used for determining that the topological structure of the similar action recognition model is a convolutional neural network;
and the model training module is used for training to obtain the similar action recognition model by utilizing the training characteristic diagram and the convolutional neural network.
Optionally, the hidden layers of the convolutional neural network comprise 5 convolutional layers and 3 fully-connected layers.
Optionally, the activation function of the convolutional neural network is a ReLU activation function;
the calculation formula of the local response normalized LRN of the convolutional neural network is as follows:
$$b_{x,y}^{i} = a_{x,y}^{i} \Big/ \Bigg(k + \alpha \sum_{j=\max(0,\,i-n/2)}^{\min(N-1,\,i+n/2)} \big(a_{x,y}^{j}\big)^{2}\Bigg)^{\beta}$$
where $a_{x,y}^{i}$ represents the value of the i-th feature map at position (x, y) before normalization, $b_{x,y}^{i}$ represents the value of the i-th feature map at position (x, y) after normalization, N represents the total number of feature maps, n represents the number of adjacent feature maps at the same position, and k, n, α, β are hyper-parameters.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 8 is a block diagram illustrating an electronic device 400 according to an exemplary embodiment, the electronic device 400 being configured to implement human body similar action recognition. As shown in fig. 8, the electronic device 400 may include: a processor 401, a memory 402, a multimedia component 403, an input/output (I/O) interface 404, and a communication component 405.
The processor 401 is configured to control the overall operation of the electronic device 400, so as to complete all or part of the steps in the human body similar action recognition method. The memory 402 is used to store various types of data to support operation at the electronic device 400, such as instructions for any application or method operating on the electronic device 400 and application-related data, such as contact data, transmitted and received messages, pictures, audio, video, and so forth. The memory 402 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk. The multimedia component 403 may include a screen and an audio component, where the screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals; the received audio signal may further be stored in the memory 402 or transmitted through the communication component 405. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 404 provides an interface between the processor 401 and other interface modules, such as a keyboard, a mouse, or buttons, which may be virtual buttons or physical buttons. The communication component 405 is used for wired or wireless communication between the electronic device 400 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, or 4G, or a combination of one or more of them, and accordingly the communication component 405 may include a Wi-Fi module, a Bluetooth module, and an NFC module.
In an exemplary embodiment, the electronic device 400 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the human body similar action recognition method described above.
In another exemplary embodiment, a computer-readable storage medium comprising program instructions is also provided, such as the memory 402 comprising program instructions, which are executable by the processor 401 of the electronic device 400 to perform the human body similar action recognition method described above.
The preferred embodiments of the present disclosure are described in detail with reference to the accompanying drawings, however, the present disclosure is not limited to the specific details of the above embodiments, and various simple modifications may be made to the technical solution of the present disclosure within the technical idea of the present disclosure, and these simple modifications all belong to the protection scope of the present disclosure.
It should be noted that, in the foregoing embodiments, various features described in the above embodiments may be combined in any suitable manner, and in order to avoid unnecessary repetition, various combinations that are possible in the present disclosure are not described again.
In addition, any combination of various embodiments of the present disclosure may be made, and the same should be considered as the disclosure of the present disclosure, as long as it does not depart from the spirit of the present disclosure.

Claims (10)

1. A human body similar action recognition method is characterized by comprising the following steps:
acquiring original motion data of human body motion acquired by an inertial sensor;
preprocessing the original action data to generate a characteristic diagram corresponding to the human body action;
acquiring a pre-constructed similar action recognition model, wherein the topological structure of the similar action recognition model is a convolutional neural network;
and the characteristic diagram is used as the input of the similar action recognition model, and the action type of the human body action is recognized by the similar action recognition model.
2. The method according to claim 1, wherein the generating a feature map corresponding to the human body motion after preprocessing the raw motion data comprises:
vectorizing the original motion data by using a sliding window to obtain a vector combination;
preprocessing the vector combination, and extracting time domain characteristic data, frequency domain characteristic data and amplitude data from the preprocessed vector combination;
and generating an image containing the preprocessed vector combination, the time domain characteristic data, the frequency domain characteristic data and the amplitude data to obtain a characteristic diagram corresponding to the human body action.
3. The method of claim 1, wherein the similar action recognition model is constructed by:
acquiring a training data set according to historical original action data acquired by the inertial sensor, wherein the training data set comprises a plurality of training characteristic graphs;
determining the topological structure of the similar action recognition model as a convolutional neural network;
and training to obtain the similar action recognition model by utilizing the training characteristic diagram and the convolutional neural network.
4. The method of claim 3, wherein the hidden layers of the convolutional neural network comprise 5 convolutional layers and 3 fully-connected layers.
5. The method according to claim 3 or 4,
the activation function of the convolutional neural network is a ReLU activation function;
the calculation formula of the local response normalized LRN of the convolutional neural network is as follows:
$$b_{x,y}^{i} = a_{x,y}^{i} \Big/ \Bigg(k + \alpha \sum_{j=\max(0,\,i-n/2)}^{\min(N-1,\,i+n/2)} \big(a_{x,y}^{j}\big)^{2}\Bigg)^{\beta}$$
wherein $a_{x,y}^{i}$ represents the value of the i-th feature map at position (x, y) before normalization, $b_{x,y}^{i}$ represents the value of the i-th feature map at position (x, y) after normalization, N represents the total number of feature maps, n represents the number of adjacent feature maps at the same position, and k, n, α, β are hyper-parameters.
6. A human body similar action recognition device is characterized by comprising:
the data acquisition module is used for acquiring original motion data of the human body motion acquired by the inertial sensor;
the characteristic diagram generating module is used for generating a characteristic diagram corresponding to the human body action after preprocessing the original action data;
the model acquisition module is used for acquiring a pre-constructed similar action recognition model, and the topological structure of the similar action recognition model is a convolutional neural network;
and the action type identification module is used for taking the characteristic diagram as the input of the similar action identification model, and identifying the action type of the human body action by the similar action identification model.
7. The apparatus of claim 6, wherein the feature map generation module comprises:
the vector combination obtaining module is used for carrying out vectorization processing on the original motion data by utilizing a sliding window to obtain a vector combination;
the data extraction module is used for preprocessing the vector combination and extracting time domain characteristic data, frequency domain characteristic data and amplitude data from the preprocessed vector combination;
and the image generation module is used for generating an image containing the preprocessed vector combination, the time domain characteristic data, the frequency domain characteristic data and the amplitude data to obtain a characteristic diagram corresponding to the human body action.
8. The apparatus of claim 6 or 7, further comprising:
the training data set acquisition module is used for acquiring a training data set according to historical original motion data acquired by the inertial sensor, and the training data set comprises a plurality of training characteristic graphs;
the topological structure determining module is used for determining that the topological structure of the similar action recognition model is a convolutional neural network;
and the model training module is used for training to obtain the similar action recognition model by utilizing the training characteristic diagram and the convolutional neural network.
9. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 5.
10. An electronic device, comprising:
the computer-readable storage medium recited in claim 9; and
one or more processors to execute the program in the computer-readable storage medium.
CN201710792386.8A 2017-09-05 2017-09-05 The close action identification method of human body and device, storage medium, electronic equipment Pending CN107609501A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710792386.8A CN107609501A (en) 2017-09-05 2017-09-05 The close action identification method of human body and device, storage medium, electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710792386.8A CN107609501A (en) 2017-09-05 2017-09-05 The close action identification method of human body and device, storage medium, electronic equipment

Publications (1)

Publication Number Publication Date
CN107609501A true CN107609501A (en) 2018-01-19

Family

ID=61057492

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710792386.8A Pending CN107609501A (en) 2017-09-05 2017-09-05 The close action identification method of human body and device, storage medium, electronic equipment

Country Status (1)

Country Link
CN (1) CN107609501A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109033995A (en) * 2018-06-29 2018-12-18 出门问问信息科技有限公司 Identify the method, apparatus and intelligence wearable device of user behavior
CN109522874A (en) * 2018-12-11 2019-03-26 中国科学院深圳先进技术研究院 Human motion recognition method, device, terminal device and storage medium
CN109670548A (en) * 2018-12-20 2019-04-23 电子科技大学 HAR algorithm is inputted based on the more sizes for improving LSTM-CNN
WO2019192235A1 (en) * 2018-04-04 2019-10-10 深圳大学 User identity authentication method and system based on mobile device
CN115731602A (en) * 2021-08-24 2023-03-03 中国科学院深圳先进技术研究院 Human body activity recognition method, device, equipment and storage medium based on topological representation

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103914149A (en) * 2014-04-01 2014-07-09 复旦大学 Gesture interaction method and gesture interaction system for interactive television
CN105279495A (en) * 2015-10-23 2016-01-27 天津大学 Video description method based on deep learning and text summarization
CN106237604A (en) * 2016-08-31 2016-12-21 歌尔股份有限公司 Wearable device and the method utilizing its monitoring kinestate

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103914149A (en) * 2014-04-01 2014-07-09 复旦大学 Gesture interaction method and gesture interaction system for interactive television
CN105279495A (en) * 2015-10-23 2016-01-27 天津大学 Video description method based on deep learning and text summarization
CN106237604A (en) * 2016-08-31 2016-12-21 歌尔股份有限公司 Wearable device and the method utilizing its monitoring kinestate

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张俊杰 et al.: "Upper limb motion information acquisition and posture recognition based on acceleration sensors", Journal of Beijing University of Technology (北京工业大学学报) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019192235A1 (en) * 2018-04-04 2019-10-10 深圳大学 User identity authentication method and system based on mobile device
CN109033995A (en) * 2018-06-29 2018-12-18 出门问问信息科技有限公司 Identify the method, apparatus and intelligence wearable device of user behavior
CN109522874A (en) * 2018-12-11 2019-03-26 中国科学院深圳先进技术研究院 Human motion recognition method, device, terminal device and storage medium
CN109670548A (en) * 2018-12-20 2019-04-23 电子科技大学 HAR algorithm is inputted based on the more sizes for improving LSTM-CNN
CN109670548B (en) * 2018-12-20 2023-01-06 电子科技大学 Multi-size input HAR algorithm based on improved LSTM-CNN
CN115731602A (en) * 2021-08-24 2023-03-03 中国科学院深圳先进技术研究院 Human body activity recognition method, device, equipment and storage medium based on topological representation

Similar Documents

Publication Publication Date Title
Janarthanan et al. Optimized unsupervised deep learning assisted reconstructed coder in the on-nodule wearable sensor for human activity recognition
CN108960337B (en) Multi-modal complex activity recognition method based on deep learning model
CN107609501A (en) The close action identification method of human body and device, storage medium, electronic equipment
Asim et al. Context-aware human activity recognition (CAHAR) in-the-Wild using smartphone accelerometer
CN107688790B (en) Human behavior recognition method and device, storage medium and electronic equipment
US20210064141A1 (en) System for detecting a signal body gesture and method for training the system
Giorgi et al. Try walking in my shoes, if you can: Accurate gait recognition through deep learning
Kuncan et al. A novel approach for activity recognition with down-sampling 1D local binary pattern
CN105787434A (en) Method for identifying human body motion patterns based on inertia sensor
CN114943324B (en) Neural network training method, human motion recognition method and device, and storage medium
CN111950437A (en) Gait recognition method and device based on deep learning model and computer equipment
Dhanraj et al. Efficient smartphone-based human activity recognition using convolutional neural network
CN109919085A (en) Health For All Activity recognition method based on light-type convolutional neural networks
Shi et al. Fall detection system based on inertial mems sensors: Analysis design and realization
Wang et al. A2dio: Attention-driven deep inertial odometry for pedestrian localization based on 6d imu
Yunas et al. Gait activity classification using multi-modality sensor fusion: a deep learning approach
Kim et al. Activity recognition using fully convolutional network from smartphone accelerometer
Li et al. Estimation of blood alcohol concentration from smartphone gait data using neural networks
Khatun et al. Human activity recognition using smartphone sensor based on selective classifiers
Hajjej et al. Deep human motion detection and multi-features analysis for smart healthcare learning tools
Baloch et al. CNN‐LSTM‐Based Late Sensor Fusion for Human Activity Recognition in Big Data Networks
CN110598599A (en) Method and device for detecting abnormal gait of human body based on Gabor atomic decomposition
Hafeez et al. Multi-Sensor-Based Action Monitoring and Recognition via Hybrid Descriptors and Logistic Regression
Zeng et al. Accelerometer-based gait recognition via deterministic learning
Jarrah et al. IoMT-based smart healthcare of elderly people using deep extreme learning machine

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20180119