CN115497171A - Human behavior recognition method and system based on deep learning - Google Patents

Human behavior recognition method and system based on deep learning

Info

Publication number
CN115497171A
Authority
CN
China
Prior art keywords
data
human behavior
imut
submodule
behavior recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211348196.4A
Other languages
Chinese (zh)
Inventor
尹选春
丁朋旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China Agricultural University
Original Assignee
South China Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China Agricultural University filed Critical South China Agricultural University
Priority to CN202211348196.4A
Publication of CN115497171A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/08 Learning methods
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to human behavior recognition, and in particular to a human behavior recognition method and system based on deep learning.

Description

Human behavior recognition method and system based on deep learning
Technical Field
The invention relates to the technical field of human behavior recognition, in particular to a human behavior recognition method and system based on deep learning.
Background
Human behavior recognition can be applied in many fields, such as virtual reality, augmented reality, medical care, and security. With the development of deep learning, deep learning techniques are being used in more and more fields and achieving good results, and human behavior recognition is no exception. In current human behavior recognition, deep learning algorithms are mainly used in three directions: RGB video, depth video, and 3D skeletons. Existing human behavior recognition algorithms all require video as input, so the threshold for obtaining information is high. Methods based on RGB video input are easily disturbed by anomalies, such as occlusion of a small part of the body or even the whole body; depth video requires a depth camera to obtain depth information; a 3D skeleton must be extracted from RGB video in advance before recognition; and training and inference with 3D CNNs consume substantial resources.
The prior art discloses a human behavior recognition method based on a multi-scale attention-driven graph convolutional network, which comprises the following steps: acquiring an original 3D skeleton sequence to be recognized; inputting the original 3D skeleton sequence into a pre-trained human behavior recognition model, which first extracts joint information, bone information, and motion information from the sequence through a multi-branch input module to serve as behavior feature data; then learning the correlations among the 3D skeleton joints from the behavior feature data through a multi-scale attention graph convolution module and extracting the temporal information of various behaviors over different durations; and finally recognizing the human behavior corresponding to the original 3D skeleton sequence through a global attention pooling layer and outputting the corresponding recognition result. In this scheme, the correlations among 3D skeleton joints are learned through a multi-scale attention graph convolutional network, and the temporal information of behaviors over different durations is extracted, so that both the correlations between 3D skeleton joints and their continuity in time can be captured. This effectively represents the variable inter-joint correlations of the human body under different behaviors, improves recognition accuracy, and guarantees the recognition effect; the behavior feature extraction also alleviates the problem of redundant joint features in the skeleton sequence. However, in this scheme the 3D skeleton must first be extracted from RGB video before it can be recognized, so the threshold for obtaining information is high; moreover, the multi-scale attention graph convolution module and the global attention pooling layer make the model large and the computation complex, and training and inference consume substantial resources.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a human behavior recognition method and system based on deep learning that can conveniently collect human behavior data, lower the threshold for obtaining information, resist interference, and reduce the resources consumed by model training and inference.
In order to solve the technical problems, the invention adopts the technical scheme that:
the human behavior recognition method based on deep learning comprises the following steps:
s1: collecting human body behavior data by using an IMU sensor;
s2: inputting the human body behavior data into a pre-trained IMUT network model;
s3: and outputting a corresponding human behavior recognition result.
According to the human behavior recognition method based on deep learning, human behavior data can be conveniently collected by the IMU sensor and transmitted to the IMUT network model, which recognizes the human behavior and finally outputs the corresponding recognition result. By using an IMU sensor, the method lowers the threshold for obtaining information, does not rely on visual information to recognize human bodies, is not easily interfered with, and reduces the resources consumed by model training and inference.
Preferably, in step S2, the process of predicting the input human behavior data by the IMUT network model includes:
s21: the feature embedding layer is utilized to enable the input human behavior data to be fused with adjacent space-time features;
s22: extracting features in the features by using a feature extraction layer;
s23: and identifying the human body behaviors by utilizing the classification layer through the extracted features.
Preferably, in step S22, the feature extraction layer is implemented by a convolution layer with convolution kernel 3, padding 1 and stride 1, with the specific formula:

$$\operatorname{out}(N_i, C_{out_j}) = \operatorname{bias}(C_{out_j}) + \sum_{k=0}^{C_{in}-1} \operatorname{weight}(C_{out_j}, k) \star \operatorname{input}(N_i, k)$$

wherein $C_{in}$ represents the number of input feature channels; $C_{out_j}$ represents the number of output feature channels; $N_i$ represents the amount of data in each input batch; $k$ represents the size of the convolution kernel; $\operatorname{input}(N_i, k)$ denotes the input tensor; $\operatorname{bias}(C_{out_j})$ denotes a bias consistent with the output feature size; and $\operatorname{out}(N_i, C_{out_j})$ denotes the final output tensor.
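As a quick check of this convolution, assuming it matches the standard nn.Conv1d computation: with kernel_size=3, padding=1 and stride=1 the temporal length is preserved while the channel count maps from C_in to C_out. The tensor sizes below are illustrative.

```python
import torch
import torch.nn as nn

conv = nn.Conv1d(in_channels=6, out_channels=64,
                 kernel_size=3, padding=1, stride=1)
x = torch.randn(8, 6, 128)   # (batch N_i, C_in channels, 128 time steps)
y = conv(x)                  # out = bias + sum_k weight * input
print(y.shape)               # torch.Size([8, 64, 128]): length unchanged
```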
Preferably, in step S22, the feature extraction layer is composed of 6 feature extraction blocks, and each feature extraction block has a self-attention layer and a fully connected layer.
Preferably, each feature extraction block further comprises two batch normalization layers, located after the self-attention layer and after the fully connected layer respectively, so as to optimize the feature extraction layer. Unlike most existing methods, which only use a convolutional neural network to recognize human behaviors, the invention is based on a self-attention model, which can better recognize human behaviors globally; using the self-attention mechanism for IMU feature extraction and human behavior recognition reduces interference and resource consumption.
Preferably, the feature extraction layer is formed by a method specifically comprising:

$$Q = W_Q X$$

$$K = W_K X$$

$$V = W_V X$$

$$\operatorname{Attention}(Q, K, V) = \operatorname{softmax}\!\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V$$

$$\operatorname{softmax}(z_i) = \frac{e^{z_i}}{\sum_{c=1}^{C} e^{z_c}}$$

$$z = Wa + b$$

$$\operatorname{ReLU}(x) = \max(0, x)$$

wherein $Q$ represents the query matrix; $K$ the key matrix; $V$ the value matrix; $W_Q$ the query weight matrix; $W_K$ the key weight matrix; $W_V$ the value weight matrix; $X$ the input value; $d_k$ the dimension of the key matrix; $z_i$ the output value of the $i$-th node; $C$ the number of output nodes; $c$ indexes the output nodes; $z$ the output of the fully connected layer; $W$ the weight of the fully connected layer; $a$ the input of the fully connected layer; $b$ the bias of the fully connected layer; $\operatorname{ReLU}$ the activation function; $\operatorname{Attention}(Q, K, V)$ the self-attention computation over the features; and $\operatorname{softmax}$ the probability of the output result.
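A minimal sketch of one such feature extraction block, combining the formulas above with the batch normalization placement described earlier; d_model, single-head attention, and the absence of residual connections are assumptions the patent does not settle.

```python
import math
import torch
import torch.nn as nn

class ExtractionBlock(nn.Module):
    def __init__(self, d_model=64):
        super().__init__()
        self.w_q = nn.Linear(d_model, d_model, bias=False)   # W_Q
        self.w_k = nn.Linear(d_model, d_model, bias=False)   # W_K
        self.w_v = nn.Linear(d_model, d_model, bias=False)   # W_V
        self.bn1 = nn.BatchNorm1d(d_model)     # after self-attention
        self.fc = nn.Linear(d_model, d_model)  # z = Wa + b
        self.bn2 = nn.BatchNorm1d(d_model)     # after the fully connected layer

    def forward(self, x):                      # x: (batch, time, d_model)
        q, k, v = self.w_q(x), self.w_k(x), self.w_v(x)
        # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
        scores = q @ k.transpose(-2, -1) / math.sqrt(k.size(-1))
        attn = torch.softmax(scores, dim=-1) @ v
        # BatchNorm1d normalizes over channels, hence the transposes.
        attn = self.bn1(attn.transpose(1, 2)).transpose(1, 2)
        out = torch.relu(self.fc(attn))        # ReLU(x) = max(0, x)
        return self.bn2(out.transpose(1, 2)).transpose(1, 2)
```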
Preferably, in step S2, the training process of the IMUT network model includes:
s201: acquiring an IMU data set, and dividing the data set into a training set and a test set;
s202: building an IMUT network architecture;
s203: inputting the training set into an IMUT network for training to obtain an initial IMUT network model;
s204: and inputting the test set into the initial IMUT network model for testing to obtain a final IMUT network model.
Preferably, in step S201, before the data set is divided into the training set and the test set, the data set is labeled.
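A hedged sketch of the training procedure S201-S204, reusing the IMUTSketch model from above; the random stand-in data, the 80/20 split, the Adam optimizer, and the cross-entropy loss are ordinary supervised-learning assumptions not specified in the patent.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, random_split

# S201: obtain a labelled IMU data set and split it into train/test sets.
dataset = TensorDataset(torch.randn(1000, 6, 128),        # IMU windows
                        torch.randint(0, 10, (1000,)))    # behavior labels
train_set, test_set = random_split(dataset, [800, 200])

model = IMUTSketch()                       # S202: build the IMUT network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for epoch in range(10):                    # S203: train the initial model
    for x, y in DataLoader(train_set, batch_size=32, shuffle=True):
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()

model.eval()                               # S204: test the initial model
with torch.no_grad():
    correct = sum((model(x).argmax(dim=1) == y).sum().item()
                  for x, y in DataLoader(test_set, batch_size=32))
print(f"test accuracy: {correct / len(test_set):.2%}")
```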
The invention also provides a human behavior recognition system applying the above deep learning based human behavior recognition method, comprising an IMU sensor, a data transmission module, and a server-side processing program module. The IMU sensor is connected to the input end of the data transmission module, and the server-side processing program module is connected to the output end of the data transmission module. The IMU sensor is used for collecting human behavior data; the data transmission module is used for transmitting the human behavior data from the IMU sensor to the server-side processing program module; and the server-side processing program module is used for resolving the data sent by the data transmission module, storing the resolved data, and recognizing the human behavior by using the resolved data and the IMUT network model.
According to the human behavior recognition system, human behavior data can be conveniently collected by the IMU sensor and forwarded through the data transmission module to the server-side processing program module, which resolves and stores the received data and recognizes the human behavior through the IMUT network model. By using an IMU sensor, the system lowers the threshold for obtaining information, does not rely on visual information to recognize human bodies, is not easily interfered with, and reduces the resources consumed by model training and inference.
Furthermore, the server-side processing program module comprises a resolving submodule, a storage submodule, a processing submodule, and a display submodule. The data transmission module, the storage submodule, and the processing submodule are each connected to the resolving submodule, and the display submodule is connected to the processing submodule. The resolving submodule resolves the human behavior data transmitted by the data transmission module; the storage submodule stores the data resolved by the resolving submodule; the processing submodule inputs the resolved data into the trained IMUT network model for human behavior recognition; and the display submodule displays the recognition result.
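A schematic sketch of the flow through these four submodules; the JSON packet format, the in-memory store, and the print-based display are placeholders for whatever transport, database, and UI the real system uses.

```python
import json
import torch

class ServerPipeline:
    def __init__(self, model):
        self.model = model.eval()   # trained IMUT model (processing submodule)
        self.store = []             # storage submodule (in-memory stand-in)

    def resolve(self, packet: bytes) -> torch.Tensor:
        # Resolving submodule: decode one packet of IMU readings,
        # assumed here to be JSON with shape (channels, time).
        return torch.tensor(json.loads(packet.decode()), dtype=torch.float32)

    def handle(self, packet: bytes) -> int:
        data = self.resolve(packet)              # resolve
        self.store.append(data)                  # store
        with torch.no_grad():                    # process: recognize behavior
            label = self.model(data.unsqueeze(0)).argmax(dim=1).item()
        print(f"recognized behavior class: {label}")   # display
        return label
```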
Compared with the background art, the human behavior recognition method and system based on deep learning have the following beneficial effects:
by using an IMU sensor, human behavior data can be conveniently collected, the threshold for obtaining information is lowered, no visual information is needed for human recognition, and the method is not easily interfered with; the self-attention based model can better recognize human behaviors globally, and using the self-attention mechanism for IMU feature extraction and human behavior recognition reduces the resources consumed by model training and inference.
Drawings
Fig. 1 is a flowchart of a deep learning-based human behavior recognition method according to an embodiment of the present invention;
fig. 2 is a flowchart of the IMUT network model recognizing human behavior according to the first embodiment of the present invention;
fig. 3 is a schematic diagram illustrating a training process of an IMUT network model according to a second embodiment of the present invention;
fig. 4 is a schematic block diagram of a deep learning-based human behavior recognition system in a third embodiment of the present invention.
Detailed Description
The present invention will be further described with reference to the following embodiments. The drawings are for illustration only and are not to be construed as limiting this patent; it will be understood by those skilled in the art that certain well-known structures in the drawings, and their descriptions, may be omitted.
In the description of the present invention, it should be understood that terms indicating an orientation or positional relationship, such as "upper", "lower", "left", and "right", are based on the orientations shown in the drawings and are used only for convenience in describing the invention and simplifying the description; they do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation, and are therefore not to be construed as limiting this patent. The specific meaning of such terms may be understood by those skilled in the art according to the specific circumstances.
Example One
A human behavior recognition method based on deep learning is disclosed, as shown in FIG. 1, and comprises the following steps:
s1: acquiring human body behavior data by using an IMU sensor;
s2: inputting human body behavior data into a pre-trained IMUT network model;
s3: and outputting a corresponding human behavior recognition result.
According to the human behavior recognition method based on deep learning, human behavior data can be conveniently collected by the IMU sensor and transmitted to the IMUT network model, which recognizes the human behavior and finally outputs the corresponding recognition result. The method of this embodiment uses an IMU sensor, which lowers the threshold for obtaining information; it does not rely on visual information to recognize human bodies, is not easily interfered with, and reduces the resources consumed by model training and inference.
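As a usage illustration of steps S1-S3, the following hypothetical call classifies one window of collected IMU data with the IMUTSketch model defined earlier; the 128-sample window length is an assumed value, not a parameter from the patent.

```python
import torch

model = IMUTSketch(in_channels=6, d_model=64, num_classes=10)
model.eval()
window = torch.randn(1, 6, 128)      # S1: one window of collected IMU data
with torch.no_grad():
    logits = model(window)           # S2: run the (pre-trained) IMUT model
print("recognized behavior:", logits.argmax(dim=1).item())  # S3: output
```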
In step S2, the process of predicting the input human behavior data by the IMUT network model is shown in fig. 2, and includes the steps of:
s21: the feature embedding layer is utilized to enable the input human behavior data to be fused with adjacent space-time features;
s22: extracting features in the features by using the feature extraction layer;
s23: and identifying the human body behaviors by using the extracted features of the classification layer.
In step S22, the feature extraction layer is implemented by a convolution layer with convolution kernel 3, padding 1 and stride 1, with the specific formula:

$$\operatorname{out}(N_i, C_{out_j}) = \operatorname{bias}(C_{out_j}) + \sum_{k=0}^{C_{in}-1} \operatorname{weight}(C_{out_j}, k) \star \operatorname{input}(N_i, k)$$

wherein $C_{in}$ represents the number of input feature channels; $C_{out_j}$ represents the number of output feature channels; $N_i$ represents the amount of data in each input batch; $k$ represents the size of the convolution kernel; $\operatorname{input}(N_i, k)$ denotes the input tensor; $\operatorname{bias}(C_{out_j})$ denotes a bias consistent with the output feature size; and $\operatorname{out}(N_i, C_{out_j})$ denotes the final output tensor.
In step S22, the feature extraction layer is composed of 6 feature extraction blocks, and each feature extraction block has a self-attention layer and a fully connected layer. Each feature extraction block also comprises two batch normalization layers, located after the self-attention layer and after the fully connected layer respectively, so as to optimize the feature extraction layer. Unlike most existing methods, which only use a convolutional neural network to recognize human behaviors, this embodiment is based on a self-attention model, which can better recognize human behaviors globally; using the self-attention mechanism for IMU feature extraction and human behavior recognition reduces interference and resource consumption.
The method for forming the feature extraction layer comprises the following steps:

$$Q = W_Q X$$

$$K = W_K X$$

$$V = W_V X$$

$$\operatorname{Attention}(Q, K, V) = \operatorname{softmax}\!\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V$$

$$\operatorname{softmax}(z_i) = \frac{e^{z_i}}{\sum_{c=1}^{C} e^{z_c}}$$

$$z = Wa + b$$

$$\operatorname{ReLU}(x) = \max(0, x)$$

wherein $Q$ represents the query matrix; $K$ the key matrix; $V$ the value matrix; $W_Q$ the query weight matrix; $W_K$ the key weight matrix; $W_V$ the value weight matrix; $X$ the input value; $d_k$ the dimension of the key matrix; $z_i$ the output value of the $i$-th node; $C$ the number of output nodes; $c$ indexes the output nodes; $z$ the output of the fully connected layer; $W$ the weight of the fully connected layer; $a$ the input of the fully connected layer; $b$ the bias of the fully connected layer; $\operatorname{ReLU}$ the activation function; $\operatorname{Attention}(Q, K, V)$ the self-attention computation over the features; and $\operatorname{softmax}$ the probability of the output result.
Example Two
This embodiment is similar to Example One, except that, in step S2, the training process of the IMUT network model, shown in fig. 3, includes the steps of:
s201: obtaining an IMU data set, and dividing the data set into a training set and a testing set;
s202: building an IMUT network architecture;
s203: inputting the training set into an IMUT network for training to obtain an initial IMUT network model;
s204: and inputting the test set into the initial IMUT network model for testing to obtain a final IMUT network model.
In step S201, before the data set is divided into the training set and the test set, the data set is labeled.
Example Three
This embodiment is a human behavior recognition system applying the method of Example One or Example Two, as shown in fig. 4, comprising an IMU sensor, a data transmission module, and a server-side processing program module. The IMU sensor is connected to the input end of the data transmission module, and the server-side processing program module is connected to the output end of the data transmission module. The IMU sensor is used for collecting human behavior data; the data transmission module is used for transmitting the human behavior data from the IMU sensor to the server-side processing program module; and the server-side processing program module is used for resolving the data sent by the data transmission module, storing the resolved data, and recognizing the human behavior by using the resolved data and the IMUT network model.
According to the human behavior recognition system, human behavior data can be conveniently collected by the IMU sensor and forwarded through the data transmission module to the server-side processing program module, which resolves and stores the received data and recognizes the human behavior through the IMUT network model. The system of this embodiment uses an IMU sensor, which lowers the threshold for obtaining information; it does not rely on visual information to recognize human bodies, is not easily interfered with, and reduces the resources consumed by model training and inference.
As shown in fig. 4, the server-side processing program module comprises a resolving submodule, a storage submodule, a processing submodule, and a display submodule. The data transmission module, the storage submodule, and the processing submodule are each connected to the resolving submodule, and the display submodule is connected to the processing submodule. The resolving submodule resolves the human behavior data transmitted by the data transmission module; the storage submodule stores the data resolved by the resolving submodule; the processing submodule inputs the resolved data into the trained IMUT network model for human behavior recognition; and the display submodule displays the recognition result. Specifically, the IMUT network model comprises a feature embedding layer, a feature extraction layer, and a classification layer. The feature extraction layer is composed of 6 feature extraction blocks, each comprising a self-attention layer, a fully connected layer, and two batch normalization layers located after the self-attention layer and after the fully connected layer respectively. The feature embedding layer fuses the input human behavior data with adjacent space-time features, the feature extraction layer extracts features from the embedded features, and the classification layer identifies human behaviors from the extracted features.
In the detailed description of the embodiments, various technical features may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
It should be understood that the above-described embodiments are merely examples for clearly illustrating the present invention and are not intended to limit its implementation. Other variations and modifications will be apparent to those skilled in the art in light of the above description; it is neither necessary nor possible to list all embodiments exhaustively. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the claims of the present invention.

Claims (10)

1. A human behavior recognition method based on deep learning is characterized by comprising the following steps:
s1: collecting human body behavior data by using an IMU sensor;
s2: inputting the human body behavior data into a pre-trained IMUT network model;
s3: and outputting a corresponding human behavior recognition result.
2. The deep learning based human behavior recognition method according to claim 1, wherein in step S2, the process of predicting the input human behavior data by the IMUT network model comprises:
s21: the feature embedding layer is utilized to enable the input human behavior data to fuse adjacent space-time features;
s22: extracting features in the features by using the feature extraction layer;
s23: and identifying the human body behaviors by utilizing the classification layer through the extracted features.
3. The method for recognizing human behavior based on deep learning according to claim 2, wherein in step S22, the feature extraction layer is implemented by a convolution layer with convolution kernel 3, padding 1 and stride 1, with the specific formula:

$$\operatorname{out}(N_i, C_{out_j}) = \operatorname{bias}(C_{out_j}) + \sum_{k=0}^{C_{in}-1} \operatorname{weight}(C_{out_j}, k) \star \operatorname{input}(N_i, k)$$

wherein $C_{in}$ represents the number of input feature channels; $C_{out_j}$ represents the number of output feature channels; $N_i$ represents the amount of data in each input batch; $k$ represents the size of the convolution kernel; $\operatorname{input}(N_i, k)$ denotes the input tensor; $\operatorname{bias}(C_{out_j})$ denotes a bias consistent with the output feature size; and $\operatorname{out}(N_i, C_{out_j})$ denotes the final output tensor.
4. The deep learning based human behavior recognition method according to claim 3, wherein in step S22, the feature extraction layer is composed of 6 feature extraction blocks, and each feature extraction block has a self-attention layer and a fully connected layer.
5. The deep learning based human behavior recognition method according to claim 4, wherein each feature extraction block further comprises two batch normalization layers, located after the self-attention layer and after the fully connected layer respectively.
6. The human behavior recognition method based on deep learning according to claim 5, wherein the feature extraction layer is specifically formed by:

$$Q = W_Q X$$

$$K = W_K X$$

$$V = W_V X$$

$$\operatorname{Attention}(Q, K, V) = \operatorname{softmax}\!\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V$$

$$\operatorname{softmax}(z_i) = \frac{e^{z_i}}{\sum_{c=1}^{C} e^{z_c}}$$

$$z = Wa + b$$

$$\operatorname{ReLU}(x) = \max(0, x)$$

wherein $Q$ represents the query matrix; $K$ the key matrix; $V$ the value matrix; $W_Q$ the query weight matrix; $W_K$ the key weight matrix; $W_V$ the value weight matrix; $X$ the input value; $d_k$ the dimension of the key matrix; $z_i$ the output value of the $i$-th node; $C$ the number of output nodes; $c$ indexes the output nodes; $z$ the output of the fully connected layer; $W$ the weight of the fully connected layer; $a$ the input of the fully connected layer; $b$ the bias of the fully connected layer; $\operatorname{ReLU}$ the activation function; $\operatorname{Attention}(Q, K, V)$ the self-attention computation over the features; and $\operatorname{softmax}$ the probability of the output result.
7. The deep learning based human behavior recognition method according to claim 1, wherein in step S2, the training process of the IMUT network model comprises:
s201: acquiring an IMU data set, and dividing the data set into a training set and a test set;
s202: building an IMUT network architecture;
s203: inputting the training set into an IMUT network for training to obtain an initial IMUT network model;
s204: and inputting the test set into the initial IMUT network model for testing to obtain a final IMUT network model.
8. The method for recognizing human body behaviors based on deep learning of claim 7, wherein in step S201, the data set is labeled before being divided into a training set and a testing set.
9. A human behavior recognition system applying the deep learning based human behavior recognition method according to any one of claims 1 to 8, comprising: an IMU sensor, a data transmission module and a server-side processing program module, wherein the IMU sensor is connected with the input end of the data transmission module, and the server-side processing program module is connected with the output end of the data transmission module; the IMU sensor is used for collecting human behavior data; the data transmission module is used for transmitting the human behavior data input from the IMU sensor to the server-side processing program module; and the server-side processing program module is used for resolving the data sent by the data transmission module, storing the resolved data, and recognizing the human behavior by using the resolved data and the IMUT network model.
10. The deep learning based human behavior recognition system according to claim 9, wherein the server-side processing program module comprises a resolving submodule, a storage submodule, a processing submodule and a display submodule; the data transmission module, the storage submodule and the processing submodule are respectively connected with the resolving submodule, and the display submodule is connected with the processing submodule; the resolving submodule is used for resolving the human behavior data transmitted by the data transmission module, the storage submodule is used for storing the data resolved by the resolving submodule, the processing submodule is used for inputting the resolved data into the trained IMUT network model for human behavior recognition, and the display submodule is used for displaying the human behavior recognition result.
CN202211348196.4A, filed 2022-10-31 (priority date 2022-10-31): Human behavior recognition method and system based on deep learning; published as CN115497171A; status: Pending

Priority Applications (1)

Application Number: CN202211348196.4A
Priority Date: 2022-10-31
Filing Date: 2022-10-31
Title: Human behavior recognition method and system based on deep learning

Applications Claiming Priority (1)

Application Number: CN202211348196.4A
Priority Date: 2022-10-31
Filing Date: 2022-10-31
Title: Human behavior recognition method and system based on deep learning

Publications (1)

Publication Number: CN115497171A
Publication Date: 2022-12-20

Family

ID=85114929

Family Applications (1)

Application Number: CN202211348196.4A
Priority Date: 2022-10-31
Filing Date: 2022-10-31
Title: Human behavior recognition method and system based on deep learning
Status: Pending

Country Status (1)

Country Link
CN (1): CN115497171A

Similar Documents

Publication Publication Date Title
CN112766244B (en) Target object detection method and device, computer equipment and storage medium
CN108734210B (en) Object detection method based on cross-modal multi-scale feature fusion
CN112131985A (en) Real-time light human body posture estimation method based on OpenPose improvement
CN115223020A (en) Image processing method, image processing device, electronic equipment and readable storage medium
CN113569627B (en) Human body posture prediction model training method, human body posture prediction method and device
CN117152459B (en) Image detection method, device, computer readable medium and electronic equipment
CN116935188A (en) Model training method, image recognition method, device, equipment and medium
CN113870160A (en) Point cloud data processing method based on converter neural network
CN110348395B (en) Skeleton behavior identification method based on space-time relationship
CN112101154B (en) Video classification method, apparatus, computer device and storage medium
CN115497171A (en) Human behavior recognition method and system based on deep learning
CN113628107B (en) Face image super-resolution method and system
CN117392488A (en) Data processing method, neural network and related equipment
CN117011219A (en) Method, apparatus, device, storage medium and program product for detecting quality of article
CN112183669B (en) Image classification method, device, equipment and storage medium
CN116958624A (en) Method, device, equipment, medium and program product for identifying appointed material
CN117351382A (en) Video object positioning method and device, storage medium and program product thereof
CN115018215A (en) Population residence prediction method, system and medium based on multi-modal cognitive map
CN114329065A (en) Processing method of video label prediction model, video label prediction method and device
CN114663910A (en) Multi-mode learning state analysis system
CN112883868A (en) Training method of weak surveillance video motion positioning model based on relational modeling
CN118233222B (en) Industrial control network intrusion detection method and device based on knowledge distillation
WO2024174583A1 (en) Model training method and apparatus, and device, storage medium and product
CN117115903A (en) Action time sequence positioning method based on relation sensing
CN117612066A (en) Robot action recognition method based on multi-mode information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination