CN111401270A - Human motion posture recognition and evaluation method and system - Google Patents

Human motion posture recognition and evaluation method and system

Info

Publication number
CN111401270A
Authority
CN
China
Prior art keywords
neural network
network model
data
video image
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010196951.6A
Other languages
Chinese (zh)
Inventor
庄文芹
谢世朋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Weiai Information Technology Co ltd
Original Assignee
Nanjing Weiai Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Weiai Information Technology Co ltd filed Critical Nanjing Weiai Information Technology Co ltd
Priority to CN202010196951.6A priority Critical patent/CN111401270A/en
Publication of CN111401270A publication Critical patent/CN111401270A/en
Priority to PCT/CN2020/103074 priority patent/WO2021184619A1/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/044 - Recurrent networks, e.g. Hopfield networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a human motion posture recognition and evaluation method comprising the following steps: S01, collecting a video image test data set and performing data processing on the data in the set; S02, inputting the processed test data into a trained LSTM neural network model to recognize the human motion posture, and outputting a recognition result; S03, comparing the output recognition result with standard motion data, and evaluating from the comparison how standard the recognized human motion posture is. The invention further discloses a human motion posture recognition and evaluation system comprising an image acquisition module, a data processing module, an LSTM neural network model, a model training module, a data center and a posture evaluation module.

Description

Human motion posture recognition and evaluation method and system
Technical Field
The invention relates to the technical field of gesture recognition, and in particular to a human motion posture recognition and evaluation method and system.
Background
At present, with the rapid development of human-computer interaction technology, human posture recognition is receiving more and more attention. Gesture recognition, an important component of human behavior recognition, has recently become a major research hotspot in the field of computer vision.
Existing gesture recognition methods fall into two main categories: human posture recognition based on motion sensors, and human posture recognition based on image analysis. Sensor-based recognition has the subject wear sensors, commonly accelerometers, magnetoresistive sensors and gyroscopes, to collect motion data, and then applies a learning method to the collected motion information to recognize the posture. Because the recognition result depends heavily on the feature extraction scheme, that is, on the choice of sensors and classifier, this approach is not accurate enough. Image-based methods extract images of the subject as the features for research and analysis; most prior image-based methods combine contour features of the image, such as aspect ratio, shape-complexity change and eccentricity, with K-means or SVM classifiers to judge the posture category. However, such traditional methods struggle to achieve good classification on large numbers of complex, similar samples.
In addition, in the prior art, whether an action completed by a tested person is standard is judged essentially by hand during sports tests or sports exercise, so no objective and accurate evaluation can be made.
Therefore, providing a fast and accurate human motion posture recognition method, together with a method for accurately evaluating how standard the motion posture is, is a problem urgently needing to be solved by those skilled in the art.
Disclosure of Invention
In view of the above, the invention provides a human motion posture recognition and evaluation method and system, which effectively solve the problems that prior-art human motion posture recognition is not accurate enough and is slow, and which further provide an evaluation method for the recognized motion posture, solving the prior-art inability to evaluate human motion postures objectively.
In order to achieve the purpose, the invention adopts the following technical scheme:
a human motion posture identification and evaluation method comprises the following steps:
s01: collecting a video image test data set, and carrying out data processing on data in the video image test data set;
s02, inputting the test data after data processing into a trained L STM neural network model to recognize the motion posture of the human body, and outputting a recognition result;
s03: comparing the output recognition result with standard motion data, and evaluating the standard degree of the recognized human motion posture according to the comparison result;
wherein the training process of the LSTM neural network model comprises the following steps:
s11: acquiring a video image sample data set;
s12, inputting sample data in the video image sample data set into a L STM neural network model, introducing constraint on the weight of the connection of the joint points and the neurons into an objective function of the neural network model, classifying the data of different frames and different joint points according to the weight, and finishing learning of distributing the importance of the different frames and the different joint points based on the content type;
s13: and performing back propagation on the obtained classification result to realize the updating of the weight, and performing the operation in S12 in a loop mode.
Preferably, the data processing contents in S01 include: performing time domain segmentation and content type judgment on the collected video image sample data set; and preprocessing the segmented video sequence to obtain RGB images and optical flow of video frames.
Preferably, the specific contents of S02 include:
(1) extracting temporal-stream and spatial-stream features, extracting spatio-temporal information to form feature vectors of fixed length, extracting depth features of the video frames, and fusing all extracted features using a spatio-temporal feature fusion strategy;
(2) according to the sequence content type, performing spatial-domain and time-domain attention calculations on the fused feature vectors to obtain spatial-domain and time-domain features respectively;
(3) fusing the features obtained in steps (1) and (2) to obtain a classification result and complete the human action recognition.
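The attention and fusion steps above can be sketched as follows. The feature shapes and the simple linear scoring functions are assumptions for illustration; the patent does not specify them.

```python
import numpy as np

def softmax(x, axis):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(features, w_spatial, w_temporal):
    """features: (T, J, D) fused spatio-temporal vectors for T frames
    and J joint points. Spatial attention weights joints within each
    frame, temporal attention weights frames, and the weighted sums
    yield the final vector fed to the classifier."""
    alpha = softmax(features @ w_spatial, axis=1)       # (T, J) joint importance
    frame_feat = (alpha[..., None] * features).sum(1)   # (T, D) per-frame feature
    beta = softmax(frame_feat @ w_temporal, axis=0)     # (T,)  frame importance
    return (beta[:, None] * frame_feat).sum(0)          # (D,)  fused feature
```

In the full model, `w_spatial` and `w_temporal` would themselves be outputs of the content-type-conditioned attention sub-networks rather than fixed vectors.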
A human motion posture recognition and evaluation system comprises an image acquisition module, a data processing module, an LSTM neural network model, a model training module, a data center and a posture evaluation module;
the image acquisition module is used for acquiring a video image test data set;
the data processing module is used for carrying out data processing on the collected video image test data set;
the L STM neural network model is used for inputting the test data after data processing into the trained L STM neural network model for recognizing the motion posture of the human body and outputting a recognition result;
the model training module is used for training the L STM neural network model;
the data center is used for storing standard motion data;
and the posture evaluation module is used for calling standard motion data in the data center and comparing the output recognition result with the standard motion data to obtain an evaluation result of the standard degree of the recognized human motion posture.
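The cooperation of the six modules can be sketched as a simple pipeline. All function and variable names below are illustrative stand-ins, not from the patent.

```python
def run_pipeline(acquire, process, recognize, evaluate, data_center):
    """End-to-end flow through the modules described above:
    acquisition -> processing -> LSTM recognition -> evaluation
    against standard motion data stored in the data center."""
    raw = acquire()                            # image acquisition module
    processed = process(raw)                   # data processing module
    result = recognize(processed)              # trained LSTM model output
    standard = data_center[result["action"]]   # look up standard motion data
    return evaluate(result, standard)          # posture evaluation module
```

Each argument is a callable (or mapping) standing in for one module, so the pipeline itself stays agnostic to the concrete implementations.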
Preferably, the data processing module is specifically configured to perform time domain segmentation and content type determination on the acquired video image sample data set; and preprocessing the segmented video sequence to obtain RGB images and optical flow of video frames.
Preferably, the LSTM neural network model comprises an LSTM main network, a spatial attention sub-network, a temporal attention sub-network and a feature fusion module;
The LSTM main network is used for extracting temporal-stream and spatial-stream features, extracting spatio-temporal information to form feature vectors of fixed length, extracting depth features of the video frames, and fusing all extracted features using a spatio-temporal feature fusion strategy;
the spatial domain attention subnetwork is used for automatically learning and distributing the importance of different joint points according to different content types, and performing spatial domain attention calculation in the identification process to obtain spatial domain characteristics;
the time domain attention subnetwork is used for automatically learning and distributing the importance of different frames aiming at different content types, and performing time domain attention calculation in the identification process to obtain time domain characteristics;
and the characteristic fusion module is used for controlling the characteristics to be fused to obtain a final classification result.
Preferably, the model training module is specifically configured to acquire a video image sample data set, input the sample data into the LSTM neural network model, introduce a constraint on the weights of the connections between joint points and neurons into the objective function of the neural network model, and thereby control the spatial attention sub-network and the temporal attention sub-network to complete the learning that assigns importance to different joint points and different frames.
According to the above technical scheme, compared with the prior art, the LSTM model, as a recurrent neural network, can capture long-term spatio-temporal dependencies by storing time-sequence information, and can effectively avoid the vanishing-gradient problem.
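The property claimed above comes from the LSTM's gated, additive cell-state update. A minimal single-step cell, with dimensions and naming assumed purely for illustration, looks like:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step. The cell state c is updated additively
    through input/forget gates, which preserves long-range temporal
    information and mitigates the vanishing gradients of plain RNNs.
    W: (4n, d) input weights, U: (4n, n) recurrent weights, b: (4n,)."""
    n = h.shape[0]
    z = W @ x + U @ h + b          # stacked pre-activations for the 4 gates
    i = sigmoid(z[:n])             # input gate
    f = sigmoid(z[n:2 * n])        # forget gate
    o = sigmoid(z[2 * n:3 * n])    # output gate
    g = np.tanh(z[3 * n:])         # candidate memory content
    c_new = f * c + i * g          # additive memory update
    h_new = o * np.tanh(c_new)     # hidden state / output
    return h_new, c_new
```

Iterating `lstm_step` over the frames of a segmented video clip yields the time-sequence representation used by the recognition model.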
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of a human motion posture identification and evaluation method provided by the invention.
Fig. 2 is a schematic structural diagram of a human motion posture recognition and evaluation system provided by the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention discloses a human motion posture identification and evaluation method, which comprises the following steps as shown in figure 1:
s01: collecting a video image test data set, and carrying out data processing on data in the video image test data set;
s02, inputting the test data after data processing into a trained L STM neural network model to recognize the motion posture of the human body, and outputting a recognition result;
s03: comparing the output recognition result with standard motion data, and evaluating the standard degree of the recognized human motion posture according to the comparison result;
wherein the training process of the LSTM neural network model comprises the following steps:
s11: acquiring a video image sample data set;
s12, inputting sample data in a video image sample data set into a L STM neural network model, introducing constraint on the weight of the connection of the joint points and the neurons into an objective function of the neural network model, classifying the data of different frames and different joint points according to the weight, and finishing learning of distributing the importance of the different frames and the different joint points based on the content type;
s13: and performing back propagation on the obtained classification result to realize the updating of the weight, and performing the operation in S12 in a loop mode.
In order to further implement the above technical solution, the data processing contents in S01 include: performing time domain segmentation and content type judgment on the collected video image sample data set; and preprocessing the segmented video sequence to obtain RGB images and optical flow of video frames.
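The time-domain segmentation step can be sketched as splitting the incoming frame sequence into fixed-length, possibly overlapping clips before each clip is preprocessed into RGB frames and optical flow. The clip length and stride below are illustrative values, not taken from the patent.

```python
import numpy as np

def segment_frames(frames, clip_len=16, stride=8):
    """frames: array of shape (T, H, W, C). Returns clips of shape
    (N, clip_len, H, W, C); each clip would then be preprocessed into
    RGB frames and optical flow for the two-stream features."""
    clips = [frames[s:s + clip_len]
             for s in range(0, len(frames) - clip_len + 1, stride)]
    if not clips:
        return np.empty((0, clip_len) + frames.shape[1:], dtype=frames.dtype)
    return np.stack(clips)
```

A dense optical-flow estimator (for example OpenCV's Farneback method) would then be run on consecutive frames within each clip to produce the temporal-stream input.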
In order to further implement the above technical solution, the specific content of S02 includes:
(1) extracting temporal-stream and spatial-stream features, extracting spatio-temporal information to form feature vectors of fixed length, extracting depth features of the video frames, and fusing all extracted features using a spatio-temporal feature fusion strategy;
(2) according to the sequence content type, performing spatial-domain and time-domain attention calculations on the fused feature vectors to obtain spatial-domain and time-domain features respectively;
(3) fusing the features obtained in steps (1) and (2) to obtain a classification result and complete the human action recognition.
A human motion posture recognition and evaluation system, as shown in figure 2, comprises an image acquisition module, a data processing module, an LSTM neural network model, a model training module, a data center and a posture evaluation module;
the image acquisition module is used for acquiring a video image test data set;
the data processing module is used for carrying out data processing on the collected video image test data set;
the trained LSTM neural network model is used for recognizing the human motion posture from the test data after data processing and outputting a recognition result;
the model training module is used for training the LSTM neural network model;
the data center is used for storing standard motion data;
and the posture evaluation module is used for calling standard motion data in the data center and comparing the output recognition result with the standard motion data to obtain an evaluation result of the standard degree of the recognized human motion posture.
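The comparison performed by the posture evaluation module can be sketched as a per-joint deviation between the recognized pose sequence and the stored standard sequence, mapped to a 0-100 standardness score. Both the deviation metric and the score mapping are assumptions for illustration; the patent does not fix them.

```python
import numpy as np

def standardness_score(recognized, standard):
    """recognized, standard: joint-coordinate sequences of shape
    (T, J, 2). Mean per-joint Euclidean deviation is mapped through a
    decaying exponential so that identical sequences score 100 and
    larger deviations score progressively lower."""
    assert recognized.shape == standard.shape
    deviation = np.linalg.norm(recognized - standard, axis=-1).mean()
    return 100.0 * float(np.exp(-deviation))
```

In practice the two sequences would first be temporally aligned and normalized for body scale before scoring.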
In order to further implement the above technical solution, the data processing module is specifically configured to perform time domain segmentation and content type determination on the collected video image sample dataset; and preprocessing the segmented video sequence to obtain RGB images and optical flow of video frames.
In order to further realize the above technical solution, the LSTM neural network model comprises an LSTM main network, a spatial attention sub-network, a temporal attention sub-network and a feature fusion module;
the LSTM main network is used for extracting temporal-stream and spatial-stream features, extracting spatio-temporal information to form feature vectors of fixed length, extracting depth features of the video frames, and fusing all extracted features using a spatio-temporal feature fusion strategy;
the spatial domain attention subnetwork is used for automatically learning and distributing the importance of different joint points according to different content types, and performing spatial domain attention calculation in the identification process to obtain spatial domain characteristics;
the time domain attention subnetwork is used for automatically learning and distributing the importance of different frames aiming at different content types, and carrying out time domain attention calculation in the identification process to obtain time domain characteristics;
and the characteristic fusion module is used for controlling the characteristics to be fused to obtain a final classification result.
In order to further implement the above technical solution, the model training module is specifically configured to obtain a video image sample data set, input the sample data in the set into the LSTM neural network model, introduce a constraint on the weights of the connections between joint points and neurons into the objective function of the neural network model, and thereby control the spatial attention sub-network and the temporal attention sub-network to complete the learning that assigns importance to different joint points and different frames.
It should be noted that the system is realized by the method described above.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (7)

1. A human motion posture recognition and evaluation method is characterized by comprising the following steps:
s01: collecting a video image test data set, and carrying out data processing on data in the video image test data set;
s02, inputting the test data after data processing into a trained L STM neural network model to recognize the motion posture of the human body, and outputting a recognition result;
s03: comparing the output recognition result with standard motion data, and evaluating the standard degree of the recognized human motion posture according to the comparison result;
wherein the training process of the LSTM neural network model comprises the following steps:
s11: acquiring a video image sample data set;
s12, inputting sample data in the video image sample data set into a L STM neural network model, introducing constraint on the weight of the connection of the joint points and the neurons into an objective function of the neural network model, classifying the data of different frames and different joint points according to the weight, and finishing learning of distributing the importance of the different frames and the different joint points based on the content type;
s13: and performing back propagation on the obtained classification result to realize the updating of the weight, and performing the operation in S12 in a loop mode.
2. The human motion gesture recognition and evaluation method according to claim 1, wherein the data processing content in S01 comprises: performing time domain segmentation and content type judgment on the collected video image sample data set; and preprocessing the segmented video sequence to obtain RGB images and optical flow of video frames.
3. The human motion gesture recognition and evaluation method according to claim 2, wherein the specific content of S02 includes:
(1) extracting time stream characteristics and space stream characteristics, extracting space-time information to form characteristic vectors with fixed lengths, extracting depth characteristics of video frames, and fusing all extracted characteristics by using a space-time characteristic fusion strategy;
(2) according to the sequence content type, performing spatial domain attention calculation and time domain attention calculation on the fused feature vector to respectively obtain spatial domain features and time domain features;
(3) fusing the features obtained in step (1) and step (2) to obtain a classification result, and finishing the human body action recognition.
4. A human motion posture recognition and evaluation system is characterized by comprising an image acquisition module, a data processing module, an LSTM neural network model, a model training module, a data center and a posture evaluation module;
the image acquisition module is used for acquiring a video image test data set;
the data processing module is used for carrying out data processing on the collected video image test data set;
the L STM neural network model is used for inputting the test data after data processing into the trained L STM neural network model for recognizing the motion posture of the human body and outputting a recognition result;
the model training module is used for training the L STM neural network model;
the data center is used for storing standard motion data;
and the posture evaluation module is used for calling standard motion data in the data center and comparing the output recognition result with the standard motion data to obtain an evaluation result of the standard degree of the recognized human motion posture.
5. The system according to claim 4, wherein the data processing module is specifically configured to perform time domain segmentation and content type determination on the collected video image sample dataset; and preprocessing the segmented video sequence to obtain RGB images and optical flow of video frames.
6. The human motion gesture recognition and evaluation system of claim 4, wherein the LSTM neural network model comprises an LSTM main network, a spatial attention sub-network, a temporal attention sub-network and a feature fusion module;
The LSTM main network is used for extracting time stream characteristics and space stream characteristics, extracting space-time information to form characteristic vectors with fixed lengths, extracting depth characteristics of video frames, and fusing all extracted characteristics by using a space-time characteristic fusion strategy;
the spatial domain attention subnetwork is used for automatically learning and distributing the importance of different joint points according to different content types, and performing spatial domain attention calculation in the identification process to obtain spatial domain characteristics;
the time domain attention subnetwork is used for automatically learning and distributing the importance of different frames aiming at different content types, and performing time domain attention calculation in the identification process to obtain time domain characteristics;
and the characteristic fusion module is used for controlling the characteristics to be fused to obtain a final classification result.
7. The system according to claim 6, wherein the model training module is specifically configured to obtain a video image sample data set, input sample data in the video image sample data set into the LSTM neural network model, introduce constraints on weights of connection between a joint point and a neuron in an objective function of the neural network model, and further control the spatial attention subnetwork and the temporal attention subnetwork to complete learning of importance assignment to different joint points and different frames.
CN202010196951.6A 2020-03-19 2020-03-19 Human motion posture recognition and evaluation method and system Pending CN111401270A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010196951.6A CN111401270A (en) 2020-03-19 2020-03-19 Human motion posture recognition and evaluation method and system
PCT/CN2020/103074 WO2021184619A1 (en) 2020-03-19 2020-07-20 Human body motion attitude identification and evaluation method and system therefor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010196951.6A CN111401270A (en) 2020-03-19 2020-03-19 Human motion posture recognition and evaluation method and system

Publications (1)

Publication Number Publication Date
CN111401270A true CN111401270A (en) 2020-07-10

Family

ID=71432707

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010196951.6A Pending CN111401270A (en) 2020-03-19 2020-03-19 Human motion posture recognition and evaluation method and system

Country Status (2)

Country Link
CN (1) CN111401270A (en)
WO (1) WO2021184619A1 (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111738218A (en) * 2020-07-27 2020-10-02 成都睿沿科技有限公司 Human body abnormal behavior recognition system and method
CN112434608A (en) * 2020-11-24 2021-03-02 山东大学 Human behavior identification method and system based on double-current combined network
CN112686111A (en) * 2020-12-23 2021-04-20 中国矿业大学(北京) Attention mechanism-based multi-view adaptive network traffic police gesture recognition method
CN112843647A (en) * 2021-01-09 2021-05-28 吉首大学 Stretching training control system and method for cheering exercises
CN113239897A (en) * 2021-06-16 2021-08-10 石家庄铁道大学 Human body action evaluation method based on space-time feature combination regression
CN113255554A (en) * 2021-06-04 2021-08-13 福州大学 Shooting training instantaneous percussion action recognition and standard auxiliary evaluation method
CN113408349A (en) * 2021-05-17 2021-09-17 浙江大华技术股份有限公司 Training method of motion evaluation model, motion evaluation method and related equipment
WO2021184619A1 (en) * 2020-03-19 2021-09-23 南京未艾信息科技有限公司 Human body motion attitude identification and evaluation method and system therefor
CN114067436A (en) * 2021-11-17 2022-02-18 山东大学 Fall detection method and system based on wearable sensor and video monitoring
CN114119753A (en) * 2021-12-08 2022-03-01 北湾科技(武汉)有限公司 Transparent object 6D attitude estimation method facing mechanical arm grabbing
WO2023138154A1 (en) * 2022-01-24 2023-07-27 上海商汤智能科技有限公司 Object recognition method, network training method and apparatus, device, medium, and program
WO2023197938A1 (en) * 2022-04-14 2023-10-19 华为技术有限公司 Dynamic scene processing method and apparatus, and neural network model training method and apparatus
CN117671784A (en) * 2023-12-04 2024-03-08 北京中航智信建设工程有限公司 Human behavior analysis method and system based on video analysis

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113902995B (en) * 2021-11-10 2024-04-02 中国科学技术大学 Multi-mode human behavior recognition method and related equipment
CN114310954B (en) * 2021-12-31 2024-04-16 北京理工大学 Self-adaptive lifting control method and system for nursing robot
CN114913594A (en) * 2022-03-28 2022-08-16 北京理工大学 FMS action classification method and system based on human body joint points
CN114863556A (en) * 2022-04-13 2022-08-05 上海大学 Multi-neural-network fusion continuous action recognition method based on skeleton posture
CN114943929A (en) * 2022-04-20 2022-08-26 中国农业大学 Real-time detection method for abnormal behaviors of fishes based on image fusion technology
CN114926761B (en) * 2022-05-13 2023-09-05 浪潮卓数大数据产业发展有限公司 Action recognition method based on space-time smoothing characteristic network
CN115068919B (en) * 2022-05-17 2023-11-14 泰山体育产业集团有限公司 Examination method of horizontal bar project and implementation device thereof
CN115019233B (en) * 2022-06-15 2024-05-03 武汉理工大学 Mental retardation judging method based on gesture detection
CN116458852B (en) * 2023-06-16 2023-09-01 山东协和学院 Rehabilitation training system and method based on cloud platform and lower limb rehabilitation robot
CN117423166B (en) * 2023-12-14 2024-03-26 广州华夏汇海科技有限公司 Motion recognition method and system according to human body posture image data

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108764050A (en) * 2018-04-28 2018-11-06 中国科学院自动化研究所 Skeleton Activity recognition method, system and equipment based on angle independence
CN108875708A (en) * 2018-07-18 2018-11-23 广东工业大学 Behavior analysis method, device, equipment, system and storage medium based on video
WO2019152194A1 (en) * 2018-01-31 2019-08-08 Microsoft Technology Licensing, Llc Artificial intelligence system utilizing microphone array and fisheye camera
CN110222665A (en) * 2019-06-14 2019-09-10 电子科技大学 Human motion recognition method in a kind of monitoring based on deep learning and Attitude estimation
CN110472554A (en) * 2019-08-12 2019-11-19 南京邮电大学 Table tennis action identification method and system based on posture segmentation and crucial point feature
CN110826453A (en) * 2019-10-30 2020-02-21 西安工程大学 Behavior identification method by extracting coordinates of human body joint points

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US10255910B2 (en) * 2016-09-16 2019-04-09 Apptek, Inc. Centered, left- and right-shifted deep neural networks and their combinations
CN107330362B (en) * 2017-05-25 2020-10-09 北京大学 Video classification method based on space-time attention
CN108846332B (en) * 2018-05-30 2022-04-29 西南交通大学 CLSTA-based railway driver behavior identification method
CN110197235B (en) * 2019-06-28 2021-03-30 浙江大学城市学院 Human body activity recognition method based on unique attention mechanism
CN111401270A (en) * 2020-03-19 2020-07-10 南京未艾信息科技有限公司 Human motion posture recognition and evaluation method and system


Cited By (18)

Publication number Priority date Publication date Assignee Title
WO2021184619A1 (en) * 2020-03-19 2021-09-23 南京未艾信息科技有限公司 Human body motion attitude identification and evaluation method and system therefor
CN111738218A (en) * 2020-07-27 2020-10-02 成都睿沿科技有限公司 Human body abnormal behavior recognition system and method
CN112434608A (en) * 2020-11-24 2021-03-02 山东大学 Human behavior identification method and system based on double-current combined network
CN112434608B (en) * 2020-11-24 2023-02-28 山东大学 Human behavior identification method and system based on double-current combined network
CN112686111A (en) * 2020-12-23 2021-04-20 中国矿业大学(北京) Attention mechanism-based multi-view adaptive network traffic police gesture recognition method
CN112843647A (en) * 2021-01-09 2021-05-28 吉首大学 Stretching training control system and method for cheering exercises
CN113408349A (en) * 2021-05-17 2021-09-17 浙江大华技术股份有限公司 Training method of motion evaluation model, motion evaluation method and related equipment
WO2022242104A1 (en) * 2021-05-17 2022-11-24 Zhejiang Dahua Technology Co., Ltd. Training method for action evaluation model, action evaluation method, and electronic device
CN113255554B (en) * 2021-06-04 2022-05-27 福州大学 Shooting training instantaneous percussion action recognition and standard auxiliary evaluation method
CN113255554A (en) * 2021-06-04 2021-08-13 福州大学 Shooting training instantaneous percussion action recognition and standard auxiliary evaluation method
CN113239897A (en) * 2021-06-16 2021-08-10 石家庄铁道大学 Human body action evaluation method based on space-time feature combination regression
CN113239897B (en) * 2021-06-16 2023-08-18 石家庄铁道大学 Human body action evaluation method based on space-time characteristic combination regression
CN114067436A (en) * 2021-11-17 2022-02-18 山东大学 Fall detection method and system based on wearable sensor and video monitoring
CN114067436B (en) * 2021-11-17 2024-03-05 山东大学 Fall detection method and system based on wearable sensor and video monitoring
CN114119753A (en) * 2021-12-08 2022-03-01 北湾科技(武汉)有限公司 Transparent object 6D attitude estimation method facing mechanical arm grabbing
WO2023138154A1 (en) * 2022-01-24 2023-07-27 上海商汤智能科技有限公司 Object recognition method, network training method and apparatus, device, medium, and program
WO2023197938A1 (en) * 2022-04-14 2023-10-19 华为技术有限公司 Dynamic scene processing method and apparatus, and neural network model training method and apparatus
CN117671784A (en) * 2023-12-04 2024-03-08 北京中航智信建设工程有限公司 Human behavior analysis method and system based on video analysis

Also Published As

Publication number Publication date
WO2021184619A1 (en) 2021-09-23

Similar Documents

Publication Title
CN111401270A (en) Human motion posture recognition and evaluation method and system
Yang et al. Towards rich feature discovery with class activation maps augmentation for person re-identification
CN108520226B (en) Pedestrian re-identification method based on body decomposition and significance detection
CN106295568B (en) The mankind's nature emotion identification method combined based on expression and behavior bimodal
Gall et al. Hough forests for object detection, tracking, and action recognition
CN110781829A (en) Light-weight deep learning intelligent business hall face recognition method
CN111291604A (en) Face attribute identification method, device, storage medium and processor
WO2020140723A1 (en) Method, apparatus and device for detecting dynamic facial expression, and storage medium
CN106648078B (en) Multi-mode interaction method and system applied to intelligent robot
JP3938872B2 (en) Data classification device and object recognition device
CN111523462A (en) Video sequence list situation recognition system and method based on self-attention enhanced CNN
CN101027678A (en) Single image based multi-biometric system and method
CN109086659B (en) Human behavior recognition method and device based on multi-channel feature fusion
CN101299234B (en) Method for recognizing human eye state based on built-in type hidden Markov model
CN110674875A (en) Pedestrian motion mode identification method based on deep hybrid model
CN107820619A (en) One kind classification interactive decision making method, interactive terminal and cloud server
CN111914643A (en) Human body action recognition method based on skeleton key point detection
CN109325408A (en) A kind of gesture judging method and storage medium
CN109977867A (en) A kind of infrared biopsy method based on machine learning multiple features fusion
CN111199202A (en) Human body action recognition method and device based on circulating attention network
CN109145947B (en) Fashion women's dress image fine-grained classification method based on part detection and visual features
CN110599463A (en) Tongue image detection and positioning algorithm based on lightweight cascade neural network
CN115909407A (en) Cross-modal pedestrian re-identification method based on character attribute assistance
CN111860117A (en) Human behavior recognition method based on deep learning
Hoque et al. Bdsl36: A dataset for bangladeshi sign letters recognition

Legal Events

Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20200710)