CN110674785A - Multi-person posture analysis method based on human body key point tracking - Google Patents


Info

Publication number
CN110674785A
Authority
CN
China
Prior art keywords
human body
key point
key points
key
video stream
Prior art date
Legal status
Pending
Application number
CN201910948203.6A
Other languages
Chinese (zh)
Inventor
蒋平
郭昌野
王文
Current Assignee
Zhongxing Flying Mdt Infotech Ltd
Original Assignee
Zhongxing Flying Mdt Infotech Ltd
Priority date
Filing date
Publication date
Application filed by Zhongxing Flying Mdt Infotech Ltd
Priority to CN201910948203.6A
Publication of CN110674785A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a multi-person posture analysis method based on human body key point tracking, comprising the following steps: S1, acquiring a video stream; S2, detecting the human body key points in the images of the video stream with a key point detection model; S3, selecting, from the detected human body key points, points with high confidence and well-calibrated position predictions; S4, obtaining the positions of each person in the current frame and all previous frames of the video stream; S5, determining the posture types to be analyzed; S6, designing a classification network; and S7, finally inputting the key point sequence from a segment of the video stream into the classification network. The method tracks the key points, extracting the key points of the same person over a period of time and distinguishing key points belonging to different people; it also integrates all key point information across the sequence of frames, adds time-dimension information, and retroactively corrects outlier key points.

Description

Multi-person posture analysis method based on human body key point tracking
Technical Field
The invention relates to the technical field of human body posture recognition, in particular to a multi-person posture analysis method based on human body key point tracking.
Background
With the development and application of computer science and artificial intelligence, video analysis technology has emerged rapidly and attracted wide attention. One of its frontier directions is human behavior recognition, which has broad application prospects in fields such as virtual reality, visual surveillance, and perceptual user interfaces (PUI). The accuracy and speed of recognition directly affect the downstream video analysis system. Current behavior recognition detects the positions of human skeletal joint points by analyzing a target image or video and then recognizes human actions through logical judgment; how to detect human key points efficiently and accurately has therefore become a key problem in the field of video analysis.
The implementation most similar to the invention is patent CN109165552A, a gesture recognition method, system, and memory based on human key points. The method judges moving areas in the video picture through motion detection, performs target detection within those effective areas, acquires key points of the body and face through gesture recognition, applies machine learning to the key points, and recognizes human postures such as standing, sitting, turning, and playing. Key points of the human body and face are thus trained, and the posture of the target is then recognized.
Another similarly close implementation is patent CN107358149A, a human posture detection method and device. The method obtains a person image comprising a pedestrian area and a background area, the pedestrian area including a left shoulder point and a right shoulder point; detects the position and width of the pedestrian area in the person image, together with the number and positions of key points in the image; marks the person image as unoccluded if the number of key points equals a preset number; calculates a first distance between the left shoulder point and the right shoulder point; and marks the person image as non-turning if the first distance is not less than a turning threshold. The state of the person image is thus judged from the number of key points and the distances between them. The method can classify multiple person images by state, or select images in the unoccluded, non-turning state for subsequent operations such as pedestrian recognition, thereby improving recognition accuracy.
Current human posture recognition methods mainly use a number of mature networks to extract the positions of human key points in an image, and then determine the current posture by analyzing the number of key points and the positional relationships between them. The prior art, however, has the following problems: first, when a convolutional neural network is used to acquire multi-person key points, errors inevitably occur in information such as the number and positions of the key points; second, for a video stream, performing posture analysis with the human key points of only a single frame yields low accuracy because of such errors; third, the accuracy of pose estimates obtained solely from key point counts and position information is low.
To address these problems, the invention tracks human key points, extracts the key point information of a given person in the video over a period of time, and classifies the key point feature maps with a deep learning method, thereby comprehensively analyzing the subject's posture. This offers the following advantages: first, tracking the key points and adding time-dimension information reduces interference between the key points of different people; second, combining the tracking information of consecutive frames further improves the accuracy of key point detection; third, classifying the key points with a deep learning method effectively distinguishes postures such as climbing, jumping, lying, running, and squatting.
Disclosure of Invention
(I) Technical problem to be solved
The invention provides a multi-person posture analysis method based on human body key point tracking, which is used for analyzing the postures of a plurality of persons in a video stream.
(II) Technical scheme
In order to achieve the above purpose, the invention is realized through the following technical solution: a multi-person posture analysis method based on human body key point tracking, specifically comprising the following steps:
s1, acquiring a video stream;
s2, detecting the human body key points in the images of the video stream using a key point detection model, to obtain the positions and prediction confidences of the key points of each body part of every person in the image;
s3, selecting, from the human body key points, points with high confidence and well-calibrated position predictions, and using them to represent the position of the person;
s4, after obtaining the positions of the people in the current frame and all previous frames of the video stream, associating the people between adjacent frames with a filtering algorithm, thereby obtaining the motion trajectory of every person from the start of the video stream to the current frame; while recording each person's trajectory, recording the positions of that person's key points in every frame, and retroactively correcting the human key point information across the sequence of frames to improve precision for later comprehensive analysis;
s5, determining the posture types to be analyzed, simulating each posture to obtain the pedestrian key point information under that posture, and recording it as training samples;
s6, designing a classification network, inputting the key point information feature maps of the simulated postures into the classification network for training, and obtaining a reliable posture classification model by tuning the relevant hyper-parameters;
and S7, finally, inputting the key point sequence from a segment of the video stream, obtained in step S4, into the classification network obtained in step S6, making the final judgment, and ranking the final posture classification results by confidence.
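Step S3's selection of a representative point can be illustrated with a short sketch. It assumes key points arrive as (name, x, y, confidence) tuples; the fallback order and the confidence threshold are hypothetical choices, with the neck first as in the preferred embodiment:

```python
# Sketch of step S3: pick one well-localized, high-confidence key point
# to stand for a person's position. The neck comes first, following the
# preferred embodiment; the fallback names and threshold are assumptions.
CENTER_PRIORITY = ["neck", "left_shoulder", "right_shoulder", "crotch"]

def person_center(keypoints, min_conf=0.5):
    """keypoints: iterable of (name, x, y, confidence) tuples.
    Returns (x, y) of the most reliable central key point, or None."""
    by_name = {name: (x, y) for name, x, y, c in keypoints if c >= min_conf}
    for name in CENTER_PRIORITY:
        if name in by_name:
            return by_name[name]
    return None
```

A person whose neck is occluded would then fall back to a shoulder; if no central point clears the threshold, the person is skipped for that frame.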
Preferably, in step S2 the head key points of the key point detection model are the eyes, nose, and ears; the upper-limb key points are the hands, elbows, and shoulders; and the lower-limb key points are the feet, knees, and crotch.
Preferably, in step S3, the key point of the neck is selected as the center point of the person.
Preferably, in step S7, if the pose with the highest confidence exceeds a certain threshold, it is determined as the final pose of the video stream segment.
Preferably, the filtering algorithm in step S4 is a Kalman filtering algorithm, although the method is not limited to Kalman filtering.
Preferably, in step S1, the video stream is acquired by a camera or by reading a video file.
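Since step S4 names Kalman filtering as the preferred (but not exclusive) filtering algorithm, a minimal constant-velocity Kalman filter for one coordinate of a person's center point is sketched below. The process and measurement noise values are illustrative assumptions, and a 2-D tracker would simply run one such filter per axis:

```python
# Illustrative constant-velocity Kalman filter for one coordinate of a
# tracked person's center point. State is (position, velocity); only the
# position is measured. Noise parameters q and r are assumed values.
class Kalman1D:
    def __init__(self, x0, q=1e-2, r=1.0):
        self.x, self.v = x0, 0.0           # state: position, velocity
        self.P = [[1.0, 0.0], [0.0, 1.0]]  # state covariance
        self.q, self.r = q, r              # process / measurement noise

    def predict(self, dt=1.0):
        # x' = F x with F = [[1, dt], [0, 1]];  P' = F P F^T + Q
        self.x += self.v * dt
        P = self.P
        p00 = P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + self.q
        p01 = P[0][1] + dt * P[1][1]
        p10 = P[1][0] + dt * P[1][1]
        p11 = P[1][1] + self.q
        self.P = [[p00, p01], [p10, p11]]
        return self.x

    def update(self, z):
        # Measurement H = [1, 0]: position only.
        s = self.P[0][0] + self.r            # innovation covariance
        k0, k1 = self.P[0][0] / s, self.P[1][0] / s  # Kalman gain
        y = z - self.x                       # innovation
        self.x += k0 * y
        self.v += k1 * y
        P = self.P
        self.P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
                  [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        return self.x
```

Feeding the filter the per-frame x (or y) of a person's center point yields a smoothed position and a velocity estimate; the predicted position before each update is what the association step would compare against new detections.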
(III) Advantageous effects
The invention provides a multi-person posture analysis method based on human body key point tracking. Compared with the prior art, the method has the following beneficial effects:
(1) To analyze the postures of multiple people in a video stream, the multi-person posture analysis method based on human body key point tracking first extracts the positions of human key points in the current frame with a convolutional neural network, then tracks the key points by combining information from previous frames to obtain the time-series information of all key points of each person over a period of time, and classifies the key points containing this time-series information with a deep learning classification model to judge each person's posture in the video. Tracking the human key points with a Kalman filtering method and adding time-dimension information to the resulting key point information improves the accuracy of the analysis. On one hand, the postures of multiple people can be analyzed separately; on the other hand, the key points of the same person over a period of time are extracted and key points belonging to different people are distinguished. Meanwhile, all key point information of the sequence of frames is integrated, time-dimension information is added, and outlier key points are retroactively corrected.
(2) By using a deep learning method for posture classification, the multi-person posture analysis method based on human body key point tracking obtains classification results with high accuracy and strong robustness, improves the accuracy of posture analysis, and is suitable for scenarios in which the postures of multiple people in a video stream are analyzed.
Drawings
Fig. 1 is a system structure diagram according to a first embodiment of the present invention;
fig. 2 is a flowchart of a keypoint detection module according to a second embodiment of the present invention;
fig. 3 is a flowchart of a multi-user tracking module according to a third embodiment of the present invention;
fig. 4 is a flowchart of a posture comprehensive analysis module according to a fourth embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to figs. 1-4, the embodiments of the present invention provide four technical solutions for a multi-person posture analysis method based on human body key point tracking, specifically comprising the following embodiments:
example 1
As shown in fig. 1, module 101: the video stream input module acquires the video stream by means such as a camera or video file reading.
Module 102: the key point detection module detects the human body key points in the images of the video stream using a key point detection model, obtaining the positions and prediction confidences of the key points of each body part of every person in the image.
Module 103: the multi-target tracking module selects a key point to represent a person's position. After the positions of the people in the current frame and all previous frames of the video stream are obtained, Kalman filtering (but not limited to Kalman filtering) is used to associate the people between adjacent frames, thereby obtaining the motion trajectory of every person from the start of the video stream to the current frame. While each person's trajectory is recorded, the positions of that person's key points in every frame are recorded as well, and the human key point information across the sequence of frames is retroactively corrected to improve precision for later comprehensive analysis.
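The association performed by module 103 can be illustrated, in deliberately simplified form, by greedy nearest-neighbour matching between tracked people and new detections. The preferred embodiment predicts each person's position with a Kalman filter before matching; this sketch matches raw center points within an assumed gating distance:

```python
# Simplified sketch of frame-to-frame association in module 103:
# greedily pair each tracked person with the nearest new detection,
# rejecting pairs beyond a gating distance. max_dist is an assumption.
import math

def associate(tracks, detections, max_dist=50.0):
    """tracks: {person_id: (x, y)}; detections: list of (x, y).
    Returns {person_id: detection_index} for matched pairs."""
    pairs = sorted(
        (math.dist(p, d), pid, j)
        for pid, p in tracks.items()
        for j, d in enumerate(detections)
    )
    matched, used = {}, set()
    for dist, pid, j in pairs:
        if dist > max_dist:
            break  # remaining pairs are even farther apart
        if pid not in matched and j not in used:
            matched[pid] = j
            used.add(j)
    return matched
```

Unmatched detections would start new trajectories, and unmatched tracks would be coasted or dropped; a production tracker would also resolve ties with the Hungarian algorithm rather than greedily.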
Module 104: the posture classification module determines the posture types to be analyzed, simulates each posture to obtain the pedestrian key point information under that posture, inputs the key point information into a network as training samples, and tunes the relevant hyper-parameters to obtain a well-performing pre-trained model.
Module 105: the posture comprehensive analysis module inputs the key point sequence frames into the pre-trained model to obtain the confidences of all postures within the time period, and judges the final posture classification result by setting a threshold.
Example 2
As shown in fig. 2, step 201: inputting a frame image to be detected.
Step 202: detect the human body key points in the images of the video stream using the key point detection model.
Step 203: integrate the detection results to obtain the positions and prediction confidences of the key points of each body part of every person in the image, and record the key point detection result of each frame.
Example 3
As shown in fig. 3, step 301: a key point is selected as the center point of a person and used to determine the person's position.
Step 302: after the positions of the people in the current frame and all previous frames of the video stream are obtained, Kalman filtering (but not limited to Kalman filtering) can be used to associate people between consecutive frames, thereby obtaining the motion trajectory of every person from the start of the video stream to the current frame.
Step 303: because single-frame detection may be anomalous, correct key point information for the sequence of frames is obtained by retroactively correcting outlier key points.
Step 304: the trajectory of each person is recorded, and the positions of that person's key points in each frame are recorded at the same time, so that comprehensive analysis can be performed later.
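Step 303's reverse correction of outliers could, for instance, be realized by temporal interpolation: a coordinate far from both of its temporal neighbours, while those neighbours agree with each other, is treated as a single-frame detection error and replaced. The jump threshold below is an assumed parameter, not a value from the patent:

```python
# Sketch of step 303: retroactively correct an isolated outlier in the
# per-frame trajectory of one key point coordinate. max_jump is assumed.
def correct_outliers(xs, max_jump=20.0):
    """xs: per-frame 1-D coordinates of one key point. Returns a copy
    with isolated outliers replaced by neighbour interpolation."""
    ys = list(xs)
    for i in range(1, len(ys) - 1):
        prev_, next_ = ys[i - 1], ys[i + 1]
        # Outlier: far from both neighbours while the neighbours agree.
        if (abs(ys[i] - prev_) > max_jump and abs(ys[i] - next_) > max_jump
                and abs(next_ - prev_) <= max_jump):
            ys[i] = (prev_ + next_) / 2.0
    return ys
```

Running this over each coordinate of each recorded key point, once the full sequence of frames is available, is what makes the correction "reverse": later frames inform the repair of earlier ones.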
Example 4
As shown in fig. 4, step 401: integrate the key point information of multiple frames, input it into a long short-term memory (LSTM) network, and fuse the time-series information to obtain the final feature map.
Step 402: input the final feature map into a fully connected layer for classification to obtain the posture confidence ranking of the sequence frames.
Step 403: set a threshold; the posture with the maximum confidence that also exceeds the threshold is judged as the final posture.
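Step 403's decision rule, ranking the pose confidences and accepting the top pose only if it clears a threshold, can be sketched as follows; the pose names and threshold value are illustrative:

```python
# Sketch of step 403: rank per-pose confidences from the classifier and
# accept the top pose only above a threshold. threshold is an assumption.
def decide_pose(confidences, threshold=0.6):
    """confidences: {pose_name: confidence in [0, 1]}.
    Returns (ranking, final_pose); final_pose is None below threshold."""
    ranking = sorted(confidences.items(), key=lambda kv: kv[1], reverse=True)
    best_pose, best_conf = ranking[0]
    final = best_pose if best_conf >= threshold else None
    return ranking, final
```

Returning None when no pose is confident enough lets the system defer judgment on an ambiguous segment instead of forcing a label.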
To sum up
The method analyzes the postures of multiple people in a video stream by first extracting the positions of human key points in the current frame with a convolutional neural network, then tracking the key points by combining information from previous frames to obtain the time-series information of all key points of each person over a period of time, and finally classifying the key points containing this time-series information with a deep learning classification model to judge each person's posture in the video. Tracking the human key points with a Kalman filtering method adds time-dimension information to the key point information and improves analysis accuracy: on one hand, the postures of multiple people can be analyzed separately; on the other hand, the key points of the same person over a period of time are extracted and key points belonging to different people are distinguished. Meanwhile, by using a deep learning method for posture classification, classification results with high accuracy and strong robustness are obtained, the accuracy of posture analysis is improved, and the method suits scenarios in which the postures of multiple people in a video stream are analyzed.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (6)

1. A multi-person posture analysis method based on human body key point tracking, characterized in that the method specifically comprises the following steps:
s1, acquiring a video stream;
s2, detecting the human body key points in the images of the video stream using a key point detection model, to obtain the positions and prediction confidences of the key points of each body part of every person in the image;
s3, selecting, from the human body key points, points with high confidence and well-calibrated position predictions, and using them to represent the position of the person;
s4, after obtaining the positions of the people in the current frame and all previous frames of the video stream, associating the people between adjacent frames with a filtering algorithm, thereby obtaining the motion trajectory of every person from the start of the video stream to the current frame; while recording each person's trajectory, recording the positions of all of that person's key points in every frame, and retroactively correcting the human key point information across the sequence of frames;
s5, determining the posture types to be analyzed, simulating each posture to obtain the pedestrian key point information under that posture, and recording it as training samples;
s6, designing a classification network, inputting the key point information feature maps of the simulated postures into the classification network for training, and obtaining a reliable posture classification model by tuning the relevant hyper-parameters;
and S7, finally, inputting the key point sequence from a segment of the video stream, obtained in step S4, into the classification network obtained in step S6, making the final judgment, and ranking the final posture classification results by confidence.
2. The multi-person posture analysis method based on human body key point tracking according to claim 1, characterized in that: in step S2, the head key points of the key point detection model are the eyes, nose, and ears; the upper-limb key points are the hands, elbows, and shoulders; and the lower-limb key points are the feet, knees, and crotch.
3. The multi-person posture analysis method based on human body key point tracking according to claim 1, characterized in that: in step S3, the neck key point is selected as the center point of the person.
4. The multi-person posture analysis method based on human body key point tracking according to claim 1, characterized in that: if the pose with the highest confidence exceeds a certain threshold in step S7, it is determined as the final pose of the video stream segment.
5. The multi-person posture analysis method based on human body key point tracking according to claim 1, characterized in that: the filtering algorithm in step S4 is a Kalman filtering algorithm.
6. The multi-person posture analysis method based on human body key point tracking according to claim 1, characterized in that: in step S1, the video stream is acquired by a camera or by reading a video file.
CN201910948203.6A 2019-10-08 2019-10-08 Multi-person posture analysis method based on human body key point tracking Pending CN110674785A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910948203.6A CN110674785A (en) 2019-10-08 2019-10-08 Multi-person posture analysis method based on human body key point tracking

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910948203.6A CN110674785A (en) 2019-10-08 2019-10-08 Multi-person posture analysis method based on human body key point tracking

Publications (1)

Publication Number Publication Date
CN110674785A true CN110674785A (en) 2020-01-10

Family

ID=69080708

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910948203.6A Pending CN110674785A (en) 2019-10-08 2019-10-08 Multi-person posture analysis method based on human body key point tracking

Country Status (1)

Country Link
CN (1) CN110674785A (en)

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111507192A (en) * 2020-03-19 2020-08-07 北京捷通华声科技股份有限公司 Appearance instrument monitoring method and device
CN111724414A (en) * 2020-06-23 2020-09-29 宁夏大学 Basketball movement analysis method based on 3D attitude estimation
CN111753747A (en) * 2020-06-28 2020-10-09 高新兴科技集团股份有限公司 Violent motion detection method based on monocular camera and three-dimensional attitude estimation
CN111753724A (en) * 2020-06-24 2020-10-09 上海依图网络科技有限公司 Abnormal behavior identification method and device
CN112071084A (en) * 2020-09-18 2020-12-11 城云科技(中国)有限公司 Method and system for judging illegal parking by utilizing deep learning
CN112149494A (en) * 2020-08-06 2020-12-29 中国地质大学(武汉) Multi-person posture recognition method and system
CN112200076A (en) * 2020-10-10 2021-01-08 福州大学 Method for carrying out multi-target tracking based on head and trunk characteristics
CN112634400A (en) * 2020-12-21 2021-04-09 浙江大华技术股份有限公司 Rope skipping counting method, terminal and computer readable storage medium thereof
CN112733767A (en) * 2021-01-15 2021-04-30 西安电子科技大学 Human body key point detection method and device, storage medium and terminal equipment
CN112749658A (en) * 2020-04-30 2021-05-04 杨九妹 Pedestrian behavior analysis method and system for big data financial security system and robot
CN112861776A (en) * 2021-03-05 2021-05-28 罗普特科技集团股份有限公司 Human body posture analysis method and system based on dense key points
CN112990061A (en) * 2021-03-30 2021-06-18 上海擎朗智能科技有限公司 Control method and device of mobile equipment and storage medium
CN113158756A (en) * 2021-02-09 2021-07-23 上海领本智能科技有限公司 Posture and behavior analysis module and method based on HRNet deep learning
CN113192105A (en) * 2021-04-16 2021-07-30 嘉联支付有限公司 Method and device for tracking multiple persons and estimating postures indoors
CN113223084A (en) * 2021-05-27 2021-08-06 北京奇艺世纪科技有限公司 Position determination method and device, electronic equipment and storage medium
CN113269013A (en) * 2020-02-17 2021-08-17 京东方科技集团股份有限公司 Object behavior analysis method, information display method and electronic equipment
CN113470080A (en) * 2021-07-20 2021-10-01 浙江大华技术股份有限公司 Illegal behavior identification method
CN113554034A (en) * 2020-04-24 2021-10-26 北京达佳互联信息技术有限公司 Key point detection model construction method, detection method, device, equipment and medium
CN113569675A (en) * 2021-07-15 2021-10-29 郑州大学 Mouse open field experimental behavior analysis method based on ConvLSTM network
CN113822202A (en) * 2021-09-24 2021-12-21 河南理工大学 Taijiquan attitude detection system based on OpenPose and PyQt
CN113850221A (en) * 2021-09-30 2021-12-28 北京航空航天大学 Attitude tracking method based on key point screening
CN115631464A (en) * 2022-11-17 2023-01-20 北京航空航天大学 Pedestrian three-dimensional representation method oriented to large space-time target association
WO2023206236A1 (en) * 2022-04-28 2023-11-02 华为技术有限公司 Method for detecting target and related device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110025834A1 (en) * 2009-07-31 2011-02-03 Samsung Electronics Co., Ltd. Method and apparatus of identifying human body posture
CN109165552A (en) * 2018-07-14 2019-01-08 深圳神目信息技术有限公司 A kind of gesture recognition method based on human body key point, system and memory
CN109558865A (en) * 2019-01-22 2019-04-02 郭道宁 A kind of abnormal state detection method to the special caregiver of need based on human body key point
CN109670474A (en) * 2018-12-28 2019-04-23 广东工业大学 A kind of estimation method of human posture based on video, device and equipment
CN109902562A (en) * 2019-01-16 2019-06-18 重庆邮电大学 A kind of driver's exception attitude monitoring method based on intensified learning
CN110070029A (en) * 2019-04-17 2019-07-30 北京易达图灵科技有限公司 A kind of gait recognition method and device
CN110188599A (en) * 2019-04-12 2019-08-30 哈工大机器人义乌人工智能研究院 A kind of human body attitude behavior intellectual analysis recognition methods


Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021164464A1 (en) * 2020-02-17 2021-08-26 京东方科技集团股份有限公司 Object behavior analysis method, information display method, and electronic device
CN113269013A (en) * 2020-02-17 2021-08-17 京东方科技集团股份有限公司 Object behavior analysis method, information display method and electronic equipment
CN113269013B (en) * 2020-02-17 2024-06-07 京东方科技集团股份有限公司 Object behavior analysis method, information display method and electronic equipment
US12008793B2 (en) 2020-02-17 2024-06-11 Boe Technology Group Co., Ltd. Object behavior analysis method, information display method, and electronic device
CN111507192A (en) * 2020-03-19 2020-08-07 北京捷通华声科技股份有限公司 Appearance instrument monitoring method and device
CN113554034A (en) * 2020-04-24 2021-10-26 北京达佳互联信息技术有限公司 Key point detection model construction method, detection method, device, equipment and medium
CN112749658A (en) * 2020-04-30 2021-05-04 杨九妹 Pedestrian behavior analysis method and system for big data financial security system and robot
CN111724414B (en) * 2020-06-23 2024-01-26 宁夏大学 Basketball motion analysis method based on 3D pose estimation
CN111724414A (en) * 2020-06-23 2020-09-29 宁夏大学 Basketball motion analysis method based on 3D pose estimation
CN111753724A (en) * 2020-06-24 2020-10-09 上海依图网络科技有限公司 Abnormal behavior identification method and device
CN111753747B (en) * 2020-06-28 2023-11-24 高新兴科技集团股份有限公司 Violent motion detection method based on monocular camera and three-dimensional attitude estimation
CN111753747A (en) * 2020-06-28 2020-10-09 高新兴科技集团股份有限公司 Violent motion detection method based on monocular camera and three-dimensional attitude estimation
CN112149494A (en) * 2020-08-06 2020-12-29 中国地质大学(武汉) Multi-person posture recognition method and system
CN112071084A (en) * 2020-09-18 2020-12-11 城云科技(中国)有限公司 Method and system for judging illegal parking by utilizing deep learning
CN112200076A (en) * 2020-10-10 2021-01-08 福州大学 Method for carrying out multi-target tracking based on head and trunk characteristics
CN112200076B (en) * 2020-10-10 2023-02-21 福州大学 Method for carrying out multi-target tracking based on head and trunk characteristics
CN112634400A (en) * 2020-12-21 2021-04-09 浙江大华技术股份有限公司 Rope skipping counting method, terminal and computer readable storage medium thereof
CN112733767A (en) * 2021-01-15 2021-04-30 西安电子科技大学 Human body key point detection method and device, storage medium and terminal equipment
CN113158756A (en) * 2021-02-09 2021-07-23 上海领本智能科技有限公司 Posture and behavior analysis module and method based on HRNet deep learning
CN112861776A (en) * 2021-03-05 2021-05-28 罗普特科技集团股份有限公司 Human body posture analysis method and system based on dense key points
CN112990061A (en) * 2021-03-30 2021-06-18 上海擎朗智能科技有限公司 Control method and device of mobile equipment and storage medium
CN113192105B (en) * 2021-04-16 2023-10-17 嘉联支付有限公司 Method and device for indoor multi-person tracking and posture estimation
CN113192105A (en) * 2021-04-16 2021-07-30 嘉联支付有限公司 Method and device for indoor multi-person tracking and posture estimation
CN113223084B (en) * 2021-05-27 2024-03-01 北京奇艺世纪科技有限公司 Position determining method and device, electronic equipment and storage medium
CN113223084A (en) * 2021-05-27 2021-08-06 北京奇艺世纪科技有限公司 Position determination method and device, electronic equipment and storage medium
CN113569675B (en) * 2021-07-15 2023-05-23 郑州大学 ConvLSTM network-based mouse open field experimental behavior analysis method
CN113569675A (en) * 2021-07-15 2021-10-29 郑州大学 Mouse open field experimental behavior analysis method based on ConvLSTM network
CN113470080A (en) * 2021-07-20 2021-10-01 浙江大华技术股份有限公司 Illegal behavior identification method
CN113470080B (en) * 2021-07-20 2024-05-14 浙江大华技术股份有限公司 Illegal behavior recognition method
CN113822202A (en) * 2021-09-24 2021-12-21 河南理工大学 Taijiquan attitude detection system based on OpenPose and PyQt
CN113850221A (en) * 2021-09-30 2021-12-28 北京航空航天大学 Attitude tracking method based on key point screening
WO2023206236A1 (en) * 2022-04-28 2023-11-02 华为技术有限公司 Method for detecting target and related device
CN115631464A (en) * 2022-11-17 2023-01-20 北京航空航天大学 Pedestrian three-dimensional representation method oriented to large space-time target association

Similar Documents

Publication Publication Date Title
CN110674785A (en) Multi-person posture analysis method based on human body key point tracking
CN108256433B (en) Motion attitude assessment method and system
CN110472554B (en) Table tennis action recognition method and system based on attitude segmentation and key point features
CN109558810B (en) Target person identification method based on part segmentation and fusion
TWI430185B (en) Facial expression recognition systems and methods and computer program products thereof
Li et al. Model-based segmentation and recognition of dynamic gestures in continuous video streams
CN108256421A (en) Real-time dynamic gesture sequence recognition method, system and device
CN111104816A (en) Target object posture recognition method and device and camera
CN109101865A (en) Pedestrian re-identification method based on deep learning
CN110555387B (en) Behavior identification method based on space-time volume of local joint point track in skeleton sequence
US20090316983A1 (en) Real-Time Action Detection and Classification
CN111460976B (en) Data-driven real-time hand motion assessment method based on RGB video
CN110956141B (en) Human body continuous action rapid analysis method based on local recognition
Hu et al. Exemplar-based recognition of human–object interactions
CN112800892B (en) Human body posture recognition method based on openposition
Yi et al. Human action recognition based on action relevance weighted encoding
CN111105443A (en) Video group figure motion trajectory tracking method based on feature association
CN110309729A (en) Tracking and re-detection method based on anomaly peak detection and twin network
CN117137435B (en) Rehabilitation action recognition method and system based on multi-mode information fusion
Min Human body pose intelligent estimation based on BlazePose
Parashar et al. Improved Yoga Pose Detection Using MediaPipe and MoveNet in a Deep Learning Model.
Tsai et al. Temporal-variation skeleton point correction algorithm for improved accuracy of human action recognition
CN112580526A (en) Student classroom behavior identification system based on video monitoring
Kulic et al. Detecting changes in motion characteristics during sports training
CN115294660B (en) Body-building action recognition model, training method of model and body-building action recognition method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200110