CN112287758A - Climbing identification method based on key point detection

Climbing identification method based on key point detection

Info

Publication number
CN112287758A
Authority
CN
China
Prior art keywords
distance
climbing
climbing action
human body
angle
Prior art date
Legal status
Granted
Application number
CN202011026072.5A
Other languages
Chinese (zh)
Other versions
CN112287758B (en)
Inventor
张继勇
戴振宇
Current Assignee
Zhejiang Handrui Intelligent Technology Co Ltd
Original Assignee
Zhejiang Handrui Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Handrui Intelligent Technology Co Ltd filed Critical Zhejiang Handrui Intelligent Technology Co Ltd
Priority to CN202011026072.5A priority Critical patent/CN112287758B/en
Publication of CN112287758A publication Critical patent/CN112287758A/en
Application granted granted Critical
Publication of CN112287758B publication Critical patent/CN112287758B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F 18/24133 Distances to prototypes
    • G06F 18/24137 Distances to cluster centroïds
    • G06F 18/2414 Smoothing the distance, e.g. radial basis function networks [RBFN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a climbing identification method based on key point detection. The method comprises the following steps: S10, installing the Detectron2 open-source framework; S20, configuring file parameters; S30, defining a human-body climbing-action class and constructing a main function; S40, processing the collected image data; and S50, judging the climbing action. The method calculates the relationships among key points (such as distances and the angles of vectors formed by three points), evaluates three consecutive video frames, and makes a judgment only when the key-point relationships of every frame meet the climbing condition; this effectively increases the robustness of the recognition system, gives higher reliability, and has great practical value.

Description

Climbing identification method based on key point detection
Technical Field
The invention belongs to the technical field of deep learning, and relates to a climbing identification method based on key point detection.
Background
Human body key point detection identifies and localizes the key points of every person in an image and is a fundamental research topic for many vision applications (such as pedestrian action recognition and human-computer interaction). Human postures are composed of different structural parts of the body, such as the head, trunk, elbows and legs, and the deformation of these parts is highly diverse, which makes human key point detection very challenging. Over the last decade, many different approaches have been proposed to solve this problem, from early methods based on pictorial structures and graphical models to later depth-map methods using RGB-D cameras; these achieved some success but were difficult to put into practical use. It was not until 2014, when convolutional neural networks were introduced into the field of human key point detection, that key point detection made significant progress.
According to the type of input image, key point detection can be divided into depth-image-based and ordinary-RGB-image-based methods. In recent years, human key point detection research has mostly been based on ordinary RGB images, and RGB-based human key point detection can be divided into two categories: traditional methods and deep learning methods.
Traditional methods mainly adopt graph structures to solve the human key point detection problem; for example, tree models (Tree Models) and random forest models (Random Forest Models) have proved to be very effective key point detection algorithms. Other algorithms are based on graphical models, such as random field models (Random Field Models) and dependency graph models (Dependency Graph Models), which many scholars are also studying. The pictorial structure was proposed by Fischler et al. [21] in 1973, originally as a classical facial structure detection algorithm. Later, Felzenszwalb et al. made significant improvements to the pictorial structure, making the approach more general.
Because deep learning has become extremely popular in the field of computer vision, most current human key point detection methods are based on deep learning. According to the processing scenario, they can be divided into single-person and multi-person key point detection; single-person methods handle the single-person detection problem and generally require the person to be at the center of the picture. Toshev et al. proposed the DeepPose algorithm, the first to introduce convolutional neural networks to the human key point detection problem; DeepPose processes key point detection through a cascade of convolutional neural networks. DeepPose regresses coordinate points directly, and because human postures are flexible and varied, direct coordinate regression is difficult and training does not converge easily. Tompson et al. instead predicted heatmaps of the key points, using a convolutional neural network for feature extraction and a graph model to encode the positional relationships between human key points; the position with the largest response value represents the position of a key point, and training converges more easily. This work established the central role of heatmaps in the field of key point detection, and most subsequent methods use heatmaps to predict key point positions. In 2016, some scholars generated key point heatmaps by constructing very deep convolutional neural networks, and the detection results reached a leading level on various data sets.
In recent years, the rise of deep learning has provided new ideas for solving the human key point detection problem, but a complex network model usually requires higher-end hardware support and longer computation time, and most deep learning network models consume large amounts of manpower, material resources and computing cost. Detection algorithms have also developed from single-person key point detection to key point detection under multi-person conditions, which is usually accompanied by a more complex network structure, meaning higher computational cost and longer training time. Today, a good network model for the human key point detection problem should offer high robustness, low computational cost, high recognition precision and the ability to handle multiple people, such as R-CNN.
The region-based convolutional neural network (R-CNN) series is a mature network structure that has grown out of target detection algorithms in recent years. It stacks many convolution structures, even dozens or hundreds of convolutional, activation and pooling layers, to extract image features; it splits the target detection problem into a pair of sub-problems, regression and classification; and it uses supervised pre-training on large samples followed by fine-tuning on small samples to overcome the difficulty of training specific models and the risk of over-fitting.
Disclosure of Invention
In order to solve the above problems, the invention uses a deep learning method to identify human key points and thereby judge human actions, which is of great significance for improving safety at production and construction sites. On a building site, detecting the climbing actions of workers draws attention to climbing workers so as to ensure their safety; during the exterior cleaning of tall buildings, the climbing identification technology based on key point detection can likewise be used to mark the workers who are cleaning, so that attention is paid to their operational safety.
In order to achieve the above aim, the technical scheme of the invention is a climbing identification method based on key point detection, which comprises the following steps:
S10, installing the Detectron2 open-source framework;
S20, configuring file parameters;
S30, defining a human-body climbing-action class and constructing a main function;
S40, processing the collected image data;
S50, judging the climbing action;
The S30 definition of the human-body climbing-action class and construction of the main function comprises the following steps:
S31, defining a human-body climbing-action class, which processes the key points of a human body and makes the specific climbing-action judgment;
S32, after the class is defined, defining its methods and attributes, and establishing methods for calculating vector angles and point distances (see the sketch after this list);
S33, defining an image-writing method that displays the calculated angles and distances in real time on each frame;
S34, defining an image-processing method that splits the video into frames and processes each frame independently;
S35, defining a storage method that classifies and stores the calculated angles;
S36, defining an angle and distance judgment method that processes the angle and distance relationships, judges the upper-limb angles, the lower-limb angles and the distance ratios separately, and returns a corresponding value or array: 1 for an angle or distance that meets the judgment condition, and 0 otherwise;
S37, defining a climbing-action judgment method that sums the values returned by S36; if the sum meets the condition, it returns 1, indicating a climbing action, and otherwise returns 0;
S38, defining a video-processing method that draws the key points and the human-body bounding box on each frame, runs the climbing-action judgment method, and continuously returns each frame's value to the main function for the next judgment;
the main function processes the video stream, capturing each video frame as an image on which climbing-action recognition is performed.
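A minimal sketch of the S32 helper methods, assuming key points arrive as (x, y) coordinate pairs; the names vector_angle and point_distance are illustrative and do not come from the patent:

```python
import math

def point_distance(p1, p2):
    """Euclidean distance between two key points given as (x, y) pairs."""
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

def vector_angle(a, center, b):
    """Angle in degrees at `center`, formed by the vectors center->a and center->b."""
    v1 = (a[0] - center[0], a[1] - center[1])
    v2 = (b[0] - center[0], b[1] - center[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    if norm == 0:
        return 0.0  # degenerate case: coincident key points
    # clamp to [-1, 1] before acos to guard against floating-point drift
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
```

Treating the middle point of each three-point triple as the vertex matches the angle definitions given with Table 1 below, where the elbows and knees serve as center points.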
Preferably, the Detectron2 open-source framework is installed from GitHub, which configures the installation environment of Detectron2.
Preferably, configuring the file parameters comprises the following steps (see the sketch following these steps):
S21, importing the detectron2 and opencv (cv2) modules in the Python main file;
S22, acquiring the program's configuration parameters through a yaml file and the default configuration provided by detectron2, selecting a suitable R-CNN model, and downloading the required pre-trained model;
S23, defining a get_parser() function to accept the program's input parameters.
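A sketch of S21-S23, assuming the standard detectron2 model-zoo API; the particular yaml file (a COCO Keypoint R-CNN configuration) and the command-line flags are illustrative choices, since the patent does not name them:

```python
import argparse

import cv2  # opencv-python
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

def get_parser():
    # S23: accepts the program's input parameters; the flags are illustrative
    parser = argparse.ArgumentParser(description="climbing recognition")
    parser.add_argument("--video-input", help="camera stream URL or video file path")
    parser.add_argument("--confidence-threshold", type=float, default=0.5)
    return parser

def setup_cfg(args):
    # S22: merge a yaml config shipped with detectron2 and fetch the matching
    # pre-trained Keypoint R-CNN weights from the model zoo
    cfg = get_cfg()
    cfg.merge_from_file(model_zoo.get_config_file(
        "COCO-Keypoints/keypoint_rcnn_R_50_FPN_3x.yaml"))
    cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
        "COCO-Keypoints/keypoint_rcnn_R_50_FPN_3x.yaml")
    cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = args.confidence_threshold
    return cfg

args = get_parser().parse_args()
predictor = DefaultPredictor(setup_cfg(args))  # used later for per-frame inference
```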
Preferably, processing the collected image data comprises arranging 15 to 20 high-definition cameras at the application site to track in real time and continuously collect image data; the collected image data are the input to the Python program, which processes them in real time to detect climbing actions.
Preferably, judging the climbing action comprises running the defined human-body climbing-action class in the main function with specific parameters, obtaining the class's return values, and making a judgment according to the return values of three consecutive frames: if the return values are all 1, the action is determined to be a climbing action, and a warning sign or an alarm is output on the video page through opencv2 processing.
Preferably, judging the climbing action comprises processing the video data to obtain the 17 key points of each frame and calculating the angles of the left arm, right arm, left leg and right leg; calculating the shoulder distance and arm distance to obtain the shoulder-arm distance ratio; calculating the crotch distance and leg distance to obtain the crotch-leg distance ratio; after obtaining the angle and distance-ratio data, setting the arm angle threshold to 30-150° and the leg angle threshold to 45-160°; setting the shoulder-arm distance ratio threshold to 0.5-1 and the crotch-leg distance ratio threshold to 0.5-1; and judging each frame's data against the thresholds, wherein if three consecutive frames meet the threshold conditions, the climbing-action condition is satisfied and the human motion at that moment is judged to be a climbing action.
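The per-frame check described above can be sketched as follows; because the patent does not state precisely whether the climbing condition means falling inside or outside the stated ranges, this sketch treats in-range angles and small ratios as the climbing condition and returns 1/0 per the S36/S37 convention, so it is an assumed reading rather than the definitive rule:

```python
ARM_ANGLE = (30.0, 150.0)     # degrees, preferred range from the description
LEG_ANGLE = (45.0, 160.0)
SHOULDER_ARM_RATIO_MAX = 0.5  # lower end of the 0.5-1 range; matches d1 in FIG. 2
CROTCH_LEG_RATIO_MAX = 0.5    # matches d2 in FIG. 2

def judge_frame(angles, shoulder_arm_ratio, crotch_leg_ratio):
    """Return 1 if one frame meets the climbing condition, else 0.

    angles: dict with keys 'left_arm', 'right_arm', 'left_leg', 'right_leg'.
    """
    arms_ok = all(ARM_ANGLE[0] < angles[k] < ARM_ANGLE[1]
                  for k in ("left_arm", "right_arm"))
    legs_ok = all(LEG_ANGLE[0] < angles[k] < LEG_ANGLE[1]
                  for k in ("left_leg", "right_leg"))
    ratios_ok = (shoulder_arm_ratio < SHOULDER_ARM_RATIO_MAX
                 and crotch_leg_ratio < CROTCH_LEG_RATIO_MAX)
    return 1 if (arms_ok and legs_ok and ratios_ok) else 0
```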
Preferably, after a climbing action is determined, a warning is displayed on the display interface to indicate that a climbing action is in progress.
The invention has at least the following specific beneficial effects:
1. The highly integrated Facebook open-source framework detectron2 is used. It simplifies the early key-point identification work, reduces the project's code volume, and makes the detection system stable and reliable. Detectron2 also adopts currently advanced algorithms for human key point detection, so using the integrated framework effectively improves detection speed and accuracy. In addition, the integrated framework keeps the system code simple, intuitive, readable and extensible, so the climbing detection can be carried over to other action detection, such as jumping;
2. The relationships among the key points (such as distances and the angles of vectors formed by three points) are calculated, three consecutive video frames are judged, and the system makes a judgment only when the key-point relationships of every frame meet the climbing condition; this effectively increases the robustness of the recognition system, gives higher reliability, and has great practical value.
Drawings
FIG. 1 is a flow chart illustrating the steps of a climbing identification method based on keypoint detection according to an embodiment of the method of the present invention;
FIG. 2 is a flow chart of a climbing action judging step of the climbing identification method based on key point detection according to the embodiment of the method of the present invention;
FIG. 3 is a human body key point detection diagram of a climbing identification method based on key point detection according to an embodiment of the method.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
On the contrary, the invention is intended to cover alternatives, modifications and equivalents that may be included within the spirit and scope of the invention as defined by the appended claims. Furthermore, in the following detailed description of the present invention, certain specific details are set forth in order to provide a better understanding of the present invention. It will be apparent to one skilled in the art that the present invention may be practiced without these specific details.
Referring to FIG. 1, which shows a flow chart of the steps of a climbing identification method based on key point detection according to an embodiment of the present invention, the method includes the following steps:
S10, installing the Detectron2 open-source framework;
S20, configuring file parameters;
S30, defining a human-body climbing-action class and constructing a main function;
S40, processing the collected image data;
S50, judging the climbing action;
The S30 definition of the human-body climbing-action class and construction of the main function comprises the following steps:
S31, defining a human-body climbing-action class, which processes the key points of a human body and makes the specific climbing-action judgment;
S32, after the class is defined, defining its methods and attributes, and establishing methods for calculating vector angles and point distances;
S33, defining an image-writing method that displays the calculated angles and distances in real time on each frame;
S34, defining an image-processing method that splits the video into frames and processes each frame independently;
S35, defining a storage method that classifies and stores the calculated angles;
S36, defining an angle and distance judgment method that processes the angle and distance relationships, judges the upper-limb angles, the lower-limb angles and the distance ratios separately, and returns a corresponding value or array: 1 for an angle or distance that meets the judgment condition, and 0 otherwise;
S37, defining a climbing-action judgment method that sums the values returned by S36; if the sum meets the condition, it returns 1, indicating a climbing action, and otherwise returns 0;
S38, defining a video-processing method that draws the key points and the human-body bounding box on each frame, runs the climbing-action judgment method, and continuously returns each frame's value to the main function so as to carry out the next judgment.
In a specific embodiment, the Detectron2 open-source framework is installed from GitHub, which configures the installation environment of Detectron2.
Configuring the file parameters includes the following steps:
S21, importing the detectron2 and opencv (cv2) modules in the Python main file;
S22, acquiring the program's configuration parameters through a yaml file and the default configuration provided by detectron2, selecting a suitable R-CNN model, and downloading the required pre-trained model;
S23, defining a get_parser() function to accept the program's input parameters.
The collected image data are processed as follows: 15 to 20 high-definition cameras are arranged at the application site to track in real time and continuously collect image data; the collected image data are the input to the Python program, which processes them in real time to detect climbing actions.
To judge the climbing action, the defined human-body climbing-action class is run in the main function with specific parameters, the class's return values are obtained, and a judgment is made from the return values of three consecutive frames: if the return values are all 1, the action is determined to be a climbing action, and a warning sign or an alarm is output on the video page through opencv2 processing. To judge the climbing action, the video data are processed to obtain the 17 key points of each frame, and the angles of the left arm, right arm, left leg and right leg are calculated; the shoulder distance and arm distance are calculated to obtain the shoulder-arm distance ratio; the crotch distance and leg distance are calculated to obtain the crotch-leg distance ratio. After the angle and distance-ratio data are obtained, the arm angle threshold is set to 30-150°, the leg angle threshold to 45-160°, the shoulder-arm distance ratio threshold to 0.5-1 and the crotch-leg distance ratio threshold to 0.5-1. Each frame's data are judged against these thresholds; if three consecutive frames meet the threshold conditions, the climbing-action condition is satisfied and the human motion at that moment is judged to be a climbing action. After a climbing action is determined, a warning is displayed on the display interface to indicate that a climbing action is in progress.
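A sketch of the three-consecutive-frame decision and the opencv2 warning output just described; cv2.putText and cv2.imshow are standard OpenCV calls, while the overlay text, position and window name are illustrative:

```python
from collections import deque

import cv2

recent = deque(maxlen=3)  # return values of the last three frames

def on_frame(frame, frame_return_value):
    """Record one frame's 1/0 value and overlay a warning when all three are 1."""
    recent.append(frame_return_value)
    if len(recent) == 3 and sum(recent) == 3:
        # climbing action confirmed over three consecutive frames (S56)
        cv2.putText(frame, "WARNING: climbing action", (20, 40),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 0, 255), 2)
    cv2.imshow("climbing recognition", frame)
```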
Referring to FIG. 2, the climbing-action judgment is explained in detail. 15 to 20 high-definition cameras are arranged at an industrial production or construction site; the cameras track in real time, continuously acquire data, and transmit the acquired image data to the climbing identification program.
S51, the video data are processed by the human-body climbing-action class of the climbing identification method to obtain the 17 key points of each frame, and the angles of the left arm, right arm, left leg and right leg are calculated; the shoulder distance and arm distance are calculated to obtain the shoulder-arm distance ratio; the crotch distance and leg distance are calculated to obtain the crotch-leg distance ratio.
After the angle and distance-ratio data are obtained, a judgment is made. The arm angle thresholds are set to b1 = 150° and b2 = 30°; the leg angle thresholds are set to b3 = 160° and b4 = 45°; the shoulder-arm distance ratio threshold d1 is set to 0.5 and the crotch-leg distance ratio threshold d2 is set to 0.5.
S52, whether the left-arm and right-arm angles are within (b2, b1);
S53, whether the left-leg and right-leg angles are within (b4, b3);
S54, whether the shoulder-arm distance ratio and the crotch-leg distance ratio are smaller than d1 and d2 respectively;
S55, if all three conditions hold, S56: the motion is judged to be a climbing action;
if any of S52-S54 does not hold, S57: the motion is judged to be a non-climbing action.
Each frame's data are judged by the set judging method; if three consecutive frames meet the climbing-action condition, the human motion at that moment is judged to be a climbing action.
FIG. 3 shows the human body key points detected using the detectron2 framework; the key point positions are listed in Table 1.
TABLE 1
0 Nose
1 Left eye
2 Right eye
3 Left ear
4 Right ear
5 Left shoulder
6 Right shoulder
7 Left elbow
8 Right elbow
9 Left wrist
10 Right wrist
11 Left hip
12 Right hip
13 Left knee
14 Right knee
15 Left ankle
16 Right ankle
In the main function, θ1-θ4 are the angles of the left arm, right arm, left leg and right leg respectively;
left arm angle: the angle formed by points 5, 7 and 9, with point 7 as the vertex;
right arm angle: the angle formed by points 6, 8 and 10, with point 8 as the vertex;
right leg angle: the angle formed by points 12, 14 and 16, with point 14 as the vertex;
left leg angle: the angle formed by points 11, 13 and 15, with point 13 as the vertex;
arm angle upper threshold: b1;
arm angle lower threshold: b2;
leg angle upper threshold: b3;
leg angle lower threshold: b4;
shoulder distance: the distance between points 5 and 6;
arm distance: the distance between points 7 and 8;
crotch distance: the distance between points 11 and 12;
leg distance: the distance between points 13 and 14;
shoulder-arm distance ratio: shoulder distance / arm distance;
crotch-leg distance ratio: crotch distance / leg distance;
shoulder-arm distance ratio threshold: d1;
crotch-leg distance ratio threshold: d2.
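Putting Table 1 and the definitions above together, the following sketch maps a detectron2 key point array (one person, shape 17x3, rows in the Table 1 order) to θ1-θ4 and the two distance ratios, then drives the whole loop over a video stream. It reuses the vector_angle/point_distance, judge_frame and on_frame sketches given earlier, plus the predictor from the configuration sketch; all names are illustrative and none of this code appears in the patent itself:

```python
def frame_measurements(kpts):
    """kpts: 17x3 array of (x, y, score) rows in the Table 1 order."""
    p = [(float(r[0]), float(r[1])) for r in kpts]
    angles = {
        "left_arm":  vector_angle(p[5],  p[7],  p[9]),   # vertex at point 7
        "right_arm": vector_angle(p[6],  p[8],  p[10]),  # vertex at point 8
        "left_leg":  vector_angle(p[11], p[13], p[15]),  # vertex at point 13
        "right_leg": vector_angle(p[12], p[14], p[16]),  # vertex at point 14
    }
    # distance ratios per the definitions above; assumes non-degenerate detections
    shoulder_arm = point_distance(p[5], p[6]) / point_distance(p[7], p[8])
    crotch_leg = point_distance(p[11], p[12]) / point_distance(p[13], p[14])
    return angles, shoulder_arm, crotch_leg

cap = cv2.VideoCapture(args.video_input)  # camera stream URL or video file path
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    outputs = predictor(frame)                      # Keypoint R-CNN inference
    instances = outputs["instances"].to("cpu")
    if len(instances) > 0:
        kpts = instances.pred_keypoints[0].numpy()  # first detected person
        angles, sa, cl = frame_measurements(kpts)
        on_frame(frame, judge_frame(angles, sa, cl))
    else:
        on_frame(frame, 0)                          # no person detected
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```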
the above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (7)

1. The climbing identification method based on key point detection is characterized by comprising the following steps:
S10, installing the Detectron2 open-source framework;
S20, configuring file parameters;
S30, defining a human-body climbing-action class and constructing a main function;
S40, processing the collected image data;
S50, judging the climbing action;
the S30 definition of the human-body climbing-action class and construction of the main function comprising the following steps:
S31, defining a human-body climbing-action class, which processes the key points of a human body and makes the specific climbing-action judgment;
S32, after the class is defined, defining its methods and attributes, and establishing methods for calculating vector angles and point distances;
S33, defining an image-writing method that displays the calculated angles and distances in real time on each frame;
S34, defining an image-processing method that splits the video into frames and processes each frame independently;
S35, defining a storage method that classifies and stores the calculated angles;
S36, defining an angle and distance judgment method that processes the angle and distance relationships, judges the upper-limb angles, the lower-limb angles and the distance ratios separately, and returns a corresponding value or array: 1 for an angle or distance that meets the judgment condition, and 0 otherwise;
S37, defining a climbing-action judgment method that sums the values returned by S36; if the sum meets the condition, it returns 1, indicating a climbing action, and otherwise returns 0;
S38, defining a video-processing method that draws the key points and the human-body bounding box on each frame, runs the climbing-action judgment method, and continuously returns each frame's value to the main function for the next judgment;
the main function processing the video stream, capturing each video frame as an image on which climbing-action recognition is performed.
2. The method of claim 1, wherein the Detectron2 open-source framework is installed from GitHub, which configures the installation environment of Detectron2.
3. The method of claim 1, wherein configuring the file parameters comprises the following steps:
S21, importing the detectron2 and opencv (cv2) modules in the Python main file;
S22, acquiring the program's configuration parameters through a yaml file and the default configuration provided by detectron2, selecting a suitable R-CNN model, and downloading the required pre-trained model;
S23, defining a get_parser() function to accept the program's input parameters.
4. The method of claim 1, wherein processing the collected image data comprises arranging 15 to 20 high-definition cameras at the application site to track in real time and continuously collect image data, the collected image data being the input to the Python program, which processes the input image data in real time to detect climbing actions.
5. The method of claim 1, wherein judging the climbing action comprises running the defined human-body climbing-action class in the main function with specific parameters, obtaining the class's return values, and making a judgment according to the return values of three consecutive frames: if the return values are all 1, the action is determined to be a climbing action, and a warning sign or an alarm is output on the video page through opencv2 processing.
6. The method of claim 1, wherein judging the climbing action comprises processing the video data to obtain the 17 key points of each frame; calculating the angles of the left arm, right arm, left leg and right leg; calculating the shoulder distance and arm distance to obtain the shoulder-arm distance ratio; calculating the crotch distance and leg distance to obtain the crotch-leg distance ratio; after obtaining the angle and distance-ratio data, setting the arm angle threshold to 30-150° and the leg angle threshold to 45-160°; setting the shoulder-arm distance ratio threshold to 0.5-1 and the crotch-leg distance ratio threshold to 0.5-1; and judging each frame's data against the thresholds, wherein if three consecutive frames meet the threshold conditions, the climbing-action condition is satisfied and the human motion at that moment is judged to be a climbing action.
7. The method of claim 6, wherein after a climbing action is determined, a warning is displayed on the display interface to indicate that a climbing action is in progress.
CN202011026072.5A 2020-09-26 2020-09-26 Climbing identification method based on key point detection Active CN112287758B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011026072.5A CN112287758B (en) 2020-09-26 2020-09-26 Climbing identification method based on key point detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011026072.5A CN112287758B (en) 2020-09-26 2020-09-26 Climbing identification method based on key point detection

Publications (2)

Publication Number Publication Date
CN112287758A (en) 2021-01-29
CN112287758B (en) 2022-08-26

Family

ID=74421370

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011026072.5A Active CN112287758B (en) 2020-09-26 2020-09-26 Climbing identification method based on key point detection

Country Status (1)

Country Link
CN (1) CN112287758B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190362139A1 (en) * 2018-05-28 2019-11-28 Kaia Health Software GmbH Monitoring the performance of physical exercises
CN109460702A (en) * 2018-09-14 2019-03-12 华南理工大学 Passenger's abnormal behaviour recognition methods based on human skeleton sequence
US20200105014A1 (en) * 2018-09-28 2020-04-02 Wipro Limited Method and system for detecting pose of a subject in real-time
CN109919132A (en) * 2019-03-22 2019-06-21 广东省智能制造研究所 A kind of pedestrian's tumble recognition methods based on skeleton detection
CN110619285A (en) * 2019-08-29 2019-12-27 福建天晴数码有限公司 Human skeleton key point extracting method and computer readable storage medium
CN110969078A (en) * 2019-09-17 2020-04-07 博康智能信息技术有限公司 Abnormal behavior identification method based on human body key points
CN111507283A (en) * 2020-04-21 2020-08-07 浙江蓝鸽科技有限公司 Student behavior identification method and system based on classroom scene

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Gilbert Tanner: "Detectron2 - Object Detection with PyTorch", https://gilberttanner.com/blog/detectron-2-object-detection-with-pytorch, 18 October 2019, pages 1-21 *
Han, S. U. et al.: "Vision-based detection of unsafe actions of a construction worker: Case study of ladder climbing", Journal of Computing in Civil Engineering, vol. 27, no. 6, 31 December 2013, pages 635-644 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113627369A (en) * 2021-08-16 2021-11-09 南通大学 Action recognition and tracking method in auction scene

Also Published As

Publication number Publication date
CN112287758B (en) 2022-08-26

Similar Documents

Publication Publication Date Title
CN110222665B (en) Human body action recognition method in monitoring based on deep learning and attitude estimation
Wang et al. Fall detection based on dual-channel feature integration
CN104038738B (en) Intelligent monitoring system and intelligent monitoring method for extracting coordinates of human body joint
CN107423730A (en) A kind of body gait behavior active detecting identifying system and method folded based on semanteme
CN112287759A (en) Tumble detection method based on key points
CN104715493A (en) Moving body posture estimating method
CN111553229B (en) Worker action identification method and device based on three-dimensional skeleton and LSTM
CN113920326A (en) Tumble behavior identification method based on human skeleton key point detection
CN111783702A (en) Efficient pedestrian tumble detection method based on image enhancement algorithm and human body key point positioning
CN103049748B (en) Behavior monitoring method and device
CN112287758B (en) Climbing identification method based on key point detection
CN104077591A (en) Intelligent and automatic computer monitoring system
CN115346272A (en) Real-time tumble detection method based on depth image sequence
Tang et al. Intelligent video surveillance system for elderly people living alone based on ODVS
Chen et al. Vision-based skeleton motion phase to evaluate working behavior: case study of ladder climbing safety
CN113384267A (en) Fall real-time detection method, system, terminal equipment and storage medium
US11948400B2 (en) Action detection method based on human skeleton feature and storage medium
CN113111733A (en) Posture flow-based fighting behavior recognition method
CN117158955A (en) User safety intelligent monitoring method based on wearable monitoring equipment
Arisandi et al. Human Detection and Identification for Home Monitoring System
Sugimoto et al. Robust rule-based method for human activity recognition
CN115731563A (en) Method for identifying falling of remote monitoring personnel
Ariyani et al. Heuristic Application System on Pose Detection of Elderly Activity Using Machine Learning in Real-Time
Zhao et al. Abnormal behavior detection based on dynamic pedestrian centroid model: Case study on u-turn and fall-down
CN207529395U (en) A kind of body gait behavior active detecting identifying system folded based on semanteme

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant