CN113591590B - Drilling video rod-withdrawal counting method based on human body gesture recognition - Google Patents

Drilling video rod-withdrawal counting method based on human body gesture recognition

Info

Publication number
CN113591590B
CN113591590B (application CN202110755483.6A)
Authority
CN
China
Prior art keywords
human body
drill
coordinates
rod
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110755483.6A
Other languages
Chinese (zh)
Other versions
CN113591590A (en
Inventor
姚超修
吴航海
胡亚磊
谢浩
武福生
蒋泽
蒋志龙
陈佩佩
王琪
郝东波
徐晓华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tiandi Changzhou Automation Co Ltd
Changzhou Research Institute of China Coal Technology and Engineering Group Corp
Original Assignee
Tiandi Changzhou Automation Co Ltd
Changzhou Research Institute of China Coal Technology and Engineering Group Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tiandi Changzhou Automation Co Ltd, Changzhou Research Institute of China Coal Technology and Engineering Group Corp filed Critical Tiandi Changzhou Automation Co Ltd
Priority to CN202110755483.6A priority Critical patent/CN113591590B/en
Priority to PCT/CN2021/118738 priority patent/WO2023279531A1/en
Publication of CN113591590A publication Critical patent/CN113591590A/en
Application granted granted Critical
Publication of CN113591590B publication Critical patent/CN113591590B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a drilling video rod-withdrawal counting method based on human body gesture recognition, comprising the following specific steps: acquire video of the drilling face while rods are withdrawn, using a front-end mining intrinsic safety camera, thereby obtaining video data; transmit the video data to a ground server through a ring network, where the server analyzes and processes them; train an AlphaPose model with a human body key-point detection function; detect the drill rod, draw a bounding box around the detected rod, and record the relevant parameters of the box; at the same time, detect the person, detect the human skeleton key points of the detected person, and record their coordinates; the back-end server algorithm then jointly judges whether a rod has actually been taken by detecting whether a worker grabs the drill rod and whether a carrying action exists. The method detects, through human body gesture recognition, the continuous action of a worker taking down a drill rod and automatically counts the rods taken out, thereby improving the counting accuracy of intelligent video analysis of drill rods.

Description

Drilling video rod-withdrawal counting method based on human body gesture recognition
Technical Field
The invention relates to the technical field of intelligent image recognition, in particular to a drilling video rod-withdrawal counting method based on human body gesture recognition.
Background
With the popularization of underground video monitoring, intelligent image recognition is applied more and more widely in coal mines. Intelligent image recognition technology takes the digital images acquired by mine cameras, performs computational analysis through an intelligent algorithm embedded in the camera or running on a back-end server, perceives the video content, and then judges and recognizes the corresponding targets and raises the corresponding alarms according to set rules. Because intelligent video recognition uses non-contact detection, it has the advantages of a wide detection range and low detection cost, and it can work continuously for 24 hours, greatly improving working efficiency.
However, most underground coal mines still rely on surface staff manually reviewing video recordings to count drilling and rod withdrawal; the manual counting buttons provided for underground workers are seldom used, so counting cannot rely on them. Meanwhile, each video recording often lasts 1-2 hours, the underground working environment is harsh and the light is dim, the reviewer must stay focused on the recording throughout, and after long continuous work, missed detections and false detections caused by fatigue occur very easily.
Some intelligent video analysis methods have attempted to count drill rods automatically, but their results are not ideal. The main reason is that these methods usually capture a few frames before and after a worker takes a drill rod, extract the instantaneous features of the taking action through a neural network, and increment the count whenever the video shows the worker's hand touching the rod end. In actual operation, however, workers often hold, reposition or overlap drill rods without actually removing them; counting such events causes false detections.
Disclosure of Invention
The invention aims to solve the technical problems that: in order to overcome the defects in the prior art, the drilling video rod withdrawal counting method based on human body gesture recognition is provided, the continuous action that a worker takes down a drill rod is detected through human body gesture recognition, and the number of the drill rods taken out by the worker is automatically calculated, so that the counting accuracy of intelligent video analysis drill rods is improved.
The technical scheme adopted to solve the technical problems is as follows: a drilling video rod-withdrawal counting method based on human body gesture recognition comprises the following specific steps:
step 1, data acquisition: acquire video of the drilling face while rods are withdrawn, using a front-end mining intrinsic safety camera, thereby obtaining video data;
step 2, data preprocessing and label making: transmit the video data to a ground server through a ring network; the ground server analyzes and processes the video data;
step 3, train an AlphaPose model with a human body key-point detection function;
step 4, detect the drill rod, draw a bounding box around the detected rod, and record the relevant parameters of the box; at the same time, detect the person, detect the human skeleton key points of the detected person, and record the coordinates of the key points;
step 5, the back-end server algorithm jointly judges whether a rod has actually been taken by detecting whether a worker grabs the drill rod and whether a carrying action exists: judge whether the hand key-point coordinates coincide with the drill-rod box region; when they do not coincide, repeat step 4; when they coincide, judge from the motion trajectory of the whole-body key points whether a rod-carrying action exists; when no carrying action is found, the rod count remains unchanged; when a carrying action is found, the count of withdrawn rods is incremented by 1.
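As a concrete illustration of the step-5 decision, the sketch below implements the grab-then-carry judgment in Python. The function names, the simple point-in-box grab test, and the horizontal-travel carry test are illustrative assumptions, not details fixed by the method.

```python
def hand_in_box(hand_xy, box):
    """True when the hand key point falls inside the rod bounding box."""
    x, y = hand_xy
    x1, y1, x2, y2 = box
    return x1 <= x <= x2 and y1 <= y <= y2

def has_carrying_action(hand_track, min_travel=100.0):
    """Crude carry test (assumed heuristic): total horizontal travel
    of the hand key point over the recent frames."""
    if len(hand_track) < 2:
        return False
    travel = sum(abs(hand_track[i + 1][0] - hand_track[i][0])
                 for i in range(len(hand_track) - 1))
    return travel >= min_travel

def update_rod_count(rod_count, hand_xy, rod_box, hand_track):
    """One step-5 decision for the current detection cycle."""
    if not hand_in_box(hand_xy, rod_box):
        return rod_count          # no grab: step 4 repeats
    if has_carrying_action(hand_track):
        return rod_count + 1      # grab plus carry: one rod withdrawn
    return rod_count              # grab without carry: count unchanged
```

In use, the per-frame detections of step 4 feed `hand_xy`, `rod_box` and the trailing `hand_track` into this decision once per cycle.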
Further specifically, in the above technical solution, in step 4, the relevant parameters of the bounding box include the position of its center point, its length and its height.
Further specifically, in the above technical solution, in step 4, the human body key-point coordinates include the coordinates of the head, the shoulders, the hands, the knees and the feet.
Further specifically, in the above technical solution, in step 2, the labelImg tool is used to label the collected picture data, and each object class in a picture is labeled with the corresponding class label.
The beneficial effects of the invention are as follows: the drilling video rod-withdrawal counting method based on human body gesture recognition has the following advantages:
1. the number of drill rods can be counted directly from the video recordings, avoiding long and labor-intensive manual counting;
2. the number of withdrawn rods is counted accurately by detecting whether a worker grabs a drill rod and analyzing whether the human motion trajectory contains a carrying action, so the method has extremely high accuracy;
3. the method suits retrofitting the existing general-purpose cameras at underground drilling sites, requiring only intelligent video analysis on the back-end server, so the retrofit cost is low and the construction steps are simple.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments described in the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of an algorithm of the present invention;
FIG. 2 is a schematic diagram of the present invention;
FIG. 3 is a schematic diagram of the algorithm effect of the present invention;
FIG. 4 is a second schematic diagram of the algorithm effect of the present invention;
fig. 5 is a schematic diagram of the algorithm effect of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, 2, 3, 4 and 5, the drilling video rod-withdrawal counting method based on human body gesture recognition uses a front-end mining intrinsic safety camera, a ring network and a back-end server. The front-end camera provides automatic focusing, strong-light suppression, supplementary lighting and similar functions; it has a resolution of at least 4 megapixels, a protection rating of IP67, and a supply voltage range of DC 17 V to 19 V. The back-end server algorithm adopts an AlphaPose model with a human body key-point detection function to detect the drill rod and the human skeleton key points simultaneously, and then judges whether a rod is being carried by analyzing the human motion trajectory, so as to achieve an accurate count. The front-end camera is installed at the underground drilling working face and records video while drill rods are withdrawn. The data it acquires are transmitted to the surface over the 10-gigabit ring network and analyzed by the back-end server algorithm, which detects the workers in the video and obtains the number of drill rods taken, thus providing the automatic counting function.
Referring to fig. 2, the specific principle of the drilling video rod-withdrawal counting method based on human body gesture recognition is as follows: first, the underground camera acquires the rod-withdrawal video; then the industrial ring network transmits the data; finally, the back-end algorithm processes it to complete the intelligent count.
Referring to fig. 1, the drilling video rod-withdrawal counting method based on human body gesture recognition specifically comprises the following steps:
step 1, data acquisition: and acquiring video when the drilling surface withdraws from the rod by utilizing the front-end mining intrinsic safety camera, and further acquiring video data.
Step 2, data preprocessing and label making: the video data are transmitted to a ground server through the ring network, and the ground server analyzes and processes them. In the experiment, the labelImg tool is used to label the collected picture data, and each object class in a picture is labeled with the corresponding class label. For example, equipment in the mine is labeled "machine", persons are labeled "person", and the target drill rod is labeled "object". Human body key-point labels follow the keypoint annotations of the MS COCO dataset.
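The MS COCO keypoint annotations define a 17-point skeleton with no explicit "hand" or "foot" points; a common convention, assumed here, is to use the wrist and ankle points as stand-ins for the hands and feet named in step 4. A minimal Python sketch of such a mapping:

```python
# COCO's 17-key-point skeleton in its standard index order.
COCO_KEYPOINTS = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]

# Assumed grouping of the patent's five body regions onto COCO indices:
# wrists stand in for "hand", ankles for "foot".
BODY_REGIONS = {
    "head":     [0, 1, 2, 3, 4],    # nose, eyes, ears
    "shoulder": [5, 6],
    "hand":     [9, 10],            # wrists
    "knee":     [13, 14],
    "foot":     [15, 16],           # ankles
}

def region_coords(keypoints, region):
    """Pick the (x, y) coordinates for one body region from a 17-point list."""
    return [keypoints[i] for i in BODY_REGIONS[region]]
```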
Step 3, train an AlphaPose model with a human body key-point detection function.
Step 4, detect the drill rod, draw a bounding box around the detected rod, and record the relevant parameters of the box, which include the position of its center point, its length and its height; at the same time, detect the person, detect the human skeleton key points of the detected person, and record their coordinates, which include the coordinates of the head, the shoulders, the hands, the knees and the feet.
Step 5, the back-end server algorithm jointly judges whether a rod has actually been taken, by detecting whether a worker grabs the drill rod and whether a carrying action exists, so as to avoid missed and false detections: judge whether the hand key-point coordinates coincide with the drill-rod box region; when they do not coincide, repeat step 4; when they coincide, judge from the motion trajectory of the whole-body key points whether a rod-carrying action exists; when no carrying action is found, the rod count remains unchanged; when a carrying action is found, the count of withdrawn rods is incremented by 1.
For example, after the algorithm detects a drill rod, it outputs bounding-box coordinates (X1, Y1, X2, Y2), where (X1, Y1) is the upper-left corner of the object box and (X2, Y2) is the lower-right corner. From these, the center point of the box and its length can be calculated. Similarly, the human body key points of a person are a group of (x, y) position coordinates, and detection can proceed through logical judgments among these coordinates. "Coincidence", wherever mentioned herein, means that the IoU (intersection over union) between the coordinates or bounding boxes exceeds a threshold.
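A minimal Python sketch of these box computations and the IoU-based coincidence test; the 0.1 threshold is an illustrative assumption, not a value given in the text:

```python
def box_center_and_size(box):
    """Center point, length and height from (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2), x2 - x1, y2 - y1

def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def coincides(hand_box, rod_box, threshold=0.1):
    """'Coincidence' in the text's sense: IoU above a threshold."""
    return iou(hand_box, rod_box) > threshold
```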
The details of the back-end server algorithm are divided into the following parts:
(1) STN (spatial transformer network): STN stands for Spatial Transformer Network. For an irregular human body image input, an accurate human frame is obtained after the STN operation, which supplies high-quality candidate regions; that is, it anchors the picture frame to the human picture data in the video stream. Since the persons in the video stream are constantly moving, the decoded human body picture data is distorted, i.e. irregular in shape. The invention therefore applies an STN to this picture data, allowing the neural network to learn how to spatially transform the input image so as to enhance the geometric invariance of the model.
The STN performs a 2D affine transformation, defined as follows:

$$\begin{pmatrix} x_i^s \\ y_i^s \end{pmatrix} = \begin{bmatrix} \theta_1 & \theta_2 & \theta_3 \end{bmatrix} \begin{pmatrix} x_i^t \\ y_i^t \\ 1 \end{pmatrix} \tag{1}$$

where $i$ denotes the $i$-th coordinate point in the picture data; the superscript $s$ marks the new (transformed) coordinates and $t$ the original coordinates; $x_i^s$ and $y_i^s$ are the abscissa and ordinate of a pixel in the transformed person image data; $x_i^t$ and $y_i^t$ are the abscissa and ordinate of the same pixel in the original person image data before transformation, with the constant 1 appended as the homogeneous coordinate for the 2D affine transformation; and $\theta_1$, $\theta_2$ and $\theta_3$ are the transformation parameters.
(2) SPPE (single-person pose estimation): SPPE stands for Single-Person Pose Estimator; it estimates the pose within each cropped single-person region.
(3) SDTN (spatial de-transformer network): the SDTN maps the estimated pose back to the original image coordinates. SDTN stands for Spatial De-Transformer Network.
The definition of the SDTN is as follows:

$$\begin{pmatrix} x_i^t \\ y_i^t \end{pmatrix} = \begin{bmatrix} \gamma_1 & \gamma_2 & \gamma_3 \end{bmatrix} \begin{pmatrix} x_i^s \\ y_i^s \\ 1 \end{pmatrix} \tag{2}$$

where $\gamma_1$, $\gamma_2$ and $\gamma_3$ are the transformation parameters of the inverse mapping; their relationship with $\theta_1$, $\theta_2$ and $\theta_3$ is:

$$\begin{bmatrix} \gamma_1 & \gamma_2 \end{bmatrix} = \begin{bmatrix} \theta_1 & \theta_2 \end{bmatrix}^{-1} \tag{3}$$

$$\gamma_3 = -1 \times \begin{bmatrix} \gamma_1 & \gamma_2 \end{bmatrix} \theta_3 \tag{4}$$
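Equations (1) to (4) pair a forward affine warp with its inverse. The NumPy sketch below recovers the SDTN parameters from an illustrative STN matrix (the numeric values are assumptions, not from the patent) and checks that a warped point maps back to its original coordinates:

```python
import numpy as np

# theta is the 2x3 STN affine matrix [theta1 | theta2 | theta3];
# illustrative values only.
theta = np.array([[1.2, 0.1, 5.0],
                  [0.0, 0.8, -3.0]])

def sdtn_params(theta):
    """Equations (3) and (4): invert the 2x2 part, then the translation."""
    A, t = theta[:, :2], theta[:, 2]
    A_inv = np.linalg.inv(A)      # [gamma1 | gamma2] = [theta1 | theta2]^-1
    g3 = -A_inv @ t               # gamma3 = -[gamma1 | gamma2] theta3
    return np.hstack([A_inv, g3[:, None]])

def warp(M, xy):
    """Apply a 2x3 affine matrix to one (x, y) point, as in (1) and (2)."""
    return M @ np.array([xy[0], xy[1], 1.0])

gamma = sdtn_params(theta)
p = warp(theta, (10.0, 4.0))      # STN: original -> transformed coordinates
back = warp(gamma, tuple(p))      # SDTN: map the pose point back
```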
(4) Pose-NMS: eliminating the additional estimated pose. Pose-NMS is known in full as parametric Pose nonmaximum suppression, and Chinese means that the parameter Pose is not maximally suppressed, and here it can be understood that the additional estimated Pose is eliminated.
Definition: let the $i$-th pose consist of $m$ joints, where $i$ and $m$ are positive integers greater than or equal to 1. The $i$-th pose is then defined as the set

$$P_i = \{ \langle k_i^1, c_i^1 \rangle, \ldots, \langle k_i^m, c_i^m \rangle \}$$

where $k_i^j$ is the location of the $j$-th joint anchor point and $c_i^j$ is its score, i.e. the pose confidence at that anchor point.
The elimination process is as follows: the pose with the highest score is taken as the reference, and poses close to the reference pose are repeatedly eliminated until only a single pose remains per person. Elimination criterion: the criterion used for this repeated elimination is:
$$f(P_i, P_j \mid \Lambda, \eta) = \mathbb{1}\left[ d(P_i, P_j \mid \Lambda, \lambda) \le \eta \right] \tag{5}$$

where $f$ is the elimination criterion: when the output is 1, the current pose $P_i$ is deleted, otherwise it is kept; $P_i$ and $P_j$ are two different poses; $\Lambda$ is the parameter set of the pose distance metric; $\eta$ is a threshold; $d$ is the pose distance metric and $\lambda$ is a weight balancing the pose distance and the spatial distance. The metric $d(\cdot)$ combines a pose distance and a spatial distance; if $d(\cdot)$ is not greater than $\eta$, the output of $f(\cdot)$ is 1, indicating that $P_i$ is too similar to the reference pose $P_j$ and must therefore be eliminated. The metric is defined as follows:
$$d(P_i, P_j \mid \Lambda) = K_{\mathrm{Sim}}(P_i, P_j \mid \sigma_1) + \lambda H_{\mathrm{Sim}}(P_i, P_j \mid \sigma_2) \tag{6}$$

where $K_{\mathrm{Sim}}$ is a soft matching function measuring the similarity between the features of different poses, $H_{\mathrm{Sim}}$ measures the spatial similarity of the joint locations, and $\sigma_1$ and $\sigma_2$ are the parameters of the two similarity functions; $\Lambda = \{\sigma_1, \sigma_2, \lambda\}$.
The pose distance is used to eliminate poses that are too close and too similar to other poses. Let $B_i$ denote the bounding box of pose $P_i$, i.e. the position information of its selection frame. The soft matching function (the score similarity between the features of different poses) is defined as:

$$K_{\mathrm{Sim}}(P_i, P_j \mid \sigma_1) = \begin{cases} \displaystyle\sum_n \tanh\frac{c_i^n}{\sigma_1} \cdot \tanh\frac{c_j^n}{\sigma_1}, & \text{if } k_j^n \text{ is within } B(k_i^n) \\ 0, & \text{otherwise} \end{cases} \tag{7}$$

where $n$ indexes the corresponding joints of the two poses $P_i$ and $P_j$; "is within" means that the term for joint $n$ is counted only when $k_j^n$ falls inside the box $B(k_i^n)$, and otherwise the match is cleared to zero; $B(k_i^n)$ is a box centered at $k_i^n$ whose width and height are each $1/10$ of the dimensions of the original box $B_i$ of the pose.
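The elimination loop around equations (5) to (7) can be sketched as a greedy non-maximum suppression. In the sketch below, a toy mean-joint-distance metric stands in for the full $d(\cdot)$ of equation (6) (the real metric combines the soft matching and spatial terms with learned parameters), and the threshold value is an illustrative assumption:

```python
import math

def mean_joint_distance(p, q):
    """Toy stand-in for the pose distance d(.) of equation (5): the mean
    Euclidean distance between corresponding (x, y, score) joints."""
    return sum(math.hypot(xi - xj, yi - yj)
               for (xi, yi, _), (xj, yj, _) in zip(p, q)) / len(p)

def pose_nms(poses, d=mean_joint_distance, eta=5.0):
    """Greedy elimination following equation (5): take the highest-scoring
    pose as the reference, delete every remaining pose whose distance to
    it is at most eta, and repeat on the rest."""
    remaining = sorted(poses, key=lambda p: sum(c for _, _, c in p),
                       reverse=True)
    kept = []
    while remaining:
        ref = remaining.pop(0)
        kept.append(ref)
        remaining = [p for p in remaining if d(p, ref) > eta]
    return kept
```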
After the specific human body key points have been located, whether a carrying action exists is judged from the motion trajectories of those key points.
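A minimal Python sketch of such a trajectory test; the net-horizontal-travel heuristic and both threshold values are illustrative assumptions, not values from the invention:

```python
def carrying_action(tracks, min_travel=150.0, min_frames=10):
    """Judge a carrying action from per-key-point trajectories: every
    tracked point must show a sustained net horizontal displacement in
    the same direction.

    tracks: {keypoint_name: [(x, y), ...]} coordinates per frame.
    """
    travels = []
    for pts in tracks.values():
        if len(pts) < min_frames:
            return False                       # too short to judge
        travels.append(pts[-1][0] - pts[0][0])  # net horizontal motion
    same_direction = all(t > 0 for t in travels) or all(t < 0 for t in travels)
    return same_direction and min(abs(t) for t in travels) >= min_travel
```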
The invention intelligently analyzes video of drilling and rod withdrawal: through human body gesture recognition it detects the continuous action of a worker taking down a drill rod and automatically counts the rods taken out, rather than merely recognizing a 2-3 frame snapshot of a worker touching a rod, thereby improving the counting accuracy of intelligent video analysis of drill rods. The advantages of the invention are: (1) the number of drill rods can be counted directly from the video recordings, avoiding long and labor-intensive manual counting; (2) the number of withdrawn rods is counted accurately by detecting whether a worker grabs a drill rod and analyzing whether the human motion trajectory contains a carrying action, so the method has extremely high accuracy; (3) the method suits retrofitting the existing general-purpose cameras at underground drilling sites, requiring only intelligent video analysis on the back-end server, so the retrofit cost is low and the construction steps are simple.
The foregoing is only a preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art, who is within the scope of the present invention, should make equivalent substitutions or modifications according to the technical scheme of the present invention and the inventive concept thereof, and should be covered by the scope of the present invention.

Claims (4)

1. A drilling video rod-withdrawing counting method based on human body gesture recognition is characterized by comprising the following specific steps:
step 1, data acquisition: acquiring video when a drilling surface withdraws from a rod by using a front-end mining intrinsic safety camera, and further acquiring video data;
step 2, data preprocessing and label making: the video data are transmitted to a ground server through a ring network, and the ground server analyzes and processes the video data;
step 3, training an AlphaPose model with a human body key point detection function;
step 4, detecting the drill rod, selecting the detected drill rod in a frame mode, and recording relevant parameters of the frame; meanwhile, detecting a person, detecting key points of human bones of the detected person, and recording coordinates of the key points of the human body;
step 5, the back-end server algorithm jointly judges the actually obtained drill rod by detecting whether a worker grabs the drill rod or not and whether a carrying action exists or not: judging whether the hand key point coordinates are coincident with the drill rod frame selection area or not, and repeating the step 4 when the hand key point coordinates are not coincident with the drill rod frame selection area; when the coordinates of the key points of the hand are coincident with the selected area of the drill rod frame, judging whether the action of carrying the drill rod exists or not through the motion track of the key points of the whole body; when judging that the action of transporting the drill rods does not exist through the whole body key point movement track, the drill rod number is kept unchanged; and when judging that the action of carrying the drill rods exists through the whole body key point movement track, adding 1 on the basis of the number of the drill rods which are taken.
2. The method for counting the number of the drill video rod withdrawal based on human gesture recognition according to claim 1 is characterized in that: in step 4, the relevant parameters of the frame include the location of the center point of the frame, the length of the frame, and the height of the frame.
3. The method for counting the number of the drill video rod withdrawal based on human gesture recognition according to claim 1 is characterized in that: in step 4, the coordinates of the key points of the human body include the coordinates of the head of the human body, the coordinates of the shoulders of the human body, the coordinates of the hands of the human body, the coordinates of the knees of the human body and the coordinates of the feet of the human body.
4. The method for counting the number of the drill video rod withdrawal based on human gesture recognition according to claim 1 is characterized in that: in step 2, labeling the acquired picture data by using a labelImg tool, and labeling the same picture in each type with a label in a corresponding type.
CN202110755483.6A 2021-07-05 2021-07-05 Drilling video rod-withdrawal counting method based on human body gesture recognition Active CN113591590B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110755483.6A CN113591590B (en) 2021-07-05 2021-07-05 Drilling video rod-withdrawal counting method based on human body gesture recognition
PCT/CN2021/118738 WO2023279531A1 (en) 2021-07-05 2021-09-16 Method for counting drilling pipe withdrawals in a drilling video on basis of human body pose recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110755483.6A CN113591590B (en) 2021-07-05 2021-07-05 Drilling video rod-withdrawal counting method based on human body gesture recognition

Publications (2)

Publication Number Publication Date
CN113591590A CN113591590A (en) 2021-11-02
CN113591590B true CN113591590B (en) 2024-02-23

Family

ID=78245846

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110755483.6A Active CN113591590B (en) 2021-07-05 2021-07-05 Drilling video rod-withdrawal counting method based on human body gesture recognition

Country Status (2)

Country Link
CN (1) CN113591590B (en)
WO (1) WO2023279531A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111814601A (en) * 2020-06-23 2020-10-23 国网上海市电力公司 Video analysis method combining target detection and human body posture estimation
CN112116633A (en) * 2020-09-25 2020-12-22 深圳爱莫科技有限公司 Mine drilling counting method
CN112560741A (en) * 2020-12-23 2021-03-26 中国石油大学(华东) Safety wearing detection method based on human body key points
CN112580609A (en) * 2021-01-26 2021-03-30 南京北路智控科技股份有限公司 Coal mine drill rod counting method

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8237574B2 (en) * 2008-06-05 2012-08-07 Hawkeye Systems, Inc. Above-water monitoring of swimming pools
US11378387B2 (en) * 2014-11-12 2022-07-05 Helmerich & Payne Technologies, Llc System and method for locating, measuring, counting, and aiding in the handling of drill pipes
CN110147743B (en) * 2019-05-08 2021-08-06 中国石油大学(华东) Real-time online pedestrian analysis and counting system and method under complex scene
CN110725711B (en) * 2019-10-29 2023-08-29 南京北路智控科技股份有限公司 Video-based drilling system and auxiliary drilling test method
CN112412440A (en) * 2020-10-23 2021-02-26 中海油能源发展股份有限公司 Method for detecting early kick in drilling period
CN112528960B (en) * 2020-12-29 2023-07-14 之江实验室 Smoking behavior detection method based on human body posture estimation and image classification

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111814601A (en) * 2020-06-23 2020-10-23 国网上海市电力公司 Video analysis method combining target detection and human body posture estimation
CN112116633A (en) * 2020-09-25 2020-12-22 深圳爱莫科技有限公司 Mine drilling counting method
CN112560741A (en) * 2020-12-23 2021-03-26 中国石油大学(华东) Safety wearing detection method based on human body key points
CN112580609A (en) * 2021-01-26 2021-03-30 南京北路智控科技股份有限公司 Coal mine drill rod counting method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Drill Pipe Counting Method Based on Scale Space and Siamese Network; Lihong Dong; Proceedings of the 2020 3rd International Conference on Algorithms, Computing and Artificial Intelligence; full text *
Research and Implementation of a Machine-Vision-Based Drill Pipe Counting Method for Underground Coal Mines; Wang Jie; China Master's Theses Full-text Database (Engineering Science and Technology I); full text *

Also Published As

Publication number Publication date
CN113591590A (en) 2021-11-02
WO2023279531A1 (en) 2023-01-12

Similar Documents

Publication Publication Date Title
Park et al. Continuous localization of construction workers via integration of detection and tracking
CN111339883A (en) Method for identifying and detecting abnormal behaviors in transformer substation based on artificial intelligence in complex scene
CN102800126A (en) Method for recovering real-time three-dimensional body posture based on multimodal fusion
CN109905675A (en) A kind of mine personnel monitoring system based on computer vision and method
CN110991315A (en) Method for detecting wearing state of safety helmet in real time based on deep learning
CN107977639A (en) A kind of face definition judgment method
CN110097574A (en) A kind of real-time pose estimation method of known rigid body
CN113920326A (en) Tumble behavior identification method based on human skeleton key point detection
CN104700088A (en) Gesture track recognition method based on monocular vision motion shooting
CN111898566B (en) Attitude estimation method, attitude estimation device, electronic equipment and storage medium
Alksasbeh et al. Smart hand gestures recognition using K-NN based algorithm for video annotation purposes
CN102156994B (en) Joint positioning method for single-view unmarked human motion tracking
CN111178201A (en) Human body sectional type tracking method based on OpenPose posture detection
CN113537019A (en) Detection method for identifying wearing of safety helmet of transformer substation personnel based on key points
CN113591590B (en) Drilling video rod-withdrawal counting method based on human body gesture recognition
CN112883830B (en) Real-time automatic counting method for drill rods
CN114170686A (en) Elbow bending behavior detection method based on human body key points
CN113076825A (en) Transformer substation worker climbing safety monitoring method
CN116311082B (en) Wearing detection method and system based on matching of key parts and images
CN105809719A (en) Object tracking method based on pixel multi-coding-table matching
RU2802411C1 (en) Method for counting rod removal on drilling video recordings based on human body gesture recognition
CN105678321B (en) A kind of estimation method of human posture based on Fusion Model
CN113378702A (en) Multi-feature fusion fatigue monitoring and identifying method for pole climbing operation
Chen et al. A Human Activity Recognition Approach Based on Skeleton Extraction and Image Reconstruction
CN104866825B (en) A kind of sign language video frame sequence classification method based on Hu square

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant