CN113255509A - Building site dangerous behavior monitoring method based on Yolov3 and OpenPose - Google Patents
- Publication number
- CN113255509A (application CN202110552349.6A)
- Authority
- CN
- China
- Prior art keywords
- monitoring
- openpose
- yolov3
- construction site
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000012544 monitoring process Methods 0.000 title claims abstract description 47
- 238000000034 method Methods 0.000 title claims abstract description 21
- 230000006399 behavior Effects 0.000 claims abstract description 42
- 238000010276 construction Methods 0.000 claims abstract description 31
- 238000001514 detection method Methods 0.000 claims abstract description 19
- 238000003708 edge detection Methods 0.000 claims abstract description 13
- 238000003709 image segmentation Methods 0.000 claims abstract description 7
- 238000012549 training Methods 0.000 claims description 15
- 210000003423 ankle Anatomy 0.000 claims description 9
- 210000003127 knee Anatomy 0.000 claims description 9
- 238000012545 processing Methods 0.000 claims description 9
- 210000000707 wrist Anatomy 0.000 claims description 9
- 230000004044 response Effects 0.000 claims description 6
- 230000009194 climbing Effects 0.000 claims description 4
- 230000003213 activating effect Effects 0.000 claims description 3
- 210000001217 buttock Anatomy 0.000 claims description 3
- 210000001513 elbow Anatomy 0.000 claims description 3
- 210000002683 foot Anatomy 0.000 claims description 3
- 238000003064 k means clustering Methods 0.000 claims description 3
- 210000004934 left little finger Anatomy 0.000 claims description 3
- 210000004936 left thumb Anatomy 0.000 claims description 3
- 210000004933 right little finger Anatomy 0.000 claims description 3
- 210000004935 right thumb Anatomy 0.000 claims description 3
- 230000001629 suppression Effects 0.000 claims description 3
- 230000009466 transformation Effects 0.000 claims description 3
- 238000010586 diagram Methods 0.000 description 2
- 238000011161 development Methods 0.000 description 1
- 230000018109 developmental process Effects 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 230000008569 process Effects 0.000 description 1
- 238000003672 processing method Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/181—Segmentation; Edge detection involving edge growing; involving edge linking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/23—Recognition of whole body movements, e.g. for sport training
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Health & Medical Sciences (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Health & Medical Sciences (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Human Computer Interaction (AREA)
- Probability & Statistics with Applications (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a construction site dangerous behavior monitoring method based on Yolov3 and OpenPose, which comprises the following steps. Step S1: perform edge-detection-based image segmentation on images from the monitoring camera, and manually calibrate the safe and dangerous regions to obtain a point set for the safe region. Step S2: construct an image data set with human-presence features and train a Yolov3 model to obtain a human-presence detection model. Step S3: if the human-presence detection model detects a person in a video frame of the monitored area, input the frame into OpenPose for the next stage of identification. Step S4: OpenPose performs real-time multi-person pose estimation on the input video frame and outputs the set of body key points of each person in the picture. Step S5: judge whether the body key points lie within the safe region; if a key point is outside the safe region, judge that a site worker is engaging in dangerous behavior. By exploiting the strong real-time performance and accuracy of Yolov3 and OpenPose, a construction site dangerous behavior monitoring system can be established.
Description
Technical Field
The invention relates to the technical field of image recognition, and in particular to a construction site dangerous behavior monitoring method based on Yolov3 and OpenPose.
Background
With the vigorous development of engineering construction, the number and scale of construction sites keep growing, and the difficulty of managing them has risen accordingly. Construction sites cover large areas, involve a wide range of activities, and have complex environments, so traditional manual supervision alone no longer meets actual safety management needs. In some high-risk construction areas, safety management problems caused by a lack of supervision occur frequently and threaten the personal safety of construction workers. To manage construction sites more effectively, intelligent and automated safety monitoring has become an urgent requirement. How to apply intelligent safety monitoring technology to construction site safety management more effectively, and thereby better guarantee safety throughout the construction process, is a problem to be solved urgently.
Disclosure of Invention
The invention provides a construction site dangerous behavior monitoring method based on Yolov3 and OpenPose, with which a dangerous behavior monitoring system can be established by exploiting the strong real-time performance and accuracy of Yolov3 and OpenPose.
The invention adopts the following technical scheme.
A construction site dangerous behavior monitoring method based on Yolov3 and OpenPose identifies dangerous behaviors on site by analyzing the monitoring videos of monitoring cameras, and comprises the following steps:
Step S1: obtain scene images from the monitoring camera's video of the monitored area, perform edge-detection-based image segmentation on the images, and manually calibrate the safe and dangerous regions to obtain a point set for the safe region;
Step S2: construct an image data set with human-presence features and train a Yolov3 model with it to obtain a human-presence detection model;
Step S3: input the monitoring video of the monitored area into the human-presence detection model; if a person is detected in a video frame, input that frame into OpenPose for the next stage of identification;
Step S4: OpenPose performs real-time multi-person pose estimation on the input video frame and outputs the set of body key points of each person in the picture;
Step S5: compare the body key point set with the safe-region point set to judge whether the key points lie within the safe region; if a key point is outside the safe region, judge that the site worker is engaging in dangerous behavior.
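The five steps above can be sketched end to end as follows. This is a minimal illustration, not the patent's implementation: `detect_person` and `estimate_keypoints` are hypothetical stand-ins wired to trivial stubs so the control flow runs, and a simple bounding-box test stands in for the calibrated safe-region polygon of step S1.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def detect_person(frame) -> bool:
    """Stand-in for the trained Yolov3 human-presence detector (step S3)."""
    return bool(frame.get("people"))

def estimate_keypoints(frame) -> List[List[Point]]:
    """Stand-in for OpenPose multi-person key point output (step S4)."""
    return frame.get("people", [])

def in_safe_region(p: Point, safe: List[Point]) -> bool:
    """Simplified axis-aligned bounding-box check; a real system would
    test against the calibrated polygon from step S1."""
    xs, ys = zip(*safe)
    return min(xs) <= p[0] <= max(xs) and min(ys) <= p[1] <= max(ys)

def monitor_frame(frame, safe_region: List[Point]) -> bool:
    """Return True if dangerous behavior is flagged (step S5)."""
    if not detect_person(frame):              # step S3: skip frames with no people
        return False
    for person in estimate_keypoints(frame):  # step S4: per-person key points
        if any(not in_safe_region(kp, safe_region) for kp in person):
            return True                       # a key point left the safe region
    return False
```

A frame is modeled here as a plain dict holding precomputed keypoints, purely so the pipeline logic can be exercised without camera input.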
In step S1, the edge-detection-based image segmentation comprises the following steps:
Step A1: apply a Gaussian filter to reduce the influence of noise on the edge detection result;
Step A2: convert the image to grayscale;
Step A3: compute the magnitude and direction of the image gradient with a differential operator and estimate the image edges;
Step A4: apply non-maximum suppression to the gradient magnitude to obtain a more precise response at the edges;
Step A5: apply double-threshold detection to the binarized image to eliminate spurious responses left by edge detection;
Step A6: use an edge-linking algorithm to join the discrete edge pixel groups into contours.
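Steps A1-A5 correspond to the classic Canny pipeline. Below is a compact NumPy sketch under illustrative assumptions: the kernel size, sigma and thresholds are not specified by the patent, step A6 (edge linking) is omitted, and the simplified version skips full non-maximum suppression.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.4):
    """Normalized 2-D Gaussian kernel for step A1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def convolve2d(img, kernel):
    """Naive same-size 2-D convolution with edge padding."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def canny_edges(gray, low=20, high=60):
    """Steps A1, A3 and A5 on an already-grayscale image (step A2)."""
    # A1: Gaussian filtering suppresses noise
    smooth = convolve2d(gray.astype(float), gaussian_kernel())
    # A3: Sobel differential operators give the gradient magnitude
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    gx = convolve2d(smooth, kx)
    gy = convolve2d(smooth, kx.T)
    mag = np.hypot(gx, gy)
    # A5: double thresholding separates strong and weak edge responses
    strong = mag >= high
    weak = (mag >= low) & ~strong
    return strong, weak
```

In practice `cv2.Canny` performs steps A1-A5, including true non-maximum suppression and hysteresis, in a single call.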
In step S2, training the Yolov3 model comprises the following steps:
Step B1: construct a data-augmented data set. First build a real worker data set by selecting as many pictures of workers in high-risk near-edge scenes as training requires; then augment it by expanding the real worker image data set with image processing operations, including affine transformations, to generate a sufficiently large data set;
Step B2: cluster the target prior boxes with the K-means algorithm. Determine the prior-box parameters for K-means clustering, sort the newly clustered prior boxes by area from small to large, and divide them evenly among the feature maps of different scales;
Step B3: construct the Yolov3 human-presence detection model. Specifically, build the model with Darknet-53 as the backbone network and the Leaky ReLU function as the activation function. Training stops under either of two conditions: after a fixed number of iterations, or when the loss converges.
The high-risk near-edge scenes include workers at the perimeter of balconies without handrails, the perimeter of floors without external scaffold protection, the perimeter of frame-construction floors, both side edges of up and down ramps and chutes, and the side edges of unloading platforms.
When the model is built with Darknet-53 as the backbone network, the three feature layers of different scales are 13 × 13, 26 × 26 and 52 × 52; three prior boxes are set for each downsampling scale, so that dimension clustering yields nine prior boxes in total.
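Step B2's dimension clustering can be sketched as K-means over labelled box sizes using the 1 - IoU distance commonly used for YOLO anchors. The box data in the test is synthetic; a real run would use the (w, h) annotations of the worker data set, with k=9 and the result split three-per-scale.

```python
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between (w, h) pairs, treating all boxes as sharing one corner."""
    inter = (np.minimum(boxes[:, None, 0], anchors[None, :, 0]) *
             np.minimum(boxes[:, None, 1], anchors[None, :, 1]))
    area_b = (boxes[:, 0] * boxes[:, 1])[:, None]
    area_a = (anchors[:, 0] * anchors[:, 1])[None, :]
    return inter / (area_b + area_a - inter)

def kmeans_anchors(boxes, k=9, iters=100, seed=0):
    """Cluster box sizes with 1 - IoU as the distance (step B2)."""
    boxes = np.asarray(boxes, dtype=float)
    rng = np.random.default_rng(seed)
    # initialize anchors from k random boxes
    anchors = boxes[rng.choice(len(boxes), size=k, replace=False)].copy()
    for _ in range(iters):
        # assign each box to its best-matching (highest-IoU) anchor
        assign = np.argmax(iou_wh(boxes, anchors), axis=1)
        for j in range(k):
            members = boxes[assign == j]
            if len(members):
                anchors[j] = members.mean(axis=0)
    # sort by area, small to large, so the smallest anchors can be
    # assigned to the finest (52 x 52) feature map
    return anchors[np.argsort(anchors[:, 0] * anchors[:, 1])]
```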
In step S4, the body key point set of each person in the video frame is identified with the BODY_25 model.
The BODY_25 model comprises key points for the left eye, right eye, nose, left ear, right ear, neck, left shoulder, right shoulder, left elbow, right elbow, left wrist, right wrist, mid hip, left hip, right hip, left knee, right knee, left ankle, right ankle, left big toe, right big toe, left little toe, right little toe, left heel and right heel.
If the dangerous behaviors to be monitored include workers illegally approaching edges, the key points at the feet are added to the body key point set.
If the dangerous behaviors to be monitored include workers illegally leaning over or climbing, the key points of the wrists, elbows, hips, knees and ankles are added to the body key point set.
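For reference, OpenPose's BODY_25 output is a flat array of 25 (x, y, confidence) triples per person. The index layout below follows the OpenPose output-format documentation (the "thumb"/"little finger" wording in the translated text corresponds to the big-toe/little-toe points), and the two behavior-specific index subsets mirror the additions described above; they are illustrative groupings, not names from the patent.

```python
# BODY_25 keypoint index -> name, per the OpenPose output-format docs.
BODY_25 = {
    0: "Nose", 1: "Neck", 2: "RShoulder", 3: "RElbow", 4: "RWrist",
    5: "LShoulder", 6: "LElbow", 7: "LWrist", 8: "MidHip",
    9: "RHip", 10: "RKnee", 11: "RAnkle", 12: "LHip", 13: "LKnee",
    14: "LAnkle", 15: "REye", 16: "LEye", 17: "REar", 18: "LEar",
    19: "LBigToe", 20: "LSmallToe", 21: "LHeel",
    22: "RBigToe", 23: "RSmallToe", 24: "RHeel",
}

# Hypothetical subsets matching the behavior-specific additions above:
# foot points for edge-approach monitoring, and wrist/elbow/hip/knee/ankle
# points for leaning and climbing.
EDGE_APPROACH = [19, 20, 21, 22, 23, 24]
LEAN_CLIMB = [3, 4, 6, 7, 8, 9, 10, 11, 12, 13, 14]
```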
In step S5, if it is judged that a site worker is engaging in dangerous behavior, an alarm is issued.
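Step S5 reduces to a point-in-polygon test of each key point against the safe-region point set from step S1. A minimal ray-casting sketch, with illustrative coordinates:

```python
from typing import List, Tuple

Point = Tuple[float, float]

def in_polygon(p: Point, poly: List[Point]) -> bool:
    """Ray casting: count crossings of a horizontal ray from p with the
    polygon's edges; an odd count means p is inside."""
    x, y = p
    inside = False
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        if (y1 > y) != (y2 > y):  # edge straddles the ray's height
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside
    return inside

def dangerous(keypoints: List[Point], safe_region: List[Point]) -> bool:
    """Flag dangerous behavior, and hence an alarm, if any tracked
    key point falls outside the calibrated safe region (step S5)."""
    return any(not in_polygon(kp, safe_region) for kp in keypoints)
```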
The method has the following advantages: the safe region is calibrated by digital image processing; video from fixed monitoring cameras in key control areas is fed into the trained Yolov3 model for human-presence detection; frames containing people are fed into OpenPose for body key point identification; and whether a worker is engaging in dangerous behavior is judged by checking whether the body key points lie within the safe region. The method offers good real-time performance and accuracy, and can intelligently detect dangerous actions on a construction site, such as illegally crossing into, leaning over, or climbing in high-risk areas, so that dangerous behaviors trigger timely alarms.
Drawings
The invention is described in further detail below with reference to the following figures and detailed description:
FIG. 1 is a schematic workflow diagram of an embodiment of the present invention;
FIG. 2 is a schematic diagram of the human body key point detection of the monitoring image according to the present invention.
Detailed Description
As shown in the figures, a construction site dangerous behavior monitoring method based on Yolov3 and OpenPose identifies dangerous behaviors on site by analyzing the monitoring videos of monitoring cameras, and comprises the following steps:
Step S1: obtain scene images from the monitoring camera's video of the monitored area, perform edge-detection-based image segmentation on the images, and manually calibrate the safe and dangerous regions to obtain a point set for the safe region;
Step S2: construct an image data set with human-presence features and train a Yolov3 model with it to obtain a human-presence detection model;
Step S3: input the monitoring video of the monitored area into the human-presence detection model; if a person is detected in a video frame, input that frame into OpenPose for the next stage of identification;
Step S4: OpenPose performs real-time multi-person pose estimation on the input video frame and outputs the set of body key points of each person in the picture;
Step S5: compare the body key point set with the safe-region point set to judge whether the key points lie within the safe region; if a key point is outside the safe region, judge that the site worker is engaging in dangerous behavior.
In step S1, the edge-detection-based image segmentation comprises the following steps:
Step A1: apply a Gaussian filter to reduce the influence of noise on the edge detection result;
Step A2: convert the image to grayscale;
Step A3: compute the magnitude and direction of the image gradient with a differential operator and estimate the image edges;
Step A4: apply non-maximum suppression to the gradient magnitude to obtain a more precise response at the edges;
Step A5: apply double-threshold detection to the binarized image to eliminate spurious responses left by edge detection;
Step A6: use an edge-linking algorithm to join the discrete edge pixel groups into contours.
In step S2, training the Yolov3 model comprises the following steps:
Step B1: construct a data-augmented data set. First build a real worker data set by selecting as many pictures of workers in high-risk near-edge scenes as training requires; then augment it by expanding the real worker image data set with image processing operations, including affine transformations, to generate a sufficiently large data set;
Step B2: cluster the target prior boxes with the K-means algorithm. Determine the prior-box parameters for K-means clustering, sort the newly clustered prior boxes by area from small to large, and divide them evenly among the feature maps of different scales;
Step B3: construct the Yolov3 human-presence detection model. Specifically, build the model with Darknet-53 as the backbone network and the Leaky ReLU function as the activation function. Training stops under either of two conditions: after a fixed number of iterations, or when the loss converges.
The high-risk near-edge scenes include workers at the perimeter of balconies without handrails, the perimeter of floors without external scaffold protection, the perimeter of frame-construction floors, both side edges of up and down ramps and chutes, and the side edges of unloading platforms.
When the model is built with Darknet-53 as the backbone network, the three feature layers of different scales are 13 × 13, 26 × 26 and 52 × 52; three prior boxes are set for each downsampling scale, so that dimension clustering yields nine prior boxes in total.
In step S4, the body key point set of each person in the video frame is identified with the BODY_25 model.
The BODY_25 model comprises key points for the left eye, right eye, nose, left ear, right ear, neck, left shoulder, right shoulder, left elbow, right elbow, left wrist, right wrist, mid hip, left hip, right hip, left knee, right knee, left ankle, right ankle, left big toe, right big toe, left little toe, right little toe, left heel and right heel.
If the dangerous behaviors to be monitored include workers illegally approaching edges, the key points at the feet are added to the body key point set.
If the dangerous behaviors to be monitored include workers illegally leaning over or climbing, the key points of the wrists, elbows, hips, knees and ankles are added to the body key point set.
In step S5, if it is judged that a site worker is engaging in dangerous behavior, an alarm is issued.
Claims (10)
1. A construction site dangerous behavior monitoring method based on Yolov3 and OpenPose, which identifies dangerous behaviors on site by analyzing the monitoring videos of monitoring cameras, characterized in that the monitoring method comprises the following steps:
Step S1: obtain scene images from the monitoring camera's video of the monitored area, perform edge-detection-based image segmentation on the images, and manually calibrate the safe and dangerous regions to obtain a point set for the safe region;
Step S2: construct an image data set with human-presence features and train a Yolov3 model with it to obtain a human-presence detection model;
Step S3: input the monitoring video of the monitored area into the human-presence detection model; if a person is detected in a video frame, input that frame into OpenPose for the next stage of identification;
Step S4: OpenPose performs real-time multi-person pose estimation on the input video frame and outputs the set of body key points of each person in the picture;
Step S5: compare the body key point set with the safe-region point set to judge whether the key points lie within the safe region; if a key point is outside the safe region, judge that the site worker is engaging in dangerous behavior.
2. The construction site dangerous behavior monitoring method based on Yolov3 and OpenPose according to claim 1, characterized in that in step S1, the edge-detection-based image segmentation comprises the following steps:
Step A1: apply a Gaussian filter to reduce the influence of noise on the edge detection result;
Step A2: convert the image to grayscale;
Step A3: compute the magnitude and direction of the image gradient with a differential operator and estimate the image edges;
Step A4: apply non-maximum suppression to the gradient magnitude to obtain a more precise response at the edges;
Step A5: apply double-threshold detection to the binarized image to eliminate spurious responses left by edge detection;
Step A6: use an edge-linking algorithm to join the discrete edge pixel groups into contours.
3. The construction site dangerous behavior monitoring method based on Yolov3 and OpenPose according to claim 1, characterized in that in step S2, training the Yolov3 model comprises the following steps:
Step B1: construct a data-augmented data set. First build a real worker data set by selecting as many pictures of workers in high-risk near-edge scenes as training requires; then augment it by expanding the real worker image data set with image processing operations, including affine transformations, to generate a sufficiently large data set;
Step B2: cluster the target prior boxes with the K-means algorithm. Determine the prior-box parameters for K-means clustering, sort the newly clustered prior boxes by area from small to large, and divide them evenly among the feature maps of different scales;
Step B3: construct the Yolov3 human-presence detection model. Specifically, build the model with Darknet-53 as the backbone network and the Leaky ReLU function as the activation function. Training stops under either of two conditions: after a fixed number of iterations, or when the loss converges.
4. The construction site dangerous behavior monitoring method based on Yolov3 and OpenPose according to claim 3, characterized in that the high-risk near-edge scenes include workers at the perimeter of balconies without handrails, the perimeter of floors without external scaffold protection, the perimeter of frame-construction floors, both side edges of up and down ramps and chutes, and the side edges of unloading platforms.
5. The construction site dangerous behavior monitoring method based on Yolov3 and OpenPose according to claim 3, characterized in that when the model is built with Darknet-53 as the backbone network, the three feature layers of different scales are 13 × 13, 26 × 26 and 52 × 52; three prior boxes are set for each downsampling scale, so that dimension clustering yields nine prior boxes in total.
6. The construction site dangerous behavior monitoring method based on Yolov3 and OpenPose according to claim 1, characterized in that in step S4, the body key point set of each person in the video frame is identified with the BODY_25 model.
7. The construction site dangerous behavior monitoring method based on Yolov3 and OpenPose according to claim 6, characterized in that the BODY_25 model comprises key points for the left eye, right eye, nose, left ear, right ear, neck, left shoulder, right shoulder, left elbow, right elbow, left wrist, right wrist, mid hip, left hip, right hip, left knee, right knee, left ankle, right ankle, left big toe, right big toe, left little toe, right little toe, left heel and right heel.
8. The construction site dangerous behavior monitoring method based on Yolov3 and OpenPose according to claim 6, characterized in that if the dangerous behaviors to be monitored include workers illegally approaching edges, the key points at the feet are added to the body key point set.
9. The construction site dangerous behavior monitoring method based on Yolov3 and OpenPose according to claim 6, characterized in that if the dangerous behaviors to be monitored include workers illegally leaning over or climbing, the key points of the wrists, elbows, hips, knees and ankles are added to the body key point set.
10. The construction site dangerous behavior monitoring method based on Yolov3 and OpenPose according to claim 1, characterized in that in step S5, if it is judged that a site worker is engaging in dangerous behavior, an alarm is issued.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110552349.6A CN113255509A (en) | 2021-05-20 | 2021-05-20 | Building site dangerous behavior monitoring method based on Yolov3 and OpenPose |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113255509A true CN113255509A (en) | 2021-08-13 |
Family
ID=77183106
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110552349.6A Pending CN113255509A (en) | 2021-05-20 | 2021-05-20 | Building site dangerous behavior monitoring method based on Yolov3 and OpenPose |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113255509A (en) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113963439A (en) * | 2021-10-22 | 2022-01-21 | 无锡八英里电子科技有限公司 | Elevator car door-opening behavior identification method based on machine vision |
CN113989707A (en) * | 2021-10-27 | 2022-01-28 | 福州大学 | Public place queuing abnormal behavior detection method based on OpenPose and OpenCV |
CN113989719A (en) * | 2021-10-30 | 2022-01-28 | 福州大学 | Construction site theft monitoring method and system |
CN114360201A (en) * | 2021-12-17 | 2022-04-15 | 中建八局发展建设有限公司 | AI technology-based boundary dangerous area boundary crossing identification method and system for building |
CN114495166A (en) * | 2022-01-17 | 2022-05-13 | 北京小龙潜行科技有限公司 | Pasture shoe changing action identification method applied to edge computing equipment |
CN114724080A (en) * | 2022-03-31 | 2022-07-08 | 慧之安信息技术股份有限公司 | Construction site intelligent safety identification method and device based on security video monitoring |
CN114842560A (en) * | 2022-07-04 | 2022-08-02 | 广东瑞恩科技有限公司 | Computer vision-based construction site personnel dangerous behavior identification method |
CN114897824A (en) * | 2022-05-10 | 2022-08-12 | 电子科技大学 | Food safety threat detection and early warning method under crusty pancake industry monitoring scene |
CN115100734A (en) * | 2022-05-12 | 2022-09-23 | 燕山大学 | Openpos-based ski field dangerous action identification method and system |
CN115471874A (en) * | 2022-10-28 | 2022-12-13 | 山东新众通信息科技有限公司 | Construction site dangerous behavior identification method based on monitoring video |
CN116645727A (en) * | 2023-05-31 | 2023-08-25 | 江苏中科优胜科技有限公司 | Behavior capturing and identifying method based on Openphase model algorithm |
CN113989707B (en) * | 2021-10-27 | 2024-05-31 | 福州大学 | Method for detecting abnormal queuing behaviors in public places based on OpenPose and OpenCV |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190176820A1 (en) * | 2017-12-13 | 2019-06-13 | Humanising Autonomy Limited | Systems and methods for predicting pedestrian intent |
CN110533076A (en) * | 2019-08-01 | 2019-12-03 | 江苏濠汉信息技术有限公司 | The detection method and device of construction personnel's seatbelt wearing of view-based access control model analysis |
CN111144263A (en) * | 2019-12-20 | 2020-05-12 | 山东大学 | Construction worker high-fall accident early warning method and device |
CN111368696A (en) * | 2020-02-28 | 2020-07-03 | 淮阴工学院 | Dangerous chemical transport vehicle illegal driving behavior detection method and system based on visual cooperation |
CN111814601A (en) * | 2020-06-23 | 2020-10-23 | 国网上海市电力公司 | Video analysis method combining target detection and human body posture estimation |
CN111898514A (en) * | 2020-07-24 | 2020-11-06 | 燕山大学 | Multi-target visual supervision method based on target detection and action recognition |
CN112528960A (en) * | 2020-12-29 | 2021-03-19 | 之江实验室 | Smoking behavior detection method based on human body posture estimation and image classification |
- 2021-05-20: Application CN202110552349.6A filed; publication CN113255509A, legal status Pending
Non-Patent Citations (1)
Title |
---|
Zhu Jianbao et al., "Substation dangerous behavior detection based on OpenPose human posture recognition", Automation & Instrumentation *
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113963439A (en) * | 2021-10-22 | 2022-01-21 | 无锡八英里电子科技有限公司 | Elevator car door-opening behavior identification method based on machine vision |
CN113989707A (en) * | 2021-10-27 | 2022-01-28 | 福州大学 | Public place queuing abnormal behavior detection method based on OpenPose and OpenCV |
CN113989707B (en) * | 2021-10-27 | 2024-05-31 | 福州大学 | Method for detecting abnormal queuing behaviors in public places based on OpenPose and OpenCV |
CN113989719A (en) * | 2021-10-30 | 2022-01-28 | 福州大学 | Construction site theft monitoring method and system |
CN114360201A (en) * | 2021-12-17 | 2022-04-15 | 中建八局发展建设有限公司 | AI technology-based method and system for identifying boundary crossing into dangerous areas on construction sites |
CN114495166A (en) * | 2022-01-17 | 2022-05-13 | 北京小龙潜行科技有限公司 | Pasture shoe changing action identification method applied to edge computing equipment |
CN114724080A (en) * | 2022-03-31 | 2022-07-08 | 慧之安信息技术股份有限公司 | Construction site intelligent safety identification method and device based on security video monitoring |
CN114724080B (en) * | 2022-03-31 | 2023-10-27 | 慧之安信息技术股份有限公司 | Construction site intelligent safety identification method and device based on security video monitoring |
CN114897824A (en) * | 2022-05-10 | 2022-08-12 | 电子科技大学 | Food safety threat detection and early warning method under crusty pancake industry monitoring scene |
CN115100734A (en) * | 2022-05-12 | 2022-09-23 | 燕山大学 | OpenPose-based ski resort dangerous action identification method and system |
CN114842560A (en) * | 2022-07-04 | 2022-08-02 | 广东瑞恩科技有限公司 | Computer vision-based construction site personnel dangerous behavior identification method |
CN115471874A (en) * | 2022-10-28 | 2022-12-13 | 山东新众通信息科技有限公司 | Construction site dangerous behavior identification method based on monitoring video |
CN116645727A (en) * | 2023-05-31 | 2023-08-25 | 江苏中科优胜科技有限公司 | Behavior capturing and identifying method based on OpenPose model algorithm |
CN116645727B (en) * | 2023-05-31 | 2023-12-01 | 江苏中科优胜科技有限公司 | Behavior capturing and identifying method based on OpenPose model algorithm |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113255509A (en) | Building site dangerous behavior monitoring method based on Yolov3 and OpenPose | |
Hou et al. | Social distancing detection with deep learning model | |
Fang et al. | A deep learning-based approach for mitigating falls from height with computer vision: Convolutional neural network | |
WO2020215552A1 (en) | Multi-target tracking method, apparatus, computer device, and storage medium | |
CN109598211A (en) | A real-time dynamic face recognition method and system | |
Mei et al. | Human intrusion detection in static hazardous areas at construction sites: Deep learning–based method | |
CN114842560B (en) | Computer vision-based construction site personnel dangerous behavior identification method | |
CN115131732A (en) | Safety belt illegal wearing detection method combining target detection and semantic segmentation | |
CN112613359B (en) | Construction method of neural network for detecting abnormal behaviors of personnel | |
CN112233770B (en) | Gymnasium intelligent management decision-making system based on visual perception | |
CN113111733A (en) | Posture flow-based fighting behavior recognition method | |
CN116311361B (en) | Dangerous source indoor staff positioning method based on pixel-level labeling | |
CN115841497B (en) | Boundary detection method and escalator area intrusion detection method and system | |
CN113762221B (en) | Human body detection method and device | |
KR20160073490A (en) | System for assessment of safety level at construction site based on computer vision | |
CN114758414A (en) | Pedestrian behavior detection method, device, equipment and computer storage medium | |
CN113989719A (en) | Construction site theft monitoring method and system | |
CN113076825A (en) | Transformer substation worker climbing safety monitoring method | |
US20220076554A1 (en) | Social Distancing and Contact Mapping Alerting Systems for Schools and other Social Gatherings | |
Korovin et al. | Human pose estimation applying ANN while RGB-D cameras video handling | |
Cheerla et al. | Social Distancing Detector Using Deep Learning | |
CN117237993B (en) | Method and device for detecting operation site illegal behaviors, storage medium and electronic equipment | |
CN113408433B (en) | Intelligent monitoring gesture recognition method, device, equipment and storage medium | |
CN111241959B (en) | Method for detecting personnel not wearing safety helmet through construction site video stream | |
Liliana et al. | Social Distance Monitoring System Using YOLO and Pixel-to-real-world Distance Mapping |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20210813 |