CN115620192A - Method and device for detecting wearing of safety rope in aerial work - Google Patents


Info

Publication number
CN115620192A
Authority
CN
China
Prior art keywords
target
safety rope
image
result
constructor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211195865.9A
Other languages
Chinese (zh)
Inventor
黄东升 (Huang Dongsheng)
梅志 (Mei Zhi)
Current Assignee
Wuhu Baide Thinking Information Technology Co ltd
Original Assignee
Wuhu Baide Thinking Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Wuhu Baide Thinking Information Technology Co ltd filed Critical Wuhu Baide Thinking Information Technology Co ltd
Priority to CN202211195865.9A priority Critical patent/CN115620192A/en
Publication of CN115620192A publication Critical patent/CN115620192A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/41 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Abstract

The invention discloses a method and a device for detecting whether a safety rope is worn during work at height, wherein the method comprises the following steps: step one, detecting whether construction workers are present in the image; after workers are detected, controlling a zoom camera to position, focus on and capture each worker in turn, collecting a focused, magnified image centered on the worker so as to enhance the target's features, and detecting whether the target is wearing a safety rope; step two, detecting each individual target multiple times, judging whether the target is wearing a safety rope, and recording the judgment results; step three, performing a structural-similarity comparison between two images of the same target captured in the same region to obtain a difference result image, searching the difference image for safety-rope target information, verifying the returned result, completing the target-validity judgment and outputting the result; if the target is judged valid, the worker is wearing the safety rope correctly. The method enhances the features of the safety rope in the image and accurately identifies whether the rope is worn correctly.

Description

Method and device for detecting wearing of safety rope in aerial work
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to a method and a device for detecting whether a safety rope is worn during work at height.
Background
With the development of industrial production, production safety has received growing attention across the construction industry, and the production-safety standards of construction enterprises and government agencies have risen steadily. For construction sites, where accidents occur frequently, reasonably reducing the accident rate and protecting workers' personal safety are among the most important concerns on site.
For workers operating at height, such as work at openings, climbing work, work at edges, suspended work and cross operations, the operating environment carries a high risk factor, so a safety rope must be worn during the work to ensure the workers' safety. When using a safety rope, the operator must fix one end of the rope to the safety steel cable and the other end to his or her body, and keep the rope secure.
Because some workers at height lack safety awareness, they may fail to wear a safety rope during work or wear it incorrectly (with one end of the rope not fixed to the safety steel cable), creating safety hazards for both the workers and the construction enterprise.
At present, many construction sites adopt video monitoring: cameras observe the safety attire of workers at height to judge whether they are properly equipped, and video supervision reduces site safety risks. Conventional video monitoring relies mainly on manual visual inspection, with security staff in a monitoring room watching the screens to judge whether safety violations occur; this approach, however, offers low coverage and poor efficiency, misses many violations, and can hardly achieve effective supervision.
Intelligent video monitoring can effectively address these problems: intelligent monitoring equipment analyzes the captured video stream in real time, achieving real-time monitoring and immediate alerting and improving supervision efficiency. A conventional solution is to locate the construction-worker target in the image frame by deep-learning target detection, then feed the cropped target image into a trained ResNet classification model to judge whether the worker is wearing a safety rope.
Another scheme is, after finding the construction-worker target by target detection, to search for the safety-rope target within the worker's local image and judge from the search result whether the worker is wearing a safety rope; whether one end of the rope is fixed to the safety steel cable is judged by searching for the hook in the region adjacent to the target worker.
However, the cameras monitoring workers at height are generally far from the target, and most are mounted high up, so the captured person target is usually small and the safety-rope features are relatively inconspicuous, making high accuracy difficult to achieve. Searching for the safety rope within the worker's target image faces the same problem: the inconspicuous rope features lead to frequent false captures (for example, of the reflective stripes on high-visibility clothing) and missed captures.
In addition, as to whether the safety rope is fixed to the safety steel cable, the rope's hook may occupy only a few pixels in the image, and safety ropes and hooks come in too many colors and specifications, so the hook can hardly be found by target search. The prior art therefore has obvious shortcomings in searching for and identifying the safety rope: it is difficult to identify the rope accurately and reliably, and harder still to judge accurately whether it is safely fixed.
Disclosure of Invention
The invention aims to provide a wearing-detection method for an aerial-work safety rope, to solve the technical problem that, because the features of the safety rope and of its fixing structure are inconspicuous in the image, the prior art can hardly identify the safety rope accurately or reliably judge whether it is safely fixed.
The method for detecting the wearing of the aerial-work safety rope comprises the following steps:
step one, detecting whether construction workers are present in the image; after workers are detected, controlling a zoom camera to position, focus on and capture, one by one, focused and magnified images centered on each worker, enhancing the target's features, and detecting whether the target is wearing a safety rope;
step two, detecting each individual target multiple times, judging whether the target is wearing a safety rope, and recording the judgment results;
step three, performing a structural-similarity comparison between two images of the same target captured in the same region to obtain a difference result image, searching the difference image for safety-rope target information, verifying the returned result, completing the target-validity judgment and outputting the result; if the target is judged valid, the worker is wearing the safety rope correctly.
Preferably, step one comprises the following steps:
S1, detecting targets in the video-stream image, where the detection result indicates whether a construction worker is found in the image; if a worker is detected, the image is saved;
S2, focusing on the targets: controlling the camera to move toward each worker target and collecting, one by one, focused and magnified images centered on the target;
S3, positioning and cropping the feature image to obtain the region detail image corresponding to the worker;
S4, classifying the target, judging whether the target worker is wearing a safety rope, and recording the judgment result.
Preferably, step two comprises the following steps:
S5, setting the original region for safe fixed-state detection: when the judgment result is that the safety rope is worn, the region corresponding to the worker is magnified about its center, and the magnified region's position information is acquired and saved;
S6, repeating the positioning and target-classification judgment: after waiting a certain time, a new image frame is acquired from the video stream, and step S3 is repeated by sending the new frame into the target detection model again for target search; if the target is still in the image, step S4 is repeated to judge whether the target is wearing a safety rope, and the judgment result is recorded.
Preferably, step three comprises the following steps:
S7, acquiring the comparison image of the safe fixed-state detection region: if the judgment result of step S6 is that the safety rope is worn, the target position information in the video frame acquired in step S6 is compared with the position information saved in step S3 to judge whether the worker's position has changed; if it has, a screenshot of the current video frame is taken according to the region position information saved in step S5, and the corresponding region-position image is acquired and saved;
S8, judging the safe fixed state of the safety rope: the original state-detection-region image saved in step S5 and the region comparison image saved in step S7 are compared for structural similarity to obtain a difference result image, which reflects the structural change in the image produced after the worker moves; safety-rope target information is then searched in this image, the returned result is verified, the target validity is judged and the result is output; if the target is judged valid, the worker is wearing the safety rope correctly.
Preferably, in step S8, the rules for verifying the returned result are as follows:
1) the safety-rope target region must intersect the worker target region of the corresponding frame, computed by intersection-over-union;
2) the safety-rope target region must extend above the 2/3 height point of the worker's target image region, satisfying the requirement that the rope's fixing position be above the worker's chest;
if these criteria are met, the safety rope is identified as worn;
after the worker is identified as wearing the safety rope, it must further be identified whether the end of the rope outside the body moves; if no rope end exists outside the body, or the rope end outside the body moves abnormally, the safety-rope feature target is not valid. When a valid safety-rope feature target exists in the difference result image, one end of the worn rope is considered fixed, that is, the worker is wearing the safety rope correctly.
Preferably, step S7 is executed multiple times, acquiring and saving multiple corresponding region-position images; each region-position image is cut from the current video frame according to the region position information first saved in step S5, ensuring that the two images undergoing structural-similarity comparison occupy exactly the same relative position in the video image area.
Preferably, during detection, the safety-rope wearing state of each worker is judged through multiple rounds of frame extraction and comprehensive analysis: a number of identification rounds is specified, the same target's state is identified and judged repeatedly at fixed intervals, and whether to return a positive result is decided against a set threshold on the number of positive judgments; when the repeated judgments do not reach the positive-judgment threshold, a negative result is returned. A positive result indicates that the worker is wearing the safety rope correctly, and a negative result indicates that the worker is not.
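The multi-round voting described above can be sketched as a small aggregation routine. This is an illustrative reading of the claim, not the patent's implementation; the function name and parameters are chosen here for clarity.

```python
def judge_target(judgments, positive_threshold):
    """Aggregate repeated per-frame judgments for one worker.

    judgments: list of booleans, one per identification round
               (True = safety rope judged worn in that round).
    positive_threshold: minimum number of positive rounds required
                        to return an overall positive result.
    Returns True (rope worn correctly) or False.
    """
    return sum(1 for j in judgments if j) >= positive_threshold

# Example: 5 rounds at fixed intervals, requiring at least 4 positives.
rounds = [True, True, False, True, True]
result = judge_target(rounds, positive_threshold=4)
```

A stricter or looser threshold trades missed violations against false alarms; the patent leaves the exact threshold as a configurable parameter.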
Preferably, the output of step S1 is one or more bounding boxes containing targets detected by the network, together with the boxes' attribute information; in step S2, the bounding boxes (which give the target positions) are sorted, and the angle the camera must move is computed in turn from the relative position of each box's center within the whole video frame. The specific calculation is as follows:
1) Let the x-axis offset in pixels between the target-position center and the frame-image center be a, the y-axis offset be b, the frame-image length in pixels be c, its width be d, the camera's viewing angle be e, and the display aspect ratio be f;
then: camera horizontal offset angle: a*e/c; vertical offset angle: b*e*f/d;
the camera gimbal is rotated by the specified angle, moving the target position to a relatively centered camera view;
2) once the target is at the center of the camera image, the camera is controlled to adjust its focal length, magnifying the target position by the magnification ratio;
let the horizontal width of the target-position region be a, its vertical height b, the frame-image length in pixels c, its width d, and the camera's current zoom factor e;
then: magnification ratio: e * (the smaller of c/a and d/b) / 3.
Preferably, the method for detecting the wearing of the aerial-work safety rope further comprises step four, camera reset: the camera position and focal length are reset based on the ONVIF protocol; after the reset, the flow returns to the state of step S1, step S2 is restarted with the order index of the worker selected for focus detection incremented by 1, and the whole detection process is repeated.
The invention also provides a device for detecting whether a safety rope is worn during work at height, comprising a zoom camera and computer equipment, the zoom camera being in signal connection with and controlled by the computer equipment, and the computer equipment comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, characterized in that when the processor executes the computer program, the steps of the method for detecting the wearing of the aerial-work safety rope described above are carried out.
The invention has the following advantages. In this scheme, the camera is controlled to focus on the target area and the small target is magnified, so the detected target carries more feature information in the image, greatly improving the accuracy of the algorithm's judgment. Meanwhile, the scheme collects and detects images of each individual target multiple times and, combined with logical judgment, greatly reduces false alarms caused by the rope's features being inconspicuous at a particular camera angle.
On the other hand, for the hardest part of correct-wearing detection, namely whether one end of the safety rope is safely fixed, the scheme captures the state change of the rope while the target moves and adds background-information filtering and feature enhancement, making correct wearing identification of the safety rope possible in complex environments and solving the prior art's difficulty in judging whether the rope is correctly fixed.
Drawings
Fig. 1 is a flow chart of the method for detecting the wearing of the aerial work safety rope.
Fig. 2 is an original video frame image after camera focusing in the embodiment of the present invention.
Fig. 3 is an image obtained by searching the target in fig. 2 and obtaining the adjusted target position information in the embodiment of the present invention.
Fig. 4 is a detail image of the area corresponding to the constructor obtained in step S3 in the embodiment of the present invention.
Fig. 5 is a cut-out region image obtained after the enlarged region is obtained in step S5 in the embodiment of the present invention.
Fig. 6 is a diagram illustrating that in step S7, a current video frame image is captured to obtain a corresponding region position image in the embodiment of the present invention.
Fig. 7 is an exemplary comparison image of step S8 in the embodiment of the present invention.
Fig. 8 is a difference result image obtained by performing difference comparison image calculation on the image shown in fig. 7 according to the embodiment of the present invention.
Fig. 9 is an exemplary second comparison image of step S8 in the embodiment of the present invention.
Fig. 10 is a difference result image obtained by performing difference comparison image calculation on the image shown in fig. 9 according to the embodiment of the present invention.
Fig. 11 is a result diagram of sending the two difference result images of fig. 8 and fig. 10 to the safety rope characteristic target search model for target search in step S8 in the embodiment of the present invention.
Fig. 12 is an image of a worker with the safety rope merely hung at the waist and one end not fixed.
Detailed Description
The invention is described in detail below with reference to the accompanying drawings, so that those skilled in the art can understand its inventive concept and technical solutions more completely and accurately.
Example 1
As shown in figs. 1-11, the present invention provides a method for detecting the wearing of an aerial-work safety rope, comprising the following steps.
S1, detecting a video stream image target.
In the implementation flow of this embodiment, the input is a frame image obtained by frame extraction and decoding from the video stream of a general monitoring camera (a non-snapshot camera supporting the RTMP and RTSP protocols), and the output is the target (construction worker) detected from the image. Whether workers are present in the picture is judged from the target-detection result (against a specified threshold), and if so, the frame image is saved.
The detailed steps of target detection are as follows. After frame extraction and decoding of the video stream, the video image frames are obtained and preprocessed (for example, resized to a specific size, with color-channel conversion), then input into a trained object-detection model (trained on the basis of YOLOv3, YOLOv4, YOLOv5, SSD, Faster R-CNN, CenterNet or the like; this example adopts a YOLOv5 detection network). The image input size is 640x640 (since the target features are not particularly obvious, a smaller input image is unsuitable); the confidence threshold is set to 0.5 (the confidence that a detected object belongs to a given category, with detections below the threshold discarded); the non-maximum-suppression threshold is set to 0.3 (for handling overlapping bounding boxes: boxes whose overlap exceeds the threshold are judged to cover the same target and the extra boxes are discarded). The output is one or more bounding boxes containing the targets detected by the network, together with the boxes' attribute information such as coordinates, length and width, and confidence.
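The confidence filtering and non-maximum suppression just described can be sketched as follows. This is a generic, dependency-free illustration with the patent's thresholds (0.5 and 0.3), assuming boxes in (x, y, w, h, confidence) form; it is not the YOLOv5 implementation itself.

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes, top-left origin."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def filter_detections(boxes, conf_thresh=0.5, nms_thresh=0.3):
    """Drop low-confidence boxes, then suppress overlapping duplicates."""
    kept = []
    # Visit candidates in descending confidence order.
    candidates = sorted((b for b in boxes if b[4] >= conf_thresh),
                        key=lambda b: b[4], reverse=True)
    for box in candidates:
        # Keep a box only if it does not heavily overlap an already-kept one.
        if all(iou(box[:4], k[:4]) <= nms_thresh for k in kept):
            kept.append(box)
    return kept
```

Two nearly coincident worker detections thus collapse into one box, while well-separated workers each keep their own.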
S2, focusing the target.
After the set of worker target positions is obtained in step S1, the target positions are sorted from top to bottom and left to right, and each detected target is then analyzed and judged in sequence.
The camera is controlled to move according to the selected target's relative position within the whole video frame, moving the specified target position to a relatively central image area; the camera's focal length is then adjusted to magnify the target-position region and collect a detail image. The original video frame image after camera focusing is shown in fig. 2.
Specifically: according to the worker target-detection result set returned in step S1 (each entry contains a position coordinate, referenced to the center of the target box, together with the box's length and width), the data in the set are sorted from top to bottom and left to right; the position information of each worker target is taken in this order, and the angle the camera must move is computed from that position's relative location within the whole video frame. The specific calculation is as follows.
1) Let the x-axis offset in pixels between the target-position center and the frame-image center be a, the y-axis offset be b, the frame-image length in pixels be c, its width be d, the camera viewing angle be e, and the display aspect ratio be f (for example, in a 16:9 camera display mode, f is recorded as 9/16).
Then: camera horizontal offset angle: a*e/c; vertical offset angle: b*e*f/d.
Based on the ONVIF protocol, PTZ control rotates the camera gimbal by the specified angle, moving the target position to a relatively centered camera view.
2) Once the target is at the center of the camera image, the camera is controlled to adjust its focal length (zoom), magnifying the target position by the magnification ratio.
Let the horizontal width of the target-position region be a, its vertical height b, the frame-image length in pixels c, its width d, and the camera's current zoom factor e.
Then: magnification ratio: e * (the smaller of c/a and d/b) / 3.
After the camera focuses, the picture area is 3 times the size of the designated worker's target region (if the picture-to-target width ratio exceeds the height ratio, the height dimension satisfies the 3x size; otherwise the width dimension does).
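The offset-angle and magnification formulas of steps 1) and 2) can be expressed directly in code. Variable names follow the patent's notation; the concrete 1920x1080 frame, 60-degree viewing angle and 16:9 display mode in the example are illustrative assumptions, not values from the patent.

```python
def offset_angles(a, b, c, d, e, f):
    """Camera pan/tilt offsets for centering a target.

    a, b: x/y pixel offsets of the target center from the frame center
    c, d: frame-image length and width in pixels
    e:    camera viewing angle in degrees
    f:    display aspect ratio (e.g. 9/16 for a 16:9 display mode)
    Returns (horizontal offset angle, vertical offset angle).
    """
    return a * e / c, b * e * f / d

def magnification(a, b, c, d, e):
    """Zoom factor making the frame ~3x the target region.

    a, b: horizontal width and vertical height of the target region
    c, d: frame-image length and width in pixels
    e:    current zoom factor of the camera
    """
    return e * min(c / a, d / b) / 3

# Example: 1920x1080 frame, 60-degree viewing angle, 16:9 display (f = 9/16).
h, v = offset_angles(a=192, b=108, c=1920, d=1080, e=60, f=9/16)
zoom = magnification(a=128, b=360, c=1920, d=1080, e=1)
```

Taking the smaller of c/a and d/b guarantees the whole target still fits in the frame after zooming, matching the "height ratio or width ratio" rule above.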
S3, positioning and cropping the feature image.
After target focusing is complete, the new video frame image is sent into the target detection model again for target search, and the adjusted target position information is acquired, as shown in fig. 3.
The detection-target area is cropped from the image according to the new target position information, the position information is saved, and the region detail image corresponding to the worker is obtained, as shown in fig. 4.
S4, target classification judgment.
After the camera has focused on the target position and the detail image of the designated worker region has been obtained, the collected worker image is sent to a classification model for analysis; the classification result for the image is obtained, whether the target worker is wearing a safety rope is judged from this result, and the judgment is recorded.
After camera focusing, the new image frame is preprocessed and input into the previous target-detection model, and the corresponding worker target is detected again; this ensures the accuracy of the target position and at the same time verifies the correctness of the target result.
If no target-detection result is returned for the focused picture, the previous detection may have been a false alarm; the current step ends and the next cycle begins. If a target result is returned, the image is cropped. After cropping, the worker's target feature image is obtained, preprocessed, and input into a trained residual-network classifier model (trained on the basis of a ResNet50 network); the classification result is obtained, and whether it indicates a worn safety rope is judged.
S5, setting the original region for safe fixed-state detection.
If the result of step S4 is that the safety rope is worn, the length and width of the region image are each doubled about the region's center point to obtain the magnified region; the coordinate information of this region relative to the video frame picture (top-left coordinate, length and width) is saved, and the region image (the original image for detecting the safety rope's fixed state) is cropped and saved, as shown in fig. 5.
S6, repeating the positioning and target-classification judgment.
After waiting a certain time (1 second), a new image frame is acquired from the video stream and step S3 is repeated: the new frame is sent into the target detection model again for target search; if the target is still in the picture, step S4 is repeated to judge whether the target is wearing a safety rope, and the judgment result is recorded.
S7, acquiring the comparison image of the safe fixed-state detection region.
If the judgment result of step S6 is that the safety rope is worn, the target position information in the video frame acquired in step S6 is compared with the position information saved in step S3 to judge whether the worker's position has changed (a movement-distance threshold is set, and the displacement of the position box's center point is compared against it). If the worker's position has changed, a screenshot of the current video frame is taken according to the region position information saved in step S5, and the corresponding region-position image is acquired and saved, as shown in fig. 6.
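The displacement check in S7 can be sketched as below. The Euclidean center-point distance and the particular threshold value are illustrative choices; the patent only specifies comparing the center-point displacement against a set threshold.

```python
import math

def has_moved(old_box, new_box, move_thresh):
    """Judge whether a worker's position changed between frames.

    old_box, new_box: (x, y, w, h) target boxes from steps S3 and S6.
    move_thresh: minimum center-point displacement in pixels
                 that counts as movement.
    """
    ocx, ocy = old_box[0] + old_box[2] / 2, old_box[1] + old_box[3] / 2
    ncx, ncy = new_box[0] + new_box[2] / 2, new_box[1] + new_box[3] / 2
    # Euclidean distance between the two box centers.
    return math.hypot(ncx - ocx, ncy - ocy) > move_thresh
```

Only when this returns True is the region screenshot taken, since the later structural-similarity comparison relies on the worker having actually moved.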
and S8, judging the safe fixing state of the safety rope.
And comparing the structured similarity of the original image of the state detection area stored in the step S5 and the contrast image of the area stored in the step S7 to obtain a comparison difference value.
The relative positions of the two compared images in the video image area are consistent, so most background image information can be filtered, and the compared result is the corresponding image structure change generated after the position of the constructor is moved.
The difference comparison image calculation needs to be noted that the images to be compared before and after need to ensure that the position areas are completely consistent relative to the positions of the video frame pictures, so that the background partial image information can be better filtered when the difference is calculated, and therefore, the position information (coordinates and range) of the area needs to be stored when the area is selected for the first time and is used as the reference position information of the subsequent screenshot, rather than redefined according to the result after each target detection (because the position frame information of the target detection changes).
By comparing the difference of position images of constructors and accessory areas (the length and the width of a focused constructor target area are amplified by 2 times by taking a central point as a reference) of the previous frame and the next frame, the characteristic change information in the picture is captured, and the specific method comprises the following steps:
1) Use a compare_ssim method (the example names OpenCV; in practice compare_ssim is provided by scikit-image, and an appropriate SSIM implementation can be selected according to actual needs), passing in the two comparison images (screenshots of the same area, captured at different times with the constructor at different positions), to obtain the difference information between the images.

2) Use the OpenCV threshold method to process the gray values of the difference result image according to a set threshold, filtering out the background information.
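A self-contained sketch of steps 1)-2). For illustration it substitutes a single global SSIM term for the windowed `compare_ssim` map (in current scikit-image, `skimage.metrics.structural_similarity`) and a plain NumPy comparison for `cv2.threshold`; the constants follow the usual SSIM defaults for 8-bit images, and the difference threshold is an assumed value:

```python
import numpy as np

def structural_diff(img_a, img_b, c1=6.5025, c2=58.5225, diff_thresh=30):
    """Compare two equally sized grayscale crops: return a global SSIM score
    and a binary difference mask with unchanged (background) pixels zeroed.
    c1/c2 are the standard SSIM stabilizers (0.01*255)**2 and (0.03*255)**2."""
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    ssim = ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)
    )
    # Threshold the per-pixel difference so only genuinely changed pixels
    # (the moved worker and rope, not the shared static background) survive.
    mask = np.where(np.abs(a - b) > diff_thresh, 255, 0).astype(np.uint8)
    return ssim, mask
```

The mask plays the role of the difference result image, with background filtered out, in which the rope target is then searched.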
After the difference result image with background information filtered out is obtained, safety rope target information is searched for in the image and the returned result is verified. The verification rules are as follows:

1) The safety rope target area must intersect the constructor target area of the corresponding frame; this is calculated via intersection-over-union.

2) The safety rope target area must be located above 2/3 of the height of the constructor target image area, satisfying the requirement that the fixing position of the safety rope be above the constructor's chest.
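The two verification rules can be sketched as follows. Boxes are assumed to be (x1, y1, x2, y2) with image y growing downward, and "above 2/3 of the height" is read as the rope box's top edge lying in the upper third of the worker box (above chest level); that reading, like the zero IoU threshold, is an interpretation rather than something the text fixes:

```python
def rope_target_valid(rope_box, person_box, iou_thresh=0.0, height_ratio=2 / 3):
    """Check rule 1 (rope and worker boxes intersect, measured by IoU)
    and rule 2 (rope sits above the worker's chest level)."""
    # Rule 1: intersection-over-union of the two boxes.
    ix1 = max(rope_box[0], person_box[0])
    iy1 = max(rope_box[1], person_box[1])
    ix2 = min(rope_box[2], person_box[2])
    iy2 = min(rope_box[3], person_box[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    if inter == 0:
        return False
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    iou = inter / (area(rope_box) + area(person_box) - inter)
    if iou <= iou_thresh:
        return False
    # Rule 2: y grows downward, so "above 2/3 of the height" means the rope's
    # top edge is above the point one third of the way down the worker box.
    chest_y = person_box[1] + (person_box[3] - person_box[1]) * (1 - height_ratio)
    return rope_box[1] < chest_y
```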
The aligned images of example one are shown in fig. 7; the difference result image obtained by calculating the difference comparison is shown in fig. 8.

The aligned images of example two are shown in fig. 9; the difference result image obtained by calculating the difference comparison is shown in fig. 10.
When the difference result image is sent to the safety rope feature target search model, the corresponding safety rope feature target can easily be found. The search results for example one and example two are shown in fig. 11: example one corresponds to fig. 11(a) and example two to fig. 11(b).
Target validity is then judged: the safety rope must intersect the target human body area, the safety rope target area may extend beyond the human body area, and the height of the safety rope must exceed the 2/3 (chest) position of the human body; if these criteria are met, the safety rope is recognized as worn. The above judgment mainly identifies whether the safety rope is worn, but there is also the case where the safety rope merely hangs from the waist with one end unfixed, as shown in fig. 12. It is therefore also necessary to identify whether the end of the safety rope outside the human body moves; these conditions together determine whether the found safety rope feature target is valid. If no safety rope end exists outside the human body, or the end outside the human body moves abnormally, the safety rope feature target is invalid and the end of the safety rope is not properly fixed; when a valid safety rope feature target exists in the difference result image, one end of the safety rope worn by the constructor is determined to be fixed.
S9, resetting the camera.

After the above processes are completed, the position and focal length of the camera are reset based on the onvif protocol. After resetting, the system returns to the state of step S1, restarts from step S2, selects the next constructor (sequence + 1) for focused detection, and repeats the entire detection process. The overall flow of the detection method is shown in fig. 1.
The method can be summarized as the following steps:
Step one: detect whether constructors are present in the image; once detected, control the zoom camera to position, focus on, and capture the constructors one by one, focusing and magnifying the image around each constructor to enhance the target features, and detect whether the target is wearing a safety rope. This step corresponds to steps S1 to S4.

Step two: detect a single target multiple times, judge whether the target is wearing a safety rope, and record the judgment results. This step corresponds to steps S5 to S6.

Step three: perform a structural similarity comparison on two images of the same target captured in the same area to obtain a difference result image; search for safety rope target information in the difference result image, verify the returned result, complete the target validity judgment, and output the result; if the target is judged valid, the constructor is wearing the safety rope correctly. This step corresponds to steps S7 to S8.

Step four: reset the camera, corresponding to step S9.
During detection, the safety rope wearing state of each constructor must be judged over multiple rounds. In the actual detection process, because of where the safety rope is worn, the wearing state cannot be judged from the current frame when the constructor faces the camera side-on; frames must be sampled and judged over multiple rounds with a comprehensive analysis. Various judgment strategies are possible, for example the following two.

1) Specify the number of recognition judgments and judge the same target state multiple times, with a fixed interval (for example, 1 second) between judgments. A positive result (the safety rope is worn correctly) is returned as soon as one recognition is positive; a negative result (the safety rope is not worn correctly) is returned only when all recognitions are negative.

2) Specify the number of recognition judgments and judge the same target state multiple times at a fixed interval (for example, 1 second), then apply a comprehensive threshold to all detection results: for example, with 5 detections in total, a positive result is returned if positive recognition occurs more than 2 times, and a negative result otherwise.
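Both strategies can be sketched as follows; `detect_fn` stands in for one round of the per-frame wear judgment, and the round count, interval, and positive threshold are the example values from the text:

```python
import time

def judge_any_positive(detect_fn, rounds=5, interval=0.0):
    """Strategy 1: up to `rounds` judgments at a fixed interval; a single
    positive recognition returns a positive result immediately, and a
    negative result is returned only if every round is negative."""
    for _ in range(rounds):
        if detect_fn():
            return True
        time.sleep(interval)
    return False

def judge_threshold(detect_fn, rounds=5, min_positive=3, interval=0.0):
    """Strategy 2: always run `rounds` judgments, then apply the threshold:
    with 5 detections, more than 2 positives (at least 3) is positive."""
    positives = 0
    for _ in range(rounds):
        if detect_fn():
            positives += 1
        time.sleep(interval)
    return positives >= min_positive
```

In deployment `interval` would be on the order of 1 second; it defaults to 0 here so the sketch runs instantly.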
Example 2
The invention also provides a device for detecting the wearing of a safety rope in high-altitude operation, comprising a zoom camera and computer equipment. The zoom camera is in signal connection with, and controlled by, the computer equipment. The computer equipment comprises a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the computer program, the following steps are implemented according to the method of embodiment 1.
Step one: detect whether constructors are present in the image; once detected, control the zoom camera to position, focus on, and capture the constructors one by one, focusing and magnifying the image around each constructor to enhance the target features, and detect whether the target is wearing a safety rope.

Step two: detect a single target multiple times, judge whether the target is wearing a safety rope, and record the judgment results.

Step three: perform a structural similarity comparison on two images of the same target captured in the same area to obtain a difference result image; search for safety rope target information in the difference result image, verify the returned result, complete the target validity judgment, and output the result; if the target is judged valid, the constructor is wearing the safety rope correctly.

Step four: reset the camera.
Whether the constructor is wearing the safety rope is identified according to the above method, and whether one end of the worn safety rope is fixed is judged. For specific limitations on the steps implemented when the program running on the processor is executed, refer to embodiment 1; a detailed description is omitted here.
It will be understood that each block of the block diagrams and/or flowchart illustrations in this description, and combinations of blocks therein, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
The invention has been described above with reference to the accompanying drawings. Obviously, the specific implementation of the invention is not limited to the above manner; various insubstantial modifications of the inventive concept and technical solution, or direct applications of them to other fields without modification, all fall within the scope of the invention.

Claims (10)

1. A method for detecting the wearing of a safety rope in overhead operation, characterized in that the method comprises the following steps:
step one, detecting whether constructors are present in an image; after constructors are detected, controlling a zoom camera to position, focus on, and capture them one by one, focusing and magnifying the image around each constructor to enhance the target features, and detecting whether the target is wearing a safety rope;
step two, detecting a single target multiple times, judging whether the target is wearing a safety rope, and recording the judgment results;
step three, performing a structural similarity comparison on two images of the same target captured in the same area to obtain a difference result image, searching for safety rope target information in the difference result image, verifying the returned result, completing the target validity judgment, and outputting the result; if the target is judged valid, the constructor is wearing the safety rope correctly.
2. The method for detecting the wearing of the aerial work safety rope according to claim 1, wherein the first step comprises:
S1, performing target detection on the video stream image, the detection result being whether a constructor is detected in the image; if a constructor is detected, storing the image;
S2, focusing on the targets: controlling the camera to move with the constructors as targets, and capturing focused, magnified images centered on each target one by one;
S3, positioning and cropping the feature image to obtain the region detail image corresponding to the constructor;
S4, performing classification judgment on the target, judging whether the target constructor is wearing a safety rope, and recording the judgment result.
3. The method for detecting the wearing of the aerial work safety rope according to claim 2, wherein the second step comprises:
S5, setting the original safety fixed state detection area: when the judgment result is that the safety rope is worn, obtaining and storing the position information of the magnified area centered on the area corresponding to the constructor;
S6, repeating the positioning and target classification judgment: after waiting a certain time, obtaining a new image frame from the video stream, repeating step S3, and sending the new frame image to the target detection model again for target search; if the target is still in the image, repeating step S4, judging whether the target is wearing a safety rope, and recording the judgment result.
4. The method for detecting the wearing of the aerial work safety rope according to claim 3, wherein the third step comprises:
S7, capturing the comparison image of the safety fixed state detection area: if the judgment result of step S6 is that the safety rope is worn, comparing the target position information in the video frame image obtained in step S6 with the position information stored in step S3 to judge whether the position of the constructor has changed; if it has changed, taking a screenshot of the current video frame image according to the area position information stored in step S5 to obtain and store the corresponding area position image;
S8, judging the safe fixed state of the safety rope: comparing the structural similarity of the original state detection area image stored in step S5 with the area comparison image stored in step S7 to obtain a difference result image, the result being the image structure change caused by the movement of the constructor; searching for safety rope target information in the image, verifying the returned result, judging the target validity, and outputting the result; if the target is judged valid, the constructor is wearing the safety rope correctly.
5. The method for detecting the wearing of the aerial work safety rope according to claim 4, wherein in step S8 the rules for verifying the returned result are as follows:
1) the safety rope target area must intersect the constructor target area of the corresponding frame, calculated via intersection-over-union;
2) the safety rope target area must be located above 2/3 of the height of the constructor target image area, satisfying the requirement that the fixing position of the safety rope be above the constructor's chest;
if these criteria are met, the safety rope is recognized as worn;
after the constructor is recognized as wearing the safety rope, it is also necessary to identify whether the end of the safety rope outside the human body moves; if no safety rope end exists outside the human body, or the end outside the human body moves abnormally, the safety rope feature target is invalid; when a valid safety rope feature target exists in the difference result image, one end of the safety rope worn by the constructor is considered fixed, i.e. the constructor is wearing the safety rope correctly.
6. The method for detecting the wearing of the aerial work safety rope according to claim 4, wherein: step S7 is executed multiple times to obtain and store a plurality of corresponding area position images, each captured from the current video frame according to the area position information stored the first time in step S5, ensuring that the relative positions, within the video image area, of the two images used for the structural similarity comparison are exactly consistent.
7. The method for detecting the wearing of the aerial work safety rope according to claim 1, wherein: during detection, the safety rope wearing state of each constructor is judged by multi-round frame-sampling judgment and comprehensive analysis; the number of recognition judgments is specified, the same target state is recognized and judged multiple times at a fixed interval, whether a positive result is returned is determined according to a set threshold on the number of positive judgments, and a negative result is returned when the multiple recognition judgments do not meet that threshold; a positive result indicates that the constructor is wearing the safety rope correctly, and a negative result indicates that the constructor is not.
8. The method for detecting the wearing of the aerial work safety rope according to claim 2, wherein: in step S1, the output result is one or more bounding boxes containing targets detected by the network, together with the attribute information of the bounding boxes; in step S2, the bounding boxes are sorted, and the angle the camera needs to move is calculated in turn from the relative position, within the whole video frame image, of the center of each bounding box, the bounding box being the target position. The specific calculation process is as follows:
1) Let the x-axis offset in pixels between the center point of the target position and the center point of the frame image be a, the y-axis offset be b, the length of the frame image in pixels be c, the width in pixels be d, the view angle of the camera be e, and the display ratio be f;
then: horizontal offset angle of the camera: a×e/c; vertical offset angle: b×e×f/d;
the camera pan-tilt is controlled to rotate by the specified angle, moving the target position to a relatively centered place in the camera's view;
2) When the target is at the center of the camera image, the focal length is adjusted by controlling the camera, magnifying the target position by the magnification ratio;
let the horizontal width of the target position area be a, the vertical height be b, the length of the frame image in pixels be c, the width be d, and the original zoom factor of the camera be e;
then: magnification ratio: e × (the smaller of c/a and d/b) / 3.
9. The method for detecting the wearing of the aerial work safety rope according to claim 1, wherein: in step four, the camera is reset: the position and focal length of the camera are reset based on the onvif protocol; after resetting, the system returns to the state of step S1, restarts from step S2, selects the next constructor (sequence + 1) for focused detection, and repeats the entire detection process.
10. An apparatus for detecting the wearing of a safety rope in overhead work, comprising a zoom camera and computer equipment, the zoom camera being in signal connection with and controlled by the computer equipment, the computer equipment comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, characterized in that: when executing the computer program, the processor implements the steps of the method for detecting the wearing of an aerial work safety rope according to any one of claims 1-9.
CN202211195865.9A 2022-09-28 2022-09-28 Method and device for detecting wearing of safety rope in aerial work Pending CN115620192A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211195865.9A CN115620192A (en) 2022-09-28 2022-09-28 Method and device for detecting wearing of safety rope in aerial work

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211195865.9A CN115620192A (en) 2022-09-28 2022-09-28 Method and device for detecting wearing of safety rope in aerial work

Publications (1)

Publication Number Publication Date
CN115620192A true CN115620192A (en) 2023-01-17

Family

ID=84861001

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211195865.9A Pending CN115620192A (en) 2022-09-28 2022-09-28 Method and device for detecting wearing of safety rope in aerial work

Country Status (1)

Country Link
CN (1) CN115620192A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117079218A (en) * 2023-09-20 2023-11-17 山东省地质矿产勘查开发局第一地质大队(山东省第一地质矿产勘查院) Dynamic monitoring method for rope position of passenger ropeway rope based on video monitoring
CN117079218B (en) * 2023-09-20 2024-03-08 山东省地质矿产勘查开发局第一地质大队(山东省第一地质矿产勘查院) Dynamic monitoring method for rope position of passenger ropeway rope based on video monitoring
CN117351434A (en) * 2023-12-06 2024-01-05 山东恒迈信息科技有限公司 Working area personnel behavior specification monitoring and analyzing system based on action recognition
CN117351434B (en) * 2023-12-06 2024-04-26 山东恒迈信息科技有限公司 Working area personnel behavior specification monitoring and analyzing system based on action recognition

Similar Documents

Publication Publication Date Title
CN109117827B (en) Video-based method for automatically identifying wearing state of work clothes and work cap and alarm system
CN103726879B (en) Utilize camera automatic capturing mine ore deposit to shake and cave in and the method for record warning in time
CN110633612B (en) Monitoring method and system for inspection robot
CN112396658A (en) Indoor personnel positioning method and positioning system based on video
CN112235537B (en) Transformer substation field operation safety early warning method
CN111783744A (en) Operation site safety protection detection method and device
CN111932709A (en) Method for realizing violation safety supervision of inspection operation of gas station based on AI identification
KR20210067498A (en) Method and system for automatically detecting objects in image based on deep learning
CN109724993A (en) Detection method, device and the storage medium of the degree of image recognition apparatus
CN113343854A (en) Fire operation flow compliance detection method based on video monitoring
CN113361420A (en) Mine fire monitoring method, device and equipment based on robot and storage medium
CN113887445A (en) Method and system for identifying standing and loitering behaviors in video
CN111178424A (en) Petrochemical production site safety compliance real-time detection system and method
CN115620192A (en) Method and device for detecting wearing of safety rope in aerial work
CN115797856A (en) Intelligent construction scene safety monitoring method based on machine vision
CN112906441A (en) Image recognition system and method for communication industry survey and maintenance
CN114943841A (en) Method and device for assisting operation safety control based on image recognition
CN108460357B (en) Windowing alarm detection system and method based on image recognition
CN113989711A (en) Power distribution construction safety tool use identification method and system
CN113052125B (en) Construction site violation image recognition and alarm method
CN112487976B (en) Monitoring method, device and storage medium based on image recognition
CN109873990A (en) A kind of illegal mining method for early warning in mine based on computer vision
CN112532927A (en) Intelligent safety management and control system for construction site
CN110580708B (en) Rapid movement detection method and device and electronic equipment
CN114387542A (en) Video acquisition unit abnormity identification system based on portable ball arrangement and control

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination