CN110807429B - Construction safety detection method and system based on tiny-YOLOv3 - Google Patents


Info

Publication number
CN110807429B
CN110807429B (application CN201911073341.0A)
Authority
CN
China
Prior art keywords
tiny
image
personnel
yolov3
construction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911073341.0A
Other languages
Chinese (zh)
Other versions
CN110807429A (en)
Inventor
郝帅
马旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian University of Science and Technology
Original Assignee
Xian University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Science and Technology filed Critical Xian University of Science and Technology
Priority to CN201911073341.0A priority Critical patent/CN110807429B/en
Publication of CN110807429A publication Critical patent/CN110807429A/en
Application granted granted Critical
Publication of CN110807429B publication Critical patent/CN110807429B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention belongs to the technical field of information processing and discloses a construction safety detection method and system based on tiny-YOLOv3. A video monitoring camera collects images of workers on a construction site, and the persons contained in the images are manually annotated to produce a data set; a tiny-YOLOv3 network model is trained with this data set. Construction-site image acquisition: on-site image acquisition equipment captures construction-site images and transmits them synchronously to a processor for processing; each acquired image is detected with the trained tiny-YOLOv3 network model, and pictures of persons committing safety violations (such as not wearing a safety helmet, or smoking) are stored; alarm information for those persons is then pushed via a mobile-phone APP. The method has simple steps, a reasonable design, convenient implementation, high detection precision and good practical effect, and can accurately detect safety violations in large-field-of-view scenes.

Description

Construction safety detection method and system based on tiny-YOLOv3
Technical Field
The invention belongs to the technical field of information processing, and particularly relates to a construction safety detection method and system based on tiny-YOLOv 3.
Background
Currently, the closest prior art detects safety violations mainly with hand-crafted features such as color, texture and shape. For example, safety-helmet detection usually relies on color features, while smoking detection typically uses shape features to locate the face region, extract mouth features, and then detect a cigarette near the mouth. Because the monitoring device is affected by illumination changes, shooting angle and scale, these features are often difficult to extract accurately, resulting in low detection precision and weak robustness. Two approaches currently dominate smoking-behavior detection: (1) detect the face region and then extract mouth features to judge smoking behavior; (2) judge whether smoking behavior exists by detecting smoke. When the monitoring device is far from the violating person, image resolution makes both the face-region features and the smoke features difficult to extract accurately, so neither method applies; at long range and in a large field of view the judgment can therefore only be made from the smoking behavior itself.
Safety production is always the most important part in production of various industries, and the benefits of enterprises can be guaranteed only on the premise of safety. In the actual production process, the unsafe behaviors of workers not only threaten the safety of the workers, but also possibly cause safety accidents. Among the insecure behaviors, security violations are one of the key points for most enterprise monitoring and prevention.
In the prior art, video monitoring devices are usually installed on construction sites to monitor and prevent safety violations. Site images are collected in real time by the monitoring device, then processed with image-processing methods to detect and identify whether a safety violation is present in the scene. The image-processing methods currently used rely mainly on color, texture and shape features of the violation behavior. However, real construction sites are often complex, and to cover as much of the site as possible the monitoring device may be installed far away, so the captured video shows a large-field-of-view scene. Features such as color and shape, used to judge whether a safety helmet is worn or whether violations such as smoking occur, then lose detection precision and are easily disturbed by the environment. In particular, at long range the smoke features used to judge smoking behavior cannot be extracted accurately because of the limited image resolution.
In summary, the problems of the prior art are: the image processing method adopted by the prior art for monitoring and preventing the security violation has low detection precision and is easy to be interfered by the environment.
The difficulty of solving these technical problems is as follows: traditional methods detect safety violations with color, texture and shape features; helmet detection typically uses color features, and smoking detection uses shape features to locate the face region and extract mouth features. Because the monitoring device is affected by illumination changes, shooting angle and scale, it is often difficult to extract these features accurately when detecting safety violations, resulting in low detection precision and weak robustness.
The significance of solving the technical problems is as follows: the method for detecting the security violation behaviors, provided by the invention, can effectively solve the problem of detecting the security violation behaviors under the condition of large visual field and severe weather environment, can realize accurate detection of security violation personnel at different distances from a monitoring device, and has important significance for improving the automation and the intellectualization of construction site supervision.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a construction safety detection method and system based on tiny-YOLOv 3.
The invention is realized as follows. A construction safety detection method based on tiny-YOLOv3 comprises the following steps:
firstly, acquiring images of workers on a construction site with a video monitoring camera, marking the persons contained in the images as committing or not committing safety violations, and making a data set;
secondly, training a tiny-YOLOv3 network model with the data set;
thirdly, acquiring construction-site images, synchronously transmitting them to a processor for processing, reading the video data frame by frame, and scaling each frame to 416 × 416;
fourthly, sending each frame to be detected into the trained tiny-YOLOv3 network model to detect violations, and storing the pictures of persons committing safety violations;
and fifthly, pushing alarm information about those persons through the APP.
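The five steps above amount to an acquire–preprocess–detect–alert loop. As an illustrative sketch of the pre-processing in step three (the scaling of each frame to 416 × 416), assuming a simple nearest-neighbour resize stands in for whatever resampling the acquisition pipeline actually uses:

```python
import numpy as np

def preprocess_frame(frame: np.ndarray, size: int = 416) -> np.ndarray:
    """Resize an H x W x 3 frame to size x size with nearest-neighbour
    sampling and scale pixels to [0, 1], matching the 416 x 416 input
    expected by the tiny-YOLOv3 network."""
    h, w = frame.shape[:2]
    rows = np.arange(size) * h // size   # source row for each output row
    cols = np.arange(size) * w // size   # source column for each output column
    resized = frame[rows][:, cols]
    return resized.astype(np.float32) / 255.0
```

In the full system each preprocessed frame would then be passed to the trained model, and frames containing a violation saved and pushed as an alert.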
Further, when the first step is used for collecting images, samples containing safety violations under different scales and different angles are obtained by adjusting the focal length of the video collecting device and adjusting the angle between the video collecting device and personnel with various safety violations.
Further, training the tiny-YOLOv3 network model with the data set in the second step specifically includes:
(1) Each image is first scaled to 416 × 416;
(2) Each image is divided into an S × S grid, and each grid cell predicts B bounding boxes and their confidences; the size and position of a bounding box are represented by 4 parameters (x, y, w, h), where (x, y) is the center coordinate and (w, h) the width and height of the bounding box, and the confidence of a bounding box is expressed as:
confidence=Pr(Class|Object)×Pr(Object)×IOU;
where Pr(Class|Object) represents the posterior probability that the predicted target belongs to a given class, and Pr(Object) represents whether a target falls into the cell corresponding to the candidate box; Pr(Object)=0 means no target, and Pr(Object)=1 means a target falls into the candidate box; IOU represents the intersection-over-union of the predicted box and the real target box:
IOU = area(box_pred ∩ box_truth) / area(box_pred ∪ box_truth)
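The intersection-over-union between a predicted and a ground-truth box can be computed directly; a minimal sketch, assuming boxes are given in (x1, y1, x2, y2) corner form rather than the (x, y, w, h) center form used above:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```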
when predicting the object class probability, each grid cell predicts only once; after the confidence of every bounding box is obtained, non-maximum suppression is performed, bounding boxes with confidence below the threshold are removed, and the union of the bounding boxes above the threshold is taken as the prediction result;
(3) Predicting the bounding-box coordinates with prior boxes, 9 prior boxes being obtained in advance with the k-means algorithm; with k = 9 cluster centers, the prior-box dimensions are (8, 12), (15, 15), (24, 14), (31, 60), (24, 78), (49, 114), (101, 90), (151, 180) and (332, 311), respectively.
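The 9 prior boxes are obtained by clustering the labelled box sizes. A sketch of the k-means step, using 1 − IOU as the distance (the usual choice for YOLO anchor selection; the patent does not spell out the distance metric):

```python
import numpy as np

def kmeans_anchors(wh, k=9, iters=100, seed=0):
    """Cluster (w, h) box sizes with k-means, assigning each box to the
    center with the highest IoU (boxes compared as if anchored at the
    origin), and return the centers sorted by area."""
    rng = np.random.default_rng(seed)
    centers = wh[rng.choice(len(wh), k, replace=False)]
    for _ in range(iters):
        # IoU between every box and every center, both anchored at (0, 0)
        inter = np.minimum(wh[:, None, 0], centers[None, :, 0]) * \
                np.minimum(wh[:, None, 1], centers[None, :, 1])
        union = wh[:, None, 0] * wh[:, None, 1] + \
                centers[None, :, 0] * centers[None, :, 1] - inter
        assign = np.argmax(inter / union, axis=1)   # nearest = highest IoU
        new = np.array([wh[assign == j].mean(axis=0) if np.any(assign == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers[np.argsort(centers[:, 0] * centers[:, 1])]
```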
Further, the performing of non-maximum suppression includes: firstly, finding the box with the highest confidence among all detection boxes; then computing the overlap of that box with each remaining box, and rejecting any box whose overlap is larger than the set threshold; then repeating the above process on the remaining detection boxes until all have been processed. The error term for the bounding-box center coordinates is expressed as follows:

L1 = λ_coord Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{obj} [ (x_i − x̂_i)² + (y_i − ŷ_i)² ] + λ_coord Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{obj} [ (√w_i − √ŵ_i)² + (√h_i − √ĥ_i)² ]

where λ_coord weights the coordinate prediction error and takes the larger value 5; 1_{ij}^{obj} means that the ith cell contains a target and the jth bounding box in that cell is responsible for predicting it; (x_i, y_i, w_i, h_i) are the predicted box coordinates and (x̂_i, ŷ_i, ŵ_i, ĥ_i) the real box coordinates. The box-confidence error terms with and without targets are expressed as follows:

L2 = Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{obj} (C_i − Ĉ_i)²

L3 = λ_noobj Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{noobj} (C_i − Ĉ_i)²

The classification error term for cells containing a target is expressed as follows:

L4 = Σ_{i=0}^{S²} 1_i^{obj} Σ_{c∈classes} (p_i(c) − p̂_i(c))²

where 1_i^{obj} indicates whether a target falls into the ith cell, and the final loss function is expressed as:

L_yolo = L1 + L2 + L3 + L4
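The greedy suppression procedure described above can be sketched as follows, assuming corner-format boxes and plain Python lists of detections:

```python
def iou(a, b):
    """IOU of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: repeatedly keep the highest-scoring
    box and discard the remaining boxes overlapping it beyond iou_thresh."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_thresh]
    return keep
```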
another object of the present invention is to provide a tiny-yollov 3-based construction safety detection system for executing the tiny-yollov 3-based construction safety detection method, wherein the tiny-yollov 3-based construction safety detection system comprises:
the behavior data set acquisition module is used for acquiring the safety violation behaviors of the construction personnel in the construction site by using the video monitoring camera and marking;
the data set training module is used for training the tiny-Yolov3 network model by using the data set;
the image acquisition module is used for synchronously transmitting the acquired image to the processor for processing;
the image detection module is used for detecting the acquired image by using the trained tiny-Yolov3 network model and storing the pictures of the personnel with the security violation;
and the information pushing module is used for pushing the alarm information of the personnel with the security violation.
Another object of the present invention is to provide a computer program for implementing the method for detecting construction safety based on tiny-YOLOv 3.
The invention also aims to provide an information data processing terminal for realizing the construction safety detection method based on tiny-YOLOv 3.
Another object of the present invention is to provide a computer-readable storage medium, which includes instructions that, when executed on a computer, cause the computer to execute the method for detecting construction safety based on tiny-YOLOv 3.
In summary, the advantages and positive effects of the invention are: the method has simple steps, a reasonable design, convenient implementation, high detection precision and low investment cost. The system is simple to use and operate, automatically detects violations and pushes alarm information through the processor, works well in practice, and can accurately detect safety violations and push alarm information in complex construction environments.
The invention offers high detection precision, strong anti-interference capability, good real-time performance and easy engineering implementation.
Drawings
Fig. 1 is a flow chart of a construction safety detection method based on tiny-YOLOv3 according to an embodiment of the present invention.
Fig. 2 is a schematic structural diagram of a construction safety detection system based on tiny-YOLOv3 according to an embodiment of the present invention;
in the figure: 1. a behavior data set acquisition module; 2. a data set training module; 3. an image acquisition module; 4. an image detection module; 5. and an information pushing module.
Fig. 3 is a flow chart of an implementation of the construction safety detection method based on tiny-YOLOv3 according to the embodiment of the present invention.
Fig. 4 is a schematic diagram of a network structure of the Tiny-YOLOv3 according to an embodiment of the present invention.
FIG. 5 is a graph of a portion of the test results provided by an embodiment of the present invention;
in the figure: (a) a non-worn safety helmet; (b) not tying a rope; (c) not wearing protective clothing.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
Aiming at the problems in the prior art, the invention provides a construction safety detection method based on tiny-YOLOv3, and the invention is described in detail below with reference to the accompanying drawings.
As shown in fig. 1, the construction safety detection method based on tiny-YOLOv3 provided by the embodiment of the present invention includes the following steps:
s101: acquiring images (including smoking behaviors, without safety helmets and the like) of personnel who have safety violations during construction in a construction site by using a video monitoring camera, manually marking the personnel included in the images of the personnel, and making a data set;
s102: training a tiny-Yolov3 network model by using a data set;
s103: the construction site image acquisition equipment acquires a construction site image and synchronously transmits the acquired image to the processor for processing;
s104: detecting the acquired image by using the trained tiny-Yolov3 network model, and storing the pictures of the personnel with the security violation;
s105: and (4) pushing the alarm information to a safety responsible person by a person with a safety violation in a mobile phone APP mode.
As shown in fig. 2, the construction safety detection system based on tiny-YOLOv3 provided by the embodiment of the present invention includes:
the behavior data collection module 1 is used for collecting safety violation behaviors existing in construction in a construction site by using a video monitoring camera and carrying out manual marking;
a data set training module 2, configured to train a tiny-YOLOv3 network model using a data set;
the image acquisition module 3 is used for synchronously transmitting the acquired image to the processor for processing;
the image detection module 4 is used for detecting the acquired image by using the trained tiny-Yolov3 network model and storing the pictures of the personnel with the security violation;
and the information pushing module 5 is used for pushing the alarm information of the personnel with the security violation.
The technical solution of the present invention is further described below with reference to the accompanying drawings.
As shown in fig. 3, the construction safety detection method based on tiny-YOLOv3 provided by the embodiment of the present invention specifically includes the following steps:
step one, manufacturing a safety violation data set: and acquiring personnel images of safety violation behaviors in construction in a construction site by using a video monitoring camera, manually marking personnel contained in the images, and making a data set. When an image is collected, samples containing safety violations under different scales and different angles are obtained by adjusting the focal length of the video collecting device and adjusting the angle between the video collecting device and a smoker.
Step two, training a tiny-Yolov3 network model by using a data set;
(1) Each image is first scaled to 416 x 416;
(2) Each image is divided into an S × S grid; a grid cell is responsible for detecting an object if the object's center falls inside it. Since the aspect ratio of the target assigned to each cell may differ, the YOLO network is designed to predict 5 parameters per bounding box during training: the center coordinates (x, y), the width and height (w, h), and the confidence. The confidence reflects whether the current bounding box contains an object and how accurate its position is, i.e. the IOU of the detected box with the detected object. If no object is present in the cell, Pr(Object)=0. When a target is present, the YOLO network computes the IOU between the predicted and the real bounding box, and simultaneously predicts the posterior probability Pr(Class|Object) that the object belongs to a given class. Each grid cell predicts the class probabilities only once. After the confidences of the bounding boxes are obtained, non-maximum suppression must be performed to reduce duplicate prediction boxes: first, find the box with the highest confidence among all detection boxes, then compute its overlap with each remaining box and reject any box whose overlap exceeds the set threshold; repeat on the remaining boxes until all have been processed. The error term for the bounding-box center coordinates is expressed as follows:
L1 = λ_coord Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{obj} [ (x_i − x̂_i)² + (y_i − ŷ_i)² ] + λ_coord Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{obj} [ (√w_i − √ŵ_i)² + (√h_i − √ĥ_i)² ]   (1)

where λ_coord weights the coordinate prediction error and takes the larger value 5; 1_{ij}^{obj} means that the ith cell contains a target and the jth bounding box in that cell is responsible for predicting it; (x_i, y_i, w_i, h_i) are the predicted box coordinates and (x̂_i, ŷ_i, ŵ_i, ĥ_i) the real box coordinates. The box-confidence error terms with and without targets are expressed as follows:

L2 = Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{obj} (C_i − Ĉ_i)²   (2)

L3 = λ_noobj Σ_{i=0}^{S²} Σ_{j=0}^{B} 1_{ij}^{noobj} (C_i − Ĉ_i)²   (3)

The classification error term for cells containing a target is expressed as follows:

L4 = Σ_{i=0}^{S²} 1_i^{obj} Σ_{c∈classes} (p_i(c) − p̂_i(c))²   (4)

where 1_i^{obj} indicates whether a target falls into the ith cell. The final loss function is expressed as:

L_yolo = L1 + L2 + L3 + L4   (5)
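As an illustrative sketch of the four-term sum-of-squares loss over a flat list of per-box predictions, assuming λ_noobj = 0.5 (the standard YOLO value, which the patent does not state):

```python
import numpy as np

def yolo_loss(pred, truth, obj_mask, lambda_coord=5.0, lambda_noobj=0.5):
    """Sum-of-squares YOLO loss.
    pred, truth: (N, 5+C) arrays of [x, y, w, h, conf, class probs...];
    obj_mask: (N,) boolean, True where the box is responsible for a target."""
    obj, noobj = obj_mask, ~obj_mask
    l1 = lambda_coord * np.sum((pred[obj, :2] - truth[obj, :2]) ** 2) \
       + lambda_coord * np.sum((np.sqrt(pred[obj, 2:4]) - np.sqrt(truth[obj, 2:4])) ** 2)
    l2 = np.sum((pred[obj, 4] - truth[obj, 4]) ** 2)            # conf, target present
    l3 = lambda_noobj * np.sum((pred[noobj, 4] - truth[noobj, 4]) ** 2)  # conf, no target
    l4 = np.sum((pred[obj, 5:] - truth[obj, 5:]) ** 2)          # class probabilities
    return l1 + l2 + l3 + l4
```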
(3) The input image is fed into the Tiny-YOLOv3 network structure to extract picture features and output a vector. As shown in fig. 4, after a series of convolution and max-pooling operations the network outputs, at the first scale, a feature map of size 13 × 13 × 18, where 18 = 3 × (4 + 1 + 1): 3 means the grid at each scale predicts bounding boxes of 3 different sizes, 4 is the center position and width/height of a predicted box, the first 1 is the confidence value of the predicted box, and the second 1 is the number of predicted categories in this invention (several categories can be predicted). The output of the third-to-last layer of the first scale is convolved and then up-sampled, added to the 26 × 26 feature map of the first scale, and passed through two convolution layers to produce the final second-scale output of size 26 × 26. The third scale is obtained similarly: the output of the second-to-last layer of the second scale is convolved, up-sampled, added to the 52 × 52 feature map of the first scale, and passed through two convolution layers to give the final third-scale output of size 52 × 52. From the network model the output vector is computed, namely the position and confidence information of the grid-predicted bounding boxes and the class-probability information of each grid cell;
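The three head shapes follow directly from the input size and the strides 32, 16 and 8; a small sketch that reproduces the 13 × 13 × 18, 26 × 26 × 18 and 52 × 52 × 18 outputs for one class:

```python
def head_shapes(input_size=416, num_classes=1, boxes_per_cell=3):
    """Grid sizes and channel depth of the three detection heads for a
    416 x 416 input: strides 32, 16 and 8 give 13, 26 and 52 cells;
    channels = boxes * (4 box coords + 1 confidence + classes)."""
    channels = boxes_per_cell * (4 + 1 + num_classes)
    return [(input_size // s, input_size // s, channels) for s in (32, 16, 8)]
```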
step three, construction site image acquisition: the construction site image acquisition equipment acquires a construction site image and synchronously transmits the acquired image to the processor for processing.
And step four, detecting the acquired image by using the trained tiny-Yolov3 network model, and storing the pictures of the personnel with the security violation.
Fifthly, sending alarm information to a safety responsible person by a person with a safety violation in a mobile phone APP mode; the method specifically comprises the following steps:
1. registering enterprise micro signals;
2. website address:
https://work.weixin.qq.com/wework_admin/register_wxfrom=myhome_qyh_redirect&ref_from=myhome_baidu
3. acquiring an enterprise ID;
4. creating an application;
5. acquiring an application;
6. a user scans and follows the enterprise account:
(1) after following, the user taps the enterprise assistant to apply to join;
(2) the administrator approves the application on the mobile enterprise-WeChat client or the web console;
(3) the approved personnel are added to the construction-site monitoring department;
7. Data pushing through enterprise WeChat api interface
https://work.weixin.qq.com/api/doc#90000/90135/90235
8. The technology for developing app adopted by the invention is mui + h5.
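A minimal sketch of assembling the alarm message body for the enterprise-WeChat (WeChat Work) message-push API referenced above; the field names (touser, msgtype, agentid) follow that API's documented JSON schema, while the function name and message wording are illustrative assumptions:

```python
def build_alarm_payload(user_ids, agent_id, image_path, violation):
    """Assemble a text-message body for the WeChat Work message/send
    endpoint; sending it additionally requires an access token obtained
    with the enterprise ID and application secret."""
    return {
        "touser": "|".join(user_ids),   # recipients, '|'-separated per the API
        "msgtype": "text",
        "agentid": agent_id,
        "text": {"content": f"Safety violation detected: {violation}. "
                            f"Evidence image saved at {image_path}."},
    }
```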
The technical effects of the present invention will be described in detail below with reference to the accompanying drawings.
The detection of construction-site violations covers the cases illustrated in fig. 5; the above examples are not intended to limit the present invention.
After the APP is downloaded and installed, a user name and a password are input for login.
The construction-site interface displays the current items and supports querying by video time, analysis time, fault type, camera name and fault picture, as well as filtering, exporting, printing and forwarding. Users can process the results according to their own needs.
It should be noted that the embodiments of the present invention can be realized by hardware, software, or a combination of software and hardware. The hardware portion may be implemented using dedicated logic; the software portions may be stored in a memory and executed by a suitable instruction execution system, such as a microprocessor or specially designed hardware. Those skilled in the art will appreciate that the apparatus and methods described above may be implemented using computer executable instructions and/or embodied in processor control code, such code being provided on a carrier medium such as a disk, CD-or DVD-ROM, programmable memory such as read only memory (firmware), or a data carrier such as an optical or electronic signal carrier, for example. The apparatus of the present invention and its modules may be implemented by hardware circuits such as very large scale integrated circuits or gate arrays, semiconductors such as logic chips, transistors, or programmable hardware devices such as field programmable gate arrays, programmable logic devices, or software executed by various types of processors, or a combination of hardware circuits and software, e.g., firmware.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (6)

1. A construction safety detection method based on tiny-YOLOv3 is characterized by comprising the following steps:
firstly, acquiring images of personnel on a construction site with a video monitoring camera, marking the persons contained in the images as committing or not committing safety violations, and making a data set;
secondly, training a tiny-Yolov3 network model by using a data set;
thirdly, acquiring an image of the construction site, and synchronously transmitting the acquired image to a processor for processing;
fourthly, detecting the acquired image by using the trained tiny-Yolov3 network model, and storing the pictures of the personnel with the security violation;
fifthly, pushing alarm information through the APP by the personnel image with the security violation;
the training of the tiny-YOLOv3 network model by using the data set in the second step specifically comprises the following steps:
(1) Each image is first scaled to 416 x 416;
(2) Each image is divided into an S × S grid, and each grid cell predicts B bounding boxes and their confidences; the size and position of a bounding box are represented by 4 parameters (x, y, w, h), where (x, y) is the center coordinate and (w, h) the width and height of the bounding box, and the confidence of a bounding box is expressed as:
confidence=Pr(Class|Object)×Pr(Object)×IOU;
where Pr(Class|Object) represents the posterior probability that the predicted target belongs to a given class, and Pr(Object) represents whether a target falls into the cell corresponding to the candidate box; Pr(Object)=0 means no target, and Pr(Object)=1 means a target falls into the candidate box; IOU represents the intersection-over-union of the predicted box and the real target box:
IOU = area(box_pred ∩ box_truth) / area(box_pred ∪ box_truth)
when predicting the object class probability, each grid cell predicts only once; after the confidence of each bounding box is obtained, non-maximum suppression is performed, bounding boxes with confidence below the threshold are removed, and the union of the bounding boxes above the threshold is taken as the prediction result;
(3) Predicting the bounding-box coordinates with prior boxes, 9 prior boxes being obtained in advance with the k-means algorithm; with k = 9 cluster centers, the prior-box dimensions are (8, 12), (15, 15), (24, 14), (31, 60), (24, 78), (49, 114), (101, 90), (151, 180) and (332, 311), respectively;
the performing non-maximum suppression processing includes: firstly, finding out a frame with the maximum confidence value from all detection frames; then calculating the overlapping degree of the frame and the rest frames, and if the value of the overlapping degree is larger than a set threshold value, rejecting the frame; then repeating the above process for the rest detection frames until all the detection frames are processed; the error term for the bounding box center coordinates is expressed as follows:
$$L_1 = \lambda_{coord} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{obj}\left[(x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2\right] + \lambda_{coord} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{obj}\left[\left(\sqrt{w_i} - \sqrt{\hat{w}_i}\right)^2 + \left(\sqrt{h_i} - \sqrt{\hat{h}_i}\right)^2\right]$$
in the formula: $\lambda_{coord}$ weights the coordinate prediction error and takes the larger value 5; $\mathbb{1}_{ij}^{obj}$ means that the i-th cell contains a target and the j-th bounding box in that cell is responsible for predicting it; $(x_i, y_i, w_i, h_i)$ are the predicted box coordinates and $(\hat{x}_i, \hat{y}_i, \hat{w}_i, \hat{h}_i)$ the real box coordinates; the box confidence error terms with and without targets are expressed as follows:
$$L_2 = \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{obj}\left(C_i - \hat{C}_i\right)^2$$
$$L_3 = \lambda_{noobj} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{noobj}\left(C_i - \hat{C}_i\right)^2$$
where $C_i$ is the predicted confidence, $\hat{C}_i$ the real confidence, and $\lambda_{noobj}$ down-weights the confidence error of boxes containing no target;
the classification error term for cells containing a target is expressed as follows:
$$L_4 = \sum_{i=0}^{S^2} \mathbb{1}_{i}^{obj} \sum_{c \in classes}\left(p_i(c) - \hat{p}_i(c)\right)^2$$
wherein $\mathbb{1}_{i}^{obj}$ indicates whether a target falls into the i-th cell; the final loss function is expressed as:
$$L_{yolo} = L_1 + L_2 + L_3 + L_4$$
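The four terms can be checked numerically with a small pure-Python sketch; the dict-based box representation and the use of the target confidence as the $\mathbb{1}^{obj}$ indicator are simplifications for illustration (real tiny-YOLOv3 implementations work on grid tensors):

```python
import math

def yolo_loss(preds, targets, lambda_coord=5.0, lambda_noobj=0.5):
    """Sum the four YOLO loss terms over paired predicted/target boxes.

    Each box is a dict with x, y, w, h, conf and a probs list; a box whose
    target conf is 1.0 is treated as responsible for an object.
    """
    l1 = l2 = l3 = l4 = 0.0
    for p, t in zip(preds, targets):
        if t["conf"] == 1.0:                      # 1_ij^obj branch
            l1 += lambda_coord * ((p["x"] - t["x"]) ** 2 + (p["y"] - t["y"]) ** 2)
            l1 += lambda_coord * ((math.sqrt(p["w"]) - math.sqrt(t["w"])) ** 2
                                  + (math.sqrt(p["h"]) - math.sqrt(t["h"])) ** 2)
            l2 += (p["conf"] - t["conf"]) ** 2    # confidence, with target
            l4 += sum((pc - tc) ** 2              # classification error
                      for pc, tc in zip(p["probs"], t["probs"]))
        else:                                     # 1_ij^noobj branch
            l3 += lambda_noobj * (p["conf"] - t["conf"]) ** 2
    return l1 + l2 + l3 + l4
```

A perfect prediction gives a loss of exactly zero, and a spurious detection in an empty cell contributes only through the down-weighted $L_3$ term.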
2. The tiny-YOLOv3-based construction safety detection method as claimed in claim 1, wherein in step one, when images are collected, samples containing safety-violation behaviors at different scales and different angles are obtained by adjusting the focal length of the video capture device and the angle between the video capture device and the person exhibiting the safety-violation behavior.
3. The tiny-YOLOv3-based construction safety detection method as claimed in claim 1, wherein the sample images are fed into the tiny-YOLOv3 network for training, and training stops when the loss function value on the training set falls below a set threshold or the set maximum number of iterations is reached; the network obtained when training stops is taken as the trained network.
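Claim 3's stopping rule amounts to a loop like the following; `model.train_step` is a hypothetical method standing in for one forward/backward pass, not an API from the patent:

```python
def train_until_converged(model, batches, loss_threshold=0.05, max_iters=50000):
    """Train until the training-set loss falls below the threshold or the
    maximum iteration count is reached (claim 3's two stopping conditions)."""
    step, loss = 0, float("inf")
    for step, batch in enumerate(batches, start=1):
        loss = model.train_step(batch)  # hypothetical: one update, returns loss
        if loss < loss_threshold or step >= max_iters:
            break
    return model, step, loss
```

The threshold and iteration cap are illustrative defaults; the patent only states that such values are set, not what they are.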
4. A tiny-YOLOv3-based construction safety detection system for executing the detection method as claimed in any one of claims 1 to 3, the system comprising:
a behavior data set acquisition module, used to capture the safety-violation behaviors of construction personnel on the construction site with a video surveillance camera and to label the behaviors;
a data set training module, used to train the tiny-YOLOv3 network model with the data set;
an image acquisition module, used to transmit the collected images synchronously to the processor for processing;
an image detection module, used to detect the collected images with the trained tiny-YOLOv3 network model and to store pictures of personnel exhibiting safety violations;
and an information pushing module, used to push alarm information about personnel exhibiting safety violations.
5. An information data processing terminal for implementing the construction safety detection method based on tiny-YOLOv3 as claimed in any one of claims 1 to 3.
6. A computer-readable storage medium comprising instructions that, when executed on a computer, cause the computer to perform the tiny-YOLOv 3-based construction safety detection method as claimed in any one of claims 1 to 3.
CN201911073341.0A 2019-10-23 2019-10-23 Construction safety detection method and system based on tiny-YOLOv3 Active CN110807429B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911073341.0A CN110807429B (en) 2019-10-23 2019-10-23 Construction safety detection method and system based on tiny-YOLOv3

Publications (2)

Publication Number Publication Date
CN110807429A CN110807429A (en) 2020-02-18
CN110807429B true CN110807429B (en) 2023-04-07

Family

ID=69501216

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911073341.0A Active CN110807429B (en) 2019-10-23 2019-10-23 Construction safety detection method and system based on tiny-YOLOv3

Country Status (1)

Country Link
CN (1) CN110807429B (en)

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111461028A (en) * 2020-04-02 2020-07-28 杭州视在科技有限公司 Mask detection model training and detection method, medium and device in complex scene
CN111401314B (en) * 2020-04-10 2023-06-13 上海东普信息科技有限公司 Dressing information detection method, device, equipment and storage medium
CN111723656B (en) * 2020-05-12 2023-08-22 中国电子系统技术有限公司 Smog detection method and device based on YOLO v3 and self-optimization
CN111985334B (en) * 2020-07-20 2023-09-26 华南理工大学 Gun detection method, system, device and storage medium
CN112001284A (en) * 2020-08-14 2020-11-27 中建海峡建设发展有限公司 Labor service real-name system management system based on artificial intelligence
CN112257492A (en) * 2020-08-27 2021-01-22 重庆科技学院 Real-time intrusion detection and tracking method for multiple cameras
CN112183265A (en) * 2020-09-17 2021-01-05 国家电网有限公司 Electric power construction video monitoring and alarming method and system based on image recognition
CN112233175B (en) * 2020-09-24 2023-10-24 西安交通大学 Chip positioning method and integrated positioning platform based on YOLOv3-tiny algorithm
CN112149583A (en) * 2020-09-27 2020-12-29 山东产研鲲云人工智能研究院有限公司 Smoke detection method, terminal device and storage medium
CN112329532A (en) * 2020-09-30 2021-02-05 浙江汉德瑞智能科技有限公司 Automatic tracking safety helmet monitoring method based on YOLOv4
CN112394356B (en) * 2020-09-30 2024-04-02 桂林电子科技大学 Small target unmanned aerial vehicle detection system and method based on U-Net
CN112216073B (en) * 2020-10-12 2022-05-03 浙江大华技术股份有限公司 Ladder violation operation warning method and device
CN112232239A (en) * 2020-10-20 2021-01-15 华雁智能科技(集团)股份有限公司 Substation construction safety signboard monitoring method and device and electronic equipment
CN112434827B (en) * 2020-11-23 2023-05-16 南京富岛软件有限公司 Safety protection recognition unit in 5T operation and maintenance
CN112434828B (en) * 2020-11-23 2023-05-16 南京富岛软件有限公司 Intelligent safety protection identification method in 5T operation and maintenance
CN113076683B (en) * 2020-12-08 2023-08-08 国网辽宁省电力有限公司锦州供电公司 Modeling method of convolutional neural network model for transformer substation behavior monitoring
CN112580627A (en) * 2020-12-16 2021-03-30 中国科学院软件研究所 Yoov 3 target detection method based on domestic intelligent chip K210 and electronic device
CN112818913B (en) * 2021-02-24 2023-04-07 西南石油大学 Real-time smoking calling identification method
CN113536885A (en) * 2021-04-02 2021-10-22 西安建筑科技大学 Human behavior recognition method and system based on YOLOv3-SPP
CN113191274A (en) * 2021-04-30 2021-07-30 西安聚全网络科技有限公司 Oil field video intelligent safety event detection method and system based on neural network
CN113449675B (en) * 2021-07-12 2024-03-29 西安科技大学 Method for detecting crossing of coal mine personnel
CN114596532A (en) * 2021-07-23 2022-06-07 平安科技(深圳)有限公司 Behavior detection method, behavior detection device, behavior detection equipment and storage medium
CN113534146B (en) * 2021-07-26 2023-12-01 中国人民解放军海军航空大学 Automatic detection method and system for radar video image target
CN113743256B (en) * 2021-08-17 2023-12-26 武汉大学 Intelligent early warning method and device for site safety
CN113792656B (en) * 2021-09-15 2023-07-18 山东大学 Behavior detection and alarm system using mobile communication equipment in personnel movement
CN113792665B (en) * 2021-09-16 2023-08-08 山东大学 Forbidden area intrusion detection method aiming at different role authorities

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018051349A1 (en) * 2016-09-15 2018-03-22 R.A.S Robotics Artificial Intelligence Ltd. Facility monitoring by a distributed robotic system
CN109635697A (en) * 2018-12-04 2019-04-16 国网浙江省电力有限公司电力科学研究院 Electric operating personnel safety dressing detection method based on YOLOv3 target detection
CN109711320A (en) * 2018-12-24 2019-05-03 兴唐通信科技有限公司 A kind of operator on duty's unlawful practice detection method and system
CN109948501A (en) * 2019-03-13 2019-06-28 东华大学 The detection method of personnel and safety cap in a kind of monitor video
CN110059558A (en) * 2019-03-15 2019-07-26 江苏大学 A kind of orchard barrier real-time detection method based on improvement SSD network
WO2019175686A1 (en) * 2018-03-12 2019-09-19 Ratti Jayant On-demand artificial intelligence and roadway stewardship system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Liu Jun; Xie Yinghua. Implementation of an improved YOLO algorithm in an intelligent video surveillance system. Information Technology and Network Security. 2019, (04), full text. *
Shi Hui; Chen Xianqiao; Yang Ying. Improved YOLO v3 method for detecting safety-helmet wearing. Computer Engineering and Applications. 2019, (11), full text. *


Similar Documents

Publication Publication Date Title
CN110807429B (en) Construction safety detection method and system based on tiny-YOLOv3
CN108921159B (en) Method and device for detecting wearing condition of safety helmet
CN111062429A (en) Chef cap and mask wearing detection method based on deep learning
Zhan et al. A high-precision forest fire smoke detection approach based on ARGNet
CN109858367B (en) Visual automatic detection method and system for worker through supporting unsafe behaviors
CN111598040A (en) Construction worker identity identification and safety helmet wearing detection method and system
CN108875533B (en) Face recognition method, device, system and computer storage medium
CA3089307A1 (en) System and method for creating geo-localized enhanced floor plans
CN113177469B (en) Training method and device of human attribute detection model, electronic equipment and medium
CN112613569A (en) Image recognition method, and training method and device of image classification model
CN113111817A (en) Semantic segmentation face integrity measurement method, system, equipment and storage medium
CN109376736A (en) A kind of small video target detection method based on depth convolutional neural networks
WO2024022059A1 (en) Environment detection and alarming method and apparatus, computer device, and storage medium
CN111914656A (en) Personnel behavior detection method and device, electronic equipment and storage medium
CN113343779A (en) Environment anomaly detection method and device, computer equipment and storage medium
CN113723361A (en) Video monitoring method and device based on deep learning
CN112801227A (en) Typhoon identification model generation method, device, equipment and storage medium
Mei et al. Human intrusion detection in static hazardous areas at construction sites: Deep learning–based method
CN115393563A (en) Package detection method and system and electronic equipment
CN110298302A (en) A kind of human body target detection method and relevant device
Gündüz et al. A new YOLO-based method for social distancing from real-time videos
CN113314230A (en) Intelligent epidemic prevention method, device, equipment and storage medium based on big data
CN113505643A (en) Violation target detection method and related device
CN111667450A (en) Ship quantity counting method and device and electronic equipment
CN115641607A (en) Method, device, equipment and storage medium for detecting wearing behavior of power construction site operator

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant