CN113435278A - Crane safety detection method and system based on YOLO - Google Patents

Crane safety detection method and system based on YOLO

Info

Publication number
CN113435278A
CN113435278A
Authority
CN
China
Prior art keywords
yolo
pedestrian
electronic fence
frame
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110671257.XA
Other languages
Chinese (zh)
Inventor
夏祺皓
余超
顾恒豪
彭萍萍
郑正奇
赵昆
陈雯
黄帅
纪文清
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
East China Normal University
Original Assignee
East China Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by East China Normal University filed Critical East China Normal University
Priority to CN202110671257.XA priority Critical patent/CN113435278A/en
Publication of CN113435278A publication Critical patent/CN113435278A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a traveling crane safety detection method based on YOLO, belonging to the technical field of information engineering. The method first acquires data for the specific application scene and trains a network model. A network camera installed at the top of the traveling crane shoots downward to capture video of the area below the crane; the video is fed into the trained network for real-time processing to detect pedestrians appearing in it. When goods are hoisted by the crane, an electronic fence is demarcated around the goods using an OpenCV function, and an area intrusion detection algorithm raises an alarm when a pedestrian intrudes into the danger area. The invention also discloses a system implementing the crane safety detection method. By adopting the object detection and classification model of the YOLO algorithm and optimizing it for the specific application scene, the method gives a real-time alarm when a person enters the danger area, reducing safety hazards during crane operation.

Description

Crane safety detection method and system based on YOLO
Technical Field
The invention belongs to the technical field of information engineering, and relates to a traveling crane safety detection method and system based on YOLO.
Background
Nowadays, with the development of computer vision technology, vision-based target detection and tracking have been widely applied in fields such as human-computer interaction and security monitoring, and have proven powerful.
Traveling crane systems are widely installed in factory workshops where cargo must be hoisted, but during use, accidents often occur when operators mistakenly enter the area below the crane and are struck by the hoisted cargo. Traditional solutions mostly monitor people's activities with sensing means such as infrared sensors; however, monitoring based on such sensing means suffers from low sensitivity, poor real-time performance, and the high cost of high-precision sensors.
YOLO is a target detection algorithm based on a deep neural network. It divides the input picture into an S × S grid, and each cell is responsible for detecting the targets whose center points fall within that cell. For each cell, the algorithm predicts offsets relative to N default boxes and simultaneously predicts the corresponding categories. The N preset default boxes are representative boxes obtained by clustering on the pre-training dataset, which helps ensure the accuracy of the output boxes and the convergence of the regression. The algorithm is robust to background changes and can identify targets even against very complex backgrounds.
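The grid assignment described above can be illustrated with a minimal sketch (not the patent's code; the function name and S = 7 default are illustrative assumptions):

```python
# Hedged sketch of YOLO's grid-responsibility rule: the input image is divided
# into an S x S grid, and the cell containing an object's center point is
# responsible for detecting that object.

def responsible_cell(center_x, center_y, img_w, img_h, s=7):
    """Return (row, col) of the grid cell whose region contains the center."""
    col = min(int(center_x / img_w * s), s - 1)  # clamp so a center on the edge stays in-grid
    row = min(int(center_y / img_h * s), s - 1)
    return row, col
```

For a 640 × 480 frame and S = 7, a pedestrian centered at (320, 240) falls in cell (3, 3).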
Disclosure of Invention
To overcome the shortcomings of traditional crane security technology, the invention aims to provide a crane safety detection method based on YOLO: a computer-vision target detection algorithm is applied to a crane system, and the YOLO detector is used to monitor the area below the crane in real time.
To achieve this purpose, the YOLO-based crane safety detection method is realized by the following scheme, comprising the steps below:
step 1: pre-training a YOLO network model, performing transfer learning according to the data acquired by the application scene, setting a confidence threshold, and framing out the target with the confidence greater than or equal to the confidence threshold by using a detection frame. The confidence threshold is set according to the actual detection effect, and the set range interval is (0-1).
Step 2: and a network camera with a WiFi module is installed at the top of the traveling crane, the area for lifting goods below the traveling crane is shot, and the video stream is transmitted to a local server in real time.
Step 3: Use the rectangle method in OpenCV to demarcate a rectangular electronic fence around the cargo-lifting area in the video, and set this area as a danger area.
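The patent gives no code for this step; the following is a minimal sketch of computing such a fenced danger region, assuming the cargo region is known as an axis-aligned box in pixel coordinates. The 1.5× factor follows the later description of this step; the text does not say whether it scales the area or each side, and this sketch scales each side. The function name is chosen here for illustration.

```python
def danger_zone(cargo_box, scale=1.5):
    """Expand a cargo bounding box (x1, y1, x2, y2) about its center so the
    fenced region spans `scale` times the cargo's width and height."""
    x1, y1, x2, y2 = cargo_box
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    half_w = (x2 - x1) * scale / 2
    half_h = (y2 - y1) * scale / 2
    return (int(cx - half_w), int(cy - half_h), int(cx + half_w), int(cy + half_h))

# With OpenCV, the fence would then be drawn on each frame, e.g.:
#   fx1, fy1, fx2, fy2 = danger_zone((100, 100, 200, 180))
#   cv2.rectangle(frame, (fx1, fy1), (fx2, fy2), (0, 0, 255), 2)
```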
Step 4: Design an intrusion detection algorithm that outputs a signal to trigger an alarm when a pedestrian is detected entering the danger area demarcated by the electronic fence.
Step 5: When a pedestrian breaks into the danger area, capture that frame of the video and save it locally for later inspection.
Further, step 1 comprises the following sub-steps:
1.1. First, port the YOLO neural network framework to the TensorFlow deep learning framework, and train the YOLO network on the ImageNet 1000-class dataset to obtain a pre-trained network weight file.
1.2. Initialize the YOLO model with the weight file pre-trained in step 1.1. Since the camera is installed above pedestrians, detection is performed from a top-down view, whereas the camera view in conventional datasets is mostly the front of the pedestrian. To obtain better detection in this specific scene, top-view pictures were taken from the second floor of a factory and a shopping mall and labeled with the LabelImg tool to form a pedestrian top-view dataset of 1000 images. 80% of the dataset is used as the training set and 20% as the test set, and transfer learning is performed on the YOLO network to improve the model's generalization ability. During transfer learning, accuracy is improved by increasing the grid resolution, and the recognition class is fixed to 'person'.
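The 80/20 split described in step 1.2 can be sketched as follows (a hypothetical helper, not the patent's code; the file names are placeholders):

```python
import random

def split_dataset(items, train_frac=0.8, seed=0):
    """Shuffle annotated image paths and split them into train/test sets."""
    items = list(items)
    random.Random(seed).shuffle(items)  # fixed seed for a reproducible split
    cut = int(len(items) * train_frac)
    return items[:cut], items[cut:]

# For the 1000-image top-view dataset this yields 800 training and 200 test images.
train, test = split_dataset([f"img_{i:04d}.jpg" for i in range(1000)])
```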
1.3. Set the confidence threshold through repeated experiments and tests.
1.4. Define a detection box for the image in the video area according to the following formulas:

confidence = Pr(Object) × IOU_pred^truth  (1)

(x, y, w, h, confidence)  (2)

The confidence is determined by equation (1), where Pr(Object) indicates whether a pedestrian appears in a given grid cell drawn by the YOLO algorithm (1 if present, 0 if absent), and IOU_pred^truth represents the ratio of area overlap (intersection over union) between the predicted box and the ground-truth box. When the YOLO model detects a pedestrian, the five values in formula (2) are output directly: (x, y) is the position of the predicted bounding-box center relative to the grid-cell boundary, and (w, h) is the width and height of the predicted box as a fraction of the whole image's width and height, each between 0 and 1. The predicted box is the box output by the YOLO algorithm; the ground-truth box is the box manually annotated when the pedestrian top-view dataset was created in step 1.2.
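The area-overlap term in equation (1) can be computed with a standard intersection-over-union sketch (assuming boxes in (x1, y1, x2, y2) pixel form; this is illustrative, not the patent's own code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # zero if the boxes are disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

Identical boxes give IOU 1.0 and disjoint boxes give 0.0, matching the 0-to-1 range the confidence formula expects.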
Further, the method for demarcating the electronic fence in step 3 is as follows: take 1.5 times the area of the lifted goods in the video as the danger area, and use the rectangle method in OpenCV to demarcate this danger area as the electronic fence.
Further, the intrusion detection algorithm of step 4 works as follows. The algorithm uses logical comparison to judge whether a person has entered the virtual electronic fence. Taking the upper-left corner of the image as the origin (0, 0) of a Cartesian coordinate system whose axes point right and down within the image, the upper-left corner coordinates (x1, y1) and lower-right corner coordinates (x2, y2) of the target detection box are extracted from the YOLO algorithm; the danger area framed by the electronic fence has upper-left corner coordinates (x1', y1') and lower-right corner coordinates (x2', y2'). Whether the pedestrian is outside the range of the electronic fence is judged by the following four conditions:
When x1 > x2', the abscissa of the upper-left corner of the target box is greater than the abscissa of the lower-right corner of the fence; when y1 > y2', the ordinate of the upper-left corner of the target box is greater than the ordinate of the lower-right corner of the fence; when x2 < x1', the abscissa of the lower-right corner of the target box is less than the abscissa of the upper-left corner of the fence; when y2 < y1', the ordinate of the lower-right corner of the target box is less than the ordinate of the upper-left corner of the fence. If any one of these four conditions holds, the pedestrian is judged to be outside the range of the electronic fence; in all other cases the pedestrian's target box intrudes into the range of the fence and the system triggers the alarm. For example, if the fence has upper-left corner (200, 200) and lower-right corner (300, 300), and the pedestrian detection box has upper-left corner (100, 100) and lower-right corner (150, 150), these coordinates satisfy the condition for the pedestrian being outside the fence (x2 = 150 < x1' = 200), so the pedestrian is outside the danger area at that moment; otherwise the pedestrian is in the danger area. The pseudo code is as follows:
if (x1 > x2' or y1 > y2' or x2 < x1' or y2 < y1')
    the pedestrian is outside the range of the electronic fence
else
    the pedestrian is inside the range of the electronic fence
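The pseudo code above translates directly into a small check (a sketch under the stated coordinate convention; the function names are chosen here for illustration):

```python
def pedestrian_outside_fence(det, fence):
    """True if the detection box (x1, y1, x2, y2) lies entirely outside the
    fence box (x1', y1', x2', y2'), per the four disjointness conditions."""
    x1, y1, x2, y2 = det
    fx1, fy1, fx2, fy2 = fence
    return x1 > fx2 or y1 > fy2 or x2 < fx1 or y2 < fy1

def check_frame(detections, fence):
    """Trigger the alarm if any pedestrian box overlaps the fenced danger area."""
    return any(not pedestrian_outside_fence(d, fence) for d in detections)
```

With the worked example from the text, fence (200, 200)-(300, 300) and pedestrian box (100, 100)-(150, 150), the box is outside the fence and no alarm fires.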
The invention also provides a system for implementing the YOLO crane safety detection method. The system comprises a network camera, a workstation, a traveling crane system, and a router: the network camera collects video of the area below the traveling crane and converts it into digital image signals transmitted locally; the workstation trains and runs the YOLO network model; the traveling crane system is used for testing; and the router configures the network.
the network camera supports an rtsp video transmission protocol, and transmits the acquired video stream to the workstation for processing. The workstation used had 3070GPU for network training and image detection.
The beneficial effects of the invention include:
the invention uses the YOLO algorithm in computer vision as the algorithm for pedestrian detection, replaces the traditional method using a sensor, enables the detection task to be more feasible, only needs to install a camera with lower cost, and compared with the traditional detection, the detection of the YOLO algorithm has higher real-time performance and higher sensitivity, and reduces the accidents caused by missing detection.
A YOLO model pre-trained on a large dataset shows excellent detection performance against complex backgrounds. Since the application scenes of the method all use a top-down view, top views of people must be detected. For this specific problem, top views of people in factory and shopping-mall environments were collected and manually annotated with the LabelImg tool to form a pedestrian top-view dataset, and this dataset was used to perform transfer learning on the YOLO network pre-trained on the large dataset, further improving the model's generalization ability and detection accuracy.
When demarcating the danger area, an electronic fence is drawn in the video using the OpenCV toolkit. To judge whether a pedestrian has broken into the danger area below the traveling crane, an intrusion detection algorithm was written that logically compares the upper-left and lower-right corner coordinates of the pedestrian detection box with those of the electronic fence. Verification shows that the algorithm effectively judges in real time whether a pedestrian has entered the danger area.
Drawings
Fig. 1 is a flow chart of a crane safety detection method of the invention.
Fig. 2 is a diagram showing effects of the present invention in practical operation.
FIG. 3 is a YOLO algorithm model diagram.
Detailed Description
The present invention will be described in further detail with reference to the following specific examples and the accompanying drawings. Except for the contents specifically mentioned below, the procedures, conditions, and experimental methods for carrying out the invention are common knowledge in the art, and the invention is not particularly limited thereto.
The invention provides a YOLO crane safety detection system comprising a network camera, a workstation, a traveling crane system, and a router: the network camera collects video of the area below the traveling crane and converts it into digital image signals transmitted locally; the workstation trains and runs the YOLO network model; the traveling crane system is used for testing; and the router configures the network.
The network camera supports the RTSP video transmission protocol and transmits the acquired video stream to the workstation for processing. The workstation uses an RTX 3070 GPU for network training and image detection.
The invention also provides a method for detecting the safety of the crane by using the system, and as shown in fig. 1, the method specifically comprises the following steps:
step 1: pre-training a YOLO network model, continuously training the network model according to data acquired by an application scene, setting a confidence threshold, and framing out a target with the confidence greater than or equal to the confidence threshold by using a detection frame; when the confidence is set to 0.8, missing detection is often found in actual detection, especially for small targets. In order to reduce the probability of missed detection, the confidence coefficient is reduced.
The step 1 can be specifically divided into the following substeps:
1.1. First, port the YOLO neural network framework to the TensorFlow deep learning framework, and train the YOLO network on the ImageNet 1000-class dataset to obtain a pre-trained network weight file.
1.2. Initialize the YOLO model with the weight file pre-trained in step 1.1. Since the camera is installed above pedestrians, detection is performed from a top-down view, whereas the camera view in conventional datasets is mostly the front of the pedestrian, so a network trained only on conventional datasets cannot achieve a good effect here. To obtain better detection in this specific scene, top-view pictures were taken from the second floor of a factory and a shopping mall and labeled with the LabelImg tool to form a pedestrian top-view dataset of 1000 images. 80% of the dataset is used as the training set and 20% as the test set, and transfer learning is performed on the YOLO network to improve the model's generalization ability. During transfer learning, accuracy is improved by increasing the grid resolution, and the recognition class is fixed to 'person'.
1.3. Set the confidence threshold through repeated experiments and tests.
1.4. Define a detection box for the image in the video area according to the following formulas:

confidence = Pr(Object) × IOU_pred^truth  (1)

(x, y, w, h, confidence)  (2)

The confidence is determined by equation (1), where Pr(Object) indicates whether a pedestrian appears in a given grid cell drawn by the YOLO algorithm (1 if present, 0 if absent), and IOU_pred^truth represents the ratio of area overlap between the predicted box and the ground-truth box. When the YOLO model detects a pedestrian, the five values in formula (2) are output directly: (x, y) is the position of the predicted bounding-box center relative to the grid-cell boundary, and (w, h) is the width and height of the predicted box as a fraction of the whole image's width and height, each between 0 and 1. The predicted box is the box output by the YOLO algorithm; the ground-truth box is the box manually annotated when the pedestrian top-view dataset was created in step 1.2.
Step 2: and a network camera with a WiFi module is installed at the top of the traveling crane, the area for lifting goods below the traveling crane is shot, and the video stream is transmitted to a local server in real time.
Step 3: Use the rectangle method in OpenCV to demarcate a rectangular electronic fence around the cargo-lifting area in the video, and set this area as a danger area.
The method for demarcating the electronic fence in step 3 is as follows: take 1.5 times the area of the lifted goods in the video as the danger area, and use the rectangle method in OpenCV to demarcate the electronic fence around this danger area.
Step 4: Design an intrusion detection algorithm that outputs a signal to trigger an alarm when a pedestrian is detected entering the danger area demarcated by the electronic fence.
The intrusion detection algorithm of step 4 works as follows. The algorithm uses logical comparison to judge whether a person has entered the virtual electronic fence. Taking the upper-left corner of the image as the origin (0, 0) of a Cartesian coordinate system whose axes point right and down within the image, the upper-left corner coordinates (x1, y1) and lower-right corner coordinates (x2, y2) of the target detection box are extracted from the YOLO algorithm; the danger area framed by the electronic fence has upper-left corner coordinates (x1', y1') and lower-right corner coordinates (x2', y2'). Whether the pedestrian is outside the range of the electronic fence is judged by the following four conditions:
When x1 > x2', the abscissa of the upper-left corner of the target box is greater than the abscissa of the lower-right corner of the fence; when y1 > y2', the ordinate of the upper-left corner of the target box is greater than the ordinate of the lower-right corner of the fence; when x2 < x1', the abscissa of the lower-right corner of the target box is less than the abscissa of the upper-left corner of the fence; when y2 < y1', the ordinate of the lower-right corner of the target box is less than the ordinate of the upper-left corner of the fence. If any one of these four conditions holds, the pedestrian is judged to be outside the range of the electronic fence; in all other cases the pedestrian's target box intrudes into the range of the fence and the system triggers the alarm. For example, if the fence has upper-left corner (200, 200) and lower-right corner (300, 300), and the pedestrian detection box has upper-left corner (100, 100) and lower-right corner (150, 150), the coordinates satisfy the condition for the pedestrian being outside the fence, so the pedestrian is outside the danger area at that moment; otherwise the pedestrian is in the danger area. The pseudo code is as follows:
if (x1 > x2' or y1 > y2' or x2 < x1' or y2 < y1')
    the pedestrian is outside the range of the electronic fence
else
    the pedestrian is inside the range of the electronic fence
Step 5: The coordinates of the detection box are obtained from the output of the YOLO algorithm, and the coordinates of the electronic fence area can be adjusted according to user requirements.
Step 6: When a pedestrian breaks into the danger area, capture that frame of the video and save it locally for inspection.
The invention collects top views of people in different scenes to build a dataset. Exploiting the speed, accuracy, and strong generalization ability of the YOLO algorithm, a pedestrian detection model suited to the traveling crane scene is trained on this self-made dataset, and an intrusion detection algorithm is designed to raise an alarm upon intrusion. The model achieves strong generalization under the traveling crane system and significantly improves detection accuracy.
The protection of the present invention is not limited to the above embodiments. Variations and advantages that may occur to those skilled in the art may be incorporated into the invention without departing from the spirit and scope of the inventive concept, which is set forth in the following claims.

Claims (9)

1. A traveling crane safety detection method based on YOLO, characterized by comprising the following steps:
step 1: pre-training a YOLO network model, performing transfer learning on data acquired from the application scene, setting a confidence threshold, and framing targets whose confidence is greater than or equal to the confidence threshold with a detection box;
step 2: installing a network camera with a WiFi module at the top of the traveling crane, shooting the cargo-lifting area below the crane, and transmitting the video stream to a local server in real time;
step 3: demarcating, by drawing lines, a rectangular electronic fence around the cargo-lifting area in the video, and setting the area as a danger area;
step 4: designing an intrusion detection algorithm, and outputting a signal to trigger an alarm when a pedestrian is detected entering the danger area demarcated by the electronic fence;
step 5: when a pedestrian breaks into the danger area, capturing that frame of the video and saving it locally for inspection.
2. The method according to claim 1, wherein the specific method in step 1 is as follows:
1.1, porting the YOLO neural network framework to the TensorFlow deep learning framework, training the YOLO network on the ImageNet 1000-class dataset, and obtaining a pre-trained network weight file;
1.2, initializing the YOLO model with the weight file pre-trained in step 1.1, and additionally creating a pedestrian top-view dataset; taking 80% of the data in the pedestrian top-view dataset as a training set and 20% as a test set, and performing transfer learning on the YOLO network to improve the generalization ability of the model, wherein during the transfer learning of the YOLO network accuracy is improved by increasing the grid resolution, and the recognition class is fixed to 'person';
1.3, setting a confidence threshold through repeated experiments and tests;
1.4, defining a detection box for the image in the video area according to the following formulas:

confidence = Pr(Object) × IOU_pred^truth  (1)

(x, y, w, h, confidence)  (2)

wherein confidence is determined by formula (1); Pr(Object) indicates whether a pedestrian appears in a given grid cell drawn by the YOLO algorithm, being 1 if present and 0 if absent; IOU_pred^truth represents the proportion of area overlap between the predicted box and the ground-truth box; when a person is detected by the YOLO model, the five values in formula (2) are output directly, wherein (x, y) represents the position of the predicted bounding-box center relative to the grid-cell boundary, and (w, h) represents the width and height of the predicted box as a fraction of the whole image's width and height, each between 0 and 1; the predicted box is the box predicted by the YOLO algorithm, and the ground-truth box is the box manually annotated when the pedestrian top-view dataset was created in step 1.2.
3. The method according to claim 1, wherein in step 1 the confidence threshold is set according to the actual detection effect, within the interval (0, 1).
4. The method of claim 2, wherein in step 1.2 the pedestrian top-view dataset is obtained by taking top-view photographs and labeling them using the LabelImg tool.
5. The method of claim 1, wherein in step 3 the method of demarcating the electronic fence comprises: taking 1.5 times the area of the lifted goods in the video as the danger area, and demarcating the electronic fence around this danger area.
6. The method of claim 1, wherein in step 4, the intrusion detection algorithm uses logic to determine whether a person enters the virtual electronic fence, and specifically comprises the following steps:
taking the upper left corner of the image as the original point (0, 0) of a Cartesian coordinate system, and extracting the coordinates (x) of the upper left corner of the target detection frame from the YOLO algorithm1,y1) And the coordinates of the lower right corner (x)2,y2) The coordinate of the upper left corner of the dangerous area framed by the electronic fence is (x'1,y'1) And the lower right corner coordinate is (x'2,y'2) (ii) a And judging whether the pedestrian is in the range of the electronic fence or not by comparing the coordinate values of the target detection frame and the electronic fence frame.
7. The method according to claim 6, wherein the determination method is specifically as follows:
when x is1>x'2Then, the horizontal coordinate of the upper left corner of the target frame is larger than the horizontal coordinate of the lower right corner of the electronic fence; when y is1>y'2Then, the ordinate of the upper left corner of the target frame is larger than the ordinate of the lower right corner of the electronic fence; when x is2<x'1Then, the horizontal coordinate of the lower right corner of the target frame is smaller than the horizontal coordinate of the upper left corner of the electronic fence; when y is2<y'1Then, the ordinate of the lower right corner of the target frame is smaller than the ordinate of the upper left corner of the electronic fence; when any one or more of the four conditions are met, the pedestrian can be judged to be out of the range of the electronic fence; and in other cases, the target frame of the pedestrian invades the range of the electronic fence, and the system triggers the alarm.
8. A YOLO crane safety detection system implementing the method of any one of claims 1-7, the system comprising: a network camera for collecting video of the area below the traveling crane and converting it into digital signals transmitted locally; a workstation for training and running the YOLO network model; a traveling crane system for testing; and a router for configuring the network.
9. The system according to claim 8, wherein the network camera supports the RTSP video transmission protocol and transmits the collected video stream to the workstation for processing; and the workstation uses an RTX 3070 GPU for network training and image detection.
CN202110671257.XA 2021-06-17 2021-06-17 Crane safety detection method and system based on YOLO Pending CN113435278A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110671257.XA CN113435278A (en) 2021-06-17 2021-06-17 Crane safety detection method and system based on YOLO

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110671257.XA CN113435278A (en) 2021-06-17 2021-06-17 Crane safety detection method and system based on YOLO

Publications (1)

Publication Number Publication Date
CN113435278A true CN113435278A (en) 2021-09-24

Family

ID=77756190

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110671257.XA Pending CN113435278A (en) 2021-06-17 2021-06-17 Crane safety detection method and system based on YOLO

Country Status (1)

Country Link
CN (1) CN113435278A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111009089A (en) * 2019-11-25 2020-04-14 国网安徽省电力有限公司建设分公司 Power grid infrastructure site virtual fence system based on RGB-D camera and control method thereof
CN111144232A (en) * 2019-12-09 2020-05-12 国网智能科技股份有限公司 Transformer substation electronic fence monitoring method based on intelligent video monitoring, storage medium and equipment
CN112287816A (en) * 2020-10-28 2021-01-29 西安交通大学 Dangerous working area accident automatic detection and alarm method based on deep learning
CN112949520A (en) * 2021-03-10 2021-06-11 华东师范大学 Aerial photography vehicle detection method and detection system based on multi-scale small samples

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
刘鑫 (Liu Xin): "Applied Research on a Deep-Learning-Based Personnel Detection System for Danger Zones", China Master's Theses Full-text Database, Information Science and Technology *
李雪倩 (Li Xueqian): "Research on an Intelligent Monitoring System of Road-Network Operation Status Based on Traffic Video", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113920535A (en) * 2021-10-12 2022-01-11 广东电网有限责任公司广州供电局 Electronic region detection method based on YOLOv5
CN113920535B (en) * 2021-10-12 2023-11-17 广东电网有限责任公司广州供电局 Electronic region detection method based on YOLOv5
CN114399458A (en) * 2021-11-30 2022-04-26 中国电子科技集团公司第十五研究所 Crossing fence detection method and system based on deep learning target detection
CN115403258A (en) * 2022-08-30 2022-11-29 蚌埠凯盛工程技术有限公司 Glass deep processing system and scheduling method
CN115403258B (en) * 2022-08-30 2023-11-21 蚌埠凯盛工程技术有限公司 Glass deep processing system and scheduling method

Similar Documents

Publication Publication Date Title
CN113435278A (en) Crane safety detection method and system based on YOLO
US6816184B1 (en) Method and apparatus for mapping a location from a video image to a map
US5757287A (en) Object recognition system and abnormality detection system using image processing
CN110232320B (en) Method and system for detecting danger of workers approaching construction machinery on construction site in real time
CN112800860B (en) High-speed object scattering detection method and system with coordination of event camera and visual camera
WO2022135511A1 (en) Method and apparatus for positioning moving object, and electronic device and storage medium
CN107679471B (en) Indoor personnel air post detection method based on video monitoring platform
CN109448326B (en) Geological disaster intelligent group defense monitoring system based on rapid image recognition
CN104966304A (en) Kalman filtering and nonparametric background model-based multi-target detection tracking method
CN105611244A (en) Method for detecting airport foreign object debris based on monitoring video of dome camera
CN110703760B (en) Newly-added suspicious object detection method for security inspection robot
CN106778540B (en) Parking detection is accurately based on the parking event detecting method of background double layer
CN109828267A (en) The Intelligent Mobile Robot detection of obstacles and distance measuring method of Case-based Reasoning segmentation and depth camera
CN112183472A (en) Method for detecting whether test field personnel wear work clothes or not based on improved RetinaNet
JP3486229B2 (en) Image change detection device
CN108789500A (en) Man-machine safety guard system and safety protecting method
CN112287823A (en) Facial mask identification method based on video monitoring
CN110349172B (en) Power transmission line external damage prevention early warning method based on image processing and binocular stereo ranging
CN114140745A (en) Method, system, device and medium for detecting personnel attributes of construction site
Li et al. Intelligent transportation video tracking technology based on computer and image processing technology
CN114332732A (en) Railway crisis monitoring method based on radar vision fusion
CN111274872B (en) Video monitoring dynamic irregular multi-supervision area discrimination method based on template matching
KR20210158037A (en) Method for tracking multi target in traffic image-monitoring-system
Lin et al. Airborne moving vehicle detection for urban traffic surveillance
JPH0514891A (en) Image monitor device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210924