CN113903002A - Tower crane below abnormal intrusion detection method based on tower crane below personnel detection model - Google Patents


Info

Publication number
CN113903002A
CN113903002A (application CN202111188166.7A)
Authority
CN
China
Prior art keywords
tower crane
personnel
detection
full
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111188166.7A
Other languages
Chinese (zh)
Inventor
林其雄
陈畅
段斐
周鑫
蔡蒂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Power Supply Bureau of Guangdong Power Grid Co Ltd
Original Assignee
Guangzhou Power Supply Bureau of Guangdong Power Grid Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Power Supply Bureau of Guangdong Power Grid Co Ltd filed Critical Guangzhou Power Supply Bureau of Guangdong Power Grid Co Ltd
Priority to CN202111188166.7A priority Critical patent/CN113903002A/en
Publication of CN113903002A publication Critical patent/CN113903002A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Abstract

The invention discloses a method for detecting abnormal intrusion below a tower crane based on a personnel detection model for the area below the tower crane. Collected sample pictures of site personnel below a tower crane in an infrastructure construction scene are annotated, and a Double Head-RCNN-based deep learning network model then learns the deep semantic information of top-view personnel features below the tower crane in that scene. The trained model is applied to the test set of top-view below-tower-crane personnel pictures to predict the positions of workers in each picture together with the detection confidence of each position; overlapping detection boxes are finally removed according to a set overlap threshold, completing the detection of abnormal personnel intrusion below the construction tower crane. The method automatically detects abnormal intrusion of personnel below an infrastructure tower crane; it offers high accuracy, good stability, strong anti-interference capability, and high generality, has good robustness, and can be applied in an intelligent supervision system on a construction site.

Description

Tower crane below abnormal intrusion detection method based on tower crane below personnel detection model
Technical Field
The invention relates to a method for detecting abnormal intrusion below an infrastructure tower crane, and in particular to a method for detecting abnormal intrusion below a tower crane based on a Double Head-RCNN personnel detection model for the area below the tower crane.
Background
The safety of personnel below the tower crane is a critical concern on infrastructure construction sites. In recent years, tower crane accidents such as dropped loads, unhooking, rope breakage, and hook failure have occurred, endangering any personnel who rush into the area below. Intrusion of personnel below the tower crane therefore needs to be monitored in real time. However, existing manual monitoring is time-consuming and labor-intensive, and lapses in attention can still allow dangerous situations to arise. Replacing human supervisors with deep learning to intelligently supervise personnel intruding below the tower crane has thus become one of the most common intelligent supervision approaches in automated construction. The most important and fundamental problem such supervision must solve is how to obtain accurate and fast personnel detection results in a complex field environment.
Disclosure of Invention
In order to solve the problems in the background art, the invention provides a method for detecting abnormal intrusion below a tower crane based on a personnel detection model for the area below the tower crane. The method automatically detects abnormal intrusion of personnel below an infrastructure tower crane; it offers high accuracy, good stability, strong anti-interference capability, and high generality, has good robustness, and can be applied in an intelligent supervision system on a construction site.
The technical scheme of the invention is as follows:
the invention comprises the following steps:
1) acquiring sample pictures of site personnel below a tower crane in an infrastructure construction scene, and making a corresponding sample label file for each picture;
2) establishing a deep learning network model based on Double Head-RCNN by combining a fast R-CNN detection network model with an FPN characteristic pyramid network model and a parallel detection Head module;
3) randomly dividing all the obtained field personnel sample pictures below the tower crane into a training set and a testing set;
4) performing data enhancement on the training set to obtain a training set after data enhancement;
5) training the deep learning network model by using the training set after data enhancement to obtain a preliminarily trained personnel detection model below the tower crane;
6) testing the performance of personnel detection below the tower crane after the initial training by adopting a test set, adjusting training parameters and a detection confidence coefficient threshold according to a test result, and optimizing and solidifying a personnel detection model below the tower crane;
7) inputting the image to be detected into the solidified personnel detection model below the tower crane, and outputting the detection result.
The below-tower-crane personnel sample pictures are acquired in various tower-crane construction scenes, taking personnel rushing into the site below the tower crane as the target object, with a monitoring camera facing the target within a left-right deviation of 15 degrees and at a shooting distance of 5 to 25 meters.
The step 2) is specifically as follows:
inputting the second- to fifth-layer feature maps of the backbone network in the Faster R-CNN detection network model into the FPN feature pyramid network model, inputting the output of the FPN feature pyramid network model into the region-of-interest module, and replacing the fully connected layer behind the region-of-interest module with a parallel detection head module, thereby establishing the Double Head-RCNN-based personnel detection model below the tower crane.
The parallel detection head module in step 2) comprises a fully connected layer and a fully convolutional layer arranged in parallel; the output of the region-of-interest module is input into both the fully connected layer and the fully convolutional layer, and the outputs of both layers serve as the output of the personnel detection model below the tower crane.
Replacing the full convolution layer with a bottleneck residual module, wherein the bottleneck residual module comprises an identity mapping branch, 3 convolution layers and an activation layer;
the input of the bottleneck residual module passes through the first and second convolution layers in sequence and then enters the third convolution layer; the input, carried by the identity mapping branch, is added to the output of the third convolution layer, the sum is input into the activation layer, and the output of the activation layer serves as the output of the bottleneck residual module.
The loss function of the tower crane below personnel detection model during training is as follows:
L = w_fc · L_fc + w_conv · L_conv + L_rpn
wherein L is the total loss function of the personnel detection model below the tower crane, w_fc and w_conv are the loss-function weights of the fully connected layer and the fully convolutional layer in the parallel detection head module, and L_fc, L_conv, and L_rpn are the loss functions of the fully connected layer, the fully convolutional layer, and the region-of-interest module, respectively.
Step 4) specifically comprises sequentially applying random flipping, random brightness enhancement, and color channel standardization to the below-tower-crane site personnel sample pictures in the training set to obtain the data-enhanced training set.
The color channel normalization in step 4) is specifically to process each color channel by using the following formula:
x' = (x − μ) / σ
wherein μ and σ denote the mean and standard deviation of the same channel, computed over the RGB channel values of the below-tower-crane intruding-personnel sample pictures on the training set, and x and x' denote the pixel value of one RGB channel of a sample picture before and after color channel standardization.
Testing the performance of the preliminarily trained personnel detection model below the tower crane with the test set in step 6) specifically comprises: counting, over the test set, the proportion of ground-truth boxes whose overlap with a predicted box exceeds the overlap threshold, relative to the total number of ground-truth boxes, and using this proportion as the test result.
The invention has the beneficial effects that:
compared with the traditional detection method for the personnel on the construction site, the method has the advantages of high accuracy, good robustness and universality to various construction environments;
in the invention, on a fast-RCNN target detection model taking ResNet50 as a backbone network, a Double Head structure is adopted to replace the original shared network structure, different functional biases of a full connection layer and a convolution layer are applied, the full connection layer is directly adopted to classify a characteristic diagram, and the full convolution layer is utilized to determine the position of a detection target. Furthermore, a bottleneck residual error network structure in ResNet is adopted to replace a common convolution structure, and the learning capability of the network on the characteristics is improved.
The method realizes high detection precision on the premise of high efficiency and has stronger anti-interference capability.
Drawings
Fig. 1 is a picture of an example training sample.
FIG. 2 shows an example of embodiment notation.
FIG. 3 is a schematic structural diagram of a Double Head module according to the present invention.
Fig. 4 is a schematic structural diagram of a bottleneck residual error module in the present invention.
FIG. 5 is a schematic structural diagram of a personnel detection model below the tower crane in the invention.
FIG. 6 is a diagram illustrating detection and location of abnormal intrusion under a tower crane of a capital construction according to an embodiment.
Detailed Description
The invention is described in further detail below with reference to the figures and the embodiments.
The implementation process of the complete method implemented according to the content of the invention is as follows:
the invention comprises the following steps:
1) Acquire sample pictures of site personnel below a tower crane in infrastructure construction scenes (a typical picture is shown in Fig. 1), and make a corresponding sample label file for each picture. The label files follow the xml label file standard of the Pascal VOC format. Each label file records the image name, the image path, the image height and width, and the center-point position and width of each ground-truth target box; a typical label file is shown in Fig. 2.
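As an illustration (not part of the patent text), a label file in the Pascal VOC xml format described above could be parsed with the Python standard library as follows; the field names (`filename`, `size`, `object`, `bndbox`) are the standard VOC schema, and the corner coordinates are converted to the center-position-plus-size form the description mentions:

```python
import xml.etree.ElementTree as ET

def parse_voc_annotation(xml_string):
    """Parse a Pascal-VOC-style label and return image info plus target
    boxes converted to center (cx, cy) and size (w, h)."""
    root = ET.fromstring(xml_string)
    size = root.find("size")
    info = {
        "filename": root.findtext("filename"),
        "width": int(size.findtext("width")),
        "height": int(size.findtext("height")),
        "boxes": [],
    }
    for obj in root.iter("object"):
        b = obj.find("bndbox")
        xmin, ymin = float(b.findtext("xmin")), float(b.findtext("ymin"))
        xmax, ymax = float(b.findtext("xmax")), float(b.findtext("ymax"))
        info["boxes"].append({
            "label": obj.findtext("name"),
            "cx": (xmin + xmax) / 2, "cy": (ymin + ymax) / 2,
            "w": xmax - xmin, "h": ymax - ymin,
        })
    return info
```

In practice the xml would be read from a file next to each training picture; a string is parsed here only to keep the sketch self-contained.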
The below-tower-crane personnel sample pictures are acquired in various tower-crane construction scenes, taking personnel rushing into the site below the tower crane as the target object, with a monitoring camera kept within a left-right deviation of 15 degrees from the target and shooting at a distance of 5 to 25 meters.
2) Establishing a deep learning network model based on Double Head-RCNN by combining a fast R-CNN detection network model with an FPN characteristic pyramid network model and a parallel detection Head Double Head module;
the step 2) is specifically as follows:
As shown in Figs. 3 and 5, the second- to fifth-layer feature maps of the ResNet50 backbone network in the Faster R-CNN detection network model are input into the FPN feature pyramid network model, the output of the FPN feature pyramid network model is input into the region-of-interest module, and the fully connected layer behind the region-of-interest module is replaced with a parallel detection head (Double Head) module, thereby establishing the Double Head-RCNN-based deep learning network model;
The backbone network extracts features in stages using ResNet50, and an FPN structure then fuses the features of each stage, supplementing the semantic information missing from the network's shallow, high-resolution, low-semantic feature maps and the precise localization information missing from its deep, low-resolution, high-semantic feature maps. Concretely, the image enters the input layer of the ResNet50 network framework model; the outputs of its second through fifth feature-extraction stages are all connected to the input of the FPN feature pyramid network model, which interpolates and fuses these stage feature maps and outputs feature maps at different scales. Layer by layer, each smaller, higher-stage feature map is bilinearly interpolated to the size of the map one stage below, fusion parameters are trained under supervision, and the feature maps of different scales are fused into a feature map group containing the combined information.
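The top-down fusion described above can be sketched in PyTorch as follows; the channel counts and the use of 1×1 lateral convolutions are conventional FPN assumptions rather than details fixed by the patent:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFPN(nn.Module):
    """Minimal FPN top-down fusion sketch over the C2..C5 stage maps."""
    def __init__(self, in_channels=(256, 512, 1024, 2048), out_channels=256):
        super().__init__()
        # 1x1 lateral convs project every stage to a common channel count
        self.lateral = nn.ModuleList(nn.Conv2d(c, out_channels, 1)
                                     for c in in_channels)
        # 3x3 smoothing convs after fusion
        self.smooth = nn.ModuleList(nn.Conv2d(out_channels, out_channels, 3,
                                              padding=1) for _ in in_channels)

    def forward(self, feats):  # feats: C2..C5, spatial size decreasing
        laterals = [l(f) for l, f in zip(self.lateral, feats)]
        # top-down: bilinearly upsample the smaller, higher-stage map and add
        for i in range(len(laterals) - 1, 0, -1):
            laterals[i - 1] = laterals[i - 1] + F.interpolate(
                laterals[i], size=laterals[i - 1].shape[-2:],
                mode="bilinear", align_corners=False)
        return [s(p) for s, p in zip(self.smooth, laterals)]
```

The output is a group of same-channel feature maps at four scales, ready to feed the region-of-interest module.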
The parallel detection head module in step 2) comprises a fully connected layer and a fully convolutional layer arranged in parallel, each used according to its functional bias. Specifically, the fully connected layer is translation-invariant and is therefore used directly to classify the feature map, while the fully convolutional layer is better at extracting object features and is therefore used to localize the detection target. The output of the region-of-interest module is input into both the fully connected layer and the fully convolutional layer, and the outputs of both layers serve as the output of the personnel detection model below the tower crane.
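A PyTorch sketch of this parallel head follows; only the division of labor (fully connected branch for classification, convolutional branch for localization) is taken from the description, while the layer widths are illustrative assumptions:

```python
import torch
import torch.nn as nn

class DoubleHead(nn.Module):
    """Parallel detection head: an fc branch scores classes, a conv branch
    regresses box coordinates, both fed the same pooled RoI features."""
    def __init__(self, in_channels=256, roi_size=7, num_classes=2):
        super().__init__()
        flat = in_channels * roi_size * roi_size
        self.fc_branch = nn.Sequential(          # classification branch
            nn.Flatten(), nn.Linear(flat, 1024), nn.ReLU(),
            nn.Linear(1024, num_classes))
        self.conv_branch = nn.Sequential(        # localization branch
            nn.Conv2d(in_channels, in_channels, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(in_channels, 4 * num_classes))

    def forward(self, roi_feats):  # (N, C, roi_size, roi_size)
        return self.fc_branch(roi_feats), self.conv_branch(roi_feats)
```

Both outputs together form the head's prediction for each region of interest, matching the "both used as the output" wording above.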
The full convolution layer is replaced by a bottleneck residual module, so that the extraction capability of the network is further enhanced. The bottleneck residual module includes identity mapping branches, 3 convolutional layers and an active layer, as shown in fig. 4;
the input of the bottleneck residual module is connected with the third convolution layer after passing through the first convolution layer and the second convolution layer in sequence, the output of the bottleneck residual module after passing through the identity mapping branch is added with the output of the third convolution layer and then input into the active layer, and the output of the active layer is used as the output of the bottleneck residual module. The sizes of convolution kernels of the first convolution layer, the second convolution layer and the third convolution layer are 1 x 1, 3 x 3 and 1 x 1 respectively, wherein the 1 x 1 convolution layer is used for reducing the calculation amount while increasing the number of channels; a 3 x 3 convolutional layer was used for feature extraction.
3) Randomly dividing all the obtained field personnel sample pictures below the tower crane into a training set and a testing set;
the total number of experimental pictures was 3847. 3500 pictures are used for training, and 347 pictures are used as a test set. And performing data enhancement before the training picture enters the model training, and adopting a random turning and color channel standardization method. The data enhanced pictures were scaled to 1000 × 600 size uniformly using ResNet50 model parameters pre-trained on ImageNet. The parameter updating mode is SGD, the initial learning rate is 0.02, the momentum term is 0.9, the weight attenuation coefficient is 1 multiplied by 10 < -4 >, the total batch training size is 16, and the training iteration times are 50000 times. The training is started slowly by adopting 2000 iterations, and the learning rate is reduced by 10 times in the stage reduction mode of the learning rate when the iteration times are 35000 and 45000.
4) Performing data enhancement on the training set to obtain a training set after data enhancement;
Step 4) specifically comprises sequentially applying random flipping, random brightness enhancement, and color channel standardization to the below-tower-crane site personnel sample pictures in the training set, obtaining the data-enhanced training set;
the color channel normalization in step 4) is specifically to process each color channel by using the following formula:
x' = (x − μ) / σ
wherein μ and σ denote the mean and standard deviation of the same channel, computed over the RGB channel values of the below-tower-crane intruding-personnel sample pictures on the training set, and x and x' denote the pixel value of one RGB channel of a sample picture before and after color channel standardization.
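The augmentation steps of step 4) can be sketched as follows; the flip probability and brightness-gain range are illustrative assumptions (the patent does not specify them), while the standardization follows the formula above:

```python
import numpy as np

def normalize_channels(img, mean, std):
    """Per-channel standardization x' = (x - mu) / sigma, with mu and sigma
    the per-channel statistics computed over the training set."""
    return (img.astype(np.float64) - np.asarray(mean)) / np.asarray(std)

def augment(img, rng):
    """Random horizontal flip plus a random brightness gain
    (illustrative 0.8-1.2 range)."""
    if rng.random() < 0.5:
        img = img[:, ::-1, :]            # random flip
    return img * rng.uniform(0.8, 1.2)   # random brightness enhancement
```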
5) Training the deep learning network model by using the training set after data enhancement to obtain a preliminarily trained personnel detection model below the tower crane;
the loss function of the training of the personnel detection model below the tower crane is as follows:
L = w_fc · L_fc + w_conv · L_conv + L_rpn
wherein L is the total loss function of the personnel detection model below the tower crane, w_fc and w_conv are the loss-function weights of the fully connected layer and the fully convolutional layer (or bottleneck residual module) in the parallel detection head module, and L_fc, L_conv, and L_rpn are the loss functions of the fully connected layer, the fully convolutional layer (or bottleneck residual module), and the region-of-interest module, respectively.
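A sketch of this combined loss follows, assuming the conventional choices of cross-entropy for the classification (fully connected) branch and smooth-L1 for the localization (convolutional) branch; the default weight values are illustrative, not fixed by this passage:

```python
import torch
import torch.nn.functional as F

def detection_loss(cls_logits, cls_targets, box_preds, box_targets,
                   rpn_loss, w_fc=2.0, w_conv=2.5):
    """L = w_fc*L_fc + w_conv*L_conv + L_rpn, with assumed per-branch
    loss choices (cross-entropy and smooth-L1)."""
    l_fc = F.cross_entropy(cls_logits, cls_targets)     # fc branch
    l_conv = F.smooth_l1_loss(box_preds, box_targets)   # conv branch
    return w_fc * l_fc + w_conv * l_conv + rpn_loss
```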
In step 5), the training pictures are uniformly scaled to the same size, and the parameters of the ResNet50 network framework model are pre-trained on the ImageNet dataset. The parameter update method during training is SGD, with an initial learning rate of 0.01, a momentum term of 0.9, and a weight decay coefficient of 1×10⁻⁴; the batch size is 4 and training runs for 50000 iterations. Training starts slowly over 2000 iterations, and the learning rate is decreased in stages.
The slow start specifically means that stage 1 of training begins at 0.001 times the initial learning rate, which increases linearly to the initial learning rate over the specified number of iterations; stage 2 then continues training at the initial learning rate.
The stagewise learning-rate reduction specifically means that at 35000 iterations and at 45000 iterations the learning rate is multiplied by 1/10.
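The warmup ("slow start") and stagewise reduction described above can be sketched as a single schedule function; the constants follow the values stated in this embodiment:

```python
def learning_rate(step, base_lr=0.01, warmup_steps=2000,
                  milestones=(35000, 45000), gamma=0.1):
    """Linear warmup from 0.001x the base rate, then multiply the rate
    by 1/10 at each milestone iteration."""
    if step < warmup_steps:
        start = 0.001 * base_lr
        return start + (base_lr - start) * step / warmup_steps
    lr = base_lr
    for m in milestones:
        if step >= m:
            lr *= gamma
    return lr
```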
6) Testing the performance of personnel detection below the tower crane after the initial training by adopting a test set, adjusting training parameters and a detection confidence coefficient threshold according to a test result, and optimizing and solidifying a personnel detection model below the tower crane;
and 6) testing the performance of the personnel detection model below the tower crane after the preliminary training by adopting a test set, specifically comprising the following steps: the statistical test focuses on the proportion of the number of frames with the overlap ratio between the predicted frame and the real frame exceeding the overlap ratio threshold value to the total number of the real frames and is used as a test result, and generally, the overlap ratio threshold value is selected to be 0.5.
7) The image to be detected is input into the solidified personnel detection model below the tower crane, which outputs the detection result.
Compared with traditional models, the method clearly improves the detection result. Table 1 compares the detection results of this method (0.840), a Faster-RCNN detection network with ResNet50 as the backbone (0.802), and a Double Head structure using plain convolution kernels (0.831). Detection accuracy here is the proportion of valid detection boxes obtained on the test set relative to the total number of target boxes, where a detection box is valid if its overlap with the annotated box exceeds 0.5.
TABLE 1 comparison of test models
Model                                        Detection accuracy
Faster-RCNN (ResNet50 backbone)              0.802
Double Head (plain convolution kernels)      0.831
Method of the invention                      0.840
The trained model is run on the test set; prediction boxes are drawn on the test sample pictures and labeled with the prediction confidence (a typical test result is shown in Fig. 6). The average accuracy of the detection model is then calculated, and the better-performing detection model is solidified.
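The abstract describes removing overlapping detection boxes according to a set overlap threshold; a minimal greedy non-maximum-suppression sketch of that step (the threshold value is the tunable parameter):

```python
def _iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def nms(boxes, scores, overlap_thresh=0.5):
    """Greedy non-maximum suppression: visit boxes in descending score
    order and drop any box overlapping a kept box beyond the threshold."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(_iou(boxes[i], boxes[j]) <= overlap_thresh for j in keep):
            keep.append(i)
    return keep
```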
The foregoing detailed description is intended to illustrate and not limit the invention, which is intended to be within the spirit and scope of the appended claims, and any changes and modifications that fall within the true spirit and scope of the invention are intended to be covered by the following claims.

Claims (9)

1. A tower crane below abnormal intrusion detection method based on a tower crane below personnel detection model comprises the following steps:
1) acquiring sample pictures of site personnel below a tower crane in an infrastructure construction scene, and making a corresponding sample label file for each picture;
2) establishing a deep learning network model based on Double Head-RCNN by combining a fast R-CNN detection network model with an FPN characteristic pyramid network model and a parallel detection Head module;
3) randomly dividing all the obtained field personnel sample pictures below the tower crane into a training set and a testing set;
4) performing data enhancement on the training set to obtain a training set after data enhancement;
5) training the deep learning network model by using the training set after data enhancement to obtain a preliminarily trained personnel detection model below the tower crane;
6) testing the performance of personnel detection below the tower crane after the initial training by adopting a test set, adjusting training parameters and a detection confidence coefficient threshold according to a test result, and optimizing and solidifying a personnel detection model below the tower crane;
7) inputting the image to be detected into the solidified personnel detection model below the tower crane, and outputting the detection result.
2. The tower crane below abnormal intrusion detection method based on the tower crane below personnel detection model according to claim 1, characterized in that:
the tower crane lower detection personnel sample picture is a picture acquired by taking personnel rushing into a site below a tower crane as a target object under various infrastructure scenes of the tower crane and by using a monitoring camera to face the left-right deviation of the target object by 15 degrees and within a shooting distance range of 5-25 meters.
3. The method for detecting abnormal intrusion below a tower crane based on Double Head-RCNN as claimed in claim 1, wherein the step 2) is specifically as follows:
inputting the second- to fifth-layer feature maps of the backbone network in the Faster R-CNN detection network model into the FPN feature pyramid network model, inputting the output of the FPN feature pyramid network model into the region-of-interest module, and replacing the fully connected layer behind the region-of-interest module with a parallel detection head module, thereby establishing the Double Head-RCNN-based personnel detection model below the tower crane.
4. The tower crane below abnormal intrusion detection method based on the tower crane below personnel detection model according to claim 1, characterized in that:
the parallel detection head module in the step 2) comprises a full-connection layer and a full-convolution layer which are arranged in parallel, the output of the interested area module is respectively input into the full-connection layer and the full-convolution layer, and the output of the full-connection layer and the output of the full-convolution layer are both used as the output of a personnel detection model below the tower crane.
5. The tower crane below personnel detection model-based tower crane below abnormal intrusion detection method according to claim 4, characterized in that the full convolution layer is replaced by a bottleneck residual error module, and the bottleneck residual error module comprises an identity mapping branch, 3 convolution layers and an activation layer;
the input of the bottleneck residual module passes through the first and second convolution layers in sequence and then enters the third convolution layer; the input, carried by the identity mapping branch, is added to the output of the third convolution layer, the sum is input into the activation layer, and the output of the activation layer serves as the output of the bottleneck residual module.
6. The method for detecting the abnormal intrusion of the lower part of the tower crane based on the personnel detection model of the lower part of the tower crane according to claim 1, wherein a loss function during training of the personnel detection model of the lower part of the tower crane is as follows:
L = w_fc · L_fc + w_conv · L_conv + L_rpn
wherein L is the total loss function of the personnel detection model below the tower crane, w_fc and w_conv are the loss-function weights of the fully connected layer and the fully convolutional layer in the parallel detection head module, and L_fc, L_conv, and L_rpn are the loss functions of the fully connected layer, the fully convolutional layer, and the region-of-interest module, respectively.
7. The tower crane below abnormal intrusion detection method based on the tower crane below personnel detection model according to claim 1, characterized in that step 4) specifically comprises sequentially applying random flipping, random brightness enhancement, and color channel standardization to the below-tower-crane site personnel sample pictures in the training set to obtain the data-enhanced training set.
8. The tower crane below abnormal intrusion detection method based on the tower crane below personnel detection model according to claim 7, wherein the color channel standardization in the step 4) is specifically to process each color channel by adopting the following formula:
x' = (x − μ) / σ
wherein μ and σ denote the mean and standard deviation of the same channel, computed over the RGB channel values of the below-tower-crane intruding-personnel sample pictures on the training set, and x and x' denote the pixel value of one RGB channel of a sample picture before and after color channel standardization.
9. The tower crane below abnormal intrusion detection method based on the tower crane below personnel detection model according to claim 1, characterized in that in step 6) the test set is used to test the performance of the preliminarily trained personnel detection model below the tower crane, specifically: counting, over the test set, the proportion of ground-truth boxes whose overlap with a predicted box exceeds the overlap threshold, relative to the total number of ground-truth boxes, and using this proportion as the test result.
CN202111188166.7A 2021-10-12 2021-10-12 Tower crane below abnormal intrusion detection method based on tower crane below personnel detection model Pending CN113903002A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111188166.7A CN113903002A (en) 2021-10-12 2021-10-12 Tower crane below abnormal intrusion detection method based on tower crane below personnel detection model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111188166.7A CN113903002A (en) 2021-10-12 2021-10-12 Tower crane below abnormal intrusion detection method based on tower crane below personnel detection model

Publications (1)

Publication Number Publication Date
CN113903002A true CN113903002A (en) 2022-01-07

Family

ID=79191672

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111188166.7A Pending CN113903002A (en) 2021-10-12 2021-10-12 Tower crane below abnormal intrusion detection method based on tower crane below personnel detection model

Country Status (1)

Country Link
CN (1) CN113903002A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114764799A (en) * 2022-05-07 2022-07-19 Guangzhou Power Supply Bureau of Guangdong Power Grid Co Ltd Material detection method based on Guided Anchoring

Citations (5)

Publication number Priority date Publication date Assignee Title
CN109019335A (en) * 2018-09-04 2018-12-18 大连理工大学 A kind of Hoisting Security distance detection method based on deep learning
CN111062373A (en) * 2020-03-18 2020-04-24 杭州鲁尔物联科技有限公司 Hoisting process danger identification method and system based on deep learning
CN111079722A (en) * 2020-03-23 2020-04-28 杭州鲁尔物联科技有限公司 Hoisting process personnel safety monitoring method and system
US20200334501A1 (en) * 2019-04-18 2020-10-22 Adobe Inc Robust training of large-scale object detectors with a noisy dataset
CN113313082A (en) * 2021-07-28 2021-08-27 北京电信易通信息技术股份有限公司 Target detection method and system based on multitask loss function


Non-Patent Citations (1)

Title
YUE WU et al.: "Rethinking Classification and Localization for Object Detection", CVPR 2020 *


Similar Documents

Publication Publication Date Title
CN108109385B (en) System and method for identifying and judging dangerous behaviors of power transmission line anti-external damage vehicle
KR20220023335A (en) Defect detection methods and related devices, devices, storage media, computer program products
CN111583198A (en) Insulator picture defect detection method combining FasterR-CNN + ResNet101+ FPN
CN111080598B (en) Bolt and nut missing detection method for coupler yoke key safety crane
CN103366506A (en) Device and method for automatically monitoring telephone call behavior of driver when driving
CN111582072A (en) Transformer substation picture bird nest detection method combining ResNet50+ FPN + DCN
CN111860160A (en) Method for detecting wearing of mask indoors
CN112153334B (en) Intelligent video box equipment for safety management and corresponding intelligent video analysis method
CN106851229B (en) Security and protection intelligent decision method and system based on image recognition
CN111444801A (en) Real-time detection method for infrared target of unmanned aerial vehicle
CN112183472A (en) Method for detecting whether test field personnel wear work clothes or not based on improved RetinaNet
CN110602446A (en) Garbage recovery reminding method and system and storage medium
CN114120272A (en) Multi-supervision intelligent lane line semantic segmentation method fusing edge detection
CN114841920A (en) Flame identification method and device based on image processing and electronic equipment
CN112861646A (en) Cascade detection method for oil unloading worker safety helmet in complex environment small target recognition scene
CN113903002A (en) Tower crane below abnormal intrusion detection method based on tower crane below personnel detection model
CN115239710A (en) Insulator defect detection method based on attention feedback and double-space pyramid
CN108898098A (en) Early stage video smoke detection method based on monitor supervision platform
CN112861762B (en) Railway crossing abnormal event detection method and system based on generation countermeasure network
CN110796008A (en) Early fire detection method based on video image
CN113902958A (en) Anchor point self-adaption based infrastructure field personnel detection method
CN115588207A (en) Monitoring video date recognition method based on OCR
CN115909285A (en) Radar and video signal fused vehicle tracking method
CN116092115A (en) Real-time lightweight construction personnel safety dressing detection method
CN115393419A (en) Pavement pit area detection method and device based on size calibration cloth

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220107