CN109858367B - Visual automatic detection method and system for workers' unsafe behavior of crossing supports - Google Patents

Visual automatic detection method and system for workers' unsafe behavior of crossing supports

Info

Publication number
CN109858367B
CN109858367B CN201811632875.8A
Authority
CN
China
Prior art keywords
workers
mask
unsafe
worker
support
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811632875.8A
Other languages
Chinese (zh)
Other versions
CN109858367A (en)
Inventor
骆汉宾
叶成
孔婷
方伟立
赵能
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN201811632875.8A priority Critical patent/CN109858367B/en
Publication of CN109858367A publication Critical patent/CN109858367A/en
Application granted granted Critical
Publication of CN109858367B publication Critical patent/CN109858367B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Landscapes

  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of construction engineering informatization and discloses a visual automatic detection method for workers' unsafe behavior of crossing supports, comprising the following steps: (1) based on construction site surveillance video, collecting raw data of workers crossing the support structure without safety protection facilities; (2) manually labeling the images in the raw data to form a data set, the labels being divided into 3 classes: steel support, concrete support and worker; (3) training a Mask RCNN-based segmentation mask prediction model using the COCO public data set and the labeled data set; (4) outputting the segmentation masks generated by the prediction model to an overlap judgment module, which judges the positional relation between workers and supports to determine whether the unsafe behavior of a worker crossing a deep foundation pit support exists. The invention can detect unsafe crossing of foundation pit support structures, has low cost, is easy to popularize and has a high degree of automation.

Description

Visual automatic detection method and system for workers' unsafe behavior of crossing supports
Technical Field
The invention belongs to the technical field of construction engineering informatization, and particularly relates to an automatic method and system, based on instance segmentation of surveillance video, for detecting the unsafe behavior of construction site workers crossing a support structure.
Background
Falls from height (FFH) are a major cause of safety accidents on building construction sites, and many safety policies and procedures target them. For example, the Occupational Safety and Health Administration (OSHA) requires that work on a surface six feet or more above a lower level be protected by guardrails, safety nets, personal fall arrest systems (PFAS), and the like.
In order to ensure the safety of foundation pit construction, deep foundation pits are often provided with concrete supports, steel supports and other support structures. On-site experience and surveillance records indicate that workers moving about the construction site often cross unenclosed support structures (steel supports, concrete supports) as a shortcut. Notably, this behavior is usually accompanied by the absence of a safety harness and therefore violates on-site safety regulations. Timely detecting and preventing workers from crossing supports without safety protection measures is therefore important for protecting workers' lives and reducing safety accidents.
Current mainstream detection measures are based on field inspection and, because they depend entirely on on-site safety management personnel, suffer from drawbacks such as being time-consuming, labor-intensive and lacking timeliness. With the rapid development of deep-learning-based machine vision, researchers in related fields have studied many visual automatic detection methods that combine construction site surveillance video with neural networks; the main application areas include worker tracking, progress monitoring, productivity analysis and construction safety. For the behavior of workers crossing a foundation pit support structure without safety measures, there is a technical need for a visual automatic detection method that is low-cost, efficient and easy to popularize.
Disclosure of Invention
In view of the above drawbacks and needs of the prior art, the present invention provides a visual automatic detection method and system for workers' unsafe behavior of crossing supports, suitable for automated visual detection of crossing a support structure without protective facilities. The aim is to collect image data of the support structure without protective facilities from construction site video surveillance to form an unsafe crossing data set, to recognize behavioral features with a Mask RCNN model applied to video image data, and to distinguish whether a worker is crossing the support structure by pixel overlap detection, thereby achieving real-time, continuous, automatic capture of unsafe behavior during construction.
To achieve the above object, according to one aspect of the present invention, there is provided a visual automatic detection method for workers' unsafe behavior of crossing supports, used for automatically detecting whether a worker crosses a foundation pit support structure without protective measures, the method comprising the following steps:
Step 1: construct a deep neural network model for detecting unsafe support-crossing behavior, the model comprising an instance segmentation module and an overlap detection module;
the instance segmentation module is based on Mask RCNN and is used to identify the semantic masks of workers and supports;
the overlap detection module judges whether a worker is on a support according to whether the number of pixels shared by the semantic masks of the worker and of the support exceeds a predefined threshold, thereby determining whether the unsafe behavior of a worker crossing the foundation pit support exists;
Step 2: train the Mask RCNN-based instance segmentation module (a fine-tuning sketch follows these steps), comprising:
2.1, based on surveillance video data stored at building construction sites, collect image data of workers crossing the support structure without safety protection facilities to form the raw data;
2.2, label the semantic masks of workers and supports on the images in the raw data to form a data set containing the raw data and the corresponding semantic masks; randomly divide the data set into a training set, a validation set and a test set;
2.3, initialize the Mask RCNN detection module with the COCO public data set, train the initialized Mask RCNN with the training set, and validate the trained Mask RCNN with the validation set; if the mask recognition accuracy on the validation set meets a preset threshold, proceed to step 3; otherwise return to step 2.1 and label, train and validate again after enlarging the raw data volume;
Step 3: test the detection effect of the deep neural network model with the test set.
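For concreteness, the following is a minimal fine-tuning sketch of step 2.3, assuming PyTorch/torchvision as the framework (the patent does not name one); the data loader convention and the class count are illustrative assumptions, and the COCO initialization of step 2.3 is obtained through pretrained weights:

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_CLASSES = 4  # background + steel support + concrete support + worker

def build_model(num_classes: int = NUM_CLASSES):
    # Step 2.3: initialize from COCO-pretrained weights
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    # Swap the box classification/regression head for our class count
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    # Swap the mask prediction head likewise
    in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, num_classes)
    return model

def train_one_epoch(model, loader, optimizer, device):
    model.train()
    for images, targets in loader:  # targets carry 'boxes', 'labels', 'masks'
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        loss_dict = model(images, targets)  # classification, box and mask (+ RPN) losses
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

The random split of the labeled data into training, validation and test sets (step 2.2) can be done with any standard utility; the validation mask accuracy then gates the loop back to step 2.1 as described above.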
Further, in the data collection step 2.1, in order to reduce the bias of the trained model, picture data with different viewing angles, target sizes and lighting conditions are selected.
Further, in the labeling step 2.2, the labels are divided into 3 classes: steel support, concrete support and worker.
Further, in step 2.3, the Mask RCNN uses a residual network + feature pyramid network as the feature extractor to extract feature images from the raw-data images; the feature images are input to a region proposal network to generate candidate regions; the candidate regions are then aligned, convolved, and the semantic masks are identified.
Further, anchors are introduced into the region proposal network so as to handle objects of different scales and aspect ratios.
Further, the loss function L of Mask RCNN in step 2.3 is as follows:
L = L_cls + L_box + L_mask
where L_cls, L_box and L_mask respectively denote the loss functions for classification, regression and semantic prediction.
In order to achieve the above object, according to another aspect of the present invention, there is provided a visual automatic detection system for workers' unsafe behavior of crossing supports, comprising a video monitoring device, a processor, and a deep neural network model program module for detecting unsafe support-crossing behavior, the program module being obtained after training and validation according to any one of the visual automatic detection methods described above; the processor calls the deep neural network model program module to analyze the images acquired by the video monitoring device, so as to identify whether a worker exhibits the unsafe behavior of crossing the foundation pit support structure without protective measures.
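A hedged sketch of such a system loop is given below, assuming OpenCV for reading the video monitoring device and a torchvision-style model as above; `overlap_alarm` is a hypothetical helper standing in for the overlap detection module detailed later:

```python
import cv2
import torch

def monitor(stream_url, model, device, score_thresh=0.7):
    model.eval().to(device)
    cap = cv2.VideoCapture(stream_url)       # existing site surveillance stream
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)         # BGR -> RGB
        tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
        with torch.no_grad():
            out = model([tensor.to(device)])[0]              # instance segmentation
        keep = out["scores"] > score_thresh
        labels = out["labels"][keep]
        masks = out["masks"][keep, 0] > 0.5                  # binary instance masks
        if overlap_alarm(labels, masks):                     # hypothetical overlap check
            print("ALARM: worker crossing a support without protection")
    cap.release()
```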
In general, compared with the prior art, the technical solution contemplated by the present invention achieves the following beneficial effects:
(1) Low data cost: the data sets used for model training and testing come from building construction site surveillance video, which truly reflects how the model performs in the actual scene and greatly reduces the cost of data acquisition.
(2) Real-time, uninterrupted, easy to apply: by monitoring, in real time and without interruption, the surveillance video commonly available on building construction sites, the video-based detection process can accomplish real-time monitoring and alarming of the unsafe behavior of workers crossing the foundation pit support structure without protective facilities.
(3) Authentic, objective results: detection of unsafe behavior is performed by an algorithm model based on a convolutional neural network, which does not depend on expert experience or human judgment and therefore has a degree of objectivity.
(4) Non-invasive, automated, low cost: images can be acquired directly from the video surveillance already installed on the construction site, achieving non-invasive, fully automatic detection and monitoring while saving time and economic cost.
(5) Easy to popularize: whether the construction site is a residential building, a subway or a shopping mall, foundation pit supports are similar; implementation and application of the method are not affected by the detection object, so the detection model has strong generalization capability and low popularization cost.
Drawings
FIG. 1 is a schematic flow diagram of a preferred embodiment of the present invention;
FIG. 2 is a basic functional block diagram of a preferred embodiment of the present invention;
FIG. 3 is a schematic image processing flow diagram of the Mask RCNN of the preferred embodiment of the present invention;
FIG. 4 is a schematic diagram of a feature image processing flow of the preferred embodiment of the present invention;
FIGS. 5(a)-5(d) are schematic diagrams illustrating candidate region alignment according to the preferred embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
As shown in FIGS. 1 and 2, the present invention provides a visual automatic detection method for workers' unsafe behavior of crossing supports, used for automatically detecting whether a worker crosses a foundation pit support structure without protective measures, the method comprising the following steps:
Step 1: construct a deep neural network model for detecting unsafe support-crossing behavior, the model comprising an instance segmentation module and an overlap detection module;
the instance segmentation module is based on Mask RCNN and is used to identify the semantic masks of workers and supports;
the overlap detection module judges whether a worker is on a support according to whether the number of pixels shared by the semantic masks of the worker and of the support exceeds a predefined threshold, thereby determining whether the unsafe behavior of a worker crossing the foundation pit support exists;
Step 2: train the Mask RCNN-based instance segmentation module, comprising:
2.1, based on surveillance video data stored at building construction sites, collect image data of workers crossing the support structure without safety protection facilities to form the raw data;
2.2, label the semantic masks of workers and supports on the images in the raw data to form a data set containing the raw data and the corresponding semantic masks; randomly divide the data set into a training set, a validation set and a test set;
2.3, initialize the Mask RCNN detection module with the COCO public data set, train the initialized Mask RCNN with the training set, and validate the trained Mask RCNN with the validation set; if the mask recognition accuracy on the validation set meets a preset threshold, proceed to step 3; otherwise return to step 2.1 and label, train and validate again after enlarging the raw data volume;
Step 3: test the detection effect of the deep neural network model with the test set.
The principles and operation of the instance segmentation module and the overlap detection module are described in detail below with reference to FIGS. 2 to 4 and FIGS. 5(a) to 5(d).
First, the instance segmentation module (based on Mask RCNN)
The Mask RCNN network adopted by the invention is currently one of the best-performing algorithms for the instance segmentation task in computer vision. Like other two-stage, candidate-region-based object detection networks, Mask RCNN generates, in the first stage, a series of regions that may contain the targets to be detected and, in the second stage, classifies those regions as background or target with a convolutional neural network. Specifically, Mask RCNN is based primarily on Faster RCNN: the core difference is that a branch for predicting segmentation masks is added for each target candidate region, so that Mask RCNN can not only perform object detection but is also competent for the instance segmentation task.
As shown in FIG. 3, an image input to Mask RCNN first passes through a convolutional neural network (CNN) based feature extractor. After the convolution, pooling, activation and other operations of the CNN module, a series of feature maps of the original image is obtained. The region proposal network is essentially a fully convolutional network; it takes the feature maps as input and generates candidate regions (Regions of Interest) that may contain an object to be detected, each candidate region being accompanied by a classification prediction and a bounding box prediction.
After the candidate regions are generated, they are cropped from the feature maps according to their sizes and positions. Through the RoI Align layer, the local feature maps, originally of different sizes and shapes, are unified to a specific size and shape (aspect ratio), and these local feature maps are taken as input and processed as follows:
(1) As input to fully connected layers, they are processed into the classification and bounding box prediction results, which are output.
(2) Through a CNN module consisting of several convolutional layers, the segmentation mask is generated and output.
1. Network architecture
(1) Feature extractor (CNN)
The first CNN module of the whole neural network model serves as the feature extractor, taking the whole image as input and generating a series of corresponding feature maps. There are naturally many choices for the specific structure of this CNN module: the stronger the convolutional neural network, the stronger its feature extraction capability and the better the effect. In the invention, the model selects ResNet-50 + FPN (residual network + feature pyramid network) as the backbone structure, which provides strong feature expression capability.
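As a small illustration (assuming torchvision's Mask RCNN implementation as a stand-in for the feature extractor described above), the following shows the ResNet-50 + FPN backbone turning one image into multi-scale feature maps:

```python
import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()
image = torch.rand(1, 3, 512, 512)            # one dummy RGB image
with torch.no_grad():
    feature_maps = model.backbone(image)      # OrderedDict of FPN pyramid levels
for level, fmap in feature_maps.items():
    print(level, tuple(fmap.shape))           # e.g. '0' -> (1, 256, 128, 128), ..., 'pool'
```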
(2) Region proposal network (RPN)
Mask RCNN adopts the candidate-region generation method used by the Faster RCNN algorithm: the region proposal network (RPN).
As shown in FIG. 4, the RPN, using the network structure of the ZF network, convolves the feature maps with 3 × 3 convolution kernels to generate class-agnostic candidate regions. The ZF network processing yields a 256-dimensional feature vector which, fed into two independent fully connected (fc) layers, produces 2 × k scores from the classification (cls) layer and 4 × k coordinates from the regression (reg) layer. The classification layer provides the 2 probabilities (detected object / background), and the regression layer provides the 4 coordinate values of the object bounding box (Bbox).
The hyper-parameter k here is the number of anchors introduced in the RPN. In order to handle objects of different scales and aspect ratios, anchors are introduced in the RPN. At each sliding position on the feature map, anchors are centered on the sliding window with three different sizes (128², 256², 512²) and three aspect ratios (1:1, 1:2, 2:1), for a total of k = 9 anchors; each predicted object bounding box is parameterized relative to an anchor. Thus, each position generates 2 × 9 classification probabilities and 4 × 9 bounding box prediction values.
If the size of the feature image output by the last convolutional layer is H × W, the corresponding number of ROIs will be H × W × k.
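As a worked illustration of this anchor scheme (a sketch under the scale/ratio convention stated above, not code from the patent):

```python
import math

def make_anchors(scales=(128, 256, 512), ratios=(1.0, 0.5, 2.0)):
    """Return (w, h) of the k = len(scales) * len(ratios) = 9 anchors, centered at origin."""
    anchors = []
    for s in scales:
        for r in ratios:          # r is the height:width ratio
            w = s / math.sqrt(r)  # preserve the anchor area s*s
            h = s * math.sqrt(r)
            anchors.append((round(w, 1), round(h, 1)))
    return anchors

# 9 (w, h) pairs; each sliding position thus yields 2*9 cls scores and 4*9 reg coordinates
print(make_anchors())
```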
(3) Candidate region alignment (RoI Align)
Compared with Faster RCNN, another major contribution of Mask RCNN is to solve the "misalignment" (mismatch) problem of the RoI Pooling layer (module) through a structural improvement.
In RoI Pooling, the boundaries are quantized: the cell boundaries of the target (local) feature map are forced to snap to integer boundaries of the input feature map, so after RoI Pooling the cells may end up with unequal sizes. Mask RCNN uses RoI Align instead, which avoids quantizing the cell boundaries and keeps every target cell the same size; it also applies bilinear interpolation to compute the feature map values inside each cell more accurately. As shown in FIGS. 5(a) to 5(d), applying interpolation changes the maximum feature value at the upper-left corner from 0.8 to 0.85.
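A minimal demonstration of this bilinear sampling, assuming torchvision.ops.roi_align as a stand-in for the RoI Align layer described above:

```python
import torch
from torchvision.ops import roi_align

feat = torch.arange(16, dtype=torch.float32).reshape(1, 1, 4, 4)  # 1 image, 1 channel, 4x4
# One RoI in (batch_idx, x1, y1, x2, y2) format covering a non-integer region
rois = torch.tensor([[0, 0.5, 0.5, 2.5, 2.5]])
out = roi_align(feat, rois, output_size=(2, 2), spatial_scale=1.0, sampling_ratio=2)
print(out)  # cell values come from bilinear interpolation, not snapped cell boundaries
```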
(4) End network
Processed by RoI Align, all local feature maps have the same size and scale (aspect ratio) and serve as input to the following prediction branches. Classification prediction and bounding box coordinate regression share several fully connected layers: their input is flattened into a one-dimensional vector, and they output, respectively, the class probability prediction and the relative coordinate values of the bounding box. In addition, Mask RCNN uses fully convolutional layers to form the segmentation mask prediction branch; for each candidate region the segmentation output dimension is K × m² (where m × m is the size of the feature map aligned by RoI Align), i.e. K binary semantic masks of size m × m, one per class. Unlike the flattening performed for the fully connected layers, this branch preserves the spatial information of the local feature map.
2. Loss Function (Loss Function):
In the training process of the model, for each candidate region, Mask RCNN has a multi-task loss function composed of three parts: classification, regression and semantic prediction.
L = L_cls + L_box + L_mask
where L_cls, L_box and L_mask respectively denote the loss functions for classification, regression and semantic prediction.
Classification adopts the common cross-entropy function to measure the distance between the predicted class probability distribution and the true distribution; regression adopts a mean-squared-error loss function to measure the differences in position coordinates and size between the predicted and ground-truth bounding boxes; for segmentation, a per-pixel sigmoid with binary cross entropy is adopted as the loss function.
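For illustration, a plain-PyTorch sketch of this three-part loss for one batch of candidate regions (tensor shapes are illustrative assumptions; the MSE box loss follows the text above, and practical implementations differ in detail):

```python
import torch.nn.functional as F

def mask_rcnn_loss(cls_logits, cls_target, box_pred, box_target, mask_logits, mask_target):
    # L_cls: cross entropy between predicted class distribution and true class
    l_cls = F.cross_entropy(cls_logits, cls_target)
    # L_box: mean-squared-error regression loss, as stated in the description
    l_box = F.mse_loss(box_pred, box_target)
    # L_mask: per-pixel sigmoid + binary cross entropy on the true class's mask
    l_mask = F.binary_cross_entropy_with_logits(mask_logits, mask_target)
    return l_cls + l_box + l_mask  # L = L_cls + L_box + L_mask
```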
Second, the overlap detection module (Overlap Detection Module)
After an image is processed by the Mask RCNN-based instance segmentation module, a series of segmentation masks of workers (collected in a list Φ) and a series of segmentation masks of steel supports and concrete supports (collected in a list Ψ) are obtained. Taking one segmentation mask from Φ and one from Ψ in turn, and denoting by σ the pixel overlap area of the two masks, whether the worker corresponding to the pair of masks is crossing on the support is determined according to formula (3):
crossing = 1 (unsafe), if σ > λ; crossing = 0, otherwise    (3)
where the hyper-parameter λ is a threshold value preset according to experiments; in this embodiment, λ is set to 5.
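A minimal NumPy sketch of this overlap judgment, with the threshold of 5 taken from this embodiment (the function and variable names are illustrative assumptions):

```python
import numpy as np

def crossing_detected(worker_masks, support_masks, threshold=5):
    """worker_masks: list of HxW bool arrays (list Phi); support_masks: same (list Psi)."""
    for w in worker_masks:
        for s in support_masks:
            sigma = np.logical_and(w, s).sum()  # pixel overlap area of the two masks
            if sigma > threshold:               # formula (3): overlap above threshold lambda
                return True                     # unsafe crossing behavior detected
    return False
```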
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (7)

1. A visual automatic detection method for workers' unsafe behavior of crossing supports, used for automatically detecting whether a worker crosses a foundation pit support structure without protective measures, characterized in that the method comprises the following steps:
step 1: constructing a deep neural network model for detecting unsafe support-crossing behavior, the model comprising an instance segmentation module and an overlap detection module;
the instance segmentation module being based on Mask RCNN and used to identify the semantic masks of workers and supports;
the overlap detection module judging whether a worker is on a support according to whether the number of pixels shared by the semantic masks of the worker and of the support exceeds a predefined threshold, thereby determining whether the unsafe behavior of a worker crossing the foundation pit support exists;
step 2: training the Mask RCNN-based instance segmentation module, comprising:
2.1, based on surveillance video data stored at building construction sites, collecting image data of workers crossing the support structure without safety protection facilities to form raw data;
2.2, labeling the semantic masks of workers and supports on the images in the raw data to form a data set containing the raw data and the corresponding semantic masks; randomly dividing the data set into a training set, a validation set and a test set;
2.3, initializing the Mask RCNN detection module with the COCO public data set, training the initialized Mask RCNN with the training set, and validating the trained Mask RCNN with the validation set; if the mask recognition accuracy on the validation set meets a preset threshold, proceeding to step 3, otherwise returning to step 2.1 and labeling, training and validating again after enlarging the raw data volume;
step 3: testing the detection effect of the deep neural network model with the test set.
2. The visual automatic detection method for workers' unsafe behavior of crossing supports according to claim 1, characterized in that, in the data collection step 2.1, in order to reduce the bias of the trained model, picture data with different viewing angles, target sizes and lighting conditions are selected.
3. The visual automatic detection method for workers' unsafe behavior of crossing supports according to claim 1 or 2, characterized in that, in the labeling step 2.2, the labels are divided into 3 classes: steel support, concrete support and worker.
4. The visual automatic detection method for workers' unsafe behavior of crossing supports according to any one of claims 1 to 2, characterized in that, in step 2.3, the Mask RCNN uses a residual network + feature pyramid network as the feature extractor to extract feature images from the raw-data images; the feature images are input to a region proposal network to generate candidate regions; and the candidate regions are then aligned, convolved, and the semantic masks identified.
5. The visual automatic detection method for workers' unsafe behavior of crossing supports according to claim 4, characterized in that anchors are introduced into the region proposal network to handle objects of different scales and aspect ratios.
6. The visual automatic detection method for workers' unsafe behavior of crossing supports according to claim 4, characterized in that the loss function L of the Mask RCNN in step 2.3 is as follows:
L = L_cls + L_box + L_mask
where L_cls, L_box and L_mask respectively denote the loss functions for classification, regression and semantic prediction.
7. A visual automatic detection system for workers' unsafe behavior of crossing supports, characterized by comprising a video monitoring device, a processor, and a deep neural network model program module for detecting unsafe support-crossing behavior, the program module being obtained after training and validation according to the visual automatic detection method of any one of claims 1 to 6; the processor calls the deep neural network model program module in real time to analyze, in real time, the images acquired by the video monitoring device, thereby identifying whether a worker exhibits the unsafe behavior of crossing the foundation pit support structure without protective measures.
CN201811632875.8A 2018-12-29 2018-12-29 Visual automatic detection method and system for workers' unsafe behavior of crossing supports Active CN109858367B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811632875.8A CN109858367B (en) 2018-12-29 2018-12-29 Visual automatic detection method and system for workers' unsafe behavior of crossing supports

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811632875.8A CN109858367B (en) 2018-12-29 2018-12-29 Visual automatic detection method and system for workers' unsafe behavior of crossing supports

Publications (2)

Publication Number Publication Date
CN109858367A CN109858367A (en) 2019-06-07
CN109858367B true CN109858367B (en) 2020-08-18

Family

ID=66893210

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811632875.8A Active CN109858367B (en) 2018-12-29 2018-12-29 Visual automatic detection method and system for workers' unsafe behavior of crossing supports

Country Status (1)

Country Link
CN (1) CN109858367B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110414551A (en) * 2019-06-14 2019-11-05 田洪涛 A kind of method and system classified automatically based on RCNN network to medical instrument
CN110363769B (en) * 2019-06-19 2023-03-10 西南交通大学 Image segmentation method for cantilever system of high-speed rail contact net supporting device
CN110992318A (en) * 2019-11-19 2020-04-10 上海交通大学 Special metal flaw detection system based on deep learning
CN111368726B (en) * 2020-03-04 2023-11-10 西安咏圣达电子科技有限公司 Construction site operation face personnel number statistics method, system, storage medium and device
CN112116195B (en) * 2020-07-21 2024-04-16 蓝卓数字科技有限公司 Railway beam production procedure identification method based on example segmentation
CN112541413B (en) * 2020-11-30 2024-02-23 阿拉善盟特种设备检验所 Dangerous behavior detection method and system for forklift driver real operation assessment and coaching
CN113052799A (en) * 2021-03-09 2021-06-29 重庆大学 Osteosarcoma and osteochondroma prediction method based on Mask RCNN network
CN113627302B (en) * 2021-08-03 2023-07-18 云南大学 Ascending construction compliance detection method and system
CN114495166A (en) * 2022-01-17 2022-05-13 北京小龙潜行科技有限公司 Pasture shoe changing action identification method applied to edge computing equipment
CN115272968A (en) * 2022-07-28 2022-11-01 三峡绿色发展有限公司 Computer vision-based construction worker edge unsafe behavior identification method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20160127206A (en) * 2015-04-23 2016-11-03 가천대학교 산학협력단 System and method for removing eyelashes in iris region
CN106446926A (en) * 2016-07-12 2017-02-22 重庆大学 Transformer station worker helmet wear detection method based on video analysis
CN107481260A (en) * 2017-06-22 2017-12-15 深圳市深网视界科技有限公司 A kind of region crowd is detained detection method, device and storage medium
CN108174165A (en) * 2018-01-17 2018-06-15 重庆览辉信息技术有限公司 Electric power safety operation and O&M intelligent monitoring system and method
CN108921004A (en) * 2018-04-27 2018-11-30 淘然视界(杭州)科技有限公司 Safety cap wears recognition methods, electronic equipment, storage medium and system

Also Published As

Publication number Publication date
CN109858367A (en) 2019-06-07

Similar Documents

Publication Publication Date Title
CN109858367B (en) Visual automatic detection method and system for workers' unsafe behavior of crossing supports
Li et al. Automatic defect detection of metro tunnel surfaces using a vision-based inspection system
US20220084186A1 (en) Automated inspection system and associated method for assessing the condition of shipping containers
Chen et al. A self organizing map optimization based image recognition and processing model for bridge crack inspection
Yang et al. Deep learning‐based bolt loosening detection for wind turbine towers
CN112700444B (en) Bridge bolt detection method based on self-attention and central point regression model
CN109685075A (en) A kind of power equipment recognition methods based on image, apparatus and system
WO2023287276A1 (en) Geographic data processing methods and systems for detecting encroachment by objects into a geographic corridor
CN112613454A (en) Electric power infrastructure construction site violation identification method and system
Guo et al. Evaluation-oriented façade defects detection using rule-based deep learning method
CN110910360B (en) Positioning method of power grid image and training method of image positioning model
CN115995056A (en) Automatic bridge disease identification method based on deep learning
CN111144749A (en) Intelligent labeling crowdsourcing platform for power grid images and working method
CN111125290B (en) Intelligent river patrol method and device based on river growth system and storage medium
CN114049620A (en) Image data feature extraction and defect identification method, device and system
CN117173791A (en) Distribution network constructor violation detection method and system based on action recognition
Yu YOLO V5s-based deep learning approach for concrete cracks detection
Park et al. A framework for improving object recognition of structural components in construction site photos using deep learning approaches
Ashraf et al. Machine learning-based pavement crack detection, classification, and characterization: a review
Bush et al. Deep Neural Networks for visual bridge inspections and defect visualisation in Civil Engineering
Samadzadegan et al. Automatic Road Crack Recognition Based on Deep Learning Networks from UAV Imagery
Rakshit et al. Railway Track Fault Detection using Deep Neural Networks
Zhao et al. High-resolution infrastructure defect detection dataset sourced by unmanned systems and validated with deep learning
CN115082650A (en) Implementation method of automatic pipeline defect labeling tool based on convolutional neural network
CN116543327A (en) Method, device, computer equipment and storage medium for identifying work types of operators

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant