CN112884747B - Automatic bridge crack detection system integrating cyclic residual convolution and context extractor network - Google Patents

Publication number
CN112884747B
CN112884747B (application number CN202110222241.0A)
Authority
CN
China
Prior art keywords
convolution
crack
model
bridge crack
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110222241.0A
Other languages
Chinese (zh)
Other versions
CN112884747A (en
Inventor
李刚
周盼
李喜媛
沈倩
兰栋超
陈永强
代玉
张帅龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changan University
Original Assignee
Changan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changan University
Priority to CN202110222241.0A
Publication of CN112884747A
Application granted
Publication of CN112884747B
Legal status: Active

Classifications

    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T7/00 Image analysis › G06T7/0002 Inspection of images, e.g. flaw detection › G06T7/0004 Industrial image inspection
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06F ELECTRIC DIGITAL DATA PROCESSING › G06F18/00 Pattern recognition › G06F18/20 Analysing › G06F18/24 Classification techniques
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS › G06N3/00 Computing arrangements based on biological models › G06N3/02 Neural networks › G06N3/04 Architecture, e.g. interconnection topology › G06N3/045 Combinations of networks
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS › G06N3/00 Computing arrangements based on biological models › G06N3/02 Neural networks › G06N3/08 Learning methods
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING › G06V10/00 Arrangements for image or video recognition or understanding › G06V10/20 Image preprocessing › G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion › G06V10/267 Segmentation by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING › G06V10/00 Arrangements for image or video recognition or understanding › G06V10/40 Extraction of image or video features
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T2207/00 Indexing scheme for image analysis or image enhancement › G06T2207/10 Image acquisition modality › G06T2207/10004 Still image; Photographic image
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T2207/00 Indexing scheme for image analysis or image enhancement › G06T2207/30 Subject of image; Context of image processing › G06T2207/30108 Industrial image inspection › G06T2207/30132 Masonry; Concrete

Abstract

The invention relates to the technical field of image recognition, in particular to an automatic bridge crack detection system integrating cyclic residual convolution and a context extractor network. The method comprises the following steps: acquiring bridge crack images with an image acquisition device and creating a bridge crack data set for training a deep learning model; creating a novel feature encoder-decoder network in which the standard convolutions in the encoder are replaced by cyclic residual convolution blocks (RRCNN); using a context extractor network comprising atrous (hole) convolution, a dense atrous convolution block (DAC) and a residual multi-kernel pooling block (RMP); combining the novel feature encoder-decoder network with the context extractor network to construct an automatic bridge crack detection model; training the model on the bridge crack data set until ideal accuracy is reached; and, with the parameters obtained by training, inputting an image to be detected and outputting the result. The invention has the advantages of low labor cost, high detection precision and strong operability.

Description

Automatic bridge crack detection system integrating cyclic residual convolution and context extractor network
Technical Field
The invention relates to the technical field of image recognition, in particular to an automatic bridge crack detection system integrating cyclic residual convolution and a context extractor network.
Background
With the growth of China's comprehensive national strength, the transportation industry has developed rapidly. Bridge construction in particular has advanced at an unprecedented pace, reaching a world-leading level with widely recognized achievements such as the Hong Kong-Zhuhai-Macao Bridge completed in 2018. Bridge safety and maintenance have therefore drawn the attention of more and more people. Many risks affect bridge safety, such as exposed reinforcing bars, spalling of deck material and cracks in the bridge body; bridge cracks are one of the main risks and an important hidden danger of bridge collapse. To ensure the safety of a bridge, it must be inspected and maintained regularly.
However, the inspection environment of a bridge is complex and dangerous. Conventional bridge crack detection, i.e. visual inspection, not only wastes manpower and time but may also threaten the personal safety of the inspector. In addition, visual inspection cannot accurately locate the damaged portions of a bridge, which severely delays the optimal time for maintenance. To discover and repair bridge cracks in time, reducing both the risk index and the workload of inspection personnel, a method for automatically detecting bridge cracks is urgently needed.
Disclosure of Invention
Based on the above problems, the invention provides an automatic bridge crack detection system integrating cyclic residual convolution with a context extractor network. The method can solve the problems of high labor cost, insufficient precision and the like of the traditional detection method for visually inspecting the bridge cracks. In order to solve the above problems, the present invention provides the following techniques:
an automatic bridge crack detection system integrating cyclic residual convolution with a context extractor network, the method comprising the steps of:
s1, acquiring a bridge crack image by using image acquisition equipment, and creating a bridge crack data set for training a deep learning model;
S2, replacing the standard convolutions with cyclic residual convolution blocks (RRCNN) to obtain a novel feature encoder-decoder network model;
s3, using a context extractor network comprising hole convolution, dense hole convolution blocks (DAC) and residual multi-core pooling blocks (RMP);
s4, constructing an automatic bridge crack detection model by combining a novel feature encoder-decoder network and a context extractor network;
s5, training a bridge crack automatic detection model through a bridge crack data set to obtain ideal accuracy;
s6, inputting an image to be detected according to the parameters obtained by training, and outputting a result.
Further, the step S1 specifically includes:
S11, using a DUI bridge inspection vehicle to approach the area of the bridge to be inspected, acquiring different types of bridge crack images with a handheld portable crack width measuring instrument (model: TD-FCV-21) and a Sony camera (recommended model: A7M3), and classifying the images by crack type;
and S12, marking the crack images by using LabelMe software, amplifying the number of the images by using a sliding window technology, and building a bridge crack data set.
S13, randomly dividing the bridge crack data set in the training : validation : test proportion required for deep learning model training (the latter two shares of the ratio are given as 2:2), obtaining a training set, a validation set and a test set respectively.
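The non-overlapping sliding-window amplification described in S12 (the 240×240 window size is taken from the embodiment below) can be sketched as follows; the function name is an illustrative assumption, not from the patent:

```python
import numpy as np

def sliding_window_crops(image, win=240):
    """Cut an image into non-overlapping win x win patches, sketching the
    sliding-window amplification of S12. Edge remainders that do not fill
    a full window are discarded in this simplified version."""
    h, w = image.shape[:2]
    crops = []
    for y in range(0, h - win + 1, win):
        for x in range(0, w - win + 1, win):
            crops.append(image[y:y + win, x:x + win])
    return crops

# a 480x720 crack photo yields a 2x3 grid of 240x240 patches
patches = sliding_window_crops(np.zeros((480, 720, 3), dtype=np.uint8))
```

Each patch inherits its label from the annotated full image, which is how one photograph can contribute many training samples.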
Further, the step S2 specifically includes:
S21, the cyclic residual convolution block (RRCNN) is an improved module that adds a residual network to a recurrent convolution network; its main component is the recursive convolution layer. Replacing the standard convolutions of the feature encoder with cyclic residual convolution blocks yields the novel feature encoder without adding extra parameters;
S22, the feature encoder and the feature decoder are directly connected through skip connections.
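A minimal PyTorch sketch of a recurrent residual convolution block in the spirit of S21; the class names, channel counts and recursion depth t are illustrative assumptions, not the patented architecture. Note that the recursion reuses the same convolution weights, which is why no extra parameters are added:

```python
import torch
import torch.nn as nn

class RecurrentConv(nn.Module):
    """One recursive convolution layer: the same conv is applied t extra
    times, each time adding the original input back in, so the receptive
    field grows without any new parameters."""
    def __init__(self, ch, t=2):
        super().__init__()
        self.t = t
        self.conv = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.BatchNorm2d(ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        out = self.conv(x)
        for _ in range(self.t):
            out = self.conv(x + out)  # weights are shared across steps
        return out

class RRCNNBlock(nn.Module):
    """Recurrent residual block: a 1x1 conv matches channels, two recurrent
    conv layers form the body, and an identity shortcut adds the residual."""
    def __init__(self, in_ch, out_ch, t=2):
        super().__init__()
        self.match = nn.Conv2d(in_ch, out_ch, 1)
        self.body = nn.Sequential(RecurrentConv(out_ch, t),
                                  RecurrentConv(out_ch, t))

    def forward(self, x):
        x = self.match(x)
        return x + self.body(x)

x = torch.randn(1, 3, 32, 32)
y = RRCNNBlock(3, 16)(x)  # spatial size preserved, channels 3 -> 16
```

In the encoder, such a block would replace each standard double-convolution stage, with the skip connections of S22 carrying its output to the decoder.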
Further, the step S3 specifically includes:
S31, atrous (hole) convolution enlarges the effective convolution kernel, and thus the receptive field, while keeping the number of parameters unchanged, so no downsampling operation is needed;
S32, the dense atrous convolution block (DAC) comprises four cascade branches built from atrous convolutions with different dilation rates, forming four different receptive fields; a 1×1 convolution with rectified linear activation is added to the last three branches;
S33, the residual multi-kernel pooling block (RMP) contains four pooling layers of different sizes that serve as receptive fields encoding global context information. A 1×1 convolution layer added after each pooling stage reduces the dimension of the feature map to 1/N, where N represents the number of channels of the original feature map.
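A hedged PyTorch sketch of the two context-extractor blocks, in the style of CE-Net (from which the DAC/RMP terminology originates); the dilation rates, pooling sizes and class names are assumptions, since the patent does not list them in the text:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DACBlock(nn.Module):
    """Dense atrous convolution block: four cascade branches with different
    dilation rates give four receptive fields; 1x1 convs with rectified
    linear activation on the last three branches; branch outputs are added
    to the input."""
    def __init__(self, ch):
        super().__init__()
        self.b1 = nn.Conv2d(ch, ch, 3, padding=1, dilation=1)
        self.b2 = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=3, dilation=3),
                                nn.Conv2d(ch, ch, 1))
        self.b3 = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1, dilation=1),
                                nn.Conv2d(ch, ch, 3, padding=3, dilation=3),
                                nn.Conv2d(ch, ch, 1))
        self.b4 = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1, dilation=1),
                                nn.Conv2d(ch, ch, 3, padding=3, dilation=3),
                                nn.Conv2d(ch, ch, 3, padding=5, dilation=5),
                                nn.Conv2d(ch, ch, 1))

    def forward(self, x):
        return (x + F.relu(self.b1(x)) + F.relu(self.b2(x))
                  + F.relu(self.b3(x)) + F.relu(self.b4(x)))

class RMPBlock(nn.Module):
    """Residual multi-kernel pooling: four pooling scales encode global
    context; a 1x1 conv after each pool reduces channels to 1 (1/N of the
    original), and the upsampled maps are concatenated with the input."""
    def __init__(self, ch, sizes=(2, 3, 5, 6)):
        super().__init__()
        self.stages = nn.ModuleList(
            nn.Sequential(nn.MaxPool2d(s, stride=s), nn.Conv2d(ch, 1, 1))
            for s in sizes)

    def forward(self, x):
        h, w = x.shape[2:]
        feats = [F.interpolate(s(x), size=(h, w), mode="bilinear",
                               align_corners=False) for s in self.stages]
        return torch.cat([x] + feats, dim=1)  # ch + 4 output channels

feat = torch.randn(1, 8, 24, 24)
dac_out = DACBlock(8)(feat)      # same shape as input
rmp_out = RMPBlock(8)(dac_out)   # 8 + 4 = 12 channels
```

The concatenation in RMP is what makes the block "residual": the original feature map is carried through alongside the pooled context summaries.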
Further, the step S4 specifically includes:
S41, inputting the acquired bridge crack image into the feature encoding module, where the output of each layer of the feature encoder combines the input of the standard convolution layer with the input of the previous recurrent convolution layer;
S42, the context extractor, comprising the DAC block and the RMP block, takes the output of the feature encoder as its input; the features pass through the four cascade branches of the DAC block into the RMP block, where pooling operations and upsampling reduce the dimension of the feature map while ensuring that the output of the context extractor has the same dimensions as the original feature map;
S43, the output of the context extractor is fed to the feature decoder, which uses transposed convolutions to recover high-resolution feature information.
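The transposed-convolution upsampling of S43 can be illustrated with a single layer; the channel counts here are arbitrary assumptions for illustration:

```python
import torch
import torch.nn as nn

# a stride-2 transposed convolution doubles the spatial resolution, which is
# how the decoder recovers high-resolution feature maps stage by stage
up = nn.ConvTranspose2d(in_channels=64, out_channels=32,
                        kernel_size=2, stride=2)
x = torch.randn(1, 64, 56, 56)
y = up(x)  # spatial size 56 -> 112
```

Stacking such layers, each fused with the matching encoder skip connection, brings the feature maps back to the input resolution for the final crack mask.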
Further, the step S5 specifically includes:
S51, selecting the experimental environment for deep learning model training: the network is implemented in PyTorch and runs in an Intel Core i7-4790 CPU environment;
S52, determining the optimal parameters and number of iterations of the model: by comparing gradient descent methods, the mini-batch gradient descent method (MBGD) is selected to train the model, and the adaptive optimization method Adam is selected as the model optimizer;
S53, using a Dice loss function combined with a binary cross-entropy loss as the loss function of the model, which can be expressed as:

L_bce = -ω_k [p log(q) + (1 - p) log(1 - q)]   (4)

L_loss = L_dice + L_bce   (5)

where N is the number of pixels, p_(k,i) ∈ [0,1] and g_(k,i) ∈ {0,1} denote the predicted probability and the ground-truth label for class k respectively, K is the number of classes, and Σ_k ω_k = 1 defines the class weights, which are set empirically.
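A runnable NumPy sketch of the combined loss in equations (4) and (5); the exact Dice formulation and the class weights ω_k are assumptions (the patent states the Dice term but does not reproduce its formula in the text), so a standard soft-Dice term and uniform weighting are used here:

```python
import numpy as np

def dice_bce_loss(p, g, eps=1e-7):
    """Combined loss L = L_dice + L_bce for a binary crack mask.
    p: predicted probabilities in [0,1]; g: ground-truth labels in {0,1}.
    Assumed soft-Dice form; class weights omitted (uniform)."""
    p = np.clip(p, eps, 1 - eps)                       # numerical safety
    bce = -np.mean(g * np.log(p) + (1 - g) * np.log(1 - p))
    dice = 1 - (2 * np.sum(p * g) + eps) / (np.sum(p) + np.sum(g) + eps)
    return dice + bce

# a perfect prediction drives both terms toward zero
loss_perfect = dice_bce_loss(np.array([1.0, 0.0]), np.array([1.0, 0.0]))
```

The Dice term counteracts the heavy class imbalance of thin cracks against large backgrounds, while the BCE term keeps per-pixel gradients well behaved.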
S54, following the methods of S52 and S53, the batch size of the model is determined to be 8, the momentum 0.9, the weight decay 0.0001 and the learning rate 2e-4. As shown in FIG. 6, from the change of the loss value and accuracy as the number of iterations increases during model training, the optimal number of iterations is determined to be 100.
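The training configuration of S52 to S54 (Adam, mini-batches of 8, learning rate 2e-4, weight decay 1e-4) can be sketched with a toy stand-in model and random tensors; the real system would train the encoder-decoder network on the bridge crack data set, and "momentum 0.9" is taken here to correspond to Adam's default beta1 = 0.9 (an interpretation, since Adam has no plain momentum parameter):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# toy stand-in for the detection model and the crack data set (assumption:
# illustrative only, not the patented network)
model = nn.Conv2d(3, 1, 3, padding=1)
data = TensorDataset(torch.randn(16, 3, 24, 24),
                     torch.randint(0, 2, (16, 1, 24, 24)).float())
loader = DataLoader(data, batch_size=8, shuffle=True)  # MBGD, batch size 8

opt = torch.optim.Adam(model.parameters(), lr=2e-4, weight_decay=1e-4)
criterion = nn.BCEWithLogitsLoss()  # stand-in for the Dice + BCE loss

for epoch in range(2):  # the patent trains for 100 iterations; 2 for brevity
    for x, y in loader:
        opt.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        opt.step()
```

In practice the loop would also run the validation set each epoch, which is how the loss/accuracy curves of FIG. 6 are produced.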
S55, for model evaluation, the typical deep learning evaluation indices Precision, Recall and F1-score are selected, together with the semantic segmentation evaluation index mean intersection over union (mIoU). With the optimal parameters and number of iterations obtained in S52, the model attains its best index values. The evaluation indices can be expressed as:

Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1-score = 2 × Precision × Recall / (Precision + Recall)

where True Positive (TP), False Positive (FP), True Negative (TN) and False Negative (FN) represent the four different states of crack detection: a ground-truth crack sub-image identified as a crack, a non-crack sub-image identified as a crack, a non-crack sub-image not identified as a crack, and a ground-truth crack sub-image not identified as a crack, respectively. Under the optimal parameters and number of iterations, the Precision, Recall, F1-score and mIoU of the model are 99.28%, 98.62%, 98.95% and 80.93%, respectively.
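Under the standard definitions of TP, FP, TN and FN given above, the four evaluation indices can be computed as follows; treating mIoU as the mean of the crack and background IoU is an assumption about the exact averaging used:

```python
def crack_metrics(tp, fp, tn, fn):
    """Precision, Recall, F1-score and mIoU from confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    iou_crack = tp / (tp + fp + fn)          # IoU of the crack class
    iou_bg = tn / (tn + fp + fn)             # IoU of the background class
    miou = (iou_crack + iou_bg) / 2          # assumed two-class average
    return precision, recall, f1, miou

p, r, f1, miou = crack_metrics(90, 10, 90, 10)
```

This also shows why mIoU is the stricter index: false positives and false negatives both sit in the IoU denominator, so it lags Precision and Recall, as in the reported 80.93% versus 99.28%.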
Further, the step S6 specifically includes:
According to the optimal parameters and number of iterations determined in step S5, the test set in the database is used for model testing, and different types of cracks and cracks under different noise conditions are selected to evaluate the detection results.
Compared with the prior art, the invention has the beneficial effects that:
1. The invention is low-cost: bridge crack images can be acquired without highly precise instruments, and an ordinary camera suffices;
2. The disclosed automatic bridge crack detection system integrating cyclic residual convolution and a context extractor network achieves high crack detection precision and effectively eliminates the influence of noise on the bridge on crack detection;
3. Training the model does not require too many images, so the required labor cost is low.
drawings
FIG. 1 is a schematic flow chart of the present invention;
FIG. 2 is a flow chart of data collection and database creation in an embodiment of the present invention;
FIG. 3 is a schematic representation of the amplification of a dataset using a sliding window technique in an embodiment of the invention;
FIG. 4 is a block diagram of an automated bridge crack detection system incorporating cyclic residual convolution with context extractor network in an embodiment of the present invention;
FIG. 5 is a block diagram of a feature encoder, context extractor, feature decoder module in an embodiment of the invention;
wherein, fig. 5 (a) is a hole convolution schematic diagram, fig. 5 (b) is an RRCNN block schematic diagram, fig. 5 (c) is a DAC block schematic diagram, fig. 5 (d) is an RMP block schematic diagram, and fig. 5 (e) is a feature decoder block schematic diagram;
FIG. 6 shows the variation of the loss value and the accuracy with the number of iterations in the model training process of the automatic bridge crack detection system according to the embodiment of the present invention;
FIG. 7 is a graph showing the effect of detecting a crack in a bridge according to an embodiment of the present invention;
wherein fig. 7 (a) is a detection result of detecting different types of cracks, and fig. 7 (b) is a detection result of detecting cracks under different noise conditions;
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings, for the purpose of making the objects, technical solutions and advantages of the present invention more apparent.
Example 1:
as shown in fig. 1, an automatic bridge crack detection system that fuses cyclic residual convolution with a context extractor network, comprising the steps of:
step 1, acquiring bridge crack images by using image acquisition equipment, and creating a bridge crack data set for training a deep learning model; the method comprises the following steps:
S11, using the DUI bridge inspection vehicle shown in FIG. 2 to approach the area of the bridge to be inspected, acquiring different types of bridge crack images with the image acquisition equipment (a handheld portable crack width measuring instrument, model TD-FCV-21, and a Sony camera, recommended model A7M3), and classifying the images by crack type;
S12, labeling the crack images with the deep learning annotation tool LabelMe. The number of images is then amplified with the sliding window technique shown in FIG. 3: a fixed 240×240 window slides over the bridge crack images without overlap, yielding a bridge crack data set that meets the number required for deep learning model training.
S13, randomly dividing the bridge crack data set in the training : validation : test proportion required for deep learning model training (the latter two shares of the ratio are given as 2:2), obtaining a training set, a validation set and a test set respectively.
Step 2, replacing standard convolution with a cyclic residual convolution block (RRCNN) to obtain a novel characteristic encoder-decoder network model;
S21, replacing the standard convolution blocks in the feature encoder with the RRCNN block shown in FIG. 5(b). RRCNN is an improved module that adds a residual network to a recurrent convolution network; its main component is the recursive convolution layer, and the improvement adds no extra parameters, yielding the novel feature encoder;
S22, the feature decoder block is shown in FIG. 5(e); the feature encoder and the feature decoder are directly connected through skip connections, as shown in FIG. 4.
Step 3, using a context extractor network comprising hole convolution, dense hole convolution blocks (DACs) and residual multi-core pooling blocks (RMPs);
S31, with the same number of parameters and computations, atrous convolution enlarges the receptive field: the effective coverage of the original 3×3 convolution kernel grows to 5×5 or larger, so no downsampling operation is needed; the schematic diagram is shown in FIG. 5(a);
S32, the DAC block comprises four cascade branches built from atrous convolutions with different dilation rates, forming four different receptive fields; a 1×1 convolution with rectified linear activation is added to the last three branches; the schematic diagram is shown in FIG. 5(c);
S33, four pooling layers of different sizes are included in the RMP block to serve as receptive fields encoding global context information. A 1×1 convolution layer is added after each pooling stage to reduce the dimension of the feature map to 1/N, where N represents the number of channels of the original feature map; the schematic diagram is shown in FIG. 5(d).
S34, the atrous convolution, the DAC block and the RMP block together form the context extractor network.
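The receptive-field claim in S31 (a 3×3 kernel acting like 5×5 or larger) follows from the standard effective-kernel formula for dilated convolution, which can be checked directly:

```python
def effective_kernel(k, d):
    """Effective kernel size of a k x k convolution with dilation rate d:
    (k - 1) gaps stretched by d, plus the centre tap."""
    return d * (k - 1) + 1

# with dilation 2 a 3x3 atrous kernel covers the same area as a 5x5 kernel,
# with dilation 3 a 7x7 area, still at the cost of only 9 weights
sizes = [effective_kernel(3, d) for d in (1, 2, 3)]  # [3, 5, 7]
```

This is why the context extractor can grow its receptive field without pooling: the kernel is spread out rather than the feature map shrunk.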
Step 4, constructing an automatic bridge crack detection model by combining a novel feature encoder-decoder network and a context extractor network;
S41, as shown in FIG. 4, inputting the 224×224 bridge crack image to be detected from the data set into the feature encoding module, where the output of each layer of the feature encoder combines the input of the standard convolution layer with the input of the previous recurrent convolution layer;
S42, the context extractor, comprising the DAC block and the RMP block, takes the output of the feature encoder as its input; the features pass through the four cascade branches of the DAC block into the RMP block, where pooling operations and upsampling reduce the dimension of the feature map while ensuring that the output of the context extractor has the same dimensions as the original feature map;
S43, the output of the context extractor is fed to the feature decoder, which uses transposed convolutions to recover high-resolution feature information.
Step 5, training the bridge crack automatic detection model on the bridge crack data set to obtain ideal accuracy;
S51, selecting the experimental environment for deep learning model training: the operating system is Windows 10, the framework is PyTorch, and the hardware comprises an Intel Core i7-4790 CPU @ 3.60 GHz with 16 GB memory and a single Titan X GPU with 12 GB memory;
S52, determining the optimal parameters and number of iterations of the model: by comparing gradient descent methods, the mini-batch gradient descent method (MBGD) is selected to train the model, and the adaptive optimization method Adam is selected as the model optimizer;
S53, using a Dice loss function combined with a binary cross-entropy loss as the loss function of the model, which can be expressed as:

L_bce = -ω_k [p log(q) + (1 - p) log(1 - q)]   (4)

L_loss = L_dice + L_bce   (5)

where N is the number of pixels, p_(k,i) ∈ [0,1] and g_(k,i) ∈ {0,1} denote the predicted probability and the ground-truth label for class k respectively, K is the number of classes, and Σ_k ω_k = 1 defines the class weights, which are set empirically.
S54, following the methods of S52 and S53, the batch size of the model is determined to be 8, the momentum 0.9, the weight decay 0.0001 and the learning rate 2e-4. As shown in FIG. 6, from the change of the loss value and accuracy as the number of iterations increases during model training, the optimal number of iterations is determined to be 100.
S55, for model evaluation, the typical deep learning evaluation indices Precision, Recall and F1-score are selected, together with the semantic segmentation evaluation index mean intersection over union (mIoU). With the optimal parameters and number of iterations obtained in S52 and S53, the model attains its best index values. The evaluation indices can be expressed as:

Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1-score = 2 × Precision × Recall / (Precision + Recall)

where True Positive (TP), False Positive (FP), True Negative (TN) and False Negative (FN) represent the four different states of crack detection: a ground-truth crack sub-image identified as a crack, a non-crack sub-image identified as a crack, a non-crack sub-image not identified as a crack, and a ground-truth crack sub-image not identified as a crack, respectively. Under the optimal parameters and number of iterations, the Precision, Recall, F1-score and mIoU of the model are 99.28%, 98.62%, 98.95% and 80.93%, respectively.
Step 6, according to the optimal parameters and number of iterations determined in step 5, the test set in the database is used for model testing, and detection results are evaluated for different crack types and different noise conditions. As shown in FIG. 7, FIG. 7(a) shows the detection results for oblique cracks, cross cracks, net cracks and wide cracks, and FIG. 7(b) shows the detection results under handwriting, shadow, low-light and reinforcing-bar noise conditions.
The above examples of the present invention are merely illustrative of the present invention and are not intended to represent limitations on the scope of the invention. Reasonable variations and modifications will be apparent to those skilled in the art without departing from the spirit of the invention, and such variations are intended to be included within the scope of the appended claims.

Claims (6)

1. An automatic bridge crack detection system integrating cyclic residual convolution and a context extractor network is characterized by comprising the following steps:
s1, acquiring a bridge crack image by using image acquisition equipment, and creating a bridge crack data set for training a deep learning model;
s2, replacing standard convolution by a cyclic residual convolution block RRCNN to obtain a novel characteristic encoder-decoder network model;
s3, using a context extractor network comprising a hole Atrous convolution, a dense hole convolution block DAC and a residual multi-core pooling block RMP;
S4, constructing an automatic bridge crack detection model by combining the novel feature encoder-decoder network and the context extractor network: the acquired bridge crack image is input into the feature encoding module, where the output of each layer of the feature encoder combines the input of the standard convolution layer with the input of the previous recurrent convolution layer; the context extractor, comprising the DAC block and the RMP block, takes the output of the feature encoder as its input; the features pass through the four cascade branches of the DAC block into the RMP block, where pooling operations and up-sampling reduce the dimension of the feature map while ensuring that the output of the context extractor has the same dimensions as the original feature map; the output of the context extractor is input into the feature decoder, which uses transposed convolutions to recover high-resolution feature information;
s5, training a bridge crack automatic detection model through a bridge crack data set to obtain ideal accuracy;
s6, inputting an image to be detected according to the parameters obtained by training, and outputting a result.
2. The automatic bridge crack detection system of the fusion cyclic residual convolution and context extractor network according to claim 1, wherein said step S1 specifically comprises:
S11, using a DUI bridge inspection vehicle to approach the area of the bridge to be inspected, acquiring different types of bridge crack images through a handheld portable crack width measuring instrument and a Sony camera, and classifying the images by crack type;
s12, marking crack images by using LabelMe software, amplifying the number of the images by using a sliding window technology, and building a bridge crack data set;
S13, randomly dividing the bridge crack data set in the training : validation : test proportion required for deep learning model training (the latter two shares of the ratio are given as 2:2), obtaining a training set, a validation set and a test set respectively.
3. The automatic bridge crack detection system of the fusion cyclic residual convolution and context extractor network according to claim 1, wherein said step S2 specifically comprises:
s21, a cyclic residual convolution block is an improved module with a residual network added by the cyclic convolution network, the main component is a recursive convolution layer, and a novel characteristic encoder is obtained by replacing standard convolution of the characteristic encoder with the cyclic residual convolution block without additionally adding parameters;
s22, the feature encoder and the feature decoder are directly connected through jump connection.
4. The automatic bridge crack detection system of the fusion cyclic residual convolution and context extractor network according to claim 1, wherein said step S3 specifically comprises:
S31, with the same parameters, the effective convolution kernel is enlarged to expand the receptive field, so no downsampling operation is needed;
S32, four cascade branches are included in the DAC block, built from atrous convolutions with different dilation rates and forming four different receptive fields; a 1×1 convolution with rectified linear activation is added to the last three branches;
S33, four pooling layers of different sizes are included in the RMP block to serve as receptive fields encoding global context information, and a 1×1 convolution layer is added after each pooling stage to reduce the dimension of the feature map to 1/N, where N represents the number of channels of the original feature map.
5. The automatic bridge crack detection system of the fusion cyclic residual convolution and context extractor network according to claim 1, wherein said step S5 specifically comprises:
S51, selecting the experimental environment for deep learning model training: the network is implemented in PyTorch and runs in an Intel Core i7-4790 CPU environment;
S52, determining the optimal parameters and number of iterations of the model: by comparing gradient descent methods, the mini-batch gradient descent method (MBGD) is selected to train the model, and the adaptive optimization method Adam is selected as the model optimizer;
S53, using a Dice loss function combined with a binary cross-entropy loss as the loss function of the model, which can be expressed as:

L_bce = -ω_k [p log(q) + (1 - p) log(1 - q)]   (4)

L_loss = L_dice + L_bce   (5)

where N is the number of pixels, p_(k,i) ∈ [0,1] and g_(k,i) ∈ {0,1} denote the predicted probability and the ground-truth label for class k respectively, K is the number of classes, and Σ_k ω_k = 1 defines the class weights, which are set empirically;
S54, according to the methods of S52 and S53, the batch size of the model is determined to be 8, the momentum 0.9, the weight decay 0.0001 and the learning rate 2e-4; from the change of the loss value and accuracy as the number of iterations increases during model training, the optimal number of iterations can be determined to be 100;
S55, for model evaluation, the typical deep learning evaluation indices Precision, Recall and F1-score are selected, together with the semantic segmentation evaluation index mean intersection over union (mIoU); with the optimal parameters and number of iterations obtained in S52, the model attains its best index values, and the evaluation indices can be expressed as:
Precision = TP / (TP + FP); Recall = TP / (TP + FN); F1-score = 2 × Precision × Recall / (Precision + Recall),
where True Positive (TP), False Positive (FP), True Negative (TN) and False Negative (FN) represent the four different states of crack detection, namely a ground-truth crack sub-image identified as a crack, a non-crack sub-image identified as a crack, a non-crack sub-image not identified as a crack, and a ground-truth crack sub-image not identified as a crack, respectively; under the optimal parameters and number of iterations, the Precision, Recall, F1-score and mIoU of the model are 99.28%, 98.62%, 98.95% and 80.93%, respectively.
6. The automatic bridge crack detection system of the fusion cyclic residual convolution and context extractor network according to claim 1, wherein said step S6 specifically comprises:
According to the optimal parameters and number of iterations determined in step S5, the test set in the database is used for model testing, and different types of cracks and cracks under different noise conditions are selected to evaluate the detection results.
CN202110222241.0A 2021-02-28 2021-02-28 Automatic bridge crack detection system integrating cyclic residual convolution and context extractor network Active CN112884747B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110222241.0A CN112884747B (en) 2021-02-28 2021-02-28 Automatic bridge crack detection system integrating cyclic residual convolution and context extractor network

Publications (2)

Publication Number Publication Date
CN112884747A CN112884747A (en) 2021-06-01
CN112884747B true CN112884747B (en) 2024-04-16

Family

ID=76054930

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110222241.0A Active CN112884747B (en) 2021-02-28 2021-02-28 Automatic bridge crack detection system integrating cyclic residual convolution and context extractor network

Country Status (1)

Country Link
CN (1) CN112884747B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113610778B (en) * 2021-07-20 2024-03-26 武汉工程大学 Bridge surface crack detection method and system based on semantic segmentation
CN113658142B (en) * 2021-08-19 2024-03-12 江苏金马扬名信息技术股份有限公司 Hip joint femur near-end segmentation method based on improved U-Net neural network
CN113838014B (en) * 2021-09-15 2023-06-23 南京工业大学 Aero-engine damage video detection method based on double spatial distortion
CN114092815B (en) * 2021-11-29 2022-04-15 自然资源部国土卫星遥感应用中心 Remote sensing intelligent extraction method for large-range photovoltaic power generation facility
CN114418937B (en) * 2021-12-06 2022-10-14 北京邮电大学 Pavement crack detection method and related equipment
CN114267003B (en) * 2022-03-02 2022-06-10 城云科技(中国)有限公司 Road damage detection method, device and application
CN114662619B (en) * 2022-05-23 2022-08-16 中大检测(湖南)股份有限公司 Bridge monitoring system based on multi-source data fusion
CN115239733B (en) * 2022-09-23 2023-01-03 深圳大学 Crack detection method and apparatus, terminal device and storage medium
CN115880557B (en) * 2023-03-02 2023-05-30 中国科学院地理科学与资源研究所 Pavement crack extraction method and device based on deep learning

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111127449A (en) * 2019-12-25 2020-05-08 汕头大学 Automatic crack detection method based on encoder-decoder
WO2020156028A1 (en) * 2019-01-28 2020-08-06 南京航空航天大学 Outdoor non-fixed scene weather identification method based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Sun Mengyuan; Liu Yi; Fan Wenhui. Pavement image crack segmentation method based on multi-scale convolutional network. Software. 2020, (Issue 05), full text. *
Zhang Zhuolin; Zhao Jianwei; Cao Feilong. Constructing deep neural networks with dilated convolutions for high-resolution image reconstruction. Pattern Recognition and Artificial Intelligence. 2019, (Issue 03), full text. *


Similar Documents

Publication Publication Date Title
CN112884747B (en) Automatic bridge crack detection system integrating cyclic residual convolution and context extractor network
CN111507990B (en) Tunnel surface defect segmentation method based on deep learning
CN111797890A (en) Method and system for detecting defects of power transmission line equipment
CN111507998B (en) Depth cascade-based multi-scale excitation mechanism tunnel surface defect segmentation method
CN111861978A (en) Bridge crack example segmentation method based on Faster R-CNN
CN112949783B (en) Road crack detection method based on improved U-Net neural network
CN111966076A (en) Fault positioning method based on finite-state machine and graph neural network
CN110060251A (en) A kind of building surface crack detecting method based on U-Net
CN111899225A (en) Nuclear power pipeline defect detection method based on multi-scale pyramid structure
CN114049356B (en) Method, device and system for detecting structure apparent crack
CN111507972A (en) Tunnel surface defect detection method combining convolutional neural network and support vector machine
CN114705689A (en) Unmanned aerial vehicle-based method and system for detecting cracks of outer vertical face of building
CN112270187A (en) Bert-LSTM-based rumor detection model
CN115272826A (en) Image identification method, device and system based on convolutional neural network
CN114758329A (en) System and method for predicting temperature of target area in thermal imaging graph based on deep learning
CN113420619A (en) Remote sensing image building extraction method
CN115995056A (en) Automatic bridge disease identification method based on deep learning
CN117522149A (en) Tunnel security risk identification method and device and security management platform
CN113516652A (en) Battery surface defect and adhesive detection method, device, medium and electronic equipment
CN116481791A (en) Steel structure connection stability monitoring system and method thereof
CN116542911A (en) End-to-end semi-supervised steel surface defect detection method and system
CN113343861B (en) Remote sensing image water body region extraction method based on neural network model
CN115221045A (en) Multi-target software defect prediction method based on multi-task and multi-view learning
CN115017015A (en) Method and system for detecting abnormal behavior of program in edge computing environment
CN117079048B (en) Geological disaster image recognition method and system based on CLIP model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Li Gang

Inventor after: Zhou Pan

Inventor after: Li Xiyuan

Inventor after: Shen Qian

Inventor after: Lan Dongchao

Inventor after: Chen Yongqiang

Inventor after: Dai Yu

Inventor after: Zhang Shuailong

Inventor before: Li Gang

Inventor before: Li Xiyuan

Inventor before: Shen Qian

Inventor before: Lan Dongchao

Inventor before: Chen Yongqiang

Inventor before: Dai Yu

Inventor before: Zhang Shuailong

GR01 Patent grant