CN113284107B - Attention mechanism-introduced improved U-net concrete crack real-time detection method


Info

Publication number
CN113284107B
CN113284107B (application CN202110572608.1A)
Authority
CN
China
Prior art keywords
feature map
layer
attention module
detection
passing
Prior art date
Legal status
Active
Application number
CN202110572608.1A
Other languages
Chinese (zh)
Other versions
CN113284107A (en)
Inventor
Li Rui
Yu Meng
Xu Kefei
Zhang Jiaxin
Yang Ping'an
Wu Decheng
Cheng Longqi
Current Assignee
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications
Priority to CN202110572608.1A
Publication of CN113284107A
Application granted
Publication of CN113284107B

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 7/00 Image analysis
            • G06T 7/0002 Inspection of images, e.g. flaw detection
              • G06T 7/0004 Industrial image inspection
                • G06T 7/0008 Industrial image inspection checking presence/absence
          • G06T 2207/00 Indexing scheme for image analysis or image enhancement
            • G06T 2207/20 Special algorithmic details
              • G06T 2207/20081 Training; Learning
              • G06T 2207/20084 Artificial neural networks [ANN]
            • G06T 2207/30 Subject of image; Context of image processing
              • G06T 2207/30108 Industrial image inspection
                • G06T 2207/30132 Masonry; Concrete
        • G06F ELECTRIC DIGITAL DATA PROCESSING
          • G06F 18/00 Pattern recognition
            • G06F 18/20 Analysing
              • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
                • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
        • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N 3/00 Computing arrangements based on biological models
            • G06N 3/02 Neural networks
              • G06N 3/04 Architecture, e.g. interconnection topology
                • G06N 3/048 Activation functions
              • G06N 3/08 Learning methods
        • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V 10/00 Arrangements for image or video recognition or understanding
            • G06V 10/20 Image preprocessing
              • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
                • G06V 10/267 Segmentation of patterns by performing operations on regions, e.g. growing, shrinking or watersheds
            • G06V 10/40 Extraction of image or video features
              • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
                • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]


Abstract

The invention relates to a real-time concrete crack detection method based on an improved U-net that introduces an attention mechanism, and belongs to the field of image detection. The method comprises the following steps: S1: acquiring a data set; S2: labeling the cracks in the data set pictures pixel by pixel; S3: constructing an attention mechanism-introduced improved U-net convolutional neural network segmentation model, which, on the basis of the traditional U-net network, uses MobileNet as the backbone network, introduces a channel attention module in the encoding part, and introduces a spatial attention module in the decoding part; S4: sending the data set pictures into the improved network model for training; S5: packaging the trained optimal model onto a detection platform; S6: acquiring pictures with a mobile acquisition platform and transmitting them to a mobile terminal; S7: uploading the pictures from the mobile terminal to the detection platform for detection. The invention can acquire and detect at the same time, increasing the working efficiency of personnel inspecting concrete surface structure damage.

Description

Attention mechanism-introduced improved U-net concrete crack real-time detection method
Technical Field
The invention belongs to the field of image detection, relates to the field of concrete structure health detection and evaluation, and particularly relates to a real-time concrete crack detection method using an improved U-net that introduces an attention mechanism.
Background
In the field of engineering construction, concrete has been widely applied to public transportation facilities, residential buildings, and public service facilities, with great success that has made daily life more convenient. However, concrete facilities are inevitably subjected to various complex factors during operation and use (such as rain and snow scouring, load impact, earthquakes, and debris flows). These factors reduce the strength of the concrete structure and eventually produce surface damage of different shapes, shortening the service life of the facilities and creating potential safety hazards. Among the many types of concrete damage, structural cracks are among the most significant and most dangerous. How to rapidly and accurately detect cracks in existing concrete structures is therefore a key problem in the field of engineering construction.
The traditional concrete crack detection method mainly relies on manual inspection, which has great limitations: it cannot detect concrete crack damage accurately, quickly, and in real time, which hinders the maintenance of concrete facilities. Current crack detection methods fall into two main categories:
1) Traditional image detection.
Traditional image detection uses digital image processing techniques to automate concrete inspection, an improvement over earlier manual detection, but problems remain. Most traditional digital image processing techniques detect based on specific characteristics of concrete cracks: before detection, an engineer must tune the parameters of the algorithm model by experience to find the best match, and must retune them whenever the detection environment changes. This leaves the algorithm model insufficiently robust, forces the engineer to spend a large amount of time on model adjustment, and cannot fully remove the influence of the engineer's subjective judgment, all of which affect the detection result.
For example, Zhao Fang et al. proposed an improved Canny edge detection method (combining multi-scale morphology and bilateral filtering) based on the traditional Canny operator to address noise and edge-detection problems in road crack detection; for detecting and identifying various pavement cracks under limited contrast, denoising with CLAHE and median filtering was proposed, with morphology used to remove pseudo cracks from the image; Welchoste et al. proposed an automatic crack detection method based on adaptive thresholding for fine cracks and cracks with small gray-level differences; and Yaojingping et al. presented a Matlab-based image crack detection system in the design and research of a pavement crack detection system based on image processing.
Although these traditional image detection methods alleviate the time and labor costs of manual inspection to some extent, they still have limitations such as low detection precision and poor robustness.
2) Crack detection using deep learning methods.
To overcome the shortcomings of conventional image detection, researchers have begun to study crack detection with deep learning. Although deep learning improves detection precision and robustness over traditional image processing, professional engineers are still required to collect the images in the field and then bring them back to a laboratory for computer-based detection, which remains time- and labor-consuming.
For example, Zhang L. et al. used a deep convolutional neural network to automatically detect pavement cracks; the method recognizes cracks well in strongly noisy environments but performs only moderately in weakly noisy ones. Yun Liu et al. proposed an edge detection method using multiple convolutional features; the network makes full use of the multi-scale, multi-level information of the target object and fuses all convolutional features to bring the training image close to the target image, but this greatly increases model complexity and detection time. Jianghong Tang et al. proposed a multitask enhanced dam crack image detection method based on Faster R-CNN (ME-Faster R-CNN) to improve small-scale detection accuracy, but the method cannot detect all crack features in an image.
At present, crack detection algorithms based on deep learning improve on traditional image processing to a certain extent, but their detection precision and timeliness still cannot meet the requirements of practical engineering applications.
Disclosure of Invention
In view of the above, the present invention provides a real-time concrete crack detection method using an improved U-net with an attention mechanism, intended to improve crack detection accuracy and realize on-line detection.
In order to achieve the purpose, the invention provides the following technical scheme:
a concrete crack real-time detection method of an attention-inducing mechanism improved U-net comprises the following steps:
s1: acquiring a data set comprising a training set and a test set;
s2: marking the cracks in the picture in the data set in the step S1 pixel by pixel to form a closed space;
s3: constructing an attention mechanism-introduced improved U-net convolutional neural network segmentation model, wherein the model is based on a traditional U-net network, and uses MobileNet as a backbone network to extract crack characteristics, a channel attention module is introduced into an encoding part, and a space attention module is introduced into a decoding part;
s4: inputting the training set picture marked in the step S2 into the convolution neural network segmentation model of the attention mechanism-induced improved U-net constructed in the step S3 for training;
s5: packaging the optimal model trained in the step S4 to a detection platform;
s6: collecting a transmission picture by using a mobile collection platform;
s7: and uploading the picture to a detection platform by using the mobile terminal for detection.
Further, in step S1, acquiring the data set specifically includes: acquiring pictures of concrete crack damage of different shapes produced under complex conditions such as rainy, foggy, overcast, and partly overcast weather; photographing damage cracks manually on site and classifying them by shape; expanding the data volume with image augmentation techniques; mixing the augmented crack damage pictures of each category with crack pictures of various types collected from the Internet; and finally creating a data set containing various concrete cracks.
Further, in step S3, the introducing of channel attention in the encoding part specifically includes: a channel attention module is added in each layer of the U-net encoder part, so that the crack characteristics are noticed layer by layer in the encoding process, the extraction capability of the network on the image crack detail characteristics is further enhanced, and the detection precision is improved.
Further, in step S3, spatial attention is drawn to the decoding portion, which specifically includes: a spatial attention module is added in each layer of the U-net decoder part, so that the network can be more concentrated on positioning and predicting crack pixels according to the weight of each region, the extraction of background pixel characteristics by the network is weakened, and background interference is inhibited.
Further, in step S3, the U-net decoding part fuses the feature map of the previous layer that has passed through upsampling (Upsample) and the channel attention module with the feature map that has only been upsampled, and sends the fused feature map into the lightweight MobileNet network.
Further, in step S4, training the improved U-net convolutional neural network segmentation model specifically includes: training with an SGD (stochastic gradient descent) optimizer, using the binary cross-entropy function (BCEWithLogitsLoss) as the loss function, for 150 rounds, with the learning rate set to 0.001, the momentum set to 0.9, and the batch size set to 4, where the binary cross-entropy function is:

$$L_{BCE\text{-}loss} = -\frac{1}{N}\sum_{i=1}^{N}\left[L_i\log(y_i) + (1-L_i)\log(1-y_i)\right]$$

where $L_{BCE\text{-}loss}$ denotes the loss value, $N$ the total number of pixels in a concrete image, and $L_i$ and $y_i$ the label value and predicted probability of the $i$-th pixel, respectively.
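As a quick sanity check (an illustrative sketch, not the patent's code), PyTorch's `BCEWithLogitsLoss` reproduces this binary cross-entropy when fed raw logits, since it applies the sigmoid internally; the sample values below are arbitrary:

```python
import math
import torch
import torch.nn as nn

# PyTorch's BCEWithLogitsLoss applies a sigmoid internally, so on raw
# logits it matches the binary cross-entropy formula above.
logits = torch.tensor([0.8, -1.2, 2.5, 0.0])   # raw per-pixel outputs
labels = torch.tensor([1.0, 0.0, 1.0, 1.0])    # per-pixel label values L_i

loss = nn.BCEWithLogitsLoss()(logits, labels).item()

probs = torch.sigmoid(logits)                  # predicted probabilities y_i
manual = -torch.mean(labels * torch.log(probs)
                     + (1 - labels) * torch.log(1 - probs)).item()
```

The two values agree to floating-point precision, confirming the formula and the library call are equivalent.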
Further, the network model trained in S4 is packaged onto a concrete crack detection platform;
the concrete crack image mobile acquisition platform acquires and transmits real-time images of the concrete structure surface;
the mobile terminal receives and stores the real-time pictures acquired by the image acquisition system and transmits them to the concrete crack image detection platform;
the concrete crack image detection platform detects, in real time, the images the mobile terminal requests to have detected, and feeds the detection results back to the mobile terminal, which displays them and controls the movement of the acquisition platform.
The invention has the beneficial effects that:
1) Compared with the traditional U-net network, the method uses MobileNet as the backbone network, making the network lightweight and increasing its running speed;
2) Compared with the traditional U-net network, to better extract crack features the method adds a channel attention module to the encoding part, exploiting channel attention's ability to focus on the target so that crack features become more salient layer by layer after each encoding layer. A spatial attention module is added to the decoding part; it applies spatial attention weighting to each region of the attention-enhanced crack feature maps produced by the channel attention in the encoding part, so that the network can concentrate, layer by layer, on locating and predicting crack pixels according to the weight of each region. This enhances the network's extraction of crack detail features and improves detection precision.
3) The concrete crack picture acquisition platform and concrete damage detection platform designed by the invention can acquire and detect at the same time, saving labor, improving detection efficiency, and being applicable in engineering practice.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims thereof.
Drawings
For a better understanding of the objects, aspects and advantages of the present invention, reference will now be made to the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a schematic diagram of the concrete crack real-time detection method of the present invention;
FIG. 2 is a schematic diagram of the operation of the image inspection platform;
FIG. 3 is a diagram of the improved U-net network architecture of the present invention incorporating the attention mechanism;
FIG. 4 is a schematic diagram of a backbone network MobileNet;
FIG. 5 is a schematic view of a channel attention module;
FIG. 6 is a schematic view of a spatial attention module;
FIG. 7 is a comparison graph of the crack detection effect of the method of the present invention and the conventional image processing, FCN, attU-Net method.
Detailed Description
The following embodiments of the present invention are provided by way of specific examples, and other advantages and effects of the present invention will be readily apparent to those skilled in the art from the disclosure herein. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention in a schematic way, and the features in the following embodiments and embodiments may be combined with each other without conflict.
Referring to FIG. 1 to FIG. 7, this embodiment designs a real-time concrete crack detection method based on an improved U-net with an attention mechanism, which improves detection accuracy while allowing detection during acquisition. The method specifically includes the following steps:
step 1: collecting a data set which comprises concrete surface damage cracks of concrete buildings, roads, bridge wall surfaces and the like in different weather conditions, specially shooting the cracks on site and dividing the damage cracks according to the surface shapes, amplifying data volume by rotating, flat projection transformation, scaling, random cutting and other modes, mixing the amplified damage cracks of various types with the pictures of the cracks of various types collected on the network, and finally creating a concrete crack data set. According to the following steps of 8:2: a ratio of 1 divides the data set into a training set, a validation set, and a test set.
Step 2: Use Labelme software to label the cracks in the data set pictures pixel by pixel, forming complete closed contours.
Step 3: As shown in FIG. 3, a convolutional neural network segmentation model is constructed by improving the traditional U-net network and introducing channel attention (ECA) and spatial attention (AG). The specific steps are as follows:
1) The improved U-net network adopts the lightweight convolutional network MobileNet (shown in FIG. 4) as the backbone network to extract crack features, reducing network complexity, making the model lightweight, and improving the real-time performance of the algorithm.
2) The feature-extraction contracting path specifically comprises: for an input concrete image, layer 1 performs one convolution operation and sends the resulting feature map into a channel attention module; after the channel attention module, the feature map is max-pooled and enters layer 2. Layer 2 applies one depthwise residual convolution, sends the feature map into a channel attention module, max-pools it, and passes it to layer 3; layers 3 and 4 repeat the same pattern (depthwise residual convolution, channel attention module, max pooling), passing their outputs to layers 4 and 5 respectively. Layer 5 applies one depthwise residual convolution, sends the feature map into a channel attention module, and max-pools the result.
In the encoding part, each time the feature map passes through a channel attention module, channel attention further emphasizes the crack features in the map; applying the channel attention module layer by layer increases the network's extraction of crack detail features layer by layer. As shown in FIG. 5, the channel attention module first applies global average pooling, which does not change the channel dimensionality, to the input crack feature map $y^{l-1} \in \mathbb{R}^{C \times H \times W}$ of the encoding part, and then learns the channel attention $\alpha$ using a one-dimensional convolution and a Sigmoid activation function. The convolution kernel size is set to 3, so that the network focuses better on learning crack characteristics and the encoding part extracts crack features more effectively. The specific formulas are:
$$\alpha = \sigma_S\left(W_k \odot P(y^{l-1}) + b_y\right)$$
$$y^{l} = \alpha\left(y^{l-1}\right)$$
where $\sigma_S$ denotes the Sigmoid activation function, $W_k$ the convolution kernel, $\odot$ the convolution operation, $P$ the global average pooling operation, and $b_y$ the bias term. After the input feature map passes through the channel attention module, the channel attention weight $\alpha$ increases the extraction of crack detail features, improving the model's ability to extract them.
3) The segmentation-prediction expanding path specifically comprises: at layer 6, the feature map output by layer 5 is upsampled; the upsampled feature map and the layer 4 feature map that passed through the channel attention module are fed as the two inputs of a spatial attention module, which outputs one feature map; this is concatenated with the layer 4 output and passed through a depthwise residual convolution into layer 7. Layer 7 repeats the pattern with the upsampled layer 6 output and the layer 3 channel-attention feature map, entering layer 8; layer 8 with the upsampled layer 7 output and the layer 2 channel-attention feature map, entering layer 9; and layer 9 with the upsampled layer 8 output and the layer 1 channel-attention feature map, entering layer 10. After one convolution operation at layer 10, the model outputs the result.
In the decoding part, each time the feature map passes through a spatial attention module, the module applies spatial attention weighting to each region of the feature map, so that the network concentrates layer by layer on locating and predicting crack pixels and extracts fewer background pixel features. As shown in FIG. 6, the spatial attention module mainly comprises a convolution layer, a batch normalization layer, and an activation function layer. The specific formulas are:
$$q = \sigma_R\left(B(W_k \odot x^{l-1} + b_x) + B(W_k \odot y^{l-1} + b_y)\right)$$
$$\beta = \sigma_S\left(B(W_k \odot q + b_q)\right)$$
$$y^{l} = \beta\left(x^{l-1}, y^{l-1}\right)$$
where $y^{l-1}$ denotes the input feature map (i.e., the upsampled feature map of the layer) and $x^{l-1}$ the encoder feature map. Both $y^{l-1}$ and $x^{l-1}$ are processed by a one-dimensional channel convolution and normalization; a ReLU, another one-dimensional channel convolution, and a Sigmoid activation then generate the weights. Finally, the input feature map is multiplied pixel by pixel with the weight map carrying the attention coefficient, giving the spatial attention coefficient $\beta \in [0, 1]$. $\sigma_R$ denotes the ReLU activation function, $\sigma_S$ the Sigmoid activation function, $b_x$, $b_y$, and $b_q$ are bias terms, and $B$ is the normalization operation.
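A hedged PyTorch sketch of this spatial attention (attention-gate style): 1×1 convolutions plus batch normalization on the encoder and decoder maps, a ReLU, another 1×1 convolution, and a Sigmoid yield the weight map β. Class and argument names are illustrative assumptions:

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Attention gate following the formulas above: project x^{l-1} and
    y^{l-1} with 1x1 convolutions and batch norm B, sum, apply ReLU, then
    a 1x1 convolution and Sigmoid give a spatial weight map beta in [0, 1]."""
    def __init__(self, x_channels, y_channels, inter_channels):
        super().__init__()
        self.project_x = nn.Sequential(
            nn.Conv2d(x_channels, inter_channels, 1),
            nn.BatchNorm2d(inter_channels))
        self.project_y = nn.Sequential(
            nn.Conv2d(y_channels, inter_channels, 1),
            nn.BatchNorm2d(inter_channels))
        self.psi = nn.Sequential(
            nn.Conv2d(inter_channels, 1, 1),
            nn.BatchNorm2d(1),
            nn.Sigmoid())
        self.relu = nn.ReLU()

    def forward(self, x, y):
        q = self.relu(self.project_x(x) + self.project_y(y))  # q
        beta = self.psi(q)                                    # beta in [0, 1]
        return x * beta        # crack regions kept, background suppressed

skip = torch.randn(2, 64, 32, 32)   # encoder (skip) feature map x^{l-1}
up = torch.randn(2, 128, 32, 32)    # upsampled decoder feature map y^{l-1}
gated = SpatialAttention(64, 128, 32)(skip, up)
```

The gated output keeps the skip map's shape, so it can be concatenated with the decoder feature map exactly as the expanding path describes.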
4) The convolution operations of layer 1 and layer 10 use a kernel size of 1 × 1 with stride 1, and the numbers of convolution kernels used from layer 1 to layer 10 are 1, 64, 128, 256, 512, 1024, 512, 256, 128, 1.
Step 4: Set the network training parameters. During training an SGD optimizer is used; that is, for each gradient computation a training sample is randomly selected from the training set to update the parameters. The binary cross-entropy function (BCEWithLogitsLoss) is used as the training loss function; training runs for 150 epochs with a batch size of 4, an input image size of 448 × 448, and a learning rate λ of 0.001.
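Under the stated hyperparameters, one training step might be set up roughly as follows; a 1×1 convolution stands in for the improved U-net, which is not reproduced here:

```python
import torch
import torch.nn as nn

# Stated configuration: SGD with momentum 0.9 and learning rate 0.001,
# BCEWithLogitsLoss, batch size 4, 448x448 inputs.
model = nn.Conv2d(3, 1, kernel_size=1)     # placeholder for the improved U-net
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
criterion = nn.BCEWithLogitsLoss()

images = torch.randn(4, 3, 448, 448)                   # one batch of size 4
masks = torch.randint(0, 2, (4, 1, 448, 448)).float()  # pixel-wise labels

optimizer.zero_grad()
loss = criterion(model(images), masks)   # training loss for this batch
loss.backward()
optimizer.step()                         # one SGD parameter update
```

In a full run this step would repeat over the training set for 150 epochs, saving the model whose validation loss is lowest.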
Step 5: Send the training set and validation set into the attention mechanism-introduced improved U-net convolutional neural network segmentation model constructed in step 3, train and validate with the network parameters set in step 4, and save the trained model.
Step 6: Comparative experiment: plot the LOSS curve from the trained model data, select a model with lower loss, use it and the Adaptive Threshold, Canny, FCN, and AttU-Net algorithms to perform crack detection on the test pictures, and evaluate the models with Precision, Recall, F1 score, and mean Intersection over Union (mIoU).
The model evaluation index is specifically constituted as follows:
FP denotes the total number of pixels in which background is misjudged as crack, TP the total number of correctly extracted crack-region pixels, and FN the total number of pixels that belong to the crack region but are misjudged as background. For a more balanced evaluation, the F1 score combines Precision and Recall. The calculation formulas are as follows:
$$\mathrm{Precision} = \frac{TP}{TP + FP}$$
$$\mathrm{Recall} = \frac{TP}{TP + FN}$$
$$F_1 = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$$
$$\mathrm{mIoU} = \frac{1}{k}\sum_{c=1}^{k}\frac{TP_c}{TP_c + FP_c + FN_c}$$
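These metrics can be computed from the pixel counts defined above with a small sketch; the function name and sample counts are illustrative:

```python
def segmentation_metrics(tp, fp, fn):
    """Precision, Recall, F1, and crack-class IoU from the pixel counts
    TP, FP, FN defined above."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)     # per-class IoU; mIoU averages over classes
    return precision, recall, f1, iou

p, r, f1, iou = segmentation_metrics(tp=80, fp=20, fn=20)
```

With these counts all of Precision, Recall, and F1 are 0.8, while IoU is lower (2/3), which is why IoU-based scores are the stricter of the two families.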
TABLE 1 comparison of the indicators of the algorithm of the present invention with other algorithms
Table 2 comparison of the algorithm of the present invention with other algorithms for detecting speed
The experimental results in Tables 1 and 2 show that by using MobileNet as the backbone network, introducing channel attention in the encoding part, and introducing spatial attention in the decoding part, the invention improves the Precision, Recall, F1 score, and mean Intersection over Union (mIoU) of the traditional U-net algorithm. Compared with traditional image processing algorithms, FCN, and U-net with an attention mechanism, crack detection precision is improved, and concrete cracks can be detected accurately and rapidly in various environments. The model is then packaged as a detection module for convenient invocation.
Step 7: Construct the transceiver module of the detection system. The detection system uses HTTP GET and POST requests for receiving and sending, and data transmission of the whole system is based on wireless network transmission of data information.
Step 8: Construct the software framework of the detection system. A lightweight Python-based web framework, Flask, is adopted. Flask is free, flexible to configure, highly extensible and easy to package. The overall framework uses Flask to integrate the detection module and the transceiver module into a whole.
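A minimal sketch of how steps 7 and 8 fit together in Flask: a GET endpoint for status polling and a POST endpoint that receives an uploaded picture and returns the detection result. `run_detection` is a hypothetical stand-in for the packaged detection module, and the route names are illustrative, not from the patent.

```python
# Minimal Flask transceiver sketch; endpoint names and run_detection
# are illustrative placeholders, not from the patent.
from flask import Flask, request, jsonify

app = Flask(__name__)

def run_detection(image_bytes):
    # Placeholder for the packaged U-net detection module: it would run
    # the trained model on the picture and compute damage statistics.
    return {"crack_ratio": 0.0}

@app.route("/status", methods=["GET"])
def status():
    # GET request: lets the mobile terminal poll the detection platform.
    return jsonify({"ok": True})

@app.route("/detect", methods=["POST"])
def detect():
    # POST request: receives the uploaded picture and returns the result.
    if "image" not in request.files:
        return jsonify({"error": "no image uploaded"}), 400
    result = run_detection(request.files["image"].read())
    return jsonify(result)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```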
Step 9: Adopt an intelligent mobile device as the concrete crack picture acquisition platform. The operator can control the movement of the device through a remote terminal. The device specifically comprises the following components:
1) The intelligent mobile device uses a high-definition camera as the concrete structure image acquisition module, enabling the device to capture stable images while moving.
2) The intelligent mobile device uses a high-speed image transmission module, which sends the images collected by the camera to an upper computer on which a receiver is installed.
3) The intelligent mobile device is powered by its own power supply.
4) The intelligent mobile device is equipped with a wireless communication module, through which it communicates with the terminal.
Step 10: professional detection personnel carry the intelligent mobile device to reach the concrete structure that needs to detect, and through mobile terminal remote control intelligent mobile device at the structure surface removal and take the photo.
Step 11: and (3) the intelligent mobile equipment transmits the picture shot in the moving process to the mobile terminal through the image transmission module, the mobile terminal uploads the picture to the image detection system for detection processing, and the detection system calls the optimal network model obtained by training in the comparison experiment in the step (6) after receiving the detection request, so that the concrete damage crack detection is carried out on the input picture. And outputting a detection result image after the detection is finished.
Step 12: the ratio of the cracks in the binary image to the whole picture (the ratio of white pixels to the total number of pixels) is calculated by a pixel-by-pixel reading method for the predicted binary image. And finally, returning the prediction result and the concrete structure damage data to the mobile terminal, and evaluating the structure damage condition by a detector according to the prediction image and the damage data of the image acquired by the intelligent mobile equipment.
Finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit it. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art will understand that modifications or equivalent substitutions may be made to the technical solutions without departing from their spirit and scope, and all such modifications should be covered by the claims of the present invention.

Claims (3)

1. A real-time concrete crack detection method using an attention-mechanism-introduced improved U-net, characterized by comprising the following steps:
s1: acquiring a data set comprising a training set and a test set; the method specifically comprises the following steps: acquiring concrete crack damage pictures of different shapes generated under the action of weather factors; dividing according to the shape of the damaged crack, expanding the data volume by using an image augmentation technology, mixing the augmented crack damage pictures of various types with the crack pictures of various types collected on the network, and finally creating a data set containing various concrete cracks;
s2: marking the cracks in the picture in the data set in the step S1 pixel by pixel to form a closed space;
s3: constructing a convolutional neural network segmentation model of an attention mechanism-introduced improved U-net, wherein the model uses the MobileNet as a backbone network on the basis of a traditional U-net network;
adding a channel attention module in each layer of the coding part, the contraction path of feature extraction being specifically as follows: for an input image, one depth residual convolution operation is performed at layer 1, and after the convolution the generated feature map is sent into a channel attention module; the feature map passing through the channel attention module undergoes a maximum pooling operation and then enters layer 2; at layer 2 the feature map undergoes one depth residual convolution and is sent into a channel attention module, and the feature map passing through the channel attention module undergoes a maximum pooling operation and then enters layer 3; at layer 3 the feature map undergoes one depth residual convolution and is sent into a channel attention module, and the feature map passing through the channel attention module undergoes a maximum pooling operation and then enters layer 4; at layer 4 the feature map undergoes one depth residual convolution and is sent into a channel attention module, and the feature map passing through the channel attention module undergoes a maximum pooling operation and then enters layer 5; at layer 5 the feature map undergoes one depth residual convolution and is sent into a channel attention module, and the feature map passing through the channel attention module undergoes a maximum pooling operation;
adding a spatial attention module in each layer of the decoding part, fusing the feature map of the upper layer that has passed through upsampling and a channel attention module with the feature map that has only been upsampled, and sending the result into the lightweight network MobileNet; the expansion path of segmentation prediction is specifically as follows: at layer 6 the feature map output by layer 5 is upsampled, the upsampled feature map and the layer-4 feature map after the channel attention module are sent as the 2 inputs of a spatial attention module, 1 feature map is output after the spatial attention module and is spliced with the feature map output by layer 4, and after one depth residual convolution the result enters layer 7; at layer 7 the feature map output by layer 6 is upsampled, the upsampled feature map and the layer-3 feature map after the channel attention module are sent as the 2 inputs of a spatial attention module, 1 feature map is output after the spatial attention module and is spliced with the feature map output by layer 3, and after one depth residual convolution the result enters layer 8; at layer 8 the feature map output by layer 7 is upsampled, the upsampled feature map and the layer-2 feature map after the channel attention module are sent as the 2 inputs of a spatial attention module, 1 feature map is output after the spatial attention module and is spliced with the feature map output by layer 2, and after one depth residual convolution the result enters layer 9; at layer 9 the feature map output by layer 8 is upsampled, the upsampled feature map and the layer-1 feature map after the channel attention module are sent as the 2 inputs of a spatial attention module, 1 feature map is output after the spatial attention module and is spliced with the feature map output by layer 1, and after one depth residual convolution the result enters layer 10; after one convolution operation at layer 10, the model outputs the result;
s4: sending the training set picture marked in the step S2 into the convolution neural network segmentation model of the attention mechanism-introducing improved U-net constructed in the step S3 for training;
s5: packaging the optimal model trained in the step S4 to a detection platform;
s6: collecting a transmission picture by using a mobile collection platform;
s7: and uploading the picture to a detection platform by using the mobile terminal for detection.
2. The real-time concrete crack detection method according to claim 1, wherein in step S4, training the improved U-net convolutional neural network segmentation model specifically includes: training by adopting an SGD optimizer, and using a binary cross entropy function as a loss function in the training process, wherein the binary cross entropy function is as follows:
$$L_{BCE\text{-}loss}=-\frac{1}{N}\sum_{i=1}^{N}\left[L_i\log y_i+\left(1-L_i\right)\log\left(1-y_i\right)\right]$$
wherein $L_{BCE\text{-}loss}$ represents the loss value, N represents the total number of pixels of a concrete image, and $L_i$ and $y_i$ respectively represent the label value and the predicted probability value of the i-th pixel.
3. The real-time concrete crack detection method according to any one of claims 1-2, characterized in that the detection method is packaged in a concrete crack image detection platform;
the concrete crack image mobile acquisition platform is used for acquiring and transmitting real-time images on the surface of the concrete structure;
the mobile terminal is used for receiving and storing the real-time pictures acquired by the image acquisition system and transmitting the pictures to the concrete crack image detection platform;
the concrete crack image detection platform detects, in real time, the image that the mobile terminal requests to detect, and feeds the detection result back to the mobile terminal, which displays the detection result and controls the movement of the acquisition platform.
CN202110572608.1A 2021-05-25 2021-05-25 Attention mechanism-introduced improved U-net concrete crack real-time detection method Active CN113284107B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110572608.1A CN113284107B (en) 2021-05-25 2021-05-25 Attention mechanism-introduced improved U-net concrete crack real-time detection method


Publications (2)

Publication Number Publication Date
CN113284107A CN113284107A (en) 2021-08-20
CN113284107B true CN113284107B (en) 2022-10-11

Family

ID=77281492

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110572608.1A Active CN113284107B (en) 2021-05-25 2021-05-25 Attention mechanism-introduced improved U-net concrete crack real-time detection method

Country Status (1)

Country Link
CN (1) CN113284107B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114596266B (en) * 2022-02-25 2023-04-07 烟台大学 Concrete crack detection method based on ConcreteCrackSegNet model
CN115147375B (en) * 2022-07-04 2023-07-25 河海大学 Concrete surface defect feature detection method based on multi-scale attention
CN115115610B (en) * 2022-07-20 2023-08-22 南京航空航天大学 Industrial CT composite material internal defect identification method based on improved convolutional neural network
CN116580328B (en) * 2023-07-12 2023-09-19 江西省水利科学院(江西省大坝安全管理中心、江西省水资源管理中心) Intelligent recognition method for leakage danger of thermal infrared image dykes and dams based on multitasking assistance
CN117291913B (en) * 2023-11-24 2024-04-16 长江勘测规划设计研究有限责任公司 Apparent crack measuring method for hydraulic concrete structure

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110738642A (en) * 2019-10-08 2020-01-31 福建船政交通职业学院 Mask R-CNN-based reinforced concrete crack identification and measurement method and storage medium
WO2020047316A1 (en) * 2018-08-31 2020-03-05 Alibaba Group Holding Limited System and method for training a damage identification model
CN111784667A (en) * 2020-06-30 2020-10-16 北京海益同展信息科技有限公司 Crack identification method and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109345507B (en) * 2018-08-24 2021-07-13 河海大学 Dam image crack detection method based on transfer learning
WO2020051545A1 (en) * 2018-09-07 2020-03-12 Alibaba Group Holding Limited Method and computer-readable storage medium for generating training samples for training a target detector
CN110569699B (en) * 2018-09-07 2020-12-29 创新先进技术有限公司 Method and device for carrying out target sampling on picture
CN111931800B (en) * 2020-04-21 2022-02-11 南京航空航天大学 Tunnel surface defect classification method based on deep convolutional neural network
CN112418027A (en) * 2020-11-11 2021-02-26 青岛科技大学 Remote sensing image road extraction method for improving U-Net network

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020047316A1 (en) * 2018-08-31 2020-03-05 Alibaba Group Holding Limited System and method for training a damage identification model
CN110738642A (en) * 2019-10-08 2020-01-31 福建船政交通职业学院 Mask R-CNN-based reinforced concrete crack identification and measurement method and storage medium
CN111784667A (en) * 2020-06-30 2020-10-16 北京海益同展信息科技有限公司 Crack identification method and device


Similar Documents

Publication Publication Date Title
CN113284107B (en) Attention mechanism-introduced improved U-net concrete crack real-time detection method
Yu et al. Vision-based concrete crack detection using a hybrid framework considering noise effect
CN112232349A (en) Model training method, image segmentation method and device
CN113780296A (en) Remote sensing image semantic segmentation method and system based on multi-scale information fusion
CN112200143A (en) Road disease detection method based on candidate area network and machine vision
CN112597815A (en) Synthetic aperture radar image ship detection method based on Group-G0 model
CN114049356B (en) Method, device and system for detecting structure apparent crack
CN111767874B (en) Pavement disease detection method based on deep learning
CN111489352A (en) Tunnel gap detection and measurement method and device based on digital image processing
WO2023124442A1 (en) Method and device for measuring depth of accumulated water
CN115131640A (en) Target detection method and system utilizing illumination guide and attention mechanism
CN115147819A (en) Driver fixation point prediction method based on fixation point prediction model
CN116823800A (en) Bridge concrete crack detection method based on deep learning under complex background
CN113469097B (en) Multi-camera real-time detection method for water surface floaters based on SSD network
CN112668375A (en) System and method for analyzing tourist distribution in scenic spot
CN113379719A (en) Road defect detection method, road defect detection device, electronic equipment and storage medium
CN110929739B (en) Automatic impervious surface range remote sensing iterative extraction method
CN115527118A (en) Remote sensing image target detection method fused with attention mechanism
CN115240070A (en) Crack detection method
CN115147439A (en) Concrete crack segmentation method and system based on deep learning and attention mechanism
CN114463628A (en) Deep learning remote sensing image ship target identification method based on threshold value constraint
CN114283336A (en) Anchor-frame-free remote sensing image small target detection method based on mixed attention
CN113793069A (en) Urban waterlogging intelligent identification method of deep residual error network
CN113128559A (en) Remote sensing image target detection method based on cross-scale feature fusion pyramid network
Li et al. Multiple structural defect detection for reinforced concrete buildings using YOLOv5s

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant