CN112215819A - Airport pavement crack detection method based on depth feature fusion - Google Patents

Airport pavement crack detection method based on depth feature fusion

Info

Publication number
CN112215819A
Authority
CN
China
Prior art keywords
crack
convolution
module
fracture
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011091708.4A
Other languages
Chinese (zh)
Other versions
CN112215819B (en)
Inventor
李海丰
景攀
韩红阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Civil Aviation University of China
Original Assignee
Civil Aviation University of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Civil Aviation University of China filed Critical Civil Aviation University of China
Priority to CN202011091708.4A priority Critical patent/CN112215819B/en
Publication of CN112215819A publication Critical patent/CN112215819A/en
Application granted granted Critical
Publication of CN112215819B publication Critical patent/CN112215819B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T 7/0004: Industrial image inspection
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • G06T 5/30: Erosion or dilatation, e.g. thinning
    • G06T 7/10: Segmentation; Edge detection
    • G06T 2207/10024: Color image
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/20221: Image fusion; Image merging
    • G06T 2207/30256: Lane; Road marking
    • Y02T 10/40: Engine management systems

Abstract

Disclosed is an airport pavement crack detection method based on depth feature fusion. An original color image is input into a deep neural network model, where a deformable convolution module strengthens the feature extraction network's learning of crack morphology; the resulting crack feature maps are fed into a multi-scale convolution module, which captures global crack information under different receptive fields; finally, a feature fusion module extracts crack features at different levels and fuses the crack features from different stages, achieving accurate segmentation of airport pavement cracks. The invention uses the deformable convolution module to enhance the network's learning of crack shape and position information and extracts features at multiple scales, so that the fused defect feature information is more comprehensive and the crack information generated at each stage is fully exploited to strengthen the expression of crack features. The method can be used effectively for detecting crack defects on airport pavement, with an average detection accuracy higher than previously known methods.

Description

Airport pavement crack detection method based on depth feature fusion
Technical Field
The invention belongs to the technical field of image recognition, and particularly relates to an airport pavement crack detection method based on depth feature fusion.
Background
Airport pavement cracks have long been a focus of attention for airfield management and maintenance departments. According to the Technical Specification for Civil Airport Pavement Evaluation and Management, cracks arise from slab cracking caused by the combined action of repeated loads, temperature warping stress, temperature shrinkage stress and similar factors. Cracks are among the most serious defects of airport pavement and seriously affect the safe operation of an airport, so effective detection of airport pavement cracks is of great importance.
Pavement cracks have traditionally been detected manually, which is inefficient, costly and unsafe. With the development of image processing technology, automatic acquisition and recognition of pavement cracks has become the mainstream, and researchers at home and abroad have studied the problem extensively in order to extract cracks from images efficiently, accurately and quickly. Liu et al. extracted crack regions with a threshold-segmentation-based method, which is simple but susceptible to illumination, texture and noise and therefore has a limited range of application. To address this, Anders et al. detected cracks on steel products with morphological operations and logistic regression; although noise is suppressed, the result is over-segmented and false detections are severe. Sunweaker et al. applied multi-resolution two-dimensional wavelet decomposition to road surface images to highlight edge positions and enhance crack edge extraction; this suppresses noise to some extent but requires tuning of the wavelet and the decomposition level. The FFA (Free-Form Anisotropy) algorithm proposed by Tien et al. considers both brightness and connectivity before detecting road cracks; it works well on crack images with simple backgrounds but is sensitive to shadows and lane-marking regions, which easily cause false detections. The CrackForest algorithm proposed by Shi et al. introduces random structured forests and integrates complementary features at multiple levels to represent cracks; it can distinguish cracks from noise to a certain extent and achieves high detection accuracy. However, airport pavement exhibits complex texture, strong noise and low contrast, and the runway also contains interference such as aircraft tire marks and rubber contamination, so traditional crack detection algorithms and machine learning algorithms cannot detect airport pavement cracks effectively.
With the wide application of deep learning to many fields and its success in image classification and recognition, researchers have studied image-based crack detection with deep learning intensively. Deng et al. used the Faster R-CNN algorithm to detect concrete bridge cracks against complex backgrounds, but suffered from missed detections and poor accuracy. To address this, Fang et al. proposed an image crack detection method based on a deep learning model and Bayesian probability analysis, improving accuracy to a certain extent. Zhang et al. first divided the acquired images into image blocks containing cracks and then segmented the cracks with a deep learning network, but the segmented cracks are too wide and the accuracy is poor. Cao brocade et al. built a crack detection network based on an attention mechanism by adding attention to an encoder-decoder structure, improving road crack detection performance. Wenqing et al. used a Mask R-CNN network to detect building surface cracks; the crack segmentation results are derived from the detection results, and inaccurate candidate-box extraction makes the segmentation poor. Liu et al. proposed a deep hierarchical convolutional neural network, DeepCrack for short, which performs pixel-level end-to-end semantic segmentation by introducing a supervision network and a conditional random field; it improves crack segmentation capability to a certain extent but produces many false detections, especially when the contrast between defects and background is low. Cao et al. detected concrete cracks with a fully convolutional network using an encoder-decoder structure, which improves crack detection to a certain extent, but because the deconvolution uses only the features of the last few convolutional layers, the segmented crack edges are blurred and detail is not expressed well. In short, existing defect detection algorithms are not fully suitable for detecting crack defects on airport pavement.
Disclosure of Invention
In order to solve the above problems, an object of the present invention is to provide an airport pavement crack detection method based on depth feature fusion, so as to address the detection of crack defects whose shapes are variable, whose widths are narrow, whose lengths differ, and whose spatial trajectories follow free curves.
To achieve this object, the airport pavement crack detection method based on depth feature fusion provided by the invention comprises the following steps, carried out in sequence:
Step one: first, a deep neural network model is constructed, comprising a deformable convolution module, a multi-scale convolution module and a feature fusion module; the original color image is then input into the deformable convolution module of the deep neural network model, and crack feature maps of different shapes are extracted by the deformable convolution module so as to strengthen the network's learning of crack morphology;
Step two: the crack feature maps extracted by the deformable convolution module are input into a multi-scale convolution module with four convolution kernels of different sizes to obtain crack feature maps under different receptive fields, which are then fused so that they contain richer global crack information;
Step three: the crack feature maps fused by the multi-scale convolution module are input into a feature fusion module, and the crack feature maps generated at the different stages are fused, thereby refining the crack segmentation result.
In step one, the original color image is input into the deformable convolution module of the deep neural network model, and crack feature maps of different shapes are extracted as follows:
In standard convolution, for a sampling point p_0 on the input original color image x, the output feature map y is defined as:

y(p_0) = \sum_{p_n \in G} w(p_n) \cdot x(p_0 + p_n)

where the grid G defines the size and dilation rate of the receptive field, p_n is the n-th sampling point in the grid G, and w(·) is the sampling-point weight. The deformable convolution module adds an offset Δp_n, {Δp_n | n = 1, 2, ..., N}, N = |G|, to each sampling point of the regular grid G of the standard convolution, so that the sampling point becomes p_n + Δp_n and the equation above becomes:

y(p_0) = \sum_{p_n \in G} w(p_n) \cdot x(p_0 + p_n + \Delta p_n)

The convolution layer uses offsets with the same spatial resolution as the input feature map, which lets the original sampling points expand outward to focus on the crack contour. For the crack feature map fed into the network, the deformable convolution module adds a convolution layer on top of the original convolution layer to learn the offsets Δp_n; during training, the convolution kernel that produces the output features learns the crack feature information simultaneously with the offsets, which are learned by the additional convolution layer.
In step two, the crack feature maps extracted by the deformable convolution module are input into the multi-scale convolution module with four convolution kernels of different sizes, crack feature maps under different receptive fields are obtained, and they are fused as follows:
The multi-scale convolution module extracts crack features in parallel with four convolution kernels of different sizes: a 1 × 1 convolution first reduces the number of parameters without changing the size of the crack feature map; 3 × 3, 5 × 5 and 7 × 7 convolutions are then applied to the crack feature map to extract crack features under different receptive fields; and the resulting crack feature maps are fused by element-wise addition.
In step three, the crack feature maps fused by the multi-scale convolution module are input into the feature fusion module, and the crack feature maps of the different stages are fused as follows:
In the feature fusion module, before the side outputs are up-sampled, the crack feature maps of the same size generated at each stage are added and fused; a deconvolution operation then produces crack feature maps of the same size as the original color image; these are fused with a Concatenate operation; and finally two convolutions yield the segmentation result predicted by the deep convolutional network model.
Compared with the prior art, the airport pavement crack detection method based on depth feature fusion has the following beneficial effects: the deformable convolution module used to extract the crack feature maps makes feature learning pay more attention to crack shape and position information; features are extracted at multiple scales, so the fused crack feature information is more comprehensive; and the method can be used effectively for detecting crack defects on airport pavement, achieving high detection accuracy.
Drawings
Fig. 1 is a diagram of a deep neural network model structure provided by the present invention.
FIG. 2 is a schematic diagram of a feature extraction process of a deformable convolution module according to the present invention.
FIG. 3 is a schematic diagram of the multi-scale convolution module feature extraction process in the present invention.
Fig. 4 is a diagram of the side-output network structure of the feature fusion module in the present invention.
Fig. 5 is an example of experimental comparison results on the airport pavement crack data set provided by the present invention.
Detailed Description
In order to make the technical solutions of the present invention more clear and definite for those skilled in the art, the present invention will be further described in detail with reference to the accompanying drawings and specific examples, but the embodiments of the present invention are not limited thereto.
The method for detecting airport pavement crack defects based on depth feature fusion provided by the invention comprises the following steps, carried out in sequence:
the method comprises the following steps: firstly, constructing a deep convolutional network model shown in figure 1, wherein the deep neural network model comprises a deformable convolutional module, a multi-scale convolutional module and a feature fusion module, then inputting an original color image into the deformable convolutional module of the deep neural network model, and extracting crack feature maps with different forms by using the deformable convolutional module so as to enhance the learning of the network on the crack form features;
the deformable convolution calculation builds on top of the standard convolution and no additional supervision mechanism is required. The standard two-dimensional convolution calculation includes two steps: (1) sampling an input feature map x by using a regular grid G; and (2) carrying out weighted summation on the sampling points. Grid G defines the size and dilation rate of the receptive field. A 3 × 3 size grid G with an expansion ratio of 1 is defined, G { (-1, -1), (-1,0),. -, (0,1), (1,1) }. Thus, in standard convolution, the upsampling point p is applied to the input original color image x0The output feature map y of (a) may be defined as:
Figure BDA0002722333180000061
wherein p isnIs the nth sample in the grid G and w (×) is the sample weight. And the deformable convolution module adds an offset deltap on the sampling point of the standard convolution regular grid Gn,{Δp n1,2, N, where N is G. At this point, the sampling point will become pn+ΔpnThus, the above equation becomes:
Figure BDA0002722333180000062
the convolution layer uses an offset that has the same spatial resolution as the input feature map, causing the original sample points to expand outward to focus on the fracture profile. The process of extracting the crack features by the deformable convolution module is shown in fig. 3, and for the crack feature map of the input network, in order to learn the offset Δ pnThe deformable convolution module adds a convolution layer on the original convolution layer. During training, the convolution kernel that generates the output features learns the feature information of the crack simultaneously with the offsets learned by the additional convolution layer, forming an offset domain, where the channel dimension 2N corresponds to N2-dimensional offsets (including the x-direction and the y-direction). Due to the offset Δ pnAnd is not generally an integer number of times,in order to effectively learn the offset, a bilinear interpolation algorithm is adopted to determine the values of the offset sampling points. The deformable convolution module can enable the sampling points to be freely transformed, so that dynamic adjustment can be carried out according to the shapes of the cracks in the characteristic extraction process, the geometric deformation of the cracks in different shapes can be learned, and the method for adaptively determining the deformation size and position of the cracks has great influence on crack characteristic extraction.
Step two: inputting the crack characteristic diagrams extracted by the deformable convolution module into a multi-scale convolution module with four convolution kernels with different sizes as shown in FIG. 3 to obtain crack characteristic diagrams under different receptive fields, and fusing the crack characteristic diagrams to enable the crack characteristic diagrams to contain richer global information of cracks;
the module extracts fracture features in parallel by using four convolution kernels, firstly effectively reduces parameters under the condition of not changing the size of a fracture feature map through 1 × 1 convolution operation, then performs 3 × 3, 5 × 5 and 7 × 7 convolution operations on the fracture feature map respectively to extract the fracture features, obtains the fracture feature maps under different receptive fields, and then fuses the fracture feature maps by using Element-wise add (Element-wise add).
Step three: and inputting the crack characteristic diagrams fused by the multi-scale convolution module into a characteristic fusion module, and fusing the crack characteristic diagrams at different stages generated by the characteristic fusion module, thereby refining the crack segmentation result.
Information among different convolutional layers can be complemented, the traditional method for fusing network structural features has the problem of insufficient information utilization rate, and most networks only adopt the last convolutional layer before pooling. The feature fusion module is improved in the deep convolution network model in order to fully utilize the features extracted by the network, the structure is shown in fig. 4, before the side output is subjected to up-sampling, the crack feature maps with the same size generated in each stage are added and fused, then the deconvolution operation is performed on the crack feature maps to obtain the crack feature maps with the same size as the original color image, the crack feature maps are fused by using a Concatenate function, and finally, the segmentation result predicted by the deep convolution network model is obtained through two times of convolution.
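The side-output fusion can be sketched as follows, again assuming PyTorch; the number of stages, their strides, the intermediate channel width and the class name are illustrative choices, since the actual topology is defined by Fig. 4 of the patent:

```python
import torch
import torch.nn as nn


class FeatureFusionHead(nn.Module):
    """Illustrative side-output fusion head for three encoder stages whose feature
    maps are assumed to be 1/2, 1/4 and 1/8 of the input resolution."""

    def __init__(self, stage_channels=(64, 128, 256), num_classes=1):
        super().__init__()
        # Transposed convolutions bring each side output back to the input size.
        self.up = nn.ModuleList([
            nn.ConvTranspose2d(c, 32, kernel_size=2 * s, stride=s, padding=s // 2)
            for c, s in zip(stage_channels, (2, 4, 8))
        ])
        # Two final convolutions turn the concatenated maps into the segmentation map.
        self.head = nn.Sequential(
            nn.Conv2d(32 * len(stage_channels), 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, num_classes, kernel_size=1),
        )

    def forward(self, stage_feats):
        # stage_feats: one crack feature map per stage; same-sized maps produced within
        # a stage are assumed to have been added together already.
        upsampled = [up(f) for up, f in zip(self.up, stage_feats)]
        fused = torch.cat(upsampled, dim=1)          # channel-wise Concatenate
        return self.head(fused)


if __name__ == "__main__":
    feats = [torch.randn(1, 64, 256, 256),
             torch.randn(1, 128, 128, 128),
             torch.randn(1, 256, 64, 64)]
    print(FeatureFusionHead()(feats).shape)          # torch.Size([1, 1, 512, 512])
```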
The effect of the airport pavement crack defect detection method based on depth feature fusion provided by the invention can be further illustrated by the following experimental results. Experimental data: the data set used in the invention consists of real data collected at several airports by an airport pavement inspection robot (developed by Chengdu Yokou Robot Co., Ltd.). The camera mounted on the pavement inspection robot is a Nano M1920 CMOS area-array camera from Teledyne DALSA, Canada. Images were acquired by fixing the camera on the robot at a set height, pre-setting a driving route, and having the robot photograph the airport runway continuously at a speed of 20-30 km/h. From the captured images, 960 images containing cracks, each 1800 × 900 pixels, were selected. To facilitate model training, the data set was cropped into 512 × 512 pixel tiles with a sliding-window algorithm and then augmented by horizontal flipping, vertical flipping, rotation and similar operations, yielding 12960 images of 512 × 512 pixels. Finally, a random split in the ratio 8:1:1 divided the data set into a training set of 10368 images, a validation set of 1296 images, and a test set of 1296 images.
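The cropping, augmentation and 8:1:1 split described above can be reproduced with a short script. The sketch below uses OpenCV; the folder name, the sliding-window stride and the exact set of flips and rotations are assumptions, and label masks would be cropped and augmented identically:

```python
import glob
import random

import cv2


def sliding_window_crops(image, size=512, stride=512):
    """Cut an image into size×size tiles with a sliding window (stride is an assumption)."""
    h, w = image.shape[:2]
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            yield image[y:y + size, x:x + size]


def augment(tile):
    """Original tile plus horizontal flip, vertical flip and a 90° rotation."""
    return [tile,
            cv2.flip(tile, 1),                              # horizontal flip
            cv2.flip(tile, 0),                              # vertical flip
            cv2.rotate(tile, cv2.ROTATE_90_CLOCKWISE)]      # rotation

tiles = []
for path in glob.glob("crack_images/*.png"):                # hypothetical folder of 1800×900 images
    img = cv2.imread(path, cv2.IMREAD_COLOR)
    for crop in sliding_window_crops(img):
        tiles.extend(augment(crop))

random.shuffle(tiles)                                       # random 8:1:1 split
n = len(tiles)
train = tiles[:int(0.8 * n)]
val = tiles[int(0.8 * n):int(0.9 * n)]
test = tiles[int(0.9 * n):]
print(len(train), len(val), len(test))
```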
Deep neural network model training parameters: during training, the batch size is set to 2, the optimizer is Adam, the learning rate is 1e-5, the loss function is the cross-entropy loss, the network activation function is ReLU, and shuffle is set to True.
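These hyper-parameters map directly onto a training loop; the sketch below assumes PyTorch and substitutes a tiny stand-in model and random tensors for the real network and data set so that the configuration itself runs end to end:

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset

# Stand-in data and model; in the real pipeline these would be the 512×512 crops
# and the deformable / multi-scale / fusion network described above.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8, 64, 64))            # crack / background labels
loader = DataLoader(TensorDataset(images, labels), batch_size=2, shuffle=True)

model = nn.Sequential(                               # placeholder network with ReLU activations
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(16, 2, 1),                             # two classes: background, crack
)
optimizer = optim.Adam(model.parameters(), lr=1e-5)  # Adam, learning rate 1e-5
criterion = nn.CrossEntropyLoss()                    # cross-entropy loss

for epoch in range(2):
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```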
Evaluation indexes: to quantify the detection results, the invention uses pixel accuracy (PA), intersection over union (IoU), precision, recall and the F1 value. Pixel accuracy is the ratio of correctly predicted pixels to the total number of pixels in the image; intersection over union is the ratio of the intersection to the union of the predicted crack region and the labeled region; precision is the proportion of correctly detected crack pixels among all pixels detected as cracks; recall is the proportion of correctly detected crack pixels among all crack pixels that should have been detected; and the F1 value is a combined measure of precision and recall. These indexes are standard criteria for evaluating semantic segmentation models. For the binary segmentation task they are defined as follows:
PA = \frac{TP + TN}{TP + TN + FP + FN}

IoU = \frac{TP}{TP + FP + FN}

Precision = \frac{TP}{TP + FP}

Recall = \frac{TP}{TP + FN}

F1 = \frac{2 \times Precision \times Recall}{Precision + Recall}
where TP denotes the number of crack-region pixels that are correctly detected, FP the number of background pixels predicted as crack pixels, FN the number of crack-region pixels predicted as background, and TN the number of background pixels that are correctly detected.
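Given these definitions, the five metrics can be computed from a binary prediction mask and the ground-truth mask as follows (a NumPy sketch; the small epsilon guarding against empty denominators is an addition of this sketch):

```python
import numpy as np


def segmentation_metrics(pred, gt, eps=1e-9):
    """pred, gt: binary arrays where 1 marks crack pixels and 0 marks background."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)        # crack pixels correctly detected
    fp = np.sum(pred & ~gt)       # background pixels predicted as crack
    fn = np.sum(~pred & gt)       # crack pixels predicted as background
    tn = np.sum(~pred & ~gt)      # background pixels correctly detected

    pa = (tp + tn) / (tp + tn + fp + fn + eps)
    iou = tp / (tp + fp + fn + eps)
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    return dict(PA=pa, IoU=iou, Precision=precision, Recall=recall, F1=f1)


if __name__ == "__main__":
    pred = np.array([[1, 1, 0], [0, 1, 0]])
    gt = np.array([[1, 0, 0], [0, 1, 1]])
    print(segmentation_metrics(pred, gt))
```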
Description of the comparative method: the method provided by the invention is compared with the following 6 methods:
(1) The Canny algorithm. A multi-stage edge detection algorithm: the image is smoothed with Gaussian filtering to remove noise, and double thresholding is applied to determine potential edges.
(2) The FFA algorithm. An algorithm designed specifically for detecting pavement cracks. Reference: Nguyen T S, Bégot S, Duculty F, et al. Free-form anisotropy: A new method for crack detection on pavement surface images [C]// IEEE International Conference on Image Processing.
(3) The CrackForest algorithm. A road crack detection framework based on random structured forests. Reference: Y. Shi, L. Cui, Z. Qi, et al. Automatic Road Crack Detection Using Random Structured Forests [J]. IEEE Transactions on Intelligent Transportation Systems, 2016, 17(12): 3434-.
(4) The FCN algorithm. The algorithm uses VGG16 as the backbone network, extracts features with skip connections, and uses a fully convolutional network to output a crack prediction map of the same size as the original image. Reference: Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation [C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2015: 3431-.
(5) The U-net algorithm. The U-net network uses a symmetric encoder-decoder architecture to extract target information. Reference: Ronneberger O, Fischer P, Brox T. U-net: Convolutional networks for biomedical image segmentation [C]// International Conference on Medical Image Computing and Computer-Assisted Intervention. Berlin: Springer, 2015: 234-241.
(6) The DeepCrack algorithm. One of the most widely used crack detection algorithms at present. Reference: Yahui Liu, Jian Yao, Xiaohu Lu, et al. DeepCrack: A deep hierarchical feature learning architecture for crack segmentation [J]. Neurocomputing, 2019, 338: 139-.
Comparing the method of the present invention with the existing methods, Table 1 shows that for crack defects the present method achieves the best results of 99.59% pixel accuracy, 56.20% IoU, 92.21% precision, 89.72% recall and 90.95% F1, outperforming all six comparison algorithms. Detection results for some of the images are shown in Fig. 5.
TABLE 1
(Table 1 appears as an image in the original publication; it lists the quantitative comparison of the seven methods on the airport pavement crack data set over the five evaluation metrics above.)
The above description is intended only to illustrate the present invention and not to limit its scope; any substitution or modification of the technical solution and inventive concept of the present invention made by a person skilled in the art shall fall within the scope of the present invention.

Claims (4)

1. An airport pavement crack detection method based on depth feature fusion, characterized in that the method comprises the following steps carried out in sequence:
Step one: first, a deep neural network model is constructed, comprising a deformable convolution module, a multi-scale convolution module and a feature fusion module; the original color image is then input into the deformable convolution module of the deep neural network model, and crack feature maps of different shapes are extracted by the deformable convolution module so as to strengthen the network's learning of crack morphology;
Step two: the crack feature maps extracted by the deformable convolution module are input into a multi-scale convolution module with four convolution kernels of different sizes to obtain crack feature maps under different receptive fields, which are then fused so that they contain richer global crack information;
Step three: the crack feature maps fused by the multi-scale convolution module are input into a feature fusion module, and the crack feature maps generated at the different stages are fused, thereby refining the crack segmentation result.
2. The airport pavement crack detection method based on depth feature fusion according to claim 1, characterized in that: in step one, the original color image is input into the deformable convolution module of the deep neural network model, and crack feature maps of different shapes are extracted as follows:
In standard convolution, for a sampling point p_0 on the input original color image x, the output feature map y is defined as:

y(p_0) = \sum_{p_n \in G} w(p_n) \cdot x(p_0 + p_n)

where the grid G defines the size and dilation rate of the receptive field, p_n is the n-th sampling point in the grid G, and w(·) is the sampling-point weight; the deformable convolution module adds an offset Δp_n, {Δp_n | n = 1, 2, ..., N}, N = |G|, to each sampling point of the regular grid G of the standard convolution, so that the sampling point becomes p_n + Δp_n and the equation above becomes:

y(p_0) = \sum_{p_n \in G} w(p_n) \cdot x(p_0 + p_n + \Delta p_n)

The convolution layer uses offsets with the same spatial resolution as the input feature map, which lets the original sampling points expand outward to focus on the crack contour; for the crack feature map fed into the network, the deformable convolution module adds a convolution layer on top of the original convolution layer to learn the offsets Δp_n; during training, the convolution kernel that produces the output features learns the crack feature information simultaneously with the offsets, which are learned by the additional convolution layer.
3. The airport pavement crack detection method based on depth feature fusion according to claim 1, characterized in that: in step two, the crack feature maps extracted by the deformable convolution module are input into the multi-scale convolution module with four convolution kernels of different sizes, crack feature maps under different receptive fields are obtained, and they are fused as follows:
The multi-scale convolution module extracts crack features in parallel with four convolution kernels of different sizes: a 1 × 1 convolution first reduces the number of parameters without changing the size of the crack feature map; 3 × 3, 5 × 5 and 7 × 7 convolutions are then applied to the crack feature map to extract crack features under different receptive fields; and the resulting crack feature maps are fused by element-wise addition.
4. The airport pavement crack detection method based on depth feature fusion according to claim 1, characterized in that: in step three, the crack feature maps fused by the multi-scale convolution module are input into the feature fusion module, and the crack feature maps of the different stages are fused as follows:
In the feature fusion module, before the side outputs are up-sampled, the crack feature maps of the same size generated at each stage are added and fused; a deconvolution operation then produces crack feature maps of the same size as the original color image; these are fused with a Concatenate operation; and finally two convolutions yield the segmentation result predicted by the deep convolutional network model.
CN202011091708.4A 2020-10-13 2020-10-13 Airport pavement crack detection method based on depth feature fusion Active CN112215819B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011091708.4A CN112215819B (en) 2020-10-13 2020-10-13 Airport pavement crack detection method based on depth feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011091708.4A CN112215819B (en) 2020-10-13 2020-10-13 Airport pavement crack detection method based on depth feature fusion

Publications (2)

Publication Number Publication Date
CN112215819A true CN112215819A (en) 2021-01-12
CN112215819B CN112215819B (en) 2023-06-30

Family

ID=74053318

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011091708.4A Active CN112215819B (en) 2020-10-13 2020-10-13 Airport pavement crack detection method based on depth feature fusion

Country Status (1)

Country Link
CN (1) CN112215819B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112989981A (en) * 2021-03-05 2021-06-18 五邑大学 Pavement crack detection method, system and storage medium
CN113537037A (en) * 2021-07-12 2021-10-22 北京洞微科技发展有限公司 Pavement disease identification method, system, electronic device and storage medium
CN113792769A (en) * 2021-08-30 2021-12-14 普达迪泰(天津)智能装备科技有限公司 Detection method based on airport pavement cracks
CN115512324A (en) * 2022-10-13 2022-12-23 中国矿业大学 Pavement disease detection method based on edge symmetric filling and large receptive field
CN115546768A (en) * 2022-12-01 2022-12-30 四川蜀道新能源科技发展有限公司 Pavement marking identification method and system based on multi-scale mechanism and attention mechanism
CN117173618A (en) * 2023-09-06 2023-12-05 哈尔滨工业大学 Ground penetrating radar cavity target identification method based on multi-feature sensing Faster R-CNN
CN117291913A (en) * 2023-11-24 2023-12-26 长江勘测规划设计研究有限责任公司 Apparent crack measuring method for hydraulic concrete structure
CN117764988A (en) * 2024-02-22 2024-03-26 山东省计算中心(国家超级计算济南中心) Road crack detection method and system based on heteronuclear convolution multi-receptive field network


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107133960A (en) * 2017-04-21 2017-09-05 武汉大学 Image crack dividing method based on depth convolutional neural networks
CN108257194A (en) * 2018-01-23 2018-07-06 哈尔滨工程大学 Face simple picture generation method based on convolutional neural networks
CN108710919A (en) * 2018-05-25 2018-10-26 东南大学 A kind of crack automation delineation method based on multi-scale feature fusion deep learning
CN110276756A (en) * 2019-06-25 2019-09-24 百度在线网络技术(北京)有限公司 Road surface crack detection method, device and equipment
CN111292305A (en) * 2020-01-22 2020-06-16 重庆大学 Improved YOLO-V3 metal processing surface defect detection method
CN111257341A (en) * 2020-03-30 2020-06-09 河海大学常州校区 Underwater building crack detection method based on multi-scale features and stacked full convolution network
CN111598854A (en) * 2020-05-01 2020-08-28 河北工业大学 Complex texture small defect segmentation method based on rich robust convolution characteristic model

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
S. Y. Guan et al.: "A surface defect detection method of the magnesium alloy sheet based on deformable convolution neural network", Metalurgija *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112989981A (en) * 2021-03-05 2021-06-18 五邑大学 Pavement crack detection method, system and storage medium
CN112989981B (en) * 2021-03-05 2023-10-17 五邑大学 Pavement crack detection method, system and storage medium
CN113537037A (en) * 2021-07-12 2021-10-22 北京洞微科技发展有限公司 Pavement disease identification method, system, electronic device and storage medium
CN113792769A (en) * 2021-08-30 2021-12-14 普达迪泰(天津)智能装备科技有限公司 Detection method based on airport pavement cracks
CN115512324A (en) * 2022-10-13 2022-12-23 中国矿业大学 Pavement disease detection method based on edge symmetric filling and large receptive field
CN115546768A (en) * 2022-12-01 2022-12-30 四川蜀道新能源科技发展有限公司 Pavement marking identification method and system based on multi-scale mechanism and attention mechanism
CN117173618A (en) * 2023-09-06 2023-12-05 哈尔滨工业大学 Ground penetrating radar cavity target identification method based on multi-feature sensing Faster R-CNN
CN117173618B (en) * 2023-09-06 2024-04-30 哈尔滨工业大学 Ground penetrating radar cavity target identification method based on multi-feature sensing Faster R-CNN
CN117291913A (en) * 2023-11-24 2023-12-26 长江勘测规划设计研究有限责任公司 Apparent crack measuring method for hydraulic concrete structure
CN117291913B (en) * 2023-11-24 2024-04-16 长江勘测规划设计研究有限责任公司 Apparent crack measuring method for hydraulic concrete structure
CN117764988A (en) * 2024-02-22 2024-03-26 山东省计算中心(国家超级计算济南中心) Road crack detection method and system based on heteronuclear convolution multi-receptive field network
CN117764988B (en) * 2024-02-22 2024-04-30 山东省计算中心(国家超级计算济南中心) Road crack detection method and system based on heteronuclear convolution multi-receptive field network

Also Published As

Publication number Publication date
CN112215819B (en) 2023-06-30

Similar Documents

Publication Publication Date Title
CN112215819B (en) Airport pavement crack detection method based on depth feature fusion
Ali et al. Structural crack detection using deep convolutional neural networks
CN111553387B (en) Personnel target detection method based on Yolov3
Jiang et al. A deep learning approach for fast detection and classification of concrete damage
CN111681240B (en) Bridge surface crack detection method based on YOLO v3 and attention mechanism
KR101403876B1 (en) Method and Apparatus for Vehicle License Plate Recognition
CN115311507B (en) Building board classification method based on data processing
CN110335233B (en) Highway guardrail plate defect detection system and method based on image processing technology
CN102915544A (en) Video image motion target extracting method based on pattern detection and color segmentation
CN111008632A (en) License plate character segmentation method based on deep learning
CN112991271A (en) Aluminum profile surface defect visual detection method based on improved yolov3
CN111062381A (en) License plate position detection method based on deep learning
Su et al. A new local-main-gradient-orientation HOG and contour differences based algorithm for object classification
CN111414938B (en) Target detection method for bubbles in plate heat exchanger
CN1564600A (en) Detection method of moving object under dynamic scene
CN116433629A (en) Airport pavement defect identification method based on GA-Unet
Gooda et al. Automatic detection of road cracks using EfficientNet with residual U-net-based segmentation and YOLOv5-based detection
CN113326846A (en) Rapid bridge apparent disease detection method based on machine vision
Nguyen et al. A robust approach for road pavement defects detection and classification
CN113011392B (en) Pavement type identification method based on pavement image multi-texture feature fusion
Kaur et al. An Efficient Method of Number Plate Extraction from Indian Vehicles Image
Eslami et al. Comparison of deep convolutional neural network classifiers and the effect of scale encoding for automated pavement assessment
CN104112144A (en) Person and vehicle identification method and device
Yang et al. Residual shape adaptive dense-nested Unet: Redesign the long lateral skip connections for metal surface tiny defect inspection
CN109784176B (en) Vehicle-mounted thermal imaging pedestrian detection Rois extraction method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant