CN112700418A - Crack detection method based on improved coding and decoding network model - Google Patents
Crack detection method based on improved coding and decoding network model

Info
- Publication number
- CN112700418A CN112700418A CN202011633762.7A CN202011633762A CN112700418A CN 112700418 A CN112700418 A CN 112700418A CN 202011633762 A CN202011633762 A CN 202011633762A CN 112700418 A CN112700418 A CN 112700418A
- Authority
- CN
- China
- Prior art keywords
- size
- feature map
- crack
- output
- network model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4007—Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30132—Masonry; Concrete
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Health & Medical Sciences (AREA)
- Software Systems (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- Mathematical Physics (AREA)
- Quality & Reliability (AREA)
- Multimedia (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention relates to a crack detection method based on an improved coding and decoding network model. A data set that has undergone data preprocessing is sent into an encoder for feature extraction. The model adopts an encoder-decoder structure whose backbone network is a ResNet34 pre-trained on ImageNet. A cascaded dual-kernel dilated convolution is added in the middle layers of the encoder to retain more of the low-level semantic information and spatial structure information of cracks for fusion at the skip-connection stage, and a multi-kernel pooling module is introduced at the decoder stage to capture and fuse crack information at different scales. The information obtained in this way is more global than that obtained from a single pooling layer, so fine cracks in an image can be detected effectively. Experiments on different data sets, together with comparison experiments against other mainstream algorithm models, show that the proposed method achieves higher precision and makes up for the shortcomings of traditional methods.
Description
Technical Field
The invention relates to the technical field of safety monitoring image processing, in particular to a crack detection method based on an improved coding and decoding network model.
Background
Concrete surface crack detection is an important part of the structural health monitoring of concrete buildings. If cracks appear on a building's surface and continue to extend, the structure may fail in the long term, causing serious economic loss and casualties. Manual crack inspection is time-consuming and labor-intensive, subjective judgment affects detection precision, and structures such as high-rise buildings and bridges are difficult to inspect manually. Automatic concrete crack detection based on image processing technology has therefore become a hot spot of current research.
With the rapid development of information technology, researchers have proposed applying computer vision and image processing to crack detection. Traditional crack detection methods include Gabor filters, histograms of oriented gradients, local binary patterns, threshold-based estimation, and the like. Although these methods obtain good detection results, they place high demands on data-set quality and perform poorly on images with uneven illumination, complex crack topology, and heavy noise. Given the wide application and excellent performance of deep learning in many fields, researchers have in recent years worked on applying convolutional neural networks to crack detection to overcome the limitations of the traditional methods.
In the crack detection field, the feature extraction stage of a traditional convolutional neural network reduces the resolution of the feature map, so much of the low-level semantic information and fine-grained spatial structure information of crack images is lost, and some fine cracks are easily missed. Even if image detail is restored through operations such as up-sampling and feature fusion, the network still struggles to accurately extract detailed crack features when the topology is complex and the foreground and background are extremely unbalanced.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: to overcome the defects of the prior art, the invention provides a crack detection method based on an improved coding and decoding network model, aiming at the problems that in the prior art the detection accuracy of apparent concrete cracks is low, tiny cracks are easily lost, and cracks occupy only a small proportion of pixels.
The technical scheme adopted by the invention for solving the technical problems is as follows: a crack detection method based on an improved coding and decoding network model comprises the following steps:
S1, data acquisition: crack image data on concrete buildings are shot and collected using intelligent devices;
S2, data marking: the collected image data are annotated and classified using labelme software;
S3, data preprocessing: the acquired data set is multiplied by a data enhancement method;
S4, the preprocessed data set is sent into an encoder for feature extraction: the backbone network in the encoder is a pre-trained ResNet34. The main idea of ResNet is to add a direct connection channel to the network, i.e. the idea of the Highway Network: ResNet bypasses the input information directly to the output, protecting the integrity of the information, so the whole network only needs to learn the difference between input and output, which simplifies the learning objective and difficulty.
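The residual idea described above can be sketched as a minimal PyTorch basic block. This is an illustrative block only (ResNet34 stacks such blocks with varying channel counts and strides); the class name and the 64-channel example are assumptions:

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """Minimal residual block: the convolutional branch learns only the
    difference between input and output; the shortcut bypasses the input."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # direct channel: input added to the branch output

x = torch.randn(1, 64, 56, 56)
y = BasicBlock(64)(x)
print(y.shape)  # torch.Size([1, 64, 56, 56]) -- spatial size and channels preserved
```

Because the branch only has to model the input/output difference, gradients flow through the shortcut unimpeded, which is what speeds up training when the pre-trained ResNet34 is used as the backbone.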
A dilated convolution (also called hole or atrous convolution) is added in the middle layers of the encoder. In the encoder stage the convolutional neural network automatically learns image features, which divide into low-level and high-level semantic information: a shallow network learns low-level information such as contours and textures, and as the number of layers grows the network learns more abstract, higher-level features. However, the feature extraction stage — multiple layers of convolution with max pooling — reduces the resolution of the feature map while enlarging the receptive field, so much image detail and spatial information is lost and some tiny cracks are easily missed.
Dilated convolution was proposed to enlarge the receptive field without reducing the feature map size, allowing the output of each convolution to contain a larger range of information.
Defining k as the size of the convolution kernel, k' as the size of the receptive field, and d as the size of the expansion rate, the calculation formula of the receptive field is:
k′=k+(k-1)×(d-1) (1)
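Formula (1) can be checked numerically with a small helper (illustrative; the function name is an assumption):

```python
def effective_kernel(k: int, d: int) -> int:
    """Receptive field (effective kernel size) of a dilated convolution,
    per formula (1): k' = k + (k - 1) * (d - 1)."""
    return k + (k - 1) * (d - 1)

# A 3x3 kernel at dilation rates 1, 2, 3 -- the cases drawn in Fig. 3
print([effective_kernel(3, d) for d in (1, 2, 3)])  # [3, 5, 7]
```

At dilation rate 5 the same 3x3 kernel reaches an 11x11 receptive field, the value used later for the cascaded dual-kernel module.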
Explaining dilated convolution from a one-dimensional viewpoint, define H as the input size, FR as the convolution kernel size, Output as the output feature map size, D as the dilation rate, P as the padding and S as the stride. The size and dimension of the output feature map are then calculated as:
Output=⌊(H+2P-FR-(FR-1)×(D-1))/S⌋+1 (2)
S5, after feature extraction is finished, the decoder restores the feature map to its original size: a multi-kernel pooling module is added at the front end of the decoder to generate multiple receptive fields that capture information at different scales. The size of the output feature map after a pooling operation is calculated as:
OutputH=⌊(H+2P-FH)/S⌋+1, OutputW=⌊(W+2P-FW)/S⌋+1 (3)
where (H, W) is the feature map resolution, (FH, FW) is the pooling kernel size, (OutputH, OutputW) is the output feature map resolution, P is the padding, S is the stride, and ⌊·⌋ denotes rounding down;
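The pooling output-size rule — one spatial axis at a time, Output = ⌊(H + 2P − FH)/S⌋ + 1 — can be verified with a small helper (illustrative; the function name is an assumption):

```python
import math

def pooled_size(h: int, fh: int, p: int, s: int) -> int:
    """Output resolution of one spatial axis after pooling:
    floor((H + 2P - FH) / S) + 1."""
    return math.floor((h + 2 * p - fh) / s) + 1

# A 224x224 feature map under a 2x2 pooling kernel, stride 2, no padding
print(pooled_size(224, 2, 0, 2))  # 112
```

The same rule predicts each branch of the multi-kernel pooling module: e.g. a 32-pixel axis pooled with kernel 3 and stride 3 yields 10.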
S6, the improved network model is trained on the data set of step S3 to generate a detection model, with a loss function monitoring the model during training; the trained model judges the crack pixel regions of a picture and outputs the crack segmentation result.
Specifically, in step S5, the multi-kernel pooling module acquires context information through the following steps:
S5-1, the multi-size low-dimensional feature maps output under the different pooling kernels are up-sampled through bilinear interpolation so that they have the same size as the original feature map;
S5-2, after each feature map, a 1x1 convolution is used to reduce the channel dimension to 1/n of the original features, where n is the number of channels of the original feature map;
S5-3, the feature maps of the different levels and the original feature map are spliced into the final output feature map, the splicing being a concat operation.
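Steps S5-1 to S5-3 can be sketched in PyTorch as below. This is a sketch under assumptions, not the patented implementation: the class name and channel counts are illustrative, the kernel sizes 2/3/5/7 follow the description, and the 1/n channel reduction is taken per pooling branch (the usual pyramid-pooling convention):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiKernelPooling(nn.Module):
    """Sketch of the multi-kernel pooling module: pool at several kernel sizes,
    reduce each branch with a 1x1 convolution (S5-2), up-sample back via
    bilinear interpolation (S5-1), then concat with the original map (S5-3)."""
    def __init__(self, channels: int, kernels=(2, 3, 5, 7)):
        super().__init__()
        self.kernels = kernels
        n = len(kernels)  # assumption: reduce each branch to channels // n
        self.reduce = nn.ModuleList(
            [nn.Conv2d(channels, channels // n, kernel_size=1) for _ in kernels])

    def forward(self, x):
        h, w = x.shape[2:]
        branches = [x]
        for k, conv in zip(self.kernels, self.reduce):
            y = F.max_pool2d(x, kernel_size=k, stride=k)  # multi-size low-dim maps
            y = conv(y)                                   # S5-2: 1x1 channel reduction
            y = F.interpolate(y, size=(h, w), mode='bilinear',
                              align_corners=False)        # S5-1: bilinear up-sampling
            branches.append(y)
        return torch.cat(branches, dim=1)                 # S5-3: concat with original

x = torch.randn(1, 64, 32, 32)
out = MultiKernelPooling(64)(x)
print(out.shape)  # torch.Size([1, 128, 32, 32]) -- 64 original + 4 x 16 branch channels
```

Each pooled branch sees the image at a different granularity, which is what makes the fused output more global than a single pooling layer.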
Preferably, when the training model is generated, a combined loss function BceLoss + DiceLoss is selected, where BceLoss is the two-class cross-entropy loss function:
BL=-(1/N)Σ_i[y_i·log(ŷ_i)+(1-y_i)·log(1-ŷ_i)] (4)
where y_i is the label value of the ith pixel point and ŷ_i is the predicted label value of the ith pixel point; an ordinary cross-entropy loss converges slowly and may fail to reach the optimum when iterating over a large number of easy samples.
Dice may be understood as the degree of similarity of two contour regions; with A and B denoting the sets of points enclosed by the two regions, it is defined as:
DSC(A,B)=2|A∩B|/(|A|+|B|) (5)
The two-class Dice loss is then:
DL=1-2|A∩B|/(|A|+|B|) (6)
the two are linearly combined to obtain a loss function, and the formula is expressed as follows:
Loss=0.5BL+DL (7)
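The combined loss of formula (7) can be sketched as follows; a soft (differentiable) version of the overlap in formula (5) is used, and the smoothing term `eps` is an assumption added for numerical stability:

```python
import torch
import torch.nn.functional as F

def combined_loss(pred: torch.Tensor, target: torch.Tensor,
                  eps: float = 1e-6) -> torch.Tensor:
    """Loss = 0.5 * BL + DL, per formula (7).
    `pred` holds probabilities in (0, 1); the Dice term is a soft version of
    2|A∩B| / (|A| + |B|) from formula (5)."""
    bce = F.binary_cross_entropy(pred, target)          # BL, formula (4)
    inter = (pred * target).sum()
    dice = 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)  # DL
    return 0.5 * bce + dice

pred = torch.sigmoid(torch.randn(1, 1, 16, 16))          # predicted crack probabilities
target = (torch.rand(1, 1, 16, 16) > 0.9).float()        # sparse foreground, as in crack masks
loss = combined_loss(pred, target)
print(loss.item() >= 0)  # True
```

The BCE term scores each pixel individually while the Dice term scores foreground overlap as a whole, which is what lets the combination cope with the extreme foreground/background imbalance of crack masks.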
The invention has the following beneficial effects: a cascaded dual-kernel dilated convolution is proposed for crack detection and used in the middle layers of the down-sampling path, so that low-level semantic information and easily lost spatial structure information of cracks are retained for fusion at the skip-connection stage, preserving more image detail; a multi-kernel pooling module introduced at the decoder stage generates multiple receptive fields to capture and fuse information at different scales, which is more global than the information from a single pooling layer and effectively improves the detection of cracks of different sizes.
Drawings
The invention is further illustrated with reference to the following figures and examples.
Fig. 1 is a schematic diagram of data enhancement in the present invention.
Fig. 2 is a schematic diagram of the ResNet residual block in the present invention.
FIG. 3 is a schematic diagram of dilated convolutions at different dilation rates in the present invention.
FIG. 4 is a schematic diagram of the cascaded dual-kernel dilated convolution module of the present invention.
FIG. 5 is a schematic diagram of the structure of the multi-kernel pooling module of the present invention.
Detailed Description
The present invention will now be described in further detail with reference to the accompanying drawings. These drawings are simplified schematic views illustrating only the basic structure of the present invention in a schematic manner, and thus show only the constitution related to the present invention.
A crack detection method based on an improved coding and decoding network model: crack image data sets are collected and labeled, the data set is multiplied through data enhancement to reduce the risk of overfitting, and test data are sent into the improved coding and decoding network model for experiments to obtain the results. In terms of network improvement, the backbone network is changed to a pre-trained ResNet34 to speed up network training.
The specific scheme is as follows:
firstly, the data preprocessing aspect is as shown in fig. 1.
S1, data acquisition: selecting equipment such as a smart phone and the like to shoot and collect crack images on concrete buildings such as bridges and pavements;
s2, data marking: after data acquisition, label classification is carried out on image data by using labelme software;
s3, preprocessing data: doubling the data set by using a data enhancement method such as Rotate90 degrees, random GridShuffle, horizon Flip and the like, processing the marked crack data set to obtain a data set with multiple times, solving the over-fitting phenomenon caused by too few data sets and improving the detection precision.
Secondly, in the aspect of network model improvement
S4, the backbone network is changed to a pre-trained ResNet34 to speed up network training. As shown in fig. 2, the main idea of ResNet is to add a direct connection channel to the network, i.e. the idea of the Highway Network: ResNet bypasses the input information directly to the output, protecting the integrity of the information, so the whole network only needs to learn the difference between input and output, which simplifies the learning objective and difficulty.
To enlarge the receptive field without reducing the feature map size (resolution), so that the output of each convolution contains a larger range of information, the invention adds cascaded dual-kernel dilated convolutions in the middle layers of the encoder stage. Fig. 3 shows dilated convolutions at different dilation rates: subgraph (a) is an ordinary convolution with dilation rate 1 and receptive field 3, subgraph (b) is a dilated convolution with dilation rate 2 and receptive field 5, and subgraph (c) is a dilated convolution with dilation rate 3 and receptive field 7. The receptive field under different dilation rates is calculated as:
k′=k+(k-1)×(d-1) (1)
where k is the convolution kernel size, k' is the receptive field size, and d is the dilation rate size.
Explaining dilated convolution from a one-dimensional viewpoint, define H as the input size, FR as the convolution kernel size, Output as the output feature map size, D as the dilation rate, P as the padding and S as the stride. The size and dimension of the output feature map are then calculated as:
Output=⌊(H+2P-FR-(FR-1)×(D-1))/S⌋+1 (2)
FIG. 4 is a schematic diagram of the cascaded dual-kernel dilated convolution, which is composed of two dilated convolutions with different dilation rates: the receptive field is 7x7 at dilation rate 3 and 11x11 at dilation rate 5. The enlarged receptive field lets the convolution output contain semantic information over a wider range; the smaller receptive field extracts features of smaller objects while the larger one extracts features of larger objects. The output feature maps of the dilated convolutions are added pixel-wise; adding corresponding values keeps the dimension of the feature map unchanged while each dimension contains more features, i.e. semantic information. The method applies the cascaded dual-kernel dilated convolution in the middle layers of the down-sampling path in order to retain low-level crack semantic information and easily lost fine-crack information for fusion at the skip-connection stage, thereby achieving more accurate segmentation.
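One plausible reading of the module above — two parallel 3x3 dilated convolutions at rates 3 and 5 (receptive fields 7x7 and 11x11 by formula (1)) whose outputs are added pixel-wise — can be sketched as follows. The class name, channel count, and parallel-branch arrangement are assumptions made for illustration:

```python
import torch
import torch.nn as nn

class DualDilatedConv(nn.Module):
    """Sketch of the dual-kernel dilated convolution: two 3x3 convolutions
    with dilation rates 3 and 5; the outputs are added element-wise, so the
    channel dimension and spatial resolution stay unchanged."""
    def __init__(self, channels: int):
        super().__init__()
        # padding = dilation keeps spatial resolution for a 3x3 kernel
        self.branch_d3 = nn.Conv2d(channels, channels, 3, padding=3, dilation=3)
        self.branch_d5 = nn.Conv2d(channels, channels, 3, padding=5, dilation=5)

    def forward(self, x):
        # pixel-wise addition fuses the two receptive fields without
        # changing the feature map dimensions
        return self.branch_d3(x) + self.branch_d5(x)

x = torch.randn(1, 128, 28, 28)
y = DualDilatedConv(128)(x)
print(y.shape)  # torch.Size([1, 128, 28, 28])
```

With `padding = dilation`, formula (2) gives 28 + 2·3 − 7 + 1 = 28 for the rate-3 branch and 28 + 2·5 − 11 + 1 = 28 for the rate-5 branch, so the two outputs align for the addition.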
S5, to address the multi-scale nature of cracks, the invention adopts a multi-kernel pooling module at the decoder stage to generate multiple receptive fields that capture information at different scales. As shown in fig. 5, the module fuses features at four different scales, using pooling kernels of 2x2, 3x3, 5x5 and 7x7. The size of the output feature map after the pooling operation is calculated as:
OutputH=⌊(H+2P-FH)/S⌋+1, OutputW=⌊(W+2P-FW)/S⌋+1 (3)
where (H, W) is the feature map resolution, (FH, FW) is the pooling kernel size, (OutputH, OutputW) is the output feature map resolution, P is the padding, S is the stride, and ⌊·⌋ denotes rounding down.
The multi-kernel pooling module acquires context information as follows. First, the multi-size low-dimensional feature maps output under the different pooling kernels are up-sampled through bilinear interpolation so that they match the size of the original feature map. Then, to maintain the weight of the global features, a 1x1 convolution is applied after each feature map to reduce the channel dimension to 1/n of the original features, where n is the number of channels of the original feature map. Finally, the feature maps of the different levels and the original feature map are spliced (concat) into the final output feature map.
For the unbalanced samples in crack detection, where the background occupies an extremely large proportion and the object an extremely small one, choosing a suitable loss function is very important; the invention selects the combined loss function BceLoss + DiceLoss.
The BceLoss two-class cross-entropy loss function, a classification loss commonly used in semantic segmentation, is expressed as:
BL=-(1/N)Σ_i[y_i·log(ŷ_i)+(1-y_i)·log(1-ŷ_i)] (4)
where y_i is the label value of the ith pixel point and ŷ_i is its predicted label value. An ordinary cross-entropy loss converges slowly and may fail to reach the optimum when iterating over a large number of easy samples.
Because the region to be segmented occupies only a small part of the picture, learning easily falls into a local minimum of the loss function; the Dice loss therefore increases the weight of the foreground region. Dice may be understood as the similarity of two contour regions; with A and B denoting the sets of points enclosed by the two regions, it is defined as
DSC(A,B)=2|A∩B|/(|A|+|B|) (5)
The two-class Dice loss is expressed as
DL=1-2|A∩B|/(|A|+|B|) (6)
The loss function herein combines the two linearly, and is formulated as:
Loss=0.5BL+DL (7)
The combined loss function attends to pixel-level classification accuracy and the segmentation quality of the image foreground at the same time, making model training more stable and effectively alleviating the unbalanced distribution of positive and negative samples in crack images, thereby yielding more accurate pixel-level detection results.
In light of the foregoing description of the preferred embodiment of the present invention, many modifications and variations will be apparent to those skilled in the art without departing from the spirit and scope of the invention. The technical scope of the present invention is not limited to the content of the specification, and must be determined according to the scope of the claims.
Claims (3)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011633762.7A CN112700418B (en) | 2020-12-31 | 2020-12-31 | Crack detection method based on improved coding and decoding network model |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011633762.7A CN112700418B (en) | 2020-12-31 | 2020-12-31 | Crack detection method based on improved coding and decoding network model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112700418A true CN112700418A (en) | 2021-04-23 |
CN112700418B CN112700418B (en) | 2024-03-15 |
Family
ID=75513710
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011633762.7A Active CN112700418B (en) | 2020-12-31 | 2020-12-31 | Crack detection method based on improved coding and decoding network model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112700418B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113222926A (en) * | 2021-05-06 | 2021-08-06 | 西安电子科技大学 | Zipper abnormity detection method based on depth support vector data description model |
CN113705575A (en) * | 2021-10-27 | 2021-11-26 | 北京美摄网络科技有限公司 | Image segmentation method, device, equipment and storage medium |
CN115019068A (en) * | 2022-05-26 | 2022-09-06 | 杭州电子科技大学 | Progressive salient object identification method based on coding and decoding framework |
CN117764988A (en) * | 2024-02-22 | 2024-03-26 | 山东省计算中心(国家超级计算济南中心) | Road crack detection method and system based on heteronuclear convolution multi-receptive field network |
CN119090873A (en) * | 2024-11-06 | 2024-12-06 | 中煤科工集团信息技术有限公司 | A high-speed pavement crack detection method based on multi-dimensional feature extraction |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111080641A (en) * | 2019-12-30 | 2020-04-28 | 上海商汤智能科技有限公司 | Crack detection method and device, computer equipment and storage medium |
CN111222580A (en) * | 2020-01-13 | 2020-06-02 | 西南科技大学 | High-precision crack detection method |
WO2020215236A1 (en) * | 2019-04-24 | 2020-10-29 | 哈尔滨工业大学(深圳) | Image semantic segmentation method and system |
-
2020
- 2020-12-31 CN CN202011633762.7A patent/CN112700418B/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020215236A1 (en) * | 2019-04-24 | 2020-10-29 | 哈尔滨工业大学(深圳) | Image semantic segmentation method and system |
CN111080641A (en) * | 2019-12-30 | 2020-04-28 | 上海商汤智能科技有限公司 | Crack detection method and device, computer equipment and storage medium |
CN111222580A (en) * | 2020-01-13 | 2020-06-02 | 西南科技大学 | High-precision crack detection method |
Non-Patent Citations (1)
Title |
---|
CHANG Bin, LI Ning, ZHENG Zhengwen, ZHANG Ping: "Upgrading strategy for feed-forward networks and an adaptive dynamic upgrading prediction model for rock-stratum fracture depth under compound perforation", Chinese Journal of Rock Mechanics and Engineering, no. 14, 15 February 2005 (2005-02-15) *
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113222926A (en) * | 2021-05-06 | 2021-08-06 | 西安电子科技大学 | Zipper abnormity detection method based on depth support vector data description model |
CN113222926B (en) * | 2021-05-06 | 2023-04-18 | 西安电子科技大学 | Zipper abnormity detection method based on depth support vector data description model |
CN113705575A (en) * | 2021-10-27 | 2021-11-26 | 北京美摄网络科技有限公司 | Image segmentation method, device, equipment and storage medium |
CN113705575B (en) * | 2021-10-27 | 2022-04-08 | 北京美摄网络科技有限公司 | Image segmentation method, device, equipment and storage medium |
CN115019068A (en) * | 2022-05-26 | 2022-09-06 | 杭州电子科技大学 | Progressive salient object identification method based on coding and decoding framework |
CN115019068B (en) * | 2022-05-26 | 2024-02-23 | 杭州电子科技大学 | Progressive salient target identification method based on coding and decoding architecture |
CN117764988A (en) * | 2024-02-22 | 2024-03-26 | 山东省计算中心(国家超级计算济南中心) | Road crack detection method and system based on heteronuclear convolution multi-receptive field network |
CN117764988B (en) * | 2024-02-22 | 2024-04-30 | 山东省计算中心(国家超级计算济南中心) | Road crack detection method and system based on heteronuclear convolution multi-receptive field network |
CN119090873A (en) * | 2024-11-06 | 2024-12-06 | 中煤科工集团信息技术有限公司 | A high-speed pavement crack detection method based on multi-dimensional feature extraction |
CN119090873B (en) * | 2024-11-06 | 2025-01-14 | 中煤科工集团信息技术有限公司 | High-speed pavement crack detection method with multi-dimensional feature extraction |
Also Published As
Publication number | Publication date |
---|---|
CN112700418B (en) | 2024-03-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112700418A (en) | Crack detection method based on improved coding and decoding network model | |
CN110738207B (en) | Character detection method for fusing character area edge information in character image | |
CN110334762B (en) | Feature matching method based on quad tree combined with ORB and SIFT | |
CN110059768B (en) | Semantic segmentation method and system for fusion of point and area features for street view understanding | |
CN110136154A (en) | Semantic Segmentation Method of Remote Sensing Image Based on Fully Convolutional Network and Morphological Processing | |
CN111915613A (en) | Image instance segmentation method, device, equipment and storage medium | |
CN111723732A (en) | A kind of optical remote sensing image change detection method, storage medium and computing device | |
CN110348383B (en) | Road center line and double line extraction method based on convolutional neural network regression | |
CN114943876A (en) | Cloud and cloud shadow detection method and device for multi-level semantic fusion and storage medium | |
CN110070091A (en) | The semantic segmentation method and system rebuild based on dynamic interpolation understood for streetscape | |
CN112489054A (en) | Remote sensing image semantic segmentation method based on deep learning | |
CN110717886A (en) | Pavement pool detection method based on machine vision in complex environment | |
CN110135354A (en) | A Change Detection Method Based on Real-Scene 3D Model | |
CN113870286B (en) | Foreground segmentation method based on multi-level feature and mask fusion | |
CN110909741A (en) | Vehicle re-identification method based on background segmentation | |
CN113487610B (en) | Herpes image recognition method and device, computer equipment and storage medium | |
CN113139544A (en) | Saliency target detection method based on multi-scale feature dynamic fusion | |
CN112287802A (en) | Face image detection method, system, storage medium and equipment | |
CN112348762A (en) | Single image rain removing method for generating confrontation network based on multi-scale fusion | |
CN112560719A (en) | High-resolution image water body extraction method based on multi-scale convolution-multi-core pooling | |
CN113239865A (en) | Deep learning-based lane line detection method | |
CN109461177A (en) | A kind of monocular image depth prediction approach neural network based | |
CN108876776B (en) | Classification model generation method, fundus image classification method and device | |
Guo et al. | D3-Net: Integrated multi-task convolutional neural network for water surface deblurring, dehazing and object detection | |
CN116977674A (en) | Image matching method, related device, storage medium and program product |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |