CN111080641A - Crack detection method and device, computer equipment and storage medium

Crack detection method and device, computer equipment and storage medium

Info

Publication number: CN111080641A
Application number: CN201911401125.4A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 吴刚, 倪枫
Applicant / Current assignee: Shanghai Sensetime Intelligent Technology Co Ltd
Prior art keywords: features, image, crack, detected, level image
Legal status: Withdrawn

Classifications

    • G06T 7/0008: Industrial image inspection checking presence/absence (under G Physics; G06 Computing; G06T Image data processing or generation; G06T 7/00 Image analysis; G06T 7/0002 Inspection of images, e.g. flaw detection)
    • G06T 2207/20081: Training; Learning (under G06T 2207/00 Indexing scheme for image analysis or image enhancement; G06T 2207/20 Special algorithmic details)
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/30132: Masonry; Concrete (under G06T 2207/30 Subject of image; G06T 2207/30108 Industrial image inspection)

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides a crack detection method, apparatus, computer device and storage medium, wherein the method comprises: acquiring an image to be detected; extracting a first-level image feature and a second-level image feature from the image to be detected based on a pre-trained dilated (hole) convolutional neural network, the second-level image feature and the first-level image feature being extracted from a second convolutional layer and a first convolutional layer of the network respectively, with the second convolutional layer at a higher level than the first convolutional layer; determining pooled features of the image to be detected based on a pre-trained spatial pyramid pooling network and the second-level image feature; and determining a crack detection result based on the first-level image feature and the pooled features. Mining richer image features improves the accuracy of crack detection, mining the image features at different scales improves its robustness, and the time and labor consumed by manual inspection are avoided, so the detection efficiency is high.

Description

Crack detection method and device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of computer vision technologies, and in particular, to a crack detection method and apparatus, a computer device, and a storage medium.
Background
Cracks are important indicators of damage to structures such as roads, bridges and houses; if they are not handled in time, the function of the structure may be impaired and personal safety may even be threatened. In order to keep such structures in good condition, it is important to detect cracks promptly.
In the related art, crack detection mainly relies on manual inspection: a professional technician detects and records cracks directly, with the naked eye or with the aid of simple instruments. This manual approach consumes a large amount of labor and offers low detection efficiency and accuracy.
Disclosure of Invention
The embodiments of the present disclosure provide at least one crack detection scheme that detects cracks automatically through a pre-trained dilated convolutional neural network (also called a hole or atrous convolutional neural network) and a spatial pyramid pooling network, with high detection efficiency and accuracy.
The scheme mainly comprises the following aspects:
in a first aspect, an embodiment of the present disclosure provides a crack detection method, where the method includes:
acquiring an image to be detected;
extracting a first-level image feature and a second-level image feature from the image to be detected based on a pre-trained dilated convolutional neural network; the second-level image feature is extracted from a second convolutional layer included in the dilated convolutional neural network, the first-level image feature is extracted from a first convolutional layer included in the dilated convolutional neural network, and the second convolutional layer is at a higher level than the first convolutional layer;
determining the pooled features of the image to be detected based on a pre-trained spatial pyramid pooling network and the second-level image features;
determining a crack detection result based on the first-level image feature and the pooled features.
With this scheme, the trained dilated convolutional neural network extracts both high-level image features (extracted by the higher-level second convolutional layer) and low-level image features (extracted by the lower-level first convolutional layer) from the image to be detected. The low-level image features correspond to basic attributes of the image such as contours and colors, while the high-level image features correspond to deep semantic attributes such as texture and spatial structure. In addition, the scheme extracts pooled features from the high-level image features using the trained spatial pyramid pooling network, and performs crack detection based on the first-level image features and the pooled features. In other words, on the one hand the dilated convolutional neural network mines richer image features, which improves the accuracy of crack detection; on the other hand the spatial pyramid pooling network mines the image features at different scales, which improves the robustness of crack detection, avoids the time and labor consumed by manual inspection, and yields high detection efficiency.
In one possible embodiment, the dilated convolutional neural network comprises a plurality of convolutional layers connected in sequence; the first layer of the second convolutional layer is connected to the last layer of the first convolutional layer; and extracting the first-level image feature and the second-level image feature from the image to be detected based on the pre-trained dilated convolutional neural network comprises the following steps:
inputting the image to be detected into the first convolutional layer for convolution processing to obtain the first-level image feature;
and inputting the first-level image features into the second convolution layer for convolution processing to obtain second-level image features.
In one possible implementation, the spatial pyramid pooling network includes a pooling layer and a plurality of convolutional layers whose convolution parameters differ, the convolution parameters including a dilation rate and a convolution kernel size; and determining the pooled features of the image to be detected based on the pre-trained spatial pyramid pooling network and the second-level image features comprises the following steps:
inputting the second-level image features into each convolution layer of the spatial pyramid pooling network for convolution processing respectively to obtain first features, and,
inputting the second-level image features into the pooling layer for pooling processing to obtain second features;
and performing feature fusion on the first features output by the convolutional layers of the spatial pyramid pooling network and the second feature to obtain the pooled features.
With this scheme, feature extraction is performed on the high-level image features by several convolutional layers with different convolution parameters and one pooling layer. Because convolutional layers with different convolution parameters extract image features at different scales, accurate detection is possible whether the image to be detected contains a complete crack or only part of a crack, giving the detection better robustness.
In one possible embodiment, the determining a crack detection result based on the first-level image feature and the pooled feature includes:
determining the probability of a crack corresponding to each pixel point in the image to be detected based on the first-level image characteristics and the pooled characteristics;
and determining whether the image to be detected has cracks or not based on the crack probability corresponding to each pixel point.
In a possible implementation manner, the determining whether a crack exists in the image to be detected based on the crack probability corresponding to each pixel point includes:
determining a crack region to be detected in the image to be detected based on the crack probability corresponding to each pixel point;
and determining whether the image to be detected has cracks or not based on the gray value of the crack region to be detected and the size of the crack region to be detected.
In a possible implementation manner, the determining whether the image to be detected has a crack based on the gray-scale value of the crack region to be detected and the size of the crack region to be detected includes:
determining the confidence coefficient that the crack region to be detected is a crack based on the gray value of the crack region to be detected and the size of the crack region to be detected;
and determining whether the image to be detected has the crack or not based on the confidence coefficient and a preset confidence coefficient threshold value of the crack.
With this scheme, whether the crack region to be detected contains a crack is determined from the comparison between the confidence and the preset confidence threshold. If the region does contain a crack, the gray values within it tend to be consistent or to differ only slightly, and the determined confidence is correspondingly high; in other words, the gray-scale statistics are used to assess how likely it is that the region is a crack, which further improves the accuracy of crack detection.
In a possible embodiment, the determining, based on the first-level image features and the pooled features, a crack probability corresponding to each pixel point in the image to be detected includes:
performing upsampling processing on the pooled features to obtain processed pooled features with the same dimension as the first-level image features;
and performing feature fusion on the first-level image features and the processed pooled features, and determining the probability of a crack corresponding to each pixel point in the image to be detected based on the fused features.
In a possible implementation manner, the determining, based on the fused features, a crack probability corresponding to each pixel point in the image to be detected includes:
inputting the fused features into a convolution layer with convolution parameters for convolution processing to obtain features after convolution;
performing upsampling processing on the convolved features to obtain the processed convolved features with the same dimensionality as the to-be-detected image;
and determining the crack probability corresponding to each pixel point in the image to be detected based on the processed features after convolution.
In a possible implementation, the method further comprises pre-training the dilated convolutional neural network and the spatial pyramid pooling network;
the dilated convolutional neural network and the spatial pyramid pooling network are trained with acquired crack image samples and the crack annotation information corresponding to each crack image sample.
In a second aspect, the present disclosure also provides a crack detection device, the device comprising:
the acquisition module is used for acquiring an image to be detected;
the extraction module is used for extracting a first-level image feature and a second-level image feature from the image to be detected based on a pre-trained dilated convolutional neural network; the second-level image feature is extracted from a second convolutional layer included in the dilated convolutional neural network, the first-level image feature is extracted from a first convolutional layer included in the dilated convolutional neural network, and the second convolutional layer is at a higher level than the first convolutional layer;
the determining module is used for determining the pooled features of the image to be detected based on a pre-trained spatial pyramid pooling network and the second-level image features;
and the detection module is used for determining a crack detection result based on the first-level image characteristics and the pooled characteristics.
In a third aspect, the present disclosure also provides a computer device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when a computer device is run, the machine-readable instructions, when executed by the processor, performing the steps of the crack detection method as set forth in the first aspect and any of its various embodiments.
In a fourth aspect, the present disclosure also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the crack detection method according to the first aspect and any of its various embodiments.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required for use in the embodiments will be briefly described below, and the drawings herein incorporated in and forming a part of the specification illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It is appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope, for those skilled in the art will be able to derive additional related drawings therefrom without the benefit of the inventive faculty.
FIG. 1 illustrates a flow chart of a crack detection method provided by an embodiment of the present disclosure;
fig. 2 is a flowchart illustrating a specific method for determining a crack probability in a crack detection method provided in an embodiment of the present disclosure;
fig. 3 is a flowchart illustrating a specific method for determining a crack probability based on fused features in a crack detection method provided in an embodiment of the present disclosure;
FIG. 4 is a flow chart illustrating a specific method for crack determination in the crack detection method provided by the embodiment of the disclosure;
FIG. 5 is a schematic diagram illustrating an application of a crack detection method provided by an embodiment of the present disclosure;
FIG. 6 is a schematic diagram illustrating a crack detection device provided by an embodiment of the present disclosure;
fig. 7 shows a schematic diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
Research shows that related crack detection mainly relies on manual inspection: a professional technician detects and records cracks directly, with the naked eye or with the aid of simple instruments. This manual approach consumes a large amount of labor and offers low detection efficiency and accuracy.
Based on this research, the present disclosure provides at least one crack detection scheme that detects cracks automatically through a pre-trained dilated convolutional neural network and a spatial pyramid pooling network, with high detection efficiency and accuracy.
The drawbacks described above are the result of the inventors' practical and careful study; therefore, both the discovery of these problems and the solutions the present disclosure proposes for them should be regarded as the inventors' contribution to the present disclosure.
The technical solutions in the present disclosure will be described clearly and completely with reference to the accompanying drawings in the present disclosure, and it is to be understood that the described embodiments are only a part of the embodiments of the present disclosure, and not all of the embodiments. The components of the present disclosure, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
To facilitate understanding of the present embodiment, first, a crack detection method disclosed in the embodiments of the present disclosure is described in detail, where an execution subject of the crack detection method provided in the embodiments of the present disclosure is generally a computer device with certain computing capability, and the computer device includes, for example: a terminal device, which may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle mounted device, a wearable device, or a server or other processing device. In some possible implementations, the crack detection method may be implemented by a processor calling computer readable instructions stored in a memory.
The crack detection method provided by the embodiment of the present disclosure is described below by taking an execution subject as a server.
Referring to fig. 1, a flow chart of a crack detection method provided in some embodiments of the present disclosure includes steps S101 to S104, where:
and S101, acquiring an image to be detected.
The image to be detected in the embodiment of the present disclosure may be any image that needs crack detection, for example, an image related to the surface condition of a road or a house captured by a camera.
It should be noted that, in the embodiments of the present disclosure, a captured image of a road or house surface may be used directly as the image to be detected, or the image to be detected may be obtained by first processing the captured image. For example, a captured image of size 1028 × 1028 may be scaled to 512 × 512 and used as the image to be detected; alternatively, the captured image may be divided into blocks and each image block taken as an image to be detected; other processing manners may also be adopted, as sketched below.
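Purely as an illustration of these two preprocessing options, the snippet below resizes a capture to the network input size or cuts it into tiles. OpenCV and NumPy are used only for convenience; the 512 × 512 input size follows the example above and nothing here is prescribed by the disclosure.

```python
import cv2
import numpy as np

def resize_to_input(image: np.ndarray, size: int = 512) -> np.ndarray:
    """Scale a captured image (e.g. 1028 x 1028) down to the network input size."""
    return cv2.resize(image, (size, size), interpolation=cv2.INTER_LINEAR)

def split_into_blocks(image: np.ndarray, block: int = 512) -> list:
    """Alternative: cut the captured image into block-sized tiles, each detected separately."""
    h, w = image.shape[:2]
    return [image[y:y + block, x:x + block]
            for y in range(0, h, block)
            for x in range(0, w, block)]
```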
S102, extracting a first-level image feature and a second-level image feature from the image to be detected based on a pre-trained dilated convolutional neural network; the second-level image feature is extracted from a second convolutional layer included in the dilated convolutional neural network, the first-level image feature is extracted from a first convolutional layer included in the dilated convolutional neural network, and the second convolutional layer is at a higher level than the first convolutional layer.
Here, once the image to be detected has been acquired, image features at two levels (i.e., the first-level image feature and the second-level image feature) can be extracted based on the pre-trained dilated convolutional neural network.
To explain the extraction of the first-level and second-level image features, the network structure of the dilated convolutional neural network is first described briefly.
The dilated convolutional neural network in the embodiments of the present disclosure is mainly composed of a plurality of convolutional layers connected in sequence. Each convolutional layer can be configured with its own convolution parameters (which may be the same or different), so that after image features are input to a convolutional layer, a convolution operation is performed with the corresponding parameters and richer image features are mined. The convolutional layers include not only a first convolutional layer at a lower level, which extracts the corresponding low-level image features (i.e., the first-level image features), but also a second convolutional layer at a higher level, which extracts the corresponding high-level image features (i.e., the second-level image features); that is, as the level of the convolutional layers increases, the extracted image features become deeper.
In the embodiments of the present disclosure, it is only required that the convolutional layers forming the second convolutional layer sit at a higher level than those forming the first convolutional layer; the number of layers included in the first convolutional layer and in the second convolutional layer is not specifically limited.
Depending on which convolutional layers form the first convolutional layer, the first-level image features extracted will differ; likewise, the second-level image features depend on which convolutional layers form the second convolutional layer. In a specific application, the first-level image features may be low-level features such as image color, edges and contours, and the second-level image features may be high-level features such as texture and spatial relationships; both may also carry other information, which the present disclosure does not specifically limit.
It should be noted that the extraction of the second-level image features depends on the extraction of the first-level image features: before the second-level image features are determined, the image to be detected must be input to the first convolutional layer for convolution to obtain the first-level image features, and the first-level image features are then input to the second convolutional layer for convolution to obtain the second-level image features. Therefore, the first layer of the second convolutional layer must be connected to the last layer of the first convolutional layer.
For example, for a dilated convolutional neural network comprising 18 convolutional layers, the first convolutional layer may include the 1st to the 4th convolutional layers and the second convolutional layer the 5th to the 18th; the 5th convolutional layer is then the first layer of the second convolutional layer, and the 4th convolutional layer is the last layer of the first convolutional layer.
In practical applications, the dilated convolutional neural network may adopt any one of the following models: a ResNet convolutional neural network model, a MobileNet convolutional neural network model, or an Xception convolutional neural network model. Other convolutional neural network models capable of multi-layer convolution operations may also be selected; the embodiments of the present disclosure do not specifically limit this. A minimal sketch of such a backbone is given below.
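The following PyTorch sketch is not the claimed network but a toy stand-in that shows the structural idea: a lower block of ordinary convolutions plays the role of the first convolutional layer and emits the first-level (low-level) features, while a deeper block of dilated convolutions plays the role of the second convolutional layer and emits the second-level (high-level) features. The layer counts, channel widths, strides and dilation rates are assumptions chosen only so that a 512 × 512 input yields 128 × 128 and 32 × 32 feature maps, matching the example of fig. 5.

```python
import torch
import torch.nn as nn

class DilatedBackbone(nn.Module):
    """Toy dilated-convolution backbone: low_block ~ 'first convolutional layer',
    high_block ~ 'second convolutional layer'."""
    def __init__(self, in_ch: int = 3):
        super().__init__()
        # Lower-level block: plain convolutions, overall stride 4 (512 -> 128).
        self.low_block = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Higher-level block: dilated convolutions, further stride 4 (128 -> 32).
        self.high_block = nn.Sequential(
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 3, stride=2, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, 3, padding=4, dilation=4), nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor):
        low = self.low_block(x)      # first-level image features (e.g. 64 x 128 x 128)
        high = self.high_block(low)  # second-level image features (e.g. 256 x 32 x 32)
        return low, high
```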
It is worth noting that, to facilitate extraction of the first-level image features, the embodiments of the present disclosure may apply enhancement processing to the image to be detected before the first-level image features are extracted with the trained dilated convolutional neural network.
The enhancement processing may be random horizontal flipping of the image to be detected, random cropping of the image to different sizes, the addition of Gaussian noise to improve detection robustness, normalization and regularization, or other enhancement processing; a possible composition is sketched below.
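As an illustration only, the enhancement options listed above could be composed as follows. RandomHorizontalFlip, RandomResizedCrop and Normalize are standard torchvision transforms; the Gaussian-noise step is a small custom transform written here because the disclosure names no specific library for it, and the crop size, noise level and normalization statistics are placeholder values.

```python
import torch
from torchvision import transforms

class AddGaussianNoise:
    """Custom transform: add zero-mean Gaussian noise to a tensor image."""
    def __init__(self, std: float = 0.01):
        self.std = std

    def __call__(self, img: torch.Tensor) -> torch.Tensor:
        return img + torch.randn_like(img) * self.std

enhance = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),           # random horizontal flipping
    transforms.RandomResizedCrop(512),                # random-size cropping back to 512 x 512
    transforms.ToTensor(),
    AddGaussianNoise(std=0.01),                       # Gaussian noise for robustness
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # normalization / regularization
                         std=[0.229, 0.224, 0.225]),
])
```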
S103, determining the pooled features of the image to be detected based on the pre-trained spatial pyramid pooling network and the second-level image features.
Here, in the crack detection method provided by the embodiment of the present disclosure, in the case that the second-level image features are obtained by extraction, the second-level image features may be processed based on a spatial pyramid pooling network to obtain pooled features of the image to be detected.
For convenience of explaining the extraction process of the pooled features, a simple explanation of the network structure of the spatial pyramid pooled network may be first given below.
The spatial pyramid pooling network in the embodiments of the present disclosure mainly comprises a plurality of convolutional layers and one pooling layer connected in parallel. The convolution parameters of the parallel convolutional layers differ, so that after the second-level image features are input to each convolutional layer, convolution is performed with the corresponding parameters and first features at different scales are mined. Meanwhile, the second-level image features are input to the pooling layer and pooled with the configured pooling parameters to obtain a second feature; the pooled features are obtained after feature fusion of the several first features and the single second feature. The fused result may be obtained by concatenating the first features output by the convolutional layers and the second feature output by the pooling layer along the channel dimension.
The convolution parameters configured for each convolutional layer mainly include a dilation rate and a convolution kernel size, so different convolutional layers may differ only in dilation rate, only in kernel size, or in both.
For example, for a spatial pyramid pooling network comprising 4 convolutional layers, the kernel sizes may be set to 1 × 1, 3 × 3, 3 × 3 and 3 × 3, and the dilation rates to 1, 6, 12 and 18, respectively. Dilation inserts zero-weighted points between the points of the original convolution kernel, which enlarges the receptive field; that is, features of the image to be detected can be mined under different size transformations, improving the robustness of subsequent crack detection.
In practical applications, the pyramid pooling network may adopt a DeepLab network model or another spatial pyramid structure model with dilated convolution; one possible form of such a module is sketched below.
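The sketch below shows one ASPP-style way such a module could be written in PyTorch, using the kernel sizes and dilation rates of the example above. The channel counts, the use of global average pooling for the pooling branch and the final 1 × 1 projection after concatenation are assumptions of this sketch, not requirements of the disclosure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialPyramidPooling(nn.Module):
    """Parallel dilated convolutions (rates 1, 6, 12, 18) plus a pooling branch,
    fused by channel concatenation."""
    def __init__(self, in_ch: int = 256, out_ch: int = 256):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=1),                           # 1x1, rate 1
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=6, dilation=6),    # 3x3, rate 6
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=12, dilation=12),  # 3x3, rate 12
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=18, dilation=18),  # 3x3, rate 18
        ])
        self.pool_branch = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                    # the pooling layer
            nn.Conv2d(in_ch, out_ch, kernel_size=1),
        )
        self.project = nn.Conv2d(5 * out_ch, out_ch, kernel_size=1)

    def forward(self, high: torch.Tensor) -> torch.Tensor:
        firsts = [branch(high) for branch in self.branches]            # first features
        second = F.interpolate(self.pool_branch(high), size=high.shape[2:],
                               mode='bilinear', align_corners=False)   # second feature
        fused = torch.cat(firsts + [second], dim=1)                    # feature fusion
        return self.project(fused)                                     # pooled features
```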
And S104, determining a crack detection result based on the first-level image features and the pooled features.
In the embodiments of the present disclosure, determining the first-level image features and the pooled features may be understood as encoding the image to be detected, that is, converting it into a numerical matrix representation that characterizes its multi-dimensional features. Corresponding to this encoding process, image decoding can then be performed: the characterized multi-dimensional features are restored to the size of the image to be detected, and the crack detection result is obtained.
In the embodiments of the present disclosure, the crack detection result indicates whether a crack exists in the region to be detected. It may be determined from the crack probability corresponding to each pixel in the image to be detected, and that crack probability may in turn be obtained from the encoded first-level image features and the pooled features. That is, when the crack probability of a certain pixel is large (for example, greater than 0.5), the pixel is more likely to belong to a crack region.
Next, the determination process of the crack probability will be described first.
As shown in fig. 2, the process of determining the crack probability specifically includes the following steps:
s201, performing upsampling processing on the pooled features to obtain the processed pooled features with the same dimensionality as the first-level image features;
s202, performing feature fusion on the first-level image features and the processed pooled features, and determining the probability of a crack corresponding to each pixel point in the image to be detected based on the fused features.
Here, the embodiment of the present disclosure may determine a crack probability corresponding to each pixel point in the image to be detected based on the fusion result of the first-level image feature and the pooled feature.
In order to facilitate the fusion of the first-level image features and the pooled features, the crack detection method provided by the embodiments of the present disclosure may upsample the pooled features to obtain processed pooled features with the same dimensions as the first-level image features, so that the two can be fused. Similar to the feature fusion in the spatial pyramid pooling network, the fusion here may concatenate the first-level image features and the upsampled pooled features along the channel dimension.
For the fused features, in order to ensure that the size of the decoded image is consistent with that of the image to be detected, a convolution operation may be performed first, followed by upsampling. As shown in fig. 3, the process of determining the crack probability based on the fused features specifically includes the following steps:
s301, inputting the fused features into a convolution layer with convolution parameters for convolution processing to obtain features after convolution;
s302, performing up-sampling processing on the convolved features to obtain the processed convolved features with the same dimensionality as the image to be detected;
s303, determining the crack probability corresponding to each pixel point in the image to be detected based on the processed convolved features.
Here, in the embodiments of the present disclosure, the fused features are input into a convolutional layer with convolution parameters for convolution processing to obtain convolved features, and the convolved features are then upsampled to obtain processed convolved features with the same dimensions as the image to be detected, so that the crack probability corresponding to each pixel in the image to be detected can be determined from the processed convolved features. For each per-pixel feature among the processed convolved features, the better it matches the characteristics of a crack, the higher the crack probability of the corresponding pixel; conversely, the worse it matches, the lower the crack probability.
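Steps S201-S202 and S301-S303 could be realized by a decoder such as the sketch below: the pooled features are upsampled to the size of the first-level features, concatenated with them, convolved, upsampled again to the input resolution, and mapped to a per-pixel crack probability. The channel counts, the bilinear interpolation and the sigmoid output are assumptions of this sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrackDecoder(nn.Module):
    """Fuse first-level features with pooled features and decode a per-pixel crack probability."""
    def __init__(self, low_ch: int = 64, pooled_ch: int = 256):
        super().__init__()
        self.fuse_conv = nn.Conv2d(low_ch + pooled_ch, 128, kernel_size=3, padding=1)
        self.score = nn.Conv2d(128, 1, kernel_size=1)

    def forward(self, low: torch.Tensor, pooled: torch.Tensor) -> torch.Tensor:
        # S201: upsample the pooled features to the spatial size of the first-level features.
        pooled_up = F.interpolate(pooled, size=low.shape[2:], mode='bilinear', align_corners=False)
        # S202: feature fusion by channel concatenation.
        fused = torch.cat([low, pooled_up], dim=1)
        # S301: convolution on the fused features.
        conv = F.relu(self.fuse_conv(fused))
        # S302: upsample 4x to the size of the image to be detected (e.g. 128 -> 512).
        conv_up = F.interpolate(conv, scale_factor=4, mode='bilinear', align_corners=False)
        # S303: per-pixel crack probability.
        return torch.sigmoid(self.score(conv_up))
```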
It should be noted that, in the embodiment of the present disclosure, regardless of the process of performing upsampling on the pooled features or the process of performing upsampling on the convolved features, the selection of the sampling rate related to upsampling may be adaptively adjusted in combination with other features (for example, features that need to maintain consistent dimensions).
In the embodiment of the disclosure, whether the image to be detected has a crack or not can be determined based on the crack probability. As shown in fig. 4, the above-mentioned crack judgment process specifically includes the following steps:
s401, determining a crack area to be detected in the image to be detected based on the crack probability corresponding to each pixel point;
s402, determining whether the image to be detected has cracks or not based on the gray value of the crack region to be detected and the size of the crack region to be detected.
Here, it is considered that in an actual crack detection scenario cracks usually appear as regions. Therefore, the embodiments of the present disclosure may determine the crack region to be detected in the image based on the crack probability corresponding to each pixel: pixels whose crack probability exceeds a preset probability value may be taken as pixels of the crack region to be detected and aggregated into that region.
After the crack region to be detected has been determined, whether a crack exists in the image to be detected can be decided from the gray values of the region and its size. The main consideration is that the gray values of the pixels belonging to a crack tend to be consistent or to differ only slightly, so the gray-scale characteristics of the region can be used to judge how reliably the region can be identified as a crack.
Therefore, the embodiments of the present disclosure determine, from the gray values of the crack region to be detected and its size, the confidence that the region is a crack; the higher the confidence, the more reliably the region is identified as a crack. When the confidence exceeds a preset confidence threshold for the presence of a crack, it can be determined that the image to be detected contains a crack.
It should be noted that there may be one or more crack regions to be detected. When one image to be detected comprises a plurality of crack regions to be detected, the reliability degree of each crack region to be detected which is judged as a crack can be determined according to the method, and a final crack detection result can be determined based on the reliability degree, so that the crack detection accuracy is further improved.
The size of the crack region to be detected may be determined by extracting edge features after morphological operations are performed on the binary image corresponding to the crack region to be detected, and the gray value of the crack region to be detected may be determined by the sum of the gray values of the pixel points included in the crack region to be detected.
In the embodiments of the present disclosure, the confidence that the crack region to be detected is a crack may be computed according to the formula:

confidence = (gray value of the crack region to be detected / 255) / size of the crack region to be detected,

where the gray value of the crack region to be detected is the sum of the gray values of the pixels included in the region.
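A possible NumPy/OpenCV reading of this formula is sketched below: the probability map is thresholded, connected components stand in for the aggregation of high-probability pixels into candidate regions, and each region's confidence is its gray-value sum divided by 255 and by the region size. The 0.5 probability threshold and the use of cv2.connectedComponents are assumptions of this sketch rather than requirements of the disclosure.

```python
import cv2
import numpy as np

def crack_confidences(prob_map: np.ndarray, gray_image: np.ndarray,
                      prob_thresh: float = 0.5) -> list:
    """Confidence that each candidate crack region in prob_map is a real crack."""
    binary = (prob_map > prob_thresh).astype(np.uint8)      # candidate crack pixels
    num_labels, labels = cv2.connectedComponents(binary)    # aggregate pixels into regions
    confidences = []
    for label in range(1, num_labels):                      # label 0 is the background
        mask = labels == label
        region_size = int(mask.sum())                       # size of the region
        gray_sum = float(gray_image[mask].sum())            # sum of gray values in the region
        confidences.append(gray_sum / 255.0 / region_size)  # confidence = gray / 255 / size
    return confidences
```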
To facilitate a further understanding of the above-described method of crack detection, a specific example is described below with reference to fig. 5.
As shown in fig. 5, suppose the image to be detected has a size of 512 × 512. After image enhancement, the enhanced image is input to the dilated convolutional neural network; after the network's convolution processing, it outputs a first-level image feature (with spatial dimensions 128 × 128) on the one hand and a second-level image feature (with spatial dimensions 32 × 32) on the other.
The second-level image feature is input into the 4 convolutional layers of the spatial pyramid pooling network (identified in the figure, for example, by their kernel sizes and dilation rates) to obtain 4 first features, and at the same time into the 1 pooling layer of the network to obtain 1 second feature; feature fusion of the 4 first features and the 1 second feature yields the pooled features. The pooled features pass through 1 convolutional layer with convolution parameters (a 1 × 1 kernel) and are then upsampled by a factor of 4, producing processed pooled features with the same dimensions as the first-level image feature. The first-level image feature and the processed pooled features are then feature-fused to obtain the fused features.
The fused features are first convolved by a convolutional layer with a 3 × 3 kernel, and the convolved features are upsampled by a factor of 4 to obtain processed convolved features with the same dimensions as the image to be detected, from which the corresponding crack detection result is obtained, as shown in fig. 5.
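Purely to check the dimension bookkeeping of this example, the snippet below traces the spatial sizes (512 to 128/32, then back to 512) with dummy tensors; the channel counts are arbitrary assumptions and only the spatial shapes reflect the figure.

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 512, 512)        # enhanced image to be detected
low = torch.randn(1, 64, 128, 128)     # first-level image feature from the backbone
pooled = torch.randn(1, 256, 32, 32)   # pooled features after the pyramid module and 1x1 conv

pooled_up = F.interpolate(pooled, scale_factor=4, mode='bilinear', align_corners=False)
assert pooled_up.shape[2:] == low.shape[2:]      # 32 x 32 -> 128 x 128

fused = torch.cat([low, pooled_up], dim=1)       # 64 + 256 = 320 channels at 128 x 128
conv = torch.randn(1, 1, 128, 128)               # stand-in for the 3x3 convolution output
result = F.interpolate(conv, scale_factor=4, mode='bilinear', align_corners=False)
assert result.shape[2:] == x.shape[2:]           # 128 x 128 -> 512 x 512
```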
In the embodiments of the present disclosure, in order to perform crack detection on the image to be detected, the dilated convolutional neural network and the spatial pyramid pooling network need to be trained in advance.
The embodiments of the present disclosure train the dilated convolutional neural network and the spatial pyramid pooling network with acquired crack image samples and the crack annotation information corresponding to each crack image sample. Once the two networks have been trained, the image to be detected can be input into them and cracks can be detected accurately and efficiently according to the crack detection method described above.
In the embodiments of the present disclosure, each acquired crack image sample is input in turn into the convolutional layers of the dilated convolutional neural network to be trained, and an initial first-level image feature and an initial second-level image feature are obtained from the dilated convolution operations between the crack image sample and the initial convolution feature matrix of each convolutional layer.
The initial second-level image feature is then input into each convolutional layer and the pooling layer of the spatial pyramid pooling network to be trained; initial first features are obtained from the dilated convolution operations between the second-level image feature and the initial convolution feature matrices of the convolutional layers, and an initial second feature is obtained from the pooling operation between the second-level image feature and the initial pooling feature matrix of the pooling layer. The initial first features and the initial second feature are fused to obtain the pooled features of the crack image sample.
Identification information for the crack image sample is then output based on the initial first-level image feature and the pooled features. The output identification information is compared with the crack annotation information corresponding to the crack image sample; if they are inconsistent, the initial convolution feature matrices of the convolutional layers of the dilated convolutional neural network, the initial convolution feature matrices of the convolutional layers of the spatial pyramid pooling network and the initial pooling feature matrix of the pooling layer are adjusted until the comparison is consistent.
It can be seen that training the dilated convolutional neural network and the spatial pyramid pooling network in the embodiments of the present disclosure amounts to training the convolution parameters (i.e., the convolution feature matrices) of the convolutional layers they contain and the pooling parameters (i.e., the pooling feature matrix) of the pooling layer.
The crack annotation information corresponding to each crack image sample may be pixel-level annotation, that is, every pixel of each crack image sample is labelled individually. For example, when a pixel of the crack image sample belongs to a crack region it may be labelled 1, and when it does not it may be labelled 0.
In a specific application, the position of a crack can be annotated with the LabelMe pixel-level annotation tool, and the crack region can be marked with the two values 0 and 1, where white denotes the crack region and black denotes the background region; a sketch of turning such annotations into masks follows.
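As an illustration of such pixel-level annotation, the snippet below rasterizes annotated polygon outlines into a 0/1 mask. The JSON field names 'shapes' and 'points' follow LabelMe's usual export format, but reading the annotations this way is an assumption of this sketch and not something specified by the disclosure.

```python
import json
import cv2
import numpy as np

def labelme_json_to_mask(json_path: str, height: int, width: int) -> np.ndarray:
    """Rasterize annotated crack polygons into a binary mask (1 = crack, 0 = background)."""
    with open(json_path, 'r', encoding='utf-8') as f:
        annotation = json.load(f)
    mask = np.zeros((height, width), dtype=np.uint8)
    for shape in annotation.get('shapes', []):
        points = np.array(shape['points'], dtype=np.int32)
        cv2.fillPoly(mask, [points], 1)        # mark the crack region with 1
    return mask
```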
After the crack image samples have been annotated, they can be divided proportionally into a training data set and a validation data set; the training data set is used to train the dilated convolutional neural network and the spatial pyramid pooling network, while the validation data set is used to verify the accuracy of the trained networks, which can then be adjusted adaptively.
It should be noted that the training of the dilated convolutional neural network and the spatial pyramid pooling network may also employ the image enhancement, feature fusion and upsampling methods used in the application stage of the networks; for the specific processes, refer to the description of crack detection above, which is not repeated here. A possible training step is sketched below.
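As a final illustration, the sketch below shows one supervised training step with pixel-level labels. The backbone, pyramid and decoder arguments stand for networks like the sketches above, and binary cross-entropy against the 0/1 crack masks plays the role of comparing the output identification information with the annotation; the loss and optimizer choices are assumptions of this sketch.

```python
import torch
import torch.nn as nn

def train_step(backbone: nn.Module, pyramid: nn.Module, decoder: nn.Module,
               optimizer: torch.optim.Optimizer,
               images: torch.Tensor, masks: torch.Tensor) -> float:
    """One optimization step on a batch of crack image samples and their (N, 1, H, W) 0/1 masks."""
    criterion = nn.BCELoss()            # the decoder already outputs probabilities
    optimizer.zero_grad()
    low, high = backbone(images)        # first- and second-level image features
    pooled = pyramid(high)              # pooled features
    prob = decoder(low, pooled)         # per-pixel crack probability, shape (N, 1, H, W)
    loss = criterion(prob, masks.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```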
According to the crack detection method provided by the embodiments of the present disclosure, when the server acquires the image to be detected, it first extracts the first-level image feature of the image based on the lower-level first convolutional layer of the dilated convolutional neural network, then extracts the second-level image feature based on the higher-level second convolutional layer, then determines the pooled features of the image using the second-level image feature and the spatial pyramid pooling network, and finally fuses the first-level image feature with the pooled features to determine the crack detection result. Thus, on the one hand the dilated convolutional neural network mines richer image features, which improves the accuracy of crack detection; on the other hand the spatial pyramid pooling network mines the image features at different scales, which improves the robustness of crack detection, avoids the time and labor consumed by manual inspection, and yields high detection efficiency.
It will be understood by those skilled in the art that, in the above methods of the present disclosure, the order in which the steps are written does not imply a strict execution order or impose any limitation on the implementation; the specific execution order of the steps should be determined by their functions and possible internal logic.
Based on the same inventive concept, the embodiment of the present disclosure further provides a crack detection device corresponding to the crack detection method, and as the principle of solving the problem of the device in the embodiment of the present disclosure is similar to that of the crack detection method in the embodiment of the present disclosure, the implementation of the device may refer to the implementation of the method, and repeated details are not repeated.
Referring to fig. 6, a schematic view of a crack detection device according to some embodiments of the present disclosure is provided, the crack detection device including:
an obtaining module 601, configured to obtain an image to be detected;
the extraction module 602 is configured to extract a first-level image feature and a second-level image feature from the image to be detected based on a pre-trained dilated convolutional neural network; the second-level image feature is extracted from a second convolutional layer included in the dilated convolutional neural network, the first-level image feature is extracted from a first convolutional layer included in the dilated convolutional neural network, and the second convolutional layer is at a higher level than the first convolutional layer;
a determining module 603, configured to determine pooled features of the image to be detected based on a pre-trained spatial pyramid pooling network and second-level image features;
a detection module 604, configured to determine a crack detection result based on the first-level image feature and the pooled feature.
With this arrangement, on the one hand the dilated convolutional neural network mines richer image features, which improves the accuracy of crack detection; on the other hand the spatial pyramid pooling network mines the image features at different scales, which improves the robustness of crack detection, avoids the time and labor consumed by manual inspection, and yields high detection efficiency.
In one embodiment, the dilated convolutional neural network comprises a plurality of convolutional layers connected in sequence; the first layer of the second convolutional layer is connected to the last layer of the first convolutional layer; and the extraction module 602 is configured to extract the first-level image feature and the second-level image feature according to the following steps:
inputting the image to be detected into the first convolutional layer for convolution processing to obtain the first-level image feature;
and inputting the first-level image features into a second convolution layer for convolution processing to obtain second-level image features.
In one embodiment, the spatial pyramid pooling network includes a pooling layer and a plurality of convolutional layers with different convolution parameters, the convolution parameters including a dilation rate and a convolution kernel size; and the determining module 603 is configured to determine the pooled features of the image to be detected according to the following steps:
inputting the second-level image features into each convolution layer of the spatial pyramid pooling network for convolution processing respectively to obtain first features, and,
inputting the second-level image features into a pooling layer for pooling processing to obtain second features;
and performing feature fusion on the first features output by the convolutional layers of the spatial pyramid pooling network and the second feature to obtain the pooled features.
In one embodiment, the detection module 604 is configured to determine the crack detection result according to the following steps:
determining the probability of a crack corresponding to each pixel point in the image to be detected based on the first-level image characteristics and the pooled characteristics;
and determining whether the image to be detected has cracks or not based on the crack probability corresponding to each pixel point.
In one embodiment, the detecting module 604 is configured to determine whether a crack exists in the image to be detected according to the following steps:
determining a crack region to be detected in the image to be detected based on the crack probability corresponding to each pixel point;
and determining whether the image to be detected has cracks or not based on the gray value of the crack region to be detected and the size of the crack region to be detected.
In one embodiment, the detecting module 604 is configured to determine whether a crack exists in the image to be detected according to the following steps:
determining the confidence coefficient that the crack region to be detected is a crack based on the gray value of the crack region to be detected and the size of the crack region to be detected;
and determining whether the image to be detected has cracks or not based on the confidence coefficient and a preset confidence coefficient threshold value of the existence of cracks.
In one embodiment, the detecting module 604 is configured to determine a crack probability corresponding to each pixel point in the image to be detected according to the following steps:
performing upsampling processing on the pooled features to obtain processed pooled features with the same dimensionality as the first-level image features;
and performing feature fusion on the first-level image features and the processed pooled features, and determining the crack probability corresponding to each pixel point in the image to be detected based on the fused features.
In one embodiment, the detecting module 604 is configured to determine a crack probability corresponding to each pixel point in the image to be detected according to the following steps:
inputting the fused features into a convolution layer with convolution parameters for convolution processing to obtain features after convolution;
performing up-sampling processing on the convolved features to obtain the processed convolved features with the same dimensionality as the image to be detected;
and determining the crack probability corresponding to each pixel point in the image to be detected based on the processed features after convolution.
In one embodiment, the apparatus further comprises:
a training module 605, configured to train the dilated convolutional neural network and the spatial pyramid pooling network in advance; the dilated convolutional neural network and the spatial pyramid pooling network are trained with acquired crack image samples and the crack annotation information corresponding to each crack image sample.
An embodiment of the present disclosure further provides a computer device, as shown in fig. 7, which is a schematic structural diagram of the computer device provided in the embodiment of the present disclosure, and includes: a processor 701, a memory 702, and a bus 703. The memory 702 stores machine-readable instructions executable by the processor 701 (for example, execution instructions corresponding to the acquisition module 601, the extraction module 602, the determination module 603, and the detection module 604 in the crack detection apparatus in fig. 6, and the like), when the computer device is operated, the processor 701 and the memory 702 communicate via the bus 703, and when the machine-readable instructions are executed by the processor 701, the following processes are performed:
acquiring an image to be detected;
extracting a first-level image feature and a second-level image feature from the image to be detected based on a pre-trained dilated convolutional neural network; the second-level image feature is extracted from a second convolutional layer included in the dilated convolutional neural network, the first-level image feature is extracted from a first convolutional layer included in the dilated convolutional neural network, and the second convolutional layer is at a higher level than the first convolutional layer;
determining the pooled features of the image to be detected based on a pre-trained spatial pyramid pooling network and the second-level image features;
and determining a crack detection result based on the first-level image features and the pooled features; the overall flow is sketched below.
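Putting these steps together, an inference routine could be organized as below; `backbone`, `aspp`, `head`, and `decide` are hypothetical callables standing in for the feature extractor, the pyramid pooling network, the decoder, and the final decision logic described in this embodiment.

```python
import torch
import torch.nn.functional as F

# End-to-end inference sketch; the callables are assumed names, not names
# used in the disclosure.
@torch.no_grad()
def detect_cracks(image: torch.Tensor, backbone, aspp, head, decide):
    low_level, high_level = backbone(image)        # first- / second-level features
    pooled = aspp(high_level)                      # spatial-pyramid-pooled features
    pooled = F.interpolate(pooled, size=low_level.shape[-2:],
                           mode="bilinear", align_corners=False)
    fused = torch.cat([low_level, pooled], dim=1)  # feature fusion
    prob = head(fused, image.shape[-2:])           # per-pixel crack probability
    return decide(prob, image)                     # final crack detection result
```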
In one possible embodiment, the hole convolutional neural network comprises a plurality of convolutional layers connected in sequence; the first layer among the second convolutional layers is connected to the last layer among the first convolutional layers. In the instructions executed by the processor 701, extracting a first-level image feature and a second-level image feature from the image to be detected based on the pre-trained hole convolutional neural network includes:
inputting an image to be detected into a first convolution layer for convolution processing to obtain a first-level image characteristic;
and inputting the first-level image features into the second convolutional layer for convolution processing to obtain the second-level image features; one possible arrangement is sketched below.
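A backbone sketch consistent with this two-stage arrangement is shown below; the number of layers, channel widths, and dilation (hole) rates are assumptions chosen only to make the example runnable, not parameters taken from the disclosure.

```python
import torch
import torch.nn as nn

# Two groups of "hole" (dilated) convolutional layers connected in sequence;
# all sizes and rates are illustrative assumptions.
def conv_block(cin: int, cout: int, dilation: int = 1) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(cin, cout, kernel_size=3, padding=dilation, dilation=dilation),
        nn.BatchNorm2d(cout),
        nn.ReLU(inplace=True),
    )

class HoleConvBackbone(nn.Module):
    def __init__(self):
        super().__init__()
        # First (lower-level) convolutional layers.
        self.first_layers = nn.Sequential(conv_block(3, 64), conv_block(64, 64))
        # Second (higher-level) convolutional layers; the first of these
        # consumes the output of the last first-level layer.
        self.second_layers = nn.Sequential(conv_block(64, 128, dilation=2),
                                           conv_block(128, 256, dilation=4))

    def forward(self, image: torch.Tensor):
        first_level = self.first_layers(image)           # first-level image features
        second_level = self.second_layers(first_level)   # second-level image features
        return first_level, second_level
```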
In one possible implementation, the spatial pyramid pooling network includes a pooling layer and a plurality of convolutional layers with different convolution parameters, the convolution parameters including a hole (dilation) rate and a convolution kernel size; in the instructions executed by the processor 701, determining the pooled features of the image to be detected based on the pre-trained spatial pyramid pooling network and the second-level image features includes:
inputting the second-level image features into each convolutional layer of the spatial pyramid pooling network for convolution processing, respectively, to obtain first features; and
inputting the second-level image features into the pooling layer for pooling processing to obtain second features;
and performing feature fusion on the first features output by the convolutional layers of the spatial pyramid pooling network and the second features to obtain the pooled features; a sketch of such a pyramid module follows.
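This parallel-branch structure resembles atrous spatial pyramid pooling; in the sketch below, the hole rates (1, 6, 12, 18) and the channel sizes are illustrative assumptions, and a global-average-pooling branch plays the role of the pooling layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Pyramid pooling sketch: parallel dilated convolutions with different hole
# rates plus a pooling branch; rates and channel counts are assumed.
class PyramidPooling(nn.Module):
    def __init__(self, cin: int = 256, cout: int = 256, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(cin, cout,
                      kernel_size=1 if r == 1 else 3,
                      padding=0 if r == 1 else r,
                      dilation=r)
            for r in rates
        ])
        self.pool = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                  nn.Conv2d(cin, cout, kernel_size=1))
        self.project = nn.Conv2d(cout * (len(rates) + 1), cout, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        firsts = [branch(x) for branch in self.branches]          # first features
        second = F.interpolate(self.pool(x), size=x.shape[-2:],
                               mode="bilinear", align_corners=False)  # second features
        return self.project(torch.cat(firsts + [second], dim=1))  # pooled features
```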
In one possible embodiment, the instructions executed by the processor 701, for determining the crack detection result based on the first-level image feature and the pooled feature, include:
determining the probability of a crack corresponding to each pixel point in the image to be detected based on the first-level image characteristics and the pooled characteristics;
and determining whether the image to be detected has cracks or not based on the crack probability corresponding to each pixel point.
In a possible implementation manner, in the instructions executed by the processor 701, determining whether a crack exists in the image to be detected based on the crack probability corresponding to each pixel point includes:
determining a crack region to be detected in the image to be detected based on the crack probability corresponding to each pixel point;
and determining whether a crack exists in the image to be detected based on the gray value of the crack region to be detected and the size of the crack region to be detected; a post-processing sketch follows.
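The region extraction and the subsequent gray-value/size test could be realized with connected-component analysis over a thresholded probability map, as in the sketch below; the probability threshold, minimum area, and maximum mean gray value are assumed values, not parameters specified by the disclosure.

```python
import cv2
import numpy as np

# Post-processing sketch: threshold the per-pixel probabilities into candidate
# regions, then test each region's mean gray value and size. All thresholds
# are illustrative assumptions.
def find_crack_regions(prob_map: np.ndarray,     # per-pixel crack probability
                       gray_image: np.ndarray,   # grayscale input image
                       prob_thresh: float = 0.5,
                       min_area: int = 50,
                       max_mean_gray: float = 120.0):
    binary = (prob_map >= prob_thresh).astype(np.uint8)
    num, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    regions = []
    for label in range(1, num):                  # label 0 is the background
        mask = labels == label
        area = stats[label, cv2.CC_STAT_AREA]
        mean_gray = float(gray_image[mask].mean())
        # Keep regions that are large enough and dark enough to be cracks.
        if area >= min_area and mean_gray <= max_mean_gray:
            regions.append(mask)
    return regions
```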
In a possible implementation manner, in the instructions executed by the processor 701, determining whether a crack exists in the image to be detected based on the gray value of the crack region to be detected and the size of the crack region to be detected includes:
determining the confidence that the crack region to be detected is a crack based on the gray value of the crack region to be detected and the size of the crack region to be detected;
and determining whether a crack exists in the image to be detected based on the confidence and a preset confidence threshold for the existence of a crack.
In a possible implementation, in the instructions executed by the processor 701, determining a crack probability corresponding to each pixel point in the image to be detected based on the first-level image features and the pooled features includes:
performing upsampling processing on the pooled features to obtain processed pooled features with the same dimensionality as the first-level image features;
and performing feature fusion on the first-level image features and the processed pooled features, and determining the crack probability corresponding to each pixel point in the image to be detected based on the fused features.
In a possible implementation manner, the instructions executed by the processor 701, for determining a crack probability corresponding to each pixel point in the image to be detected based on the fused features, include:
inputting the fused features into a convolution layer with convolution parameters for convolution processing to obtain features after convolution;
performing up-sampling processing on the convolved features to obtain the processed convolved features with the same dimensionality as the image to be detected;
and determining the crack probability corresponding to each pixel point in the image to be detected based on the processed features after convolution.
In a possible implementation manner, the instructions executed by the processor 701 further include a step of training the hole convolutional neural network and the spatial pyramid pooling network in advance;
the hole convolutional neural network and the spatial pyramid pooling network are trained on the acquired crack image samples and the crack annotation information corresponding to each crack image sample.
The present disclosure also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the crack detection method described in the above method embodiments are performed. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The computer program product of the crack detection method provided in the embodiments of the present disclosure includes a computer-readable storage medium storing program code. The instructions included in the program code may be used to execute the steps of the crack detection method described in the above method embodiments; for details, reference may be made to the above method embodiments, which are not repeated here.
The embodiments of the present disclosure also provide a computer program which, when executed by a processor, implements any one of the methods of the foregoing embodiments. The computer program product may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium; in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above-mentioned embodiments are merely specific embodiments of the present disclosure, used to illustrate rather than limit its technical solutions, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the technical field may still, within the technical scope of the present disclosure, modify the technical solutions described in the foregoing embodiments or substitute equivalents for some of their technical features; such modifications, changes, or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure and should be construed as falling within its protection scope. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (12)

1. A crack detection method, characterized in that the method comprises:
acquiring an image to be detected;
extracting a first-level image feature and a second-level image feature from the image to be detected based on a pre-trained hole convolutional neural network; the second-level image features are extracted from a second convolutional layer included in the hole convolutional neural network, the first-level image features are extracted from a first convolutional layer included in the hole convolutional neural network, and the second convolutional layer is higher in level than the first convolutional layer;
determining the pooled features of the image to be detected based on a pre-trained spatial pyramid pooling network and the second-level image features;
determining a fracture detection result based on the first-level image features and the pooled features.
2. The method of claim 1, wherein the hole convolutional neural network comprises a plurality of convolutional layers connected in sequence; a first convolutional layer of the second convolutional layers is connected with a last convolutional layer of the first convolutional layers; the method for extracting the first-level image features and the second-level image features from the image to be detected based on the pre-trained cavity convolutional neural network comprises the following steps:
inputting the image to be detected into the first convolution layer for convolution processing to obtain a first-level image characteristic;
and inputting the first-level image features into the second convolution layer for convolution processing to obtain second-level image features.
3. The method of claim 1, wherein the spatial pyramid pooling network comprises a pooling layer and a plurality of convolutional layers with different convolution parameters, the convolution parameters including a hole rate and a convolution kernel size; and the determining the pooled features of the image to be detected based on the pre-trained spatial pyramid pooling network and the second-level image features comprises the following steps:
inputting the second-level image features into each convolution layer of the spatial pyramid pooling network for convolution processing respectively to obtain first features, and,
inputting the second-level image features into the pooling layer for pooling processing to obtain second features;
and performing feature fusion on the first features obtained from the convolutional layers of the spatial pyramid pooling network and the second features to obtain the pooled features.
4. The method of claim 1, wherein determining a fracture detection result based on the first-level image features and the pooled features comprises:
determining the probability of a crack corresponding to each pixel point in the image to be detected based on the first-level image characteristics and the pooled characteristics;
and determining whether the image to be detected has cracks or not based on the crack probability corresponding to each pixel point.
5. The method according to claim 4, wherein the determining whether the image to be detected has a crack based on the crack probability corresponding to each pixel point comprises:
determining a crack region to be detected in the image to be detected based on the crack probability corresponding to each pixel point;
and determining whether the image to be detected has cracks or not based on the gray value of the crack region to be detected and the size of the crack region to be detected.
6. The method according to claim 5, wherein the determining whether the image to be detected has a crack based on the gray value of the crack region to be detected and the size of the crack region to be detected comprises:
determining the confidence that the crack region to be detected is a crack based on the gray value of the crack region to be detected and the size of the crack region to be detected;
and determining whether a crack exists in the image to be detected based on the confidence and a preset confidence threshold for the existence of a crack.
7. The method of claim 4, wherein the determining the probability of the crack corresponding to each pixel point in the image to be detected based on the first-level image features and the pooled features comprises:
performing upsampling processing on the pooled features to obtain processed pooled features with the same dimension as the first-level image features;
and performing feature fusion on the first-level image features and the processed pooled features, and determining the probability of a crack corresponding to each pixel point in the image to be detected based on the fused features.
8. The method according to claim 7, wherein the determining the probability of the crack corresponding to each pixel point in the image to be detected based on the fused features comprises:
inputting the fused features into a convolution layer with convolution parameters for convolution processing to obtain features after convolution;
performing upsampling processing on the convolved features to obtain processed convolved features with the same dimensionality as the image to be detected;
and determining the crack probability corresponding to each pixel point in the image to be detected based on the processed features after convolution.
9. The method according to any one of claims 1 to 8, further comprising the step of pre-training the hole convolutional neural network and the spatial pyramid pooling network;
wherein the hole convolutional neural network and the spatial pyramid pooling network are trained on the acquired crack image samples and the crack annotation information corresponding to the crack image samples.
10. A crack detection device, characterized in that the device comprises:
the acquisition module is used for acquiring an image to be detected;
the extraction module is used for extracting a first-level image feature and a second-level image feature from the image to be detected based on a pre-trained hole convolutional neural network; the second-level image features are extracted from a second convolutional layer included in the hole convolutional neural network, the first-level image features are extracted from a first convolutional layer included in the hole convolutional neural network, and the second convolutional layer is higher in level than the first convolutional layer;
the determining module is used for determining the pooled features of the image to be detected based on a pre-trained spatial pyramid pooling network and the second-level image features;
and the detection module is used for determining a crack detection result based on the first-level image characteristics and the pooled characteristics.
11. A computer device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when a computer device is run, the machine-readable instructions when executed by the processor performing the steps of the crack detection method of any of claims 1 to 9.
12. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when being executed by a processor, carries out the steps of the crack detection method as claimed in any one of the claims 1 to 9.
CN201911401125.4A 2019-12-30 2019-12-30 Crack detection method and device, computer equipment and storage medium Withdrawn CN111080641A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911401125.4A CN111080641A (en) 2019-12-30 2019-12-30 Crack detection method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911401125.4A CN111080641A (en) 2019-12-30 2019-12-30 Crack detection method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111080641A true CN111080641A (en) 2020-04-28

Family

ID=70320164

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911401125.4A Withdrawn CN111080641A (en) 2019-12-30 2019-12-30 Crack detection method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111080641A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109978032A (en) * 2019-03-15 2019-07-05 西安电子科技大学 Bridge Crack detection method based on spatial pyramid cavity convolutional network
CN110111334A (en) * 2019-04-01 2019-08-09 浙江大华技术股份有限公司 A kind of crack dividing method, device, electronic equipment and storage medium
CN110097544A (en) * 2019-04-25 2019-08-06 武汉精立电子技术有限公司 A kind of display panel open defect detection method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIANG-CHIEH CHEN 等: "Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation" *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112184687A (en) * 2020-10-10 2021-01-05 南京信息工程大学 Road crack detection method based on capsule characteristic pyramid and storage medium
CN112184687B (en) * 2020-10-10 2023-09-26 南京信息工程大学 Road crack detection method based on capsule feature pyramid and storage medium
CN112686869A (en) * 2020-12-31 2021-04-20 上海智臻智能网络科技股份有限公司 Cloth flaw detection method and device
CN112700418A (en) * 2020-12-31 2021-04-23 常州大学 Crack detection method based on improved coding and decoding network model
CN112700418B (en) * 2020-12-31 2024-03-15 常州大学 Crack detection method based on improved coding and decoding network model
CN113506281A (en) * 2021-07-23 2021-10-15 西北工业大学 Bridge crack detection method based on deep learning framework
CN113506281B (en) * 2021-07-23 2024-02-27 西北工业大学 Bridge crack detection method based on deep learning framework
CN114049356A (en) * 2022-01-17 2022-02-15 湖南大学 Method, device and system for detecting structure apparent crack
CN114049356B (en) * 2022-01-17 2022-04-12 湖南大学 Method, device and system for detecting structure apparent crack
CN114441546A (en) * 2022-04-08 2022-05-06 湖南万航科技有限公司 Crack measurement endoscope
CN114441546B (en) * 2022-04-08 2022-06-24 湖南万航科技有限公司 Crack measurement endoscope

Similar Documents

Publication Publication Date Title
CN111080641A (en) Crack detection method and device, computer equipment and storage medium
CN110516201B (en) Image processing method, image processing device, electronic equipment and storage medium
JP6595714B2 (en) Method and apparatus for generating a two-dimensional code image having a dynamic effect
CN110111334B (en) Crack segmentation method and device, electronic equipment and storage medium
CN108885699A (en) Character identifying method, device, storage medium and electronic equipment
CN113642659B (en) Training sample set generation method and device, electronic equipment and storage medium
WO2014014678A1 (en) Feature extraction and use with a probability density function and divergence|metric
CN106780727B (en) Vehicle head detection model reconstruction method and device
CN112836756B (en) Image recognition model training method, system and computer equipment
CN103946865B (en) Method and apparatus for contributing to the text in detection image
CN105447508A (en) Identification method and system for character image verification codes
CN111339787B (en) Language identification method and device, electronic equipment and storage medium
CN116311214B (en) License plate recognition method and device
CN113887472A (en) Remote sensing image cloud detection method based on cascade color and texture feature attention
JP2022550195A (en) Text recognition method, device, equipment, storage medium and computer program
CN111144335A (en) Method and device for building deep learning model
CN115272887A (en) Coastal zone garbage identification method, device and equipment based on unmanned aerial vehicle detection
CN112861844A (en) Service data processing method and device and server
CN108460388A (en) Detection method, device and the computer readable storage medium of witness marker
CN113297986A (en) Handwritten character recognition method, device, medium and electronic equipment
CN112418345A (en) Method and device for quickly identifying fine-grained small target
CN112634382B (en) Method and device for identifying and replacing images of unnatural objects
CN115311630A (en) Method and device for generating distinguishing threshold, training target recognition model and recognizing target
CN113610832B (en) Logo defect detection method, device, equipment and storage medium
CN111027325B (en) Model generation method, entity identification device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20200428