CN114814769A - Ground penetrating radar image automatic classification method - Google Patents


Info

Publication number
CN114814769A
Authority
CN
China
Prior art keywords
ground penetrating
penetrating radar
radar
convolution
data
Prior art date
Legal status
Pending
Application number
CN202210385731.7A
Other languages
Chinese (zh)
Inventor
马子骥
蒋志文
帅智康
Current Assignee
Hunan University
Original Assignee
Hunan University
Priority date
Filing date
Publication date
Application filed by Hunan University
Priority to CN202210385731.7A
Publication of CN114814769A

Classifications

    • G — PHYSICS; G01 — MEASURING; TESTING; G01S — RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES
    • G01S7/41 — Details of systems using analysis of echo signal for target characterisation; target signature; target cross-section
    • G01S7/417 — … involving the use of neural networks
    • G01S13/88 — Radar or analogous systems specially adapted for specific applications
    • G01S13/885 — … for ground probing
    • G01S13/89 — … for mapping or imaging

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The embodiment of the disclosure provides a ground penetrating radar image automatic classification method, belonging to the technical field of image processing, which specifically comprises: a ground penetrating radar transmits radar waves to scan concrete, and a receiver in the radar captures a plurality of echo traces to form a scanning echo data set; the scanning echo data are preprocessed to eliminate noise signals in the data set and to enhance the echo signals of deep targets, obtaining a ground penetrating radar grey-scale map with clear texture information; the grey-scale map is input into an optimized residual network model, which outputs the defect type corresponding to the image as the identification result; and, when later actual inspection confirms that an identification result is accurate, the result is added to the data set to update the residual network model. Through the disclosed scheme, the collected data are processed in a normalized way and the convolutional neural network is improved to better suit radar image recognition, so that the accuracy, efficiency and adaptability of detecting internal defects of concrete are improved.

Description

Ground penetrating radar image automatic classification method
Technical Field
The embodiment of the disclosure relates to the technical field of image processing, in particular to a ground penetrating radar image automatic classification method.
Background
At present, with the progress of national infrastructure construction, the density of the national highway network keeps increasing: by the end of 2020 the total mileage of national roads exceeded five million kilometres, highway bridges numbered more than 900,000 with a combined length of more than 60,000 kilometres, and highway tunnels numbered more than 20,000 with a combined length of more than 20,000 kilometres. Over time, various problems develop in the internal concrete structures of roads, bridges and tunnels, and various defects are also introduced during construction. The demand for detecting these safety hazards has therefore increased dramatically in recent years.
At present, detection methods can be divided into contact detection and non-contact detection. Contact detection can be further divided into surface-contact and internal-contact methods. Internal-contact methods detect poorly compacted regions formed during concrete pouring by means of an inserted probe. However, this approach has low detection efficiency, requires a probe to be inserted into the concrete, is limited in the defect types and depths it can detect, and depends heavily on the experience of the inspector. Distributed optical fibre sensing is another effective method for inspecting the interior of concrete, but its detection positions are fixed, the sensors cannot be replaced or maintained, and when the inspection area is too large the number of embedded sensors grows sharply and the cost becomes uncontrollable. Surface-contact schemes include those using elastic waves, (ultra)sonic waves and CT. Elastic-wave methods require an exciter to emit elastic waves at different marked positions on the concrete and a receiver at the corresponding position on the opposite side; the range of a single measurement is small and the measuring position cannot be changed quickly, so detection efficiency is low. Ultrasound can detect continuous defects in concrete through tomography, but the method is prone to diffraction, so the detectable defects cannot be too small and the detection depth is shallow.
Among non-contact detection methods, the ground penetrating radar method is widely applied. It offers fast detection speed, high resolution and high precision, can perform detection without touching the inspection target, and does not damage the target. Urban development drives the rapid growth of roads, tunnels, bridges and the like built from concrete structures, placing ever greater demands on detection efficiency; ground penetrating radar can accomplish high-speed detection tasks through rapid scanning and is therefore very widely used. However, although ground penetrating radar acquires data efficiently, analysing the massive collected data still depends on manual work, which is inefficient and cannot be sustained for long periods, and because expert interpretation depends on personal subjective judgement, recognition accuracy for complex environments inside concrete is not high. In addition, the method's detection of deep defects is strongly affected by reflected-wave attenuation, and an effective unified standard for preprocessing the collected data has yet to be formed.
In summary, although there are many methods for detecting internal defects in concrete and the non-destructive, non-contact ground penetrating radar has a wide application range, interpretation of the collected radar data depends heavily on the subjectivity of human experts, with low accuracy and efficiency and a lack of automation; excessive differences in preprocessing make the appearance of the same defect vary and impossible to unify; and echoes from deep defects are easily attenuated and affected by noise.
Therefore, a ground penetrating radar image automatic classification method with high detection precision, efficiency and adaptability is needed.
Disclosure of Invention
In view of this, the embodiments of the present disclosure provide an automatic classification method for ground penetrating radar images, which at least partially solves the problem in the prior art that detection efficiency, accuracy, and adaptability are poor.
In a first aspect, an embodiment of the present disclosure provides a ground penetrating radar image automatic classification method, including:
step 1, scanning concrete by transmitting radar waves through a ground penetrating radar, and forming a scanning echo data set after a receiver in the radar captures a plurality of echo data;
step 2, preprocessing the scanning echo data, eliminating noise signals in the scanning echo data set and enhancing the echo signals of deep targets to obtain a ground penetrating radar grey-scale map with clear texture information;
step 3, inputting the ground penetrating radar gray level map into the optimized residual error network model, and outputting the defect type corresponding to the image as an identification result through operation;
and 4, when later actual detection confirms that the identification result is accurate, adding the result to the data set to update the residual network model.
According to a specific implementation manner of the embodiment of the present disclosure, the step 1 specifically includes:
the ground penetrating radar transmitter continuously transmits radar waves at a preset fixed centre frequency as the radar moves, the receiver receives multiple single-column echo traces, and the single-column traces are spliced to form the scanning echo data.
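As a non-limiting illustration, the splicing described above can be sketched in Python with NumPy; the trace values, lengths and function name here are hypothetical:

```python
import numpy as np

def assemble_bscan(traces):
    """Splice single-column A-scan echo traces into a B-scan matrix.

    traces: list of 1-D arrays, one echo trace per antenna position.
    Returns a 2-D array with depth (samples) on the rows and scan
    position on the columns, as described in step 1.
    """
    return np.stack([np.asarray(t, dtype=float) for t in traces], axis=1)

# three hypothetical 4-sample traces captured as the radar moves
bscan = assemble_bscan([[0.1, 0.9, -0.4, 0.0],
                        [0.2, 0.8, -0.5, 0.1],
                        [0.1, 0.7, -0.3, 0.0]])
print(bscan.shape)  # (4, 3): 4 depth samples, 3 scan positions
```

Rendering such a matrix as pixel intensities yields the grey-scale map that the later steps preprocess.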
According to a specific implementation manner of the embodiment of the present disclosure, the step 2 specifically includes:
step 2.1, performing direct current drift removal on the scanning echo data;
step 2.2, removing horizontal signals in the data after the direct current drift is removed, and cutting the data through the depth corresponding to the peak value of the echo waveform after static correction processing;
step 2.3, performing gain processing on the data after the horizontal signal noise is removed, and amplifying the deep signal amplitude through gain to improve the texture characteristics in the gray level image;
step 2.4, performing band-pass filtering processing on the gained data to remove low-frequency and high-frequency noise signals far away from the center frequency of the radar transmitting antenna;
step 2.5, carrying out background noise removal treatment on the gray-scale image after data conversion to further increase effective signals;
step 2.6, carrying out moving average filtering processing on the gray level image to remove irregular burst noise and burr noise;
step 2.7, classifying and marking the ground penetrating radar gray level map;
and 2.8, scaling all the ground penetrating radar grey-scale images to a uniform resolution, and normalizing the pixel values to the range [0, 1].
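The scaling and normalization of step 2.8 can be sketched as follows; the 64×64 target resolution is an assumed value, and nearest-neighbour resampling stands in for whatever scaling method an implementation actually uses:

```python
import numpy as np

def to_uniform_normalized(img, out_h=64, out_w=64):
    """Step 2.8 sketch: scale a grey-scale map to a uniform resolution
    (nearest-neighbour, to avoid an image-library dependency) and
    normalize pixel values from [0, 255] to [0, 1]."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    rows = np.arange(out_h) * h // out_h    # source row for each output row
    cols = np.arange(out_w) * w // out_w    # source column for each output column
    resized = img[np.ix_(rows, cols)]
    return resized / 255.0

sample = np.full((120, 200), 255.0)         # hypothetical all-white grey map
out = to_uniform_normalized(sample)
print(out.shape, out.max())  # (64, 64) 1.0
```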
According to a specific implementation manner of the embodiment of the present disclosure, the residual error network model includes a plurality of building blocks stacked in series, where each building block includes a convolutional layer, an active layer, and a Batch Normalization layer, and includes two propagation paths;
the bypass of the building block is a direct connection structure, and the input is directly transmitted to the tail end of the building block and is output after being fused with the main path.
According to a specific implementation manner of the embodiment of the present disclosure, before the step 3, the method further includes:
the convolutions in the original residual network model are grouped: the convolution kernels that originally extracted features with a stride of n are divided into several groups, each group extracts features from the regions on the feature map that the strided convolution would otherwise skip, and the group outputs are then stacked into a new feature map, where n is a positive integer greater than 2; the output is calculated as:
O_i = f( W_2·f([W_1^1 x_i, W_1^2 x_i, …, W_1^n x_i]) + W_s2·f([W_s1^1 x_i, W_s1^2 x_i, …, W_s1^n x_i]) )
in the formula, W_1^n represents the nth group of the first-layer convolution in the building block, W_s1^n represents the nth group of the first-layer convolution of the bypass in the building block, W_2 represents the second-layer convolution of the main path in the building block, W_s2 represents the second-layer convolution of the bypass in the building block, and [·] denotes stacking the group outputs into a new feature map;
adding convolution layers on the bypass of the dimension-reduction building block so that the dimensionality of the bypass feature map is raised to be the same as that of the main-path feature map;
and combining bypass dimension-raising and grouped convolution in the dimension-reduction building block to construct a new building block, the region dimension matching structure, thereby obtaining the optimized residual network model.
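A minimal NumPy sketch of the grouped convolution underlying the region dimension matching structure, with toy single-channel kernels and a plain "valid" correlation in place of a deep-learning framework; the kernels, input and group-to-offset assignment are illustrative assumptions:

```python
import numpy as np

def conv2d_valid(x, k, stride):
    """Plain single-channel 'valid' correlation with a square kernel."""
    kh, kw = k.shape
    oh = (x.shape[0] - kh) // stride + 1
    ow = (x.shape[1] - kw) // stride + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i*stride:i*stride+kh, j*stride:j*stride+kw] * k)
    return out

def region_grouped_conv(x, kernels, n):
    """Sketch of the grouped convolution: each kernel group convolves one
    of the n*n spatial offsets that a stride-n convolution would skip,
    and the group outputs are stacked into a multi-channel feature map."""
    maps = []
    offsets = [(a, b) for a in range(n) for b in range(n)]
    for g, (dy, dx) in enumerate(offsets):
        k = kernels[g % len(kernels)]          # hypothetical kernels, reused cyclically
        maps.append(conv2d_valid(x[dy:, dx:], k, stride=n))
    h = min(m.shape[0] for m in maps)
    w = min(m.shape[1] for m in maps)
    return np.stack([m[:h, :w] for m in maps])  # (n*n, h, w)

x = np.arange(64, dtype=float).reshape(8, 8)    # toy feature map
ks = [np.ones((3, 3)), np.eye(3)]               # two hypothetical kernel groups
fmap = region_grouped_conv(x, ks, n=2)
print(fmap.shape)  # (4, 3, 3): four offset groups stacked channel-wise
```

Compared with a single stride-n convolution, every spatial offset contributes a channel, so no region of the feature map is skipped.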
According to a specific implementation manner of the embodiment of the present disclosure, the step 3 specifically includes:
step 3.1, inputting the ground penetrating radar grey-scale map into the optimized residual network model, and enhancing the richness of the grey-scale map samples by applying rotation, flipping, translation, noise addition, scaling, random occlusion and brightness adjustment operations, or combinations of these operations, to the radar grey-scale image samples;
step 3.2, sequentially inputting the enhanced ground penetrating radar grey-scale images into the plurality of stacked building blocks, and outputting a feature map;
step 3.3, inputting the feature map into a global pooling layer;
and 3.4, inputting the output of the global pooling layer into a full-connection layer containing 4 neurons to obtain the identification result.
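Steps 3.3 and 3.4 can be sketched as follows; the feature-map size and the weights w, b are hypothetical stand-ins for a trained model, while the 4-neuron output layer matches the four defect classes of step 3.4:

```python
import numpy as np

def classify_head(feature_map, w, b):
    """Steps 3.3-3.4 sketch: global average pooling over each channel,
    then a fully connected layer with 4 neurons (one per defect class)
    and a softmax to produce the identification result."""
    pooled = feature_map.mean(axis=(1, 2))   # global pooling -> (channels,)
    logits = w @ pooled + b                  # fully connected layer -> (4,)
    e = np.exp(logits - logits.max())        # numerically stable softmax
    probs = e / e.sum()
    return int(np.argmax(probs)), probs

rng = np.random.default_rng(0)
fmap = rng.standard_normal((8, 5, 5))        # toy (channels, H, W) feature map
w, b = rng.standard_normal((4, 8)), np.zeros(4)
cls, probs = classify_head(fmap, w, b)
print(cls, probs.shape)  # predicted class index in 0..3, 4 class probabilities
```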
The ground penetrating radar image automatic classification scheme in the embodiment of the disclosure comprises the following steps: step 1, scanning concrete by transmitting radar waves through a ground penetrating radar, and forming a scanning echo data set after a receiver in the radar captures a plurality of echo traces; step 2, preprocessing the scanning echo data, eliminating noise signals in the data set and enhancing the echo signals of deep targets to obtain a ground penetrating radar grey-scale map with clear texture information; step 3, inputting the grey-scale map into the optimized residual network model, and outputting the defect type corresponding to the image as the identification result; and step 4, when later actual detection confirms that the identification result is accurate, adding the result to the data set to update the residual network model.
The beneficial effects of the embodiment of the disclosure are: through the scheme disclosed by the invention, the collected data is subjected to normalized processing, and the convolutional neural network is improved to be more suitable for the identification of radar images, so that the accuracy and efficiency of the detection of the internal defects of the concrete are improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings needed to be used in the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present disclosure, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic flowchart of a method for automatically classifying ground penetrating radar images according to an embodiment of the present disclosure;
fig. 2 is a block diagram of an original building block of a residual error network according to an embodiment of the present disclosure;
fig. 3 is a diagram of a region dimension matching structure provided in an embodiment of the present disclosure;
fig. 4 is a flowchart of detecting internal defects of concrete by using a residual error network based on region dimension matching according to an embodiment of the present disclosure;
FIG. 5 is a process diagram of an example of a preprocessing flow provided by an embodiment of the present disclosure;
fig. 6 is a sample data enhancement example diagram provided by an embodiment of the present disclosure;
fig. 7 is an exemplary diagram of a disease sample provided in an embodiment of the disclosure;
fig. 8 is a process diagram of training a residual error network detection model based on region dimension matching according to an embodiment of the present disclosure.
Detailed Description
The embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
The embodiments of the present disclosure are described below with specific examples, and other advantages and effects of the present disclosure will be readily apparent to those skilled in the art from the disclosure of the present disclosure. It is to be understood that the embodiments described are only a few embodiments of the present disclosure, and not all embodiments. The disclosure may be embodied or carried out in various other specific embodiments, and various modifications and changes may be made in the details within the description without departing from the spirit of the disclosure. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
It is noted that various aspects of the embodiments are described below within the scope of the appended claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the disclosure, one skilled in the art should appreciate that one aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. Additionally, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present disclosure, and the drawings only show the components related to the present disclosure rather than the number, shape and size of the components in actual implementation, and the type, amount and ratio of the components in actual implementation may be changed arbitrarily, and the layout of the components may be more complicated.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, it will be understood by those skilled in the art that the aspects may be practiced without these specific details.
The embodiment of the disclosure provides a ground penetrating radar image automatic classification method, which can be applied to the defect detection process of internal concrete structures in scenes such as roads, bridges and tunnels.
Referring to fig. 1, a schematic flow chart of an automatic classification method for ground penetrating radar images according to an embodiment of the present disclosure is shown. As shown in fig. 1, the method mainly comprises the following steps:
step 1, scanning concrete by transmitting radar waves through a ground penetrating radar, and forming a scanning echo data set after a receiver in the radar captures a plurality of echo data;
optionally, step 1 specifically includes:
the ground penetrating radar transmitter continuously transmits radar waves at a preset fixed centre frequency as the radar moves, the receiver receives multiple single-column echo traces, and the single-column traces are spliced to form the scanning echo data.
In specific implementation, model training of the recognition method must be performed on a data set, so a data set is established first. Given that public data sets are scarce and samples of internal concrete defects are difficult to collect in bulk, several approaches are combined to build the data set. For example, the ground penetrating radar transmitter continuously transmits radar waves at a preset fixed centre frequency as the radar moves, the receiver receives multiple single-column echo traces, and these traces are spliced to form the scanning echo data. The data-set sources may further include field collection (concrete members of roads, tunnels and the like), test-piece collection, forward simulation (as shown in the dashed box of fig. 5), and data enhancement of the collected data; as shown in fig. 6, a in fig. 6 is the original image and the rest are enhanced data. Test pieces are made by mixing cement and crushed sand and pouring the mixture into a standard mould for forming; during manufacture, soil blocks, small piles of sand bags, empty plastic bottles, plastic bottles partly filled with water and the like are placed in the mould to create artificial defects, while a mould with nothing added yields a defect-free normal test piece. Scanning the test pieces with the ground penetrating radar yields radar grey-scale maps containing the various defects. In addition, forward simulation is performed with the open-source software gprMax, and simulated images obtained by adding materials of different properties and shapes to the forward model serve as defect samples.
Step 2, preprocessing the scanning echo data, eliminating noise signals in the scanning echo data set and enhancing the echo signals of deep targets to obtain a ground penetrating radar grey-scale map with clear texture information;
further, the step 2 specifically includes:
step 2.1, performing direct current drift removal on the scanning echo data;
step 2.2, removing horizontal signals in the data after the direct current drift is removed, and cutting the data through the depth corresponding to the peak value of the echo waveform after static correction processing;
step 2.3, performing gain processing on the data after the horizontal signal noise is removed, and amplifying the deep signal amplitude through gain to improve the texture characteristics in the gray level image;
step 2.4, performing band-pass filtering processing on the gained data to remove low-frequency and high-frequency noise signals far away from the center frequency of the radar transmitting antenna;
step 2.5, carrying out background noise removal treatment on the gray level image after data conversion to further increase effective signals;
step 2.6, carrying out moving average filtering processing on the gray level image to remove irregular burst noise and burr noise;
step 2.7, classifying and marking the ground penetrating radar gray level map;
and 2.8, scaling all the ground penetrating radar grey-scale images to a uniform resolution, and normalizing the pixel values to the range [0, 1].
In specific implementation, the samples obtained above are preprocessed; both the order of the preprocessing steps and the parameter values of each step affect the result. The specific processing steps can be as follows:
y01: the overall dc offset of the sample is removed, and the present example uses an averaging scheme in which each datum is subtracted from the entire trace of echo data. Some original image data is shown as a in fig. 5, and the result after dc-drift removal is shown as b in fig. 5.
Y02: and removing horizontal textures of the data after the drift is removed, removing a horizontal signal band by a static correction method in the embodiment, selecting the depth corresponding to the waveform of the second peak after the start to cut off, and reserving the second half part of the second peak. As shown in fig. 5 c.
Y03: after the horizontal signal noise is removed, gain processing is performed, and the gain processing mode includes multiple modes, such as (AGC gain, energy attenuation gain, manual control gain, and the like). This example applies energy attenuation-an attenuation curve is calculated from the waveform of the data after the step P01 and is back-referenced to the entire echo, so that the amplitude of the reflected echo from a deep object is expanded to be comparable to that of the reflected echo from a shallow object, while the amplitude of the shallow object does not vary much. In order to prevent the amplitude value of the part of echo after gain from being too large to exceed the range [0,255], a scaling factor is required to control the whole waveform, the scaling factor is related to the maximum peak value in the echo, the value range is [0,1], and in the example, the scaling factor is 0.41. The texture information in the image increases after the gain, as shown by d in fig. 5.
Y04: and further performing band-pass filtering on the gained data, wherein the center frequency of the ground penetrating radar antenna is 300MHz, performing Fourier transform on the gained data to obtain a spectrogram, and setting upper and lower boundary frequencies according to the conditions that the power is high and the power is concentrated near 300 MHz. The closer the upper line boundary is set, the more serious the filtering of effective signals is, and otherwise, the farther the upper line boundary is set, the less the filtering of early-shot signals is. In this example, the lower boundary frequency is 167.3203MHz, and the upper boundary frequency is 585.6208MHz, and the band-pass filtering is performed, as shown in e in FIG. 5.
Y05: the global background is continuously filtered after the band-pass filtering, the deep fine horizontal texture signals can be further removed by the north pole filtering, and the method adopted by the embodiment comprises the following steps: firstly, calculating average whole-channel echo data of multi-channel echo data in a whole image; the average echo data is then subtracted from each echo data trace and the result of the processing is shown as f in figure 5.
Y06: the image is filtered by the sliding mean, the image still contains burst single-point noise after background filtering, and the influence of the noise can be reduced by using the sliding mean filtering. An oversized filter tends to blur the image, and this example uses a size 2 filter for filtering, the result being shown in fig. 5 g.
Y07: after the preprocessing step is completed, different types of defect data are respectively stored and marked, all the ground penetrating radar gray level images are scaled to be uniform in resolution, and pixel values are converted to be between [0 and 1] through normalization processing.
Step 3, inputting the ground penetrating radar gray level map into the optimized residual error network model, and outputting the defect type corresponding to the image as an identification result through operation;
optionally, the residual error network model includes a plurality of building blocks stacked in series, where each building block includes a convolutional layer, an active layer, and a Batch Normalization layer, and includes two propagation paths;
the bypass of the building block is a direct connection structure, and the input is directly transmitted to the tail end of the building block and is output after being fused with the main path.
Further, before the step 3, the method further includes:
the convolutions in the original residual network model are grouped: the convolution kernels that originally extracted features with a stride of n are divided into several groups, each group extracts features from the regions on the feature map that the strided convolution would otherwise skip, and the group outputs are then stacked into a new feature map, where n is a positive integer greater than 2; the output is calculated as:
O_i = f( W_2·f([W_1^1 x_i, W_1^2 x_i, …, W_1^n x_i]) + W_s2·f([W_s1^1 x_i, W_s1^2 x_i, …, W_s1^n x_i]) )
in the formula, W_1^n represents the nth group of the first-layer convolution in the building block, W_s1^n represents the nth group of the first-layer convolution of the bypass in the building block, W_2 represents the second-layer convolution of the main path in the building block, W_s2 represents the second-layer convolution of the bypass in the building block, and [·] denotes stacking the group outputs into a new feature map;
adding convolution layers on the bypass of the dimension-reduction building block so that the dimensionality of the bypass feature map is raised to be the same as that of the main-path feature map;
and combining bypass dimension-raising and grouped convolution in the dimension-reduction building block to construct a new building block, the region dimension matching structure, thereby obtaining the optimized residual network model.
On the basis of the above embodiment, the step 3 specifically includes:
step 3.1, inputting the ground penetrating radar grey-scale map into the optimized residual network model, and enhancing the richness of the grey-scale map samples by applying rotation, flipping, translation, noise addition, scaling, random occlusion and brightness adjustment operations, or combinations of these operations, to the radar grey-scale image samples;
step 3.2, sequentially inputting the enhanced ground penetrating radar gray level images into a plurality of groups of building blocks, and outputting a characteristic diagram;
step 3.3, inputting the feature map into a global pooling layer;
and 3.4, inputting the output of the global pooling layer into a full-connection layer containing 4 neurons to obtain the identification result.
In specific implementation, the residual network is one type of convolutional neural network, with the advantages of a structure that is easy to adjust, resistance to over-fitting, and prevention of performance degradation in deep networks. It is formed by stacking a plurality of building blocks in series, where each building block is composed of convolution layers, activation layers and Batch Normalization (BN) layers and contains two propagation paths. The building-block bypass is a direct connection: the input is passed directly to the end of the building block, fused with the main path, and then output. In the dimension-reduction building block, a convolution layer is also used on the bypass to match the size of the main-path output; the structure is shown in fig. 2, and the building-block output is calculated as follows:
O_i = f(H(x_i)) = f(F(x_i) + x_i)

where x_i and O_i are the input and output of the ith building block, and F(x_i) and H(x_i) are the mappings before and after the fusion of the two paths; F(x_i) is what the network needs to learn, and can be further expressed as:

O_i = f(F(x_i, W_i) + x_i), F = W_2 f(W_1 x_i)

where f is the ReLU activation function, and W_1 and W_2 are the weight parameters of the first and second layers in the building block.

For the dimension-reduction building block, this becomes:

O_i = f(F(x_i, W_i) + W_s x_i), F = W_2 f(W_1 x_i)

where W_s adjusts the bypass so that the output feature maps of the two paths have the same size.
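The building-block equations above can be sketched in numpy with dense weight matrices standing in for the convolution layers (a simplification for illustration only; the BN layers are omitted and the weights are random, so this is not the patent's implementation):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def building_block(x, W1, W2, Ws=None):
    """Residual building block: O = f(F(x) + shortcut(x)).

    F = W2 @ relu(W1 @ x) is the main path; the bypass is the
    identity, or Ws @ x in a dimension-reduction building block."""
    main = W2 @ relu(W1 @ x)
    shortcut = x if Ws is None else Ws @ x
    return relu(main + shortcut)

rng = np.random.default_rng(0)
x = rng.standard_normal(8)

# Identity-bypass block: square weights, input and output sizes match.
W1 = rng.standard_normal((8, 8)) * 0.1
W2 = rng.standard_normal((8, 8)) * 0.1
out = building_block(x, W1, W2)
print(out.shape)      # (8,)

# Dimension-reduction block: Ws resizes the bypass to match the main path.
W1r = rng.standard_normal((4, 8)) * 0.1
W2r = rng.standard_normal((4, 4)) * 0.1
Ws = rng.standard_normal((4, 8)) * 0.1
out_r = building_block(x, W1r, W2r, Ws)
print(out_r.shape)    # (4,)
```

The second call shows why W_s is needed: without it, the 8-dimensional input could not be added to the 4-dimensional main-path output.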
Each column of pixels in the ground penetrating radar gray-scale image represents one complete received echo and carries information about the subsurface targets. Adjacent pixel columns are echoes recorded after the radar has moved a short distance, so they are highly similar, and trends appear across multiple columns. In a residual network, a dimension-reduction building block is inserted every fixed number of building blocks; its convolution kernels down-sample with a stride of n, where n is usually 2 but may be an integer greater than 2 for stronger dimension reduction. Down-sampling reduces the overall computation and the risk of over-fitting, but it also causes entire columns of pixels to be skipped; because whole columns carry important meaning in ground penetrating radar images, this can reduce model accuracy. In addition, the more convolution layers an image passes through, the higher the dimensionality of the feature map and the better the generalization of the extracted features; since the bypass in the dimension-reduction building block has fewer convolution layers than the main path, the two paths produce feature maps of different dimensionality, and the feature reinforcement after fusion is weak.
In view of these shortcomings of applying the original residual network to ground penetrating radar images, the invention addresses them by introducing a convolution grouping and bypass dimension-raising scheme.
Further, for convolution grouping, the convolution kernels that originally extract features with a step size of 2 are divided into several groups, which re-extract features from the skipped regions of the feature map; the per-group convolution results are then stacked into a new feature map. The output is calculated as:

O_i = f(W_2 f([W_1^(1) x_i, …, W_1^(n) x_i]) + W_s2 f([W_s1^(1) x_i, …, W_s1^(n) x_i]))

where [·] denotes stacking the group outputs along the channel dimension, W_1^(n) is the nth group of the first-layer convolution of the main path in the building block, W_s1^(n) is the nth group of the first-layer convolution of the bypass, W_2 is the second-layer convolution of the main path, and W_s2 is the second-layer convolution of the bypass.
Further, to exploit high dimensionality and to resolve the dimensional mismatch between the two paths of the dimension-reduction building block, convolution layers are added on the bypass so that the dimensionality of the bypass feature map is raised to match that of the main path; the two feature maps are then added and activated together, strengthening the output features. The output is calculated as:

O_i = f(W_2 f(W_1 x_i) + W_s2 f(W_s1 x_i))

where W_1 is the first-layer convolution of the main path in the building block and W_s1 is the first-layer convolution of the bypass.
Based on the above, the invention combines bypass dimension-raising and grouped convolution in the dimension-reduction building block to construct a new building block, the region dimension matching structure, as shown in fig. 3; its output is calculated as:

O_i = f(W_2 f([W_1^(1) x_i, …, W_1^(n) x_i]) + W_s2 f([W_s1^(1) x_i, …, W_s1^(n) x_i]))
Meanwhile, considering that the concrete ground penetrating radar gray-scale image samples obtained above still lack balance and diversity, data enhancement is used to enrich the number of samples of each defect type and improve the overall generalization of the sample set. The concrete internal defects in this example are classified into 4 types, including voids, soil inclusions and sand pockets, as shown in fig. 7. Data enhancement is performed in the following ways:
h01: rotating each defect sample image, wherein the rotation angle of each sample is randomly selected from (0-360 DEG)
H02: and (4) carrying out turnover conversion treatment on each defect sample image, and randomly carrying out horizontal turnover or vertical turnover on each sample.
H03: and (4) carrying out translation processing on each defect sample image, and randomly moving to other coordinates by taking the upper right corner of each sample as a coordinate origin.
H04: and carrying out scaling processing on each defect sample image, wherein each sample is randomly obtained with a scaling factor alpha, alpha epsilon (0,1) represents reduction, and alpha >1 represents enlargement. The reduction is achieved by down-sampling, the amplification by up-sampling experiment, this example by bilinear interpolation. The image texture characteristics are invalid due to too large and too small scaling factors, and the value range alpha of the scaling factor in the embodiment belongs to (0.8, 1.5).
H05: and carrying out shielding treatment on each defect sample image immediately, selecting a rectangular position in the image randomly by taking the upper right corner of each sample as a coordinate origin, and filling pixel points in the rectangle randomly again. The oversized rectangular frame covers the original image, so that the image is invalid, the shielding effect of the undersized rectangular frame is limited, and the rectangular frame is long in the quality range of 20-50 pixel points.
H06: and (3) adding noise to each defect sample image, wherein the noise randomly selects Gaussian noise or salt and pepper noise, the average value of the Gaussian noise is 0, the range of standard deviation values is [1,5], and the density of the salt and pepper noise is in the range of [0.1,0.5 ].
H07: each defect sample image is subjected to contrast adjustment processing, the contrast represents the difference between the brightest pixel concentration area and the darkest pixel concentration area in the image, the difference reflects the contrast, and the contrast is distorted by too much contrast and too little contrast, wherein the contrast range in the example is [0.7,2 ].
H08: the brightness of each defect sample image is adjusted, wherein the brightness is the brightness of the image and is represented by pixel gray scale values, the pixel value is close to 0 to indicate that the brightness is low, and conversely, the brightness is high. In the example, the gray value of the ground penetrating radar gray map is in the range of [0,255], the value range of the brightness adjustment coefficient is [0.7,2], and the bias is 10.
The brightness is adjusted as follows:

h(i,j) = k * f(i,j) + b

where f(i, j) is the initial pixel value at (i, j), h(i, j) is the adjusted output pixel value, k is the adjustment coefficient, and b is the bias; the bias shifts all pixel values uniformly without changing the contrast.
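The adjustment rule h(i, j) = k · f(i, j) + b is straightforward to sketch in numpy; the clipping to the 8-bit range [0, 255] is an added implementation detail not stated in the text:

```python
import numpy as np

def adjust_brightness(img, k=1.0, b=0.0):
    """h(i, j) = k * f(i, j) + b, clipped to the 8-bit gray range."""
    return np.clip(k * img.astype(np.float64) + b, 0, 255).astype(np.uint8)

img = np.array([[10, 100], [200, 250]], dtype=np.uint8)

# With k = 1 every pixel shifts uniformly by b (260 clips to 255).
print(adjust_brightness(img, k=1.0, b=10).tolist())  # [[20, 110], [210, 255]]
```

Because every pixel moves by the same offset, the differences between pixels (the contrast) are preserved except where clipping occurs.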
H09: a portion of the images resulting from the operations H01-H08 are randomly selected, and each image is again randomly subjected to one of the operations H01-H08, 50% of which are selected for the secondary enhancement operation in this example, as shown in the bottom dashed box of FIG. 6.
The enhanced data form a data set for training the classification model. This embodiment takes an 18-layer region dimension matching residual network as an example; it is obtained by optimizing the 18-layer residual network for ground penetrating radar image recognition and is formed by stacking 8 building blocks. The building blocks divide into dimension-reduction building blocks, which contain convolution layers with a stride of 2, and non-dimension-reduction building blocks, in which the stride is 1. Every two building blocks form a group; within a group the structure and the number of convolution kernels are the same, the main-path kernels are 3 × 3 in size, the bypass kernels in the dimension-reduction building blocks are 1 × 1 in size, and the number of convolution kernels doubles after every two building blocks. Data pass through the model in the following steps:
t01: the image data is input into the network in batches for training, 64 pieces of image data are selected as one batch for input in the example, and after the image data is input into the network, a preprocessing part in the network further carries out corresponding processing on the data to obtain standard and unified training data. The method comprises the steps of scaling the image to a uniform size M N (in the example, M and N are both 128), and normalizing all pixel points to be between 0 and 1.
T02: the normalized image is firstly characterized by extracting features from a group of 64 3 x 3 convolution kernels, and then activated and output by a ReLu function after passing through a normalization layer, wherein the normalization layer performs normalization processing on the same channel of a batch of images.
T03: the data is input into a first group of building blocks, the first building block receives the feature graph after the initial convolution, the building blocks are composed of a direct connection path and a main path, the main path contains two convolution layers, the convolution layers use 64 3 × 3 convolution kernels, the convolution step length is one, a BN layer is connected in series behind each convolution layer, the first BN layer is connected in series with a ReLu activation layer, the output of the second BN layer is added and fused with the direct connection layer, and then the output of the second BN layer is connected with a second activation layer and output to the next group of building blocks. Where the second building block is the same as the first, the output feature image size is (B, M, N, C) (the example size is (64,128, 64)), where B denotes the number of batch input feature maps, M and N denote the feature image size, and C denotes the feature map depth.
T04: the data enters a second set of building blocks, the third building block is a dimensionality reduction building block structure, the main-path first-layer convolutional layer uses 128 convolution kernels of 3 x 3, the convolution kernels are divided into 4 groups, each group of 32 convolution kernels starts convolution by taking the crossing position of the [1,3] th row and the [1,3] column, the crossing position of the [1,3] th row and the [2,4] column, the crossing position of the [2,4] th row and the [1,3] column and the crossing position of the [2,4] th row and the [2,4] column of the input feature map (64,128, 64) as starting positions, and the convolution step size is 2, as shown in P1, P2, P3 and P4 in FIG. 3. The sizes of the four groups of feature maps after the features are extracted by the four groups of convolution kernels are the same as (64,64,64 and 32), and the sizes of the new feature maps obtained by stacking the four groups of maps are (64,64,64 and 128) and are marked as F31. F31 passes through the BN layer, and the size of the output characteristic graph is unchanged and is marked as F31B. Meanwhile, the bypassed first layer of convolution also uses 128 sets of convolution kernels with the size of 1 × 1, the real positions of each set of convolution kernels on the input characteristic diagram are respectively the 1 st row and the 1 st column, the 1 st row and the 2 nd column, the 2 nd row and the 1 st column and the 2 nd row and the 2 nd column, and the step size of the convolution kernels is 2, as shown in Q1, Q2, Q3 and Q4 in fig. 3. After bypassing the first convolution, the four feature maps have the size (64,64,64,32) and the size (64,64,64,128) after stacking, which is denoted as M31, and then continuing to normalize the feature maps by using the BN layer to obtain M31B, the feature maps have unchanged sizes. 
The first BN layer of the two paths is output and then fused, namely F31B and M31B are added, the output size is (64,64, 128), and is marked as F31S. The feature graph after the primary fusion continues to be on the second convolution layer in the main path propagation machine, features are extracted by 128 convolution kernels with the size of 3 × 3, then the feature graph continues to be propagated into the second BN layer of the main path, the output is marked as F32B, and the size of the feature graph is not changed. Bypassing the second convolutional layer, features were extracted from 128 1 × 1 sized convolutional kernels, and the second BN layer output was labeled M32B, with the size unchanged. From above, the third building block outputs: and F (F32B + M32B) after the main path and the output of the bypass passing through the BN layer for the second time are added and fused, and then the ReLu activates the main path and the output.
The fourth building block input data is the output of the previous building block, i.e., F (F32B + M32B), which is identical in structure to the first building block except that the number of convolution kernels used is doubled to 128 and the output size is (64,64, 128).
T05: the data enters a third group of building blocks, the overall structure of the building blocks is the same as that of the second group, the number of used convolution kernels is increased to 256, the data is propagated in the third group of building blocks and is referred to the upper group of building blocks, and the output characteristic graph size is as follows: (64,32,32,256).
T06: the data enters the fourth set of building blocks, which is the last set of building blocks, and has the same structure as the second set of building blocks, the input signature propagates through the fourth set of building blocks as the second set of building blocks, the number of convolution kernels used by all convolution layers reaches 512, and the output signature has a size (64,16, 256).
T07: the feature map data output by the last building block enter the global pooling layer. The global pooling layer replaces each depth slice of the feature map with its mean value, so a feature map of spatial size (16, 16) becomes (1, 1) after global pooling. The output size after global pooling in this example is (64, 1, 1, 512).
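Global average pooling as described can be sketched in a line of numpy; the constant-valued batch here is purely illustrative:

```python
import numpy as np

def global_avg_pool(fmaps):
    """Collapse each (H, W) plane to its mean: (B, H, W, C) -> (B, 1, 1, C)."""
    return fmaps.mean(axis=(1, 2), keepdims=True)

# A batch shaped like the last building block's output in the example.
batch = np.ones((64, 16, 16, 512), dtype=np.float32)
pooled = global_avg_pool(batch)
print(pooled.shape)   # (64, 1, 1, 512)
```

Each of the 512 channels is reduced to a single scalar per image, which is what the 4-neuron fully connected layer then consumes.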
T08: finally, the recognition result is output through a fully connected layer containing 4 neurons.
Step 4: when later actual inspection confirms the result is accurate, the recognition result is added to the data set to update the residual network model.
In specific implementation, the steps can be as follows:
t09: after the prediction result is obtained, the result is compared with the label value to calculate the loss, and the cross entropy loss function is adopted in the embodiment. And reversely updating the weight parameters of each layer of the regional dimension matching network by an Adam optimizer at a learning rate of 0.001 after the cross entropy loss is obtained.
T10: after the parameters of the whole network are updated, training data of the next batch are continuously input into the network from the beginning, and the steps from T01 to T09 are repeated, in this example, the number of training cycles is 800, after the training is finished, a detection model is generated for actually classifying the ground penetrating radar image of the concrete defect, and the training process is shown in FIG. 8, wherein (a) is an accuracy graph of the training process, and (b) is a loss graph of the training process.
E01: the network model trained by the example is obtained through the steps. In practical application, firstly, a B-Scan graph is formed by scanning concrete structures such as roads, tunnels and bridges with ground penetrating radar to obtain echo data. And (3) inputting the acquired image into a model formed in T10 through Y01-Y07 steps, and obtaining the defect prediction classification of the ground penetrating radar image in the model through T01-T08 steps.
E02: after the images collected in practical application are identified by the method, the images can be further added into a database, and after the database is updated, the training and updating of the model can be continued, as shown by the dotted arrow in fig. 4.
According to this automatic classification method for ground penetrating radar images, the collected data are standardized and the convolutional neural network is improved to better suit radar image recognition, improving the accuracy and efficiency of concrete internal defect detection.
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof.
The above description is only for the specific embodiments of the present disclosure, but the scope of the present disclosure is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present disclosure should be covered within the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (6)

1. A ground penetrating radar image automatic classification method is characterized by comprising the following steps:
step 1, scanning concrete by transmitting radar waves through a ground penetrating radar, and forming a scanning echo data set after a receiver in the radar captures a plurality of echo data;
step 2, preprocessing the scanning echo data, eliminating noise signals in the scanning echo data set and enhancing the echo signals of deep targets to obtain a ground penetrating radar gray-scale map with clear texture information;
step 3, inputting the ground penetrating radar gray level map into the optimized residual error network model, and outputting the defect type corresponding to the image as an identification result through operation;
and step 4, when later actual inspection confirms the result is accurate, adding the recognition result into the data set to update the residual network model.
2. The method according to claim 1, wherein step 1 specifically comprises:
the ground penetrating radar transmitter continuously transmits radar waves at a preset fixed center frequency as the radar moves; the receiver receives multiple channels of single-column echo data, which are spliced to form the scanning echo data.
3. The method according to claim 1, wherein the step 2 specifically comprises:
step 2.1, performing direct current drift removal on the scanning echo data;
step 2.2, removing horizontal signals in the data after the direct current drift is removed, and cutting the data through the depth corresponding to the peak value of the echo waveform after static correction processing;
step 2.3, performing gain processing on the data after the horizontal signal noise is removed, and amplifying the deep signal amplitude through gain to improve the texture characteristics in the gray level image;
step 2.4, performing band-pass filtering processing on the gained data to remove low-frequency and high-frequency noise signals far away from the center frequency of the radar transmitting antenna;
step 2.5, carrying out background noise removal treatment on the gray level image after data conversion to further increase effective signals;
step 2.6, carrying out moving average filtering processing on the gray level image to remove irregular burst noise and burr noise;
step 2.7, classifying and marking the ground penetrating radar gray level map;
and step 2.8, scaling all the ground penetrating radar gray-scale maps to a uniform resolution, and normalizing the pixel values to the range [0, 1].
4. The method of claim 3, wherein the residual network model comprises a plurality of building blocks stacked in series, each building block comprising convolution layers, activation layers and Batch Normalization layers and containing two propagation paths;
the bypass of the building block is a direct connection structure: the input is passed directly to the end of the building block, fused with the main path, and then output.
5. The method of claim 4, wherein prior to step 3, the method further comprises:
for convolution grouping in the original residual network model, dividing the convolution kernels that originally extract features with a step size of n into a plurality of groups, using the groups to extract features from the skipped regions of the feature map respectively, and then stacking the per-group convolution results into a new feature map, wherein n is a positive integer not less than 2, and the output is calculated as:

O_i = f(W_2 f([W_1^(1) x_i, …, W_1^(n) x_i]) + W_s2 f([W_s1^(1) x_i, …, W_s1^(n) x_i]))

in the formula, [·] denotes stacking the group outputs along the channel dimension, W_1^(n) denotes the nth group of the first-layer convolution of the main path in the building block, W_s1^(n) denotes the nth group of the first-layer convolution of the bypass in the building block, W_2 denotes the second-layer convolution of the main path in the building block, and W_s2 denotes the second-layer convolution of the bypass in the building block;
adding convolution layers on the bypass of the dimension-reduction building block so that the dimensionality of the bypass feature map is raised to match that of the main-path feature map;
and combining bypass dimension-raising and grouped convolution in the dimension-reduction building block to construct a new building block, the region dimension matching structure, to obtain the optimized residual network model.
6. The method according to claim 5, wherein the step 3 specifically comprises:
step 3.1, inputting the ground penetrating radar gray level map into an optimized residual error network model, and enhancing the richness of the ground penetrating radar gray level map by performing rotation, turning, translation, noise addition, scaling, random shielding and brightness adjustment operations or composite operations on a radar gray level image sample;
step 3.2, sequentially inputting the enhanced ground penetrating radar gray level images into a plurality of groups of building blocks, and outputting a characteristic diagram;
step 3.3, inputting the feature map into a global pooling layer;
and 3.4, inputting the output of the global pooling layer into a full-connection layer containing 4 neurons to obtain the identification result.
CN202210385731.7A 2022-04-13 2022-04-13 Ground penetrating radar image automatic classification method Pending CN114814769A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210385731.7A CN114814769A (en) 2022-04-13 2022-04-13 Ground penetrating radar image automatic classification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210385731.7A CN114814769A (en) 2022-04-13 2022-04-13 Ground penetrating radar image automatic classification method

Publications (1)

Publication Number Publication Date
CN114814769A true CN114814769A (en) 2022-07-29

Family

ID=82536685

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210385731.7A Pending CN114814769A (en) 2022-04-13 2022-04-13 Ground penetrating radar image automatic classification method

Country Status (1)

Country Link
CN (1) CN114814769A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115598639A (en) * 2022-12-14 2023-01-13 山东大学(Cn) Device and method for collecting face geological conditions by millimeter wave radar in tunnel environment
CN116381821A (en) * 2023-06-05 2023-07-04 中国科学院地质与地球物理研究所 Device and method for indoor verification of detection resolution of ground penetrating radar on complex stratum
CN116381821B (en) * 2023-06-05 2023-08-08 中国科学院地质与地球物理研究所 Device and method for indoor verification of detection resolution of ground penetrating radar on complex stratum

Similar Documents

Publication Publication Date Title
CN112462346B (en) Ground penetrating radar subgrade disease target detection method based on convolutional neural network
CN114814769A (en) Ground penetrating radar image automatic classification method
CN112287807B (en) Remote sensing image road extraction method based on multi-branch pyramid neural network
CN102013015B (en) Object-oriented remote sensing image coastline extraction method
CN103236063B (en) Based on the SAR image oil spilling detection method of multiple dimensioned spectral clustering and decision level fusion
CN111476088B (en) Asphalt pavement water damage identification model construction method, identification method and system
WO2023123568A1 (en) Ground penetrating radar image artificial intelligence recognition method and device
CN111025286B (en) Ground penetrating radar map self-adaptive selection method for water damage detection
CN112130132A (en) Underground pipeline detection method and system based on ground penetrating radar and deep learning
CN107247927B (en) Method and system for extracting coastline information of remote sensing image based on tassel cap transformation
CN113256562A (en) Road underground hidden danger detection method and system based on radar images and artificial intelligence
CN102201125A (en) Method for visualizing three-dimensional imaging sonar data
CN112989481B (en) Method for processing stable visual image data of complex geological tunnel construction surrounding rock
CN115393712B (en) SAR image road extraction method and system based on dynamic hybrid pooling strategy
CN114170527A (en) Remote sensing target detection method represented by rotating frame
CN111025285A (en) Asphalt pavement water damage detection method based on map gray scale self-adaptive selection
Wang et al. Deep learning-based rebar clutters removal and defect echoes enhancement in GPR images
CN116203559A (en) Intelligent recognition and early warning system and method for underground rock and soil disease body
CN116246169A (en) SAH-Unet-based high-resolution remote sensing image impervious surface extraction method
CN113469097B (en) Multi-camera real-time detection method for water surface floaters based on SSD network
CN117351321A (en) Single-stage lightweight subway lining cavity recognition method and related equipment
CN115761038A (en) Tunnel face geological sketch method and system based on image spectrum technology
CN111160182B (en) Method for identifying ancient river ground surface calcium rock formation
CN115220098A (en) Automatic recognition method and device for broken and crack-controlled carbonatite hole body
CN114708514B (en) Method and device for detecting forest felling change based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination