CN115291210B - 3D-CNN ground penetrating radar three-dimensional image pipeline identification method combined with attention mechanism
- Publication number: CN115291210B (application CN202210881070.7A)
- Authority: CN (China)
- Prior art keywords: penetrating radar, ground penetrating, image, gain, dimensional
- Prior art date
- Legal status: Active
Classifications
- G01S13/885 — Radar or analogous systems specially adapted for ground probing
- G01S7/417 — Target characterisation using analysis of echo signal, involving the use of neural networks
Abstract
The invention provides a 3D-CNN ground penetrating radar three-dimensional image pipeline identification method combined with an attention mechanism. The method comprises: preprocessing the acquired actual three-dimensional echo images of the ground penetrating radar; manually labeling the three-dimensional echo images, shuffling them, and randomly distributing them to a training set and a verification set, while ensuring that after each shuffle the training set contains the same number of samples of every category; training a 3D-CNN neural network model augmented with an attention mechanism on the generated training set to obtain a trained model; and using the trained model to identify pipelines in ground penetrating radar three-dimensional images. The invention addresses the low recognition accuracy and low efficiency of traditional recognition methods and of 2D-CNN-based neural network recognition, improves the 3D-CNN by adding an attention mechanism, and thereby raises the accuracy with which the network model recognizes ground penetrating radar three-dimensional images.
Description
Technical Field
The invention belongs to the technical field of target detection in the post-processing of ground penetrating radar three-dimensional echo images, and in particular relates to a 3D-CNN ground penetrating radar three-dimensional image pipeline identification method combined with an attention mechanism.
Background
Ground penetrating radar (GPR) is a promising underground pipeline detection tool; compared with other conventional detection methods it is non-destructive, fast, efficient, low-cost and flexible to operate. Two-dimensional ground penetrating radar has a strong limitation: it can only give the burial depth of underground objects in the measured section and cannot provide information such as their orientation or shape, so the type of underground object is difficult to determine. Detection with three-dimensional ground penetrating radar is therefore the development trend in pipeline detection. A three-dimensional ground penetrating radar generally uses an array of transmitting and receiving antennas; each antenna yields a two-dimensional longitudinal-section image in the depth direction, and by stacking multiple two-dimensional B-Scan images in spatial order a three-dimensional image (C-Scan) that fully reflects the underground spatial structure is obtained.
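As an illustrative sketch of how such a volume is organised (the array size, sample counts and variable names below are assumptions for illustration, not values prescribed by the invention), the stacking can be expressed in a few lines of NumPy:

```python
# Illustrative sketch: assembling a C-Scan volume from the B-Scan images
# produced by an array of antenna channels (sizes are assumptions).
import numpy as np

num_channels = 36          # assumed number of array antenna channels
depth_samples = 128        # assumed samples per A-Scan (depth direction)
traces_per_line = 128      # assumed A-Scans per survey line

# Each channel yields one 2-D B-Scan (depth x along-track distance).
b_scans = [np.random.randn(depth_samples, traces_per_line)
           for _ in range(num_channels)]

# Stacking the B-Scans in their spatial (cross-track) order gives a
# 3-D C-Scan volume that reflects the underground spatial structure.
c_scan = np.stack(b_scans, axis=0)   # shape: (36, 128, 128)
print(c_scan.shape)
```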
Ground penetrating radar has been widely used for underground pipeline target detection, but interpreting GPR image data remains challenging. First, GPR data are strongly affected by the subsurface medium, which is complex and introduces considerable noise. It is difficult for a person without specialized training to read a raw GPR image; manual reading depends entirely on the inspector's experience, lacks a unified standard and is highly subjective, so the interpretation may be inaccurate and targets may be missed or falsely detected. Even a skilled inspector struggles to find the desired target correctly among numerous disturbances. Second, the volume of ground penetrating radar image data is huge: a single kilometre of survey can produce several gigabytes, and even a skilled inspector may need weeks or months to read the data collected by vehicle-mounted GPR equipment in a single day. This heavy manual interpretation burden limits the application of GPR in large-scale urban scenes, so research into an accurate, automatic identification algorithm for GPR images can significantly reduce detection time and cost.
Automatic target recognition methods fall mainly into two categories: machine learning (ML) based methods and deep learning (DL) based methods. Conventional machine learning typically requires a great deal of expertise and experience to design feature extractors manually. Ground penetrating radar image data are complex, contain considerable interference and are easily affected by different underground medium environments, so traditional machine learning struggles to generalize. Deep learning has proven to be an effective way of learning and extracting more accurate features from measured data without manual design, so most state-of-the-art algorithms for ground penetrating radar image recognition are based on deep learning networks. Most of these networks, however, rely on two-dimensional B-Scan images, from which it is difficult to obtain accurate information about subsurface targets, because reflections caused by holes, pipelines and other urban utilities all show similar hyperbolic patterns. A CNN, for example, achieves object detection by extracting features from a two-dimensional image layer by layer through convolution layers. Such network models are designed around specific two-dimensional input images; a two-dimensional convolutional neural network is therefore not sensitive enough to spatial information and cannot accurately interpret the three-dimensional echo images of a ground penetrating radar.
Disclosure of Invention
The invention aims to solve the low recognition accuracy and low recognition efficiency of traditional recognition methods and of 2D/3D-CNN-based neural network recognition methods, and provides a 3D-CNN ground penetrating radar three-dimensional image pipeline identification method combined with an attention mechanism. The invention improves the 3D-CNN by adding an attention mechanism and thereby improves the accuracy with which the network model identifies ground penetrating radar three-dimensional images.
The invention is realized by the following technical scheme: a 3D-CNN ground penetrating radar three-dimensional image pipeline identification method combined with an attention mechanism, which specifically comprises the following steps:
Step 1: preprocessing the obtained actual three-dimensional echo images of the ground penetrating radar, wherein the preprocessing comprises removing the direct wave by mean filtering and applying gain processing to the direct-wave-filtered images by an extreme-value envelope method;
Step 2: manually labeling the ground penetrating radar three-dimensional echo images obtained in step 1 into the categories pipeline, cavity and no target; shuffling the samples to remove chance effects in the division, and then randomly distributing them to a training set and a verification set, while ensuring that after each shuffle the training set contains the same number of samples of every category;
Step 3: training the attention-augmented 3D-CNN neural network model with the training set generated in step 2 to obtain a trained neural network model; the training process comprises: first applying a channel attention mechanism to the horizontal section images, so that the network pays more attention to the horizontal sections carrying key feature information and suppresses the unimportant ones; then letting the network learn the position information of the key features within the horizontal sections through a spatial attention mechanism; and finally feeding the images optimized by the attention module into the 3D-CNN;
The 3D-CNN neural network model is specifically as follows: the single input size of the network is (1, 36, 128, 128), where 1 indicates a single C-Scan image, 36 is the number of channels, i.e. the number of B-Scan images contained in one C-Scan image, and 128 × 128 is the size of a single B-Scan image; the first layer performs a three-dimensional convolution of the input image with 4 convolution kernels of size 3 × 3 to obtain three-dimensional feature maps of size (4, 36, 128, 128), the feature maps are normalized by batch normalization and passed through a ReLU activation function for a nonlinear transformation, and the resulting 4 three-dimensional feature maps are used as the input of the next layer; the second layer is 1 BasicBlock module with input size (4, 36, 128, 128) and output size (8, 36, 128, 128); the third layer is also 1 BasicBlock module, with input size (8, 36, 128, 128) and output size (8, 36, 128, 128); the fourth layer consists of 3 groups of BasicBlock modules, each group composed of a BasicBlock module with input size (8, 36, 128, 128) and output size (16, 18, 64, 64) and a BasicBlock module with input size (16, 18, 64, 64); these four layers finally yield three-dimensional feature maps of size (16, 18, 64, 64), which are passed through global average pooling and a Flatten layer for feature extraction, and the extracted features are finally sent to a fully connected layer for classification output;
Step 4: carrying out pipeline identification on ground penetrating radar three-dimensional images with the neural network model obtained in step 3.
Furthermore, the direct wave appears as two horizontal black-and-white stripes in the B-Scan image, while the echoes of underground targets are uncorrelated between the A-Scan echo signals of the measuring points on the same survey line; exploiting this lack of correlation, the mean filtering method can effectively remove the direct-wave interference from the B-Scan image.
Further, the gain processing adopts adaptive segmented gain: the gain weight within each time window is determined from the average amplitude of the signal in that window, i.e. the gain weights are set adaptively from the ground penetrating radar signal.
Further, the adaptive gain function is generated as follows: the absolute value of the A-Scan data is taken and the mean H(i) over the multiple A-Scan traces is computed; the mean of H(i) within each time window is then computed according to the number of segments, and its reciprocal is taken as the gain value at the starting point of that window; the remaining positions of the gain function are obtained by linear interpolation, so that the gain weight of each segment is obtained adaptively from the B-Scan image. Taking the envelope extreme points of H(i) as the segmentation points yields the optimal number of segments and segmentation points adaptively; this is the extreme-value envelope gain.
Further, the training set and validation set division ratio is 6:4.
Further, to avoid applying meaningless gain before the direct wave and excessive gain to deep noise, special processing is applied in these two stages:
(1) The signal before the maximum point of the envelope of H(i) is considered to require no gain;
(2) To prevent excessive gain on deep signals, once an extreme value of the envelope of H(i) falls below one-thirtieth of the maximum, the subsequent extreme points are no longer taken.
Further, in the training process the training parameters are a batch size of 16 and 180 epochs; the initial learning rate is set to 0.01 and is reduced by a factor of 10 every 60 epochs, finally yielding the trained neural network model.
Further, in step 4, a ground penetrating radar three-dimensional image that has not been input to the neural network model is used as the input of the trained model, so that the network automatically performs target pipeline identification on the image, and finally the ground penetrating radar three-dimensional images containing pipelines are labeled.
The invention provides an electronic device comprising a memory and a processor, the memory storing a computer program; when the processor executes the computer program, it implements the steps of the 3D-CNN ground penetrating radar three-dimensional image pipeline identification method combined with an attention mechanism.
The invention provides a computer readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the 3D-CNN ground penetrating radar three-dimensional image pipeline identification method combined with an attention mechanism.
The beneficial effects of the invention are as follows:
The ground penetrating radar three-dimensional images are preprocessed and manually labeled to build a ground penetrating radar three-dimensional image data set, and the training subset of this data set is used to train a neural network model that can subsequently identify and detect underground pipelines in ground penetrating radar three-dimensional images automatically. The method effectively improves the identification accuracy and detection efficiency for ground penetrating radar three-dimensional echo images, and raises the target recognition probability for underground pipelines to more than 92 percent.
Because of the complexity of the underground environment, ground penetrating radar images contain considerable noise, so the echo characteristics of a target are easily disturbed; a complete underground target can hardly be recovered from the two-dimensional B-Scan images alone, which increases the probability of false alarms and missed detections in target identification. The invention preprocesses the three-dimensional echo images obtained by the ground penetrating radar and then feeds them into a 3D convolutional neural network model combined with an attention mechanism; the 3D convolution kernels and the attention module extract the features of the three-dimensional image and the finer details in key regions, thereby suppressing the influence of irrelevant information on the classification result and overcoming the insensitivity of 2D convolutional neural networks to spatial information. Finally, the trained neural network model is used to identify underground pipelines in ground penetrating radar three-dimensional images automatically.
Drawings
FIG. 1 is a flow chart of a three-dimensional ground penetrating radar image underground pipeline identification method of the 3D-CNN algorithm combined with an attention mechanism.
Fig. 2 is a comparison before and after direct wave removal by mean filtering.
Fig. 3 is a comparison before and after extreme-value envelope gain processing.
Fig. 4 is a diagram of a 3D-CNN model structure.
Fig. 5 is a flow chart of a 3D-CNN network architecture.
Fig. 6 is a structural diagram of the BasicBlock module in the 3D-CNN network.
FIG. 7 compares the recognition accuracy of the attention-based A-3D-CNN with that of the 3D-CNN and 2D-CNN neural network models.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
With reference to Figs. 1-7, the invention provides a three-dimensional ground penetrating radar image underground pipeline identification method based on a 3D-CNN algorithm combined with an attention mechanism; the automatic identification method comprises the following steps:
Step 1: preprocess the obtained actual three-dimensional echo images of the ground penetrating radar, where the preprocessing comprises removing the direct wave by mean filtering and applying gain processing to the direct-wave-filtered images by an extreme-value envelope method;
Step 2: manually label the ground penetrating radar three-dimensional echo images obtained in step 1 into the three categories pipeline, cavity and no target; shuffle the samples to remove chance effects in the division, and then randomly distribute them to the training set and the verification set, while keeping the same number of samples of every category in the training set after each shuffle;
Step 3: train the attention-augmented 3D-CNN neural network model with the training set generated in step 2 to obtain a trained neural network model;
Step 4: carry out pipeline identification on ground penetrating radar three-dimensional images with the neural network model obtained in step 3.
In step 1, the direct wave in the ground penetrating radar three-dimensional image is removed by mean filtering. The direct wave usually appears as two horizontal black-and-white stripes in the B-Scan image: during actual road engineering surveys the distance between the vehicle-mounted ground penetrating radar and the road surface does not change significantly, so the arrival time of the direct wave is essentially the same in the A-Scan echo signals of the different measuring points on the same survey line, and the direct wave therefore maps to a horizontal straight line in the B-Scan image. The echoes of underground targets, by contrast, are uncorrelated between the A-Scan signals of the measuring points on the same line; using this property, the mean filtering method can conveniently and effectively remove the direct-wave interference from the B-Scan image. The direct wave is the dominant interference in the ground penetrating radar signal, and its amplitude is much larger than that of the target reflection, so in a B-Scan image the signal features below the ground surface are severely suppressed and hard to see, as shown in Fig. 2, which greatly hampers detection and identification of underground targets by the neural network. Direct-wave interference is therefore removed first in the preprocessing of ground penetrating radar data.
The specific mean filtering method is as follows:
Assume that the B-Scan image of one survey line of the ground penetrating radar consists of N A-Scan echo signals, each containing M sampling points. The B-Scan image can then be represented as an M × N matrix, and the mean filtering can be expressed mathematically as:
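The formula itself is not reproduced in this text; a plausible reconstruction of the row-mean subtraction described by the surrounding paragraphs (an assumption, not a verbatim copy of the patent's equation) is:

$$w'(i,j) = w(i,j) - \frac{1}{N}\sum_{k=1}^{N} w(i,k), \qquad i = 1,\dots,M,\ \ j = 1,\dots,N$$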
Here w(i, j) is the value in the i-th row and j-th column of the B-Scan image matrix before filtering, i.e. the value of the i-th sampling point of the j-th A-Scan echo signal, and w'(i, j) is the value in the i-th row and j-th column of the filtered B-Scan image matrix.
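A minimal sketch of this mean-filtering step, assuming NumPy arrays and a synthetic volume (the function name and shapes are illustrative, not part of the claims):

```python
# Hedged sketch of the mean-filtering step: the per-depth-sample mean over all
# N A-Scan traces is subtracted from each trace, suppressing the direct wave,
# which is nearly identical in every trace, while uncorrelated target echoes survive.
import numpy as np

def remove_direct_wave(b_scan: np.ndarray) -> np.ndarray:
    """b_scan: (M, N) array, M depth samples x N A-Scan traces."""
    row_mean = b_scan.mean(axis=1, keepdims=True)   # (M, 1) mean trace
    return b_scan - row_mean

# Example: apply to every B-Scan slice of a synthetic C-Scan volume.
c_scan = np.random.randn(36, 128, 128)
filtered = np.stack([remove_direct_wave(b) for b in c_scan], axis=0)
```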
The gain processing adopts adaptive segmented gain: the gain weight within each specified time window is determined from the average amplitude of the signal in that window, i.e. the gain weights are set adaptively from the ground penetrating radar signal. The adaptive gain function is generated as follows: the absolute value of the A-Scan data is taken and the mean H(i) over the multiple A-Scan traces is computed; the mean of H(i) within each time window is then computed according to the number of segments, and its reciprocal is taken as the gain value at the starting point of that window; the remaining positions of the gain function are obtained by linear interpolation, so that the gain weight of each segment is obtained adaptively from the B-Scan image. To overcome the difficulty of choosing the optimal number of segments and segmentation points for segmented adaptive gain, the envelope extreme points of H(i) can be taken as the segmentation points, so that the optimal number of segments and the segmentation points are obtained adaptively; this is called the extreme-value envelope gain.
The specific gain processing method is as follows: a Hilbert transform is first applied to H(i) to obtain its envelope, the extreme values of the envelope are found, and the extreme points are used as segmentation points. At the same time, to avoid applying meaningless gain before the direct wave and excessive gain to deep noise, special processing is applied in these two stages:
(1) The signal before the maximum point of the envelope of H(i) is considered to require no gain.
(2) To prevent excessive gain on deep signals, once an extreme value of the envelope of H(i) falls below one-thirtieth of the maximum, the subsequent extreme points are no longer taken.
The ground penetrating radar echo image after extreme value envelope gain processing is shown in figure 3.
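A hedged sketch of the extreme-value envelope gain described above; the exact interpolation, thresholding and edge handling of the invention may differ, and the function name, guard values and SciPy-based implementation are assumptions:

```python
import numpy as np
from scipy.signal import hilbert, argrelmax

def extreme_envelope_gain(b_scan: np.ndarray) -> np.ndarray:
    """b_scan: (M, N) array, M depth samples x N A-Scan traces, direct wave already removed."""
    M = b_scan.shape[0]
    h = np.abs(b_scan).mean(axis=1)              # H(i): mean of |A-Scan| over all traces
    envelope = np.abs(hilbert(h))                # envelope of H(i) via the Hilbert transform

    peak = int(np.argmax(envelope))              # envelope maximum marks the direct-wave region
    extrema = argrelmax(envelope)[0]
    # keep extreme points after the maximum that stay above 1/30 of the maximum
    seg = sorted({peak, M - 1} | {int(p) for p in extrema
                                  if p > peak and envelope[p] >= envelope[peak] / 30.0})
    if len(seg) < 2:
        return b_scan                            # nothing to segment; leave the data unchanged

    # gain at each window start = reciprocal of the mean of H(i) inside that window;
    # the remaining positions of the gain function come from linear interpolation
    gains = [1.0 / max(h[s:e + 1].mean(), 1e-12) for s, e in zip(seg[:-1], seg[1:])]
    gains.append(gains[-1])
    gain_curve = np.interp(np.arange(M), seg, gains)
    gain_curve[:peak] = 1.0                      # no gain before the direct wave
    return b_scan * gain_curve[:, None]
```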
In step 2, the ground penetrating radar three-dimensional images obtained in step 1 are manually labeled; the resulting data set is divided into 3 categories, namely pipeline, cavity and no target, and the training set and verification set are divided at a ratio of about 6:4.
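A minimal sketch of the labeling and splitting scheme, assuming an equal per-class training count fixed by the smallest class (the helper name and this particular balancing strategy are assumptions, not the patent's prescribed procedure):

```python
# Hedged sketch: shuffle the labeled samples and split roughly 6:4, keeping the
# same number of training samples for each of the three classes.
import numpy as np

def balanced_split(samples, labels, train_ratio=0.6, seed=0):
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    train_idx, val_idx = [], []
    classes = np.unique(labels)
    # the smallest class fixes an identical per-class training count
    per_class_train = int(min((labels == c).sum() for c in classes) * train_ratio)
    for c in classes:
        idx = rng.permutation(np.flatnonzero(labels == c))
        train_idx.extend(idx[:per_class_train])
        val_idx.extend(idx[per_class_train:])
    return ([samples[i] for i in train_idx], labels[train_idx],
            [samples[i] for i in val_idx], labels[val_idx])
```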
During data set construction, when an anomalous target is identified manually in a three-dimensional ground penetrating radar image, hyperbolic features are first found in the two-dimensional B-Scan images of several channels, and the horizontal section view at the corresponding position is then examined. Throughout this target feature analysis, the final conclusion is usually drawn from the image features in the horizontal section. The convolutional neural network can therefore use an attention mechanism to imitate this manual analysis and judgment of target features.
In step 3, the 3D-CNN neural network model combined with the attention mechanism is trained iteratively with the training set obtained in step 2; the network model structure is shown in Fig. 4. A channel attention mechanism is first applied to the horizontal section images, so that the network pays more attention to the horizontal sections carrying key feature information and suppresses the unimportant ones, and the network then learns the position of the key features within the horizontal sections through a spatial attention mechanism. The images optimized by the attention module are then fed into the 3D-CNN. The 3D-CNN network model used is shown in Fig. 5. The single input size of the network is (1, 36, 128, 128): 1 indicates a single C-Scan image, 36 is the number of channels, i.e. the number of B-Scan images contained in one C-Scan image, and 128 × 128 is the size of a single B-Scan image. The first layer performs a three-dimensional convolution of the input image with 4 convolution kernels of size 3 × 3 to obtain three-dimensional feature maps of size (4, 36, 128, 128); the feature maps are normalized by batch normalization and passed through a ReLU activation function for a nonlinear transformation, which increases the nonlinearity between the layers of the neural network and alleviates over-fitting and gradient explosion in the network model; the resulting 4 three-dimensional feature maps are used as the input of the next layer. The second layer is 1 BasicBlock module (the BasicBlock structure is shown in Fig. 6) with input size (4, 36, 128, 128) and output size (8, 36, 128, 128); the third layer is also 1 BasicBlock module, but with input size (8, 36, 128, 128) and output size (8, 36, 128, 128). The fourth layer consists of 3 groups of BasicBlock modules, each group composed of a BasicBlock module with input size (8, 36, 128, 128) and output size (16, 18, 64, 64) and a BasicBlock module with input size (16, 18, 64, 64). These four layers finally yield three-dimensional feature maps of size (16, 18, 64, 64), which are passed through global average pooling and a Flatten layer to extract features, and the extracted features are finally sent to the fully connected layer for classification output. The selected training parameters are a batch size of 16 and 180 epochs; the initial learning rate is set to 0.01 and is reduced by a factor of 10 every 60 epochs. The recognition accuracies obtained by training the attention-based A-3D-CNN, the 3D-CNN and the 2D-CNN neural network models are compared in Fig. 7. The A-3D-CNN has the best recognition performance, which shows that the invention is effective for underground pipeline identification in ground penetrating radar three-dimensional echo images and improves the accuracy compared with the ordinary 3D-CNN. The trained neural network model is finally obtained.
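A minimal PyTorch sketch of a channel-plus-spatial attention front end of the kind described above, followed by the first three-dimensional convolution layer; the layer sizes follow the text, but which tensor axis carries the horizontal sections, the reduction ratio, the module internals and all names are assumptions rather than the exact A-3D-CNN design:

```python
import torch
import torch.nn as nn

class SliceChannelAttention(nn.Module):
    """Weights each slice (treated as a channel) by its global importance."""
    def __init__(self, num_slices: int, reduction: int = 6):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(num_slices, num_slices // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(num_slices // reduction, num_slices),
        )

    def forward(self, x):                       # x: (B, S, H, W)
        avg = self.mlp(x.mean(dim=(2, 3)))      # global average description per slice
        mx = self.mlp(x.amax(dim=(2, 3)))       # global max description per slice
        w = torch.sigmoid(avg + mx)             # per-slice attention weight
        return x * w[:, :, None, None]

class SliceSpatialAttention(nn.Module):
    """Learns where in the slices the key features are located."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                       # x: (B, S, H, W)
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w

class AttentionFrontEnd(nn.Module):
    def __init__(self, num_slices: int = 36):
        super().__init__()
        self.ca = SliceChannelAttention(num_slices)
        self.sa = SliceSpatialAttention()
        # first 3D-CNN layer per the text: 4 kernels, BatchNorm, ReLU
        self.conv1 = nn.Sequential(
            nn.Conv3d(1, 4, kernel_size=3, padding=1),
            nn.BatchNorm3d(4),
            nn.ReLU(inplace=True),
        )

    def forward(self, volume):                  # volume: (B, 36, 128, 128)
        volume = self.sa(self.ca(volume))       # attention-optimized slices
        volume = volume.unsqueeze(1)            # (B, 1, 36, 128, 128) for Conv3d
        return self.conv1(volume)               # (B, 4, 36, 128, 128)

model = AttentionFrontEnd()
out = model(torch.randn(2, 36, 128, 128))
print(out.shape)                                # torch.Size([2, 4, 36, 128, 128])
```

For the stated schedule, such a model could, for example, be trained with torch.optim.SGD(model.parameters(), lr=0.01) and torch.optim.lr_scheduler.StepLR(optimizer, step_size=60, gamma=0.1), a batch size of 16 and 180 epochs; the choice of optimizer itself is not specified in the text.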
In step 4, the neural network model obtained in step 3 is used to identify pipelines in ground penetrating radar three-dimensional echo images. A ground penetrating radar three-dimensional image that has not been input to the neural network model is used as the input of the trained model, so that the network automatically performs target pipeline identification on the image, and finally the ground penetrating radar three-dimensional images containing pipelines are labeled.
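A brief sketch of this inference step, assuming `model` is the trained A-3D-CNN classifier with a three-way output and that the class ordering below is merely illustrative:

```python
# Hedged sketch: a previously unseen, preprocessed C-Scan volume is passed
# through the trained network and the predicted class is read from the logits.
import torch

class_names = ["pipeline", "cavity", "no target"]   # assumed ordering

def identify(model: torch.nn.Module, c_scan: torch.Tensor) -> str:
    """c_scan: (36, 128, 128) preprocessed ground penetrating radar volume."""
    model.eval()
    with torch.no_grad():
        logits = model(c_scan.unsqueeze(0))          # add batch dimension
    return class_names[int(logits.argmax(dim=1))]
```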
The invention provides an electronic device comprising a memory and a processor, the memory storing a computer program; when the processor executes the computer program, it implements the steps of the 3D-CNN ground penetrating radar three-dimensional image pipeline identification method combined with an attention mechanism.
The invention provides a computer readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the 3D-CNN ground penetrating radar three-dimensional image pipeline identification method combined with an attention mechanism.
The memory in the embodiments of the present application may be volatile memory or nonvolatile memory, or may include both. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM) or a flash memory. The volatile memory may be a random access memory (RAM), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM), double data rate synchronous dynamic RAM (DDR SDRAM), enhanced synchronous dynamic RAM (ESDRAM), synchlink dynamic RAM (SLDRAM) and direct rambus RAM (DR RAM). It should be noted that the memory of the methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
In the above embodiments, the methods may be implemented in whole or in part by software, hardware, firmware or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server or data center to another by wired means (e.g. coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless means (e.g. infrared, radio, microwave). The computer readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g. a floppy disk, hard disk or magnetic tape), an optical medium (e.g. a digital video disc (DVD)) or a semiconductor medium (e.g. a solid state disc (SSD)), or the like.
In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or by instructions in the form of software. The steps of a method disclosed in connection with the embodiments of the present application may be embodied directly in a hardware processor for execution, or in a combination of hardware and software modules in the processor for execution. The software modules may be located in a random access memory, flash memory, read only memory, programmable read only memory, or electrically erasable programmable memory, registers, etc. as well known in the art. The storage medium is located in a memory, and the processor reads the information in the memory and, in combination with its hardware, performs the steps of the above method. To avoid repetition, a detailed description is not provided herein.
It should be noted that the processor in the embodiments of the present application may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above method embodiments may be implemented by integrated logic circuits of hardware in a processor or instructions in software form. The processor may be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. The disclosed methods, steps, and logic blocks in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be embodied directly in the execution of a hardware decoding processor, or in the execution of a combination of hardware and software modules in a decoding processor. The software modules may be located in a random access memory, flash memory, read only memory, programmable read only memory, or electrically erasable programmable memory, registers, etc. as well known in the art. The storage medium is located in a memory, and the processor reads the information in the memory and, in combination with its hardware, performs the steps of the above method.
The 3D-CNN ground penetrating radar three-dimensional image pipeline identification method combined with an attention mechanism provided by the invention has been described in detail above. Specific examples are used herein to illustrate the principle and implementation of the invention, and the description of the above embodiments is only intended to help understand the method and its core idea; at the same time, those skilled in the art may make changes to the specific embodiments and application scope in accordance with the idea of the invention, so the content of this description should not be construed as limiting the invention.
Claims (10)
1. A 3D-CNN ground penetrating radar three-dimensional image pipeline identification method combined with an attention mechanism, characterized by comprising the following steps:
Step 1: preprocessing the obtained actual three-dimensional echo images of the ground penetrating radar, wherein the preprocessing comprises removing the direct wave by mean filtering and applying gain processing to the direct-wave-filtered images by an extreme-value envelope method;
Step 2: manually labeling the ground penetrating radar three-dimensional echo images obtained in step 1 into the categories pipeline, cavity and no target; shuffling the samples to remove chance effects in the division, and then randomly distributing them to a training set and a verification set, while ensuring that after each shuffle the training set contains the same number of samples of every category;
Step 3: training the attention-augmented 3D-CNN neural network model with the training set generated in step 2 to obtain a trained neural network model; the training process comprises: first applying a channel attention mechanism to the horizontal section images, so that the network pays more attention to the horizontal sections carrying key feature information and suppresses the unimportant ones; then letting the network learn the position information of the key features within the horizontal sections through a spatial attention mechanism; and finally feeding the images optimized by the attention module into the 3D-CNN;
The 3D-CNN neural network model is specifically as follows: the single input size of the network is (1, 36, 128, 128), where 1 indicates a single C-Scan image, 36 is the number of channels, i.e. the number of B-Scan images contained in one C-Scan image, and 128 × 128 is the size of a single B-Scan image; the first layer performs a three-dimensional convolution of the input image with 4 convolution kernels of size 3 × 3 to obtain three-dimensional feature maps of size (4, 36, 128, 128), the feature maps are normalized by batch normalization and passed through a ReLU activation function for a nonlinear transformation, and the resulting 4 three-dimensional feature maps are used as the input of the next layer; the second layer is 1 BasicBlock module with input size (4, 36, 128, 128) and output size (8, 36, 128, 128); the third layer is also 1 BasicBlock module, with input size (8, 36, 128, 128) and output size (8, 36, 128, 128); the fourth layer consists of 3 groups of BasicBlock modules, each group composed of a BasicBlock module with input size (8, 36, 128, 128) and output size (16, 18, 64, 64) and a BasicBlock module with input size (16, 18, 64, 64); these four layers finally yield three-dimensional feature maps of size (16, 18, 64, 64), which are passed through global average pooling and a Flatten layer for feature extraction, and the extracted features are finally sent to a fully connected layer for classification output;
Step 4: carrying out pipeline identification on ground penetrating radar three-dimensional images with the neural network model obtained in step 3.
2. The method of claim 1, wherein the direct wave appears as two horizontal black-and-white stripes in the B-Scan image, the echoes of underground targets are uncorrelated between the A-Scan echo signals of the measuring points on the same survey line, and the mean filtering method effectively removes the direct-wave interference from the B-Scan image by exploiting this lack of correlation.
3. The method of claim 2, wherein the gain processing adopts adaptive segmented gain processing, which determines the gain weight within a specified time window from the average amplitude of the signal in that window, i.e. the gain weights are set adaptively from the ground penetrating radar signal.
4. The method according to claim 3, wherein the adaptive gain function is generated as follows: the absolute value of the A-Scan data is taken and the mean H(i) over the multiple A-Scan traces is computed; the mean of H(i) within each time window is then computed according to the number of segments, and its reciprocal is taken as the gain value at the starting point of that window; the remaining positions of the gain function are obtained by linear interpolation, so that the gain weight of each segment is obtained adaptively from the B-Scan image; and the envelope extreme points of H(i) are taken as the segmentation points, so that the optimal number of segments and the segmentation points are obtained adaptively, i.e. the extreme-value envelope gain.
5. The method of claim 1, wherein the training set and the validation set are partitioned at a ratio of 6:4.
6. The method of claim 4, wherein, to avoid applying meaningless gain before the direct wave and excessive gain to deep noise, special processing is applied in these two stages:
(1) The signal before the maximum point of the envelope of H(i) is considered to require no gain;
(2) To prevent excessive gain on deep signals, once an extreme value of the envelope of H(i) falls below one-thirtieth of the maximum, the subsequent extreme points are no longer taken.
7. The method of claim 1, wherein, during the training process, the training parameters are a batch size of 16 and 180 epochs, the initial learning rate is set to 0.01 and the learning rate is reduced by a factor of 10 every 60 epochs, finally yielding the trained neural network model.
8. The method according to claim 1, wherein in step 4 a ground penetrating radar three-dimensional image that has not been input to the neural network model is used as the input of the trained neural network model, so that the network automatically performs target pipeline identification on the image, and finally the ground penetrating radar three-dimensional images containing pipelines are labeled.
9. An electronic device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1-8 when the computer program is executed.
10. A computer readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the method of any one of claims 1-8.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202210881070.7A (CN115291210B) | 2022-07-26 | 2022-07-26 | 3D-CNN ground penetrating radar three-dimensional image pipeline identification method combined with attention mechanism

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202210881070.7A (CN115291210B) | 2022-07-26 | 2022-07-26 | 3D-CNN ground penetrating radar three-dimensional image pipeline identification method combined with attention mechanism
Publications (2)

Publication Number | Publication Date
---|---
CN115291210A | 2022-11-04
CN115291210B | 2024-04-30
Family

ID=83823611

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202210881070.7A (CN115291210B, Active) | 3D-CNN ground penetrating radar three-dimensional image pipeline identification method combined with attention mechanism | 2022-07-26 | 2022-07-26

Country Status (1)

Country | Link
---|---
CN (1) | CN115291210B (en)
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116106856B (en) * | 2023-04-13 | 2023-08-18 | 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) | Identification model establishment method and identification method for thunderstorm strong wind and computing equipment |
CN116256701B (en) * | 2023-05-16 | 2023-08-01 | 中南大学 | Ground penetrating radar mutual interference wave suppression method and system based on deep learning |
CN117115641B (en) * | 2023-07-20 | 2024-03-22 | 中国科学院空天信息创新研究院 | Building information extraction method and device, electronic equipment and storage medium |
CN117218783A (en) * | 2023-09-12 | 2023-12-12 | 广东云百科技有限公司 | Internet of things safety management system and method |
CN117784123B (en) * | 2023-11-15 | 2024-09-13 | 北京市燃气集团有限责任公司 | Method and system for acquiring deeper underground medium data |
CN117951599B (en) * | 2024-01-16 | 2024-07-23 | 北京市科学技术研究院 | Underground piping diagram generation method and device based on radar image |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113780361A (en) * | 2021-08-17 | 2021-12-10 | 哈尔滨工业大学 | Three-dimensional ground penetrating radar image underground pipeline identification method based on 2.5D-CNN algorithm |
CN113901878A (en) * | 2021-09-13 | 2022-01-07 | 哈尔滨工业大学 | CNN + RNN algorithm-based three-dimensional ground penetrating radar image underground pipeline identification method |
CN114169411A (en) * | 2021-11-22 | 2022-03-11 | 哈尔滨工业大学 | Three-dimensional ground penetrating radar image underground pipeline identification method based on 3D-CNN algorithm |
Non-Patent Citations (4)

Title |
---|
A deep learning model for small object detection based on an attention mechanism; 吴湘宁, 贺鹏, 邓中港, 李佳琪, 王稳, 陈苗; Computer Engineering & Science; 2021-12-31; Vol. 43, No. 001; whole document *
Image recognition of soybean pests based on an attention convolutional neural network; 孙鹏, 陈桂芬, 曹丽英; Journal of Chinese Agricultural Mechanization; 2020-02-15, No. 02; whole document *
Exploration of the basic principles of the attention mechanism module in sequence-to-sequence models; 马春鹏, 赵铁军; Intelligent Computer and Applications; 2020-01-01, No. 01; whole document *
Research on an improved UWB TWI imaging algorithm and its resolution; 马琳, 张中兆, 谭学治, 白旭; Journal of Harbin Institute of Technology; 2008-11-15, No. 11; whole document *
Also Published As

Publication number | Publication date
---|---
CN115291210A | 2022-11-04
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN115291210B (en) | 3D-CNN ground penetrating radar three-dimensional image pipeline identification method combined with attention mechanism | |
CN110866545A (en) | Method and system for automatically identifying pipeline target in ground penetrating radar data | |
CN107808138B (en) | Communication signal identification method based on FasterR-CNN | |
CN117079139B (en) | Remote sensing image target detection method and system based on multi-scale semantic features | |
CN105844279A (en) | Depth learning and SIFT feature-based SAR image change detection method | |
CN107729926B (en) | Data amplification method and machine identification system based on high-dimensional space transformation | |
CN115343703A (en) | Pipeline identification method of 3D-CNN ground penetrating radar three-dimensional image based on self-training | |
CN115311531A (en) | Ground penetrating radar underground cavity target automatic detection method of RefineDet network model | |
CN104680536A (en) | Method for detecting SAR image change by utilizing improved non-local average algorithm | |
CN113901878B (en) | Three-dimensional ground penetrating radar image underground pipeline identification method based on CNN+RNN algorithm | |
CN113962968B (en) | Multi-source mixed interference radar image target detection system oriented to complex electromagnetic environment | |
CN113780361B (en) | Three-dimensional ground penetrating radar image underground pipeline identification method based on 2.5D-CNN algorithm | |
CN113887583A (en) | Radar RD image target detection method based on deep learning under low signal-to-noise ratio | |
CN117633588A (en) | Pipeline leakage positioning method based on spectrum weighting and residual convolution neural network | |
CN115311532A (en) | Ground penetrating radar underground cavity target automatic identification method based on ResNet network model | |
CN114169411B (en) | Three-dimensional ground penetrating radar image underground pipeline identification method based on 3D-CNN algorithm | |
CN116756486A (en) | Offshore target identification method and device based on acousto-optic electromagnetic multi-source data fusion | |
CN116047418A (en) | Multi-mode radar active deception jamming identification method based on small sample | |
CN112346056B (en) | Resolution characteristic fusion extraction method and identification method of multi-pulse radar signals | |
CN117409329B (en) | Method and system for reducing false alarm rate of underground cavity detection by three-dimensional ground penetrating radar | |
CN118334525B (en) | Modern marine pasture site selection reliability assessment method based on decision tree model | |
CN113780364B (en) | SAR image target recognition method driven by combination of model and data | |
CN118262104A (en) | Training method, generating method, device and storage medium for target semantic segmentation model | |
CN116977746A (en) | Millimeter wave image target classification method, device, equipment and storage medium | |
CN117876841A (en) | Deep learning data model for removing clutter of underground pipeline ground penetrating radar and construction method thereof |
Legal Events

- PB01: Publication
- SE01: Entry into force of request for substantive examination
- GR01: Patent grant