CN112232392B - Data interpretation and identification method for three-dimensional ground penetrating radar

Data interpretation and identification method for three-dimensional ground penetrating radar

Info

Publication number
CN112232392B
CN112232392B (application CN202011049464.3A)
Authority
CN
China
Prior art keywords
data
training
stage
ground penetrating
penetrating radar
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011049464.3A
Other languages
Chinese (zh)
Other versions
CN112232392A (en)
Inventor
项芒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Ande Space Technology Co ltd
Original Assignee
Shenzhen Ande Space Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Ande Space Technology Co ltd filed Critical Shenzhen Ande Space Technology Co ltd
Priority to CN202011049464.3A priority Critical patent/CN112232392B/en
Publication of CN112232392A publication Critical patent/CN112232392A/en
Application granted granted Critical
Publication of CN112232392B publication Critical patent/CN112232392B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00: Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88: Radar or analogous systems specially adapted for specific applications
    • G01S13/885: Radar or analogous systems specially adapted for specific applications for ground probing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Remote Sensing (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Electromagnetism (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention relates to the technical field of ground penetrating radars, in particular to a data interpretation and identification method for a three-dimensional ground penetrating radar. The method uses deep-learning-based artificial intelligence to interpret and identify three-dimensional ground penetrating radar data and comprises: reading the data of each channel of the three-dimensional ground penetrating radar and preprocessing it; scanning the preprocessed data in B-scan and C-scan modes, combining the scanned images to generate sample pictures, and classifying and labeling the sample picture set; dividing the sample picture set into a training set, a verification set and a test set in a specific proportion and performing transfer training with an InceptionV3 deep convolutional neural network; and reading data acquired by the three-dimensional ground penetrating radar, preprocessing it to generate combined scan images, inputting them into the deep convolution model saved as an hdf5-format file, outputting confidences, and classifying and identifying target objects according to the confidences.

Description

Data interpretation and identification method for three-dimensional ground penetrating radar
Technical Field
The invention relates to the technical field of ground penetrating radars, in particular to a data interpretation and identification method for a three-dimensional ground penetrating radar.
Background
Ground penetrating radar (also called geological radar) detects underground media by emitting high-frequency pulse electromagnetic waves (with frequencies between 1 MHz and 1 GHz) to determine their distribution. It is simple to operate, accurate, non-destructive and fast to acquire, making it the most active detection technology in engineering detection and exploration today, with increasingly wide application in detecting underground defects beneath urban roads. Whereas a traditional two-dimensional radar yields only a single longitudinal vertical-section waveform, a three-dimensional radar samples in real time and seamlessly splices radar data with position information, acquiring horizontal-section data at different depths and vertical-section data at any longitudinal, transverse or oblique angle, interpreted with three-dimensional data processing software; it can accurately reflect anomaly types and depth information and effectively detect hidden underground hazards. At present, data processing and image interpretation for the three-dimensional ground penetrating radar are mainly performed manually, which is inefficient and cannot run continuously, and the non-uniform standards of different operators cause deviations in the results. In view of this, the invention provides a data interpretation and identification method for a three-dimensional ground penetrating radar that is efficient, applies a consistent standard, and processes data far faster than manual work.
Disclosure of Invention
The present invention is directed to a data interpretation and identification method for a three-dimensional ground penetrating radar, so as to solve the problems in the background art.
In order to achieve the purpose, the invention provides the following technical scheme:
a data interpretation and identification method for a three-dimensional ground penetrating radar is provided, which comprises the following steps:
S1, reading the data of each channel of the three-dimensional ground penetrating radar and preprocessing it;
S2, scanning the preprocessed data in B-scan and C-scan modes, combining the scanned images to generate sample pictures, and classifying and labeling the sample picture set;
S3, dividing the sample picture set into a training set, a verification set and a test set in a specific proportion, and performing transfer training with an InceptionV3 deep convolutional neural network;
and S4, reading data collected by the three-dimensional ground penetrating radar, preprocessing it to generate combined scan images, inputting them into the deep convolution model, outputting confidences, and classifying and identifying target objects according to the confidences.
Further, the step S1 further includes:
A1, reading the radar raw data of all 16 channels in all measuring lines and converting it into the image domain, wherein the conversion formula is:
y_i = b * (f(x_i) - min f(x)) / (max f(x) - min f(x))

wherein x_i is the original i-th echo value, f(x_i) is the filter function, and y_i is the corresponding value in the image domain with grey-value range [0, b].
A2, performing gain and filtering pretreatment on the original data, wherein the specific mode comprises the following steps:
Static correction removal:
Compared with a conventional single-channel two-dimensional radar, the biggest difference of the three-dimensional array radar lies in time-zero setting. Since the circuits and cables of all channels cannot be guaranteed to have exactly the same transit time, a time zero must be set so that the direct waves of all channels can be reliably aligned. Channels are selected on the principle of finding the shortest time zero, and the zero point is searched for and removed, ensuring that the time zeros of all channels fall within the time window.
Global background elimination:
This filtering removes horizontal and near-horizontal energy by subtracting the computed average trace from each trace; the average trace is obtained over the entire profile or a designated portion of it.
Reverse energy attenuation gain:
This filtering compensates for amplitude loss due to geometric spreading and attenuation with a time-varying gain: each trace is multiplied by a gain function comprising linear and exponential terms.
DC drift removal:
The amplitude of acquired traces often contains a constant offset, also referred to as the DC level or DC drift. This filtering removes the DC drift from the data; the offset is calculated and removed separately for each trace.
K-L transformation:
This filtering is the optimal orthogonal transform in the minimum mean-square-error sense: the processed feature images are orthogonal and mutually uncorrelated, so the information content of different bands does not overlap; the original features are converted into a smaller number of new features, the correlation between pattern features is eliminated, and their differences are highlighted.
Further, the step S2 further includes:
A1, segmenting each measuring line with a step of 100 traces, each segment being 200 traces long; that is, stepping 5 meters along the line direction gives segments 10 meters long with a 5-meter overlap between adjacent segments;
A2, extracting the data of every other (even-numbered) channel from the three-dimensional ground penetrating radar data segment by segment to obtain 8 vertical-section scans (B-scans);
A3, extracting data at 10-centimeter intervals in the depth direction from the three-dimensional ground penetrating radar data segment by segment to obtain 24 horizontal-section scans (C-scans);
A4, combining the 8 B-scans and 24 C-scans into one 227x227-pixel JPG picture to form a sample picture;
A5, extracting data for different channel and depth combinations according to the methods of stages 2, 3 and 4, multiplying the number of sample pictures by 9;
and A6, manually classifying the sample picture set and labeling it with four target classes: intact, sand well, pipeline and void.
length: 1000 cm, interval: 5 cm/trace
depth: 256 cm, interval: 1 cm/slice
width: 128 cm, interval: 8 cm/channel
B-scan: 8 slices, interval: 16 cm, 56*56 pixels
C-scan: 24 slices, interval: 10 cm, 56*18 pixels
2D grid: 8 B-scan images & 24 C-scan images, spacing 1 pixel between images
2D grid: 227x227x8 bpp
Further, the step S3 further includes:
A1, dividing the classified and labeled sample set into a training set, a verification set and a test set in the ratio 7:2:1;
a2, using a deep convolutional neural network to perform transfer learning, wherein the transfer learning steps are as follows:
(1) loading the InceptionV3 skeleton model, removing the top layer and outputting an 8 x 8 x 2048 tensor;
(2) appending the user-defined new layers: a GlobalAveragePooling2D function first converts the 8 x 8 x 2048 output into a 1 x 2048 tensor, followed by a fully-connected layer of 1024 nodes and finally an output layer of 4 nodes with softmax as the activation function.
Further, the training parameters are as follows: batch_size is set to 64, the number of images input per iteration during training; epoch is set to 300, one epoch being one complete pass over all training images, with a test on the verification set performed after each epoch. The larger the batch_size, the more images are input per iteration and the better the training fits the data distribution of the whole training set.
Further, during training, a result graph showing training loss and accuracy is used to judge whether the network is sufficiently trained and how to adjust the training parameters. A suitable learning rate is found by observing how the training loss decreases, after which the network is fully trained.
The learning rate variation formula is:
lr = base_lr * gamma^floor(iter / stepsize)

wherein base_lr is the basic learning rate during training, iter is the current iteration number, stepsize is the learning-rate change interval, gamma is the decay factor, and floor denotes rounding down.
Further, the initial learning rate is 10^-4: the first 3000 iterations use a learning rate of 10^-4, training then continues with 10^-5 until 4000 iterations, and finally with 10^-6 until the maximum of 5000 iterations, completing the training of the deep convolutional neural network.
After the transfer learning, fine-tuning is performed starting from the 17th layer of the neural network, and the correct layer is confirmed by printing the graph and debugging.
And A3, saving the optimal solution network model to the hdf5 format file.
Further, the step S4 further includes:
A1, performing area-coverage detection with the three-dimensional ground penetrating radar;
A2, reading and preprocessing the raw three-dimensional ground penetrating radar data;
A3, scanning the preprocessed data in B-scan and C-scan modes and combining the scanned images to generate an actual picture;
and A4, inputting the actual picture into the optimal-solution network model (hdf5-format file), outputting the confidence, and classifying and identifying the target object according to the confidence.
Compared with the prior art, the invention has the beneficial effects that:
the traditional nondestructive detection technology adopted by engineering detection and exploration is mainly carried out by using a two-dimensional radar. The two-dimensional radar can only record one piece of waveform data of a longitudinal vertical section in each detection, cannot finish the full-coverage detection from line to surface, is easy to cause missed detection, and certainly cannot describe the three-dimensional characteristics of a detected object; the three-dimensional radar is provided with any number of receiving and transmitting antennas, radar data and position information are seamlessly spliced in the acquisition process, 8-32 pieces of longitudinal vertical section waveform data can be acquired at a very small section interval (several centimeters) for each detection to form horizontal section images with different depths, and the shape, position, trend and the like of the underground abnormal body are very intuitively reflected. By processing and interpreting the data collected by the three-dimensional radar, the advantages of the three-dimensional radar can be fully exerted. However, the manual data processing and interpretation work is slow, only several kilometers of radar data can be processed and interpreted every day, and the efficiency is low; the standards are not uniform, and different processing personnel influence the processing result due to factors such as professional level, working state and the like, so that deviation is caused. By adopting artificial intelligence data processing and interpretation based on deep learning, abnormal bodies can be quickly positioned, accurate classification can be realized, continuous work can be carried out for 7 × 24 hours, the efficiency is high, the standards are consistent and have no deviation, and the processing speed is hundreds of times of that of manual work;
drawings
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a pre-processing process performed on ground penetrating radar data;
FIG. 3 is a schematic view of a scanning direction of the three-dimensional ground penetrating radar;
FIG. 4 is a B-scan of the ground penetrating radar after data preprocessing;
FIG. 5 is a C-scan of ground penetrating radar data after preprocessing;
FIG. 6-1 is an intact sample picture of three-dimensional ground penetrating radar data;
FIG. 6-2 is a sand well sample picture of three-dimensional ground penetrating radar data;
FIG. 6-3 is a pipeline sample picture of three-dimensional ground penetrating radar data;
FIG. 6-4 is a void sample picture of three-dimensional ground penetrating radar data;
FIG. 7 is a schematic diagram of deep convolutional neural network transfer learning;
FIG. 8 is a schematic diagram of a training set, validation set, and test set distribution;
FIG. 9 is a schematic diagram of the classification and identification effects of underground objects;
FIG. 10 is a list of the folder pictures for the risk (void) category in the four-class data set;
Detailed Description
Referring to fig. 1, the present invention provides a technical solution:
a data interpretation and identification method for a three-dimensional ground penetrating radar comprises steps S1 to S4.
In step S1, reading data of each channel of the three-dimensional ground penetrating radar, and preprocessing the data;
in this embodiment, the step S1 preferably further includes a stage 1 and a stage 2:
in stage 1, the radar raw data of all 16 channels in all lines are read and converted to the image domain, and the conversion formula is:
y_i = b * (f(x_i) - min f(x)) / (max f(x) - min f(x))

wherein x_i is the original i-th echo value, f(x_i) is the filter function, and y_i is the corresponding value in the image domain with grey-value range [0, b].
This conversion completes the normalization of the radar amplitude components to image pixels.
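For illustration, a minimal numpy sketch of such a conversion is given below, assuming a linear min-max mapping of the filtered amplitudes onto [0, b]; the function name and the exact mapping are assumptions, not the verified original.

import numpy as np

def to_image_domain(traces, b=255):
    # traces: 2-D array (samples x traces) of filtered echo values f(x_i).
    # Linearly map the filtered amplitudes onto the grey-value range [0, b].
    lo, hi = traces.min(), traces.max()
    grey = (traces - lo) / (hi - lo + 1e-12) * b  # small epsilon avoids /0
    return grey.astype(np.uint8)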
In stage 2, the raw data is subjected to gain and filtering preprocessing, and the method comprises the following five sequential steps:
Static correction removal:
Compared with a conventional single-channel two-dimensional radar, the biggest difference of the three-dimensional array radar lies in time-zero setting. Since the circuits and cables of all channels cannot be guaranteed to have exactly the same transit time, a time zero must be set so that the direct waves of all channels can be reliably aligned. Channels are selected on the principle of finding the shortest time zero, and the zero point is searched for and removed, ensuring that the time zeros of all channels fall within the time window.
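A minimal sketch of this step, assuming the direct-wave onset is picked as the first sample exceeding a fraction of the trace maximum and that all traces are shifted to the shortest common zero; the threshold and picking rule are assumptions.

import numpy as np

def align_time_zero(traces, threshold=0.5):
    # traces: 2-D array (samples x channels). Pick each channel's direct-wave
    # onset as the first sample above threshold * max, then shift every
    # channel up to the shortest (earliest) time zero.
    n_samples = traces.shape[0]
    onsets = [int(np.argmax(np.abs(traces[:, j]) >
                            threshold * np.abs(traces[:, j]).max()))
              for j in range(traces.shape[1])]
    zero = min(onsets)  # "shortest time zero" principle
    out = np.zeros_like(traces)
    for j, onset in enumerate(onsets):
        shift = onset - zero
        out[:n_samples - shift, j] = traces[shift:, j]
    return out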
Global background elimination:
This filtering removes horizontal and near-horizontal energy by subtracting the computed average trace from each trace; the average trace is obtained over the entire profile or a designated portion of it.
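In code, this step reduces to subtracting the mean trace; a sketch:

import numpy as np

def remove_background(traces):
    # Subtract the average trace (here computed over the whole profile;
    # a designated sub-range could be used instead) from every trace.
    return traces - traces.mean(axis=1, keepdims=True)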
Reverse energy attenuation gain:
This filtering compensates for amplitude loss due to geometric spreading and attenuation with a time-varying gain: each trace is multiplied by a gain function comprising linear and exponential terms.
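A sketch of such a gain, with illustrative (assumed) linear and exponential coefficients:

import numpy as np

def energy_decay_gain(traces, a=0.01, alpha=0.005, dt=1.0):
    # g(t) = 1 + a*t + (exp(alpha*t) - 1): linear plus exponential terms,
    # applied sample-wise down each trace; a, alpha and dt are placeholders.
    t = np.arange(traces.shape[0]) * dt
    gain = 1.0 + a * t + (np.exp(alpha * t) - 1.0)
    return traces * gain[:, None]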
DC drift removal:
The amplitude of acquired traces often contains a constant offset, also referred to as the DC level or DC drift. This filtering removes the DC drift from the data; the offset is calculated and removed separately for each trace.
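Per-trace DC removal is a one-liner; a sketch, assuming the DC level is estimated as the trace mean:

import numpy as np

def remove_dc_drift(traces):
    # Estimate the constant offset of each trace as its mean over all
    # samples and subtract it, trace by trace.
    return traces - traces.mean(axis=0, keepdims=True)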
K-L transformation:
This filtering is the optimal orthogonal transform in the minimum mean-square-error sense: the processed feature images are orthogonal and mutually uncorrelated, so the information content of different bands does not overlap; the original features are converted into a smaller number of new features, the correlation between pattern features is eliminated, and their differences are highlighted.
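A K-L (eigenimage) filtering sketch via SVD follows; the number of retained components is an assumption.

import numpy as np

def kl_filter(traces, keep=8):
    # Decompose the profile into orthogonal, mutually uncorrelated
    # eigenimages and rebuild it from the `keep` strongest components.
    u, s, vt = np.linalg.svd(traces, full_matrices=False)
    s_kept = np.zeros_like(s)
    s_kept[:keep] = s[:keep]
    return (u * s_kept) @ vt  # equals u @ diag(s_kept) @ vt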
The detailed execution parameters of each preprocessing step are shown in FIG. 2.
In step S2, the preprocessed data is scanned in a B-scan and C-scan manner, and the scanned images are combined to generate sample pictures, and the sample picture sets are classified and labeled;
fig. 3 is a schematic view of the scanning direction of the three-dimensional ground penetrating radar, and fig. 4 and 5 are a vertical scanning view and a horizontal scanning view, respectively.
In this embodiment, step S2 preferably further includes stage 1 to stage 6:
In stage 1, each measuring line is segmented with a step of 100 traces, each segment being 200 traces long; that is, stepping 5 meters along the line direction gives segments 10 meters long with a 5-meter overlap between adjacent segments;
in stage 2, the data of every other (even-numbered) channel is extracted from the three-dimensional ground penetrating radar data segment by segment to obtain 8 vertical-section scans (B-scans);
in stage 3, data at 10-centimeter intervals in the depth direction is extracted from the three-dimensional ground penetrating radar data segment by segment to obtain 24 horizontal-section scans (C-scans);
In stage 4, the 8 B-scans and 24 C-scans are combined into one 227x227-pixel JPG picture to form a sample picture, as detailed in FIGS. 6-1 (intact), 6-2 (sand well), 6-3 (pipeline) and 6-4 (void); the signals at the boxed positions in the sand well, pipeline and void samples represent the typical anomaly signatures of the corresponding target classes, while intact samples lack such signatures and serve as negative samples. A tiling sketch is given below.
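The 4-column layout below (two rows of 56x56 B-scans above six rows of 56x18 C-scans, 1-pixel spacing) is inferred from the stated tile sizes, since 4*56 + 3 = 227 wide and 2*56 + 6*18 + 7 = 227 high; the patent may tile the images differently.

import numpy as np
from PIL import Image

def make_sample_picture(b_scans, c_scans):
    # b_scans: 8 arrays of 56x56; c_scans: 24 arrays of 18x56 (height x width).
    canvas = np.zeros((227, 227), dtype=np.uint8)
    tiles = list(b_scans) + list(c_scans)      # 8 + 24 = 32 tiles
    k, y = 0, 0
    for tile_h, n_rows in ((56, 2), (18, 6)):  # B-scan rows, then C-scan rows
        for _ in range(n_rows):
            x = 0
            for _ in range(4):                 # 4 tiles per row
                canvas[y:y + tile_h, x:x + 56] = tiles[k]
                k += 1
                x += 56 + 1                    # 1-pixel spacing
            y += tile_h + 1
    return Image.fromarray(canvas)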
In stage 5, data for different channel and depth combinations is extracted according to the methods of stages 2, 3 and 4, multiplying the number of sample pictures by 10;
8 of the 16 channels of the three-dimensional ground penetrating radar are selected, and 10 different combinations each form a set of 8 B-scans:
{0,2,4,6,8,10,12,14},{1,3,5,7,9,11,13,15},{0,1,2,3,4,5,6,7},{1,2,3,4,5,6,7,8},{2,3,4,5,6,7,8,9},{3,4,5,6,7,8,9,10},{4,5,6,7,8,9,10,11},{5,6,7,8,9,10,11,12},{6,7,8,9,10,11,12,13},{7,8,9,10,11,12,13,14};
the effective time depth of the three-dimensional ground penetrating radar is 256 centimeters, and 10 combinations of different depth offsets at 10-centimeter intervals each form a set of 24 C-scans:
{10,20,30,40,50,60,70,80,90,100,...,240},{11,21,31,41,51,61,71,81,91,101,...,241},{12,22,32,42,52,62,72,82,92,102,...,242},...,{19,29,39,49,59,69,79,89,99,109,...,249}
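For reference, the two combination families above can be generated programmatically; a sketch whose indices mirror the lists above:

# 10 channel combinations (8 of 16 channels) and 10 depth-offset
# combinations (24 slices each), matching the lists above.
channel_combos = [list(range(0, 16, 2)), list(range(1, 16, 2))] + \
                 [list(range(s, s + 8)) for s in range(8)]
depth_combos = [list(range(10 + off, 250, 10)) for off in range(10)]
assert len(channel_combos) == 10 and len(depth_combos) == 10
assert all(len(c) == 8 for c in channel_combos)
assert all(len(d) == 24 for d in depth_combos)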
In stage 6, the sample picture set is manually classified and labeled with four target classes: intact, sand well, pipeline and void.
length: 1000 cm, interval: 5 cm/trace
depth: 256 cm, interval: 1 cm/slice
width: 128 cm, interval: 8 cm/channel
B-scan: 8 slices, interval: 16 cm, 56*56 pixels
C-scan: 24 slices, interval: 10 cm, 56*18 pixels
2D grid: 8 B-scan images & 24 C-scan images, spacing 1 pixel between images
2D grid: 227x227x8 bpp
It is worth noting that all classified and labeled sample pictures are converted from data verified by field drilling, making them more authentic and robust than pictures obtained through forward-modeling software simulation.
In step S3, the sample picture set is divided into a training set, a verification set, and a test set according to a specific ratio, and migration training is performed using the inclusion v3 deep convolutional neural network;
in this embodiment, step S3 preferably further includes stage 1 to stage 3:
In stage 1, the classified and labeled sample set is divided into a training set, a validation set and a test set in the ratio 7:2:1. The data set consists of four folders: intact, manhole (sand well), pipe and risk (void), each containing the individually labeled and classified JPEG pictures. These pictures are split into the training, validation and test sets by a transformation routine written in Python, see FIG. 8 and FIG. 10; a sketch of such a routine follows.
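A minimal reconstruction under these assumptions (the folder names follow the data set layout above; the routine itself is an assumption, not the original script):

import os
import random
import shutil

def split_dataset(src, dst, ratios=(0.7, 0.2, 0.1), seed=0):
    # Shuffle each class folder reproducibly and copy the pictures into
    # train/val/test sub-trees at the 7:2:1 ratio.
    random.seed(seed)
    for cls in ("intact", "manhole", "pipe", "risk"):
        files = sorted(os.listdir(os.path.join(src, cls)))
        random.shuffle(files)
        n_train = int(ratios[0] * len(files))
        n_val = int(ratios[1] * len(files))
        splits = {"train": files[:n_train],
                  "val": files[n_train:n_train + n_val],
                  "test": files[n_train + n_val:]}
        for split, names in splits.items():
            out_dir = os.path.join(dst, split, cls)
            os.makedirs(out_dir, exist_ok=True)
            for name in names:
                shutil.copy(os.path.join(src, cls, name), out_dir)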
In stage 2, referring to fig. 7, the deep convolutional neural network is used for the transfer learning, and the steps of the transfer learning are as follows:
First, the skeleton model, i.e. the InceptionV3 model, needs to be loaded. Two of its parameters are important. One is weights: if set to 'imagenet', Keras automatically downloads the parameters pre-trained on ImageNet; if None, the parameters are randomly initialized (only the ImageNet weights are used here). The other is include_top: if True, the output is a fully-connected layer of 1000 nodes; if False, the top layer is removed and an 8 x 8 x 2048 tensor is output. keras.applications provides many other preset models, such as VGG, ResNet and MobileNet for mobile terminals.
The top layer is then removed and custom new layers are appended: a GlobalAveragePooling2D function first converts the 8 x 8 x 2048 output into a 1 x 2048 tensor, followed by a fully-connected layer of 1024 nodes and finally an output layer of 4 nodes with softmax as the activation function. A sketch of this head is given below.
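A minimal Keras sketch of the skeleton plus the custom head described above; the 299x299 input (so that the backbone emits the 8 x 8 x 2048 tensor named in the text, with the 227x227 samples resized on loading), the relu activation on the 1024-node layer and the adam optimizer are assumptions.

from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model

# Skeleton model: ImageNet weights, top removed -> 8x8x2048 feature map.
base_model = InceptionV3(weights="imagenet", include_top=False,
                         input_shape=(299, 299, 3))
x = GlobalAveragePooling2D()(base_model.output)  # 8x8x2048 -> 2048-vector
x = Dense(1024, activation="relu")(x)            # fully-connected layer
outputs = Dense(4, activation="softmax")(x)      # intact/manhole/pipe/risk
model = Model(inputs=base_model.input, outputs=outputs)

# Transfer learning first trains only the new head.
for layer in base_model.layers:
    layer.trainable = False
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])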
The training parameters are as follows: batch_size (64) is the number of images input per iteration during training; epoch (300) is the number of complete passes over all training images, with a test on the verification set performed after each epoch. The larger the batch_size, the more images are input per iteration and the better the training fits the data distribution of the whole training set.
During training, judging whether the network is trained in place and how to adjust training parameters by using a result graph which shows training loss and accuracy, sequentially testing the change condition of loss after the change of the learning rate from 0.1 to 0.5 times of the previous change each time from big to small, and if the loss is quickly increased to NAN, then learning too big; if loss is reduced seriously and then is kept unchanged, the learning rate is still high; then, the learning rate is adjusted to be small, if the loss is reduced as a straight line, the learning rate is over small; and (3) finding out a proper learning rate by observing the descending amplitude of the training loss, and completely training the network.
The learning rate variation formula is:
lr = base_lr * gamma^floor(iter / stepsize)

wherein base_lr is the basic learning rate during training, iter is the current iteration number, stepsize is the learning-rate change interval, gamma is the decay factor, and floor denotes rounding down.
The initial learning rate is 10^-4: the first 3000 iterations use a learning rate of 10^-4, training then continues with 10^-5 until 4000 iterations, and finally with 10^-6 until the maximum of 5000 iterations, completing the training of the deep convolutional neural network.
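For clarity, the step policy above and the concrete three-stage schedule can be written as small functions; the multistep variant (uneven boundaries at 3000 and 4000 iterations) is an interpretation of the quoted schedule, not a confirmed detail.

import math

def step_lr(iteration, base_lr=1e-4, gamma=0.1, stepsize=3000):
    # lr = base_lr * gamma ** floor(iteration / stepsize), the step
    # policy given by the formula above.
    return base_lr * gamma ** math.floor(iteration / stepsize)

def multistep_lr(iteration, base_lr=1e-4, gamma=0.1, steps=(3000, 4000)):
    # The quoted schedule (1e-4 up to 3000 iterations, 1e-5 up to 4000,
    # 1e-6 up to 5000) drops at uneven boundaries, i.e. a multistep
    # variant of the same formula.
    return base_lr * gamma ** sum(iteration >= s for s in steps)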
After the transfer learning, fine-tuning starts at the 17th layer of the neural network; the correct layer index needs to be confirmed by printing the graph and debugging. Both fine-tuning and transfer learning involve the two objects model and base_model: the first layer of model and the first layer of base_model point to the same memory address, and keeping base_model as a parameter makes it convenient to configure the skeleton model.
In stage 3, the trained optimal solution network model is saved to the hdf5 format file.
The experimental platform consists of Ubuntu 16.04, CUDA 10.1 and the Keras framework: CUDA 10.1 provides GPU acceleration, and Keras is used because transfer learning based on InceptionV3 can be deployed and run quickly with it.
In step S4, data collected by the three-dimensional ground penetrating radar is read and preprocessed, combined scan images are generated and input into the deep convolution model, confidences are output, and target objects are classified and identified according to the confidences.
In this embodiment, step S4 preferably further includes stage 1 to stage 4:
in the stage 1, a three-dimensional ground penetrating radar is used for area coverage detection;
in stage 2, the raw three-dimensional ground penetrating radar data is read and preprocessed, as in step S1;
in stage 3, the preprocessed data is scanned in B-scan and C-scan modes and the scanned images are combined to generate an actual picture, as in step S2;
in stage 4, the actual picture is input into the optimal-solution network model (hdf5-format file), the confidence is output, and the target object is classified and identified according to the confidence; a minimal inference sketch follows.
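A sketch under the assumptions above (the file names segment_0001.jpg and gpr_model.hdf5 are hypothetical; the simple 0-1 scaling is an assumption and must match the preprocessing used in training):

import numpy as np
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing import image

CLASSES = ("intact", "manhole", "pipe", "risk")

model = load_model("gpr_model.hdf5")        # optimal-solution model
img = image.load_img("segment_0001.jpg",    # one combined scan picture
                     target_size=(299, 299))
x = image.img_to_array(img)[None] / 255.0   # batch of 1, scaled to [0, 1]
conf = model.predict(x)[0]                  # softmax confidences, 4 values
print(CLASSES[int(np.argmax(conf))], float(conf.max()))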
Generation samples from D: ' Ande space
Using TensorFlow backend.
Found 8016 images belonging to 4 classes.
8016/8016 [==============================] - 1427s 178ms/step
[2020-June-03 Wednesday, 10:30:12] Marking samples...
Total: 37.3 survey-line kilometers, 1.4 minutes/survey-line kilometer
Waiting for new rar package comes...
Referring to FIG. 9, the classification and identification effects for underground targets are shown, i.e. the accuracy of the four categories intact, manhole, pipe and risk on the test set.
The principles and embodiments of the present invention are explained herein using specific examples, which are presented only to assist in understanding the method and its core concepts. The foregoing is only a preferred embodiment of the present invention. It should be noted that, since textual descriptions are necessarily limited while specific structures are objectively unlimited, those skilled in the art may make a number of modifications, decorations or changes without departing from the principle of the invention, and may combine the above technical features in suitable ways; such modifications, variations, combinations or adaptations fall within the scope of the invention as defined by the claims.

Claims (1)

1. A data interpretation and identification method for a three-dimensional ground penetrating radar, comprising:
S1, reading the data of each channel of the three-dimensional ground penetrating radar and preprocessing it;
S2, scanning the preprocessed data in B-scan and C-scan modes, combining the scanned images to generate sample pictures, and classifying and labeling the sample picture set;
S3, dividing the sample picture set into a training set, a verification set and a test set in a specific proportion, and performing transfer training with an InceptionV3 deep convolutional neural network;
S4, reading data collected by the three-dimensional ground penetrating radar, preprocessing it to generate combined scan images, inputting them into the deep convolution model, outputting confidences, and classifying and identifying target objects according to the confidences;
the step S1 further includes:
stage 1: reading the radar raw data of all 16 channels in all measuring lines and converting it into the image domain, wherein the conversion formula is:
y_i = b * (f(x_i) - min f(x)) / (max f(x) - min f(x))

wherein x_i is the original i-th echo value, f(x_i) is the filter function, and y_i is the corresponding value in the image domain with grey-value range [0, b];
stage 2: performing gain and filtering preprocessing on the raw data, the specific steps comprising:
static correction removal, global background elimination, reverse energy attenuation gain, DC drift removal and K-L transformation;
the step S2 further includes:
stage 1: segmenting each measuring line with a step of 100 traces, each segment being 200 traces long, namely stepping 5 meters along the line direction so that each segment is 10 meters long with a 5-meter overlap between adjacent segments;
stage 2: extracting the data of every other (even-numbered) channel from the three-dimensional ground penetrating radar data segment by segment to obtain 8 vertical-section scans (B-scans);
stage 3: extracting data at 10-centimeter intervals in the depth direction from the three-dimensional ground penetrating radar data segment by segment to obtain 24 horizontal-section scans (C-scans);
stage 4: combining the 8 B-scans and 24 C-scans into one 227x227-pixel JPG picture to form a sample picture;
stage 5: extracting data for different channel and depth combinations according to the methods of stages 2, 3 and 4, multiplying the number of sample pictures by 9;
stage 6: manually classifying the sample picture set and labeling it with four target classes: intact, sand well, pipeline and void;
the step S3 further includes:
stage 1: dividing the classified and labeled sample set into a training set, a verification set and a test set in the ratio 7:2:1;
stage 2: performing transfer learning with a deep convolutional neural network, the transfer learning comprising the following steps:
(1) loading the InceptionV3 skeleton model, removing the top layer and outputting an 8 x 8 x 2048 tensor;
(2) appending the user-defined new layers: a GlobalAveragePooling2D function first converts the 8 x 8 x 2048 output into a 1 x 2048 tensor, followed by a fully-connected layer of 1024 nodes and finally an output layer of 4 nodes with softmax as the activation function;
further, the training parameters are as follows: batch_size is set to 64, the number of images input per iteration during training; epoch is set to 300, one epoch being one complete pass over all training images, with a test on the verification set performed after each epoch; the larger the batch_size, the more images are input per iteration and the better the training fits the data distribution of the whole training set;
further, during training, a result graph showing training loss and accuracy is used to judge whether the network is sufficiently trained and how to adjust the training parameters; a suitable learning rate is found by observing how the training loss decreases, after which the network is fully trained;
the learning rate variation formula is:
lr = base_lr * gamma^floor(iter / stepsize)

wherein base_lr is the basic learning rate during training, iter is the current iteration number, stepsize is the learning-rate change interval, gamma is the decay factor, and floor denotes rounding down;
further, the initial learning rate is 10^-4: the first 3000 iterations use a learning rate of 10^-4, training then continues with 10^-5 until 4000 iterations, and finally with 10^-6 until the maximum of 5000 iterations, completing the training of the deep convolutional neural network;
after the transfer learning, fine-tuning is performed starting from the 17th layer of the neural network, and the correct layer is confirmed by printing the graph and debugging;
stage 3: saving the optimal-solution network model to an hdf5-format file;
the step S4 further includes:
stage 1: performing area-coverage detection with the three-dimensional ground penetrating radar;
stage 2: reading and preprocessing the raw three-dimensional ground penetrating radar data;
stage 3: scanning the preprocessed data in B-scan and C-scan modes and combining the scanned images to generate an actual picture;
stage 4: inputting the actual picture into the optimal-solution network model hdf5-format file, outputting the confidence, and classifying and identifying the target object according to the confidence.
CN202011049464.3A 2020-09-29 2020-09-29 Data interpretation and identification method for three-dimensional ground penetrating radar Active CN112232392B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011049464.3A CN112232392B (en) 2020-09-29 2020-09-29 Data interpretation and identification method for three-dimensional ground penetrating radar

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011049464.3A CN112232392B (en) 2020-09-29 2020-09-29 Data interpretation and identification method for three-dimensional ground penetrating radar

Publications (2)

Publication Number Publication Date
CN112232392A CN112232392A (en) 2021-01-15
CN112232392B true CN112232392B (en) 2022-03-22

Family

ID=74121211

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011049464.3A Active CN112232392B (en) 2020-09-29 2020-09-29 Data interpretation and identification method for three-dimensional ground penetrating radar

Country Status (1)

Country Link
CN (1) CN112232392B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113256562B (en) * 2021-04-22 2021-12-14 深圳安德空间技术有限公司 Road underground hidden danger detection method and system based on radar images and artificial intelligence
CN113126083A (en) * 2021-04-29 2021-07-16 深圳安德空间技术有限公司 Ground penetrating radar auxiliary positioning method and positioning system based on field video
CN113792783A (en) * 2021-09-13 2021-12-14 陕西师范大学 Automatic identification method and system for dough mixing stage based on deep learning
CN113901878B (en) * 2021-09-13 2024-04-05 哈尔滨工业大学 Three-dimensional ground penetrating radar image underground pipeline identification method based on CNN+RNN algorithm
CN113759337B (en) * 2021-11-09 2022-02-08 深圳安德空间技术有限公司 Three-dimensional ground penetrating radar real-time interpretation method and system for underground space data
CN114137517B (en) * 2022-02-07 2022-05-10 北京中科蓝图科技有限公司 Two-three-dimensional integrated road detection method and device
CN114821296A (en) * 2022-03-14 2022-07-29 西安电子科技大学 Underground disease ground penetrating radar image identification method and system, storage medium and terminal
CN114578348B (en) * 2022-05-05 2022-07-29 深圳安德空间技术有限公司 Autonomous intelligent scanning and navigation method for ground penetrating radar based on deep learning
CN115619687B (en) * 2022-12-20 2023-05-09 安徽数智建造研究院有限公司 Tunnel lining void radar signal identification method, equipment and storage medium
CN117079268B (en) * 2023-10-17 2023-12-26 深圳市城市交通规划设计研究中心股份有限公司 Construction method and application method of three-dimensional data set of internal diseases of road
CN117409329B (en) * 2023-12-15 2024-04-05 深圳安德空间技术有限公司 Method and system for reducing false alarm rate of underground cavity detection by three-dimensional ground penetrating radar

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1595195A (en) * 2004-06-17 2005-03-16 上海交通大学 Super broad band land radar automatic target identification method based on information fusion
CN105005042A (en) * 2015-07-27 2015-10-28 河南工业大学 Ground penetrating radar underground target locating method
CN107527067A (en) * 2017-08-01 2017-12-29 中国铁道科学研究院铁道建筑研究所 A kind of Railway Roadbed intelligent identification Method based on GPR
CN107688180A (en) * 2017-07-28 2018-02-13 河南工程学院 The shallow surface layer spatial distribution detection method of active fault based on GPR
CN108169745A (en) * 2017-12-18 2018-06-15 电子科技大学 A kind of borehole radar target identification method based on convolutional neural networks
CN110866512A (en) * 2019-11-21 2020-03-06 南京大学 Monitoring camera shielding detection method based on video classification
CN111323764A (en) * 2020-01-21 2020-06-23 山东大学 Underground engineering target body intelligent identification method and system based on ground penetrating radar

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6700526B2 (en) * 2000-09-08 2004-03-02 Witten Technologies Inc. Method and apparatus for identifying buried objects using ground penetrating radar
US7675454B2 (en) * 2007-09-07 2010-03-09 Niitek, Inc. System, method, and computer program product providing three-dimensional visualization of ground penetrating radar data
US8854248B2 (en) * 2010-08-26 2014-10-07 Lawrence Livermore National Security, Llc Real-time system for imaging and object detection with a multistatic GPR array

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1595195A (en) * 2004-06-17 2005-03-16 上海交通大学 Super broad band land radar automatic target identification method based on information fusion
CN105005042A (en) * 2015-07-27 2015-10-28 河南工业大学 Ground penetrating radar underground target locating method
CN107688180A (en) * 2017-07-28 2018-02-13 河南工程学院 The shallow surface layer spatial distribution detection method of active fault based on GPR
CN107527067A (en) * 2017-08-01 2017-12-29 中国铁道科学研究院铁道建筑研究所 A kind of Railway Roadbed intelligent identification Method based on GPR
CN108169745A (en) * 2017-12-18 2018-06-15 电子科技大学 A kind of borehole radar target identification method based on convolutional neural networks
CN110866512A (en) * 2019-11-21 2020-03-06 南京大学 Monitoring camera shielding detection method based on video classification
CN111323764A (en) * 2020-01-21 2020-06-23 山东大学 Underground engineering target body intelligent identification method and system based on ground penetrating radar

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"Classification of High-Spatial-Resolution Remote Sensing Scenes Method Using Transfer Learning and Deep Convolutional Neural Network";Wenmei Li et al;《IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing》;20200513;第13卷;第1-10页 *
"Inception-v3 for flower classification";Xiaoling Xia et al;《 2017 2nd International Conference on Image, Vision and Computing (ICIVC)》;20170720;第1-5页 *
"三维车载阵列雷达在机场道面检测中的应用研究";邱成;《能源技术与管理》;20190831;第44卷(第4期);第1-3页 *
"结合迁移学习和Inception-v3模型的路面干湿状态识别方法";杨炜等;《中国科技论文》;20190831;第14卷(第8期);第1-5页 *

Also Published As

Publication number Publication date
CN112232392A (en) 2021-01-15

Similar Documents

Publication Publication Date Title
CN112232392B (en) Data interpretation and identification method for three-dimensional ground penetrating radar
CN109932708B (en) Method for classifying targets on water surface and underwater based on interference fringes and deep learning
CN113009447B (en) Road underground cavity detection and early warning method based on deep learning and ground penetrating radar
CN114595732B (en) Radar radiation source sorting method based on depth clustering
CN111445515B (en) Underground cylinder target radius estimation method and system based on feature fusion network
CN114266892A (en) Pavement disease identification method and system for multi-source data deep learning
CN112255685B (en) OBS and sea surface streamer seismic data combined imaging method and processing terminal
CN115308803A (en) Coal seam thickness prediction method, device, equipment and medium
KR102309343B1 (en) Frequency-wavenumber analysis method and apparatus through deep learning-based super resolution ground penetrating radar image generation
CN111598340B (en) Thin sand body plane spread prediction method based on fractional order Hilbert transform
CN115576014B (en) Crack type reservoir intelligent identification method based on acoustic wave far detection imaging
CN117031539A (en) Low-frequency reconstruction method and system for self-supervision deep learning seismic data
CN116224324A (en) Frequency-wave number analysis method of super-resolution 3D-GPR image based on deep learning
CN115170428A (en) Noise reduction method for acoustic wave remote detection imaging graph
CN115223044A (en) End-to-end three-dimensional ground penetrating radar target identification method and system based on deep learning
CN113570041B (en) Neural network and method for pressing seismic data noise of marine optical fiber towing cable by using same
CN113607068B (en) Method for establishing and extracting recognition model of photoacoustic measurement signal characteristics
CN112800664A (en) Method for estimating tree root diameter based on ground penetrating radar A-scan data
CN113392705A (en) Method for identifying pipeline leakage target in desert area based on convolutional neural network
CN117214398B (en) Deep underground water body pollutant detection method and system
CN112578437A (en) Automatic gain method and system for seismic record
Qian et al. Deep Learning-Augmented Stand-off Radar Scheme for Rapidly Detecting Tree Defects
CN115630492B (en) Tunnel lining defect change characteristic intelligent inversion method, system and storage medium
CN117538863A (en) Method, device, equipment and medium for improving tunnel lining defect geological radar detection precision
CN117991377B (en) First arrival wave travel time tomography method and system based on multi-source information fusion

Legal Events

Date Code Title Description
PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant