CN114966560B - Ground penetrating radar backward projection imaging method and system

Publication number: CN114966560B
Authority: CN (China)
Prior art keywords: image, target, imaging, pixel, point
Legal status: Active
Application number: CN202210902645.9A
Other languages: Chinese (zh)
Other versions: CN114966560A
Inventor
雷文太
隋浩
毛凌青
辛常乐
王睿卿
罗诗光
张硕
王义为
宋千
Current Assignee: Central South University
Original Assignee: Central South University
Application filed by Central South University
Priority to CN202210902645.9A
Publication of CN114966560A (application)
Application granted
Publication of CN114966560B (grant)

Classifications

    • G01S7/2813 Means providing a modification of the radiation pattern for cancelling noise, clutter or interfering signals, e.g. side lobe suppression, side lobe blanking, null-steering arrays
    • G01S13/885 Radar or analogous systems specially adapted for specific applications for ground probing
    • G01S13/89 Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • G01S7/285 Receivers (details of pulse systems)
    • G01S7/2927 Extracting wanted echo-signals based on data belonging to a number of consecutive radar periods by deriving and controlling a threshold value
    • G01S7/352 Receivers (details of non-pulse systems)
    • G01S7/354 Extracting wanted echo-signals (details of non-pulse systems)
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Abstract

The invention discloses a ground penetrating radar back projection imaging method and system. The method comprises the following steps: B-scan data are acquired and preprocessed, and a label data set is constructed from the preprocessed B-scan data; a YOLOX network is constructed and trained with the label data set; a target potential region of a B-scan image to be imaged is obtained through the trained YOLOX network, and back projection imaging is performed within the target potential region to obtain an initial imaging image; and the initial imaging image is subjected to double-threshold processing and integral focusing processing to obtain a target imaging image. Because the YOLOX network frames the target potential region in the B-scan image and imaging is carried out only within that region, global back projection calculation is avoided and the amount of computation is greatly reduced; meanwhile, the image is enhanced through the double-threshold processing and the integral focusing processing, so that the imaging quality is improved.

Description

Ground penetrating radar backward projection imaging method and system
Technical Field
The invention relates to the technical field of ground penetrating radar imaging, in particular to a ground penetrating radar back projection imaging method and system.
Background
Ground Penetrating Radar (GPR) is an effective non-destructive technique for subsurface detection. A ground-based transmitting antenna radiates electromagnetic waves into the ground; the waves are reflected and scattered wherever the electromagnetic properties of the medium are discontinuous, and the reflected signals are received by a receiving antenna on the ground, thereby realizing detection of underground targets. Owing to its high resolution, high efficiency, low cost and non-destructiveness, GPR is widely applied in many fields such as archaeology, civil engineering, physics and geoscience. The reflected signals recorded by the radar can be converted into depth profiles according to the different propagation velocities of electromagnetic waves in different underground media. The reflection of an underground target usually appears as an inverted hyperbola, but the hyperbola alone cannot fully reveal the specific condition of the target. A GPR imaging technique is therefore needed: by analysing the different characteristics of the multi-channel reflected signals, the intensity and distribution information of underground targets is recovered so that the targets can be focused and located. Imaging of underground targets is thus a key part of ground penetrating radar processing.
The Back Projection (BP) algorithm is a practical and representative algorithm in the field of ground penetrating radar imaging. In the traditional back projection algorithm, after the imaging area is determined, the imaging area is divided by design parameters into M × N grids of equal spacing. For each grid in the imaging area, the two-way time delay from the grid to every measuring point is calculated, the received signal intensity of the corresponding measuring point at that delay is extracted, and the scattering intensity at the grid is finally obtained by accumulation. A conventional back projection algorithm may comprise the following steps:
(1) Calculate the size of the imaging area and the position (x_k, z_k) of each measuring point, and divide the imaging area into M × N grids of equal spacing. Move the ground penetrating radar along the survey line on the surface of the imaging area, obtain the A-scan echo signal at every measuring point, and combine the A-scan echo signals of all measuring points into a B-scan echo signal.
(2) For each grid P(m, n) in the imaging area, calculate the two-way propagation delay from the grid to a given measuring point, convert this delay into a time index according to the time interval used in the grid division, and substitute the time index into the echo of that measuring point to obtain the intensity contribution of the grid at that measuring point, which is added into a list. For each grid, the intensity values from the grid to all measuring points are obtained in the same way, so that the echo scattering contributions of the whole survey line corresponding to the grid are obtained.
(3) For each grid, sum the intensity data from the grid to all measuring points in the list, i.e. sum the different intensities obtained under the different time delays, to obtain the imaging value I(m, n) of the grid. The imaging value is calculated as:

    I(m, n) = \sum_{k=1}^{N_m} A_k(\tau_{m,n,k})

where A_k(·) is the echo of the k-th measuring point, \tau_{m,n,k} is the two-way time delay from grid P(m, n) to the k-th measuring point, and N_m is the number of measuring points.
(4) Repeat steps (2) and (3) for every grid point of the whole imaging area to obtain the back projection imaging result.
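For illustration, a minimal NumPy sketch of this delay-and-sum loop is given below. The array names, the monostatic (co-located transmit/receive antenna on the surface) geometry and the nearest-sample rounding are assumptions made for the example, not details fixed by the method.

```python
import numpy as np

def bp_image(bscan, x_meas, dt, v, x_grid, z_grid):
    """Delay-and-sum back projection over the whole imaging grid.

    bscan  : (Nt, Nm) array, one A-scan per measuring point (column)
    x_meas : (Nm,) surface positions of the measuring points
    dt     : time-sample interval of the A-scans
    v      : propagation velocity of the electromagnetic wave in the medium
    x_grid, z_grid : 1-D coordinates of the imaging grid cells
    """
    Nt, Nm = bscan.shape
    image = np.zeros((len(z_grid), len(x_grid)))
    for m, z in enumerate(z_grid):
        for n, x in enumerate(x_grid):
            for k in range(Nm):
                # two-way delay from grid (m, n) to measuring point k
                tau = 2.0 * np.hypot(x - x_meas[k], z) / v
                idx = int(round(tau / dt))          # nearest time-sample index
                if idx < Nt:
                    image[m, n] += bscan[idx, k]    # accumulate scattering intensity
    return image
```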
Although conventional back projection imaging methods can image subsurface targets to some extent, they are accompanied by strong spurious responses with considerable energy, referred to as artifacts, whose presence makes it difficult to distinguish targets from non-targets. To suppress these artifacts and enhance the imaging quality, improved back projection imaging methods have been proposed. For example, non-patent document 1 analyses the statistical relationship between different scattering data and designs a weighted BP algorithm, in which a weight is set for each imaging unit using the mean and the variance, so that the artifacts produced by the BP algorithm are reduced and the quality of the imaging result is increased by weighting; non-patent document 2 relates the intensity data of different measuring points and designs a multiplicative cross-correlation BP algorithm, in which the data obtained at each grid point are multiplied pairwise and the products are finally summed; because this algorithm considers the cross-correlation of the received data, it can also remove a large number of artifacts; non-patent document 3 applies a Coherence Factor (CF) to the imaging image as a weighting factor and designs a BP algorithm in which the coherence factor and back projection are combined, thereby improving the quality of the imaging result.
These improved back projection imaging methods solve, to a certain extent, the problem of numerous artifacts in traditional imaging. However, like the traditional method, they still compute every point of the whole detection area; when the detection area is large, the high computational complexity makes the calculation very slow. In addition, underground targets are usually sparsely distributed, so spending computation and imaging time on regions that contain no target wastes time and is of little significance.
On this basis, how to save computation, shorten calculation time, avoid imaging noise points and improve imaging quality is a problem that urgently needs to be solved in the field of ground penetrating radar imaging.
Reference list
Non-patent literature
Non-patent document 1: "Improved back projection imaging for subsurface target detection", Wentai Lei et al., Turkish Journal of Electrical Engineering and Computer Sciences, 2013-11-07
Non-patent document 2: "A GPR Imaging Algorithm with Artifacts Suppression", Lin Zhou et al., Proceedings of the XIII International Conference on Ground Penetrating Radar, 2010-08-16
Non-patent document 3: "Coherence Factor Enhancement of Through-Wall Radar Images", Robert J. Burkholder et al., IEEE Antennas and Wireless Propagation Letters, 2010-01-01.
Disclosure of Invention
In view of this, the invention provides a ground penetrating radar back projection imaging method and system, aiming to solve the problems of long calculation time and serious side-lobe and artifact interference in the traditional back projection imaging method.
Based on the above purpose, the invention provides a ground penetrating radar back projection imaging method, which comprises the following steps:
B-scan data are obtained and preprocessed, and a label data set is constructed according to the preprocessed B-scan data, wherein the label data set comprises a B-scan image converted from the preprocessed B-scan data and a target rectangular frame label corresponding to the B-scan image;
constructing a YOLOX network, and training the YOLOX network through the tag data set;
acquiring a target potential area of a B-scan image to be imaged through the trained YOLOX network, and performing back projection imaging in the target potential area to obtain an initial imaging image;
carrying out double-threshold processing on the initial imaging image to obtain an artifact suppression image;
and carrying out integral focusing treatment on the artifact suppression image to obtain a target imaging image.
In addition, the invention also provides a ground penetrating radar back projection imaging system, which comprises:
the data acquisition and processing module is used for acquiring and preprocessing B-scan data and constructing a tag data set according to the preprocessed B-scan data, wherein the tag data set comprises a B-scan image converted from the preprocessed B-scan data and a target rectangular frame tag corresponding to the B-scan image;
a network training module, configured to construct a YOLOX network, and train the YOLOX network through the tag dataset;
the back projection module is used for acquiring a target potential area of a B-scan image to be imaged through the trained YOLOX network, and performing back projection imaging in the target potential area to obtain an initial imaging image;
the artifact suppression module is used for carrying out double-threshold processing on the initial imaging image to obtain an artifact suppression image;
and the target imaging module is used for carrying out integral focusing treatment on the artifact suppression image to obtain a target imaging image.
According to the ground penetrating radar back projection imaging method and system, the trained YOLOX network is used for determining the potential position of the target before imaging, the potential area of the target is defined, and only the potential area of the target is imaged during imaging, so that a large amount of calculated amount is reduced, interference of underground noise points or non-target points to the imaging process is avoided, and the imaging quality is effectively improved. And then, enhancing and position calibrating the imaged image through double threshold processing and integral focusing processing, so that most side lobes and artifacts are effectively removed, and the imaging quality is further improved. Experimental results show that compared with the existing back projection imaging method, the ground penetrating radar back projection imaging method provided by the invention has the advantage that the imaging efficiency is obviously improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the embodiments or the prior art descriptions will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a schematic flow chart illustrating a ground penetrating radar back projection imaging method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a YOLOX network according to an embodiment of the invention;
FIG. 3 is a schematic diagram of the structure of the YOLOX CSP layer, SPP layer, attention layer and base convolutional layer in one embodiment of the present invention;
FIG. 4 is a schematic diagram of a B-scan image to be imaged according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a target imaging image corresponding to the B-scan image to be imaged shown in FIG. 4;
fig. 6 is a schematic structural diagram of a ground penetrating radar back projection imaging system according to an embodiment of the present invention.
Detailed Description
In order to make the technical problems, technical solutions and advantageous effects to be solved by the present invention more clearly apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
As shown in fig. 1, a ground penetrating radar back projection imaging method provided in an embodiment of the present invention specifically includes the following steps:
and S10, acquiring and preprocessing B-scan data, and constructing a tag data set according to the preprocessed B-scan data, wherein the tag data set comprises a B-scan image converted from the preprocessed B-scan data and a target rectangular frame tag corresponding to the B-scan image.
In this embodiment, a Ground Penetrating Radar (GPR) is used to probe a subsurface region to obtain B-scan data. More specifically, a one-dimensional survey line with a plurality of measuring points is laid out on the ground surface in advance, and the GPR scans the underground area along the survey-line direction; during scanning, the relative positions of the GPR transmitting antenna and receiving antenna are fixed and the two antennas move forward synchronously. When the antennas reach the first measuring point on the survey line, the transmitting antenna radiates electromagnetic waves downwards; the waves propagate downwards and are scattered wherever the medium properties are inhomogeneous, and part of the scattered energy is received by the receiving antenna and recorded as A-scan data. When the antennas move to the next measuring point, the process is repeated to obtain another A-scan. The A-scan data obtained at the different measuring points are then combined into the B-scan data corresponding to the underground area; in the B-scan data, the horizontal coordinate is the spatial position of each measuring point and the vertical coordinate is the time-domain echo sampling point index.
And further, preprocessing the acquired B-scan data such as direct wave removing and denoising, converting the preprocessed B-scan data into a B-scan image, and constructing a label data set for network training.
In a preferred embodiment, step S10 specifically includes the following steps:
and step S101, detecting the underground area through GPR to obtain N pieces of B-scan data.
And S102, preprocessing the N B-scan data, and converting the preprocessed B-scan data into a B-scan image.
And step S103, marking the target existing area in the N B-scan images to obtain a corresponding target rectangular frame label.
And step S104, dividing N B-scan images containing the target rectangular frame label into a first data set and a test set according to a preset distribution proportion.
And S105, dividing the first data set into a training set and a verification set according to a preset distribution proportion.
And step S106, forming a label data set according to the training set, the verification set and the test set.
Wherein, the preprocessing comprises direct wave removal and denoising; the preset distribution proportion is set according to requirements and may, for example, be 9:1.
In this embodiment, firstly, the number, distribution and kind of underground targets are changed through multiple simulation experiments and actual measurement experiments, and B-scan data under multiple detection scenes is obtained.
The B-scan data obtained by simulation and the B-scan data obtained by actual measurement are then preprocessed. Specifically, for the B-scan data obtained by simulation, target-free background data and target-containing B-scan data can both be obtained for the same simulation scene, and the background data are directly subtracted from the target-containing B-scan data to remove the direct wave. For actually measured B-scan data, the mean value of each row is subtracted from every element of that row, which removes the direct wave and also provides a certain denoising effect.
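As a simple illustration of the preprocessing just described, the sketch below subtracts a target-free background B-scan (for simulated data) or the per-row mean (for measured data, rows being time samples and columns traces); the array layout is an assumption made for the example.

```python
import numpy as np

def remove_direct_wave_measured(bscan):
    """Measured data: subtract the mean of each row (time sample) across all
    traces, suppressing the direct wave and horizontally coherent clutter."""
    bscan = np.asarray(bscan, dtype=float)
    return bscan - bscan.mean(axis=1, keepdims=True)

def remove_direct_wave_simulated(bscan_with_target, background):
    """Simulated data: subtract the target-free background B-scan obtained
    for the same simulation scene."""
    return np.asarray(bscan_with_target, dtype=float) - np.asarray(background, dtype=float)
```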
And then, converting each preprocessed B-scan data into a corresponding B-scan image, and marking a rectangular frame label on a target existing area in each B-scan image to obtain a B-scan image containing a label, namely the B-scan image containing the target rectangular frame label.
Finally, all B-scan images containing target rectangular frame labels are randomly divided into a first data set and a test set according to a distribution proportion of 9:1; the first data set is then randomly divided into a training set and a verification set according to a proportion of 9:1, and the training set, the verification set and the test set together form the label data set.
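A minimal sketch of this two-stage random split is shown below; the 9:1 ratios and the function name are illustrative assumptions.

```python
import random

def split_dataset(samples, ratio_first=0.9, ratio_train=0.9, seed=0):
    """Randomly split labelled B-scan images ((image, box_label) pairs) into
    training, verification and test sets."""
    rng = random.Random(seed)
    samples = list(samples)
    rng.shuffle(samples)

    n_first = int(len(samples) * ratio_first)   # first data set vs. test set
    first, test = samples[:n_first], samples[n_first:]

    n_train = int(len(first) * ratio_train)     # training set vs. verification set
    train, val = first[:n_train], first[n_train:]
    return train, val, test
```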
It can be understood that, in the present embodiment, through the steps S101 to S106, the tag data set is obtained, and data support can be provided for YOLOX network training.
Step S20, constructing a YOLOX network, and training the YOLOX network through the label data set.
Referring to fig. 2 and 3, the YOLOX network in the present embodiment includes a Backbone network (Backbone), a Neck network (Neck), and a Head network (Head); the backbone network is used for extracting the features of the B-scan image (the features are hyperbolic features of the B-scan image), the neck network is used for combining and mixing the features, and the head network is used for predicting and classifying the features. The YOLOX network takes a B-scan image as an input, and takes four-corner coordinate positions of a target potential area and a rectangular frame for framing the target potential area in the B-scan image as an output.
Further, the backbone network comprises an attention module (Focus), three convolution residual modules (C1, C2 and C3) and a feature stacking module (C4); wherein the attention module is composed of a downsampling layer and a basic convolutional layer (BaseConv), the basic convolutional layer comprising a convolutional layer (Conv), a batch normalization layer (BN) and an activation function (SiLU); each convolution residual module consists of a basic convolutional layer and a CSP layer, wherein the CSP layer comprises a trunk branch, a residual side branch and a channel dimension splicing layer (Concat); the trunk branch comprises a basic convolutional layer, a residual stacking layer and an addition layer; the residual side branch comprises a basic convolutional layer; the feature stacking module is composed of a basic convolutional layer, an SPP layer and a CSP layer, wherein the SPP layer comprises two basic convolutional layers, an up-sampling stacking layer and a channel dimension splicing layer, and the up-sampling stacking layer comprises three pooling branches and one stacking branch.
The neck network comprises two up-sampling fusion modules and two feature fusion modules, wherein each up-sampling fusion module consists of a basic convolution layer, an up-sampling layer, a channel dimension splicing layer and a CSP layer; the feature fusion module is composed of a basic convolution layer, a channel dimension splicing layer and a CSP layer.
The head network comprises three feature judgment modules, wherein each feature judgment module consists of a basic convolution layer, a first attribute judgment branch, a second attribute judgment branch and a channel dimension splicing layer; the first attribute judgment branch comprises two basic convolution layers and one convolution layer, and the second attribute judgment branch comprises two basic convolution layers, a coordinate prediction branch and a binary classification branch.
Further, the specific process of the YOLOX network for target detection on the B-scan image comprises the following steps:
First, the input B-scan image is resized to 640 × 640 using bilinear interpolation, an empty dictionary is created to hold the output of each stage, and the initial number of channels is set to 64.
Then, feature extraction is carried out through a backbone network, and the implementation process is as follows:
The B-scan image is input into the backbone network and features are first extracted through the attention module. More specifically, every other pixel of the high-resolution B-scan image is first extracted through the down-sampling layer to obtain several low-resolution copies, which are superposed in the channel dimension, so that the planar information in height and width is converted into the channel dimension; the input size is 640 × 640 × 3 (height × width × number of channels) and the output size is 320 × 320 × 12. The features are then extracted through a basic convolutional layer, with an input size of 320 × 320 × 12 and an output size of 320 × 320 × 64.
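The slice-and-concatenate attention module and the Conv-BN-SiLU basic convolution described above are standard YOLOX building blocks; the following PyTorch sketch illustrates them and should be read as an example under these assumptions rather than the exact layers of this embodiment.

```python
import torch
import torch.nn as nn

class BaseConv(nn.Module):
    """Basic convolution layer: Conv -> BatchNorm -> SiLU."""
    def __init__(self, in_ch, out_ch, ksize=3, stride=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, ksize, stride, padding=ksize // 2, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class Focus(nn.Module):
    """Attention module: take every other pixel to form four half-resolution
    copies, stack them on the channel dimension (640x640x3 -> 320x320x12),
    then apply a BaseConv (-> 320x320x64)."""
    def __init__(self, in_ch=3, out_ch=64):
        super().__init__()
        self.conv = BaseConv(in_ch * 4, out_ch)

    def forward(self, x):
        patches = torch.cat(
            [x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]],
            dim=1,
        )
        return self.conv(patches)
```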
Then, the output of the attention module is input into convolution residual module I (C1), in which the input size of the basic convolution layer is 320 × 320 × 64 and the output size is 160 × 160 × 128; the CSP layer has an input size of 160 × 160 × 128 and an output size of 160 × 160 × 128.
The output of convolution residual module I is input to convolution residual module II (C2), in which the input size of the basic convolution layer is 160 × 160 × 128 and the output size is 80 × 80 × 256; the CSP layer has an input size of 80 × 80 × 256 and an output size of 80 × 80 × 256. The output of convolution residual module II is stored in the empty dictionary and recorded as the first feature.
The output of convolution residual module II is input to convolution residual module III (C3), in which the input size of the basic convolution layer is 80 × 80 × 256 and the output size is 40 × 40 × 512; the CSP layer has an input size of 40 × 40 × 512 and an output size of 40 × 40 × 512. The output of convolution residual module III is stored in the empty dictionary and recorded as the second feature.
The output of convolution residual module III is input to the feature stacking module (C4), in which the input size of the basic convolution layer is 40 × 40 × 512 and the output size is 20 × 20 × 1024; the input size of the SPP layer is 20 × 20 × 1024 and the output size is 20 × 20 × 1024; the CSP layer has an input size of 20 × 20 × 1024 and an output size of 20 × 20 × 1024. The output of the feature stacking module is saved into the empty dictionary and recorded as the third feature.
Next, feature mixing and feature combining are performed through a neck network, and the implementation process is as follows:
inputting a third feature (namely the output of the feature stacking module) in the empty dictionary into a first upsampling fusion module, wherein in the first upsampling fusion module, the basic convolutional layer acquires the third feature, the input size is 20 × 20 × 1024, the output size is 20 × 20 × 512, and the output is marked as a first element; the input size of the up-sampling layer is 20 × 20 × 512, and the output size is 40 × 40 × 512; the channel dimension splicing layer superposes the output of the upsampling layer and a second feature (namely the output of the convolution residual module III) in the empty dictionary on a channel, and the output size is 40 multiplied by 1024; the CSP layer has an input size of 40 × 40 × 1024 and an output size of 40 × 40 × 512.
Inputting the output of the first up-sampling fusion module into a second up-sampling fusion module, wherein in the second up-sampling fusion module, the input size of the basic convolution layer is 40 multiplied by 512, the output size is 40 multiplied by 256, and the output is marked as a second element; the input size of the up-sampling layer is 40 × 40 × 256, and the output size is 80 × 80 × 256; the channel dimension splicing layer superposes the output of the upsampling layer and the first characteristic (namely the output of the convolution residual module II) in the empty dictionary in the channel dimension, and the output size is 80 multiplied by 512; the CSP layer has an input size of 80 × 80 × 512 and an output size of 80 × 80 × 256, and this output is denoted as a first fusion feature.
The output of up-sampling fusion module II is input into feature fusion module I, in which the basic convolution layer acquires the first fusion feature, with an input size of 80 × 80 × 256 and an output size of 40 × 40 × 256; the channel dimension splicing layer superposes the output of the basic convolution layer and the second element on the channel dimension, with an input size of 40 × 40 × 256 and an output size of 40 × 40 × 512; the CSP layer has an input size of 40 × 40 × 512 and an output size of 40 × 40 × 512, and this output is recorded as the second fusion feature.
The output of feature fusion module I is input into feature fusion module II, in which the basic convolution layer acquires the second fusion feature, with an input size of 40 × 40 × 512 and an output size of 20 × 20 × 512; the channel dimension splicing layer superposes the output of the basic convolution layer and the first element on the channel dimension, with an input size of 20 × 20 × 512 and an output size of 20 × 20 × 1024; the CSP layer has an input size of 20 × 20 × 1024 and an output size of 20 × 20 × 1024, and this output is recorded as the third fusion feature.
And finally, acquiring a target detection result through a head network, wherein the implementation process comprises the following steps:
The first fusion feature is input into feature judgment module I. In this module, the basic convolution layer acquires the input first fusion feature, with an input size of 80 × 80 × 256 and an output size of 80 × 80 × 256. The first attribute judgment branch is used for judging the category of the feature points and comprises a first basic convolution layer with an input size of 80 × 80 × 256 and an output size of 80 × 80 × 256, a second basic convolution layer with an input size of 80 × 80 × 256 and an output size of 80 × 80 × 256, and a convolution layer with an input size of 80 × 80 × 256 and an output size of 80 × 80 × 1 (1 being the number of classes). The second attribute judgment branch is used for obtaining the regression parameters of the feature points and judging whether a corresponding object exists, and comprises a third basic convolution layer with an input size of 80 × 80 × 256 and an output size of 80 × 80 × 256, a fourth basic convolution layer with an input size of 80 × 80 × 256 and an output size of 80 × 80 × 256, and a coordinate prediction branch and a binary classification branch connected to the fourth basic convolution layer. The coordinate prediction branch is used for regressing the four-corner coordinate position of the rectangular frame corresponding to the predicted target potential region and comprises a convolution layer with an input size of 80 × 80 × 256 and an output size of 80 × 80 × 4 (4 being the number of coordinates); the binary classification branch is used for judging whether a feature point is a target or the background and comprises a convolution layer with an input size of 80 × 80 × 256 and an output size of 80 × 80 × 1. The channel dimension splicing layer superposes the output of the first attribute judgment branch, the output of the coordinate prediction branch and the output of the binary classification branch, and the output size is 80 × 80 × 6.
And inputting the second fusion feature into a second feature judgment module, wherein the second feature judgment module is similar to the first feature judgment module in the implementation process, and the difference is that the output size of the channel dimension splicing layer in the second feature judgment module is 40 × 40 × 6, which is not described herein again.
And inputting the third fusion feature into a feature judgment module three, wherein the implementation process of the feature judgment module three is similar to that of the feature judgment module one, and the difference is that the output size of the channel dimension splicing layer in the feature judgment module three is 20 × 20 × 6, which is not described herein again.
That is, for each B-scan data input, the YOLOX network outputs a corresponding target potential region and the four-corner coordinate position of the rectangular box framing the target potential region.
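To make the six-channel head output concrete, the sketch below shows one way to decode an H × W × 6 feature-judgment output (1 class score, 4 box coordinates, 1 objectness score, in the concatenation order described above) into candidate boxes; the tensor layout and the confidence threshold are assumptions for illustration.

```python
import numpy as np

def decode_head_output(pred, score_thr=0.5):
    """pred : (H, W, 6) array laid out as [class score, x1, y1, x2, y2, objectness].

    Returns a list of (four-corner box, confidence) for feature points whose
    combined confidence exceeds the threshold."""
    conf = pred[..., 0] * pred[..., 5]          # class score * objectness
    ys, xs = np.where(conf > score_thr)
    return [(pred[y, x, 1:5].tolist(), float(conf[y, x])) for y, x in zip(ys, xs)]
```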
Step S30, acquiring a target potential area of the B-scan image to be imaged through the trained YOLOX network, and performing back projection imaging in the target potential area to obtain an initial imaging image.
In step S30, the B-scan image to be imaged is a B-scan image obtained by preprocessing and converting the B-scan data needing to be imaged.
Specifically, a B-scan image to be imaged is input into a trained YOLOX network, a target potential region is obtained through the YOLOX network, and processing is performed in the target potential region through a Back Projection (BP) algorithm based on delay summation to obtain an initial imaging image. The backward projection algorithm based on delay and sum can be a traditional BP algorithm, and the implementation process is as follows:
step a, acquiring the size of an imaging area according to the size of a B-scan image to be imaged and the time window of B-scan data.
Suppose the size of the B-scan image to be imaged is N_t × N_m, where N_t is the number of time sampling points of a single A-scan and N_m is the number of measuring points (i.e. the A-scan trace number) along the survey-line direction, and suppose the time window of the B-scan data corresponding to the B-scan image to be imaged is T. The size of the imaging area is D × X, where D is the imaging depth and X is the lateral imaging extent along the survey-line direction.
The lateral extent X of the imaging area along the survey-line direction is determined directly from the number of measuring points of the B-scan image to be imaged along that direction, and the imaging depth D of the imaging area is obtained from the time window T of the B-scan data and the propagation velocity v of the electromagnetic wave:

    D = \frac{vT}{2}

where the propagation velocity v of the electromagnetic wave is

    v = \frac{c}{\sqrt{\varepsilon_r}}

with c the speed of light and \varepsilon_r the relative permittivity of the subsurface medium.
Step b, acquiring the size of the initial imaging image according to the size of the imaging area and the preset size of each imaging unit in the imaging area.
Suppose the target size of the initial imaging image to be obtained is N_z × N_x, where N_z is the number of discrete cells in the depth direction and N_x is the number of discrete cells along the survey-line direction. Each imaging unit P(i, j) in the imaging area has a size of Δz × Δx, where Δz is the size of each imaging unit in the depth direction and Δx is the size of each imaging unit along the survey-line direction.
First, the imaging size of the initial imaging image along the survey-line direction is determined by the imaging coefficient k, a positive integer set according to the required imaging precision and resolution, and by the size of the B-scan image to be imaged along the survey-line direction:

    N_x = k \cdot N_m

Then the size of each imaging unit along the survey-line direction is obtained from the lateral extent X of the imaging area along the survey-line direction and the imaging size N_x of the initial imaging image along that direction:

    \Delta x = X / N_x

Further, each imaging unit in the imaging area is square, i.e. Δz = Δx, so the imaging size of the initial imaging image in the depth direction is obtained from the imaging depth D of the imaging area and the size Δz of the imaging unit in the depth direction:

    N_z = D / \Delta z
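Under the symbols used above (N_t, N_m, T, ε_r, k, X), the grid dimensions can be derived as in the following sketch; it is an illustration under the stated assumptions, not the patented implementation itself.

```python
import math

C0 = 3e8  # speed of light in vacuum, m/s

def imaging_grid(Nt, Nm, T, eps_r, X, k=1):
    """Derive imaging-area and grid sizes from the B-scan parameters.

    Nt, Nm : time samples per A-scan and number of measuring points
    T      : time window of the B-scan (s)
    eps_r  : relative permittivity of the subsurface medium
    X      : lateral extent of the imaging area along the survey line (m)
    k      : imaging coefficient (positive integer)
    """
    v = C0 / math.sqrt(eps_r)       # propagation velocity in the medium
    D = v * T / 2.0                 # imaging depth (two-way travel)
    Nx = k * Nm                     # discrete cells along the survey line
    dx = X / Nx                     # cell size along the survey line
    dz = dx                         # square imaging units
    Nz = int(round(D / dz))         # discrete cells in the depth direction
    dt = T / Nt                     # time length represented by one sample
    return {"v": v, "D": D, "Nx": Nx, "Nz": Nz, "dx": dx, "dz": dz, "dt": dt}
```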
Step c, acquiring the two-way time delay from each imaging unit to the antenna in the imaging area, wherein the calculation formula of the two-way time delay is as follows:
    \tau_{i,j,k} = \frac{2\sqrt{(x_{i,j} - x_k)^2 + (z_{i,j} - z_k)^2}}{v}

where P(i, j) is the imaging unit in row i and column j, (x_{i,j}, z_{i,j}) are the position coordinates of that imaging unit, k is the measuring-point number, (x_k, z_k) is the coordinate position of the antenna at the k-th measuring point, and \tau_{i,j,k} is the two-way time delay from the imaging unit to the antenna of the k-th measuring point.
According to the above formula, the scattering intensity of each imaging unit can be expressed as:

    I(i, j) = \sum_{k=1}^{N_m} A_k(\tau_{i,j,k})

where A_k(·) denotes the echo (A-scan) received at the k-th measuring point.
and d, acquiring a pixel value corresponding to each imaging unit according to a preset imaging model, and forming an initial imaging image. Wherein, the imaging model is:
Figure 390346DEST_PATH_IMAGE044
wherein, the first and the second end of the pipe are connected with each other,
Figure 30406DEST_PATH_IMAGE045
after focusing with the first
Figure 126538DEST_PATH_IMAGE035
And row and column
Figure 775825DEST_PATH_IMAGE036
Imaging unit of column
Figure 578696DEST_PATH_IMAGE034
The pixel value of the corresponding pixel point is determined,
Figure 655236DEST_PATH_IMAGE046
is as follows
Figure 910768DEST_PATH_IMAGE039
The trace a-scan data is then written to,
Figure 98167DEST_PATH_IMAGE042
as an image forming unit
Figure 83440DEST_PATH_IMAGE034
To the first
Figure 330882DEST_PATH_IMAGE039
The two-way time delay of the antenna of each measuring point,
Figure 339289DEST_PATH_IMAGE047
the length of time represented for each grid in the imaging region can be expressed as:
Figure 58941DEST_PATH_IMAGE048
Figure 836404DEST_PATH_IMAGE049
for the time window of the B-scan data,
Figure 317064DEST_PATH_IMAGE050
the number of time sampling points of a single channel A-scan is counted;
Figure 547188DEST_PATH_IMAGE051
the number of measuring points along the measuring line direction.
Namely, the two-way time delay from each imaging unit to the antenna of each measuring point is input into the imaging model to obtain the pixel values of the pixel points corresponding to the imaging units, and all the pixel points determining the pixel values form an initial imaging image.
In a preferred embodiment, the optimized BP algorithm is used to process in the target potential region to obtain an initial imaging image, in which case, step S30 may include the following steps:
step S301, acquiring a B-scan image to be imaged, and determining the size of an imaging area corresponding to the B-scan image to be imaged;
step S302, inputting a B-scan image to be imaged into a trained Yolox network, and acquiring four-corner coordinates of a target potential area and a rectangular frame for framing the target potential area through the Yolox network;
step S303, mapping the target potential area into the imaging area according to the size of the B-scan image to be imaged, the size of the imaging area and the four-corner coordinates of the rectangular frame;
and step S304, performing time delay calculation, accumulation and imaging in the imaging area after the mapping processing to obtain an initial imaging image.
Specifically, a to-be-imaged B-scan image is input into a trained YOLOX network, the YOLOX network outputs a target potential area after target detection and four-corner coordinate positions of a rectangular frame framing the target potential area, the size of the imaging area is calculated by using preset parameters, and the calculation formula is as follows:
    X = N_{ch} \cdot \Delta l, \qquad D = \frac{vT}{2}

where X is the imaging length of the imaging region along the survey-line direction; D is the imaging depth of the imaging region; \Delta l is the length and width of each mesh in the back projection imaging; N_{ch} is the number of channels (traces) of the B-scan; T is the time window of the B-scan; and v is the propagation velocity of the electromagnetic wave in the underground medium.
According to the four-corner coordinate positions of the rectangular frame output by the YOLOX network, the coordinates are mapped onto the actual imaging area, so that a target potential region is formed within the imaging area; time delay calculation, accumulation and imaging are then performed only within this target potential region to obtain the initial imaging image. For each imaging unit in the target potential region, the two-way time delay from the imaging unit to the antenna is obtained, and the pixel value of each focused pixel point is obtained through an imaging optimization model, which can be expressed as:
    I(i, j) = \frac{1}{2}\sum_{k=1}^{N_{ch}}\left[ A_k\left(\left\lceil \frac{\tau_{i,j,k}}{\Delta t}\right\rceil\right) + A_k\left(\left\lfloor \frac{\tau_{i,j,k}}{\Delta t}\right\rfloor\right)\right]

where I(i, j) is the pixel value of the focused pixel point; A_k(·) is the k-th trace of A-scan data; \Delta t is the length of time represented by each grid in the imaging region; \lceil\cdot\rceil is the upward rounding (ceiling) function; \lfloor\cdot\rfloor is the downward rounding (floor) function; and \tau_{i,j,k} is the two-way time delay from the imaging unit P(i, j) in row i and column j to the antenna of the k-th measuring point.
It can be understood that the embodiment performs processing in the target potential region through the optimized BP algorithm, so that the calculation speed is increased, and the imaging quality is improved to a certain extent.
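A compact NumPy sketch of this region-restricted back projection follows; the mapping of the detection box to grid indices, the monostatic geometry and the simple averaging of the floor/ceiling samples mirror the hedged reconstruction above and are assumptions for illustration.

```python
import numpy as np

def bp_in_roi(bscan, x_meas, dt, v, x_grid, z_grid, box):
    """Back projection restricted to the target potential region.

    bscan : (Nt, Nch) B-scan, one A-scan per column
    box   : (i_min, i_max, j_min, j_max) grid indices of the rectangular frame
            output by the detection network, mapped onto the imaging grid
    """
    Nt, Nch = bscan.shape
    image = np.zeros((len(z_grid), len(x_grid)))
    i_min, i_max, j_min, j_max = box
    for i in range(i_min, i_max):
        for j in range(j_min, j_max):
            acc = 0.0
            for k in range(Nch):
                # two-way delay from imaging unit (i, j) to measuring point k
                tau = 2.0 * np.hypot(x_grid[j] - x_meas[k], z_grid[i]) / v
                lo, hi = int(np.floor(tau / dt)), int(np.ceil(tau / dt))
                if hi < Nt:
                    acc += 0.5 * (bscan[lo, k] + bscan[hi, k])
            image[i, j] = acc
    return image
```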
Step S40, carrying out double-threshold processing on the initial imaging image to obtain an artifact-suppressed image.
In step S40, the initial imaging image obtained in step S30 is processed using a dual-threshold segmentation algorithm. Specifically, each pixel point in the initial imaging image is segmented into target or background using a preset amplitude threshold and a preset similarity threshold, so that artifacts are suppressed. The amplitude threshold is the boundary that distinguishes target points from background points in the initial imaging image: pixel points that satisfy the amplitude threshold are directly classified as targets, while pixel points that do not satisfy it are classified as undetermined points. The similarity threshold is the boundary on the similarity between an undetermined point and the target points; it is used to judge whether an undetermined point is a target point or a background point according to that similarity.
Preferably, step S40 includes the steps of:
step S401, according to the size relation of the pixel points in the initial imaging image, an amplitude threshold value is obtained.
In step S401, the amplitude threshold is a boundary that distinguishes a target point from a background point in the initial imaged image. In the initial imaging image, if the pixel value of a certain pixel point is larger than the amplitude threshold, the pixel point is determined to be a target point, otherwise, the pixel point is an undetermined point, the undetermined point cannot be directly judged to be a target or a background, and subsequently, judgment can be carried out according to the similarity threshold.
Preferably, when the target potential region includes a positive pixel point and a negative pixel point, step S401 includes the following steps:
step a, determining a first positive pixel point with the maximum positive pixel value and a first negative pixel point with the minimum negative pixel value in a target potential region, and acquiring the pixel value of the first positive pixel point and the pixel value of the first negative pixel point;
b, obtaining a target part through the distance between the coordinates of the first positive pixel point and the first negative pixel point;
step c, obtaining a background part according to the distance between the first positive pixel point and the upper edge of the target potential area;
step d, acquiring the occupation ratio of the target area according to the target part and the background part;
and e, acquiring two amplitude thresholds which are respectively a positive amplitude threshold and a negative amplitude threshold according to the pixel value of the first positive pixel point, the pixel value of the first negative pixel point and the target area ratio.
Understandably, in the process of detecting the underground region, due to the differential effect of the GPR transmitting antenna, the electromagnetic wave radiated to the underground through the transmitting antenna has a waveform with zero mean, that is, the time-domain waveform of the radiation signal has positive and negative values. Correspondingly, in the backward projection imaging process, the values of the scattering echoes on the time delay curves corresponding to one part of the imaging units are superposed to form a positive value, and the values of the scattering echoes on the time delay curves corresponding to the other part of the imaging units are superposed to form a negative value. At this time, in the imaging region near the true position of the object, a positive focus region and a negative focus region appear.
Firstly, the positive pixel point with the largest positive pixel value (namely the first positive pixel point) and the negative pixel point with the largest absolute value (the first negative pixel point) in the target potential region are found, and their pixel values are recorded as A_p and A_n respectively. Then the distance between the coordinates of the first positive pixel point and the first negative pixel point is obtained through a first distance evaluation model, and the output of the model is recorded as the target part, i.e. the part with larger energy in the initial imaging image. The first distance evaluation model is:

    d_t = \sqrt{(x_p - x_n)^2 + (z_p - z_n)^2}

where x_p and x_n are the index values of the first positive pixel point and the first negative pixel point in the surface (survey-line) direction, z_p and z_n are their index values in the depth direction, and d_t is the target part.
Then, the distance from the first positive pixel point to the upper edge of the target potential region is obtained through a second distance evaluation model, and the output of the model is recorded as the background part, i.e. the part with smaller energy in the initial imaging image. The second distance evaluation model is:

    d_b = z_p - z_0

where z_p is the index value of the first positive pixel point in the depth direction, z_0 is the index value of the upper-left corner of the rectangular frame in the depth direction, and d_b is the background part.
Further, the target part d_t and the background part d_b are input into an area-ratio evaluation model, and the target area ratio r output by the model, together with the pixel value A_p of the first positive pixel point and the pixel value A_n of the first negative pixel point, is used to generate the positive amplitude threshold T_p and the negative amplitude threshold T_n. The positive amplitude threshold can be expressed as T_p = r · A_p, and the negative amplitude threshold can be expressed as T_n = r · A_n. The area-ratio evaluation model can be expressed as:

    r = \frac{d_t}{d_t + d_b}

where r is the target area ratio.
It can be understood that, in the present embodiment, the positive amplitude threshold and the negative amplitude threshold are obtained from the target area ratio and from the pixel points with the largest absolute pixel values, which provides a feasible way of determining the amplitude thresholds.
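As an illustration of step S401, a minimal NumPy sketch of the amplitude-threshold computation is given below; the array and function names, the bounding-box convention, and the closed-form expressions used for the ratio and the two thresholds are assumptions made for illustration only:

import numpy as np

def amplitude_thresholds(img, box):
    # img : 2-D array of back projection amplitudes (depth x surface).
    # box : (z0, x0, z1, x1) rectangular frame of the target potential
    #       region; z0 is the depth index of its upper edge.
    z0, x0, z1, x1 = box
    region = img[z0:z1, x0:x1]
    # First positive / first negative pixel points (largest absolute values).
    zp, xp = np.unravel_index(np.argmax(region), region.shape)
    zn, xn = np.unravel_index(np.argmin(region), region.shape)
    a_pos, a_neg = region[zp, xp], region[zn, xn]
    # First distance evaluation model: target part d_t.
    d_t = np.hypot(float(xp - xn), float(zp - zn))
    # Second distance evaluation model: background part d_b
    # (distance from the first positive pixel to the upper edge, here z0).
    d_b = float(zp)
    # Assumed closed forms: ratio of target part to background part, and
    # thresholds obtained by scaling the extreme pixel values with it.
    rho = d_t / max(d_b, 1.0)
    t_pos, t_neg = rho * a_pos, rho * a_neg
    return t_pos, t_neg

For example, calling amplitude_thresholds(initial_image, (20, 40, 80, 120)) would return the two thresholds for the region framed by that (hypothetical) rectangle.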
Step S402, a cross-shaped template is constructed, and a similarity threshold value and a template length are obtained according to the cross-shaped template.
In step S402, the similarity threshold is the boundary used for the undetermined points in the initial imaging image that do not satisfy the amplitude threshold: the similarity between an undetermined point and the target point is calculated and compared against this threshold to distinguish target points from background points.
Preferably, step S402 includes the steps of:
step a, determining the position of a first pixel point with the maximum pixel absolute value in a target potential area, constructing a cross-shaped template based on the position of the first pixel point, and acquiring initial state values of the first pixel point in four directions of the cross-shaped template;
step b, for each of the four directions of the first pixel point, traversing the pixel points from near to far, comparing the pixel value of each pixel point with the amplitude threshold, and updating the state value of that direction according to the comparison result;
step c, acquiring a minimum state value from the four updated state values, and marking the minimum state value as a similarity threshold;
and d, acquiring a state average value, and marking the state average value as the length of the template.
In this embodiment, to obtain the similarity threshold and the template length for the positive pixel points, the position of the first positive pixel point with the maximum positive pixel value in the target potential region is first found, a cross-shaped template is constructed based on this position, and an initial state value is set for each of the four directions of the first positive pixel point, denoted $s_{\mathrm{up}}$, $s_{\mathrm{down}}$, $s_{\mathrm{left}}$ and $s_{\mathrm{right}}$, each initialized to 0. Then the pixel points in each direction are traversed from near to far. For each direction, if the pixel value of the current pixel point is greater than or equal to the positive amplitude threshold, the state value of that direction is incremented by one; the traversal of a direction stops as soon as a pixel point whose value is less than the positive amplitude threshold is encountered.
Further, the smallest one of the updated four state values is acquired as a similarity threshold, and the average value of the updated four state values is acquired as a template length. It will be appreciated that where the average is not an integer, the average may be rounded down.
The similarity threshold and the template length of the negative pixel points are obtained in the same way as those of the positive pixel points.
It can be understood that, in the present embodiment, the similarity threshold is obtained from the state values of the first pixel point in the four directions of the cross-shaped template, which provides a feasible way of determining the similarity threshold.
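The state-value construction of step S402 can be sketched as follows for the positive pixel points; the direction ordering and the treatment of the region border are assumptions of the sketch, and the negative case is analogous with the negative amplitude threshold:

import numpy as np

def similarity_threshold_and_length(region, t_pos):
    # region : 2-D array covering the target potential region.
    # t_pos  : positive amplitude threshold obtained in step S401.
    zc, xc = np.unravel_index(np.argmax(region), region.shape)
    directions = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
    states = []
    for dz, dx in directions:
        s, z, x = 0, zc + dz, xc + dx
        # Walk outward from the center, counting pixels that still satisfy
        # the positive amplitude threshold; stop at the first one that fails.
        while (0 <= z < region.shape[0] and 0 <= x < region.shape[1]
               and region[z, x] >= t_pos):
            s += 1
            z += dz
            x += dx
        states.append(s)
    # Smallest state value -> similarity threshold; average (rounded down) -> template length.
    return min(states), int(np.mean(states))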
Step S403, each pixel point in the initial imaging image is processed according to the amplitude threshold, the similarity threshold and the cross-shaped template, and an artifact-suppressed image is obtained.
Preferably, step S403 includes the steps of:
step a, according to a pixel value of a first positive pixel point, a pixel value of a first negative pixel point, an amplitude threshold value and a similarity threshold value in a target potential region, a pixel point judgment model is constructed.
B, acquiring the type of each pixel point in the target potential area according to the pixel point judgment model; the types of the pixel points comprise target points and undetermined points.
Specifically, the pixel point judgment model judges each pixel point $p$ of the initial imaging image as follows. If the pixel value of $p$ lies between $T^{+}$ and $A^{+}$ or between $A^{-}$ and $T^{-}$, the pixel point $p$ is determined to be a target point and is not processed. If the pixel value of $p$ lies between 0 and $T^{+}$ or between $T^{-}$ and 0, the pixel point $p$ is determined to be an undetermined point; it is not directly judged to be target or background, and the similarity threshold is used for the judgment in the subsequent step.
And c, when the pixel point is the undetermined point, constructing a cross template of the undetermined point, and acquiring the target similarity of the undetermined point according to the cross template.
Specifically, for each undetermined point, a cross-shaped template is constructed according to the template length obtained in the previous step, i.e. a template of finite length in four directions, with equal transverse and longitudinal lengths and the same length in all four directions. For the undetermined point, if the pixel value of a pixel point in a given direction of the cross-shaped template is greater than or equal to the positive amplitude threshold or less than or equal to the negative amplitude threshold, the target similarity value of the undetermined point in that direction is increased by 1; this continues until every pixel point in the cross-shaped template has been judged, and the largest of the four directional similarity values is taken as the target similarity value of the undetermined point.
And d, detecting whether the target similarity of the undetermined point is greater than or equal to the similarity threshold.
And e, if so, determining the undetermined point as a target point, otherwise, determining the undetermined point as a background point, and setting the pixel value of the background point to be zero.
Specifically, when the target similarity value of the undetermined point is greater than or equal to the similarity threshold, determining the undetermined point as a target point, and not processing the target point; otherwise, determining the undetermined point as a background or an artifact, and performing zero setting processing on the undetermined point.
It can be understood that, in the embodiment, the target point and the undetermined point are judged through the amplitude threshold, and then the undetermined point is judged to be the target point or the background point through the similarity threshold, so that side lobes and artifacts in the imaging image can be effectively suppressed.
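A compact sketch of step S403 is given below; the function name, the border handling and the per-arm counting details are illustrative assumptions rather than the exact implementation of this embodiment:

import numpy as np

def suppress_artifacts(img, t_pos, t_neg, sim_threshold, template_len):
    # Classify every pixel with the amplitude threshold and, for undetermined
    # points, with the cross-shaped similarity test.
    out = img.copy()
    rows, cols = img.shape
    for z in range(rows):
        for x in range(cols):
            v = img[z, x]
            if v == 0 or v >= t_pos or v <= t_neg:
                continue                     # already zero, or a target point
            best = 0                         # target similarity of this point
            for dz, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
                s = 0
                for k in range(1, template_len + 1):
                    zz, xx = z + k * dz, x + k * dx
                    if (0 <= zz < rows and 0 <= xx < cols
                            and (img[zz, xx] >= t_pos or img[zz, xx] <= t_neg)):
                        s += 1
                best = max(best, s)
            if best < sim_threshold:
                out[z, x] = 0.0              # background or artifact: set to zero
    return out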
Step S50, carrying out integral focusing processing on the artifact-suppressed image to obtain a target imaging image.
In step S50, each column of the artifact-suppressed image is processed by the integral focusing algorithm. Specifically, for a column that is not all zero, the pixel values along the column vary in a complicated way from top to bottom. The integral focusing algorithm is based on the principle of definite integration: the value of each pixel point in each column is updated according to the top-to-bottom variation trend of the pixel values of the artifact-suppressed image. After all columns have been updated, each column of the target potential region shows a trend in which the pixel values first increase and then decrease, i.e. the section with positive-and-negative variation in each target potential region is focused, giving the target imaging image.
Preferably, step S50 specifically includes the steps of:
step S501, for each column of the artifact-suppressed image, acquiring an initial integral value of each pixel point;
step S502, traversing each pixel point of each column, and updating the integral value according to the pixel value of the pixel point;
step S503, when the pixel value of the pixel point meets the preset updating condition, the pixel value after the pixel point is updated is obtained according to the integral value before and after the updating;
step S504, calibrating the focused pixel position according to the position proportion relation of the central pixel points before and after focusing;
and step S505, obtaining a target imaging image according to the pixel value of each pixel point after updating and the pixel position after calibration.
Understandably, the artifact-suppressed image obtained by the dual-threshold processing has certain characteristics. Within the imaged region two cases occur. The first is a column that contains no target part, in which all pixel values are 0; the second is a column that contains a target part, in which at least one pixel value is not 0. Columns of the first kind are not processed. For a column of the second kind, the amplitude along the column typically varies as: 0 → the column maximum → 0 (held over a short segment) → the column minimum → 0. According to this variation trend, an imaging point is generated for each target potential region after focusing, and the amplitude of the column containing the focused target part varies as: 0 → the column maximum → 0. The integral focusing algorithm is constructed according to these two amplitude variation trends.
According to the accumulation characteristic of the integral focusing algorithm, the initial integral value of each pixel point is set to $S=0$. For each pixel point a test interval of preset length $2k+1$ is obtained, where $k$ is any integer in $[1,d_{t}]$ and $d_{t}$ is the distance between the coordinates of the first positive and first negative pixel points; that is, the test interval contains the pixel point at the center of the interval together with the $k$ pixel points above it and the $k$ pixel points below it. For a given column of the target potential region, the traversal starts from the pixel point with the minimum index; in each traversal step the integral value is first updated, and the updated integral value is obtained by the update formula:

$S'=S+I(i)$

where $S'$ is the updated integral value, $S$ is the integral value before the update, and $I(i)$ is the pixel value of the pixel point currently being traversed.
For a given pixel point, it is detected whether its pixel value satisfies a preset update condition. Specifically, the update condition is met if the pixel point is not the maximum value within its test interval, i.e. at least one run of consecutive pixels at the two ends of the interval shows the same variation trend; or if the value of the pixel point is 0, the trend of the first $k$ pixels of the interval is decreasing (or decreasing and then holding 0) and the trend of the last $k$ pixels of the interval is increasing (or holding 0 and then increasing); or if the pixel value of the point is less than 0.
When the update condition is met, the pixel value of the pixel point is updated to the integral value obtained in the current round of traversal, i.e. the pixel values from the initial pixel point up to the current pixel point are accumulated; the updated pixel value can be expressed as

$\tilde{I}(i)=S'$

where $\tilde{I}(i)$ is the updated pixel value, $I(i)$ is the pixel value before the update, $S'$ is the integral value updated in the current round of traversal, $W_{i}$ is the test interval of the pixel point currently being traversed, and the update condition is evaluated from the variation trends of the first $k$ and the last $k$ pixel points of $W_{i}$.
Further, in the image after the integral focusing processing the position of the focus center may deviate to some extent. Based on the depth positions of the center pixel point before and after focusing, the focused positions can be calibrated by a proportional relationship, which reduces the positioning error of the target. The calibrated depth position can be expressed as

$z'=z\cdot z_{c}/\hat{z}_{c}$

where $z'$ is the calibrated depth position of each pixel, $z$ is the depth position of the pixel before calibration, $z_{c}$ is the depth position of the center pixel point before focusing (i.e. the pixel point with the maximum positive amplitude), and $\hat{z}_{c}$ is the depth position of the center pixel point after focusing.

Finally, all pixel points with updated pixel values and calibrated depth positions form the target imaging image.
It can be understood that in the embodiment, the pixel value of each pixel point in the artifact-suppressed image is updated through the integral focusing algorithm, and the depth positions of the central pixel points before and after focusing are calibrated, so that the imaging quality is improved, and the accuracy of the imaging position is ensured.
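A simplified column-wise sketch of the integral focusing and depth calibration of step S50 is shown below; the use of a plain cumulative sum as the running integral, the clipping of negative values and the peak-based re-mapping are simplifying assumptions and do not reproduce every case of the update condition described above:

import numpy as np

def integral_focus(img):
    out = np.zeros_like(img, dtype=float)
    rows, cols = img.shape
    for x in range(cols):
        col = img[:, x].astype(float)
        if not np.any(col):                  # column without a target part
            continue
        acc = np.cumsum(col)                 # running integral S' = S + I(i)
        acc[acc < 0] = 0.0                   # keep the single focused lobe
        z_pre = int(np.argmax(col))          # center pixel before focusing
        z_post = int(np.argmax(acc))         # center pixel after focusing
        if z_pre == 0 or z_post == 0:
            out[:, x] = acc
            continue
        # Depth calibration z' = z * z_pre / z_post, applied by resampling the
        # focused column so that its peak returns to the pre-focus depth z_pre.
        src = np.arange(rows) * (z_post / z_pre)
        out[:, x] = np.interp(src, np.arange(rows), acc, left=0.0, right=0.0)
    return out

Because the radiated waveform has zero mean, integrating the bipolar (positive-then-negative) focus pattern along depth naturally yields a single unipolar peak, which is the intuition behind the integral focusing step.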
Further, experiments can be performed under simulation conditions and actual measurement conditions respectively to verify the effect of the ground penetrating radar back projection imaging method provided by the embodiment.
For simulated B-scan data, GPRMax software is first used for simulation. The length of the detection scene is set to 2.2 m and the depth to 0.6 m; the transmitting and receiving antennas move along the survey line starting from 0.1 m, with a step of 0.02 m per move, so that B-scan data consisting of 100 A-scans is obtained in each simulation. In order to improve the generality of the method over different scenes, B-scan data can be acquired for different numbers, sizes and positions of underground point targets (such as cylinders), with the point-target radius set to 1-10 cm and the burial depth set to 0.1-0.4 m. After a large amount of B-scan data is obtained through simulation, the B-scan data is preprocessed to remove the direct wave, and the preprocessed B-scan data is converted into B-scan images.
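As an example of the preprocessing step, mean-trace subtraction is one common way to remove the direct wave; the sketch below uses it, together with a simple gray-scale normalization, as an assumed realization of the conversion to B-scan images:

import numpy as np

def bscan_to_image(bscan):
    # bscan : 2-D array (time samples x A-scan positions), e.g. 100 A-scans.
    # Direct-wave removal by subtracting the mean trace (assumed choice).
    cleaned = bscan - bscan.mean(axis=1, keepdims=True)
    # Normalize to [0, 255] so the result can be stored as a gray-scale image.
    span = cleaned.max() - cleaned.min()
    img = (cleaned - cleaned.min()) / (span if span else 1.0)
    return (img * 255).astype(np.uint8)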
Then, a YOLOX network is constructed with a PyTorch-based deep learning framework. 800 B-scan images are selected and annotated with target rectangular frame labels; the B-scan images containing the target rectangular frame labels are divided into a first data set and a test set according to a distribution proportion of 9:1, and the first data set is further divided into a training set and a verification set. The training set, verification set and test set are fed into the YOLOX network for training and testing; parameter tuning and training are repeated until the YOLOX network achieves a good detection effect on the hyperbolic features in the B-scan images, and the trained YOLOX network is output.
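The division into training, verification and test sets can be sketched as follows; the use of the same 9:1 proportion for both splits and the shuffling seed are assumptions of the sketch:

import random

def split_dataset(image_paths, ratio=0.9, seed=0):
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)
    n_first = int(len(paths) * ratio)          # first data set vs. test set
    first, test = paths[:n_first], paths[n_first:]
    n_train = int(len(first) * ratio)          # training set vs. verification set
    return first[:n_train], first[n_train:], test   # train, validation, test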
Then the simulated B-scan data is input into the trained network, which performs target detection and outputs the target potential region; back projection imaging is performed on the target potential region, dual-threshold processing is applied to the initial imaging image, and integral focusing processing is applied to the dual-thresholded image to obtain the target imaging image.
For actually measured B-scan data, the processing procedure is the same as under the simulation conditions: the measured B-scan data is preprocessed and converted into the B-scan data to be imaged, shown in figure 4, and the target imaging image is obtained through back projection processing, dual-threshold processing and integral focusing processing, shown in figure 5. The above experiments were run on the same equipment, and two key indicators of the back projection imaging method, computation time and artifact suppression, were calculated; the relevant experimental parameters and algorithm comparisons are given in Tables 1 and 2.
Table 1 shows the calculation time of the ground penetrating radar back projection imaging method and other BP algorithms on the same equipment.
TABLE 1 calculation times for different backprojection imaging algorithms
As can be seen from Table 1, compared with the conventional back projection algorithm and some improved back projection algorithms, the ground penetrating radar back projection imaging method of the present invention requires less computation time, i.e. achieves a faster computation speed.
Table 2 gives a quantitative evaluation of the artifact-suppression effect of different back projection imaging algorithms under the simulation conditions using the integrated side lobe ratio (ISLR), whose formula is:

$\mathrm{ISLR}=10\log_{10}\dfrac{E_{\mathrm{total}}-E_{\mathrm{main}}}{E_{\mathrm{main}}}$

where $\mathrm{ISLR}$ is the integrated side lobe ratio, $E_{\mathrm{total}}$ is the total energy of the imaged image, and $E_{\mathrm{main}}$ is the main lobe energy of the imaged target.
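The integrated side lobe ratio can be evaluated as in the sketch below; the rectangular main-lobe window centered on the strongest pixel is an assumption of the sketch:

import numpy as np

def integrated_sidelobe_ratio(img, half_win=3):
    energy = np.asarray(img, dtype=float) ** 2
    e_total = energy.sum()
    # Main lobe: a small window around the strongest pixel (assumed extent).
    zc, xc = np.unravel_index(np.argmax(energy), energy.shape)
    z0, x0 = max(zc - half_win, 0), max(xc - half_win, 0)
    e_main = energy[z0:zc + half_win + 1, x0:xc + half_win + 1].sum()
    return 10.0 * np.log10((e_total - e_main) / e_main)   # ISLR in dB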
TABLE 2 Integrated sidelobe ratio for different backprojection imaging algorithms
As can be seen from Table 2, compared with the traditional back projection algorithm and some improved back projection algorithms, the ground penetrating radar back projection imaging method of the present invention has a smaller integrated side lobe ratio, and a smaller integrated side lobe ratio indicates a better side lobe suppression level.
Therefore, the ground penetrating radar back projection imaging method provided by the embodiment determines the potential position of the target by using the trained YOLOX network before imaging, demarcates the potential region of the target, and images only the potential region of the target during imaging, so that a large amount of calculation is reduced, interference of underground noise points or non-target points on the imaging process is avoided, and the imaging quality is effectively improved. And then, enhancing and position calibrating the imaged image through double-threshold processing and integral focusing processing, so that most side lobes and artifacts are effectively removed, and the imaging quality is further improved. The experimental result shows that compared with the existing back projection imaging method, the ground penetrating radar back projection imaging method provided by the embodiment has the advantage that the imaging efficiency is obviously improved.
In addition, as shown in fig. 6, corresponding to any of the above-mentioned embodiments, an embodiment of the present invention further provides a ground penetrating radar back projection imaging system, which includes a data acquisition and processing module 110, a network training module 120, a back projection module 130, an artifact suppression module 140, and a target imaging module 150, where details of each functional module are as follows:
the data acquisition and processing module 110 is configured to acquire and preprocess B-scan data, and construct a tag data set according to the preprocessed B-scan data, where the tag data set includes a B-scan image into which the preprocessed B-scan data is converted and a target rectangular frame tag corresponding to the B-scan image;
a network training module 120, configured to construct a YOLOX network, and train the YOLOX network through a tag data set;
a back projection module 130, configured to obtain a target potential region of the B-scan image to be imaged through the trained YOLOX network, and perform back projection imaging in the target potential region to obtain an initial imaging image;
the artifact suppression module 140 is configured to perform dual-threshold processing on the initial imaging image to obtain an artifact-suppressed image;
and the target imaging module 150 is configured to perform integral focusing processing on the artifact-suppressed image to obtain a target imaging image.
In an alternative embodiment, the data acquisition and processing module 110 includes the following sub-modules, and the detailed description of each functional sub-module is as follows:
the data acquisition sub-module is used for detecting the underground area containing the target through the ground penetrating radar to obtain N B-scan data;
the preprocessing submodule is used for preprocessing the N B-scan data and converting the preprocessed B-scan data into a B-scan image;
the marking submodule is used for marking the target existing area in the N B-scan images to obtain a corresponding target rectangular frame label;
the data set dividing submodule is used for dividing N B-scan images containing the target rectangular frame labels into a first data set and a test set according to a preset distribution proportion; dividing the first data set into a training set and a verification set according to a preset distribution proportion;
and the data set constructing submodule is used for constructing a label data set according to the training set, the verification set and the test set.
In an alternative embodiment, the rear projection module 130 includes the following sub-modules, each of which is described in detail as follows:
the parameter determining submodule is used for acquiring a B-scan image to be imaged and determining the size of an imaging area corresponding to the B-scan image to be imaged;
the target detection submodule is used for inputting the B-scan image to be imaged into a trained YOLOX network, and acquiring four-corner coordinates of a target potential area and a rectangular frame framing the target potential area through the YOLOX network;
the mapping submodule is used for mapping the target potential area into the imaging area according to the size of the B-scan image to be imaged, the size of the imaging area and the four-corner coordinates of the rectangular frame;
and the imaging submodule is used for performing time delay calculation, accumulation and imaging in the imaging area after the mapping processing to obtain an initial imaging image.
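As a rough illustration of the back projection module, a delay-and-sum accumulation restricted to the mapped target potential region can be sketched as follows; the monostatic geometry, the constant propagation velocity v and all variable names are assumptions of the sketch:

import numpy as np

def backproject_region(bscan, ant_x, t_axis, region_x, region_z, v=1.0e8):
    # bscan    : 2-D array (time samples x antenna positions).
    # ant_x    : antenna positions along the survey line (m).
    # t_axis   : two-way time axis of the B-scan (s).
    # region_x : x coordinates of the imaging pixels inside the region (m).
    # region_z : depth coordinates of the imaging pixels inside the region (m).
    image = np.zeros((len(region_z), len(region_x)))
    dt = t_axis[1] - t_axis[0]
    for iz, z in enumerate(region_z):
        for ix, x in enumerate(region_x):
            # Two-way delay from every antenna position to this pixel and back.
            delays = 2.0 * np.hypot(np.asarray(ant_x) - x, z) / v
            idx = np.round((delays - t_axis[0]) / dt).astype(int)
            valid = (idx >= 0) & (idx < bscan.shape[0])
            # Accumulate the echo samples lying on this pixel's delay curve.
            image[iz, ix] = bscan[idx[valid], np.nonzero(valid)[0]].sum()
    return image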
In an alternative embodiment, the artifact reduction module 140 includes the following sub-modules, and the detailed description of each functional sub-module is as follows:
the first threshold acquisition submodule is used for acquiring an amplitude threshold according to the size relation of pixel points in the initial imaging image; the amplitude threshold is a boundary for distinguishing a target point and a background point in the initial imaging image;
the second threshold acquisition submodule is used for constructing a cross-shaped template and acquiring a similarity threshold and a template length according to the cross-shaped template; the similarity threshold is an undetermined point which does not meet the amplitude threshold in the initial imaging image, and the similarity boundary of the target point and the background point is distinguished by calculating the similarity of the undetermined point and the target point;
and the double-threshold segmentation submodule is used for processing each pixel point in the initial imaging image according to the amplitude threshold, the similarity threshold and the cross template to obtain an artifact-suppressed image.
In an alternative embodiment, the target imaging module 150 includes the following sub-modules, each of which is described in detail below:
the primary processing submodule is used for acquiring an initial integral value of each pixel point for each column of the artifact-suppressed image;
the accumulation submodule is used for traversing each pixel point of each column and updating the integral value according to the pixel value of the pixel point;
the integral updating submodule is used for obtaining the pixel value of the pixel point after updating according to the integral value before and after updating when the pixel value of the pixel point meets the preset updating condition;
the calibration sub-module is used for calibrating the focused pixel position according to the position proportional relation of the central points before and after focusing;
and the image output submodule is used for obtaining a target imaging image according to the pixel value after each pixel point is updated and the pixel position after calibration.
The system of the foregoing embodiment is used to implement the corresponding method in the foregoing embodiment, and has the beneficial effects of the corresponding method embodiment, which are not described herein again.
Those of ordinary skill in the art will understand that: the discussion of any embodiment above is meant to be exemplary only, and is not intended to imply that the scope of the invention is limited to these examples; within the idea of the invention, also features in the above embodiments or in different embodiments may be combined, steps may be implemented in any order, and there are many other variations of the different aspects of the embodiments of the invention as described above, which are not provided in detail for the sake of brevity.
The present embodiments are intended to embrace all such alterations, modifications and variations that fall within the broad scope of the present invention. Therefore, any omissions, modifications, substitutions, improvements and the like that may be made without departing from the spirit and principles of the embodiments of the present invention are intended to be included within the scope of the invention.

Claims (8)

1. A ground penetrating radar back projection imaging method is characterized by comprising the following steps:
b-scan data are obtained and preprocessed, a tag data set is constructed according to the preprocessed B-scan data, and the tag data set comprises a B-scan image converted from the preprocessed B-scan data and a target rectangular frame tag corresponding to the B-scan image;
constructing a YOLOX network, and training the YOLOX network through the tag data set;
acquiring a target potential area of a B-scan image to be imaged through the trained YOLOX network, and performing back projection imaging in the target potential area to obtain an initial imaging image;
performing double-threshold processing on the initial imaging image to obtain an artifact-suppressed image, including:
acquiring an amplitude threshold value according to the size relation of pixel points in the initial imaging image; the amplitude threshold is a boundary for distinguishing a target point and a background point in the initial imaging image;
constructing a cross-shaped template, and acquiring a similarity threshold value and a template length according to the cross-shaped template; the similarity threshold is as follows: for undetermined points which do not meet the amplitude threshold value in the initial imaging image, distinguishing boundaries of similarity between a target point and a background point by calculating the similarity between the undetermined points and the target point;
processing each pixel point in the initial imaging image according to the amplitude threshold, the similarity threshold and the cross-shaped template to obtain an artifact suppression image;
and carrying out integral focusing treatment on the artifact suppression image to obtain a target imaging image.
2. The method for imaging by back projection of ground penetrating radar according to claim 1, wherein the acquiring and preprocessing the B-scan data and constructing the tag data set according to the preprocessed B-scan data comprises:
detecting an underground area containing a target by using a ground penetrating radar to obtain N pieces of B-scan data;
preprocessing the N B-scan data, and converting the preprocessed B-scan data into a B-scan image;
marking target existing areas in the N B-scan images to obtain corresponding target rectangular frame labels;
dividing N B-scan images containing target rectangular frame labels into a first data set and a test set according to a preset distribution proportion;
dividing the first data set into a training set and a verification set according to the preset distribution proportion;
and forming a label data set according to the training set, the verification set and the test set.
3. The method of claim 1, wherein the YOLOX network comprises a backbone network, a neck network, and a head network; the backbone network is used for extracting the features of the B-scan image, the neck network is used for combining and mixing the features, and the head network is used for predicting and classifying the features;
the backbone network comprises an attention module, three convolution residual modules and a feature stacking module; wherein the attention module is composed of a downsampling layer and a base convolution layer, the base convolution layer comprises a convolution layer, a batch normalization layer and an activation function;
the convolution residual module consists of the basic convolution layer and the CSP layer, the CSP layer comprises a main trunk branch, a residual side branch and a channel dimension splicing layer, the main trunk branch comprises the basic convolution layer, a residual stacking layer and an adding layer, and the residual side branch comprises the basic convolution layer;
the feature stacking module consists of the basic convolutional layers, SPP layers and CSP layers, wherein the SPP layers comprise two basic convolutional layers, one upsampling stacking layer and one channel dimension splicing layer, and the upsampling stacking layer comprises three pooling branches and one stacking branch;
the neck network comprises two up-sampling fusion modules and two feature fusion modules, wherein each up-sampling fusion module consists of a basic convolutional layer, an up-sampling layer, a channel dimension splicing layer and a CSP layer; the feature fusion module consists of the basic convolutional layer, the channel dimension splicing layer and the CSP layer;
the head network comprises three feature point judging modules, wherein each feature point judging module consists of the basic convolutional layer, a first attribute judging branch, a second attribute judging branch and the channel dimension splicing layer, the first attribute judging branch comprises two basic convolutional layers and one convolutional layer, and the second attribute judging branch comprises two basic convolutional layers, a coordinate prediction branch and a two-classification branch.
4. The method of claim 1, wherein the obtaining of the target potential area of the B-scan image to be imaged through the trained YOLOX network and the back projection imaging in the target potential area to obtain an initial imaging image comprises:
acquiring a B-scan image to be imaged, and determining the size of an imaging area corresponding to the B-scan image to be imaged;
inputting the B-scan image to be imaged into a trained YOLOX network, and acquiring a target potential area and four-corner coordinates of a rectangular frame framing the target potential area through the YOLOX network;
mapping the target potential area into the imaging area according to the size of the B-scan image to be imaged, the size of the imaging area and the four-corner coordinates of the rectangular frame;
and performing time delay calculation, accumulation and imaging in the imaging area after the mapping processing to obtain an initial imaging image.
5. The method of claim 1, wherein the performing an integral focusing process on the artifact-suppressed image to obtain a target imaging image comprises:
for each column of the artifact-suppressed image, acquiring an initial integral value of each pixel point;
traversing each pixel point of each column, and updating the integral value according to the pixel value of the pixel point;
when the pixel value of the pixel point meets a preset updating condition, obtaining the updated pixel value of the pixel point according to the integral value before and after updating;
calibrating the focused pixel position according to the position proportional relation of the central points before and after focusing;
and obtaining a target imaging image according to the updated pixel value and the calibrated pixel position of each pixel point.
6. The method of claim 1, wherein the obtaining of the amplitude threshold according to the size relationship of the pixel points in the initial imaging image comprises:
determining a first positive pixel point with a maximum positive pixel value and a first negative pixel point with a minimum negative pixel value in the target potential region, and acquiring a pixel value of the first positive pixel point and a pixel value of the first negative pixel point;
obtaining a target part according to the distance between the coordinates of the first positive pixel point and the first negative pixel point;
obtaining a background part according to the distance from the first positive pixel point to the upper edge of the target potential area;
obtaining a target area occupation ratio according to the target part and the background part;
and acquiring two amplitude thresholds, namely a positive amplitude threshold and a negative amplitude threshold, according to the pixel value of the first positive pixel point, the pixel value of the first negative pixel point and the target area ratio.
7. A ground penetrating radar rear projection imaging system, comprising:
the data acquisition and processing module is used for acquiring and preprocessing B-scan data and constructing a tag data set according to the preprocessed B-scan data, wherein the tag data set comprises a B-scan image converted from the preprocessed B-scan data and a target rectangular frame tag corresponding to the B-scan image;
a network training module, configured to construct a YOLOX network, and train the YOLOX network through the tag dataset;
the back projection module is used for acquiring a target potential area of a B-scan image to be imaged through the trained YOLOX network, and performing back projection imaging in the target potential area to obtain an initial imaging image;
the artifact suppression module is used for carrying out double-threshold processing on the initial imaging image to obtain an artifact suppressed image; the artifact suppression module comprises:
the first threshold acquisition submodule is used for acquiring an amplitude threshold according to the size relation of pixel points in the initial imaging image; the amplitude threshold is a boundary for distinguishing a target point and a background point in the initial imaging image;
the second threshold acquisition submodule is used for constructing a cross-shaped template and acquiring a similarity threshold and a template length according to the cross-shaped template; the similarity threshold is as follows: for undetermined points which do not meet the amplitude threshold value in the initial imaging image, distinguishing boundaries of similarity between a target point and a background point by calculating the similarity between the undetermined points and the target point;
the dual-threshold segmentation submodule is used for processing each pixel point in the initial imaging image according to the amplitude threshold, the similarity threshold and the cross-shaped template to obtain an artifact-suppressed image;
and the target imaging module is used for carrying out integral focusing processing on the artifact-suppressed image to obtain a target imaging image.
8. The georadar back-projection imaging system of claim 7, wherein the target imaging module comprises:
the primary processing submodule is used for acquiring an initial integral value of each pixel point for each column of the artifact-suppressed image;
the accumulation submodule is used for traversing each pixel point of each column and updating the integral value according to the pixel value of the pixel point;
the integral updating submodule is used for obtaining the pixel value of the pixel point after updating according to the integral value before and after updating when the pixel value of the pixel point meets the preset updating condition;
the calibration submodule is used for calibrating the focused pixel position according to the position proportional relation of the central points before and after focusing;
and the image output submodule is used for obtaining a target imaging image according to the pixel value updated by each pixel point and the pixel position after calibration.
CN202210902645.9A 2022-07-29 2022-07-29 Ground penetrating radar backward projection imaging method and system Active CN114966560B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210902645.9A CN114966560B (en) 2022-07-29 2022-07-29 Ground penetrating radar backward projection imaging method and system

Publications (2)

Publication Number Publication Date
CN114966560A CN114966560A (en) 2022-08-30
CN114966560B true CN114966560B (en) 2022-10-28

Family

ID=82969081

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210902645.9A Active CN114966560B (en) 2022-07-29 2022-07-29 Ground penetrating radar backward projection imaging method and system

Country Status (1)

Country Link
CN (1) CN114966560B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115496917B (en) * 2022-11-01 2023-09-26 中南大学 Multi-target detection method and device in GPR B-Scan image
CN117310696A (en) * 2023-09-26 2023-12-29 中南大学 Self-focusing backward projection imaging method and device for ground penetrating radar


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7796829B2 (en) * 2008-12-10 2010-09-14 The United States Of America As Represented By The Secretary Of The Army Method and system for forming an image with enhanced contrast and/or reduced noise

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101738602A (en) * 2008-11-26 2010-06-16 中国科学院电子学研究所 Echo data preprocessing method for pseudorandom sequences applied to ground penetrating radar
US8094063B1 (en) * 2009-06-03 2012-01-10 Lockheed Martin Corporation Image filtering and masking method and system for improving resolution of closely spaced objects in a range-doppler image
CN107678029A (en) * 2017-08-30 2018-02-09 哈尔滨工业大学 A kind of rear orientation projection's imaging method based on the average cross-correlation information of random reference
CN108387896A (en) * 2018-01-03 2018-08-10 厦门大学 A kind of automatic convergence imaging method based on Ground Penetrating Radar echo data
CN110333489A (en) * 2019-07-24 2019-10-15 北京航空航天大学 The processing method to SAR echo data Sidelobe Suppression is combined with RSVA using CNN
CN114331890A (en) * 2021-12-27 2022-04-12 中南大学 Ground penetrating radar B-scan image feature enhancement method and system based on deep learning
CN114758230A (en) * 2022-04-06 2022-07-15 桂林电子科技大学 Underground target body classification and identification method based on attention mechanism

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A Multi-Scale Weighted Back Projection Imaging Technique for Ground Penetrating Radar Applications; Wentai Lei et al.; Remote Sensing; 2014-06-05; full text *
Back projection imaging algorithm of ground penetrating radar based on weighted correlation; Chen Xinpeng et al.; Chinese Journal of Electron Devices; 2022-02-20; vol. 45, no. 1; full text *

Also Published As

Publication number Publication date
CN114966560A (en) 2022-08-30

Similar Documents

Publication Publication Date Title
CN114966560B (en) Ground penetrating radar backward projection imaging method and system
Preston Automated acoustic seabed classification of multibeam images of Stanton Banks
CN107239751B (en) High-resolution SAR image classification method based on non-subsampled contourlet full convolution network
Xiang et al. Superpixel generating algorithm based on pixel intensity and location similarity for SAR image classification
Liu et al. Algorithmic foundation and software tools for extracting shoreline features from remote sensing imagery and LiDAR data
CN109712153A (en) A kind of remote sensing images city superpixel segmentation method
CN111027497B (en) Weak and small target rapid detection method based on high-resolution optical remote sensing image
CN103871039B (en) Generation method for difference chart in SAR (Synthetic Aperture Radar) image change detection
CN109919870A (en) A kind of SAR image speckle suppression method based on BM3D
Yang et al. Evaluating SAR sea ice image segmentation using edge-preserving region-based MRFs
CN111079596A (en) System and method for identifying typical marine artificial target of high-resolution remote sensing image
CN110764087B (en) Sea surface wind direction inverse weighting inversion method based on interference imaging altimeter
Gupta et al. Despeckle and geographical feature extraction in SAR images by wavelet transform
CN116012364B (en) SAR image change detection method and device
CN105139410B (en) The brain tumor MRI image dividing method projected using aerial cross sectional
CN113362293A (en) SAR image ship target rapid detection method based on significance
CN110532615A (en) A kind of decomposition method step by step of shallow sea complicated landform
CN112989940B (en) Raft culture area extraction method based on high-resolution third satellite SAR image
CN116908853B (en) High coherence point selection method, device and equipment
Priyadharsini et al. Underwater acoustic image enhancement using wavelet and KL transform
Bai et al. A fast edge-based two-stage direct sampling method
Fakiris et al. Quantification of regions of interest in swath sonar backscatter images using grey-level and shape geometry descriptors: The TargAn software
Chanussot et al. Shape signatures of fuzzy star-shaped sets based on distance from the centroid
CN110618409A (en) Multi-channel InSAR interferogram simulation method and system considering overlapping and shading
Berger et al. Automated ice-bottom tracking of 2D and 3D ice radar imagery using Viterbi and TRW-S

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant