CN114966560B - Ground penetrating radar backward projection imaging method and system - Google Patents
- Publication number: CN114966560B (application CN202210902645.9A)
- Authority: CN (China)
- Prior art keywords: image, target, imaging, pixel, point
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G01S7/2813—Means providing a modification of the radiation pattern for cancelling noise, clutter or interfering signals, e.g. side lobe suppression, side lobe blanking, null-steering arrays
- G01S13/885—Radar or analogous systems specially adapted for ground probing
- G01S13/89—Radar or analogous systems specially adapted for mapping or imaging
- G01S7/285—Receivers (details of pulse systems)
- G01S7/2927—Extracting wanted echo-signals based on data belonging to a number of consecutive radar periods by deriving and controlling a threshold value
- G01S7/352—Receivers (details of non-pulse systems)
- G01S7/354—Extracting wanted echo-signals (details of non-pulse systems)
- G06N3/044—Recurrent networks, e.g. Hopfield networks
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V10/764—Image or video recognition using classification, e.g. of video objects
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
- G06V10/806—Fusion of extracted features at the sensor, preprocessing, feature extraction or classification level
- G06V10/82—Image or video recognition or understanding using neural networks
Abstract
The invention discloses a ground penetrating radar back projection imaging method and system. The method comprises the following steps: acquiring and preprocessing B-scan data, and constructing a label data set from the preprocessed B-scan data; constructing a YOLOX network, and training the YOLOX network on the label data set; obtaining the target potential region of a B-scan image to be imaged through the trained YOLOX network, and performing back projection imaging only within the target potential region to obtain an initial imaging image; and applying double-threshold processing and integral focusing processing to the initial imaging image to obtain the target imaging image. Because the YOLOX network frames the target potential region in the B-scan image and imaging is performed only within that region, global back projection calculation is avoided and a large amount of computation is saved; meanwhile, the image is enhanced by the double-threshold processing and the integral focusing processing, which improves the imaging quality.
Description
Technical Field
The invention relates to the technical field of ground penetrating radar imaging, in particular to a ground penetrating radar back projection imaging method and system.
Background
Ground Penetrating Radar (GPR) is an effective non-destructive technique for subsurface detection. A transmitting antenna on the ground radiates electromagnetic waves into the subsurface; the waves are reflected and scattered wherever the electromagnetic properties are discontinuous, a receiving antenna on the ground records the reflected signals, and subsurface targets are thereby detected. Owing to its high resolution, high efficiency, low cost and non-destructiveness, GPR is widely applied in fields such as archaeology, civil engineering, physics and geoscience. The reflected signals can be converted into depth profiles according to the different propagation speeds of electromagnetic waves in different subsurface media. The reflection of a subsurface target usually appears as an inverted hyperbola, but the hyperbola alone cannot fully reflect the specific condition of the target. A GPR imaging technique is therefore needed that recovers intensity and distribution information of subsurface targets by analyzing the different characteristics of the multi-channel reflected signals, so as to focus and locate those targets; imaging of subsurface targets is thus a key part of GPR.
The Back Projection (BP) algorithm is a practical and representative algorithm in the field of GPR imaging. In the traditional back projection algorithm, after the imaging area is determined, the area is divided into equally spaced grid cells according to design parameters. For each grid cell in the imaging area, the received signal intensity at each measuring point is found by calculating the time delay from the cell to that measuring point, and the scattering intensity at the cell is finally obtained by accumulation. A conventional back projection algorithm may comprise the following steps:
(1) Calculate the size of the imaging area and the position of each measuring point, and divide the imaging area into equally spaced grid cells. Move the GPR along the measuring line on the surface of the imaging area to obtain the A-scan echo signal at every measuring point, and assemble the A-scan echo signals of all measuring points into a B-scan echo signal.
(2) For each grid cell (x, z) in the imaging area, calculate the two-way propagation delay from the cell to a given measuring point (for a homogeneous medium, τ_n(x, z) = 2·√((x − x_n)² + z²)/v, with x_n the position of measuring point n and v the propagation speed), convert the delay into a time index according to the sampling interval, and index into the echo of that measuring point to obtain the intensity contributed by the cell at that point; add this intensity to a list. Doing this for every measuring point yields, for each cell, the echo scattering intensities of the whole measuring line.
(3) For each grid cell, sum the intensity data from the cell to all measuring points in the list, i.e. sum the different intensities obtained under the different time delays, to obtain the imaging value of the cell. With s_n(t) the echo received at measuring point n and τ_n(x, z) the two-way delay of cell (x, z) to that point, the imaging value can be written as I(x, z) = Σ_{n=1}^{N} s_n(τ_n(x, z)).
(4) Repeat steps (2) and (3) for every grid cell of the whole imaging area to obtain the back projection imaging result.
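The steps above can be sketched numerically as follows. This is an illustrative minimal implementation only, assuming a monostatic antenna, a homogeneous medium with known propagation speed v, and ideal impulse echoes; all names and parameter values are made up for the example:

```python
import numpy as np

def back_projection(bscan, xs_meas, dt, v, xs_grid, zs_grid):
    """Conventional BP: for every grid cell, sum the echo samples of all
    measuring points at the cell's two-way propagation delay."""
    image = np.zeros((len(zs_grid), len(xs_grid)))
    n_t = bscan.shape[0]
    for iz, z in enumerate(zs_grid):
        for ix, x in enumerate(xs_grid):
            for n, xn in enumerate(xs_meas):
                tau = 2.0 * np.hypot(x - xn, z) / v   # two-way delay
                it = int(round(tau / dt))             # time index
                if it < n_t:
                    image[iz, ix] += bscan[it, n]
    return image

# Synthetic example: one point target, ideal impulse echoes.
v, dt = 1.0e8, 1.0e-9                 # propagation speed (m/s), sampling interval (s)
xs_meas = np.linspace(0.0, 1.0, 21)   # measuring points along the line
xt, zt = 0.5, 0.3                     # true target position
n_t = 512
bscan = np.zeros((n_t, len(xs_meas)))
for n, xn in enumerate(xs_meas):      # record the hyperbolic target response
    bscan[int(round(2.0 * np.hypot(xt - xn, zt) / v / dt)), n] = 1.0

xs_grid = np.linspace(0.0, 1.0, 51)
zs_grid = np.linspace(0.05, 0.6, 56)
img = back_projection(bscan, xs_meas, dt, v, xs_grid, zs_grid)
iz, ix = np.unravel_index(np.argmax(img), img.shape)
print(xs_grid[ix], zs_grid[iz])       # peak location; should lie near (0.5, 0.3)
```

The triple loop makes the O(cells × measuring points) cost of global BP explicit, which is exactly the cost the patent avoids by imaging only inside the detected target potential region.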
Although conventional back projection imaging can image subsurface targets to some extent, it is accompanied by strong spurious responses with considerable energy, referred to as artifacts, whose presence makes it difficult to distinguish targets from non-targets. To suppress these artifacts and enhance imaging quality, several improved back projection methods have been proposed. For example, non-patent document 1 analyzes the statistical relationship between different scattering data and designs a weighted BP algorithm that sets a weight for each imaging cell from the mean and variance, reducing the artifacts of the BP result by weighting; non-patent document 2 relates the intensity data of different measuring points and designs a multiplicative cross-correlation BP algorithm that multiplies the data obtained at each grid point pairwise and sums the products, which takes the cross-correlation of the received data into account and also removes a large number of artifacts; non-patent document 3 applies a Coherence Factor (CF) to the imaging image as a weighting factor and designs a BP algorithm combining the coherence factor with back projection, improving the quality of the imaging result.
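To illustrate the coherence-factor weighting mentioned above, here is the standard textbook formulation (not necessarily the exact variant of non-patent document 3): for the N per-measuring-point contributions s_n at a grid cell, CF = |Σ s_n|² / (N · Σ |s_n|²), which approaches 1 when the contributions add coherently (likely a target) and 0 when they cancel (likely an artifact):

```python
import numpy as np

def coherence_factor(contribs):
    """CF = |sum s_n|^2 / (N * sum |s_n|^2) for the per-measuring-point
    contributions at one grid cell; lies in [0, 1], 1 = fully coherent."""
    contribs = np.asarray(contribs, dtype=float)
    denom = len(contribs) * np.sum(contribs ** 2)
    if denom == 0.0:
        return 0.0
    return np.sum(contribs) ** 2 / denom

coherent = coherence_factor([1.0, 1.0, 1.0, 1.0])      # identical contributions
incoherent = coherence_factor([1.0, -1.0, 1.0, -1.0])  # cancelling contributions
print(coherent, incoherent)
```

Multiplying each BP pixel by its CF therefore suppresses cells whose per-trace contributions disagree, which is why CF weighting reduces artifacts.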
These improved back projection methods alleviate the artifact problem of conventional imaging to a certain extent, but the traditional back projection procedure still computes every point of the whole detection area; when the detection area is large, the high computational complexity makes the calculation very slow. Moreover, subsurface targets are usually sparsely distributed, so spending time computing and imaging regions that contain no target wastes time and serves little purpose.
On this basis, how to save computation, reduce calculation time, avoid imaging noise points, and improve imaging quality is an urgent problem in the field of ground penetrating radar imaging.
Reference list
Non-patent literature
Non-patent document 1: "Improved back projection imaging for surface target detection", Wentai Lei et al., Turkish Journal of Electrical Engineering and Computer Sciences, 2013-11-07
Non-patent document 2: "A GPR Imaging Algorithm with Artifacts Suppression", Lin Zhou et al., Proceedings of the XIII International Conference on Ground Penetrating Radar, 2010-08-16
Non-patent document 3: "Coherence Factor Enhancement of Through-Wall Radar Images", Robert J. Burkholder et al., IEEE Antennas and Wireless Propagation Letters, 2010-01-01.
Disclosure of Invention
In view of the above, the present invention provides a ground penetrating radar back projection imaging method and system, aiming to solve the problems that the traditional back projection imaging method is computationally slow and suffers from severe side lobe and artifact interference.
Based on the above purpose, the invention provides a ground penetrating radar back projection imaging method, which comprises the following steps:
acquiring and preprocessing B-scan data, and constructing a label data set from the preprocessed B-scan data, wherein the label data set comprises B-scan images converted from the preprocessed B-scan data and the target rectangular frame labels corresponding to the B-scan images;
constructing a YOLOX network, and training the YOLOX network on the label data set;
obtaining the target potential region of a B-scan image to be imaged through the trained YOLOX network, and performing back projection imaging within the target potential region to obtain an initial imaging image;
performing double-threshold processing on the initial imaging image to obtain an artifact suppression image;
and performing integral focusing processing on the artifact suppression image to obtain a target imaging image.
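The patent does not detail the double-threshold rule at this point, so purely as an assumption, a hysteresis-style dual threshold of the kind used in edge detection (pixels above a high threshold are kept, and mid-strength pixels survive only if connected to a kept pixel) could look like the following sketch; the thresholds and the test image are made up:

```python
import numpy as np

def double_threshold(img, t_low, t_high):
    """Hysteresis-style dual threshold (illustrative assumption, not the
    patent's exact rule): keep pixels >= t_high; keep pixels in
    [t_low, t_high) only if 8-connected to an already-kept pixel."""
    h, w = img.shape
    keep = img >= t_high
    weak = (img >= t_low) & ~keep
    changed = True
    while changed:                       # grow kept regions into weak pixels
        changed = False
        for y in range(h):
            for x in range(w):
                if weak[y, x] and not keep[y, x]:
                    ys = slice(max(0, y - 1), min(h, y + 2))
                    xs = slice(max(0, x - 1), min(w, x + 2))
                    if keep[ys, xs].any():
                        keep[y, x] = True
                        changed = True
    return np.where(keep, img, 0.0)

img = np.array([[0.9, 0.4, 0.0, 0.0],
                [0.0, 0.0, 0.0, 0.0],
                [0.0, 0.0, 0.0, 0.0],
                [0.0, 0.0, 0.0, 0.4]])
out = double_threshold(img, 0.3, 0.8)
```

With these values, the 0.4 next to the strong 0.9 pixel survives, while the isolated 0.4 in the far corner is suppressed as an artifact.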
In addition, the invention also provides a ground penetrating radar back projection imaging system, which comprises:
a data acquisition and processing module, configured to acquire and preprocess B-scan data and construct a label data set from the preprocessed B-scan data, wherein the label data set comprises B-scan images converted from the preprocessed B-scan data and the target rectangular frame labels corresponding to the B-scan images;
a network training module, configured to construct a YOLOX network and train the YOLOX network on the label data set;
a back projection module, configured to obtain the target potential region of a B-scan image to be imaged through the trained YOLOX network and perform back projection imaging within the target potential region to obtain an initial imaging image;
an artifact suppression module, configured to perform double-threshold processing on the initial imaging image to obtain an artifact suppression image;
and a target imaging module, configured to perform integral focusing processing on the artifact suppression image to obtain a target imaging image.
According to the ground penetrating radar back projection imaging method and system above, the trained YOLOX network determines the potential positions of targets before imaging and frames the target potential regions, and only these regions are imaged. This removes a large amount of computation, avoids interference from subsurface noise points or non-target points in the imaging process, and effectively improves imaging quality. The imaged result is then enhanced and position-calibrated by double-threshold processing and integral focusing processing, which removes most side lobes and artifacts and further improves imaging quality. Experimental results show that, compared with existing back projection imaging methods, the imaging efficiency of the proposed method is significantly improved.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flow chart illustrating a ground penetrating radar back projection imaging method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a YOLOX network according to an embodiment of the invention;
FIG. 3 is a schematic diagram of the structure of the YOLOX CSP layer, SPP layer, attention layer and base convolutional layer in one embodiment of the present invention;
FIG. 4 is a schematic diagram of a B-scan image to be imaged according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a target imaging image corresponding to the B-scan image to be imaged shown in FIG. 4;
fig. 6 is a schematic structural diagram of a ground penetrating radar back projection imaging system according to an embodiment of the present invention.
Detailed Description
In order to make the technical problems to be solved, the technical solutions and the advantageous effects of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit it.
As shown in fig. 1, a ground penetrating radar back projection imaging method provided in an embodiment of the present invention specifically includes the following steps:
and S10, acquiring and preprocessing B-scan data, and constructing a tag data set according to the preprocessed B-scan data, wherein the tag data set comprises a B-scan image converted from the preprocessed B-scan data and a target rectangular frame tag corresponding to the B-scan image.
In this embodiment, a Ground Penetrating Radar (GPR) is used to probe a subsurface region to obtain B-scan data. More specifically, a one-dimensional measuring line with a number of measuring points is laid out on the surface in advance, and the GPR scans the subsurface region along the measuring line; during the scan, the relative positions of the GPR transmitting antenna and receiving antenna are fixed and the two move forward synchronously. When the antennas reach the first measuring point, the transmitting antenna radiates electromagnetic waves downward; the waves propagate downward and scatter wherever the medium properties are inhomogeneous, and part of the scattered energy is received by the receiving antenna and recorded as A-scan data. When the antennas move to the next measuring point, the process is repeated to obtain another A-scan. The A-scan data of the different measuring points are then combined into the B-scan data of the subsurface region, in which the horizontal coordinate is the spatial position of each measuring point and the vertical coordinate is the sample index of the time-domain echo.
Further, the acquired B-scan data are preprocessed, e.g. by direct wave removal and denoising; the preprocessed B-scan data are converted into B-scan images, and a label data set for network training is constructed.
In a preferred embodiment, step S10 specifically includes the following steps:
and step S101, detecting the underground area through GPR to obtain N pieces of B-scan data.
And S102, preprocessing the N B-scan data, and converting the preprocessed B-scan data into a B-scan image.
And step S103, marking the target existing area in the N B-scan images to obtain a corresponding target rectangular frame label.
And step S104, dividing N B-scan images containing the target rectangular frame label into a first data set and a test set according to a preset distribution proportion.
And S105, dividing the first data set into a training set and a verification set according to a preset distribution proportion.
And step S106, forming a label data set according to the training set, the verification set and the test set.
Wherein, the preprocessing comprises direct wave removing and denoising; the preset distribution ratio is set according to requirements and can be selected from 9.
In this embodiment, the number, distribution and kinds of subsurface targets are first varied over multiple simulation experiments and field measurements to obtain B-scan data under a variety of detection scenes.
The simulated and measured B-scan data are then preprocessed. Specifically, for simulated B-scan data, target-free background data and target-containing B-scan data of the same simulation scene can both be simulated, and the background data are directly subtracted from the target-containing data to remove the direct wave. For measured B-scan data, the mean value of each row is subtracted from every element of that row, which removes the direct wave and also provides a certain denoising effect.
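The row-mean subtraction described above can be sketched as follows (the B-scan is assumed to be stored as a 2-D array with one row per time sample and one column per measuring point, so the direct wave is roughly constant along each row):

```python
import numpy as np

def remove_direct_wave(bscan):
    """Subtract each row's mean from that row: suppresses the direct wave,
    which arrives at roughly the same time at every measuring point."""
    bscan = np.asarray(bscan, dtype=float)
    return bscan - bscan.mean(axis=1, keepdims=True)

# Toy B-scan: a constant "direct wave" row plus one target sample.
bscan = np.array([[5.0, 5.0, 5.0, 5.0],
                  [0.0, 0.0, 2.0, 0.0]])
clean = remove_direct_wave(bscan)
```

The constant row vanishes entirely, while the localized target response survives (slightly biased by its own contribution to the row mean).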
Each piece of preprocessed B-scan data is then converted into a B-scan image, and a rectangular frame label is marked on the region of each B-scan image where a target exists, giving a labeled B-scan image, i.e. a B-scan image containing a target rectangular frame label.
Finally, all the B-scan images containing target rectangular frame labels are randomly divided into a first data set and a test set at a distribution ratio of 9:1, and the first data set is further divided into a training set and a validation set at the same ratio; the training set, validation set and test set together form the label data set.
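A minimal sketch of such a random split (the file names, seed and the 9:1 ratio are illustrative values, not prescribed by the patent):

```python
import random

def split_dataset(items, ratio=0.9, seed=0):
    """Shuffle items deterministically and split into (first, second) parts."""
    items = list(items)
    random.Random(seed).shuffle(items)
    cut = int(len(items) * ratio)
    return items[:cut], items[cut:]

images = [f"bscan_{i:03d}.png" for i in range(100)]
first, test = split_dataset(images, ratio=0.9)   # 90 / 10
train, val = split_dataset(first, ratio=0.9)     # 81 / 9
```

Seeding the shuffle keeps the split reproducible across training runs.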
It can be understood that steps S101 to S106 yield the label data set and thereby provide data support for YOLOX network training.
Step S20: construct a YOLOX network, and train the YOLOX network on the label data set.
Referring to fig. 2 and 3, the YOLOX network in this embodiment includes a backbone network (Backbone), a neck network (Neck) and a head network (Head). The backbone network extracts the features of the B-scan image (namely its hyperbolic features), the neck network combines and mixes the features, and the head network predicts and classifies them. The YOLOX network takes a B-scan image as input and outputs the four-corner coordinates of each target potential region and the rectangular frame framing that region in the B-scan image.
Further, the backbone network comprises an attention module (Focus), three convolution residual modules (C1, C2 and C3) and a feature stacking module (C4). The attention module consists of a down-sampling layer and a basic convolutional layer (BaseConv), the basic convolutional layer comprising a convolution layer (Conv), a batch normalization layer (BN) and an activation function (SiLU). Each convolution residual module consists of a basic convolutional layer and a CSP layer, where the CSP layer comprises a trunk branch, a residual side branch and a channel dimension splicing layer (Concat); the trunk branch comprises a basic convolutional layer, a residual stacking layer and an addition layer, and the residual side branch comprises a basic convolutional layer. The feature stacking module consists of a basic convolutional layer, an SPP layer and a CSP layer, where the SPP layer comprises two basic convolutional layers, an up-sampling stacking layer and a channel dimension splicing layer, and the up-sampling stacking layer comprises three pooling branches and one stacking branch.
The neck network comprises two up-sampling fusion modules and two feature fusion modules, wherein each up-sampling fusion module consists of a basic convolution layer, an up-sampling layer, a channel dimension splicing layer and a CSP layer; the feature fusion module is composed of a basic convolution layer, a channel dimension splicing layer and a CSP layer.
The head network comprises three feature judgment modules. Each feature judgment module consists of a basic convolutional layer, a first attribute judgment branch, a second attribute judgment branch and a channel dimension splicing layer; the first attribute judgment branch comprises two basic convolutional layers and one convolution layer, and the second attribute judgment branch comprises two basic convolutional layers, a coordinate prediction branch and a binary classification branch.
Further, the specific process of the YOLOX network for target detection on the B-scan image comprises the following steps:
First, the input B-scan image is resized to 640 × 640 using bilinear interpolation, and an empty dictionary is created to hold the outputs of selected stages; the initial number of channels is set to 64.
Then, feature extraction is carried out through a backbone network, and the implementation process is as follows:
The B-scan image is input to the backbone network and features are first extracted by the attention module. More specifically, the down-sampling layer samples the high-resolution B-scan image every other pixel to obtain several low-resolution images, which are stacked along the channel dimension, converting spatial information in height and width into channels; the input size is 640 × 640 × 3 (height × width × channels) and the output size is 320 × 320 × 12. Features are then extracted by the basic convolutional layer, with input size 320 × 320 × 12 and output size 320 × 320 × 64.
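The every-other-pixel slicing of the Focus module is a standard space-to-depth rearrangement; the NumPy sketch below is illustrative (the network itself operates on tensors, not arrays):

```python
import numpy as np

def focus_slice(img):
    """Rearrange an (H, W, C) image into (H/2, W/2, 4C) by stacking the
    four every-other-pixel sub-images along the channel dimension."""
    return np.concatenate([img[0::2, 0::2, :],   # even rows, even cols
                           img[1::2, 0::2, :],   # odd rows, even cols
                           img[0::2, 1::2, :],   # even rows, odd cols
                           img[1::2, 1::2, :]],  # odd rows, odd cols
                          axis=-1)

x = np.zeros((640, 640, 3))
y = focus_slice(x)
print(y.shape)   # (320, 320, 12)
```

No pixel values are lost: the 640 × 640 × 3 plane information is exactly repacked into 320 × 320 × 12, matching the sizes quoted above.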
Then, the output of the attention module is input to convolution residual module one (C1), in which the input size of the basic convolutional layer is 320 × 320 × 64 and the output size is 160 × 160 × 128; the CSP layer has an input size of 160 × 160 × 128 and an output size of 160 × 160 × 128.
The output of convolution residual module one is input to convolution residual module two (C2), in which the input size of the basic convolutional layer is 160 × 160 × 128 and the output size is 80 × 80 × 256; the CSP layer has an input size of 80 × 80 × 256 and an output size of 80 × 80 × 256. The output of convolution residual module two is saved into the dictionary and recorded as the first feature.
The output of convolution residual module two is input to convolution residual module three (C3), in which the input size of the basic convolutional layer is 80 × 80 × 256 and the output size is 40 × 40 × 512; the CSP layer has an input size of 40 × 40 × 512 and an output size of 40 × 40 × 512. The output of convolution residual module three is saved into the dictionary and recorded as the second feature.
The output of convolution residual module three is input to the feature stacking module (C4), in which the input size of the basic convolutional layer is 40 × 40 × 512 and the output size is 20 × 20 × 1024; the input and output sizes of the SPP layer are both 20 × 20 × 1024, as are those of the CSP layer. The output of the feature stacking module is saved into the dictionary and recorded as the third feature.
Next, feature mixing and feature combining are performed through a neck network, and the implementation process is as follows:
the third feature in the dictionary (i.e., the output of the feature stacking module) is input to the first up-sampling fusion module. In this module, the basic convolution layer takes the third feature, with an input size of 20 × 20 × 1024 and an output size of 20 × 20 × 512; this output is recorded as the first element. The up-sampling layer has an input size of 20 × 20 × 512 and an output size of 40 × 40 × 512. The channel dimension splicing layer stacks the output of the up-sampling layer with the second feature in the dictionary (i.e., the output of convolution residual module three) along the channel dimension, with an output size of 40 × 40 × 1024. The CSP layer has an input size of 40 × 40 × 1024 and an output size of 40 × 40 × 512.
The output of the first up-sampling fusion module is input to the second up-sampling fusion module. In this module, the basic convolution layer has an input size of 40 × 40 × 512 and an output size of 40 × 40 × 256; this output is recorded as the second element. The up-sampling layer has an input size of 40 × 40 × 256 and an output size of 80 × 80 × 256. The channel dimension splicing layer stacks the output of the up-sampling layer with the first feature in the dictionary (i.e., the output of convolution residual module two) along the channel dimension, with an output size of 80 × 80 × 512. The CSP layer has an input size of 80 × 80 × 512 and an output size of 80 × 80 × 256; this output is recorded as the first fusion feature.
The output of the second up-sampling fusion module is input to feature fusion module one. In this module, the basic convolution layer takes the first fusion feature, with an input size of 80 × 80 × 256 and an output size of 40 × 40 × 256. The channel dimension splicing layer stacks the output of the basic convolution layer with the second element along the channel dimension, with an input size of 40 × 40 × 256 and an output size of 40 × 40 × 512. The CSP layer has an input size of 40 × 40 × 512 and an output size of 40 × 40 × 512; this output is recorded as the second fusion feature.
The output of feature fusion module one is input to feature fusion module two. In this module, the basic convolution layer takes the second fusion feature, with an input size of 40 × 40 × 512 and an output size of 20 × 20 × 512. The channel dimension splicing layer stacks the output of the basic convolution layer with the first element along the channel dimension, with an input size of 20 × 20 × 512 and an output size of 20 × 20 × 1024. The CSP layer has an input size of 20 × 20 × 1024 and an output size of 20 × 20 × 1024; this output is recorded as the third fusion feature.
And finally, acquiring a target detection result through a head network, wherein the implementation process comprises the following steps:
the first fusion feature is input to feature judgment module one. In this module, the basic convolution layer takes the input first fusion feature, with an input size of 80 × 80 × 256 and an output size of 80 × 80 × 256. The first attribute judgment branch judges the category of each feature point; it comprises basic convolution layer one with an input size of 80 × 80 × 256 and an output size of 80 × 80 × 256, basic convolution layer two with an input size of 80 × 80 × 256 and an output size of 80 × 80 × 256, and a convolution layer with an input size of 80 × 80 × 256 and an output size of 80 × 80 × 1 (1 being the number of classes). The second attribute judgment branch judges whether a feature point corresponds to an object and regresses the corresponding coefficients; it comprises basic convolution layer three with an input size of 80 × 80 × 256 and an output size of 80 × 80 × 256, basic convolution layer four with an input size of 80 × 80 × 256 and an output size of 80 × 80 × 256, and a coordinate prediction branch and a binary classification branch both connected to basic convolution layer four. The coordinate prediction branch regresses the four-corner coordinate position of the rectangular frame corresponding to the predicted target potential area and comprises a convolution layer with an input size of 80 × 80 × 256 and an output size of 80 × 80 × 4 (4 being the number of coordinates). The binary classification branch judges whether a feature point is target or background and comprises a convolution layer with an input size of 80 × 80 × 256 and an output size of 80 × 80 × 1. The channel dimension splicing layer stacks the outputs of the first attribute judgment branch, the coordinate prediction branch and the binary classification branch, with an output size of 80 × 80 × 6.
And inputting the second fusion feature into a second feature judgment module, wherein the second feature judgment module is similar to the first feature judgment module in the implementation process, and the difference is that the output size of the channel dimension splicing layer in the second feature judgment module is 40 × 40 × 6, which is not described herein again.
And inputting the third fusion feature into a feature judgment module three, wherein the implementation process of the feature judgment module three is similar to that of the feature judgment module one, and the difference is that the output size of the channel dimension splicing layer in the feature judgment module three is 20 × 20 × 6, which is not described herein again.
That is, for each B-scan data input, the YOLOX network outputs a corresponding target potential region and the four-corner coordinate position of the rectangular box framing the target potential region.
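The three output scales above follow directly from the strides of the three feature levels. As a quick sanity check, here is an illustrative helper (the 640 × 640 input size and strides 8/16/32 are inferred from the feature-map sizes, not stated explicitly in the text):

```python
# Hypothetical helper: derive the three YOLOX head output shapes
# (80x80x6, 40x40x6, 20x20x6) from an assumed 640x640 network input.
def head_output_shapes(input_size=640, strides=(8, 16, 32), channels=6):
    # channels = 1 class score + 4 corner coordinates + 1 target/background score
    return [(input_size // s, input_size // s, channels) for s in strides]

print(head_output_shapes())
```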
And step S30, acquiring a target potential area of the B-scan image to be imaged through the trained YOLOX network, and performing back projection imaging in the target potential area to obtain an initial imaging image.
In step S30, the B-scan image to be imaged is a B-scan image obtained by preprocessing and converting the B-scan data needing to be imaged.
Specifically, a B-scan image to be imaged is input into a trained YOLOX network, a target potential region is obtained through the YOLOX network, and processing is performed in the target potential region through a Back Projection (BP) algorithm based on delay summation to obtain an initial imaging image. The backward projection algorithm based on delay and sum can be a traditional BP algorithm, and the implementation process is as follows:
step a, acquiring the size of an imaging area according to the size of a B-scan image to be imaged and the time window of B-scan data.
Suppose the size of the B-scan image to be imaged is N_t × N_x, where N_t is the number of time sampling points of a single-channel A-scan and N_x is the number of measure points along the measuring-line direction (i.e., the number of A-scan traces), and the time window of the B-scan data corresponding to the B-scan image to be imaged is T. The size of the imaging area is H × L, where H is the imaging depth and L is the lateral imaging extent along the measuring-line direction.
At this time, the lateral imaging extent of the imaging area along the measuring-line direction can be determined directly from the number of measure points of the B-scan image to be imaged along the measuring-line direction, and can be expressed as L = N_x · Δx, where Δx is the spacing between adjacent measure points. The imaging depth of the imaging area is obtained from the time window T of the B-scan data and the propagation velocity v of the electromagnetic wave, and can be expressed as H = v · T / 2, where the propagation velocity of the electromagnetic wave is v = c / √ε_r, c is the speed of light, and ε_r is the relative permittivity of the subsurface medium.
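The velocity and depth relations can be sketched in Python (function names are illustrative; ε_r = 9 and a 40 ns time window are arbitrary example values):

```python
import math

C = 3e8  # speed of light, m/s

def wave_velocity(eps_r):
    """v = c / sqrt(eps_r): propagation velocity in the subsurface medium."""
    return C / math.sqrt(eps_r)

def imaging_depth(time_window, eps_r):
    """H = v * T / 2: the factor 2 accounts for the two-way travel time."""
    return wave_velocity(eps_r) * time_window / 2.0

v = wave_velocity(9.0)          # 1.0e8 m/s
H = imaging_depth(40e-9, 9.0)   # 2.0 m for a 40 ns time window
```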
And b, acquiring the size of the initial imaging image according to the size of the imaging area and the preset size of each imaging unit in the imaging area.
Assume that the target size of the initial imaging image to be obtained is M_z × M_x, where M_z is the discrete number in the depth direction and M_x is the discrete number along the measuring-line direction. Each imaging unit in the imaging area has a size of d_z × d_x, where d_z is the size of each imaging unit in the depth direction and d_x is the size of each imaging unit along the measuring-line direction.
At this time, the imaging size of the initial imaging image along the measuring-line direction is first determined from the imaging coefficient k, which is set by the imaging precision and imaging resolution, and from the size of the B-scan image along the measuring-line direction, and can be expressed as M_x = k · N_x, where the imaging coefficient k is a positive integer. Then the size of each imaging unit along the measuring-line direction is obtained from the lateral imaging extent L of the imaging area and the imaging size M_x of the initial imaging image along the measuring-line direction, and can be expressed as d_x = L / M_x. Further, each imaging unit in the imaging area is square, i.e., d_z = d_x, so the imaging size of the initial imaging image in the depth direction is obtained from the imaging depth H of the imaging area and the imaging unit size in the depth direction, and can be expressed as M_z = H / d_z.
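Under these definitions, the grid sizing can be sketched as follows (the numbers are illustrative, matching the 2 m line and 0.6 m depth used later in the simulations; k = 4 is an assumed imaging coefficient):

```python
def grid_dimensions(n_x, lateral_extent, depth, k=4):
    """Pixel counts and cell size of the initial imaging image:
    M_x = k * N_x along the line; square cells d = L / M_x; M_z = H / d."""
    m_x = k * n_x
    d = lateral_extent / m_x
    m_z = int(round(depth / d))
    return m_x, m_z, d

m_x, m_z, d = grid_dimensions(n_x=100, lateral_extent=2.0, depth=0.6, k=4)
# 400 x 120 pixels with 5 mm square cells
```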
Step c, acquiring the two-way time delay from each imaging unit in the imaging area to the antenna at each measure point, where the two-way time delay is calculated as:

τ(i, j, m) = 2 · √((x_j − x_m)² + (z_i − z_m)²) / v

where (i, j) indexes the imaging unit in row i and column j, (x_j, z_i) are the position coordinates of the imaging unit, m is the measure-point number, (x_m, z_m) is the coordinate position of the antenna at the m-th measure point, and τ(i, j, m) is the two-way time delay from the imaging unit to the antenna at the m-th measure point.
Step d, acquiring the pixel value corresponding to each imaging unit according to a preset imaging model to form the initial imaging image. The imaging model is:

I(i, j) = Σ_{m = 1..N_x} A_m( round( τ(i, j, m) / Δt ) )

where I(i, j) is the pixel value, after focusing, of the pixel point corresponding to the imaging unit in row i and column j, A_m is the m-th trace of A-scan data, τ(i, j, m) is the two-way time delay from the imaging unit to the antenna at the m-th measure point, and Δt is the length of time represented by each grid in the imaging area, which can be expressed as Δt = T / N_t, where T is the time window of the B-scan data, N_t is the number of time sampling points of a single-channel A-scan, and N_x is the number of measure points along the measuring-line direction.
That is, the two-way time delay from each imaging unit to the antenna at each measure point is input into the imaging model to obtain the pixel value of the pixel point corresponding to that imaging unit, and all pixel points with determined pixel values form the initial imaging image.
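Steps a to d amount to the classical delay-and-sum loop. A minimal sketch (function and variable names are illustrative; the antenna is assumed at the surface, z = 0):

```python
import math

def bp_image(bscan, dt, v, xs, zs, ant_x):
    """Naive delay-and-sum back projection.
    bscan[m][t]: t-th time sample of the m-th A-scan trace;
    xs, zs: cell-center coordinates; ant_x[m]: antenna x at measure point m."""
    n_t = len(bscan[0])
    image = [[0.0] * len(xs) for _ in zs]
    for i, z in enumerate(zs):
        for j, x in enumerate(xs):
            acc = 0.0
            for m, a in enumerate(ant_x):
                tau = 2.0 * math.hypot(x - a, z) / v   # two-way delay
                t_idx = int(round(tau / dt))           # nearest time sample
                if 0 <= t_idx < n_t:
                    acc += bscan[m][t_idx]
            image[i][j] = acc
    return image
```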
In a preferred embodiment, the optimized BP algorithm is used to process in the target potential region to obtain an initial imaging image, in which case, step S30 may include the following steps:
step S301, acquiring a B-scan image to be imaged, and determining the size of an imaging area corresponding to the B-scan image to be imaged;
step S302, inputting a B-scan image to be imaged into a trained Yolox network, and acquiring four-corner coordinates of a target potential area and a rectangular frame for framing the target potential area through the Yolox network;
step S303, mapping the target potential area into the imaging area according to the size of the B-scan image to be imaged, the size of the imaging area and the four-corner coordinates of the rectangular frame;
and step S304, performing time delay calculation, accumulation and imaging in the imaging area after the mapping processing to obtain an initial imaging image.
Specifically, the B-scan image to be imaged is input into the trained YOLOX network, which outputs the target potential area after target detection together with the four-corner coordinate positions of the rectangular frame framing the target potential area. The size of the imaging area is then calculated from preset parameters, with the calculation formulas:

L = Δ · N_b,  H = v · T / 2

where L is the length of the imaging area along the measuring-line direction, H is the imaging depth of the imaging area, Δ is the length and width of each grid in the back-projection imaging, N_b is the number of traces scanned in the B-scan, T is the time window of the B-scan, and v is the propagation velocity of electromagnetic waves in the underground medium.
According to the four-corner coordinate positions of the rectangular frame output by the YOLOX network, the four corners are mapped to the actual imaging area, forming the target potential area within the imaging area; time delay calculation, accumulation and imaging are then performed within the target potential area to obtain the initial imaging image. For each imaging unit in the mapped area, the two-way time delay from the imaging unit to the antenna is acquired, and the pixel value of each focused pixel point is obtained through an imaging optimization model, which can be expressed as:

I(i, j) = Σ_{m = 1..N_b} [ A_m(⌊τ/Δt⌋) · (⌈τ/Δt⌉ − τ/Δt) + A_m(⌈τ/Δt⌉) · (τ/Δt − ⌊τ/Δt⌋) ]

where I(i, j) is the pixel value of any focused pixel point, A_m is the m-th trace of A-scan data, Δt is the length of time represented by each grid in the imaging area, ⌈·⌉ is the round-up (ceiling) function, ⌊·⌋ is the round-down (floor) function, and τ = τ(i, j, m) is the two-way time delay from the imaging unit in row i and column j to the antenna at the m-th measure point.
It can be understood that the embodiment performs processing in the target potential region through the optimized BP algorithm, so that the calculation speed is increased, and the imaging quality is improved to a certain extent.
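One plausible reading of the floor and ceiling terms in the optimization model is linear interpolation of each A-scan at the fractional delay index, rather than rounding to the nearest sample; a sketch under that assumption:

```python
import math

def interp_sample(ascan, tau, dt):
    """Linearly interpolate an A-scan at the fractional index q = tau / dt,
    weighting the floor and ceiling samples by their distance to q."""
    q = tau / dt
    lo, hi = math.floor(q), math.ceil(q)
    if lo < 0 or hi >= len(ascan):
        return 0.0            # delay falls outside the recorded time window
    if lo == hi:
        return ascan[lo]      # q is exactly on a sample
    return ascan[lo] * (hi - q) + ascan[hi] * (q - lo)
```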
And S40, carrying out double-threshold processing on the initial imaging image to obtain an artifact-suppressed image.
In step S40, the initial imaging image obtained in step S30 is processed using a dual-threshold segmentation algorithm. Specifically, each pixel point in the initial imaging image is segmented into target or background using a preset amplitude threshold and a preset similarity threshold, thereby suppressing artifacts. The amplitude threshold is the boundary that distinguishes target points from background points in the initial imaging image: pixel points satisfying the amplitude threshold are directly classified as targets, and pixel points not satisfying it are classified as undetermined points. The similarity threshold is the boundary on the similarity between a segmented target point and a background point; it judges whether an undetermined point is a target point or a background point according to the similarity between the undetermined point and the target points.
Preferably, step S40 includes the steps of:
step S401, according to the size relation of the pixel points in the initial imaging image, an amplitude threshold value is obtained.
In step S401, the amplitude threshold is a boundary that distinguishes a target point from a background point in the initial imaged image. In the initial imaging image, if the pixel value of a certain pixel point is larger than the amplitude threshold, the pixel point is determined to be a target point, otherwise, the pixel point is an undetermined point, the undetermined point cannot be directly judged to be a target or a background, and subsequently, judgment can be carried out according to the similarity threshold.
Preferably, when the target potential region includes a positive pixel point and a negative pixel point, step S401 includes the following steps:
step a, determining a first positive pixel point with the maximum positive pixel value and a first negative pixel point with the minimum negative pixel value in a target potential region, and acquiring the pixel value of the first positive pixel point and the pixel value of the first negative pixel point;
b, obtaining a target part through the distance between the coordinates of the first positive pixel point and the first negative pixel point;
step c, obtaining a background part according to the distance between the first positive pixel point and the upper edge of the target potential area;
step d, acquiring the occupation ratio of the target area according to the target part and the background part;
and e, acquiring two amplitude thresholds which are respectively a positive amplitude threshold and a negative amplitude threshold according to the pixel value of the first positive pixel point, the pixel value of the first negative pixel point and the target area ratio.
Understandably, in the process of detecting the underground region, due to the differential effect of the GPR transmitting antenna, the electromagnetic wave radiated to the underground through the transmitting antenna has a waveform with zero mean, that is, the time-domain waveform of the radiation signal has positive and negative values. Correspondingly, in the backward projection imaging process, the values of the scattering echoes on the time delay curves corresponding to one part of the imaging units are superposed to form a positive value, and the values of the scattering echoes on the time delay curves corresponding to the other part of the imaging units are superposed to form a negative value. At this time, in the imaging region near the true position of the object, a positive focus region and a negative focus region appear.
Firstly, the positive pixel point with the maximum value (i.e., the first positive pixel point) and the negative pixel point with the maximum absolute value (the first negative pixel point) in the target potential region are obtained, and the pixel values of these two pixel points are recorded as A_pos and A_neg, respectively.
Then, the distance between the coordinates of the first positive pixel point and the first negative pixel point is obtained through a first distance evaluation model, and the output of the model is recorded as the target part, i.e., the higher-energy part of the initial imaging image. The first distance evaluation model is:

D_t = √((x_p − x_n)² + (z_p − z_n)²)

where x_p is the index value of the first positive pixel point in the surface direction, x_n is the index value of the first negative pixel point in the surface direction, z_p is the index value of the first positive pixel point in the depth direction, z_n is the index value of the first negative pixel point in the depth direction, and D_t is the target part.
Next, the distance from the first positive pixel point to the upper edge of the target potential region is obtained through a second distance evaluation model, and the output of the model is recorded as the background part, i.e., the lower-energy part of the initial imaging image. The second distance evaluation model is:

D_b = z_p − z_0

where z_p is the index value of the first positive pixel point in the depth direction, z_0 is the index value of the upper-left corner of the rectangular frame in the depth direction, and D_b is the background part.
Further, the target part and the background part are input into a ratio evaluation model, and the target area ratio R output by the model, together with the pixel value A_pos of the first positive pixel point and the pixel value A_neg of the first negative pixel point, is used to generate the positive amplitude threshold T_pos and the negative amplitude threshold T_neg, where T_pos = R · A_pos and T_neg = R · A_neg. The area ratio evaluation model can be expressed as:

R = D_t / (D_t + D_b)

where R is the target area ratio.
It can be understood that in this embodiment the positive amplitude threshold and the negative amplitude threshold are obtained from the target area ratio and the pixel values of the pixel points with the largest absolute amplitudes, which provides a feasible method for determining the amplitude threshold.
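Steps a to e can be condensed into a small helper (a sketch; coordinate and variable names are illustrative, and the example values are arbitrary):

```python
import math

def amplitude_thresholds(a_pos, a_neg, pos_xy, neg_xy, box_top_z):
    """Dual amplitude thresholds from the target-area ratio.
    pos_xy, neg_xy: (x, z) indices of the extreme positive/negative pixels;
    box_top_z: depth index of the upper edge of the detection rectangle."""
    target = math.hypot(pos_xy[0] - neg_xy[0], pos_xy[1] - neg_xy[1])
    background = pos_xy[1] - box_top_z
    ratio = target / (target + background)   # target area ratio
    return ratio * a_pos, ratio * a_neg

t_pos, t_neg = amplitude_thresholds(10.0, -8.0, (5, 20), (5, 26), box_top_z=17)
```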
Step S402, a cross-shaped template is constructed, and a similarity threshold value and a template length are obtained according to the cross-shaped template.
In step S402, the similarity threshold is a boundary for distinguishing the similarity between the target point and the background point by calculating the similarity between the undetermined point and the target point for the undetermined point in the initial imaging image that does not satisfy the amplitude threshold.
Preferably, step S402 includes the steps of:
step a, determining the position of a first pixel point with the maximum pixel absolute value in a target potential area, constructing a cross-shaped template based on the position of the first pixel point, and acquiring initial state values of the first pixel point in four directions of the cross-shaped template;
b, traversing each pixel point from near to far in each communication direction of the first pixel point, comparing the pixel value of each pixel point with an amplitude threshold value, and updating the state value of each communication direction according to the comparison result;
step c, acquiring a minimum state value from the four updated state values, and marking the minimum state value as a similarity threshold;
and d, acquiring a state average value, and marking the state average value as the length of the template.
In this embodiment, for the similarity threshold and template length of the positive pixel points, the position of the first positive pixel point with the maximum positive pixel value in the target potential region is first found, a cross-shaped template is constructed based on this position, and an initial state value is set for each of the four directions (up, down, left and right) of the first positive pixel point, each initial state value being 0. Then, the pixel points in each direction are traversed in order from near to far; for each direction, if the pixel value of a traversed point is greater than or equal to the positive amplitude threshold, the state value of that direction is incremented by one to obtain the updated state value, and traversal stops as soon as a pixel point in that direction falls below the positive amplitude threshold.
Further, the smallest one of the updated four state values is acquired as a similarity threshold, and the average value of the updated four state values is acquired as a template length. It will be appreciated that where the average is not an integer, the average may be rounded down.
For the similarity threshold and template length of the negative pixel points, the acquisition process is the same as that for the positive pixel points.
It can be understood that in this embodiment the similarity threshold is obtained from the state values of the first pixel point in the four directions of the cross-shaped template, which provides a feasible method for determining the similarity threshold.
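The per-direction state counting of steps a to c can be sketched as follows (names are illustrative; `img` is a 2-D list of pixel values):

```python
def direction_states(img, center, threshold):
    """Count, in each of the four cross directions from `center`, how many
    consecutive neighbours have a value >= threshold.  The minimum of the
    four counts gives the similarity threshold, their mean the template length."""
    rows, cols = len(img), len(img[0])
    r0, c0 = center
    states = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # up, down, left, right
        count, r, c = 0, r0 + dr, c0 + dc
        while 0 <= r < rows and 0 <= c < cols and img[r][c] >= threshold:
            count += 1
            r, c = r + dr, c + dc
        states.append(count)
    return states
```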
Step S403, each pixel point in the initial imaging image is processed according to the amplitude threshold, the similarity threshold and the cross-shaped template, and an artifact-suppressed image is obtained.
Preferably, step S403 includes the steps of:
step a, according to a pixel value of a first positive pixel point, a pixel value of a first negative pixel point, an amplitude threshold value and a similarity threshold value in a target potential region, a pixel point judgment model is constructed.
B, acquiring the type of each pixel point in the target potential area according to the pixel point judgment model; the types of the pixel points comprise target points and undetermined points.
Specifically, the pixel point judgment model judges each pixel point p in the initial imaging image as follows. If the pixel value of p lies between the positive amplitude threshold T_pos and the pixel value A_pos of the first positive pixel point, or between the pixel value A_neg of the first negative pixel point and the negative amplitude threshold T_neg, the pixel point p is determined to be a target point and is not processed. If the pixel value of p lies between 0 and T_pos, or between T_neg and 0, the pixel point p is determined to be an undetermined point; it is not directly judged to be target or background, and the similarity threshold is used subsequently for the judgment.
And c, when the pixel point is the undetermined point, constructing a cross template of the undetermined point, and acquiring the target similarity of the undetermined point according to the cross template.
Specifically, for each undetermined point, a cross-shaped template is constructed with the template length obtained in step S402, i.e., a template of equal, finite length in all four directions. For the undetermined point, if the pixel value of a pixel point in some direction of the cross-shaped template is greater than or equal to the positive amplitude threshold or less than or equal to the negative amplitude threshold, the target similarity value of the undetermined point in that direction is incremented by 1, until every pixel point in the cross-shaped template has been judged; the largest of the four directional values is then taken as the target similarity value of the undetermined point.
And d, detecting whether the target similarity of the undetermined point is greater than a similarity threshold.
And e, if so, determining the undetermined point as a target point, otherwise, determining the undetermined point as a background point, and setting the pixel value of the background point to be zero.
Specifically, when the target similarity value of the undetermined point is greater than or equal to the similarity threshold, determining the undetermined point as a target point, and not processing the target point; otherwise, determining the undetermined point as a background or an artifact, and performing zero setting processing on the undetermined point.
It can be understood that, in the embodiment, the target point and the undetermined point are judged through the amplitude threshold, and then the undetermined point is judged to be the target point or the background point through the similarity threshold, so that side lobes and artifacts in the imaging image can be effectively suppressed.
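The decision rule described above reduces to a per-pixel classification (a sketch with illustrative names; the similarity test for pending points is applied afterwards):

```python
def classify_pixel(value, t_pos, t_neg, a_pos, a_neg):
    """Dual-threshold rule: strong amplitudes are targets, weak non-zero
    amplitudes are pending (decided later by the similarity threshold)."""
    if t_pos <= value <= a_pos or a_neg <= value <= t_neg:
        return "target"
    if 0 < value < t_pos or t_neg < value < 0:
        return "pending"
    return "background"
```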
And S50, carrying out integral focusing processing on the artifact-suppressed image to obtain a target imaging image.
In step S50, each column of the artifact-suppressed image is processed by the integral focusing algorithm; specifically, for each column that is not all zeros, the pixel values vary in a complicated way from top to bottom. The integral focusing algorithm is based on the principle of definite integration: the value of each pixel point in each column is updated according to the top-to-bottom variation trend of the pixel values of the artifact-suppressed image. After all columns have been updated, each column of the target potential area shows a trend in which the pixel values first increase and then decrease, i.e., each section with positive-negative variation in a target potential area is focused, yielding the target imaging image.
Preferably, step S50 specifically includes the steps of:
step S501, for each column of the artifact-suppressed image, acquiring an initial integral value of each pixel point;
step S502, traversing each pixel point of each column, and updating the integral value according to the pixel value of the pixel point;
step S503, when the pixel value of the pixel point meets the preset updating condition, the pixel value after the pixel point is updated is obtained according to the integral value before and after the updating;
step S504, calibrating the focused pixel position according to the position proportion relation of the central pixel points before and after focusing;
and step S505, obtaining a target imaging image according to the pixel value of each pixel point after updating and the pixel position after calibration.
Understandably, the artifact-suppressed image obtained by dual-threshold processing has certain characteristics. The imaged region presents two features: the first is a column without a target portion, in which all pixel values are 0; the second is a column containing a target portion, in which at least one pixel value is non-zero. Columns with the first feature are not processed. For the second feature, a column containing the target portion can be taken; its variation trend is: 0 → the column maximum → 0 (held for a short segment) → the column minimum → 0. After imaging, an imaging point is generated for each target potential area, and the amplitude variation trend of a column in the imaged target portion should be: 0 → the column maximum → 0. The integral focusing algorithm is constructed from these two amplitude variation trends.
According to the accumulation characteristic of the integral focusing algorithm, the initial integral value of each pixel point is set to P = 0. For each pixel point, a test interval of preset length 2n + 1 is obtained, where n is any integer in the interval [1, D] and D is the distance between the coordinates of the positive and negative pixel points with the largest absolute values; that is, the test interval contains the pixel point at the center of the interval and n pixel points above and below it. For a given column in the target potential area, traversal starts from the pixel point with the smallest index; in each traversal step the integral value is first updated to obtain the updated integral value, with the update formula:

P_new = P_old + a(i)

where P_new is the updated integral value, P_old is the integral value before the update, and a(i) is the pixel value of the pixel point being traversed.
For a given pixel point, it is detected whether its pixel value satisfies a preset update condition. Specifically, the update condition is determined to be satisfied if the pixel point is not a maximum within its interval, i.e., at least one continuous pixel at each end has the same variation trend; or if the value of the pixel point is 0 and the trend of the n pixels before it in the interval falls and then holds at 0, or the trend of the n pixels after it holds at 0 and then rises; or if the pixel value of the point is less than 0.
When the update condition is satisfied, the pixel value of the pixel point is updated to the integral value obtained in the current traversal round, i.e., the pixel values from the initial pixel point to the current pixel point are accumulated to obtain the updated pixel value, which can be expressed as:

b(i) = P_new, if the variation trends of the first n and the last n pixel points of the test interval W_i satisfy the update condition; otherwise b(i) = a(i)

where b(i) is the updated pixel value, a(i) is the pixel value before the update, W_i is the test interval of the currently traversed pixel point, and P_new is the integral value updated in the current traversal round.
Further, for the image after the integral focusing processing, the position of the focusing center may deviate to some extent, and the focused position may be calibrated by using a proportional relationship based on the depth position of the center pixel point before and after focusing, so as to reduce the positioning error of the target, and the calibrated depth position may be expressed as:
wherein the quantities in the formula are, respectively, the calibrated depth position of each pixel, the depth position of each pixel before calibration, the depth position of the center pixel point before focusing (i.e., the pixel point with the maximum positive amplitude), and the depth position of the center pixel point after focusing.
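The proportional calibration can be illustrated with an assumed form that scales each pixel depth by the ratio of the center-pixel depth before focusing to its depth after focusing; the patent's exact formula is in an unreproduced image, so this is a hypothetical rendering.

```python
def calibrate_depth(z, z_center_pre, z_center_post):
    """Assumed proportional rule: z_cal = z * (z_c_pre / z_c_post).

    Under this assumption the focused center pixel maps back exactly
    to its pre-focus depth, reducing the target positioning error.
    """
    return z * z_center_pre / z_center_post
```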
And finally, forming a target imaging image by all pixel points with updated pixel values and calibrated depth positions.
It can be understood that in the embodiment, the pixel value of each pixel point in the artifact-suppressed image is updated through the integral focusing algorithm, and the depth positions of the central pixel points before and after focusing are calibrated, so that the imaging quality is improved, and the accuracy of the imaging position is ensured.
Further, experiments can be performed under simulation conditions and actual measurement conditions respectively to verify the effect of the ground penetrating radar back projection imaging method provided by the embodiment.
For simulated B-scan data, GPRMax software is first used for simulation: the length of the detection scene is set to 2.2 m and the depth to 0.6 m, the transmitting antenna and receiving antenna move along the survey line starting from 0.1 m with a step of 0.02 m, and each simulation yields B-scan data consisting of 100 A-scans. To improve the generality of the method across different scenes, B-scan data can be acquired for point targets (such as cylinders) of different numbers, sizes and positions underground, with the target radius set to 1-10 cm and the burial depth to 0.1-0.4 m. After a large amount of B-scan data is obtained through simulation, the data are preprocessed to remove the direct wave, and the preprocessed B-scan data are converted into B-scan images.
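The simulated antenna trajectory above (100 A-scans starting at 0.1 m in 0.02 m steps, inside the 2.2 m scene) can be reproduced with a small helper:

```python
def scan_positions(start=0.1, step=0.02, n_ascans=100):
    """Antenna positions along the survey line for one simulation:
    100 A-scans starting at 0.1 m with a 0.02 m step, as described."""
    return [round(start + i * step, 4) for i in range(n_ascans)]
```

The last position, 2.08 m, stays within the 2.2 m detection scene.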
Then, a YOLOX network is constructed using a PyTorch-based deep learning framework. 800 B-scan images are selected and labeled with target rectangular frame labels, and the labeled B-scan images are divided into a first data set and a test set according to a 9:1 distribution proportion, the first data set being further divided into a training set and a validation set. The training set, validation set and test set are fed into the YOLOX network for training and testing; parameter tuning and training are repeated until the YOLOX network detects the hyperbolic features in B-scan images well, and the trained YOLOX network is output.
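The data-set partition can be sketched as follows; the 9:1 ratio for the train/validation split is an assumption mirroring the first split, since the description gives it only as a "preset distribution proportion".

```python
def split_dataset(images, first_ratio=0.9, train_ratio=0.9):
    """Split labeled B-scan images into first/test sets (9:1), then
    split the first set into train/validation (ratio assumed 9:1)."""
    n_first = int(len(images) * first_ratio)
    first, test = images[:n_first], images[n_first:]
    n_train = int(len(first) * train_ratio)
    train, val = first[:n_train], first[n_train:]
    return train, val, test
```

With the 800 labeled images of the experiment this yields 648 training, 72 validation and 80 test images.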
And then, inputting the simulated B-scan data into a trained network, outputting a target potential region after the network performs target detection, performing back projection imaging on the target potential region, performing double-threshold processing on the initial imaging image, and performing integral focusing processing on the imaging image after the double-threshold processing to obtain a target imaging image.
For actually measured B-scan data, the processing is the same as under the simulation condition: the measured B-scan data is preprocessed and converted into the B-scan data to be imaged shown in figure 4, and the target imaging image obtained through back projection processing, double-threshold processing and integral focusing processing is shown in figure 5. The above experiments were run on the same equipment, and two key indicators of the back projection imaging method, computation time and artifact suppression, were calculated; the relevant experimental parameters and algorithm comparisons are given in tables 1 and 2.
Table 1 shows the calculation time of the ground penetrating radar back projection imaging method and other BP algorithms on the same equipment.
TABLE 1 calculation times for different backprojection imaging algorithms
As can be seen from table 1, compared with the conventional back projection algorithm and several improved back projection algorithms, the ground penetrating radar back projection imaging method of the present invention requires less computation time, which corresponds to a faster computation speed.
Table 2 gives the quantitative evaluation of the suppression effect of different back projection imaging algorithms on artifacts under the simulation conditions using the integrated side lobe ratio, which has the formula:
wherein the integrated side lobe ratio is computed from the total energy of the imaged image and the main lobe energy of the imaged target; in the standard form, ISLR = 10·lg((E_total − E_main)/E_main), where E_total is the total energy of the imaged image and E_main is the main lobe energy of the imaged target.
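The integrated side lobe ratio, defined from the total image energy and the main-lobe energy, can be computed as follows (the standard dB form is assumed, since the patent's formula image is not reproduced):

```python
import math

def islr_db(total_energy, mainlobe_energy):
    """Integrated sidelobe ratio in dB: sidelobe energy (total minus
    main lobe) relative to main-lobe energy; smaller is better."""
    return 10.0 * math.log10((total_energy - mainlobe_energy) / mainlobe_energy)
```

A smaller (more negative) value indicates stronger sidelobe and artifact suppression, consistent with the comparison in table 2.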
TABLE 2 Integrated sidelobe ratio for different backprojection imaging algorithms
As can be seen from table 2, compared with the conventional back projection algorithm and several improved back projection algorithms, the ground penetrating radar back projection imaging method of the present invention has a smaller integrated side lobe ratio, which indicates a better side lobe suppression level.
Therefore, the ground penetrating radar back projection imaging method provided by the embodiment determines the potential position of the target with the trained YOLOX network before imaging, delimits the target potential region, and images only that region, which greatly reduces computation and avoids interference from underground noise points or non-target points, effectively improving the imaging quality. The imaged image is then enhanced and position-calibrated through double-threshold processing and integral focusing processing, so that most side lobes and artifacts are removed and the imaging quality is further improved. The experimental results show that, compared with existing back projection imaging methods, the method provided by the embodiment achieves significantly improved imaging efficiency.
In addition, as shown in fig. 6, corresponding to any of the above-mentioned embodiments, an embodiment of the present invention further provides a ground penetrating radar back projection imaging system, which includes a data acquisition and processing module 110, a network training module 120, a back projection module 130, an artifact suppression module 140, and a target imaging module 150, where details of each functional module are as follows:
the data acquisition and processing module 110 is configured to acquire and preprocess B-scan data, and construct a tag data set according to the preprocessed B-scan data, where the tag data set includes a B-scan image into which the preprocessed B-scan data is converted and a target rectangular frame tag corresponding to the B-scan image;
a network training module 120, configured to construct a YOLOX network, and train the YOLOX network through a tag data set;
a back projection module 130, configured to obtain a target potential region of the B-scan image to be imaged through the trained YOLOX network, and perform back projection imaging in the target potential region to obtain an initial imaging image;
the artifact suppression module 140 is configured to perform dual-threshold processing on the initial imaging image to obtain an artifact-suppressed image;
and the target imaging module 150 is configured to perform integral focusing processing on the artifact-suppressed image to obtain a target imaging image.
In an alternative embodiment, the data acquisition and processing module 110 includes the following sub-modules, and the detailed description of each functional sub-module is as follows:
the data acquisition sub-module is used for detecting the underground area containing the target through the ground penetrating radar to obtain N B-scan data;
the preprocessing submodule is used for preprocessing the N B-scan data and converting the preprocessed B-scan data into a B-scan image;
the marking submodule is used for marking the target existing area in the N B-scan images to obtain a corresponding target rectangular frame label;
the data set dividing submodule is used for dividing N B-scan images containing the target rectangular frame labels into a first data set and a test set according to a preset distribution proportion; dividing the first data set into a training set and a verification set according to a preset distribution proportion;
and the data set constructing submodule is used for constructing a label data set according to the training set, the verification set and the test set.
In an alternative embodiment, the back projection module 130 includes the following sub-modules, each of which is described in detail as follows:
the parameter determining submodule is used for acquiring a B-scan image to be imaged and determining the size of an imaging area corresponding to the B-scan image to be imaged;
the target detection submodule is used for inputting the B-scan image to be imaged into a trained YOLOX network, and acquiring four-corner coordinates of a target potential area and a rectangular frame framing the target potential area through the YOLOX network;
the mapping submodule is used for mapping the target potential area into the imaging area according to the size of the B-scan image to be imaged, the size of the imaging area and the four-corner coordinates of the rectangular frame;
and the imaging submodule is used for performing time delay calculation, accumulation and imaging in the imaging area after the mapping processing to obtain an initial imaging image.
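The delay-and-sum step performed by the imaging submodule can be sketched as follows for a monostatic geometry; the propagation velocity, the time axis and the nearest-sample lookup are assumptions of this sketch, not details given in the patent.

```python
import numpy as np

def backproject_region(bscan, x_ant, t_axis, region_x, region_z, v=1.0e8):
    """Delay-and-sum back projection restricted to the mapped target
    region: for each pixel, sum the echo samples at the computed
    two-way travel times over all antenna positions."""
    image = np.zeros((len(region_z), len(region_x)))
    for iz, z in enumerate(region_z):
        for ix, x in enumerate(region_x):
            for ia, xa in enumerate(x_ant):
                # two-way travel time from antenna position to pixel
                t = 2.0 * np.hypot(x - xa, z) / v
                it = np.searchsorted(t_axis, t)
                if it < len(t_axis):
                    image[iz, ix] += bscan[it, ia]
    return image
```

Restricting `region_x`/`region_z` to the mapped target potential area is what saves the bulk of the computation compared with imaging the whole scene.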
In an alternative embodiment, the artifact suppression module 140 includes the following sub-modules, and the detailed description of each functional sub-module is as follows:
the first threshold acquisition submodule is used for acquiring an amplitude threshold according to the size relation of pixel points in the initial imaging image; the amplitude threshold is a boundary for distinguishing a target point and a background point in the initial imaging image;
the second threshold acquisition submodule is used for constructing a cross-shaped template and acquiring a similarity threshold and a template length according to the cross-shaped template; the similarity threshold is, for undetermined points in the initial imaging image that do not meet the amplitude threshold, the similarity boundary distinguishing target points from background points, obtained by calculating the similarity between the undetermined point and the target point;
and the double-threshold segmentation submodule is used for processing each pixel point in the initial imaging image according to the amplitude threshold, the similarity threshold and the cross template to obtain an artifact-suppressed image.
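The two-stage segmentation can be sketched as follows. The amplitude test is applied first; undetermined pixels are then compared with a reference target point via their cross-shaped templates. The cosine similarity measure, the reference-point argument and the half-length parameter `L` are assumptions of this sketch.

```python
import numpy as np

def cross_profile(img, r, c, L):
    """Cross-shaped template: up to L pixels on each side along the
    row and the column centred at (r, c), concatenated into a vector."""
    row = img[r, max(c - L, 0):c + L + 1].ravel()
    col = img[max(r - L, 0):r + L + 1, c].ravel()
    return np.concatenate([row, col])

def dual_threshold(img, pos_thr, neg_thr, sim_thr, L, ref):
    """Hedged sketch of dual-threshold artifact suppression:
    keep clear target points by amplitude, then keep undetermined
    points whose cross template is similar to a reference target."""
    out = np.zeros_like(img, dtype=float)
    ref_v = cross_profile(img, *ref, L)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            v = img[r, c]
            if v >= pos_thr or v <= neg_thr:      # clear target point
                out[r, c] = v
            else:                                  # undetermined point
                p = cross_profile(img, r, c, L)
                if p.shape == ref_v.shape:
                    sim = np.dot(p, ref_v) / (
                        np.linalg.norm(p) * np.linalg.norm(ref_v) + 1e-12)
                    if sim >= sim_thr:
                        out[r, c] = v
    return out
```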
In an alternative embodiment, the target imaging module 150 includes the following sub-modules, each of which is described in detail below:
the primary processing submodule is used for acquiring an initial integral value of each pixel point for each column of the artifact-suppressed image;
the accumulation submodule is used for traversing each pixel point of each column and updating the integral value according to the pixel value of the pixel point;
the integral updating submodule is used for obtaining the pixel value of the pixel point after updating according to the integral value before and after updating when the pixel value of the pixel point meets the preset updating condition;
the calibration sub-module is used for calibrating the focused pixel position according to the position proportional relation of the central points before and after focusing;
and the image output submodule is used for obtaining a target imaging image according to the pixel value after each pixel point is updated and the pixel position after calibration.
The system of the foregoing embodiment is used to implement the corresponding method in the foregoing embodiment, and has the beneficial effects of the corresponding method embodiment, which are not described herein again.
Those of ordinary skill in the art will understand that: the discussion of any embodiment above is meant to be exemplary only, and is not intended to imply that the scope of the invention is limited to these examples; within the idea of the invention, also features in the above embodiments or in different embodiments may be combined, steps may be implemented in any order, and there are many other variations of the different aspects of the embodiments of the invention as described above, which are not provided in detail for the sake of brevity.
The present embodiments are intended to embrace all such alterations, modifications and variations that fall within the broad scope of the present invention. Therefore, any omissions, modifications, substitutions, improvements and the like that may be made without departing from the spirit and principles of the embodiments of the present invention are intended to be included within the scope of the invention.
Claims (8)
1. A ground penetrating radar back projection imaging method is characterized by comprising the following steps:
b-scan data are obtained and preprocessed, a tag data set is constructed according to the preprocessed B-scan data, and the tag data set comprises a B-scan image converted from the preprocessed B-scan data and a target rectangular frame tag corresponding to the B-scan image;
constructing a YOLOX network, and training the YOLOX network through the tag data set;
acquiring a target potential area of a B-scan image to be imaged through the trained YOLOX network, and performing back projection imaging in the target potential area to obtain an initial imaging image;
performing double-threshold processing on the initial imaging image to obtain an artifact-suppressed image, including:
acquiring an amplitude threshold value according to the size relation of pixel points in the initial imaging image; the amplitude threshold is a boundary for distinguishing a target point and a background point in the initial imaging image;
constructing a cross-shaped template, and acquiring a similarity threshold value and a template length according to the cross-shaped template; the similarity threshold is as follows: for undetermined points which do not meet the amplitude threshold value in the initial imaging image, distinguishing boundaries of similarity between a target point and a background point by calculating the similarity between the undetermined points and the target point;
processing each pixel point in the initial imaging image according to the amplitude threshold, the similarity threshold and the cross-shaped template to obtain an artifact suppression image;
and carrying out integral focusing treatment on the artifact suppression image to obtain a target imaging image.
2. The method for imaging by back projection of ground penetrating radar according to claim 1, wherein the acquiring and preprocessing the B-scan data and constructing the tag data set according to the preprocessed B-scan data comprises:
detecting an underground area containing a target by using a ground penetrating radar to obtain N pieces of B-scan data;
preprocessing the N B-scan data, and converting the preprocessed B-scan data into a B-scan image;
marking target existing areas in the N B-scan images to obtain corresponding target rectangular frame labels;
dividing N B-scan images containing target rectangular frame labels into a first data set and a test set according to a preset distribution proportion;
dividing the first data set into a training set and a verification set according to the preset distribution proportion;
and forming a label data set according to the training set, the verification set and the test set.
3. The method of claim 1, wherein the YOLOX network comprises a backbone network, a neck network, and a head network; the backbone network is used for extracting the features of the B-scan image, the neck network is used for combining and mixing the features, and the head network is used for predicting and classifying the features;
the backbone network comprises an attention module, three convolution residual modules and a feature stacking module; wherein the attention module is composed of a downsampling layer and a base convolution layer, the base convolution layer comprises a convolution layer, a batch normalization layer and an activation function;
the convolution residual module consists of the basic convolution layer and the CSP layer, the CSP layer comprises a main trunk branch, a residual side branch and a channel dimension splicing layer, the main trunk branch comprises the basic convolution layer, a residual stacking layer and an adding layer, and the residual side branch comprises the basic convolution layer;
the feature stacking module consists of the basic convolutional layers, SPP layers and CSP layers, wherein the SPP layers comprise two basic convolutional layers, one upsampling stacking layer and one channel dimension splicing layer, and the upsampling stacking layer comprises three pooling branches and one stacking branch;
the neck network comprises two up-sampling fusion modules and two feature fusion modules, wherein each up-sampling fusion module consists of a basic convolutional layer, an up-sampling layer, a channel dimension splicing layer and a CSP layer; the feature fusion module consists of the basic convolutional layer, the channel dimension splicing layer and the CSP layer;
the head network comprises three feature point judging modules, wherein each feature point judging module consists of the basic convolutional layer, a first attribute judging branch, a second attribute judging branch and the channel dimension splicing layer, the first attribute judging branch comprises two basic convolutional layers and one convolutional layer, and the second attribute judging branch comprises two basic convolutional layers, a coordinate prediction branch and a two-classification branch.
4. The method of claim 1, wherein the obtaining of the target potential area of the B-scan image to be imaged through the trained YOLOX network and the back projection imaging in the target potential area to obtain an initial imaging image comprises:
acquiring a B-scan image to be imaged, and determining the size of an imaging area corresponding to the B-scan image to be imaged;
inputting the B-scan image to be imaged into a trained YOLOX network, and acquiring a target potential area and four-corner coordinates of a rectangular frame framing the target potential area through the YOLOX network;
mapping the target potential area into the imaging area according to the size of the B-scan image to be imaged, the size of the imaging area and the four-corner coordinates of the rectangular frame;
and performing time delay calculation, accumulation and imaging in the imaging area after the mapping processing to obtain an initial imaging image.
5. The method of claim 1, wherein the performing an integral focusing process on the artifact-suppressed image to obtain a target imaging image comprises:
for each column of the artifact-suppressed image, acquiring an initial integral value of each pixel point;
traversing each pixel point of each column, and updating the integral value according to the pixel value of the pixel point;
when the pixel value of the pixel point meets a preset updating condition, obtaining the updated pixel value of the pixel point according to the integral value before and after updating;
calibrating the focused pixel position according to the position proportional relation of the central points before and after focusing;
and obtaining a target imaging image according to the updated pixel value and the calibrated pixel position of each pixel point.
6. The method of claim 1, wherein the obtaining of the amplitude threshold according to the size relationship of the pixel points in the initial imaging image comprises:
determining a first positive pixel point with a maximum positive pixel value and a first negative pixel point with a minimum negative pixel value in the target potential region, and acquiring a pixel value of the first positive pixel point and a pixel value of the first negative pixel point;
obtaining a target part according to the distance between the coordinates of the first positive pixel point and the first negative pixel point;
obtaining a background part according to the distance from the first positive pixel point to the upper edge of the target potential area;
obtaining a target area occupation ratio according to the target part and the background part;
and acquiring two amplitude thresholds, namely a positive amplitude threshold and a negative amplitude threshold, according to the pixel value of the first positive pixel point, the pixel value of the first negative pixel point and the target area ratio.
7. A ground penetrating radar back projection imaging system, comprising:
the data acquisition and processing module is used for acquiring and preprocessing B-scan data and constructing a tag data set according to the preprocessed B-scan data, wherein the tag data set comprises a B-scan image converted from the preprocessed B-scan data and a target rectangular frame tag corresponding to the B-scan image;
a network training module, configured to construct a YOLOX network, and train the YOLOX network through the tag dataset;
the back projection module is used for acquiring a target potential area of a B-scan image to be imaged through the trained YOLOX network, and performing back projection imaging in the target potential area to obtain an initial imaging image;
the artifact suppression module is used for carrying out double-threshold processing on the initial imaging image to obtain an artifact suppressed image; the artifact suppression module comprises:
the first threshold acquisition submodule is used for acquiring an amplitude threshold according to the size relation of pixel points in the initial imaging image; the amplitude threshold is a boundary for distinguishing a target point and a background point in the initial imaging image;
the second threshold acquisition submodule is used for constructing a cross-shaped template and acquiring a similarity threshold and a template length according to the cross-shaped template; the similarity threshold is as follows: for undetermined points which do not meet the amplitude threshold value in the initial imaging image, distinguishing boundaries of similarity between a target point and a background point by calculating the similarity between the undetermined points and the target point;
the dual-threshold segmentation submodule is used for processing each pixel point in the initial imaging image according to the amplitude threshold, the similarity threshold and the cross-shaped template to obtain an artifact-suppressed image;
and the target imaging module is used for carrying out integral focusing processing on the artifact-suppressed image to obtain a target imaging image.
8. The georadar back-projection imaging system of claim 7, wherein the target imaging module comprises:
the primary processing submodule is used for acquiring an initial integral value of each pixel point for each column of the artifact-suppressed image;
the accumulation submodule is used for traversing each pixel point of each column and updating the integral value according to the pixel value of the pixel point;
the integral updating submodule is used for obtaining the pixel value of the pixel point after updating according to the integral value before and after updating when the pixel value of the pixel point meets the preset updating condition;
the calibration submodule is used for calibrating the focused pixel position according to the position proportional relation of the central points before and after focusing;
and the image output submodule is used for obtaining a target imaging image according to the pixel value updated by each pixel point and the pixel position after calibration.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210902645.9A CN114966560B (en) | 2022-07-29 | 2022-07-29 | Ground penetrating radar backward projection imaging method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114966560A CN114966560A (en) | 2022-08-30 |
CN114966560B true CN114966560B (en) | 2022-10-28 |
Family
ID=82969081
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210902645.9A Active CN114966560B (en) | 2022-07-29 | 2022-07-29 | Ground penetrating radar backward projection imaging method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114966560B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115496917B (en) * | 2022-11-01 | 2023-09-26 | 中南大学 | Multi-target detection method and device in GPR B-Scan image |
CN117310696A (en) * | 2023-09-26 | 2023-12-29 | 中南大学 | Self-focusing backward projection imaging method and device for ground penetrating radar |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101738602A (en) * | 2008-11-26 | 2010-06-16 | 中国科学院电子学研究所 | Echo data preprocessing method for pseudorandom sequences applied to ground penetrating radar |
US8094063B1 (en) * | 2009-06-03 | 2012-01-10 | Lockheed Martin Corporation | Image filtering and masking method and system for improving resolution of closely spaced objects in a range-doppler image |
CN107678029A (en) * | 2017-08-30 | 2018-02-09 | 哈尔滨工业大学 | A kind of rear orientation projection's imaging method based on the average cross-correlation information of random reference |
CN108387896A (en) * | 2018-01-03 | 2018-08-10 | 厦门大学 | A kind of automatic convergence imaging method based on Ground Penetrating Radar echo data |
CN110333489A (en) * | 2019-07-24 | 2019-10-15 | 北京航空航天大学 | The processing method to SAR echo data Sidelobe Suppression is combined with RSVA using CNN |
CN114331890A (en) * | 2021-12-27 | 2022-04-12 | 中南大学 | Ground penetrating radar B-scan image feature enhancement method and system based on deep learning |
CN114758230A (en) * | 2022-04-06 | 2022-07-15 | 桂林电子科技大学 | Underground target body classification and identification method based on attention mechanism |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7796829B2 (en) * | 2008-12-10 | 2010-09-14 | The United States Of America As Represented By The Secretary Of The Army | Method and system for forming an image with enhanced contrast and/or reduced noise |
- 2022-07-29 CN CN202210902645.9A patent/CN114966560B/en active Active
Non-Patent Citations (2)
Title |
---|
A Multi-Scale Weighted Back Projection Imaging Technique for Ground Penetrating Radar Applications; Wentai Lei et al.; Remote Sensing; 2014-06-05; full text *
Back projection imaging algorithm for ground penetrating radar based on weighted correlation (基于加权相关的探地雷达后向投影成像算法); Chen Xinpeng et al.; Electronic Devices (《电子器件》); 2022-02-20; Vol. 45, No. 1; full text *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||