CN115311532A - Ground penetrating radar underground cavity target automatic identification method based on ResNet network model - Google Patents

Ground penetrating radar underground cavity target automatic identification method based on ResNet network model

Info

Publication number
CN115311532A
Authority
CN
China
Prior art keywords
image
ground penetrating
penetrating radar
target
underground cavity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210881642.1A
Other languages
Chinese (zh)
Inventor
白旭
刘金龙
郭士増
魏守明
温志涛
田昊翔
杨彧
崔海涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Zhongrui Science & Technology Development Co ltd
Harbin Institute of Technology
Original Assignee
Dalian Zhongrui Science & Technology Development Co ltd
Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Zhongrui Science & Technology Development Co ltd, Harbin Institute of Technology filed Critical Dalian Zhongrui Science & Technology Development Co ltd
Priority to CN202210881642.1A priority Critical patent/CN115311532A/en
Publication of CN115311532A publication Critical patent/CN115311532A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S 13/88 Radar or analogous systems specially adapted for specific applications
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S 13/88 Radar or analogous systems specially adapted for specific applications
    • G01S 13/89 Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S 7/02 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S 7/41 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S 7/417 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section involving the use of neural networks
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01V GEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V 3/00 Electric or magnetic prospecting or detecting; Measuring magnetic field characteristics of the earth, e.g. declination, deviation
    • G01V 3/12 Electric or magnetic prospecting or detecting; Measuring magnetic field characteristics of the earth, e.g. declination, deviation operating with electromagnetic waves
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01V GEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V 3/00 Electric or magnetic prospecting or detecting; Measuring magnetic field characteristics of the earth, e.g. declination, deviation
    • G01V 3/38 Processing data, e.g. for analysis, for interpretation, for correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/273 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion removing elements interfering with the pattern to be recognised
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/30 Noise filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/72 Data preparation, e.g. statistical preprocessing of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Electromagnetism (AREA)
  • Geophysics (AREA)
  • General Life Sciences & Earth Sciences (AREA)
  • Geology (AREA)
  • Environmental & Geological Engineering (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention provides a method for automatically identifying underground cavity targets of a ground penetrating radar based on a ResNet network model. An acquired ground penetrating radar echo image of an underground cavity target is preprocessed by background elimination, gain, and noise reduction: the background-eliminated echo image is gained, and the gained image is denoised. The denoised image data are then pre-screened and manually classified, and the images are augmented by horizontal mirror flipping to obtain a processed augmented image data set with a similar distribution. The augmented image data are divided into a training set and a test set, and a ResNet network model is trained to obtain a network weight model. The test set is input into the obtained weight model, and target identification and classification are performed on the images. The method can raise the recognition rate of ground penetrating radar underground cavity targets to more than 90 percent.

Description

Ground penetrating radar underground cavity target automatic identification method based on ResNet network model
Technical Field
The invention belongs to the technical field of target detection in the post-processing of ground penetrating radar echo maps, and in particular relates to a method for automatically identifying underground cavity targets of a ground penetrating radar based on a ResNet network model.
Background
Ground penetrating radar is a non-destructive instrument for detecting the shallow underground environment. It exploits the differences in electromagnetic permittivity between underground media and the reflection of electromagnetic waves at the interfaces between different media during propagation; the radar echo data therefore reflect the parameters of the media, and processing and analyzing the echo data allows the underground distribution to be detected quickly and understood intuitively. To present the echo data visually for manual analysis, the multi-channel echo traces are commonly listed side by side, which yields the B-Scan images widely used in ground penetrating radar analysis.
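For illustration only (this sketch and its array sizes are not part of the invention), assembling a B-Scan from individual echo traces amounts to stacking the A-scans column by column:

```python
# Minimal sketch: stack successive single-trace echoes (A-scans) into a B-Scan matrix.
import numpy as np

def assemble_bscan(a_scans):
    """a_scans: iterable of 1-D arrays, one per antenna position along the survey line.
    Returns a 2-D array with time/depth samples on the rows and trace index on the columns."""
    return np.stack(a_scans, axis=1)

# Example with arbitrary sizes: 256 samples per trace, 300 traces along the survey line.
traces = [np.random.randn(256) for _ in range(300)]
bscan = assemble_bscan(traces)
print(bscan.shape)  # (256, 300)
```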
As a fast, high-resolution, non-destructive geophysical method, ground penetrating radar is of great significance and value in the research and engineering practice of detecting underground collapse cavities. The technique causes no structural damage to the road surface, is suitable for a wide range of road conditions, and delivers real-time, high-precision results, so it meets the requirements of efficient, non-destructive, and accurate detection of road defects over a wide application range and is well suited to detecting underground cavities beneath roads. A ground penetrating radar system may consist of one or more pairs of transmitting and receiving antennas; each transmitter-receiver pair acquires a single B-Scan image by scanning the region of interest, and the distribution of the underground environment is obtained by analyzing and verifying the B-Scan images. At present, the B-Scan images collected in engineering practice must be interpreted manually, which is inefficient and often leads to missed or false detections. Existing mainstream deep learning methods also run into problems when detecting and identifying underground cavity targets: cavities that have been confirmed, verified, located, and characterized with the associated pattern information are difficult to obtain, and because a cavity has no fixed pattern or shape in a B-Scan image, collecting a large number of underground cavity samples is a demanding engineering task.
Disclosure of Invention
The invention aims to solve the difficulty that existing methods have in detecting and identifying underground cavities in three-dimensional ground penetrating radar images, to reduce missed and false detections, and to provide a method for automatically identifying underground cavity targets of a ground penetrating radar based on a ResNet network model.
The invention is realized by the following technical scheme, and provides a ground penetrating radar underground cavity target automatic identification method based on a ResNet network model, which specifically comprises the following steps:
step 1: background elimination is carried out on the acquired ground penetrating radar echo image of the underground cavity target, and transverse ripples of the ground penetrating radar echo image are suppressed;
step 2: the ground penetrating radar echo image generated in the step 1 is gained, and the hole target pixel characteristics in the echo image are highlighted;
and step 3: denoising the image data gained in the step 2, and inhibiting clutter interference;
and 4, step 4: pre-screening and manually classifying the images processed in the step 3, and then amplifying the images based on horizontal mirror image overturning to obtain processed amplified image data sets with similar distribution;
and 5: dividing the image data set obtained in the step (4) into a training set and a verification set, and training a ResNet network model by using the training set to obtain a network weight model;
step 6: and (5) inputting the verification set obtained in the step (5) into the trained network weight model, and performing target identification and classification on the underground cavity target ground penetrating radar echo image.
Further, in step 1, image background elimination is performed with a transverse ripple suppression filtering method to obtain a ground penetrating radar echo image whose transverse ripples are suppressed.
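The patent names the transverse ripple suppression filter without specifying its exact form; as a purely illustrative sketch, one common way to suppress horizontal ripples in a B-Scan is to subtract the mean trace row by row, and that assumed variant is shown here:

```python
# Hedged sketch (assumed implementation): row-wise mean-trace subtraction as one
# possible realization of transverse ripple suppression.
import numpy as np

def remove_background(bscan):
    """bscan: 2-D float array (time/depth samples x traces). Returns the ripple-suppressed image."""
    mean_trace = bscan.mean(axis=1, keepdims=True)  # average over all traces in each row
    return bscan - mean_trace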
Further, a nodal mean linear gain method is used to extract the cavity target features from the redundant background information; the nodal mean linear gain highlights the curved signature of the cavity against the background, so the position and shape features of the cavity target can be obtained more clearly.
Further, the nodal mean linear gain method is specifically as follows:
First, the image is divided longitudinally into 7 equal parts; the starting row of each part and the final row of the image each correspond to a node, giving 8 nodes in total.
Then the mean of the maximum pixel value of each row within a part is taken as the gain of the corresponding node. A pre-gain curve is obtained by linear interpolation between the nodes and calibrated with the maximum value to obtain the gain curve; each row of the image corresponds to a point on the gain curve whose value is the gain of that row, and the image is gained according to this curve.
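As an illustration of the nodal mean linear gain just described, the following Python sketch follows the steps above; the calibration step is interpreted here as scaling weak rows up toward the strongest level on the curve, and that interpretation, like the helper name, is an assumption rather than the patented definition:

```python
# Hedged sketch of a nodal mean linear gain (7 parts, 8 nodes, linear interpolation).
import numpy as np

def nodal_mean_linear_gain(img, n_parts=7):
    """img: 2-D float array, rows = depth (longitudinal direction), columns = traces."""
    rows = img.shape[0]
    row_max = img.max(axis=1)                               # maximum pixel of every row
    parts = np.array_split(np.arange(rows), n_parts)        # 7 longitudinal parts
    # Nodes: the starting row of each part plus the final row of the image (8 nodes).
    node_rows = np.array([p[0] for p in parts] + [rows - 1], dtype=float)
    # Node value: mean of the per-row maxima inside the corresponding part
    # (the final-row node reuses the value of the last part).
    node_vals = np.array([row_max[p].mean() for p in parts] + [row_max[parts[-1]].mean()])
    # Pre-gain curve: one value per row, by linear interpolation between the nodes.
    pre_gain = np.interp(np.arange(rows), node_rows, node_vals)
    # Calibration with the maximum value (assumed meaning: amplify weak rows toward the peak level).
    gain = pre_gain.max() / np.maximum(pre_gain, 1e-12)
    return img * gain[:, None]
```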
Further, fast non-local means denoising is adopted for the noise reduction.
Further, the augmentation process is specifically: the noise-reduced images from step 3 are manually classified into cavity images and non-cavity images, and the two classes of image data are then each augmented by horizontal mirror transformation.
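A minimal, illustrative sketch of this horizontal-mirror augmentation is given below; the folder layout ("cavity" / "non_cavity") and file naming are hypothetical and only stand in for the manually classified data set:

```python
# Sketch: augment each manually classified image with its horizontal mirror.
import os
import cv2

def augment_with_horizontal_flip(class_dir):
    for name in os.listdir(class_dir):
        path = os.path.join(class_dir, name)
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        if img is None:            # skip non-image files
            continue
        flipped = cv2.flip(img, 1)  # flipCode=1 -> horizontal mirror
        stem, ext = os.path.splitext(name)
        cv2.imwrite(os.path.join(class_dir, stem + "_hflip" + ext), flipped)

for cls in ("cavity", "non_cavity"):   # hypothetical folder names
    augment_with_horizontal_flip(os.path.join("dataset", cls))
```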
Further, the ResNet network model includes ResNet18, ResNet34, and ResNet50;
determining the epoch of the training parameters to be 200, the batch size to be 16, the learning rate to be 0.001, the optimizer using SGD with momentum of 0.9, weight decay to be 0.0005, the loss function selecting the cross entropy loss function.
Further, the ResNet network groups adjacent stacked layers into a block; for any block, the fitted function is F(x) and the desired mapping is H(x). Since learning the residual H(x) - x is easier than learning the underlying mapping H(x) directly, F(x) = H(x) - x is learned instead, an identity term x is added to the original forward path, and F(x) + x is used to fit the new target.
The invention provides electronic equipment, which comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the steps of the method for automatically identifying the underground cavity target of the ground penetrating radar based on the ResNet network model when executing the computer program.
The invention provides a computer readable storage medium for storing computer instructions, and the computer instructions are executed by a processor to realize the steps of the method for automatically identifying the underground cavity target of the ground penetrating radar based on the ResNet network model.
The invention has the beneficial effects that:
the method carries out background elimination, gain and noise reduction on the existing echo image of the underground cavity target ground penetrating radar, carries out manual classification on the obtained image, and carries out augmentation and random division into a training data set and a testing data set through horizontal mirror image overturning. And inputting the training data set into a ResNet network model to train and adjust the network parameters. After training, inputting the test data set into a network, and carrying out hole target identification on the ground penetrating radar echo image by using the network. The method can improve the identification probability of the underground cavity target to more than 90%.
In practice, when ground penetrating radar data of underground cavities are collected, the cavity shape is random and hard to predict, and the depth, size, and position of the cavity are unknown, which greatly hinders data collection and the subsequent deep-learning-based classification and detection. The invention uses the residual convolutional network ResNet to learn automatically from existing ground penetrating radar echo images so as to identify underground cavity target images automatically.
Drawings
FIG. 1 is a flow chart of a method for automatically identifying a ground penetrating radar underground cavity target based on a ResNet network model;
FIG. 2 is a schematic diagram of a residual block in a ResNet network;
FIG. 3 is a diagram of the ResNet18 model architecture;
FIG. 4 is a ground penetrating radar echo image of a single acquired underground cavity target;
FIG. 5 is an image of a single underground cavity target after background elimination of a ground penetrating radar echo image;
FIG. 6 is an image of a single underground cavity target after ground penetrating radar echo image gain;
FIG. 7 is a noise-reduced image of a ground penetrating radar echo image of a single underground cavity target;
FIG. 8 is a graph of the ResNet network identification results, where (a) is the ResNet18 identification rate curve, (b) is the ResNet18 loss function curve, (c) is the ResNet34 identification rate curve, (d) is the ResNet34 loss function curve, (e) is the ResNet50 identification rate curve, (f) is the ResNet50 loss function curve, (g) is the ResNet50 identification rate curve (epoch = 500), and (h) is the ResNet50 loss function curve (epoch = 500).
Detailed Description
The technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Embodiment 1:
the invention provides a ground penetrating radar underground cavity target automatic identification method based on a ResNet network model, which specifically comprises the following steps:
step 1: background elimination is carried out on the acquired ground penetrating radar echo image of the underground cavity target, and transverse ripples of the ground penetrating radar echo image are suppressed; in the step 1, image background elimination is carried out through a transverse ripple suppression filtering method, and a ground penetrating radar echo image with suppressed transverse ripples is obtained.
Step 2: the ground penetrating radar echo image produced in step 1 is gained, the cavity target pixel features in the echo image are highlighted, the background is suppressed, and the cavity features submerged in the image are extracted. The cavity target features are extracted from the redundant background information with a nodal mean linear gain method; the nodal mean linear gain highlights the curved signature of the cavity against the background, so the position and shape features of the cavity target can be obtained more clearly.
The nodal mean linear gain method is specifically as follows:
First, the image is divided longitudinally into 7 equal parts; the starting row of each part and the final row of the image each correspond to a node, giving 8 nodes in total.
Then the mean of the maximum pixel value of each row within a part is taken as the gain of the corresponding node. A pre-gain curve is obtained by linear interpolation between the nodes and calibrated with the maximum value to obtain the gain curve; each row of the image corresponds to a point on the gain curve whose value is the gain of that row, and the image is gained according to this curve.
Step 3: the image data gained in step 2 are denoised to suppress clutter interference; fast non-local means denoising is adopted.
Fast non-local means denoising (Fast Non-Local Means) is an accelerated algorithm based on non-local means denoising (Non-Local Means, NL-means). NL-means performs filtering based on the similarity between pixels.
For an image, a search window of size S x S is selected, and neighborhood windows of size d x d centered at the points x and y are taken. The similarity of the two neighborhoods is measured (reconstructed here from the surrounding definitions) as
D(x, y) = (1 / (m * n)) * sum over i = 1..m, j = 1..n of (x(i, j) - y(i, j))^2
where the neighborhood size is m x n and x(i, j) and y(i, j) are the pixel values in the two neighborhoods. From this similarity, the weight of each pixel is obtained as
w(x, y) = exp(-D(x, y) / h^2) / sum over y of exp(-D(x, y) / h^2)
where h is a smoothing factor that controls how strongly the filter smooths the image. The final filtering result at point x is: NLmeans(x) = sum over y of w(x, y) * y.
The fast algorithm targets the time cost of the original point-by-point computation: an integral image of the pixel differences is constructed, which accelerates the filtering. The invention constructs a 5 x 5 search window and a 3 x 3 neighborhood window.
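For illustration, OpenCV ships a fast non-local means implementation that can be configured with the window sizes stated above; the smoothing factor h = 10 and the file names below are assumptions, not values taken from the patent:

```python
# Hedged sketch: fast non-local means denoising with a 3x3 neighborhood (template)
# window and a 5x5 search window, using OpenCV as one available implementation.
import cv2

gained = cv2.imread("gained_bscan.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input file
denoised = cv2.fastNlMeansDenoising(gained, h=10,
                                    templateWindowSize=3,        # 3x3 neighborhood window
                                    searchWindowSize=5)          # 5x5 search window
cv2.imwrite("denoised_bscan.png", denoised)
```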
Step 4: the images processed in step 3 are pre-screened and manually classified into cavity and non-cavity images, and the images are then augmented by horizontal mirror flipping to obtain a processed augmented image data set with a similar distribution.
The augmentation process is specifically: the noise-reduced images from step 3 are manually classified into cavity images and non-cavity images, and the two classes of image data are then each augmented by horizontal mirror transformation.
Step 5: the image data set obtained in step 4 is divided into a training set and a verification set, and a ResNet network model is trained with the training set to obtain a network weight model.
the ResNet network model includes ResNet18, resNet34, and ResNet50; comprising 18, 34 and 50 convolutional layers, respectively. And respectively inputting the data of the training set into 3 network models to train the network. As shown in tables 1 and 2:
Table 1: ResNet structures at different depths (table image not reproduced in this text).
Table 2: ResNet network identification results (table image not reproduced in this text).
The training parameters are set as follows: 200 epochs, batch size 16, learning rate 0.001, an SGD optimizer with momentum 0.9, weight decay 0.0005, and the cross-entropy loss function.
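For illustration, a minimal PyTorch sketch of this training configuration is given below; the folder path, image size, and use of torchvision's stock ResNet18 (ResNet34/ResNet50 are analogous) are assumptions standing in for the patent's own implementation:

```python
# Sketch of the stated training setup: 200 epochs, batch size 16, lr 0.001,
# SGD with momentum 0.9, weight decay 0.0005, cross-entropy loss.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),   # B-Scan images are single channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("dataset/train", transform=transform)   # hypothetical path
train_loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = models.resnet18(num_classes=2).to(device)   # two classes: cavity / non-cavity
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9, weight_decay=0.0005)

for epoch in range(200):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```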
The ResNet network model approaches the problem from the model structure itself. It addresses the degradation problem in which a network fails to gain performance as its depth grows and even falls behind a shallower model. ResNet groups adjacent stacked layers into a block; for any block, the fitted function is F(x) and the desired mapping is H(x). Since learning the residual H(x) - x is easier than learning the underlying mapping H(x) directly, F(x) = H(x) - x is learned instead, an identity term x is added to the original forward path, and F(x) + x fits the new target. Based on this idea the model structure is simplified, and F(x) can be driven toward 0 through L2 regularization; under this scheme, an extra block with F(x) set to 0 realizes the identity mapping, which keeps the performance stable. ResNet thus recasts the "degradation" problem as the design of F(x) + x, and a block of the form F(x) + x is called a Residual Block.
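The F(x) + x formulation can be illustrated with a basic residual block in PyTorch; this mirrors the standard ResNet18/34 building block and is only an illustration, not the patent's exact layer configuration:

```python
# Sketch of a basic residual block: the two-convolution branch computes F(x),
# and the identity shortcut adds x so the block outputs F(x) + x.
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        residual = self.bn2(self.conv2(self.relu(self.bn1(self.conv1(x)))))  # F(x)
        return self.relu(residual + x)                                       # F(x) + x
```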
Step 6: the verification set obtained in step 5 is input into the trained network weight model, and target identification and classification are performed on the underground cavity target ground penetrating radar echo images. Specifically, with the trained ResNet network model, echo images of underground cavity targets that have not previously been input into the system are fed to the network, which automatically performs target identification on them.
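A hedged inference sketch for this step is shown below; the weight file name, test image name, and the mapping from class index to label depend on how the data set was organized and are assumptions here:

```python
# Sketch: load trained weights and classify one preprocessed B-Scan image.
import torch
from torchvision import models, transforms
from PIL import Image

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = models.resnet18(num_classes=2)
model.load_state_dict(torch.load("resnet_cavity_weights.pth", map_location=device))  # hypothetical file
model.to(device).eval()

transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
image = transform(Image.open("test_bscan.png")).unsqueeze(0).to(device)   # hypothetical file
with torch.no_grad():
    pred = model(image).argmax(dim=1).item()
print("cavity" if pred == 1 else "non-cavity")   # index meaning follows the dataset folder order
```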
Embodiment 2:
with reference to fig. 1 to 8, the present invention provides a method for automatically identifying an underground cavity target of a ground penetrating radar based on a ResNet network model, which specifically includes the following steps:
step 1: background elimination is carried out on the obtained ground penetrating radar echo image of the underground cavity target, and a ground penetrating radar echo image with suppressed transverse ripples is obtained to highlight the characteristics of the echo image;
and 2, step: the ground penetrating radar echo image after the background is eliminated in the step 1 is gained, and the characteristics of the hollow target pixels are highlighted;
and step 3: performing noise reduction processing on the image data after being gained in the step 2 to inhibit clutter influence;
and 4, step 4: pre-screening the processed ground penetrating radar echo images in the step 3, manually classifying, and amplifying the images based on overturning to obtain an amplification data set with similar distribution;
and 5: dividing the ground penetrating radar echo image obtained in the step (4) into a training set and a verification set, and training a ResNet network model to obtain a network weight model;
step 6: inputting the verification set obtained in the step 5 into the trained ResNet network model, and performing target identification and classification on the underground cavity target ground penetrating radar echo image.
In step 1,
the preprocessing comprises background elimination of the acquired ground penetrating radar echo image of the underground cavity target;
the image background is eliminated with a transverse ripple suppression filtering method to obtain a ground penetrating radar echo image whose transverse ripples are suppressed.
In step 2,
the cavity target features are extracted from the redundant background information with a nodal mean linear gain method. The nodal mean linear gain highlights the curved signature of the cavity against the background, so the position and shape features of the cavity target can be obtained more clearly.
In step 2, the nodal mean linear gain method is specifically as follows:
The image is first divided longitudinally into 7 equal parts; the starting row of each part and the final row of the image each correspond to a node, giving 8 nodes in total.
The mean of the maximum pixel value of each row within a part is then taken as the gain of the corresponding node. A pre-gain curve is obtained by linear interpolation and calibrated with the maximum value to obtain the gain curve; each row of the image corresponds to a point on the curve whose value is the gain of that row, and the image is gained according to this curve.
In step 3,
the denoising is fast non-local means denoising.
Fast non-local means denoising (Fast Non-Local Means) is an accelerated algorithm based on non-local means denoising (Non-Local Means, NL-means). NL-means performs filtering based on the similarity between pixels.
For an image, a search window of size S x S is selected, and neighborhood windows of size d x d centered at the points x and y are taken. The similarity of the two neighborhoods is measured (reconstructed here from the surrounding definitions) as
D(x, y) = (1 / (m * n)) * sum over i = 1..m, j = 1..n of (x(i, j) - y(i, j))^2
where the neighborhood size is m x n and x(i, j) and y(i, j) are the pixel values in the two neighborhoods. From this similarity, the weight of each pixel is obtained as
w(x, y) = exp(-D(x, y) / h^2) / sum over y of exp(-D(x, y) / h^2)
where h is a smoothing factor that controls how strongly the filter smooths the image. The final filtering result at point x is: NLmeans(x) = sum over y of w(x, y) * y.
The fast algorithm targets the time cost of the original point-by-point computation: an integral image of the pixel differences is constructed, which accelerates the filtering. The invention constructs a 5 x 5 search window and a 3 x 3 neighborhood window.
In step 4, the augmentation process is specifically: the noise-reduced images from step 3 are manually classified into cavity images and non-cavity images, and the two classes of image data are then each transformed by horizontal mirroring, yielding an augmented data set.
In step 5, the ResNet network model contains ResNet18, ResNet34, and ResNet50, which comprise 18, 34, and 50 convolutional layers respectively. The training set data are input into the 3 network models and each network is trained.
The training parameters are set as follows: 200 epochs, batch size 16, learning rate 0.001, an SGD optimizer with momentum 0.9, weight decay 0.0005, and the Cross Entropy Loss function as the loss.
The ResNet network model approaches the problem from the model structure itself, addressing the degradation problem in which a network fails to gain performance as its depth grows and even falls behind a shallower model.
Table 3: ResNet recognition results (table image not reproduced in this text).
Because the ResNet50 network model identifies better than ResNet18 and ResNet34, and considering the training complexity of the model and the requirements of practical performance, the number of epochs is increased to 500 while the other training parameters are kept unchanged.
Table 4: ResNet50 recognition results with epoch = 500 (table image not reproduced in this text).
In step 6,
the trained deep learning model is used: features of underground cavity target ground penetrating radar echo images that have not previously been input into the system are extracted and fed into the model, which automatically performs target identification on them.
The invention provides electronic equipment, which comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the steps of the method for automatically identifying the underground cavity target of the ground penetrating radar based on the ResNet network model when executing the computer program.
The invention provides a computer readable storage medium for storing computer instructions, and the computer instructions are executed by a processor to realize the steps of the method for automatically identifying the underground cavity target of the ground penetrating radar based on the ResNet network model.
The memory in the embodiments of the present application may be either volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The non-volatile memory may be a Read Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. Volatile memory can be Random Access Memory (RAM), which acts as external cache memory. By way of example, but not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), and Direct Rambus RAM (DR RAM). It should be noted that the memories of the methods described herein are intended to comprise, without being limited to, these and any other suitable types of memories.
In the above embodiments, all or part of the implementation may be realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the application to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored on a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via wire (e.g., coaxial cable, fiber optic, digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a Digital Video Disk (DVD)), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or instructions in the form of software. The steps of a method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in a processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in a memory, and a processor reads information in the memory and completes the steps of the method in combination with hardware of the processor. To avoid repetition, it is not described in detail here.
It should be noted that the processor in the embodiments of the present application may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method embodiments may be performed by integrated logic circuits of hardware in a processor or instructions in the form of software. The processor described above may be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in a memory, and a processor reads information in the memory and completes the steps of the method in combination with hardware of the processor.
The method for automatically identifying underground cavity targets of a ground penetrating radar based on a ResNet network model has been described in detail above. Specific examples have been used to explain the principle and implementation of the invention, and the description of the embodiments is only intended to help understand the method and its core idea; meanwhile, for a person skilled in the art, there may be variations in the specific implementation and the application scope according to the idea of the invention. In summary, the content of this specification should not be construed as limiting the invention.

Claims (10)

1. A ground penetrating radar underground cavity target automatic identification method based on a ResNet network model is characterized by specifically comprising the following steps:
step 1: background elimination is carried out on the obtained ground penetrating radar echo image of the underground cavity target, and transverse ripples of the ground penetrating radar echo image are suppressed;
step 2: the ground penetrating radar echo image generated in the step 1 is gained, and the hole target pixel characteristics in the echo image are highlighted;
and step 3: denoising the image data gained in the step 2, and inhibiting clutter interference;
and 4, step 4: pre-screening and manually classifying the images processed in the step 3, and then amplifying the images based on horizontal mirror image overturning to obtain processed amplified image data sets with similar distribution;
and 5: dividing the image data set obtained in the step (4) into a training set and a testing set, and training a ResNet network model by using the training set to obtain a network weight model;
step 6: and (5) inputting the test set obtained in the step (5) into the trained network weight model, and performing target identification and classification on the underground cavity target ground penetrating radar echo image.
2. The method according to claim 1, wherein in step 1, image background elimination is performed by a transverse ripple suppression filtering method, so as to obtain a ground penetrating radar echo image with suppressed transverse ripples.
3. The method of claim 1, wherein the cavity target features are extracted from the redundant background information by a nodal mean linear gain method, wherein the nodal mean linear gain highlights the curved signature of the cavity against the background, so that the position and shape features of the cavity target are obtained more clearly.
4. The method according to claim 3, wherein the nodal-mean linear gain method is specifically:
first, the image is divided longitudinally into 7 equal parts, and the starting row of each part and the final row of the image each correspond to a node, giving 8 nodes in total;
then the mean of the maximum pixel value of each row within a part is taken as the gain of the corresponding node, a pre-gain curve is obtained by linear interpolation and calibrated with the maximum value to obtain the gain curve, each row of the image corresponds to a point on the gain curve whose value is the gain of that row, and the image is gained according to this curve.
5. The method of claim 1, wherein the noise reduction employs fast non-local mean denoising.
6. The method according to claim 1, characterized in that the augmentation process comprises in particular: manually classifying the images subjected to noise reduction in step 3 into cavity images and non-cavity images, and then respectively augmenting the two classes of image data by horizontal mirror transformation.
7. The method of claim 1, wherein the ResNet network model comprises ResNet18, ResNet34, and ResNet50;
the training parameters are set as follows: 200 epochs, batch size 16, learning rate 0.001, an SGD optimizer with momentum 0.9, weight decay 0.0005, and the cross-entropy loss function.
8. The method of claim 7, wherein the ResNet network groups adjacent stacked layers into a block; for any block, the fitted function is F(x) and the desired mapping is H(x), and since learning the residual H(x) - x is easier than learning the underlying mapping H(x) directly, F(x) = H(x) - x is learned, an identity term x is added to the original forward path, and F(x) + x is used to fit the new target.
9. An electronic device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method according to any one of claims 1-8 when executing the computer program.
10. A computer-readable storage medium storing computer instructions, which when executed by a processor implement the steps of the method of any one of claims 1 to 8.
CN202210881642.1A 2022-07-26 2022-07-26 Ground penetrating radar underground cavity target automatic identification method based on ResNet network model Pending CN115311532A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210881642.1A CN115311532A (en) 2022-07-26 2022-07-26 Ground penetrating radar underground cavity target automatic identification method based on ResNet network model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210881642.1A CN115311532A (en) 2022-07-26 2022-07-26 Ground penetrating radar underground cavity target automatic identification method based on ResNet network model

Publications (1)

Publication Number Publication Date
CN115311532A true CN115311532A (en) 2022-11-08

Family

ID=83858236

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210881642.1A Pending CN115311532A (en) 2022-07-26 2022-07-26 Ground penetrating radar underground cavity target automatic identification method based on ResNet network model

Country Status (1)

Country Link
CN (1) CN115311532A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117173618A (en) * 2023-09-06 2023-12-05 哈尔滨工业大学 Ground penetrating radar cavity target identification method based on multi-feature sensing Faster R-CNN
CN117173618B (en) * 2023-09-06 2024-04-30 哈尔滨工业大学 Ground penetrating radar cavity target identification method based on multi-feature sensing Faster R-CNN

Similar Documents

Publication Publication Date Title
CN108765369B (en) Method, apparatus, computer device and storage medium for detecting lung nodule
CN110097129B (en) Remote sensing target detection method based on profile wave grouping characteristic pyramid convolution
CN107229918B (en) SAR image target detection method based on full convolution neural network
CN111325748B (en) Infrared thermal image nondestructive testing method based on convolutional neural network
CN115291210B (en) 3D-CNN ground penetrating radar three-dimensional image pipeline identification method combined with attention mechanism
CN111784721B (en) Ultrasonic endoscopic image intelligent segmentation and quantification method and system based on deep learning
CN110866545A (en) Method and system for automatically identifying pipeline target in ground penetrating radar data
CN115311531A (en) Ground penetrating radar underground cavity target automatic detection method of RefineDet network model
CN109597065B (en) False alarm suppression method and device for through-wall radar detection
CN110133643B (en) Plant root system detection method and device
CN110297041A (en) A kind of 3D woven composite defect inspection method based on FCN and GRU
CN108776339B (en) Single-bit synthetic aperture radar imaging method based on block sparse iteration threshold processing
CN115311532A (en) Ground penetrating radar underground cavity target automatic identification method based on ResNet network model
CN113822279B (en) Infrared target detection method, device, equipment and medium based on multi-feature fusion
CN115561753A (en) Method, device, equipment and storage medium for determining underground target
CN111445515A (en) Underground cylinder target radius estimation method and system based on feature fusion network
CN116152651A (en) Sonar target identification and detection method based on image identification technology and applied to ocean
CN113807206B (en) SAR image target identification method based on denoising task assistance
Zhang et al. Entropy-Based re-sampling method on SAR class imbalance target detection
CN116977746A (en) Millimeter wave image target classification method, device, equipment and storage medium
CN117314791B (en) Infrared image cold reflection noise correction method based on Butterworth function fitting
CN116256701B (en) Ground penetrating radar mutual interference wave suppression method and system based on deep learning
CN113376610B (en) Narrow-band radar target detection method based on signal structure information
CN117079147A (en) Road interior disease identification method, electronic equipment and storage medium
CN116756486A (en) Offshore target identification method and device based on acousto-optic electromagnetic multi-source data fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination