CN114373079A - Rapid and accurate ground penetrating radar target detection method - Google Patents
- Publication number
- CN114373079A (application number CN202210021887.7A)
- Authority
- CN
- China
- Prior art keywords
- image
- similarity
- target
- hog
- ground penetrating
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/2135—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
Abstract
The invention discloses a rapid and accurate ground penetrating radar target detection method, which comprises the following steps: in the model training stage, acquiring a positive- and negative-label sample set of the ground penetrating radar, extracting HOG features, reducing the dimensionality of the original high-dimensional HOG features with PCA, and training and storing a classifier; in the target detection stage, preprocessing the original ground penetrating radar image, acquiring target candidate frames of different scales and aspect ratios with a selective search algorithm, extracting the HOG features of the images in the candidate frames, reducing their dimensionality with PCA, and inputting them into the trained classifier to obtain the target detection result. Reducing the dimensionality of the original HOG features through PCA solves the problems of overlong classifier recognition time and poor generalization capability caused by high-dimensional HOG features in traditional ground penetrating radar target detection algorithms; meanwhile, the invention obtains target candidate frames of different scales and aspect ratios with a selective search algorithm, greatly improving detection precision and efficiency.
Description
Technical Field
The invention relates to the technical field of ground penetrating radar target detection, in particular to a rapid and accurate ground penetrating radar target detection method.
Background
In recent years, investigation of the shallow subsurface has attracted increasing interest. Ground penetrating radar is a nondestructive underground detection method that uses a receiving antenna to capture the electromagnetic waves reflected by a target in order to interpret subsurface data; it is widely applied to road pavement measurement, pipeline detection, surface mining and landmine detection.
There is already a large body of research on target detection algorithms based on ground penetrating radar images. Curve fitting combined with an edge detection algorithm is a low-complexity and easy-to-implement method. However, in complex environments, clutter and image noise can make the accuracy of edge detection unsatisfactory. Moreover, the time complexity of curve fitting is too high, making it difficult to apply to large-scale ground penetrating radar target detection. Recently, machine learning and deep learning have attracted great attention and have shown good performance in ground penetrating radar image target detection. In most cases, deep learning requires a very large amount of training data to achieve good performance, yet actual ground penetrating radar data is difficult to acquire. The machine learning approach is therefore better suited to current practical scenarios, and many machine-learning-based ground penetrating radar target detection algorithms already exist.
At present, Haar, EHD, SIFT, SURF, HOG and other features are used for detecting the ground penetrating radar target based on machine learning. Among these common features, HOG has proven to be a more suitable feature descriptor for ground penetrating radar target detection. The HOG features can extract the distribution of the magnitude and direction of the gradient, are robust to contrast and illumination, and have great success in detecting pedestrians and vehicles in video monitoring.
Existing HOG-based ground penetrating radar target detection algorithms usually slide a fixed window across the ground penetrating radar image to obtain target candidate frames, which are then sent to a classifier for target detection. However, HOG features are not scale invariant and are very sensitive to the aspect ratio of the target in the ground penetrating radar image, making it difficult for the conventional fixed-size sliding window to detect targets of all scales and aspect ratios. Furthermore, the optimal sliding-window size must be determined manually for each ground penetrating radar image, which is time-consuming and labor-intensive, and as an exhaustive search the sliding-window method has high time complexity.
In addition, the original HOG features typically have hundreds of dimensions, which weakens the generalization capability of the classifier and greatly increases the recognition time of the classifier.
In summary, the traditional ground penetrating radar target detection algorithm based on the HOG features has two defects:
1. the overly high dimensionality of the HOG features leads to poor generalization capability and excessively long recognition time for the classifier;
2. the conventional sliding window method has excessive time complexity and cannot detect targets of various scales and aspect ratios.
Disclosure of Invention
In view of the above, the present invention provides a rapid and accurate ground penetrating radar target detection method to solve the technical problems mentioned in the background art. The invention uses PCA to reduce the dimensionality of the original high-dimensional HOG features, improving the generalization capability of the classifier and reducing the target recognition time. Meanwhile, a selective search algorithm replaces the traditional sliding window method to obtain target candidate frames of different scales and aspect ratios, greatly improving target detection precision and efficiency.
In order to achieve the purpose, the invention adopts the following technical scheme:
a fast and accurate ground penetrating radar target detection method comprises the following steps:
step S1, acquiring a ground penetrating radar target positive and negative label data set for training;
step S2, extracting HOG characteristics in the data set acquired in the step S1;
step S3, performing dimensionality reduction processing on the HOG characteristics acquired in the step S2 by using a PCA method;
step S4, inputting the HOG features subjected to dimensionality reduction into a classifier network for training to obtain a trained detection model;
step S5, preprocessing an original ground penetrating radar image for target detection;
step S6, aiming at the preprocessed ground penetrating radar image, obtaining target candidate frames with different scales and aspect ratios by using a selective search algorithm;
step S7, extracting HOG characteristics of the images in the candidate frame;
step S8, using PCA to perform dimension reduction processing on the HOG characteristics;
step S9, inputting the dimension-reduced HOG features into the detection model obtained in step S4 for target detection to obtain the target detection result.
Further, in step S1, the ground penetrating radar target positive and negative label data set includes a training set and a test set, where the training set is an image actually acquired by the ground penetrating radar, and the test set is simulation data.
Further, the step S2 specifically includes:
step S201, calculating the horizontal gradient P_h(x, y) and the vertical gradient P_v(x, y) of each pixel point, the expressions being:
P_h(x, y) = G(x-1, y+1) + 2G(x, y+1) + G(x+1, y+1) - G(x-1, y-1) - 2G(x, y-1) - G(x+1, y-1)  (1)
P_v(x, y) = G(x-1, y-1) + 2G(x-1, y) + G(x-1, y+1) - G(x+1, y-1) - 2G(x+1, y) - G(x+1, y+1)  (2)
In formula (1) and formula (2), G(x, y) represents the gray value of the image at pixel (x, y);
step S202, calculating the gradient amplitude D(x, y) and direction θ(x, y) of each pixel point from the horizontal and vertical gradients obtained in step S201, the expressions being:
D(x, y) = sqrt(P_h(x, y)^2 + P_v(x, y)^2)  (3)
θ(x, y) = arctan(P_v(x, y) / P_h(x, y))  (4)
step S203, dividing each sample image into a plurality of cell units, and counting a gradient histogram of each cell unit, wherein the counting of the gradient histogram of each cell unit specifically includes: uniformly dividing the gradient direction of 0-360 degrees into 9 channels, selecting a corresponding channel by a pixel in each cell unit according to the gradient direction of the pixel, voting the corresponding channel based on the gradient amplitude of the pixel, and finally counting a 9-channel gradient histogram of the cell unit;
step S204, forming a plurality of cell units into a cell block, and counting a gradient histogram of each cell block, wherein the forming of the plurality of cell units into the cell block specifically includes: connecting the gradient histograms of the plurality of cell units in series and normalizing;
step S205, connecting the gradient histograms of all cell blocks of each sample image in series to obtain the final HOG feature x_i of the whole image;
step S206, forming the HOG features x_i of all training samples into the feature matrix X = [x_1, x_2, ..., x_N].
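Steps S201 to S206 can be sketched as follows. This is a minimal illustration assuming an 8 × 8-pixel cell (the patent does not fix the cell or block size here), using the 3 × 3 gradient masks of formulas (1)-(2) and the 9 orientation channels of step S203:

```python
import numpy as np

def gradients(G):
    """Horizontal/vertical gradients of a grayscale image per formulas (1)-(2);
    border pixels are left at zero for simplicity."""
    G = G.astype(float)
    Ph = np.zeros_like(G)
    Pv = np.zeros_like(G)
    Ph[1:-1, 1:-1] = (G[:-2, 2:] + 2 * G[1:-1, 2:] + G[2:, 2:]
                      - G[:-2, :-2] - 2 * G[1:-1, :-2] - G[2:, :-2])
    Pv[1:-1, 1:-1] = (G[:-2, :-2] + 2 * G[:-2, 1:-1] + G[:-2, 2:]
                      - G[2:, :-2] - 2 * G[2:, 1:-1] - G[2:, 2:])
    return Ph, Pv

def cell_histograms(G, cell=8):
    """9-channel gradient histograms per cell unit (step S203): each pixel
    votes for the channel of its gradient direction with its magnitude."""
    Ph, Pv = gradients(G)
    D = np.hypot(Ph, Pv)                              # gradient amplitude
    theta = np.degrees(np.arctan2(Pv, Ph)) % 360.0    # direction in [0, 360)
    channel = (theta // 40.0).astype(int)             # 9 channels of 40 degrees each
    hists = []
    H, W = G.shape
    for cx in range(0, H - cell + 1, cell):
        for cy in range(0, W - cell + 1, cell):
            h = np.zeros(9)
            np.add.at(h, channel[cx:cx + cell, cy:cy + cell].ravel(),
                      D[cx:cx + cell, cy:cy + cell].ravel())
            hists.append(h)
    return np.array(hists)
```

Block normalization (steps S204-S205) would then concatenate groups of these cell histograms, normalize them, and string all blocks together into the final feature x_i.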
Further, the step S3 specifically includes:
step S301, centralizing the feature matrix obtained in step S206, which includes: let X = [x_1, x_2, ..., x_N] be the feature matrix composed of N M-dimensional HOG features used for training; the centered feature matrix X̄ is then expressed as:
X̄ = [x_1 - μ, x_2 - μ, ..., x_N - μ], where μ = (1/N) Σ_{i=1}^{N} x_i;
step S302, calculating the covariance matrix of the centered feature matrix X̄ obtained in step S301, the expression being:
C_X = (1/N) X̄ X̄^T;
step S303, performing eigendecomposition on the covariance matrix C_X to obtain C_X Ξ = Ξ λ, where λ = diag{λ_1, λ_2, ..., λ_M}, the λ_i are the eigenvalues in descending order, and Ξ = [ξ_1, ξ_2, ..., ξ_M] is the matrix composed of the eigenvectors ξ_i corresponding to the λ_i;
step S304, feature dimension reduction, comprising: forming the projection matrix P = [ξ_1, ξ_2, ..., ξ_k] from the eigenvectors corresponding to the first k largest eigenvalues; the HOG feature matrix Y reduced to k dimensions is then expressed as
Y = P^T X  (5)
In equation (5), X represents the feature matrix formed by the HOG features x_i of all training samples.
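Steps S301 to S304 are standard principal component analysis; a compact sketch follows (the sample count, feature dimensionality and k below are illustrative, not values from the patent):

```python
import numpy as np

def pca_fit(X, k):
    """X: M x N matrix whose columns are the N M-dimensional HOG features.
    Returns the projection matrix P (M x k) and the reduced features Y (k x N)."""
    mu = X.mean(axis=1, keepdims=True)
    Xc = X - mu                            # step S301: centering
    C = Xc @ Xc.T / X.shape[1]             # step S302: covariance matrix
    vals, vecs = np.linalg.eigh(C)         # step S303: eigendecomposition
    order = np.argsort(vals)[::-1]         # eigenvalues in descending order
    P = vecs[:, order[:k]]                 # step S304: first k eigenvectors
    return P, P.T @ Xc                     # formula (5), applied to the centered matrix
```

At detection time (step S8) the same P is reused; one consistent choice, used here, is to subtract the training mean before projecting the new features.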
Further, the classifier network is an SVM, and the SVM selects a Gaussian kernel function.
Further, the step S5 specifically includes:
step S501, performing direct current filtering on the original image, the expression being:
I'(x, y) = I(x, y) - (1 / (2w_1 + 1)) Σ_{k=x-w_1}^{x+w_1} I(k, y)  (6)
In formula (6), I'(x, y) is the image after direct current filtering, I(x, y) is the original ground penetrating radar image, 2w_1 + 1 denotes the sliding window length, and k denotes the kth column of the image being processed;
step S502, performing clutter suppression on the direct-current-filtered image I'(x, y) using an averaging method, the expression being:
I''(x, y) = I'(x, y) - (1 / (2w_2 + 1)) Σ_{k=x-w_2}^{x+w_2} I'(k, y)  (7)
In formula (7), I''(x, y) is the image after clutter suppression, 2w_2 + 1 denotes the sliding window length, and k denotes the kth column of the image being processed.
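Both preprocessing passes can be read as sliding-window mean subtraction over neighboring columns; a sketch under that reading follows (the formulas themselves are elided in the text above, so the window axis is an assumption here):

```python
import numpy as np

def mean_subtract(img, w):
    """Subtract from each pixel the mean over a window of 2*w+1 neighboring
    columns (clipped at the image borders)."""
    H, W = img.shape
    out = np.empty((H, W), dtype=float)
    for y in range(W):
        lo, hi = max(0, y - w), min(W, y + w + 1)
        out[:, y] = img[:, y] - img[:, lo:hi].mean(axis=1)
    return out

# step S501: filtered = mean_subtract(raw_bscan, w1)   # DC filtering
# step S502: clean   = mean_subtract(filtered, w2)     # clutter suppression
```

On a B-scan this removes the slowly varying background common to neighboring traces, which is what suppresses the direct-coupling and horizontal clutter bands.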
Further, the step S6 specifically includes:
step S601, obtaining the initial segmentation region set R = {r_1, r_2, ..., r_n} from the preprocessed ground penetrating radar image and initializing the similarity set S = ∅;
step S602, calculating the color similarity between two adjacent regions r_i, r_j, including: counting, for a region r_i of the image, a 25-bin histogram for each color channel and obtaining the L1-norm-normalized color vector C_i = {c_i^1, ..., c_i^n}; the color similarity between two adjacent regions r_i, r_j is then calculated as:
s_color(r_i, r_j) = Σ_{k=1}^{n} min(c_i^k, c_j^k);
step S603, calculating the texture similarity between the two regions, including: adopting the local binary pattern to represent the texture features, i.e., the texture feature of region r_i is represented as T_i = {t_i^1, ..., t_i^m}; the texture similarity between two adjacent regions r_i, r_j is then calculated as:
s_texture(r_i, r_j) = Σ_{k=1}^{m} min(t_i^k, t_j^k);
step S604, calculating the size similarity between two adjacent regions r_i, r_j, expressed as:
s_size(r_i, r_j) = 1 - (size(r_i) + size(r_j)) / size(im)  (8)
In formula (8), size(·) represents the number of pixels in a region, and size(im) represents the number of pixels in the whole image;
step S605, calculating the fill similarity as:
s_fill(r_i, r_j) = 1 - (size(BB_ij) - size(r_i) - size(r_j)) / size(im)  (9)
In formula (9), size(BB_ij) denotes the size of the circumscribed rectangle of regions r_i, r_j;
step S606, calculating the total similarity from the color similarity s_color(r_i, r_j), texture similarity s_texture(r_i, r_j), size similarity s_size(r_i, r_j) and fill similarity s_fill(r_i, r_j), the expression being:
s(r_i, r_j) = ε_1 s_color(r_i, r_j) + ε_2 s_texture(r_i, r_j) + ε_3 s_size(r_i, r_j) + ε_4 s_fill(r_i, r_j)  (10)
In formula (10), ε_1, ε_2, ε_3, ε_4 are the weighting coefficients of the different similarities;
step S607, calculating the similarity s(r_i, r_j) of every pair of adjacent regions and adding it to the similarity set S;
step S608, merging the two regions r_i, r_j corresponding to the maximum value in the similarity set S into a single new region, removing all similarities involving r_i or r_j from the similarity set S, adding the new region to the region set R, and adding its similarities with its adjacent regions to the similarity set S;
step S609, repeating step S608 until the similarity set S = ∅; the subsets in the region set R are then the final candidate regions of the image.
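The similarity measures of steps S602 to S606 follow the standard selective-search definitions; a minimal sketch with regions reduced to plain dictionaries (an illustration only — the patent's region bookkeeping and histogram extraction are simplified away):

```python
def hist_intersection(a, b):
    """Histogram intersection of two L1-normalized vectors (color/texture similarity)."""
    return sum(min(x, y) for x, y in zip(a, b))

def size_similarity(ri, rj, im_size):
    return 1.0 - (ri["size"] + rj["size"]) / im_size               # formula (8)

def fill_similarity(ri, rj, bbox_size, im_size):
    return 1.0 - (bbox_size - ri["size"] - rj["size"]) / im_size   # formula (9)

def total_similarity(ri, rj, bbox_size, im_size, eps=(1, 1, 1, 1)):
    """Weighted sum of the four similarities, formula (10)."""
    return (eps[0] * hist_intersection(ri["color"], rj["color"])
            + eps[1] * hist_intersection(ri["texture"], rj["texture"])
            + eps[2] * size_similarity(ri, rj, im_size)
            + eps[3] * fill_similarity(ri, rj, bbox_size, im_size))
```

Steps S607 to S609 would then repeatedly merge the pair of adjacent regions with the highest total similarity until the similarity set is empty.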
Further, the step S7 specifically includes:
step S701, scaling the images in the candidate frame regions obtained by the selective search algorithm in step S6 so that their size is consistent with the training samples;
step S702, extracting the HOG features of the images by the method of step S2 to form the original high-dimensional HOG feature matrix X_detect.
Further, the step S8 specifically includes:
using the projection matrix P obtained in the training stage to reduce the dimension of the original high-dimensional HOG feature matrix X_detect obtained in step S7, the reduced feature matrix Y_detect being expressed as: Y_detect = P^T X_detect.
Further, the step S9 specifically includes:
step S901, inputting the dimension-reduced HOG feature matrix Y_detect into the trained SVM to judge which target candidate frames are target regions;
step S902, performing non-maximum suppression on all candidate frames judged as targets, removing redundant candidate frames, and finally outputting the target detection result.
The invention has the beneficial effects that:
1. In general, the original HOG features have very high dimensionality, resulting in degraded classifier performance. The invention uses PCA to reduce the dimension of the original HOG features, greatly improving the generalization capability of the classifier while also improving target detection efficiency.
2. The traditional HOG-based ground penetrating radar target detection algorithm obtains target candidate regions with a sliding window and sends them to a classifier for detection and recognition. However, the sliding window method has high time complexity, and because HOG features are very sensitive to target size, a fixed-size sliding window cannot detect targets of different sizes in the ground penetrating radar image, resulting in poor detection performance. The invention replaces the traditional sliding window with a selective search algorithm that adaptively selects candidate frames of different scales and aspect ratios and sends them to the classifier for detection and recognition, greatly improving target detection efficiency and precision.
Drawings
Fig. 1 is a schematic flowchart of a method for detecting a target of a ground penetrating radar in a fast and accurate manner provided in embodiment 1;
FIG. 2 is a schematic diagram of a training positive sample provided in example 1;
FIG. 3 is a schematic diagram of a training negative example provided in example 1;
fig. 4 is a visualized image of the training positive sample HOG feature provided in example 1;
FIG. 5 is a visual image of the training negative sample HOG feature provided in example 1;
FIG. 6 is test image 1, shown after preprocessing, used to verify the target detection performance of the method of embodiment 1;
FIG. 7 is a candidate box obtained by the selective search algorithm of the test image 1 provided in example 1;
fig. 8 is the target detection result finally obtained from the test image 1 provided in example 1.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
Referring to fig. 1 to 8, the embodiment provides a method for detecting a ground penetrating radar target quickly and accurately, which includes two stages of training and detecting, specifically, the training stage includes the following steps:
step S1, acquiring a ground penetrating radar target positive and negative label data set for training;
specifically, in the present embodiment, the positive- and negative-label data sets include a training set and a test set, wherein the training set is derived from images acquired by an actual ground penetrating radar, with a sample size of 33 × 52 pixels, and the test set used to evaluate the target detection performance is derived from simulation data.
Step S2, extracting HOG characteristics of the positive and negative label data sets;
specifically, in this embodiment, the HOG feature extraction in step S2 specifically includes:
step S201, calculating the horizontal gradient P_h(x, y) and the vertical gradient P_v(x, y) of each pixel point using the following formulas:
P_h(x, y) = G(x-1, y+1) + 2G(x, y+1) + G(x+1, y+1) - G(x-1, y-1) - 2G(x, y-1) - G(x+1, y-1)
P_v(x, y) = G(x-1, y-1) + 2G(x-1, y) + G(x-1, y+1) - G(x+1, y-1) - 2G(x+1, y) - G(x+1, y+1)
Wherein G (x, y) represents the gray value of the image at the (x, y) pixel point;
step S202, calculating the gradient amplitude D(x, y) and direction θ(x, y) of each pixel point using the following formulas:
D(x, y) = sqrt(P_h(x, y)^2 + P_v(x, y)^2)
θ(x, y) = arctan(P_v(x, y) / P_h(x, y))
step S203, dividing each sample image into a plurality of cell units, and counting a gradient histogram of each cell unit. The statistical method of the gradient histogram of each cell unit comprises the following steps: uniformly dividing the gradient direction of 0-360 degrees into 9 channels, selecting a corresponding channel by a pixel in each cell unit according to the gradient direction of the pixel, voting the corresponding channel based on the gradient amplitude of the pixel, and finally counting a 9-channel gradient histogram of the cell unit;
step S204, a plurality of cell units are combined into cell blocks, and the gradient histogram of each cell block is counted. The specific method is to connect the gradient histograms of each cell unit in series and normalize the histograms;
step S205, connecting the gradient histograms of all cell blocks of each training sample image in series to obtain the final HOG feature x_i of the whole image;
step S206, forming the HOG features x_i of all training samples into the feature matrix X = [x_1, x_2, ..., x_N].
Step S3, performing dimensionality reduction on the original HOG characteristic by using a PCA method;
specifically, in this embodiment, the specific step of step S3 is:
step S301, centralizing the feature matrix used for training, including: let X = [x_1, x_2, ..., x_N] be the feature matrix composed of N M-dimensional HOG features; the centered feature matrix X̄ can be calculated as X̄ = [x_1 - μ, x_2 - μ, ..., x_N - μ], where μ = (1/N) Σ_{i=1}^{N} x_i;
step S302, calculating the covariance matrix C_X = (1/N) X̄ X̄^T;
step S303, eigendecomposition. The covariance matrix C_X satisfies
C_X Ξ = Ξ λ
where λ = diag{λ_1, λ_2, ..., λ_M}, the λ_i are the eigenvalues in descending order, and Ξ = [ξ_1, ξ_2, ..., ξ_M] is the matrix composed of the corresponding eigenvectors ξ_i.
Step S304, feature dimension reduction, which comprises the following steps: the eigenvectors corresponding to the first k largest eigenvalues form the projection matrix P = [ξ_1, ξ_2, ..., ξ_k]. The HOG feature matrix Y reduced to k dimensions can be expressed as:
Y = P^T X
step S4, inputting the HOG features subjected to dimensionality reduction into a classifier network for training, and storing the trained model for subsequent target detection;
specifically, in the embodiment, the classifier network is an SVM with a Gaussian kernel function, where the penalty parameter C and the kernel parameter gamma take appropriate values according to the actual situation.
More specifically, in this example, C is set to 1 and gamma is set to 0.5. And inputting HOG characteristics of all sample images subjected to dimensionality reduction into an SVM for training, and finally storing the trained model for subsequent target detection.
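The Gaussian kernel chosen for the SVM is K(x, x') = exp(-gamma · ||x - x'||^2); with this example's gamma = 0.5 it can be evaluated as below (training the SVM itself would typically be delegated to a library, e.g. scikit-learn's SVC(C=1.0, gamma=0.5) — that library choice is an assumption, not something the patent specifies):

```python
import math

def gaussian_kernel(x, xp, gamma=0.5):
    """RBF kernel value exp(-gamma * ||x - x'||^2); gamma = 0.5 as in this example."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, xp))
    return math.exp(-gamma * sq_dist)
```

The kernel equals 1 for identical feature vectors and decays toward 0 as the reduced HOG features move apart, which is what lets the SVM form a nonlinear decision boundary.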
Specifically, in this embodiment, the target detection stage includes the following steps:
step S5, preprocessing an original ground penetrating radar image for target detection;
specifically, in this embodiment, the step S5 specifically includes the following steps:
step S501, performing direct current filtering on the original image, the expression being:
I'(x, y) = I(x, y) - (1 / (2w_1 + 1)) Σ_{k=x-w_1}^{x+w_1} I(k, y)
where I'(x, y) is the image after direct current filtering, I(x, y) is the original ground penetrating radar image, and 2w_1 + 1 denotes the sliding window length;
step S502, performing clutter suppression on the direct-current-filtered image I'(x, y) using an averaging method, the expression being:
I''(x, y) = I'(x, y) - (1 / (2w_2 + 1)) Σ_{k=x-w_2}^{x+w_2} I'(k, y)
where I''(x, y) is the image after clutter suppression and 2w_2 + 1 denotes the sliding window length.
step S6, applying a selective search algorithm to the preprocessed ground penetrating radar image to obtain target candidate frames with different scales and aspect ratios;
specifically, in this embodiment, the step S6 specifically includes the following steps:
step S601, acquiring the initial segmentation region set R = {r_1, r_2, ..., r_n} in the preprocessed ground penetrating radar image using a graph-based image segmentation algorithm, and initializing the similarity set S = ∅;
step S602, calculating the color similarity s_color(r_i, r_j) between two adjacent regions r_i, r_j. The color histogram of each color channel of the image is counted with 25 bins; for a region r_i of a color RGB image, the L1-norm-normalized color vector C_i = {c_i^1, ..., c_i^n} can be obtained, and the color similarity is calculated as:
s_color(r_i, r_j) = Σ_{k=1}^{n} min(c_i^k, c_j^k);
step S603, calculating the texture similarity between the two regions. There are many ways to extract local image features to represent the texture similarity between two adjacent regions; in this example, the local binary pattern (LBP) is used to represent the texture features, with the texture feature of region r_i represented as T_i = {t_i^1, ..., t_i^m}. The texture similarity between two adjacent regions r_i, r_j is calculated as:
s_texture(r_i, r_j) = Σ_{k=1}^{m} min(t_i^k, t_j^k);
step S604, calculating the size similarity between two adjacent regions r_i, r_j as:
s_size(r_i, r_j) = 1 - (size(r_i) + size(r_j)) / size(im)
where size(·) represents the number of pixels in a region and size(im) represents the number of pixels in the whole image;
step S605, calculating the fill similarity as:
s_fill(r_i, r_j) = 1 - (size(BB_ij) - size(r_i) - size(r_j)) / size(im)
where size(BB_ij) denotes the size of the circumscribed rectangle of regions r_i, r_j;
step S606, the total similarity s(r_i, r_j) is obtained by weighting the four similarities:
s(r_i, r_j) = ε_1 s_color(r_i, r_j) + ε_2 s_texture(r_i, r_j) + ε_3 s_size(r_i, r_j) + ε_4 s_fill(r_i, r_j)
where ε_1, ε_2, ε_3, ε_4 are the weighting coefficients of the different similarities; in this example, ε_1, ε_2, ε_3 and ε_4 are all set to 1;
step S607, calculating the similarity s(r_i, r_j) of every pair of adjacent regions and adding it to the similarity set S;
step S608, merging the two regions r_i, r_j corresponding to the maximum value in the similarity set S into a single new region, removing all similarities involving r_i or r_j from the similarity set S, adding the new region to the region set R, and adding its similarities with its adjacent regions to the similarity set S;
step S609, repeating step S608 until the similarity set S = ∅; the subsets in the region set R are then the final candidate regions of the image.
Step S7, extracting HOG characteristics of the images in the candidate frame;
specifically, in this embodiment, the step S7 specifically includes:
step S701, scaling the images in the candidate frames of various scales and aspect ratios obtained by the selective search algorithm so that their size is consistent with the training samples, i.e., 33 × 52 pixels in this example;
step S702, extracting the HOG features as in step S2 to form the original high-dimensional HOG feature matrix X_detect.
Step S8, using PCA to perform dimension reduction processing on the HOG characteristics;
specifically, in this embodiment, the step S8 specifically includes:
using the projection matrix P obtained in the training phase to reduce the dimension of the original high-dimensional HOG feature matrix X_detect obtained in step S7 into the k-dimensional feature matrix Y_detect, the calculation formula being:
Y_detect = P^T X_detect
step S9, inputting the HOG features after dimension reduction into the SVM which is trained in the training stage to obtain a target detection result;
specifically, in this embodiment, the step S9 specifically includes:
step S901, inputting the dimension-reduced HOG feature matrix Y_detect into the SVM trained in step S4 to judge which target candidate frames are target regions;
step S902, sorting all candidate frames judged as targets in descending order of confidence;
step S903, selecting the candidate frame with the maximum confidence and adding it to the final detection frame set, then suppressing (deleting) all remaining candidate frames whose intersection-over-union with the selected frame is greater than the threshold (set to 0.01 in this example), while keeping those whose intersection-over-union is smaller than the threshold;
step S904, selecting the candidate frame with the maximum confidence among the remaining candidate frames and repeating step S903 until all frames have been processed;
step S905, outputting the detection results, namely all the detection frames contained in the final detection frame set.
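The suppression loop of steps S902 to S905 is standard greedy non-maximum suppression; a sketch with candidate frames as (x1, y1, x2, y2, confidence) tuples (the 0.01 intersection-over-union threshold is the value used in this example):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, thresh=0.01):
    """Greedy non-maximum suppression (steps S902-S905): keep the
    highest-confidence frame, delete frames overlapping it above thresh."""
    boxes = sorted(boxes, key=lambda r: r[4], reverse=True)   # step S902
    kept = []
    while boxes:
        best = boxes.pop(0)                                   # step S903/S904
        kept.append(best)
        boxes = [b for b in boxes if iou(best[:4], b[:4]) <= thresh]
    return kept
```

With such a low threshold, almost any overlap with a higher-confidence frame causes a candidate to be removed, so each hyperbolic target signature is reported by a single frame.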
Specifically, fig. 2 and 3 are positive and negative samples of training, fig. 4 and 5 are HOG feature visualization results extracted from the training samples, fig. 6 is a test image after preprocessing for target detection, fig. 7 is a candidate box obtained by the test image through a selective search algorithm, and fig. 8 is a final target detection result.
Matters not described in detail in the present invention are well known to those skilled in the art.
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations could be devised by those skilled in the art in light of the present teachings without departing from the inventive concepts. Therefore, the technical solutions available to those skilled in the art through logic analysis, reasoning and limited experiments based on the prior art according to the concept of the present invention should be within the scope of protection defined by the claims.
Claims (10)
1. A fast and accurate ground penetrating radar target detection method is characterized by comprising the following steps:
step S1, acquiring a ground penetrating radar target positive and negative label data set for training;
step S2, extracting HOG characteristics in the data set acquired in the step S1;
step S3, performing dimensionality reduction processing on the HOG characteristics acquired in the step S2 by using a PCA method;
step S4, inputting the HOG features subjected to dimensionality reduction into a classifier network for training to obtain a trained detection model;
step S5, preprocessing an original ground penetrating radar image for target detection;
step S6, aiming at the preprocessed ground penetrating radar image, obtaining target candidate frames with different scales and aspect ratios by using a selective search algorithm;
step S7, extracting HOG characteristics of the images in the candidate frame;
step S8, using PCA to perform dimension reduction processing on the HOG characteristics;
step S9, inputting the dimension-reduced HOG features into the detection model obtained in step S4 for target detection to obtain the target detection result.
2. The method for detecting the target of the ground penetrating radar as claimed in claim 1, wherein in the step S1, the data set of positive and negative labels of the target of the ground penetrating radar includes a training set and a test set, wherein the training set is an image actually acquired by the ground penetrating radar, and the test set is simulation data.
3. The method for detecting the target of the georadar according to claim 1, wherein the step S2 specifically includes:
step S201, calculating the horizontal gradient P_h(x, y) and the vertical gradient P_v(x, y) of each pixel point, the expressions being:
P_h(x, y) = G(x-1, y+1) + 2G(x, y+1) + G(x+1, y+1) - G(x-1, y-1) - 2G(x, y-1) - G(x+1, y-1)  (1)
P_v(x, y) = G(x-1, y-1) + 2G(x-1, y) + G(x-1, y+1) - G(x+1, y-1) - 2G(x+1, y) - G(x+1, y+1)  (2)
In formula (1) and formula (2), G(x, y) represents the gray value of the image at pixel (x, y);
step S202, calculating the gradient amplitude D(x, y) and direction θ(x, y) of each pixel point from the horizontal and vertical gradients obtained in step S201, the expressions being:
D(x, y) = sqrt(P_h(x, y)^2 + P_v(x, y)^2)  (3)
θ(x, y) = arctan(P_v(x, y) / P_h(x, y))  (4)
step S203, dividing each sample image into a plurality of cell units, and counting a gradient histogram of each cell unit, wherein the counting of the gradient histogram of each cell unit specifically includes: uniformly dividing the gradient direction of 0-360 degrees into 9 channels, selecting a corresponding channel by a pixel in each cell unit according to the gradient direction of the pixel, voting the corresponding channel based on the gradient amplitude of the pixel, and finally counting a 9-channel gradient histogram of the cell unit;
step S204, grouping a plurality of cell units into a cell block and counting the gradient histogram of each cell block, specifically: concatenating the gradient histograms of the constituent cell units and normalizing the result;
step S205, concatenating the gradient histograms of all cell blocks of each sample image to obtain the final HOG feature x_i of the whole image;
step S206, assembling the HOG features x_i of all training samples into the feature matrix X = [x_1, x_2, ..., x_N].
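A minimal numpy sketch of steps S201–S205, assuming 8×8-pixel cells and 2×2-cell blocks with L2 normalization (the claim does not fix these parameters):

```python
import numpy as np

def hog_feature(img, cell=8, block=2, bins=9):
    """HOG per steps S201-S205: central-difference gradients, 0-360 deg
    gradient histograms per cell, normalised blocks, all concatenated."""
    img = img.astype(np.float64)
    gx = np.zeros_like(img); gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]          # horizontal gradient P_h
    gy[1:-1, :] = img[2:, :] - img[:-2, :]          # vertical gradient P_v
    mag = np.hypot(gx, gy)                          # gradient magnitude D
    ang = np.rad2deg(np.arctan2(gy, gx)) % 360.0    # direction theta in [0, 360)
    ch, cw = img.shape[0] // cell, img.shape[1] // cell
    bin_idx = (ang / (360.0 / bins)).astype(int) % bins
    hist = np.zeros((ch, cw, bins))
    for i in range(ch):                              # step S203: vote per cell unit
        for j in range(cw):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            b = bin_idx[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            hist[i, j] = np.bincount(b.ravel(), weights=m.ravel(), minlength=bins)
    feats = []
    for i in range(ch - block + 1):                  # step S204: block normalisation
        for j in range(cw - block + 1):
            v = hist[i:i+block, j:j+block].ravel()
            feats.append(v / (np.linalg.norm(v) + 1e-12))
    return np.concatenate(feats)                     # step S205: final HOG vector x_i
```

For a 32×32 patch this yields a 4×4 cell grid, 3×3 overlapping blocks and a 324-dimensional feature vector.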
4. The ground penetrating radar target detection method according to claim 3, wherein step S3 specifically comprises:
step S301, centering the feature matrix obtained in step S206: let X = [x_1, x_2, ..., x_N] be the feature matrix composed of the N M-dimensional HOG features used for training; with the mean vector x̄ = (1/N) Σ_{i=1}^{N} x_i, the centered feature matrix is X = [x_1 − x̄, x_2 − x̄, ..., x_N − x̄];
step S302, computing the covariance matrix of the centered feature matrix obtained in step S301:

C_X = (1/N) X X^T
step S303, performing eigendecomposition of the covariance matrix C_X: C_X Ξ = Ξ Λ, where Λ = diag{λ_1, λ_2, ..., λ_M} with the eigenvalues λ_i sorted in descending order, and Ξ = [ξ_1, ξ_2, ..., ξ_M] is the matrix of the eigenvectors ξ_i corresponding to λ_i;
step S304, feature dimension reduction: form the projection matrix P = [ξ_1, ξ_2, ..., ξ_k] from the eigenvectors corresponding to the first k largest eigenvalues; the HOG feature matrix reduced to k dimensions is then

Y = P^T X (5)

in formula (5), X denotes the feature matrix composed of the HOG features x_i of all training samples.
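Steps S301–S304 can be sketched in numpy; this assumes samples are stored as columns of an M×N matrix and uses the 1/N covariance normalization:

```python
import numpy as np

def pca_project(X, k):
    """PCA per steps S301-S304: centre the M x N feature matrix,
    eigendecompose the covariance, keep the eigenvectors of the
    k largest eigenvalues. Returns (P, Y) with Y = P^T X."""
    Xc = X - X.mean(axis=1, keepdims=True)       # step S301: centring
    C = Xc @ Xc.T / X.shape[1]                   # step S302: covariance C_X
    lam, xi = np.linalg.eigh(C)                  # step S303: eigendecomposition
    order = np.argsort(lam)[::-1]                # eigenvalues in descending order
    P = xi[:, order[:k]]                         # step S304: projection matrix P
    return P, P.T @ Xc                           # formula (5): Y = P^T X
```

Note that `np.linalg.eigh` returns eigenvalues in ascending order, hence the explicit descending sort.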
5. The ground penetrating radar target detection method according to claim 4, wherein the classifier is an SVM, and the SVM uses a Gaussian kernel function.
6. The ground penetrating radar target detection method according to claim 5, wherein step S5 specifically comprises:
step S501, performing direct-current filtering on the original image:

I′(x, y) = I(x, y) − (1/(2w_1+1)) Σ_{k=y−w_1}^{y+w_1} I(x, k) (6)

in formula (6), I′(x, y) is the image after DC filtering, I(x, y) is the original ground penetrating radar image, 2w_1+1 is the sliding window length, and k indexes the columns of the image being processed;
step S502, performing clutter suppression on the DC-filtered image I′(x, y) by the mean-subtraction method:

I″(x, y) = I′(x, y) − (1/(2w_2+1)) Σ_{k=y−w_2}^{y+w_2} I′(x, k) (7)

in formula (7), I″(x, y) is the image after clutter suppression, 2w_2+1 is the sliding window length, and k indexes the columns of the image being processed.
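Both formulas (6) and (7) subtract a sliding-window mean over neighbouring columns; a sketch of that shared operation, assuming the window is clipped at the image edges (edge handling is not specified in the claim):

```python
import numpy as np

def remove_background(img, w):
    """Sliding-window mean subtraction as in formulas (6)/(7): from each
    column, subtract the mean of the 2w+1 surrounding columns."""
    img = img.astype(np.float64)
    out = np.empty_like(img)
    n = img.shape[1]
    for y in range(n):
        lo, hi = max(0, y - w), min(n, y + w + 1)   # clip window at the edges
        out[:, y] = img[:, y] - img[:, lo:hi].mean(axis=1)
    return out
```

Applied to a B-scan whose rows are constant across traces (pure horizontal banding), the output is zero, which is the clutter this step is meant to suppress.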
7. The ground penetrating radar target detection method according to claim 6, wherein step S6 specifically comprises:
step S601, obtaining an initial segmentation region set R = {r_1, r_2, ..., r_n} from the preprocessed ground penetrating radar image and initializing the similarity set S = ∅;
step S602, calculating the color similarity of two adjacent regions r_i, r_j: for a region r_i, compute a histogram with 25 bins for each color channel and normalize it by the L1 norm, obtaining the color vector C_i = {c_i^1, c_i^2, ..., c_i^n}; the color similarity of two adjacent regions r_i, r_j is then

s_color(r_i, r_j) = Σ_{k=1}^{n} min(c_i^k, c_j^k);
step S603, calculating the texture similarity of the two regions: the texture features are represented by the local binary pattern (LBP), i.e. the texture feature of region r_i is T_i = {t_i^1, t_i^2, ..., t_i^n}; the texture similarity of two adjacent regions r_i, r_j is then

s_texture(r_i, r_j) = Σ_{k=1}^{n} min(t_i^k, t_j^k);
step S604, calculating the size similarity of two adjacent regions r_i, r_j:

s_size(r_i, r_j) = 1 − (size(r_i) + size(r_j)) / size(im) (8)

in formula (8), size(·) denotes the number of pixels in a region and size(im) denotes the number of pixels in the whole image;
step S605, calculating the fill similarity:

s_fill(r_i, r_j) = 1 − (size(BB_ij) − size(r_i) − size(r_j)) / size(im) (9)

in formula (9), size(BB_ij) denotes the size of the bounding rectangle of regions r_i and r_j;
step S606, calculating the total similarity from the color similarity s_color(r_i, r_j), texture similarity s_texture(r_i, r_j), size similarity s_size(r_i, r_j) and fill similarity s_fill(r_i, r_j):

s(r_i, r_j) = ε_1 s_color(r_i, r_j) + ε_2 s_texture(r_i, r_j) + ε_3 s_size(r_i, r_j) + ε_4 s_fill(r_i, r_j) (10)

in formula (10), ε_1, ε_2, ε_3, ε_4 are the weighting coefficients of the different similarities;
step S607, computing the similarity s(r_i, r_j) of every pair of adjacent regions and adding it to the similarity set S;
step S608, merging the two regions r_i, r_j corresponding to the maximum value in the similarity set S into a new region r_t, removing from S all similarities involving r_i or r_j, adding r_t to the region set R, and computing the similarities between r_t and its adjacent regions and adding them to S; the merging of step S608 is repeated until S is empty, and the bounding rectangles of the resulting regions are taken as the target candidate frames.
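The four similarity measures of steps S602–S606 can be sketched as follows; the histograms are assumed to be already L1-normalised, and equal weights ε_1 = ε_2 = ε_3 = ε_4 = 1 are used by default (the claim leaves the weights unspecified):

```python
import numpy as np

def hist_sim(a, b):
    """Histogram intersection, used for both colour (S602) and texture (S603)."""
    return float(np.minimum(a, b).sum())

def region_similarity(hc_i, hc_j, ht_i, ht_j, size_i, size_j, bb_ij, size_im,
                      eps=(1.0, 1.0, 1.0, 1.0)):
    """Total similarity of formula (10) from the four components."""
    s_color = hist_sim(hc_i, hc_j)                            # step S602
    s_texture = hist_sim(ht_i, ht_j)                          # step S603
    s_size = 1.0 - (size_i + size_j) / size_im                # formula (8)
    s_fill = 1.0 - (bb_ij - size_i - size_j) / size_im        # formula (9)
    e1, e2, e3, e4 = eps
    return e1*s_color + e2*s_texture + e3*s_size + e4*s_fill  # formula (10)
```

For two regions with identical normalised histograms, s_color = s_texture = 1, and s_fill = 1 when their bounding rectangle is exactly filled by the two regions.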
8. The ground penetrating radar target detection method according to claim 7, wherein step S7 specifically comprises:
step S701, scaling the image in each candidate frame region obtained by the selective search algorithm of step S6 so that its size matches that of the training samples;
step S702, extracting the HOG features of the images by the method of step S2 and assembling them into the original high-dimensional HOG feature matrix X_detect.
9. The ground penetrating radar target detection method according to claim 8, wherein step S8 specifically comprises:
using the projection matrix P obtained in the training stage to reduce the dimension of the original high-dimensional HOG feature matrix X_detect obtained in step S7; the dimension-reduced feature matrix Y_detect is given by Y_detect = P^T X_detect.
10. The ground penetrating radar target detection method according to claim 9, wherein step S9 specifically comprises:
step S901, inputting the dimension-reduced HOG feature matrix Y_detect into the trained SVM to determine which target candidate frames are target regions;
step S902, performing non-maximum suppression on all candidate frames judged to be targets, removing redundant candidate frames, and outputting the final target detection result.
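A sketch of the non-maximum suppression of step S902, using a standard greedy IoU criterion (the claim does not specify the overlap threshold; 0.5 is assumed here):

```python
import numpy as np

def nms(boxes, scores, iou_thr=0.5):
    """Greedy NMS: keep the highest-scoring box, drop boxes whose IoU
    with it exceeds iou_thr, repeat on the remainder."""
    boxes = np.asarray(boxes, float)
    order = np.argsort(scores)[::-1]          # indices by descending score
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        # IoU of the kept box with all remaining boxes
        x0 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        y0 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        x1 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        y1 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(x1 - x0, 0, None) * np.clip(y1 - y0, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = ((boxes[order[1:], 2] - boxes[order[1:], 0])
                  * (boxes[order[1:], 3] - boxes[order[1:], 1]))
        iou = inter / (area_i + area_r - inter)
        order = order[1:][iou <= iou_thr]     # suppress redundant frames
    return keep
```

Two heavily overlapping detections collapse to the higher-scoring one, while a disjoint detection survives.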
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210021887.7A CN114373079A (en) | 2022-01-10 | 2022-01-10 | Rapid and accurate ground penetrating radar target detection method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114373079A true CN114373079A (en) | 2022-04-19 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117152083A (en) * | 2023-08-31 | 2023-12-01 | 哈尔滨工业大学 | Ground penetrating radar road disease image prediction visualization method based on category activation mapping |
CN117372719A (en) * | 2023-12-07 | 2024-01-09 | 四川迪晟新达类脑智能技术有限公司 | Target detection method based on screening |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105868774A (en) * | 2016-03-24 | 2016-08-17 | 西安电子科技大学 | Selective search and convolutional neural network based vehicle logo recognition method |
CN109086687A (en) * | 2018-07-13 | 2018-12-25 | 东北大学 | The traffic sign recognition method of HOG-MBLBP fusion feature based on PCA dimensionality reduction |
CN111428748A (en) * | 2020-02-20 | 2020-07-17 | 重庆大学 | Infrared image insulator recognition and detection method based on HOG characteristics and SVM |
CN113296095A (en) * | 2021-05-21 | 2021-08-24 | 东南大学 | Target hyperbolic edge extraction method for pulse ground penetrating radar |
Legal Events

Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||