CN115359258B - Weak and small target detection method and system for component uncertainty measurement - Google Patents
- Publication number
- CN115359258B (application CN202211031297.9A)
- Authority
- CN
- China
- Prior art keywords
- uncertainty
- local
- consistency
- component
- layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06V10/273—Segmentation of patterns in the image field; removing elements interfering with the pattern to be recognised
- G06V10/28—Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
- G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
- G06V2201/07—Target detection
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The invention provides a method and a system for measuring local uncertainty based on a component-consistency principle, used for detecting small targets submerged in a complex background. The target and the surrounding background belong to different component signals, and changes in the spatial components cause observation uncertainty. In the method, a multi-layer nested sliding window is constructed, the local component uncertainty (LUM) of the local-area signal is calculated by evaluating the component-consistency condition of that signal, a local component-uncertainty map is drawn, and the complex background in the image is suppressed. An energy weighting factor is then introduced to reinforce the energy information contained in the target within the uncertainty distribution map, thereby enhancing the target signal. Verification on real images shows that the invention achieves better small-target detection performance under complex backgrounds.
Description
Technical Field
The invention belongs to the field of target searching and tracking systems, and particularly relates to a method and a system for detecting a weak and small target for component uncertainty measurement.
Background
Weak target detection is a key technology in target search and tracking systems. A small target occupies only a few pixels, so it has a low pixel proportion, lacks structural information, and yields a low signal-to-noise ratio. In addition, long imaging distances and complex imaging environments make it difficult to distinguish small targets from background clutter and noise.
Many weak-target detection algorithms exist. Existing methods mainly approach the detection task from two aspects, background-clutter suppression and target-signal enhancement, among which single-frame processing methods have received the most attention.
From the viewpoint of image-information component analysis, methods such as the infrared patch-image (IPI) model, non-convex rank approximation minimization (NRAM), and Markov-random-field-guided noise modeling have been proposed to overcome the interference of complex backgrounds. These methods rely on the principle that the target signal, background clutter, and noise are different component signals, and they decompose and separate the components to extract the target signal. Their biggest problem, however, is a lack of robustness across scene types and a high false-alarm rate under complex backgrounds. They also often must be implemented through optimization procedures, which can take a long time.
Local contrast measurement (LCM) mainly applies the contrast mechanism of the human visual system and is an effective method in the related art. In recent years, many methods have been proposed to optimize LCM from different angles, with good results. Among them, the improved local contrast measure (ILCM) and the novel local contrast method (NLCM) refine local contrast measurement and improve clutter-suppression capability. The multiscale patch-based contrast measure (MPCM) measures contrast by the difference between the target region and the surrounding regions in different orientations, but its background-suppression capability is not strong. The relative local contrast measure (RLCM), multiscale tri-layer LCM (TLLCM), and weighted strengthened local contrast measure (WSLCM) fuse ratio and difference calculations in local contrast measurement, achieving a great improvement in suppressing the background and boosting the target signal.
Uncertainty arises during the process of target observation. Background fluctuations, signal noise, and the appearance of targets in different regions all change the uncertainty of the observed data along the spatial direction. One prior approach measures the complexity of the local gray-value distribution with a local-entropy operator to weight local contrast measurement and suppress the cloud-layer background, but it considers neither the relations between different component signals nor those among signals of the same component, and it struggles to cope with complex backgrounds.
Disclosure of Invention
The invention aims to overcome the defects of the prior art, namely the lack of robustness to data of different scene types and the high false-alarm rate under complex backgrounds.
In order to achieve the above object, the present invention proposes a method for detecting a small target for component uncertainty measurement, the method comprising:
step 1: constructing a three-layer nested sliding window structure, wherein the sliding window structure is formed by outwards expanding a central window to form a multi-stage window, and the multi-stage window consists of an innermost central layer, an outermost environment layer and a neighborhood layer sandwiched between the two layers; evaluating the consistency of the local signal components of the signals in the neighborhood layer by using the outermost environmental layer to obtain a local consistency graph, assigning component consistency confidence coefficient by using a local consistency evaluation result, measuring the uncertainty in the region, and drawing an uncertainty distribution map;
step 2: performing Gaussian template matched filtering in the three-layer nested window, and completing calculation of local energy weighting factors by utilizing residual errors to obtain an uncertainty graph with energy weighting;
step 3: and carrying out self-adaptive threshold segmentation on the uncertainty graph with energy weighting, removing non-target components, and completing target extraction.
As an improvement of the above method, the step 1 specifically includes:
step 1-1: constructing a three-layer nested sliding window structure, wherein the sliding window structure is formed by outwards expanding a central window to form an M×M multi-level window, and the multi-level window consists of an innermost central layer, an outermost environment layer and a neighborhood layer sandwiched between the two layers; wherein M is a positive integer;
step 1-2: evaluating the signal component consistency between the environment layer and the surrounding neighborhood region by using a local signal gray consistency evaluation standard to obtain an N×N local consistency graph; the evaluation criteria are:
wherein LC_ij denotes the consistency evaluation of the signal components between the pixel at coordinates (i, j) and its surrounding neighborhood region; G_ij denotes an N×N block region centered on coordinates (i, j), with M−N an even number; the remaining quantities are the gray value of the pixel at coordinates (i, j) and the gray mean of the k-th neighborhood block (k > 0), where k takes N×N−1 values;
step 1-3: assigning component consistency confidence coefficient through a local signal gray consistency evaluation result, measuring uncertainty in a region, and drawing an uncertainty distribution map;
the formula of the measured component uncertainty LUM(i, j) is as follows:
LUM(i, j) = U_ij − Entropy_min
wherein U_ij is the uncertainty measured at the pixel (i, j) position, computed from the component-consistency confidence values assigned to each block in the window structure centered on (i, j), and Entropy_min is the minimum entropy of that confidence assignment.
as an improvement of the above method, the step 2 specifically includes:
performing (2P+1)×(2P+1) Gaussian template matched filtering in the three-layer nested window, and calculating local energy weighting factors by utilizing the residual to obtain an uncertainty graph with energy weighting;
the gaussian template matching filtering process is expressed as:
wherein I(i+x, j+y) denotes the original image data at pixel (i+x, j+y); I_gaus(i, j) denotes the result of the Gaussian convolution of the original image at pixel (i, j); P denotes the center of the Gaussian template; and σ denotes an adjustment parameter with a value in the range 0–5;
after Gaussian template matching convolution, the residual I_res(i, j) between the original image and the Gaussian-convolved image is obtained:
I_res(i, j) = I(i, j) − I_gaus(i, j)
wherein I(i, j) denotes the original image data at pixel (i, j);
the local energy differences in the residual images are calculated as signal energy weights using the same sliding window as the component consistency evaluation process:
W(i, j) = max{0, I_res(i, j) − I_b(i, j)}
wherein I_b(i, j) is the mean residual over the neighborhood positions surrounding pixel (i, j) in the residual image I_res;
the uncertainty of the energy weighting, ELUM (i, j), is defined as:
ELUM(i,j)=W(i,j)*LUM(i,j)。
as an improvement of the above method, the step 3 specifically includes:
extracting a real target using a threshold operation;
the threshold th is defined as:
th=λ×Max+(1-λ)×Mean
wherein Max and Mean are the maximum and mean values, respectively, in the energy-weighted uncertainty map, and λ < 1.
The invention also provides a weak and small target detection system for component uncertainty measurement, which comprises:
the local uncertainty measurement module is used for constructing a three-layer nested sliding window structure, and is outwards expanded from a central window to form a multi-level window, and the multi-level window consists of an innermost central layer, an outermost environment layer and a neighborhood layer sandwiched between the two layers; the outermost environmental layer is used for evaluating the consistency of the local signal components of the signals in the neighborhood layer, a local consistency graph is finally obtained, component consistency confidence is assigned according to a local consistency evaluation result, uncertainty in a region is measured, and an uncertainty distribution map is drawn;
the uncertainty graph module with energy weighting is used for carrying out Gaussian template matched filtering in three layers of nested windows, and calculating local energy weighting factors by utilizing residual errors to obtain an uncertainty graph with energy weighting; and
and the target extraction module is used for carrying out self-adaptive threshold segmentation on the uncertainty graph with energy weighting, removing non-target components and completing target extraction.
As an improvement of the above system, the local uncertainty measurement module processes:
constructing a three-layer nested sliding window structure, wherein the sliding window structure is formed by outwards expanding a central window to form an M×M multi-level window, and the multi-level window consists of an innermost central layer, an outermost environment layer and a neighborhood layer sandwiched between the two layers; wherein M is a positive integer;
evaluating the signal component consistency between the environment layer and the surrounding neighborhood region by using a local signal gray consistency evaluation standard to obtain an N×N local consistency graph; the evaluation criteria are:
wherein LC_ij denotes the consistency evaluation of the signal components between the pixel at coordinates (i, j) and its surrounding neighborhood region; G_ij denotes an N×N block region centered on coordinates (i, j), with M−N an even number; the remaining quantities are the gray value of the pixel at coordinates (i, j) and the gray mean of the k-th neighborhood block (k > 0), where k takes N×N−1 values;
assigning component consistency confidence coefficient through a local signal gray consistency evaluation result, measuring uncertainty in a region, and drawing an uncertainty distribution map;
the formula of the measured component uncertainty LUM(i, j) is as follows:
LUM(i, j) = U_ij − Entropy_min
wherein U_ij is the uncertainty measured at the pixel (i, j) position, computed from the component-consistency confidence values assigned to each block in the window structure centered on (i, j), and Entropy_min is the minimum entropy of that confidence assignment.
as an improvement of the above system, the energy weighted uncertainty map module processes:
performing (2P+1)×(2P+1) Gaussian template matched filtering in the three-layer nested window, and calculating local energy weighting factors by utilizing the residual to obtain an uncertainty graph with energy weighting;
the gaussian template matching filtering process is expressed as:
wherein I(i+x, j+y) denotes the original image data at pixel (i+x, j+y); I_gaus(i, j) denotes the result of the Gaussian convolution of the original image at pixel (i, j); P denotes the center of the Gaussian template; and σ denotes an adjustment parameter with a value in the range 0–5;
after Gaussian template matching convolution, the residual I_res(i, j) between the original image and the Gaussian-convolved image is obtained:
I_res(i, j) = I(i, j) − I_gaus(i, j)
wherein I(i, j) denotes the original image data at pixel (i, j);
the local energy differences in the residual images are calculated as signal energy weights using the same sliding window as the component consistency evaluation process:
W(i, j) = max{0, I_res(i, j) − I_b(i, j)}
wherein I_b(i, j) is the mean residual over the neighborhood positions surrounding pixel (i, j) in the residual image I_res;
the uncertainty of the energy weighting, ELUM (i, j), is defined as:
ELUM(i,j)=W(i,j)*LUM(i,j)。
as an improvement of the above system, the target extraction module processes:
extracting a real target using a threshold operation;
the threshold th is defined as:
th=λ×Max+(1-λ)×Mean
wherein Max and Mean are the maximum and mean values, respectively, in the energy-weighted uncertainty map, and λ < 1.
Compared with the prior art, the invention has the advantages that:
1. By evaluating the component consistency of local-area signals, the local component uncertainty (LUM) can be calculated, a local component-uncertainty map is drawn, and complex backgrounds in the images are suppressed; an energy weighting factor is then introduced to reinforce the energy information contained in the target within the uncertainty distribution map, enhancing the target signal.
2. Verification results on real images show that the energy-weighted uncertainty (ELUM) achieves better small-target detection performance in complex backgrounds.
Drawings
FIG. 1 is a flow chart of a method for detecting a small target for component uncertainty measurement;
FIG. 2 is a block diagram of a method for detecting small targets for component uncertainty measurements;
FIG. 3 is a graph of the detection results for multiple graphs using various methods;
FIG. 4 is a graph showing ROC curves and run time for nine detection methods of the first sequence;
FIG. 5 is a graph showing ROC curves and run time for nine detection methods of the second sequence;
FIG. 6 is a graph showing ROC curves and run time for a third sequence of nine detection methods;
FIG. 7 is a graph showing ROC curves and run time for a fourth sequence of nine detection methods;
FIG. 8 is a graph showing ROC curves and run time for a fifth sequence of nine detection methods;
fig. 9 shows ROC curves and run time diagrams for a sixth sequence of nine detection methods.
Detailed Description
The technical scheme of the invention is described in detail below with reference to the accompanying drawings.
The invention provides a rapid single-frame detection method for weak and small targets that is robust to low signal-to-noise-ratio target signals and suited to complex scenes, namely a method for measuring local uncertainty based on signal-component consistency. The method is divided into two stages: component uncertainty measurement and signal enhancement by an energy-based weighting function. First, component-consistency confidences are assigned by analyzing the component-consistency condition of the local-area signal, the component uncertainty in the local area is measured by a mutation-entropy operator, and background clutter is suppressed. An energy weighting function is then designed to introduce the target energy information and enhance the target signal. Finally, the target is extracted by an adaptive threshold-segmentation algorithm. Experimental results show that the proposed method has better target-detection performance and a better ability to cope with complex backgrounds.
The method of the invention comprises the following steps. First, a three-layer nested sliding-window structure is constructed: expanding outward from a central window forms an M×M multi-level window consisting of an innermost central layer, an outermost environment layer, and a neighborhood layer sandwiched between the two. The outermost environment layer is used to evaluate the consistency of the local signal components in the neighborhood layer, yielding an N×N local consistency graph; component-consistency confidences are assigned from the local consistency evaluation result, the uncertainty in the region is measured, and an uncertainty distribution map is drawn. Then, (2P+1)×(2P+1) Gaussian template matched filtering is carried out in the three-layer nested window, and the residual is used to compute local energy weighting factors, giving an energy-weighted uncertainty graph. Finally, adaptive threshold segmentation removes non-target components and completes target extraction.
As shown in fig. 1 and 2, the method of the present invention specifically includes:
step 1: constructing a three-layer nested sliding window structure, evaluating the consistency of local signal components, assigning component consistency confidence coefficient, and drawing an uncertainty distribution map;
the invention provides an uncertainty measurement method suitable for target signal enhancement and based on local component consistency evaluation assignment, which is used for helping to distinguish targets from backgrounds. A confidence assignment function is first constructed based on the local component consistency assessment results. And then, measuring the uncertainty of the local components of the image through a sliding window according to the consistency of the gray values of the local areas of the pixels.
Similar to methods based on the human visual system, the component uncertainty (LUM) can be calculated through a sliding window. The size of the sliding window is determined by the central window, whose optimal size just wraps the target signal; the complete window structure is built from an M×M central window, and targets of different sizes and shapes can be handled effectively by adjusting the central-window size.
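As a concrete illustration, the three nested layers can be cut out of an image with NumPy. The layer widths used here (central window m, plus a fixed-width neighborhood ring and environment ring) are assumptions for this sketch, not the patent's exact geometry.

```python
import numpy as np

def nested_layers(img, i, j, m=3, ring=2):
    """Cut the three layers of the nested sliding-window structure
    centered at pixel (i, j): the m x m central layer, the
    neighborhood layer around it, and the outermost environment
    layer. Ring widths are illustrative assumptions."""
    half = m // 2
    center = img[i - half:i + half + 1, j - half:j + half + 1]
    n = half + ring                       # neighborhood extends `ring` pixels out
    neighborhood = img[i - n:i + n + 1, j - n:j + n + 1]
    e = n + ring                          # environment layer is the outermost band
    environment = img[i - e:i + e + 1, j - e:j + e + 1]
    return center, neighborhood, environment
```

Sliding this structure over every valid pixel position produces the per-pixel local regions on which the consistency evaluation below operates.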
Step 1-1: evaluation of consistency of Signal Components
Even in complex contexts, the target is still significantly different from the background from an energy perspective. Assuming that within a small window, the image background signal is relatively stable:
(1) When the window only wraps the background signal, the local gray value is stable and the consistency is higher;
(2) When the window wraps the target signal, the local gray value is relatively stable, but the energy in the window is obviously higher than that in the background area;
(3) When the window wraps the boundary between the target and the background, the local gray value has obvious gradient and low consistency.
The invention provides a local signal gray level consistency evaluation standard which is used for evaluating the consistency of signal components between a target area and a surrounding neighborhood area.
wherein LC_ij denotes the consistency evaluation of the signal components between the pixel at coordinates (i, j) and its surrounding neighborhood region; G_ij denotes an N×N block region centered on coordinates (i, j); the remaining quantities are the gray value of the pixel at coordinates (i, j) and the gray mean of the k-th neighborhood block (k > 0), where k takes N×N−1 values.
According to the formula: if the center position of G_ij is highly consistent with the neighborhood area, then LC_ij ≈ 1; if the energy at the center of G_ij is lower, LC_ij < 1; and if the energy at the center of G_ij is higher, LC_ij > 1.
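The evaluation can be sketched in Python. Since the patent's formula is reproduced only as an image here, the ratio form below, the center gray value divided by the mean of the N×N − 1 surrounding values, is an assumption chosen to reproduce the three cases just described (LC ≈ 1, LC < 1, LC > 1).

```python
import numpy as np

def local_consistency(block):
    """LC_ij for one N x N block G_ij (assumed ratio form: center
    value over the mean of the N*N - 1 surrounding values)."""
    n = block.shape[0]
    c = n // 2
    vals = block.astype(float)
    center = vals[c, c]
    mean_nb = (vals.sum() - center) / (n * n - 1)  # mean over the K = N*N - 1 neighbors
    return center / mean_nb if mean_nb != 0 else 0.0
```

A flat background block gives LC ≈ 1, a bright center (target energy) gives LC > 1, and a dark center gives LC < 1, matching the three cases above.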
Step 1-2: uncertainty measurement
In information theory, the uncertainty of a random variable is usually expressed by entropy, i.e. the expected amount of information.
wherein p(x) is the probability of occurrence of event x; in evidence theory, a confidence assignment function is often used to assign p(x).
As the window slides across the target region, the consistency-confidence assignment values are low because the local components have low consistency, so the region is highly ambiguous. The uncertainty of the consistency of the local spatial-domain signal components can be measured by a mutation-entropy operator, which is expressed as follows:
wherein U_ij is the uncertainty measured at the position of pixel (i, j), computed from the component-consistency confidence values assigned to each block in the (i, j)-centered window structure. The confidence assignment function may be expressed as:
and (3) processing the confidence coefficient of each block in the window structure in the formula (5) to ensure that the sum is 1, scaling the local uncertainty measurement result to the same scale, and ensuring that uncertainty results obtained by measuring different sliding window positions are comparable.
Just as information entropy obeys a maximum-entropy principle, the proposed mutation-entropy operator obeys a minimum-entropy theorem, and the minimum entropy satisfies the following formula:
in combination with the principle of minimum entropy, to suppress the background to a greater extent, the measured component uncertainty is modified into the following form:
LUM(i, j) = U_ij − Entropy_min (7)
When the consistency of the signal components in the local area is higher, the uncertainty is lower, the measured mutation-entropy value is smaller, and the modified uncertainty operator even approaches 0.
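Because equations (4)–(7) are reproduced only as images here, the sketch below assumes operator forms consistent with the surrounding text: normalized confidences, a mutation entropy U_ij = Σ p·log p that is minimal when the confidences are uniform, and Entropy_min = log(1/K), so that LUM ≥ 0 and LUM ≈ 0 for a highly consistent background.

```python
import numpy as np

def lum_from_confidences(confidences):
    """Component uncertainty LUM = U_ij - Entropy_min (eq. 7), with
    assumed operator forms: U_ij = sum(p * log p), which is minimal
    for uniform p, and Entropy_min = log(1/K)."""
    p = np.asarray(confidences, dtype=float)
    p = p / p.sum()                       # eq. (5): confidences sum to 1
    nz = p[p > 0]
    u = float(np.sum(nz * np.log(nz)))    # mutation-entropy uncertainty U_ij
    entropy_min = np.log(1.0 / p.size)    # attained when all blocks agree
    return u - entropy_min
```

Under these assumptions, uniform confidences (a fully consistent background) give LUM = 0, while a single dominant block gives the maximum LUM = log K, matching the suppression behavior described above.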
Step 2: local energy weighting factor
In the uncertainty-measurement process, the uncertainty of the consistency of local-area pixel components is calculated, but the mutation-entropy calculation does not involve the energy differences between regions. Background estimation is therefore performed by Gaussian convolution to obtain a residual signal, and an energy weighting factor is designed based on the local energy difference; on top of the uncertainty map, this enhances the energy information contained in the target and improves the target signal.
Considering that a small target signal has a two-dimensional Gaussian shape, the background signal can be smoothed by a Gaussian convolution-filtering operation. Gaussian template matched filtering of size (2P+1)×(2P+1) is performed in the three-layer nested window. GK is a Gaussian template with template center P, which can be expressed as:
wherein σ is an adjustment parameter with a value range of 0.6–1.
The gaussian template matching filtering process can be expressed as:
wherein I is the original image data and I_gaus is the result of the Gaussian convolution of the original image.
after Gaussian template matching convolution, residual errors of the original image and the image after Gaussian convolution can be obtained:
I_res(i, j) = I(i, j) − I_gaus(i, j) (9)
the energy of the target center is reduced due to the accumulation of the energy of the surrounding neighborhood pixels, the energy of the target surrounding region pixels is increased due to the accumulation of the target pixels, and the local energy difference in the residual image can be calculated as signal energy weighting by using a sliding window with the same component consistency evaluation process:
W(i, j) = max{0, I_res(i, j) − I_b(i, j)} (10)
wherein I_b(i, j) is the mean residual over the neighborhood positions surrounding pixel (i, j) in the residual image I_res.
After calculating the component uncertainty LUM and the weighting factor, the energy-weighted uncertainty ELUM may be defined as:
ELUM(i,j)=W(i,j)*LUM(i,j) (11)
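Equations (8)–(11) can be sketched end to end in NumPy. The explicit loop convolution, the edge padding used to keep the output the same size as the input, and the use of an equally sized window for the local residual mean I_b are implementation assumptions of this sketch.

```python
import numpy as np

def elum(img, lum_map, p=2, sigma=0.8):
    """Energy-weighted uncertainty ELUM = W * LUM (eqs. 8-11):
    (2p+1) x (2p+1) Gaussian matched filtering, residual against the
    original image, and a clipped local-mean-subtracted weight."""
    k = 2 * p + 1
    ax = np.arange(k) - p
    xx, yy = np.meshgrid(ax, ax)
    gk = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    gk /= gk.sum()                                  # normalized template GK
    img = img.astype(float)
    pad = np.pad(img, p, mode='edge')
    h, w = img.shape
    gaus = np.empty((h, w))
    for r in range(h):                              # eq. (8): matched filtering
        for c in range(w):
            gaus[r, c] = np.sum(pad[r:r + k, c:c + k] * gk)
    res = img - gaus                                # I_res, eq. (9)
    padr = np.pad(res, p, mode='edge')
    ib = np.empty((h, w))
    for r in range(h):                              # neighborhood residual mean I_b
        for c in range(w):
            win = padr[r:r + k, c:c + k]
            ib[r, c] = (win.sum() - res[r, c]) / (k * k - 1)
    w_map = np.maximum(0.0, res - ib)               # W, eq. (10)
    return w_map * lum_map                          # ELUM, eq. (11)
```

On a synthetic frame containing a single bright point, the weight peaks at the point, so the ELUM map concentrates the response on the target position.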
step 3: adaptive threshold segmentation
The real target is most prominent in the uncertainty distribution map while other disturbances are suppressed; after the energy weights are applied, the target signal is further emphasized and other regions fall close to 0. Thus, the real target is extracted using a threshold operation, with the threshold defined as:
th=λ×Max+(1-λ)×Mean (12)
wherein Max and Mean are the maximum and mean values in the ELUM map, respectively, and λ < 1.
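The adaptive threshold of eq. (12) is a one-liner; λ = 0.6 below is only an illustrative setting, since the patent specifies only λ < 1.

```python
import numpy as np

def segment_targets(elum_map, lam=0.6):
    """Adaptive threshold segmentation, eq. (12):
    th = lam * Max + (1 - lam) * Mean, with lam < 1.
    Returns a boolean target mask."""
    th = lam * elum_map.max() + (1 - lam) * elum_map.mean()
    return elum_map > th
```

Because background regions of the ELUM map sit near 0, the threshold lands well above residual clutter and keeps only the dominant target response.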
The performance of the ELUM method on detecting dim and small targets can be tested through a real data experiment:
A. evaluation index
To evaluate the performance of the proposed method, several common metrics are used; the signal-to-clutter ratio (SCR) gain and the background suppression factor (BSF) are two of them. SCR, GSCR (SCR gain), and BSF are defined as:
wherein G_t is the maximum energy of the target area, μ_b is the energy mean of the background signal, and σ_b is the standard deviation of the background signal. SCR_in and SCR_out are the SCR of the original image and of the uncertainty distribution map, respectively; σ_in and σ_out are the standard deviations of the original image and of the uncertainty distribution map, respectively.
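With the definitions above (the formula images are not reproduced here), the metrics can be written out in their conventional forms, which are assumed to match the patent's:

```python
import numpy as np

def scr(target_peak, background):
    """SCR = (G_t - mu_b) / sigma_b, from the definitions in the text."""
    return (target_peak - background.mean()) / background.std()

def gscr(scr_in, scr_out):
    """SCR gain between the input image and the output map."""
    return scr_out / scr_in

def bsf(sigma_in, sigma_out):
    """Background suppression factor: sigma_in / sigma_out."""
    return sigma_in / sigma_out
```

A detector that suppresses clutter raises σ_in/σ_out above 1, and one that enhances the target raises SCR_out above SCR_in, so larger GSCR and BSF both indicate better performance.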
The other two metrics, the true positive rate (TPR) and false positive rate (FPR), verify the final detection effect and are defined as:
B. experimental results and comparison
In the experiments, six sets of real infrared sequences containing different background types were tested with the proposed method. All data come from the data sets provided in "infrared image dim-small aircraft target detection and tracking in ground/air background" and "infrared dim-small moving target detection data set in complex background"; details of the sequences are given in Table I.
Table I: detailed information of experimental objectives
To ensure comprehensiveness and diversity, the method of the present invention was compared with the following eight existing representative algorithms: LMWIE, IPI, NRAM, MPCM, RLCM, ADMD, TLLCM and WSLCM. All experiments were performed in MATLAB on a device with a 2.8 GHz Intel(R) Xeon(R) W-10855M CPU and 32 GB RAM. The saliency maps and detection results are shown in FIG. 3. As shown in FIG. 3, ELUM effectively enhances small targets while suppressing the complex backgrounds, with few or no false-alarm targets in the five images.
In the comparative experiment, MPCM, RLCM and ADMD performed poorly: ADMD had a low detection rate, while MPCM and RLCM suppressed the background poorly and had difficulty handling complex backgrounds. TLLCM and WSLCM each failed to detect targets in three images; WSLCM was superior to TLLCM in terms of background suppression, but both have a high false positive rate. Although LMWIE can detect all targets, the background information is not completely filtered out, leaving some background contour information. By decomposing the target information from the background, IPI and NRAM also achieve a high target detection rate. However, the performance of IPI is unstable, varying greatly across the different sequence images. NRAM performed best among the eight comparison algorithms, but its false positive points are still significantly redundant compared with the proposed method.
The average SCRG and average BSF for the six experimental groups are shown in Table II.
Table II: SCRG and BSF values of the different algorithms
In Seq.3, the BSF of WSLCM is slightly larger, and its SCRG is also close to that of the method of the present invention. In Seq.6, the SCRG and BSF of NRAM are the largest, followed by the method of the present invention. In addition, LMWIE, IPI and WSLCM perform well. Overall, compared with the other methods, the method of the present invention achieves a larger SCRG, a larger BSF and stable performance over the six sets of sequence data.
To further demonstrate the detection performance of ELUM, the ROC curves and run times of the nine detection methods on the test set are shown in FIG. 4 and Table III.
Table III: run time of a frame in different algorithms (S)
In Seq.1 to Seq.5, ELUM has a higher TPR and a lower FPR than the other methods. In Seq.6, NRAM, IPI and the proposed method all perform well. Combined with Table III, ELUM is significantly more efficient than the other approaches whose TPR and FPR are similar to its own. In general, ELUM achieves the best performance under ground, ground/air and air backgrounds.
The invention provides the ELUM algorithm, which comprises two modules: the LUM and the energy weighting function. In the LUM, the idea of local component consistency discrimination is employed to suppress the complex background and enhance the target, while the energy weighting function provides an enhanced utilization of the target energy information. Experiments show that the method achieves good detection performance under complex backgrounds.
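As a heavily hedged sketch of the LUM module's idea (not the patented measure itself, whose exact consistency and confidence formulas are not fully reproduced in this text): each pixel's m×m nested window is split into n×n blocks, the block gray means are compared with the center pixel to form a confidence distribution over the neighborhood blocks, and an entropy gap serves as the uncertainty. The inverse-absolute-difference consistency used below is an illustrative stand-in:

```python
import numpy as np

def local_uncertainty(img, n=3, m=9):
    """Illustrative LUM-style map: LUM = U - Entropy_min over valid pixels."""
    h, w = img.shape
    r = m // 2
    ent = np.full((h, w), np.nan)
    for i in range(r, h - r):
        for j in range(r, w - r):
            win = img[i - r:i + r + 1, j - r:j + r + 1]
            # split the m x m window into (m/n)^2 blocks of size n x n
            blocks = win.reshape(m // n, n, m // n, n).swapaxes(1, 2).reshape(-1, n * n)
            means = blocks.mean(axis=1)
            means = np.delete(means, len(means) // 2)  # drop the center block
            # consistency with the center pixel (illustrative stand-in formula)
            c = 1.0 / (1.0 + np.abs(means - img[i, j]))
            p = c / c.sum()                              # confidence distribution
            ent[i, j] = -(p * np.log(p + 1e-12)).sum()   # entropy-style uncertainty U
    lum = ent - np.nanmin(ent)                           # subtract the minimum entropy
    return np.nan_to_num(lum)                            # border pixels set to 0
```

The sketch assumes m is odd and divisible by n; a production implementation would vectorize the double loop.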
Finally, it should be noted that the above embodiments are only for illustrating the technical solution of the present invention and are not limiting. Although the present invention has been described in detail with reference to the embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made thereto without departing from the spirit and scope of the present invention, which is intended to be covered by the appended claims.
Claims (8)
1. A method of detecting a small target for component uncertainty measurement, the method comprising:
step 1: constructing a three-layer nested sliding window structure, wherein the sliding window structure is formed by outwards expanding a central window to form a multi-stage window, and the multi-stage window consists of an innermost central layer, an outermost environment layer and a neighborhood layer sandwiched between the two layers; evaluating the consistency of the local signal components of the signals in the neighborhood layer by using the outermost environmental layer to obtain a local consistency graph, assigning component consistency confidence coefficient by using a local consistency evaluation result, measuring the uncertainty in the region, and drawing an uncertainty distribution map;
step 2: performing Gaussian template matched filtering in the three-layer nested window, and completing calculation of local energy weighting factors by utilizing residual errors to obtain an uncertainty graph with energy weighting;
step 3: performing self-adaptive threshold segmentation on the uncertainty graph with energy weighting, removing non-target components, and completing target extraction;
the step 2 specifically includes:
after Gaussian template matching convolution, the residual I_res(i, j) between the original image and the Gaussian-convolved image is obtained:
I_res(i, j) = I(i, j) - I_gaus(i, j)
wherein I(i, j) represents the original image data at the pixel (i, j); I_gaus(i, j) represents the result of Gaussian convolution of the original image at the pixel (i, j);
the local energy differences in the residual images are calculated as signal energy weights using the same sliding window as the component consistency evaluation process:
W(i, j) = max{0, I_res(i, j) - I_b(i, j)}
wherein I_b(i, j) is the residual mean of the neighborhood positions around the pixel (i, j) in the residual image I_res.
2. The method for detecting a weak and small target for component uncertainty measurement according to claim 1, wherein the step 1 specifically comprises:
step 1-1: constructing a three-layer nested sliding window structure, wherein the sliding window structure is formed by outwards expanding a central window to form an M×M multi-level window, and the multi-level window consists of an innermost central layer, an outermost environment layer and a neighborhood layer sandwiched between the two layers; wherein M is a positive integer representing the number of pixels of the length and width of the sliding window structure;
step 1-2: evaluating the signal component consistency between the environment layer and the surrounding neighborhood region by using a local signal gray consistency evaluation standard to obtain an N×N local consistency map; the evaluation standard is:
wherein LC_ij represents the consistency evaluation between the signal components of the pixel at coordinates (i, j) and the surrounding neighborhood region; G_ij represents an N×N block region centered on coordinates (i, j), M-N being an even number; g_ij represents the pixel at coordinates (i, j); ḡ_k represents the gray average value of the neighborhood block corresponding to the k-th number, k taking values 1 to N×N-1; N represents the number of pixels of the length and width of the local consistency map;
step 1-3: assigning component consistency confidence coefficient through a local signal gray consistency evaluation result, measuring uncertainty in a region, and drawing an uncertainty distribution map;
the formula of the measured component uncertainty LUM (i, j) is as follows:
LUM(i, j) = U_ij - Entropy_min
wherein U_ij is the uncertainty measured at the position of the pixel (i, j):
wherein the component consistency confidence values assigned to the blocks in the window structure centered on (i, j) are:
Entropy_min is the minimum entropy:
3. the method for detecting a weak target for component uncertainty measurement according to claim 2, wherein the step 2 specifically comprises:
performing (2p+1)×(2p+1) Gaussian template matched filtering in the three-layer nested window, and calculating the local energy weighting factor by using the residual to obtain the energy-weighted uncertainty map;
the gaussian template matching filtering process is expressed as:
I_gaus(i, j) = Σ_{x=-p}^{p} Σ_{y=-p}^{p} G(x, y)·I(i+x, j+y), with G(x, y) = (1/(2πσ²))·exp(-(x²+y²)/(2σ²))
wherein I(i+x, j+y) represents the original image data at the pixel (i+x, j+y); I_gaus(i, j) represents the result of Gaussian convolution of the original image at the pixel (i, j); p represents the center of the Gaussian template; σ represents an adjustment parameter taking a value from 0 to 5;
after Gaussian template matching convolution, the residual I_res(i, j) between the original image and the Gaussian-convolved image is obtained:
I_res(i, j) = I(i, j) - I_gaus(i, j)
wherein I(i, j) represents the original image data at the pixel (i, j);
the local energy differences in the residual images are calculated as signal energy weights using the same sliding window as the component consistency evaluation process:
W(i, j) = max{0, I_res(i, j) - I_b(i, j)}
wherein I_b(i, j) is the residual mean of the neighborhood positions around the pixel (i, j) in the residual image I_res;
the uncertainty of the energy weighting, ELUM (i, j), is defined as:
ELUM(i,j)=W(i,j)*LUM(i,j)。
4. the method for detecting a small target for component uncertainty measurement according to claim 3, wherein the step 3 specifically comprises:
extracting a real target using a threshold operation;
the threshold th is defined as:
th=λ×Max+(1-λ)×Mean
wherein Max and Mean are the maximum value and the mean value, respectively, in the energy-weighted uncertainty map; λ < 1.
5. A weak and small target detection system for component uncertainty measurement, the system comprising:
the local uncertainty measurement module is used for constructing a three-layer nested sliding window structure, and is outwards expanded from a central window to form a multi-level window, and the multi-level window consists of an innermost central layer, an outermost environment layer and a neighborhood layer sandwiched between the two layers; the outermost environmental layer is used for evaluating the consistency of the local signal components of the signals in the neighborhood layer, a local consistency graph is finally obtained, component consistency confidence is assigned according to a local consistency evaluation result, uncertainty in a region is measured, and an uncertainty distribution map is drawn;
the uncertainty graph module with energy weighting is used for carrying out Gaussian template matched filtering in three layers of nested windows, and calculating local energy weighting factors by utilizing residual errors to obtain an uncertainty graph with energy weighting;
the target extraction module is used for carrying out self-adaptive threshold segmentation on the uncertainty graph with energy weighting, removing non-target components and completing target extraction;
the method for the uncertainty map module with energy weighting specifically comprises the following steps:
after Gaussian template matching convolution, the residual I_res(i, j) between the original image and the Gaussian-convolved image is obtained:
I_res(i, j) = I(i, j) - I_gaus(i, j)
wherein I(i, j) represents the original image data at the pixel (i, j); I_gaus(i, j) represents the result of Gaussian convolution of the original image at the pixel (i, j);
the local energy differences in the residual images are calculated as signal energy weights using the same sliding window as the component consistency evaluation process:
W(i, j) = max{0, I_res(i, j) - I_b(i, j)}
wherein I_b(i, j) is the residual mean of the neighborhood positions around the pixel (i, j) in the residual image I_res.
6. The weak target detection system for component uncertainty measurement of claim 5, wherein said local uncertainty measurement module processes:
constructing a three-layer nested sliding window structure, wherein the sliding window structure is formed by outwards expanding a central window to form an M×M multi-level window, and the multi-level window consists of an innermost central layer, an outermost environment layer and a neighborhood layer sandwiched between the two layers; wherein M is a positive integer representing the number of pixels of the length and width of the sliding window structure;
evaluating the signal component consistency between the environment layer and the surrounding neighborhood region by using a local signal gray consistency evaluation standard to obtain an N×N local consistency map; the evaluation standard is:
wherein LC_ij represents the consistency evaluation between the signal components of the pixel at coordinates (i, j) and the surrounding neighborhood region; G_ij represents an N×N block region centered on coordinates (i, j), M-N being an even number; g_ij represents the pixel at coordinates (i, j); ḡ_k represents the gray average value of the neighborhood block corresponding to the k-th number, k taking values 1 to N×N-1; N represents the number of pixels of the length and width of the local consistency map;
assigning component consistency confidence coefficient through a local signal gray consistency evaluation result, measuring uncertainty in a region, and drawing an uncertainty distribution map;
the formula of the measured component uncertainty LUM (i, j) is as follows:
LUM(i, j) = U_ij - Entropy_min
wherein U_ij is the uncertainty measured at the position of the pixel (i, j):
wherein the component consistency confidence values assigned to each block in the window structure centered on (i, j) are:
Entropy_min is the minimum entropy:
7. The weak and small target detection system for component uncertainty measurement of claim 6, wherein the energy-weighted uncertainty map module processes:
performing (2p+1)×(2p+1) Gaussian template matched filtering in the three-layer nested window, and calculating the local energy weighting factor by using the residual to obtain the energy-weighted uncertainty map;
the gaussian template matching filtering process is expressed as:
I_gaus(i, j) = Σ_{x=-p}^{p} Σ_{y=-p}^{p} G(x, y)·I(i+x, j+y), with G(x, y) = (1/(2πσ²))·exp(-(x²+y²)/(2σ²))
wherein I(i+x, j+y) represents the original image data at the pixel (i+x, j+y); I_gaus(i, j) represents the result of Gaussian convolution of the original image at the pixel (i, j); p represents the center of the Gaussian template; σ represents an adjustment parameter taking a value from 0 to 5;
after Gaussian template matching convolution, the residual I_res(i, j) between the original image and the Gaussian-convolved image is obtained:
I_res(i, j) = I(i, j) - I_gaus(i, j)
wherein I(i, j) represents the original image data at the pixel (i, j);
the local energy differences in the residual images are calculated as signal energy weights using the same sliding window as the component consistency evaluation process:
W(i, j) = max{0, I_res(i, j) - I_b(i, j)}
wherein I_b(i, j) is the residual mean of the neighborhood positions around the pixel (i, j) in the residual image I_res;
the uncertainty of the energy weighting, ELUM (i, j), is defined as:
ELUM(i,j)=W(i,j)*LUM(i,j)。
8. the weak and small target detection system for component uncertainty measurement of claim 7, wherein the target extraction module processes:
extracting a real target using a threshold operation;
the threshold th is defined as:
th=λ×Max+(1-λ)×Mean
wherein Max and Mean are the maximum value and the mean value, respectively, in the energy-weighted uncertainty map; λ < 1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211031297.9A CN115359258B (en) | 2022-08-26 | 2022-08-26 | Weak and small target detection method and system for component uncertainty measurement |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115359258A CN115359258A (en) | 2022-11-18 |
CN115359258B true CN115359258B (en) | 2023-04-28 |
Family
ID=84003703
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211031297.9A Active CN115359258B (en) | 2022-08-26 | 2022-08-26 | Weak and small target detection method and system for component uncertainty measurement |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115359258B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115908807B (en) * | 2022-11-24 | 2023-06-23 | 中国科学院国家空间科学中心 | Method, system, computer equipment and medium for fast detecting weak and small target |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB0915200D0 (en) * | 2009-09-01 | 2009-10-07 | Ucl Business Plc | Method for re-localising sites in images |
CN108010047A (en) * | 2017-11-23 | 2018-05-08 | 南京理工大学 | A kind of moving target detecting method of combination unanimity of samples and local binary patterns |
CN111784738B (en) * | 2020-06-19 | 2023-10-31 | 中国科学院国家空间科学中心 | Extremely dark and weak moving target association detection method based on fluctuation analysis |
CN113516187A (en) * | 2021-07-13 | 2021-10-19 | 周口师范学院 | Infrared weak and small target detection algorithm adopting local characteristic contrast |
CN113436217A (en) * | 2021-07-23 | 2021-09-24 | 山东大学 | Unmanned vehicle environment detection method based on deep learning |
CN114332489B (en) * | 2022-03-15 | 2022-06-24 | 江西财经大学 | Image salient target detection method and system based on uncertainty perception |
Non-Patent Citations (1)
Title |
---|
Liu Depeng; Li Zhengzhou; Zeng Jingjie; Xiong Weiqi; Qi Bo. Infrared dim and small target detection algorithm based on multi-scale local contrast and multi-scale gradient consistency. Acta Armamentarii, 2018, (08), full text. *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109767439B (en) | Target detection method for multi-scale difference and bilateral filtering of self-adaptive window | |
CN111985329A (en) | Remote sensing image information extraction method based on FCN-8s and improved Canny edge detection | |
CN103761731A (en) | Small infrared aerial target detection method based on non-downsampling contourlet transformation | |
Yang et al. | Multiscale facet model for infrared small target detection | |
CN110400294B (en) | Infrared target detection system and detection method | |
CN110706208A (en) | Infrared dim target detection method based on tensor mean square minimum error | |
CN115359258B (en) | Weak and small target detection method and system for component uncertainty measurement | |
Li et al. | A small target detection algorithm in infrared image by combining multi-response fusion and local contrast enhancement | |
CN113822352A (en) | Infrared dim target detection method based on multi-feature fusion | |
CN111091111A (en) | Vehicle bottom dangerous target identification method | |
CN113362293A (en) | SAR image ship target rapid detection method based on significance | |
CN113205494B (en) | Infrared small target detection method and system based on adaptive scale image block weighting difference measurement | |
Gupta et al. | Infrared small target detection enhancement using a lightweight convolutional neural network | |
CN112395944A (en) | Multi-scale ratio difference combined contrast infrared small target detection method based on weighting | |
CN112669332A (en) | Method for judging sea and sky conditions and detecting infrared target based on bidirectional local maximum and peak local singularity | |
CN112598711A (en) | Hyperspectral target tracking method based on joint spectrum dimensionality reduction and feature fusion | |
CN104715458B (en) | A kind of bimodulus non-local mean filtering method | |
Zhao et al. | Infrared small target detection using local component uncertainty measure with consistency assessment | |
CN114005018B (en) | Small calculation force driven multi-target tracking method for unmanned surface vehicle | |
CN107273801B (en) | Method for detecting abnormal points by video multi-target tracking | |
CN113516187A (en) | Infrared weak and small target detection algorithm adopting local characteristic contrast | |
CN114429593A (en) | Infrared small target detection method based on rapid guided filtering and application thereof | |
CN108573236B (en) | Method for detecting infrared weak and small target under cloud background based on discrete fraction Brown random field | |
Zhi et al. | Ship detection in harbor area in SAR images based on constructing an accurate sea-clutter model | |
CN108280453B (en) | Low-power-consumption rapid image target detection method based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||