CN115359258A - Weak and small target detection method and system for component uncertainty measurement - Google Patents
- Publication number
- CN115359258A CN115359258A CN202211031297.9A CN202211031297A CN115359258A CN 115359258 A CN115359258 A CN 115359258A CN 202211031297 A CN202211031297 A CN 202211031297A CN 115359258 A CN115359258 A CN 115359258A
- Authority
- CN
- China
- Prior art keywords
- uncertainty
- consistency
- local
- component
- layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06V10/273—Segmentation of patterns in the image field; removing elements interfering with the pattern to be recognised
- G06V10/28—Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
- G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
- G06V2201/07—Target detection
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The invention provides a method and a system for measuring local uncertainty based on a component consistency principle, used for detecting small targets submerged in a complex background. The target and the surrounding background belong to different component signals, and the spatial variation of these components causes uncertainty in the observation. In the method, a multilayer nested sliding window is constructed; the local component uncertainty (LUM) is calculated by evaluating the component consistency of the local region signal, a local component uncertainty map is drawn, and the complex background in the image is suppressed. An energy weighting factor is then introduced to strengthen the energy information contained in the target within the uncertainty distribution map, realizing enhancement of the target signal. Verification results on real images show that the invention achieves better small-target detection performance under complex backgrounds.
Description
Technical Field
The invention belongs to the field of target searching and tracking systems, and particularly relates to a method and a system for detecting a small target by component uncertainty measurement.
Background
Weak and small target detection is a key technology in target searching and tracking systems. A small target consists of only a few pixels, so it occupies little of the image, lacks structural information, and has a low signal-to-noise ratio. In addition, because of the long imaging distance and the complex imaging environment, small targets are difficult to distinguish from background clutter and noise.
At present, many weak and small target detection algorithms exist. Existing methods mainly approach the detection task from two aspects, background clutter suppression and target signal enhancement, among which single-frame processing methods have received the most attention.
From the viewpoint of image information component analysis, methods such as the infrared patch-image (IPI) model, non-convex rank approximation minimization (NRAM), and Markov-random-field-constrained noise models have been proposed to overcome the interference of complex backgrounds. These methods rely on the principle that differences exist among target signals, background clutter, and noise signals, and they separate the different component signals to extract the target signal. However, the biggest problem with such methods is that they lack robustness across different scene types and have a high false alarm rate in complex backgrounds. Moreover, they often must be implemented through optimization procedures, which is time consuming.
Local Contrast Measurement (LCM) mainly exploits the contrast mechanism of the human visual system and is an effective approach in this field. In recent years, many methods have been proposed to optimize LCM from different angles, with good results. Among them, the Improved Local Contrast Measure (ILCM) and the Novel Local Contrast Method (NLCM) improve clutter suppression capability. The multiscale patch-based contrast measure (MPCM) measures contrast by the difference between the target region and surrounding regions in different directions, but its background suppression capability is not strong. The Relative Local Contrast Measure (RLCM), multiscale tri-layer LCM (TLLCM), and Weighted Strengthened Local Contrast Measure (WSLCM) integrate ratio and difference calculations in local contrast measurement and have made great progress in suppressing background and enhancing target signals.
Uncertainty arises in the process of target observation. Background fluctuation, signal noise, and the appearance of targets in different areas all cause uncertainty to vary along the spatial direction of the observed data. A local entropy operator has been adopted to measure the complexity of the gray-value distribution of a local area and to provide a weight for local contrast measurement so as to suppress cloud background; however, it considers neither the relationship between different component signals nor that between same-component signals, so it copes poorly with complex backgrounds.
Disclosure of Invention
The invention aims to overcome the defects that the prior art lacks robustness on different scene types and has a high false alarm rate under complex backgrounds.
In order to achieve the above object, the present invention proposes a weak small target detection method for component uncertainty measurement, the method comprising:
step 1: constructing a three-layer nested sliding window structure, in which a central window extends outwards to form a multi-level window consisting of three parts: an innermost central layer, an outermost environment layer, and a neighborhood layer sandwiched between the two; evaluating the consistency of the local signal components of the signals in the neighborhood layer by utilizing the outermost environment layer to obtain a local consistency map; assigning component consistency confidence coefficients according to the local consistency evaluation result, measuring the uncertainty in the region, and drawing an uncertainty distribution map;
step 2: performing Gaussian template matching filtering in the three-layer nested window, and completing calculation of local energy weighting factors by using residual errors to obtain an uncertainty map with energy weighting;
and step 3: and carrying out self-adaptive threshold segmentation on the uncertainty map with the energy weighting, and eliminating non-target components to finish target extraction.
As an improvement of the above method, the step 1 specifically includes:
step 1-1: constructing a three-layer nested sliding window structure, wherein a central window expands outwards to form an M × M multi-level window consisting of three parts: an innermost central layer, an outermost environment layer, and a neighborhood layer sandwiched between the two; wherein M is a positive integer;
step 1-2: evaluating the consistency of signal components between the environment layer and the surrounding neighborhood region by using a local signal gray consistency evaluation standard to obtain an N x N local consistency graph; the evaluation criteria are:
wherein LC_ij represents the consistency evaluation between the pixel at coordinate (i, j) and the signal components of the surrounding neighborhood region; G_ij represents the N × N block region centered on coordinate (i, j), with M − N an even number; g_ij represents the pixel at coordinate (i, j); ḡ_k (k > 0) represents the gray mean of the neighborhood block with number k, where K takes the value N × N − 1;
step 1-3: assigning component consistency confidence coefficient and measuring uncertainty in the region according to the local signal gray consistency evaluation result, and drawing an uncertainty distribution map;
the component uncertainty LUM(i, j) is measured as follows:
LUM(i,j) = U_ij - Entropy_min
wherein U_ij is the uncertainty measured at the pixel (i, j) position:
and the component consistency confidence values assigned to the blocks in the window structure centered at (i, j) are given by:
Entropy_min is the minimum entropy:
as an improvement of the above method, the step 2 specifically includes:
performing (2p+1) × (2p+1) Gaussian template matched filtering in the three-layer nested window, and completing the calculation of the local energy weighting factors using the residual, to obtain an energy-weighted uncertainty map;
the Gaussian template matching filtering process is represented as:
wherein I(i+x, j+y) represents the original image data at pixel (i+x, j+y); I_gaus(i, j) represents the Gaussian convolution result of the original image at pixel (i, j); p represents the center of the Gaussian template; σ represents an adjusting parameter, with a value in the range 0–5;
after the Gaussian template matching convolution, the residual I_res(i, j) between the original image and the Gaussian-convolved image is obtained:
I_res(i,j) = I(i,j) - I_gaus(i,j)
wherein I(i, j) represents the original image data at pixel (i, j);
calculating the local energy difference in the residual image as the signal energy weight, using a sliding window identical to that of the component consistency evaluation process:
W(i,j) = max{0, I_res(i,j) - I_b(i,j)}
wherein I_b(i, j) is the residual mean of the neighborhood positions around pixel (i, j) in the residual image I_res;
the energy weighted uncertainty ELUM (i, j) is defined as:
ELUM(i,j)=W(i,j)*LUM(i,j)。
as an improvement of the above method, the step 3 specifically includes:
extracting a real target using a threshold operation;
the threshold th is defined as:
th=λ×Max+(1-λ)×Mean
wherein Max and Mean are the maximum and Mean values, respectively, in the energy weighted uncertainty plot; λ <1.
The invention also provides a system for detecting a small target for component uncertainty measurement, the system comprising:
the local uncertainty measuring module is used for constructing a three-layer nested sliding window structure, in which a central window expands outwards to form a multi-level window consisting of three parts: an innermost central layer, an outermost environment layer, and a neighborhood layer sandwiched between the two; the outermost environment layer is used for evaluating the consistency of the local signal components of the signals in the neighborhood layer to obtain a local consistency map; component consistency confidence coefficients are assigned according to the local consistency evaluation result, the uncertainty in the region is measured, and an uncertainty distribution map is drawn;
the uncertainty map module with energy weighting is used for performing Gaussian template matching filtering in the three-layer nested window, and completing calculation of local energy weighting factors by using residual errors to obtain an uncertainty map with energy weighting; and
and the target extraction module is used for carrying out self-adaptive threshold segmentation on the uncertainty map with the energy weighting, eliminating non-target components and finishing target extraction.
As an improvement of the above system, the local uncertainty measuring module processes:
constructing a three-layer nested sliding window structure, wherein a central window expands outwards to form an M × M multi-level window consisting of three parts: an innermost central layer, an outermost environment layer, and a neighborhood layer sandwiched between the two; wherein M is a positive integer;
evaluating the consistency of signal components between the environment layer and the surrounding neighborhood region by using a local signal gray consistency evaluation standard to obtain an N x N local consistency graph; the evaluation criteria are:
wherein LC_ij represents the consistency evaluation between the pixel at coordinate (i, j) and the signal components of the surrounding neighborhood region; G_ij represents the N × N block region centered on coordinate (i, j), with M − N an even number; g_ij represents the pixel at coordinate (i, j); ḡ_k (k > 0) represents the gray mean of the neighborhood block with number k, where K takes the value N × N − 1;
assigning component consistency confidence coefficients according to the local signal gray consistency evaluation result, measuring the uncertainty in the region, and drawing an uncertainty distribution map;
the component uncertainty LUM(i, j) is measured as follows:
LUM(i,j) = U_ij - Entropy_min
wherein U_ij is the uncertainty measured at the pixel (i, j) position:
and the component consistency confidence values assigned to the blocks in the window structure centered at (i, j) are given by:
Entropy_min is the minimum entropy:
as an improvement of the above system, the energy-weighted uncertainty map module processes:
performing (2p+1) × (2p+1) Gaussian template matched filtering in the three-layer nested window, and completing the calculation of the local energy weighting factors using the residual, to obtain an energy-weighted uncertainty map;
the Gaussian template matching filtering process is represented as:
wherein I(i+x, j+y) represents the original image data at pixel (i+x, j+y); I_gaus(i, j) represents the Gaussian convolution result of the original image at pixel (i, j); p represents the center of the Gaussian template; σ represents an adjusting parameter, with a value in the range 0–5;
after the Gaussian template matching convolution, the residual I_res(i, j) between the original image and the Gaussian-convolved image is obtained:
I_res(i,j) = I(i,j) - I_gaus(i,j)
wherein I(i, j) represents the original image data at pixel (i, j);
calculating the local energy difference in the residual image as the signal energy weight, using a sliding window identical to that of the component consistency evaluation process:
W(i,j) = max{0, I_res(i,j) - I_b(i,j)}
wherein I_b(i, j) is the residual mean of the neighborhood positions around pixel (i, j) in the residual image I_res;
the energy weighted uncertainty ELUM (i, j) is defined as:
ELUM(i,j)=W(i,j)*LUM(i,j)。
as an improvement of the above system, the target extraction module processes:
extracting a real target using a threshold operation;
the threshold th is defined as:
th=λ×Max+(1-λ)×Mean
wherein Max and Mean are respectively the maximum value and Mean value in the uncertainty diagram with energy weighting; λ <1.
Compared with the prior art, the invention has the advantages that:
1. by evaluating the component consistency condition of the local area signals, the local component uncertainty (LUM) can be calculated, a local component uncertainty map is drawn, and a complex background in the image is suppressed; and then introducing an energy weighting factor, and strengthening energy information contained in the target in the uncertainty distribution map to realize the enhancement of the target signal.
2. The real image verification result shows that the energy weighted uncertainty (ELUM) can realize better small target detection performance under a complex background.
Drawings
FIG. 1 is a flow chart of a method for detecting small targets for component uncertainty measurement;
FIG. 2 is a block diagram of a method for detecting weak small objects for component uncertainty measurement;
FIG. 3 is a graph showing the results of multiple methods on multiple images;
FIG. 4 is a graph showing ROC curves and operating times for the nine detection methods in the first sequence diagram;
FIG. 5 is a graph showing ROC curves and operating times for the nine detection methods in the second sequence diagram;
FIG. 6 is a graph showing ROC curves and operating times for the nine detection methods in the third sequence diagram;
FIG. 7 is a graph showing the ROC curves and the operating times of the nine detection methods in the fourth sequence diagram;
FIG. 8 is a graph showing the ROC curves and the operating times of the nine detection methods in the fifth sequence diagram;
FIG. 9 is a diagram showing ROC curves and operating times of the nine detection methods in the sixth sequence diagram.
Detailed Description
The technical solution of the present invention will be described in detail below with reference to the accompanying drawings.
The invention provides a fast single-frame weak and small target detection method that is robust to target signals with low signal-to-noise ratios and suitable for complex scenes, namely a local uncertainty measurement method based on signal component consistency. The method is divided into two stages: component uncertainty measurement and enhancement of the signal by an energy-based weighting function. First, component consistency confidences are assigned by analyzing the consistency of the signal components in a local area, and the component uncertainty in the local area is measured by a variation entropy operator to suppress background clutter. An energy weighting function is then designed to incorporate the target energy information and enhance the target signal. Finally, the target is extracted by an adaptive threshold segmentation algorithm. Experimental results show that the method has better target detection performance and a better capability of coping with complex backgrounds.
The method of the invention comprises the following steps. First, a three-layer nested sliding window structure is constructed by expanding a central window outwards to form an M × M multi-level window, which consists of three parts: an innermost central layer, an outermost environment layer, and a neighborhood layer sandwiched between the two. The outermost environment layer is used to evaluate the consistency of the local signal components of the signals in the neighborhood layer, finally obtaining an N × N local consistency map; component consistency confidences are assigned according to the local consistency evaluation result, the uncertainty in the region is measured, and an uncertainty distribution map is drawn. Then, (2P+1) × (2P+1) Gaussian template matched filtering is carried out in the three-layer nested window, and the calculation of the local energy weighting factors is completed using the residual, obtaining an energy-weighted uncertainty map. Finally, adaptive threshold segmentation is performed and non-target components are eliminated to finish target extraction.
As shown in fig. 1 and fig. 2, the method of the present invention specifically includes:
step 1: constructing a three-layer nested sliding window structure, carrying out consistency evaluation on local signal components, assigning component consistency confidence coefficients, and drawing an uncertainty distribution map;
the invention provides an uncertainty measurement method based on local component consistency evaluation assignment and suitable for target signal enhancement, which helps to distinguish a target from a background. A confidence assignment function is first constructed based on the local component consistency evaluation results. And then, measuring the uncertainty of the local components of the image through a sliding window according to the consistency condition of the gray value of the local area of the pixel.
Similar to methods based on human visual system concepts, the component uncertainty (LUM) can be calculated with a sliding window. The size of the sliding window is determined by the central window, whose optimal size should just wrap the target signal; the complete window structure is built from an M × M central window, and targets of different sizes and shapes can be handled effectively by adjusting the size of the central window.
Step 1-1: evaluation of signal component uniformity
Even in complex backgrounds, there is still a significant difference in the energy of the target from the background. Assuming that within a small window, the image background signal is relatively stable:
(1) When the window only wraps the background signal, the local gray value is stable and the consistency is high;
(2) When the window wraps the target signal, the local gray value is relatively stable, but the energy in the window is obviously higher than that in the background area;
(3) When the window wraps the boundary between the target and the background, the local gray value has obvious gradient and is low in consistency.
The invention provides a local signal gray level consistency evaluation standard which is used for evaluating the consistency of signal components between a target area and a surrounding neighborhood area.
wherein LC_ij represents the consistency evaluation between the pixel at coordinate (i, j) and the signal components of the surrounding neighborhood region; G_ij represents the N × N block region centered on coordinate (i, j); g_ij represents the pixel at coordinate (i, j); ḡ_k (k > 0) represents the gray mean of the neighborhood block with number k, and K takes the value N × N − 1.
According to the formula, if the center of region G_ij is highly consistent with its neighborhood, LC_ij is close to 1; if the energy at the center of the region is lower than its neighborhood, LC_ij < 1; and if the energy at the center of the region is higher, LC_ij > 1.
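The three-case behavior above can be sketched in code. Since the formula itself is given only as an equation image in the original, the sketch below is a hypothetical reconstruction that assumes LC is the ratio of the center block mean to the mean of the eight surrounding neighborhood blocks (K = 8 for N = 3), which reproduces the stated LC = 1 / LC < 1 / LC > 1 behavior:

```python
import numpy as np

def local_consistency(image, i, j, n=3):
    """Hypothetical sketch of the local gray consistency evaluation LC_ij.

    The patent's exact formula is not reproduced in this text, so this
    ASSUMES LC_ij is the ratio of the n x n center block mean to the mean
    of the 8 surrounding n x n neighborhood blocks, matching the stated
    behavior: LC = 1 for a uniform region, LC < 1 (> 1) when the center
    energy is lower (higher) than its neighborhood.
    """
    h = n // 2
    center = image[i - h:i + h + 1, j - h:j + h + 1]
    # The 8 neighborhood blocks tile the ring around the center block.
    offsets = [(di * n, dj * n) for di in (-1, 0, 1) for dj in (-1, 0, 1)
               if (di, dj) != (0, 0)]
    neigh_means = [image[i + di - h:i + di + h + 1,
                         j + dj - h:j + dj + h + 1].mean()
                   for di, dj in offsets]
    return center.mean() / np.mean(neigh_means)
```

On a uniform region this returns exactly 1, and a bright center pixel pushes it above 1, consistent with the three cases described.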
Step 1-2: uncertainty measurement
In information theory, the uncertainty of a random variable is usually expressed by entropy, that is, the expectation of its information content.
The invention provides a local component uncertainty method measured by a variation entropy operator, which adopts the local consistency as the confidence assignment function.
When the window slides across the target region, the local component consistency is low, so the consistency confidence assignment function value is low and the component uncertainty in the region is high. The uncertainty of the consistency of the local spatial-domain signal components can be measured through a variation entropy operator, expressed as follows:
wherein U_ij is the uncertainty measured at the pixel (i, j) position, and the component consistency confidence values are assigned to each block in the window structure centered at (i, j). The confidence assignment function may be expressed as:
In formula (5), the confidences of all blocks in the window structure are normalized so that their sum is 1; this scales the local uncertainty measurements to the same range and ensures that the uncertainty results measured at different sliding window positions are comparable.
Just as information entropy obeys the maximum entropy principle, the proposed variation entropy operator obeys a minimum entropy theorem, and the minimum entropy satisfies the following equation:
Combining the minimum entropy principle, and in order to suppress the background to a greater extent, the measured component uncertainty is modified into the following form:
LUM(i,j) = U_ij - Entropy_min (7)
When the consistency of the signal components in a local area is higher, the uncertainty is lower and the measured value of the variation entropy operator is smaller; after the modification, the uncertainty operator is close to 0.
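Because the confidence assignment function and the Entropy_min expression survive only as equation images, any sketch must substitute assumptions. The one below takes the confidences as normalized absolute block-to-center gray deviations (so they sum to 1, as formula (5) requires) and accepts Entropy_min as a caller-supplied baseline per formula (7); it reproduces the qualitative behavior described above, with consistent regions scoring near 0 and target-like regions scoring high:

```python
import numpy as np

def variation_entropy_lum(block_deviations, entropy_min=0.0):
    """Hedged sketch of the variation-entropy uncertainty measure LUM.

    ASSUMPTIONS (formulas (4)-(6) are not reproduced in the text): the
    confidence of block k is its normalized absolute gray-mean deviation
    from the window center, and entropy_min is a caller-supplied
    baseline subtracted as in Eq. (7).
    """
    d = np.abs(np.asarray(block_deviations, dtype=float))
    total = d.sum()
    if total == 0.0:              # perfectly consistent local region:
        return 0.0                # no variation, so no uncertainty
    c = d / total                 # confidences sum to 1, per Eq. (5)
    c = c[c > 0]                  # 0 * log(0) is taken as 0
    u = -np.sum(c * np.log(c))    # entropy of the variation distribution
    return max(0.0, u - entropy_min)   # Eq. (7): LUM = U - Entropy_min
```

A flat background gives 0, while a target centered in the window (all eight neighborhood blocks deviating equally from the bright center) gives the maximal value log 8 ≈ 2.08.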
Step 2: local energy weighting factor
The uncertainty measurement process computes the uncertainty of the component consistency of pixels in a local area, but the variation entropy operation does not involve the energy difference between regions. Therefore, background estimation is performed through Gaussian convolution to obtain a residual signal, and an energy weighting factor is designed based on the local energy difference; on top of the uncertainty map, this strengthens the energy information contained in the target and improves the target signal.
Considering that a small target signal has a two-dimensional Gaussian shape, the background signal can be smoothed by a Gaussian convolution filtering operation. Gaussian template matched filtering with a template of size (2P+1) × (2P+1) is performed in the three-layer nested window. GK is a Gaussian template with template center p, which can be expressed as:
wherein σ represents an adjusting parameter, with values in the range 0.6–1.
The gaussian template matching filtering process can be expressed as:
wherein I is the original image data and I_gaus is the result of the Gaussian convolution of the original image,
after the gaussian template matching convolution, the residual error between the original image and the image after gaussian convolution can be obtained:
I_res(i,j) = I(i,j) - I_gaus(i,j) (9)
In the residual, the target center energy is reduced by the accumulation of surrounding neighborhood pixel energy, while the pixel energy in the area surrounding the target is increased by the accumulation of target pixel energy. The local energy difference in the residual image can therefore be calculated as the signal energy weight, using a sliding window identical to that of the component consistency evaluation process:
W(i,j) = max{0, I_res(i,j) - I_b(i,j)} (10)
wherein I_b(i, j) is the residual mean of the neighborhood positions around pixel (i, j) in the residual image I_res.
After calculating the component uncertainty LUM and the weighting factor, the energy-weighted uncertainty ELUM may be defined as:
ELUM(i,j)=W(i,j)*LUM(i,j) (11)
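Steps (8)–(11) can be sketched end to end with standard array tools. This is an illustrative approximation rather than the patent's exact implementation: scipy's gaussian_filter stands in for the (2P+1) × (2P+1) template GK, and a uniform_filter (which includes the center pixel) approximates the neighborhood residual mean I_b; the neighborhood size is an assumed parameter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def energy_weighted_lum(image, lum, sigma=0.8, neigh=9):
    """Sketch of Eqs. (8)-(11): Gaussian matched filtering, residual,
    local energy weight W, and the energy-weighted uncertainty ELUM.
    sigma in [0.6, 1] per the description; neigh is an assumed window
    size for the residual neighborhood mean I_b."""
    img = np.asarray(image, dtype=float)
    i_gaus = gaussian_filter(img, sigma)      # Eq. (8): smoothed background
    i_res = img - i_gaus                      # Eq. (9): residual image
    i_b = uniform_filter(i_res, size=neigh)   # neighborhood residual mean
    w = np.maximum(0.0, i_res - i_b)          # Eq. (10): energy weight
    return w * lum                            # Eq. (11): ELUM map
```

On a flat image with one bright pixel, the residual (and hence the energy weight) peaks at the target position, which is the effect the weighting is designed to produce.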
and 3, step 3: adaptive threshold segmentation
The true target is most prominent in the uncertainty map while other disturbances are suppressed; after adding the energy weight, the target signal is further emphasized while other regions yield results close to 0. Therefore, the real target is extracted using a threshold operation, with the threshold defined as:
th=λ×Max+(1-λ)×Mean (12)
wherein Max and Mean are the maximum and Mean values in the ELUM graph, respectively; lambda is less than 1.
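Formula (12) is straightforward to apply; the sketch below uses an illustrative λ = 0.7, since the text only requires λ < 1:

```python
import numpy as np

def segment_targets(elum, lam=0.7):
    """Eq. (12): th = lambda * Max + (1 - lambda) * Mean over the
    energy-weighted uncertainty map; lam < 1 (0.7 is an assumed
    illustrative value, not fixed by the text). Returns a binary mask."""
    elum = np.asarray(elum, dtype=float)
    th = lam * elum.max() + (1.0 - lam) * elum.mean()
    return elum > th
```

Because the threshold blends the map maximum and mean, a single dominant peak is isolated while a flat map yields no detections at all.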
The performance of the ELUM method for detection of dim and small targets can be tested by real data experiments:
A. evaluation index
To evaluate the performance of the proposed method, several common indicators are used, two of which are the signal-to-clutter ratio (SCR) gain and the background suppression factor (BSF). SCR, GSCR (SCR gain) and BSF are defined as:
where G_t denotes the maximum energy of the target region, μ_b is the energy mean of the background signal, and σ_b is the standard deviation of the background signal. SCR_in and SCR_out are the SCR of the original image and of the uncertainty map, respectively; σ_in and σ_out are the standard deviations of the original image and of the uncertainty distribution map, respectively.
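Under the definitions above, the metrics can be computed as follows; the helper names are ours, and SCR is taken as (G_t − μ_b)/σ_b per the stated symbol meanings:

```python
import numpy as np

def scr(target_max, background):
    """SCR = (G_t - mu_b) / sigma_b for a target peak against a background
    sample (population standard deviation, ddof = 0)."""
    bg = np.asarray(background, dtype=float)
    return (target_max - bg.mean()) / bg.std()

def gscr(scr_in, scr_out):
    """SCR gain: GSCR = SCR_out / SCR_in."""
    return scr_out / scr_in

def bsf(sigma_in, sigma_out):
    """Background suppression factor: BSF = sigma_in / sigma_out."""
    return sigma_in / sigma_out
```

Larger GSCR means the uncertainty map raised the target above the clutter more than the raw image did, and larger BSF means the background variance was suppressed more strongly.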
The other two measurement parameters are the true positive rate (TPR) and the false positive rate (FPR), which are used to verify the final detection result and are defined as:

TPR = TP / (TP + FN),  FPR = FP / (FP + TN)

where TP is the number of correctly detected targets, FN the number of missed targets, FP the number of false detections, and TN the number of correctly rejected background pixels.
B. Experimental results and comparison
In the experiments, six sets of real infrared sequences containing different background types were tested using the proposed method. All data come from the datasets "infrared image dim-small aircraft target detection and tracking under ground/air background" and "infrared dim-small moving target detection under complex background"; the datasets are listed in Table I.
Table I: detailed information of the Experimental targets
To ensure comprehensiveness and diversity, the method of the invention was compared with the following eight existing representative algorithms: LMWIE, IPI, NRAM, MPCM, RLCM, ADMD, TLLCM, and WSLCM. All experiments were performed using MATLAB on a device with a 2.8 GHz Intel(R) Xeon(R) W-10855M CPU and 32 GB RAM. The saliency maps and detection results are shown in FIG. 3. As shown in FIG. 3, ELUM can effectively enhance small targets while suppressing complex backgrounds, with no or few false-alarm objects in the five images.
In the comparison experiments, MPCM, RLCM and ADMD perform poorly: ADMD detects targets poorly, while MPCM and RLCM suppress the background poorly, so they have difficulty handling complex backgrounds. TLLCM and WSLCM each fail to detect targets in three images; WSLCM is superior to TLLCM in background suppression, but both have high false alarm rates. LMWIE can detect all targets, but the background information is not completely filtered out and some background contour information remains. By decomposing the target information from the background, IPI and NRAM also achieve high detection rates. However, the performance of IPI is not stable and varies greatly across different sequences. NRAM performs best among the eight compared methods, but its false alarms are still clearly more numerous than those of the proposed method.
The average SCRG and average BSF for the six experimental groups are shown in Table II.
Table II: SCR and BSF values for different algorithms
In seq.3, WSLCM has a slightly larger BSF, and its SCRG is similar to that of the method of the present invention. In seq.6, the SCRG and BSF of NRAM are the largest, followed by the method of the present invention. In addition, LMWIE, IPI and WSLCM also perform well. In general, compared with the other methods, the method of the invention achieves a larger SCRG, a larger BSF, and stable performance over the six sets of sequence data.
To further demonstrate the detection performance of ELUM, the ROC curves and run times of the nine detection methods on the test set are shown in FIG. 4 and Table III.
Table III: run time of one frame in different algorithms (S)
In seq.1 to seq.5, ELUM has a higher TPR and a lower FPR than the other methods. In seq.6, NRAM, IPI and our method all perform well. Combined with Table III, the efficiency of ELUM is significantly higher than that of the other methods when the TPR and FPR are similar. In general, ELUM achieves the best performance against ground, ground-air and air backgrounds.
The invention provides the ELUM algorithm, which comprises two modules: the LUM and the energy weighting function. In the LUM, the idea of local component consistency discrimination is employed to suppress complex backgrounds and enhance targets, while the energy weighting function is an enhanced use of the target energy information. Experiments show that the method achieves good detection performance under complex backgrounds.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention and are not limiting. Although the present invention has been described in detail with reference to the embodiments, those skilled in the art should understand that the technical solutions of the present invention may be modified or equivalently substituted without departing from their spirit and scope, and all such modifications shall be covered by the claims of the present invention.
Claims (8)
1. A method for weak and small target detection by component uncertainty measurement, the method comprising:
step 1: constructing a three-layer nested sliding window structure, in which a central window extends outward to form a multi-level window consisting of three parts: an innermost central layer, an outermost environment layer, and a neighborhood layer sandwiched between the two; evaluating the local signal component consistency of the signals in the neighborhood layer by means of the outermost environment layer to obtain a local consistency map; assigning component consistency confidences according to the local consistency evaluation result, measuring the uncertainty in the region, and drawing an uncertainty distribution map;
step 2: performing Gaussian template matched filtering in the three-layer nested window, and completing the calculation of local energy weighting factors using the residuals to obtain an uncertainty map with energy weighting;
step 3: carrying out adaptive threshold segmentation on the uncertainty map with energy weighting, eliminating non-target components, and completing target extraction.
2. The method for detecting a small object by component uncertainty measurement according to claim 1, wherein the step 1 specifically comprises:
step 1-1: constructing a three-layer nested sliding window structure, wherein the central window expands outward to form an M×M multi-level window consisting of three parts: an innermost central layer, an outermost environment layer, and a neighborhood layer sandwiched between the two; wherein M is a positive integer;
step 1-2: evaluating the consistency of signal components between the environment layer and the surrounding neighborhood region using a local signal gray-consistency evaluation criterion to obtain an N×N local consistency map; the evaluation criterion is:
where LC_ij denotes the consistency evaluation between the pixel at coordinate (i,j) and the signal components of the surrounding neighborhood region; G_ij denotes the N×N area block centered at coordinate (i,j), M−N being an even number; g_ij denotes the pixel at coordinate (i,j); Ḡ_k denotes the gray mean of the neighborhood block with index k, where K takes the value N×N−1;
step 1-3: assigning component consistency confidences according to the local signal gray-consistency evaluation result, measuring the uncertainty in the region, and drawing an uncertainty distribution map;
the component uncertainty LUM (i, j) is measured as follows:
LUM(i,j) = U_ij − Entropy_min
where U_ij is the uncertainty measured at the pixel (i,j) position:
where the component consistency confidence values are those assigned to the blocks in the window structure centered at (i,j):
Entropy_min is the minimum entropy:
3. the method for detecting a small object by component uncertainty measurement according to claim 2, wherein the step 2 specifically comprises:
performing (2p+1)×(2p+1) Gaussian template matched filtering in the three-layer nested window, and completing the calculation of local energy weighting factors using the residuals to obtain an uncertainty map with energy weighting;
the gaussian template matching filtering process is represented as:
where I(i+x, j+y) denotes the original image data at pixel (i+x, j+y); I_gaus(i,j) denotes the result of the Gaussian convolution of the original image at pixel (i,j); p denotes the center of the Gaussian template, and σ denotes an adjustment parameter with a value in the range 0 to 5;
after Gaussian template matching convolution, obtaining the residual error I of the original image and the image after Gaussian convolution res (i,j):
I_res(i,j) = I(i,j) - I_gaus(i,j)
where I(i,j) denotes the original image data at pixel (i,j);
calculating the local energy difference in the residual image as the signal energy weight, using the same sliding window as in the component consistency evaluation process:
W(i,j) = max{0, I_res(i,j) - I_b(i,j)}
where I_b(i,j) is the mean residual over the neighborhood positions around pixel (i,j) in the residual image I_res;
the energy weighted uncertainty ELUM (i, j) is defined as:
ELUM(i,j) = W(i,j) × LUM(i,j).
4. the method for detecting a small object by component uncertainty measurement according to claim 3, wherein the step 3 specifically comprises:
extracting a real target using a threshold operation;
the threshold th is defined as:
th=λ×Max+(1-λ)×Mean
where Max and Mean are the maximum and mean values, respectively, in the energy-weighted uncertainty map, and λ < 1.
5. A system for weak and small target detection by component uncertainty measurement, the system comprising:
the local uncertainty measurement module is used for constructing a three-layer nested sliding window structure, in which a central window expands outward to form a multi-level window consisting of three parts: an innermost central layer, an outermost environment layer, and a neighborhood layer sandwiched between the two; the outermost environment layer is used for evaluating the local signal component consistency of the signals in the neighborhood layer to obtain a local consistency map, and the uncertainty in the region is measured by assigning component consistency confidences according to the local consistency evaluation result, so as to draw an uncertainty distribution map;
the energy-weighted uncertainty map module is used for performing Gaussian template matched filtering in the three-layer nested window, and completing the calculation of local energy weighting factors using the residuals to obtain an uncertainty map with energy weighting;
and the target extraction module is used for performing adaptive threshold segmentation on the uncertainty map with energy weighting, eliminating non-target components, and completing target extraction.
6. The system for detecting weak small objects based on component uncertainty measurement according to claim 5, wherein the local uncertainty measurement module processes:
constructing a three-layer nested sliding window structure, in which the central window extends outward to form an M×M multi-level window consisting of three parts: an innermost central layer, an outermost environment layer, and a neighborhood layer sandwiched between the two; wherein M is a positive integer;
evaluating the consistency of signal components between the environment layer and the surrounding neighborhood region using a local signal gray-consistency evaluation criterion to obtain an N×N local consistency map; the evaluation criterion is:
where LC_ij denotes the consistency evaluation between the pixel at coordinate (i,j) and the signal components of the surrounding neighborhood region; G_ij denotes the N×N area block centered at coordinate (i,j), M−N being an even number; g_ij denotes the pixel at coordinate (i,j); Ḡ_k denotes the gray mean of the neighborhood block with index k, where K takes the value N×N−1;
assigning component consistency confidences according to the local signal gray-consistency evaluation result, measuring the uncertainty in the region, and drawing an uncertainty distribution map;
the component uncertainty LUM (i, j) is measured as follows:
LUM(i,j) = U_ij − Entropy_min
where U_ij is the uncertainty measured at the pixel (i,j) position:
where the component consistency confidence values are those assigned to the blocks in the window structure centered at (i,j):
Entropy_min is the minimum entropy:
7. the system for weak small object detection of compositional uncertainty measurements according to claim 6, characterized in that said energy weighted uncertainty map module processes:
performing (2p+1)×(2p+1) Gaussian template matched filtering in the three-layer nested window, and completing the calculation of local energy weighting factors using the residuals to obtain an uncertainty map with energy weighting;
the gaussian template matching filtering process is represented as:
where I(i+x, j+y) denotes the original image data at pixel (i+x, j+y); I_gaus(i,j) denotes the result of the Gaussian convolution of the original image at pixel (i,j); p denotes the center of the Gaussian template, and σ denotes an adjustment parameter with a value in the range 0 to 5;
after the Gaussian template matched convolution, the residual I_res(i,j) between the original image and the Gaussian-convolved image is obtained:
I_res(i,j) = I(i,j) - I_gaus(i,j)
where I(i,j) denotes the original image data at pixel (i,j);
calculating the local energy difference in the residual image as the signal energy weight, using the same sliding window as in the component consistency evaluation process:
W(i,j) = max{0, I_res(i,j) - I_b(i,j)}
where I_b(i,j) is the mean residual over the neighborhood positions around pixel (i,j) in the residual image I_res;
the energy weighted uncertainty ELUM (i, j) is defined as:
ELUM(i,j) = W(i,j) × LUM(i,j).
8. the system for detecting weak and small objects of component uncertainty measurement according to claim 7, characterized in that the object extraction module processes:
extracting a real target using a threshold operation;
the threshold th is defined as:
th=λ×Max+(1-λ)×Mean
where Max and Mean are respectively the maximum and mean values in the uncertainty map with energy weighting, and λ < 1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211031297.9A CN115359258B (en) | 2022-08-26 | 2022-08-26 | Weak and small target detection method and system for component uncertainty measurement |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115359258A true CN115359258A (en) | 2022-11-18 |
CN115359258B CN115359258B (en) | 2023-04-28 |
Family
ID=84003703
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211031297.9A Active CN115359258B (en) | 2022-08-26 | 2022-08-26 | Weak and small target detection method and system for component uncertainty measurement |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115359258B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115908807A (en) * | 2022-11-24 | 2023-04-04 | 中国科学院国家空间科学中心 | Method, system, computer equipment and medium for quickly detecting weak and small targets |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120219185A1 (en) * | 2009-09-01 | 2012-08-30 | Ucl Business Plc | Apparatus and method for determining a location in a target image |
CN108010047A (en) * | 2017-11-23 | 2018-05-08 | 南京理工大学 | A kind of moving target detecting method of combination unanimity of samples and local binary patterns |
CN111784738A (en) * | 2020-06-19 | 2020-10-16 | 中国科学院国家空间科学中心 | Extremely dark and weak moving target correlation detection method based on fluctuation analysis |
CN113436217A (en) * | 2021-07-23 | 2021-09-24 | 山东大学 | Unmanned vehicle environment detection method based on deep learning |
CN113516187A (en) * | 2021-07-13 | 2021-10-19 | 周口师范学院 | Infrared weak and small target detection algorithm adopting local characteristic contrast |
CN114332489A (en) * | 2022-03-15 | 2022-04-12 | 江西财经大学 | Image salient target detection method and system based on uncertainty perception |
Non-Patent Citations (4)
Title |
---|
JINHUI HAN等: "Infrared small target detection utilizing the multiscale relative local contrast measure" * |
LIU Depeng; LI Zhengzhou; ZENG Jingjie; XIONG Weiqi; QI Bo: "Infrared dim and small target detection algorithm based on multi-scale local contrast and multi-scale gradient consistency" *
ZHANG Xishan; HUANG Kaoli; YAN Pengcheng; LIAN Guangyao; LI Zhiyu: "Prior testability information fusion method based on uncertainty measure and support degree" *
YANG Qili; ZHOU Binghong; ZHENG Wei; LI Mingtao: "Dim and small target detection method based on fully convolutional recurrent network" *
Also Published As
Publication number | Publication date |
---|---|
CN115359258B (en) | 2023-04-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109767439B (en) | Target detection method for multi-scale difference and bilateral filtering of self-adaptive window | |
Guan et al. | Gaussian scale-space enhanced local contrast measure for small infrared target detection | |
CN111985329A (en) | Remote sensing image information extraction method based on FCN-8s and improved Canny edge detection | |
CN106469313B (en) | A kind of detection method of small target of caliber adaptive space-time domain filtering | |
CN103761731A (en) | Small infrared aerial target detection method based on non-downsampling contourlet transformation | |
CN110706208A (en) | Infrared dim target detection method based on tensor mean square minimum error | |
CN110827262A (en) | Weak and small target detection method based on continuous limited frame infrared image | |
CN115359258B (en) | Weak and small target detection method and system for component uncertainty measurement | |
CN108038856B (en) | Infrared small target detection method based on improved multi-scale fractal enhancement | |
CN112598069A (en) | Hyperspectral target tracking method based on feature extraction and weight coefficient parameter updating | |
CN113205494B (en) | Infrared small target detection method and system based on adaptive scale image block weighting difference measurement | |
CN106056115B (en) | A kind of infrared small target detection method under non-homogeneous background | |
CN112598711B (en) | Hyperspectral target tracking method based on joint spectrum dimensionality reduction and feature fusion | |
CN112395944A (en) | Multi-scale ratio difference combined contrast infrared small target detection method based on weighting | |
CN112163606B (en) | Infrared small target detection method based on block contrast weighting | |
CN106778822B (en) | Image straight line detection method based on funnel transformation | |
CN116310837B (en) | SAR ship target rotation detection method and system | |
CN117253150A (en) | Ship contour extraction method and system based on high-resolution remote sensing image | |
Zhao et al. | Infrared small target detection using local component uncertainty measure with consistency assessment | |
CN112348853B (en) | Particle filter tracking method based on infrared saliency feature fusion | |
CN113379639B (en) | Anti-interference detection tracking method for infrared target in complex environment | |
CN113516187A (en) | Infrared weak and small target detection algorithm adopting local characteristic contrast | |
CN114429593A (en) | Infrared small target detection method based on rapid guided filtering and application thereof | |
CN108280453B (en) | Low-power-consumption rapid image target detection method based on deep learning | |
CN107292854B (en) | Gray level image enhancement method based on local singularity quantitative analysis |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |