CN113935984A - Multi-feature fusion method and system for detecting infrared dim small target in complex background - Google Patents
- Publication number: CN113935984A (application CN202111281572.8A)
- Authority: CN (China)
- Prior art keywords: characteristic, target, image, saliency map, radiation
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T 7/0002 — Image analysis; inspection of images, e.g. flaw detection
- G06F 18/23 — Pattern recognition; clustering techniques
- G06F 18/253 — Fusion techniques of extracted features
- G06T 5/10 — Image enhancement or restoration using non-spatial domain filtering
- G06T 5/30 — Erosion or dilatation, e.g. thinning (local operators, G06T 5/20)
- G06T 5/70 — Denoising; smoothing
- G06T 7/136 — Segmentation; edge detection involving thresholding
- G06T 2207/10048 — Infrared image (image acquisition modality)
- G06T 2207/20004 — Adaptive image processing
- G06T 2207/20036 — Morphological image processing
- G06T 2207/20056 — Discrete and fast Fourier transform [DFT, FFT] (transform domain processing)
Abstract
A multi-feature fusion method and system for detecting infrared dim small targets in a complex background belong to the technical field of infrared target detection and identification. They solve the prior-art problems of poor algorithm adaptability when detecting infrared dim small targets in a complex background and of the large number of false alarms that degrade the precision of the detection result. The method first extracts radiation features, multi-order directional derivative features, and spectral features, which respectively characterize the radiation, structural, and intensity properties of a dim target; the features are fused to construct a feature saliency map that enhances the target while suppressing background noise. A CFAR (constant false alarm rate) adaptive detection method then computes a segmentation threshold for the image to obtain a binary segmentation result, and morphological processing screens out false targets caused by isolated points and noise, yielding the final dim small target detection result. The algorithm of the invention has low complexity, adapts well to complex backgrounds, achieves high detection precision, and is convenient for engineering implementation.
Description
Technical Field
The invention belongs to the technical field of infrared target detection and identification, and relates to a multi-feature fusion method and system for detecting infrared dim and small targets in a complex background.
Background
In the traditional single-feature threshold-segmentation detection algorithm for dim small targets in infrared images, the accuracy of the detection result depends on the intensity and number of target pixels. Because a small target occupies a very low proportion of the pixels in an infrared image, it is often submerged by background and noise. Directly applying a preset threshold segmentation method to such images processes only a single characteristic, so the achievable target detection rate is low and the false alarm rate is high.
At present, infrared target detection methods based on multi-feature fusion have been partially researched at home and abroad, and usable algorithm models exist, but several important problems remain unsolved. First, most existing multi-feature-fusion infrared target detection algorithms focus on targets of medium size and above, and extract features poorly for targets with low target-background contrast and few pixels, so a high target detection rate is difficult to obtain. Second, existing algorithms adapt poorly to detecting infrared dim small targets in a complex background, so a large number of false alarms appear in the detection result and degrade its precision.
The thesis "Research on infrared small target detection algorithms under complex background" (Northeastern University, June 2012) analyzes the three elements of an infrared image, namely target, background, and noise, around the problem of infrared small target detection. It derives a quantitative description of infrared image region complexity from analysis of infrared small target images, analyzes and discusses the variance-weighted information entropy, gradient direction features, and local contrast features of the image, and, because single-frame detection can hardly guarantee the detection result, proposes a sequence-image detection algorithm based on the three-dimensional wavelet transform. However, this technical scheme still suffers from poor algorithm adaptability and low accuracy when detecting infrared dim small targets in a complex background, with a large number of false alarms degrading the precision of the detection result.
Disclosure of Invention
The invention aims to provide a multi-feature fusion method and system for detecting infrared dim small targets in a complex background, so as to solve the prior-art problems of poor algorithm adaptability when detecting such targets and of the large number of false alarms that degrade the precision of the detection result.
To this end, a technique for detecting infrared dim small targets in a complex background based on multi-feature fusion is provided, which improves the interpretability of dim small targets in infrared images and the accuracy of the detection result.
The invention solves the technical problems through the following technical scheme:
the method for detecting the infrared dim small target in the complex background with multi-feature fusion comprises the following steps:
s1, inputting an original infrared small target image, preprocessing the input image by adopting a high-pass filter, and respectively extracting a radiation characteristic saliency map, a multi-order directional derivative characteristic saliency map and a spectrum characteristic saliency map of the image;
s2, fusing the radiation characteristic saliency map, the multi-order directional derivative characteristic saliency map and the spectrum characteristic saliency map by adopting a normalized characteristic fusion mode based on a visual attention mechanism to generate a saliency characteristic fusion map;
s3, for the saliency feature fusion map, firstly, the CFAR algorithm is adopted to detect weak and small targets by traversing each pixel gray level and judging whether the pixel gray level exceeds the self-adaptive segmentation threshold value, then, the detection results are clustered by using a pixel clustering mode, finally, isolated points are screened out, false targets caused by noise are screened out, and target segmentation results are obtained.
The technical scheme provided by the invention first extracts radiation features, multi-order directional derivative features, and spectral features, which respectively characterize the radiation, structural, and intensity properties of a dim small target; the features are fused to construct a feature saliency map that enhances the target while suppressing background noise. A CFAR adaptive detection method then computes a segmentation threshold for the image to obtain a binary segmentation result, and morphological processing screens out false targets caused by isolated points and noise, yielding the final dim small target detection result. The algorithm of the invention has low complexity, adapts well to complex backgrounds, achieves high detection precision, and is convenient for engineering implementation.
As a further improvement of the technical solution of the present invention, the method for extracting the radiation characteristic saliency map in step S1 includes:
1) correcting the original infrared small target image through environmental influence to obtain a temperature field two-dimensional distribution diagram, wherein the correction formula is as follows:
B(T_obs) = B(T_target)·ε·τ_atm + B(T_atm)·(1 - τ_atm)
2) according to the obtained temperature field two-dimensional distribution graph, the zero-visual-distance black body equivalent brightness temperature two-dimensional distribution graph is obtained through calculation as follows:
3) obtaining the zero-range temperature field inversion result T_target, namely the radiation characteristic saliency map, by applying the inverse Planck transformation to the zero-range blackbody equivalent brightness-temperature two-dimensional distribution map:
S_radiation = T_target = B^(-1)(L)
where B(T_obs) is the blackbody radiance at temperature T_obs; B(T_target) is the blackbody radiance corresponding to the true target temperature T_target; ε is the equivalent emissivity of the blackbody in the thermal imager's waveband; B(T_atm)·(1 - τ_atm) is the path radiance superimposed on the observed value; and the parameter τ_atm is related to the environmental parameters.
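As a sketch of steps 1) to 3), the band-integrated blackbody function B(T) can be approximated by Planck's law at a single effective wavelength; the wavelength, emissivity, transmittance, and atmospheric-temperature defaults below are illustrative assumptions, not values from the patent:

```python
import numpy as np

# Physical constants for Planck's law
H = 6.626e-34   # Planck constant (J*s)
C = 2.998e8     # speed of light (m/s)
KB = 1.381e-23  # Boltzmann constant (J/K)

def planck_radiance(T, wavelength=10e-6):
    """Blackbody spectral radiance B(T) at an assumed effective wavelength."""
    return (2 * H * C**2 / wavelength**5) / np.expm1(H * C / (wavelength * KB * T))

def inverse_planck(L, wavelength=10e-6):
    """Inverse Planck transformation: recover temperature T from radiance L."""
    return (H * C / (wavelength * KB)) / np.log1p(2 * H * C**2 / (wavelength**5 * L))

def radiation_saliency(T_obs, emissivity=0.95, tau_atm=0.8, T_atm=288.0, wavelength=10e-6):
    """Zero-range inversion: remove path radiance, divide out eps*tau, invert Planck.
    Returns S_radiation = T_target per the correction formula above."""
    B_obs = planck_radiance(T_obs, wavelength)
    B_path = planck_radiance(T_atm, wavelength) * (1.0 - tau_atm)  # superimposed path radiance
    L = (B_obs - B_path) / (emissivity * tau_atm)                  # target radiance at zero range
    return inverse_planck(np.maximum(L, 1e-12), wavelength)
```

With ε = τ_atm = 1 the correction is the identity, which is a quick sanity check on the inversion.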
As a further improvement of the technical solution of the present invention, the method for extracting the multiple-order directional derivative feature saliency map in step S1 includes:
a) extracting, for each pixel point (x_0, y_0) on the original infrared small target image, the multi-order directional derivative characteristics along the direction vector, as follows:
b) calculating the orthogonality according to least square surface fitting and a polynomial:
obtaining:
three weight coefficient matrices are thus obtained as follows:
where α is the angle between the direction vector and the x-axis, β is the angle between the direction vector and the y-axis, and K_i (i = 4, 5, 6) denote weight coefficients; the parameters r and c denote the buffer radii in the x-axis and y-axis directions respectively, I(x + r, y + c) denotes the pixel value at point (x + r, y + c), and P_i(r, c) denotes the direction vector of the point (r, c);
c) correcting the obtained multi-order directional derivative feature maps: traverse the pixel values in each feature map and set any value greater than zero to zero; normalize the feature maps; process each image globally with a 3 × 3 filtering window; then cross-fuse the images over the directional channels by point-multiplying mutually orthogonal feature-map vectors, which suppresses background clutter noise and enhances dim small targets, yielding the multi-order directional derivative feature saliency map:
where g denotes the number of orthogonal basis sets; S_g denotes a directional feature map, ΔS_g denotes the feature map in the direction orthogonal to S_g, and N(·) denotes the normalization function.
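The derivative templates and weight matrices above appear only as figures in the source, so the following sketch substitutes simple second-derivative kernels in two orthogonal direction pairs (0°/90° and 45°/135°); it illustrates the flow of step c), namely clamping positive responses, normalization, 3 × 3 filtering, and point-multiplication of orthogonal feature maps:

```python
import numpy as np
from scipy import ndimage

def modd_saliency(img):
    """Sketch of the multi-order directional-derivative (MODD) saliency map.
    The kernels here are illustrative stand-ins for the patent's K_i templates."""
    img = img.astype(np.float64)
    # Second directional derivatives along 0 deg / 90 deg (one orthogonal pair).
    d2 = {
        0:  ndimage.convolve1d(img, [1.0, -2.0, 1.0], axis=1),
        90: ndimage.convolve1d(img, [1.0, -2.0, 1.0], axis=0),
    }
    # 45 deg / 135 deg (a second orthogonal pair), via diagonal kernels.
    diag = np.array([[1.0, 0.0, 0.0], [0.0, -2.0, 0.0], [0.0, 0.0, 1.0]])
    d2[45] = ndimage.convolve(img, diag)
    d2[135] = ndimage.convolve(img, diag[::-1])

    def prep(fm):
        fm = np.where(fm > 0, 0.0, fm)       # step c): zero out positive responses
        fm = -fm                             # bright blobs give negative 2nd derivative
        fm /= fm.max() + 1e-12               # normalization N(.) to [0, 1]
        return ndimage.uniform_filter(fm, 3) # 3x3 filtering window

    # Cross-fuse: point-multiply each orthogonal pair of feature maps, then sum.
    s = prep(d2[0]) * prep(d2[90]) + prep(d2[45]) * prep(d2[135])
    return s / (s.max() + 1e-12)
```

The point-multiplication of orthogonal channels is what suppresses elongated background edges, which respond strongly in only one direction.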
As a further improvement of the technical solution of the present invention, the method for extracting the spectral feature saliency map described in step S1 includes:
i) in a frequency domain, carrying out Fourier transform on the preprocessed original infrared small target image to obtain:
I_F = F(I_orig)
II) separating the amplitude spectrum and the phase spectrum of the original data from the frequency spectrum image to obtain:
A_f = Abs(I_F)
P_f = Angle(I_F)
III) fitting the background amplitude spectrum of the image with a mean filter h_n(f), the template size being adaptively adjusted according to the size of the image-domain target, to obtain:
L_f = log(A_f)
L_f_smooth = h_n(f) * L_f
where L_f is the log of the amplitude spectrum, L_f_smooth is the result of mean-filtering it, and h_n(f) is the template matrix, calculated as follows:
IV) removing the background estimate to obtain the significant log-amplitude spectrum:
R_f = L_f - L_f_smooth
v) adding the significant log magnitude spectrum with the phase spectrum in the step II) and carrying out IFFT transformation and high-pass filtering enhancement to obtain a spectrum characteristic significant graph:
S_x = g_x * F^(-1)[exp(R_f + P_f)]^2.
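Steps I) to V) amount to a spectral-residual saliency computation and can be sketched as follows; the 3 × 3 mean template and the Gaussian post-filter standing in for g_x are common choices, not parameters taken from the patent:

```python
import numpy as np
from scipy import ndimage

def spectral_residual_saliency(img, n=3, sigma=2.5):
    """Spectral-residual saliency map, following steps I)-V)."""
    img = img.astype(np.float64)
    IF = np.fft.fft2(img)                        # I)  Fourier transform
    Af, Pf = np.abs(IF), np.angle(IF)            # II) amplitude / phase spectra
    Lf = np.log(Af + 1e-12)                      # III) log-amplitude L_f
    Lf_smooth = ndimage.uniform_filter(Lf, n)    #     mean template h_n(f)
    Rf = Lf - Lf_smooth                          # IV) significant log-amplitude R_f
    # V) recombine with the phase spectrum, inverse FFT, square
    S = np.abs(np.fft.ifft2(np.exp(Rf + 1j * Pf))) ** 2
    return ndimage.gaussian_filter(S, sigma)     # post-enhancement filter (stand-in for g_x)
```

Note the phase enters as an imaginary exponent in this sketch, the usual spectral-residual formulation.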
as a further improvement of the technical solution of the present invention, the calculation formula of the significant feature fusion map described in step S2 is:
where S_radiation, S_MODD, and S_x denote the radiation, multi-order directional derivative, and spectral feature saliency maps, respectively.
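The fusion formula itself is shown only as a figure in the source; per the description elsewhere in the patent (a normalized fusion computed as a weighted geometric average), a sketch might look like this, with equal weights as an assumption:

```python
import numpy as np

def fuse_saliency(maps, weights=None):
    """Normalized fusion of saliency maps as a weighted geometric mean.
    Equal weights are an assumption; the patent's exact formula is not reproduced."""
    if weights is None:
        weights = [1.0 / len(maps)] * len(maps)
    fused = np.ones_like(maps[0], dtype=np.float64)
    for m, w in zip(maps, weights):
        m = m.astype(np.float64)
        m = (m - m.min()) / (m.max() - m.min() + 1e-12)  # normalization N(.) to [0, 1]
        fused *= m ** w                                   # weighted geometric mean
    return fused
```

A geometric mean keeps only regions salient in every feature map, which is how the fusion suppresses single-channel false alarms.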
As a further improvement of the technical solution of the present invention, the calculation formula of the adaptive segmentation threshold value in step S3 is:
P_fa = ∫_T^(+∞) p(x) dx
where T is the adaptive segmentation threshold, P_fa is the preset false alarm rate, and p(x) is the background probability density distribution function.
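Solving P_fa = ∫_T^(+∞) p(x) dx for T requires a background model; the patent leaves p(x) general, so the sketch below assumes a Gaussian background fitted from the fusion map:

```python
import numpy as np
from scipy.stats import norm

def cfar_threshold(fused_map, pfa=1e-4):
    """Adaptive CFAR threshold T solving P_fa = integral_T^inf p(x) dx,
    under an assumed Gaussian background p(x) fitted to the image."""
    mu, sigma = fused_map.mean(), fused_map.std()
    return mu + sigma * norm.ppf(1.0 - pfa)  # closed-form solution for a Gaussian

def detect(fused_map, pfa=1e-4):
    """Binary segmentation: pixels above the adaptive threshold are target candidates."""
    return fused_map > cfar_threshold(fused_map, pfa)
```

Lowering P_fa raises the threshold, trading missed detections for fewer false alarms.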
The multi-feature fusion system for detecting infrared dim small targets in a complex background comprises a feature saliency map extraction module, a saliency feature fusion module, and a detection result output module.
the feature saliency map extraction module: inputting an original infrared small target image, preprocessing the input image by adopting a high-pass filter, and respectively extracting a radiation characteristic saliency map, a multi-order directional derivative characteristic saliency map and a spectrum characteristic saliency map of the image;
the saliency characteristic fusion module fuses a radiation characteristic saliency map, a multi-order directional derivative characteristic saliency map and a spectrum characteristic saliency map by adopting a normalized characteristic fusion mode based on a visual attention mechanism to generate a saliency characteristic fusion map;
the detection result output module: for the saliency feature fusion map, first detecting dim small targets with the CFAR algorithm by traversing the gray level of each pixel and judging whether it exceeds the adaptive segmentation threshold, then clustering the detection results by pixel clustering, and finally screening out isolated points and false targets caused by noise to obtain the target segmentation result.
As a further improvement of the technical scheme of the invention, the extraction method of the radiation characteristic saliency map comprises the following steps:
1) correcting the original infrared small target image through environmental influence to obtain a temperature field two-dimensional distribution diagram, wherein the correction formula is as follows:
B(T_obs) = B(T_target)·ε·τ_atm + B(T_atm)·(1 - τ_atm)
2) according to the obtained temperature field two-dimensional distribution graph, the zero-visual-distance black body equivalent brightness temperature two-dimensional distribution graph is obtained through calculation as follows:
3) obtaining the zero-range temperature field inversion result T_target, namely the radiation characteristic saliency map, by applying the inverse Planck transformation to the zero-range blackbody equivalent brightness-temperature two-dimensional distribution map:
S_radiation = T_target = B^(-1)(L)
where B(T_obs) is the blackbody radiance at temperature T_obs; B(T_target) is the blackbody radiance corresponding to the true target temperature T_target; ε is the equivalent emissivity of the blackbody in the thermal imager's waveband; B(T_atm)·(1 - τ_atm) is the path radiance superimposed on the observed value; and the parameter τ_atm is related to the environmental parameters;
the extraction method of the multi-order directional derivative characteristic saliency map comprises the following steps:
a) extracting, for each pixel point (x_0, y_0) on the original infrared small target image, the multi-order directional derivative characteristics along the direction vector, as follows:
b) calculating the orthogonality according to least square surface fitting and a polynomial:
obtaining:
three weight coefficient matrices are thus obtained as follows:
where α is the angle between the direction vector and the x-axis, β is the angle between the direction vector and the y-axis, and K_i (i = 4, 5, 6) denote weight coefficients; the parameters r and c denote the buffer radii in the x-axis and y-axis directions respectively, I(x + r, y + c) denotes the pixel value at point (x + r, y + c), and P_i(r, c) denotes the direction vector of the point (r, c);
c) correcting the obtained multi-order directional derivative feature maps: traverse the pixel values in each feature map and set any value greater than zero to zero; normalize the feature maps; process each image globally with a 3 × 3 filtering window; then cross-fuse the images over the directional channels by point-multiplying mutually orthogonal feature-map vectors, which suppresses background clutter noise and enhances dim small targets, yielding the multi-order directional derivative feature saliency map:
where g denotes the number of orthogonal basis sets; S_g denotes a directional feature map, ΔS_g denotes the feature map in the direction orthogonal to S_g, and N(·) denotes the normalization function;
the extraction method of the spectral feature saliency map comprises the following steps:
i) in a frequency domain, carrying out Fourier transform on the preprocessed original infrared small target image to obtain:
I_F = F(I_orig)
II) separating the amplitude spectrum and the phase spectrum of the original data from the frequency spectrum image to obtain:
A_f = Abs(I_F)
P_f = Angle(I_F)
III) fitting the background amplitude spectrum of the image with a mean filter h_n(f), the template size being adaptively adjusted according to the size of the image-domain target, to obtain:
L_f = log(A_f)
L_f_smooth = h_n(f) * L_f
where L_f is the log of the amplitude spectrum, L_f_smooth is the result of mean-filtering it, and h_n(f) is the template matrix, calculated as follows:
IV) removing the background estimate to obtain the significant log-amplitude spectrum:
R_f = L_f - L_f_smooth
v) adding the significant log magnitude spectrum with the phase spectrum in the step II) and carrying out IFFT transformation and high-pass filtering enhancement to obtain a spectrum characteristic significant graph:
S_x = g_x * F^(-1)[exp(R_f + P_f)]^2.
as a further improvement of the technical solution of the present invention, the calculation formula of the saliency feature fusion map is:
where S_radiation, S_MODD, and S_x denote the radiation, multi-order directional derivative, and spectral feature saliency maps, respectively.
As a further improvement of the technical solution of the present invention, the calculation formula of the adaptive segmentation threshold is:
P_fa = ∫_T^(+∞) p(x) dx
where T is the adaptive segmentation threshold, P_fa is the preset false alarm rate, and p(x) is the background probability density distribution function.
The invention has the advantages that:
the technical scheme provided by the invention comprises the steps of firstly extracting radiation characteristics, multi-order directional derivative characteristics and spectrum characteristics respectively representing the radiation characteristics, structural characteristics and intensity characteristics of a weak and small target, fusing a plurality of characteristics, constructing a characteristic saliency map, and inhibiting background noise while enhancing the target; calculating a segmentation threshold of the image by using a CFAR self-adaptive detection method to obtain a binary segmentation result, and performing morphological processing to screen out false targets caused by isolated points and noise to obtain a final weak and small target detection result; the algorithm of the invention has low complexity, strong self-adaption for complex background, higher detection precision and convenient engineering realization.
Drawings
FIG. 1 is a flowchart of a method for detecting infrared dim targets in a complex background with multi-feature fusion according to a first embodiment of the present invention;
FIG. 2 is a flow chart of the extraction of the saliency map of the spectral features according to the first embodiment of the present invention;
fig. 3 is a schematic diagram of weak and small target detection according to a first embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the embodiments of the present invention, and it is obvious that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The technical scheme of the invention is further described by combining the drawings and the specific embodiments in the specification:
example one
As shown in fig. 1, a method for detecting infrared weak and small targets in a complex background with multi-feature fusion includes the following steps:
step 1, correcting the original infrared digital image for environmental influence through the following calculation formula to obtain a zero-line-of-sight blackbody equivalent brightness and temperature two-dimensional distribution map:
B(T_obs) is the blackbody radiance at a temperature equal to T_obs, namely the brightness temperature observed by the thermal imager, obtained from the instrument's observation; B(T_target) is the blackbody radiance corresponding to the true target temperature T_target, obtained from the instrument parameter table; ε is the equivalent emissivity of the blackbody in the thermal imager's waveband; B(T_atm)·(1 - τ_atm) is the path radiance superimposed on the observed value, and τ_atm is related to environmental parameters such as the observation distance and the atmospheric temperature and humidity.
By the inverse Planck transformation, the zero-line-of-sight temperature field inversion result T_target, i.e. the radiation feature saliency map S_radiation, is obtained:
S_radiation = T_target = B^(-1)(L)
Step 2, on the basis of the quantized original infrared digital image, extracting for each pixel point (x_0, y_0) on the image the multi-order directional derivative characteristics along the direction vector:
α is the angle between the direction vector and the x-axis, β is the angle between the direction vector and the y-axis; the coefficients K_i (i = 4, 5, 6) are related to the pixel coordinates of the input image and are denoted K_i(x, y) (i = 4, 5, 6), calculated from the least-squares surface fit and the orthogonality of the polynomials:
calculating to obtain:
three weight templates are calculated:
The obtained directional feature maps are corrected: traverse the pixel values and set any value greater than zero to zero; normalize the image to the interval [0, 1]; and process the image globally with a filtering window of size 3 × 3.
And performing cross fusion on the images on all direction channels to obtain a direction characteristic diagram:
where g denotes the number of orthogonal basis sets; S_g denotes a directional feature map, ΔS_g denotes the feature map in the direction orthogonal to S_g, N(·) denotes normalization, and S_MODD denotes the fused feature map, obtained by point-multiplying mutually orthogonal feature-map vectors.
Step 3, obtaining the spectral feature saliency map of the image: in the frequency domain, perform the Fourier transform on the original image:
I_F = F(I_orig)
separating amplitude spectrum information and phase spectrum information of the original data from the spectrum image:
A_f = Abs(I_F)
P_f = Angle(I_F)
Using the mean filter h_n(f), the background amplitude spectrum of the image is fitted; the template size can be adaptively adjusted according to the size of the image-domain target.
L_f = log(A_f)
L_f_smooth = h_n(f) * L_f
where L_f is the log of the amplitude spectrum and L_f_smooth is the result of mean-filtering it; h_n(f) is the template:
The background estimate L_f_smooth is removed; the remainder is the significant log-amplitude spectrum. Adding the previously extracted phase spectrum, the inverse FFT and high-pass filtering enhancement are applied:
R_f = L_f - L_f_smooth
S_x = g_x * F^(-1)[exp(R_f + P_f)]^2
Step 4, enhancing the significance of the saliency maps according to the visual attention mechanism: the radiation feature saliency map S_radiation, the multi-order directional derivative feature saliency map S_MODD, and the spectral feature saliency map S_x are fused according to the fusion algorithm to obtain the final salient feature fusion map S_FSM:
Step 5, for the salient feature fusion map S_FSM, a CFAR (constant false alarm rate) method is adopted: traverse the gray level of each pixel of the global image to determine a background distribution model, estimate the parameters of the background probability density distribution function p(x), set the false alarm rate P_fa, and solve for the adaptive segmentation threshold T from the formula:
P_fa = ∫_T^(+∞) p(x) dx
Whether each pixel exceeds the adaptive segmentation threshold is judged pixel by pixel to detect dim small targets. The detection results are then clustered by pixel clustering; finally, isolated points and false targets caused by noise are screened out to obtain the target segmentation result.
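The clustering and isolated-point screening step can be sketched with connected-component labeling; the size limits used for screening are illustrative assumptions, not values from the patent:

```python
import numpy as np
from scipy import ndimage

def screen_detections(mask, min_pixels=2, max_pixels=80):
    """Cluster binary CFAR detections into connected components and screen out
    isolated points (too small) and clutter blobs (too large).
    Returns the screened mask and the centroid of each surviving target."""
    labels, n = ndimage.label(mask)          # pixel clustering
    out = np.zeros_like(mask, dtype=bool)
    centroids = []
    for i in range(1, n + 1):
        region = labels == i
        size = int(region.sum())
        if min_pixels <= size <= max_pixels: # reject isolated points / large clutter
            out |= region
            ys, xs = np.nonzero(region)
            centroids.append((float(ys.mean()), float(xs.mean())))
    return out, centroids
```

For small infrared targets (typically a handful of pixels), the lower bound removes single-pixel noise hits and the upper bound removes extended clutter regions.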
Aiming at the problem of detecting weak and small targets in a complex background in an infrared image, the invention provides a technology for detecting infrared weak and small targets in a complex background based on multi-feature fusion. After the original infrared image data are obtained, an improved high-pass filter is first adopted for enhancement preprocessing. The zero-line-of-sight temperature field radiation characteristic, the multi-order directional derivative (MODD) characteristic and the spectrum characteristic of the small infrared target are then respectively extracted, and the three types of characteristics are fused in a normalized manner based on a visual attention mechanism to generate a salient feature fusion image. This enhances target saliency, suppresses the background, and improves the capability of detecting weak and small infrared targets in a complex natural background. Finally, based on a CFAR algorithm, the image background probability distribution is fitted and adaptive threshold segmentation is performed, realizing weak and small target detection on the saliency-enhanced feature fusion image. The feature fusion mainly utilizes a top-down visual attention mechanism: after the zero-line-of-sight temperature field inversion radiation characteristic, multi-order directional derivative characteristic and spectrum characteristic saliency maps of the small infrared target are acquired one by one, the fused feature saliency map is generated by solving a weighted geometric mean.
The multi-feature fusion exploits the visual saliency advantages of each feature map: it extracts the best performance of each saliency map, suppresses background noise and false alarm information, and projects attention onto the salient region of interest where the weak target lies in the fused image. This prevents the later detection task from attending to the whole image, reduces the target detection false alarm rate, and improves detection performance. The whole multi-feature fusion infrared weak and small target detection algorithm comprises three parts. The first part preprocesses the original image and extracts the target zero-line-of-sight temperature field radiation characteristic, multi-order directional derivative characteristic and spectrum characteristic saliency maps. The second part performs top-down multi-feature fusion on the three characteristic saliency maps based on a visual attention mechanism to generate a salient feature fusion image, so as to suppress complex background noise, improve the signal-to-noise ratio of the image and improve the representation capability of target characteristics. The third part adopts a CFAR detection method to perform adaptive threshold segmentation on the generated fusion image and detect target pixels. Compared with the traditional single-feature threshold segmentation detection method for weak infrared targets, the multi-feature fusion technology does not require a preset segmentation threshold: it extracts the target saliency region by fusing multiple types of features of the weak and small target, and realizes detection through a CFAR adaptive segmentation threshold detection method.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (10)
1. The method for detecting the infrared dim small target in the complex background with multi-feature fusion is characterized by comprising the following steps of:
S1, inputting an original infrared small target image, preprocessing the input image with a high-pass filter, and respectively extracting a radiation characteristic saliency map, a multi-order directional derivative characteristic saliency map and a spectrum characteristic saliency map of the image;
S2, fusing the radiation characteristic saliency map, the multi-order directional derivative characteristic saliency map and the spectrum characteristic saliency map in a normalized characteristic fusion mode based on a visual attention mechanism to generate a salient feature fusion map;
S3, for the salient feature fusion map: firstly, a CFAR algorithm is adopted to detect weak and small targets by traversing each pixel gray level and judging whether it exceeds the adaptive segmentation threshold; then the detection results are clustered in a pixel clustering mode; finally, isolated points are screened out and false targets caused by noise are removed, obtaining the target segmentation result.
2. The method for detecting the infrared weak and small target in the complex background with multi-feature fusion as claimed in claim 1, wherein the method for extracting the radiation feature saliency map in step S1 is as follows:
1) correcting the original infrared small target image through environmental influence to obtain a temperature field two-dimensional distribution diagram, wherein the correction formula is as follows:
B(T_obs) = B(T_target)·ε·τ_atm + B(T_atm)·(1 − τ_atm)
2) according to the obtained temperature field two-dimensional distribution map, the zero-line-of-sight blackbody equivalent brightness temperature two-dimensional distribution map is obtained through calculation as follows:
3) obtaining a zero-line-of-sight temperature field inversion result T_target, namely the radiation characteristic saliency map, by applying the inverse Planck transformation to the zero-line-of-sight blackbody equivalent brightness temperature two-dimensional distribution map:
S_radiation = T_target = B^(-1)(L)
wherein B(T_obs) is the blackbody radiation at temperature T_obs, B(T_target) is the blackbody radiation corresponding to the target true temperature T_target, ε is the equivalent emissivity of the blackbody in the waveband of the thermal imager, B(T_atm)·(1 − τ_atm) is the path radiation superimposed in the observed value, and the parameter τ_atm is related to the environmental parameters.
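The radiometric correction and inverse Planck transformation of claim 2 can be sketched numerically as below. A single effective wavelength stands in for the thermal imager's full band response, and the defaults for ε, τ_atm and T_atm in `zero_range_temperature` are illustrative environmental parameters, not values from the patent.

```python
import numpy as np

C1 = 1.191042e-16   # 2*h*c^2  [W·m^2/sr]
C2 = 1.438777e-2    # h*c/k_B  [m·K]
LAM = 10e-6         # assumed effective wavelength: 10 um (LWIR band)

def planck(T):
    """Blackbody spectral radiance B(T) at the effective wavelength."""
    return C1 / (LAM**5 * (np.exp(C2 / (LAM * np.asarray(T, dtype=float))) - 1.0))

def inverse_planck(L):
    """Inverse Planck transformation B^(-1)(L): brightness temperature."""
    return C2 / (LAM * np.log(C1 / (LAM**5 * np.asarray(L, dtype=float)) + 1.0))

def zero_range_temperature(T_obs, T_atm=288.0, eps=0.95, tau_atm=0.8):
    """Invert B(T_obs) = B(T_target)·eps·tau_atm + B(T_atm)·(1 - tau_atm)
    for the zero-line-of-sight target temperature T_target."""
    L_target = (planck(T_obs) - planck(T_atm) * (1.0 - tau_atm)) / (eps * tau_atm)
    return inverse_planck(L_target)
```

Applied pixel-wise to the observed brightness-temperature map, this yields the radiation characteristic saliency map S_radiation = T_target.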
3. The method for detecting infrared weak and small objects in a complex background with multi-feature fusion as claimed in claim 2, wherein the method for extracting the multi-order directional derivative feature saliency map in step S1 comprises:
a) extracting the multi-order directional derivative features of each pixel point (x0, y0) on the original infrared small target image along the direction vector, as follows:
b) calculating according to least-squares surface fitting and the orthogonality of the polynomial:
obtaining:
three weight coefficient matrices are thus obtained as follows:
wherein α is the angle between the direction vector and the x-axis, β is the angle between the direction vector and the y-axis, and K_i (i = 4, 5, 6) represents a weight coefficient; the parameters r and c represent the buffer radius in the x-axis and y-axis directions, respectively, I(x+r, y+c) represents the pixel value at point (x+r, y+c), and P_i(r, c) represents the direction vector of the point (r, c);
c) correcting the obtained multi-order directional derivative characteristic map: the pixel values in the characteristic map are traversed and any value greater than zero is set to zero; the characteristic map is normalized; the whole image is processed with a filtering window of size 3×3; and the image is cross-fused on each directional channel by point-multiplying mutually orthogonal characteristic map vectors, thereby suppressing background clutter noise and enhancing weak and small targets, giving the multi-order directional derivative characteristic saliency map as follows:
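Step c) can be sketched as follows. Simple finite-difference kernels stand in for the patent's least-squares surface-fit derivative estimates, and four directions forming two orthogonal pairs are an assumption; `modd_saliency` is an illustrative name.

```python
import numpy as np
from scipy.ndimage import correlate, uniform_filter

def modd_saliency(img, n_dirs=4):
    """Directional second-derivative saliency (sketch of step c)."""
    img = img.astype(np.float64)
    # second-derivative building blocks via finite differences
    fxx = correlate(img, np.array([[1.0, -2.0, 1.0]]))
    fyy = correlate(img, np.array([[1.0], [-2.0], [1.0]]))
    fxy = correlate(img, 0.25 * np.array([[ 1.0, 0.0, -1.0],
                                          [ 0.0, 0.0,  0.0],
                                          [-1.0, 0.0,  1.0]]))
    maps = []
    for k in range(n_dirs):
        a = np.pi * k / n_dirs                   # direction angle alpha
        dx, dy = np.cos(a), np.sin(a)
        # second derivative along (dx, dy)
        d2 = dx * dx * fxx + 2.0 * dx * dy * fxy + dy * dy * fyy
        d2 = np.minimum(d2, 0.0)                 # values greater than zero set to zero
        m = -d2                                  # bright blobs give negative d2
        m = (m - m.min()) / (m.max() - m.min() + 1e-12)   # normalization
        maps.append(uniform_filter(m, size=3))   # 3x3 filtering window
    # cross-fuse: point-multiply mutually orthogonal direction maps
    s = np.zeros_like(img)
    for k in range(n_dirs // 2):
        s += maps[k] * maps[k + n_dirs // 2]
    return s
```

A small Gaussian-shaped bright target produces a strong negative second derivative in every direction, so the product of orthogonal maps peaks at the target while elongated background edges, strong in only one direction, are suppressed.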
4. The method for detecting the infrared dim target in the complex background with the multi-feature fusion as claimed in claim 3, wherein the method for extracting the spectral feature saliency map in step S1 is as follows:
i) in a frequency domain, carrying out Fourier transform on the preprocessed original infrared small target image to obtain:
I_F = F(I_orig)
II) separating the amplitude spectrum and the phase spectrum of the original data from the frequency spectrum image to obtain:
A_f = Abs(I_F)
P_f = Angle(I_F)
III) a mean filter h_n(f) is used to fit the background amplitude spectrum of the image, and the template size is adaptively adjusted according to the size of the target in the image domain, obtaining:
L_f = log(A_f)
L_f_smooth = h_n(f) * L_f
wherein L_f is the image obtained by taking the logarithm of the amplitude spectrum information, L_f_smooth is the mean-filtered result on this basis, and h_n(f) is the template matrix; the calculation formula of h_n(f) is as follows:
IV) removing the background estimate to obtain the salient log-amplitude spectrum:
R_f = L_f - L_f_smooth
V) adding the salient log-amplitude spectrum to the phase spectrum in step II) and carrying out IFFT transformation and high-pass filtering enhancement to obtain the spectrum characteristic saliency map:
S_x = g_x * |F^(-1)[exp(R_f + j·P_f)]|^2.
5. The method for detecting infrared weak and small targets in a complex background with multi-feature fusion as claimed in claim 4, wherein the calculation formula of the salient feature fusion map in step S2 is:
wherein S_radiation, S_MODD and S_x respectively represent the radiation characteristic saliency map, the multi-order directional derivative characteristic saliency map and the spectrum characteristic saliency map.
6. The method for detecting infrared dim targets in complex background with multi-feature fusion according to claim 5, wherein the calculation formula of the adaptive segmentation threshold in step S3 is as follows:
where T is the adaptive segmentation threshold, P_fa is the set false alarm rate, and P(x) is the background probability density distribution function.
7. A multi-feature fusion system for detecting infrared weak and small targets in a complex background, characterized by comprising: a characteristic saliency map extraction module, a saliency characteristic fusion module and a detection result output module, wherein:
the feature saliency map extraction module: inputting an original infrared small target image, preprocessing the input image by adopting a high-pass filter, and respectively extracting a radiation characteristic saliency map, a multi-order directional derivative characteristic saliency map and a spectrum characteristic saliency map of the image;
the saliency characteristic fusion module fuses a radiation characteristic saliency map, a multi-order directional derivative characteristic saliency map and a spectrum characteristic saliency map by adopting a normalized characteristic fusion mode based on a visual attention mechanism to generate a saliency characteristic fusion map;
the detection result output module: for the salient feature fusion map, firstly, a CFAR algorithm is adopted to traverse each pixel gray level and judge whether it exceeds the adaptive segmentation threshold, realizing detection of weak and small targets; then the detection results are clustered in a pixel clustering mode; finally, isolated points are screened out and false targets caused by noise are removed, obtaining the target segmentation result.
8. The system for detecting infrared dim targets in complex background with multi-feature fusion as claimed in claim 7, wherein the method for extracting the saliency map of radiation features is as follows:
1) correcting the original infrared small target image through environmental influence to obtain a temperature field two-dimensional distribution diagram, wherein the correction formula is as follows:
B(T_obs) = B(T_target)·ε·τ_atm + B(T_atm)·(1 − τ_atm)
2) according to the obtained temperature field two-dimensional distribution map, the zero-line-of-sight blackbody equivalent brightness temperature two-dimensional distribution map is obtained through calculation as follows:
3) obtaining a zero-line-of-sight temperature field inversion result T_target, namely the radiation characteristic saliency map, by applying the inverse Planck transformation to the zero-line-of-sight blackbody equivalent brightness temperature two-dimensional distribution map:
S_radiation = T_target = B^(-1)(L)
wherein B(T_obs) is the blackbody radiation at temperature T_obs, B(T_target) is the blackbody radiation corresponding to the target true temperature T_target, ε is the equivalent emissivity of the blackbody in the waveband of the thermal imager, B(T_atm)·(1 − τ_atm) is the path radiation superimposed in the observed value, and the parameter τ_atm is related to the environmental parameters;
the extraction method of the multi-order directional derivative characteristic saliency map comprises the following steps:
a) extracting the multi-order directional derivative features of each pixel point (x0, y0) on the original infrared small target image along the direction vector, as follows:
b) calculating according to least-squares surface fitting and the orthogonality of the polynomial:
obtaining:
three weight coefficient matrices are thus obtained as follows:
wherein α is the angle between the direction vector and the x-axis, β is the angle between the direction vector and the y-axis, and K_i (i = 4, 5, 6) represents a weight coefficient; the parameters r and c represent the buffer radius in the x-axis and y-axis directions, respectively, I(x+r, y+c) represents the pixel value at point (x+r, y+c), and P_i(r, c) represents the direction vector of the point (r, c);
c) correcting the obtained multi-order directional derivative characteristic map: the pixel values in the characteristic map are traversed and any value greater than zero is set to zero; the characteristic map is normalized; the whole image is processed with a filtering window of size 3×3; and the image is cross-fused on each directional channel by point-multiplying mutually orthogonal characteristic map vectors, thereby suppressing background clutter noise and enhancing weak and small targets, giving the multi-order directional derivative characteristic saliency map as follows:
wherein g represents the number of groups of orthogonal bases, S_g represents a directional characteristic map of the image, S_g⊥ represents the characteristic map orthogonal to S_g, and N(·) represents a normalization function;
the extraction method of the spectral feature saliency map comprises the following steps:
i) in a frequency domain, carrying out Fourier transform on the preprocessed original infrared small target image to obtain:
I_F = F(I_orig)
II) separating the amplitude spectrum and the phase spectrum of the original data from the frequency spectrum image to obtain:
A_f = Abs(I_F)
P_f = Angle(I_F)
III) a mean filter h_n(f) is used to fit the background amplitude spectrum of the image, and the template size is adaptively adjusted according to the size of the target in the image domain, obtaining:
L_f = log(A_f)
L_f_smooth = h_n(f) * L_f
wherein L_f is the image obtained by taking the logarithm of the amplitude spectrum information, L_f_smooth is the mean-filtered result on this basis, and h_n(f) is the template matrix; the calculation formula of h_n(f) is as follows:
IV) removing the background estimate to obtain the salient log-amplitude spectrum:
R_f = L_f - L_f_smooth
V) adding the salient log-amplitude spectrum to the phase spectrum in step II) and carrying out IFFT transformation and high-pass filtering enhancement to obtain the spectrum characteristic saliency map:
S_x = g_x * |F^(-1)[exp(R_f + j·P_f)]|^2.
9. The system for detecting infrared weak and small targets in a complex background with multi-feature fusion as claimed in claim 8, wherein the calculation formula of the salient feature fusion map is as follows:
wherein S_radiation, S_MODD and S_x respectively represent the radiation characteristic saliency map, the multi-order directional derivative characteristic saliency map and the spectrum characteristic saliency map.
10. The system for detecting infrared dim targets in complex background with multi-feature fusion according to claim 9, wherein the calculation formula of the adaptive segmentation threshold is:
where T is the adaptive segmentation threshold, P_fa is the set false alarm rate, and P(x) is the background probability density distribution function.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111281572.8A CN113935984A (en) | 2021-11-01 | 2021-11-01 | Multi-feature fusion method and system for detecting infrared dim small target in complex background |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113935984A true CN113935984A (en) | 2022-01-14 |
Family
ID=79285325
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111281572.8A Pending CN113935984A (en) | 2021-11-01 | 2021-11-01 | Multi-feature fusion method and system for detecting infrared dim small target in complex background |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113935984A (en) |
2021-11-01: application CN202111281572.8A filed in China; published as CN113935984A; status: active (Pending)
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114463365A (en) * | 2022-04-12 | 2022-05-10 | 中国空气动力研究与发展中心计算空气动力研究所 | Infrared weak and small target segmentation method, device and medium |
CN114463619A (en) * | 2022-04-12 | 2022-05-10 | 西北工业大学 | Infrared dim target detection method based on integrated fusion features |
CN114463619B (en) * | 2022-04-12 | 2022-07-08 | 西北工业大学 | Infrared dim target detection method based on integrated fusion features |
CN115631119A (en) * | 2022-09-08 | 2023-01-20 | 江苏北方湖光光电有限公司 | Image fusion method for improving target significance |
CN117011196A (en) * | 2023-08-10 | 2023-11-07 | 哈尔滨工业大学 | Infrared small target detection method and system based on combined filtering optimization |
CN117011196B (en) * | 2023-08-10 | 2024-04-19 | 哈尔滨工业大学 | Infrared small target detection method and system based on combined filtering optimization |
CN118037847A (en) * | 2024-04-15 | 2024-05-14 | 北京航空航天大学 | Method and device for rapidly positioning region of interest based on frequency domain difference |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||