CN113935984A - Multi-feature fusion method and system for detecting infrared dim small target in complex background - Google Patents


Info

Publication number
CN113935984A
CN113935984A
Authority
CN
China
Prior art keywords
characteristic
target
image
saliency map
radiation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111281572.8A
Other languages
Chinese (zh)
Inventor
张程
曹菡
张玉营
姚佰栋
许涛
刘静寒
梁之勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CETC 38 Research Institute
Original Assignee
CETC 38 Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CETC 38 Research Institute filed Critical CETC 38 Research Institute
Priority to CN202111281572.8A
Publication of CN113935984A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/23 - Clustering techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/25 - Fusion techniques
    • G06F18/253 - Fusion techniques of extracted features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/10 - Image enhancement or restoration using non-spatial domain filtering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/20 - Image enhancement or restoration using local operators
    • G06T5/30 - Erosion or dilatation, e.g. thinning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/70 - Denoising; Smoothing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/136 - Segmentation; Edge detection involving thresholding
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10048 - Infrared image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20004 - Adaptive image processing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20036 - Morphological image processing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20048 - Transform domain processing
    • G06T2207/20056 - Discrete and fast Fourier transform, [DFT, FFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

A multi-feature fusion method and system for detecting infrared dim small targets in a complex background belong to the technical field of infrared target detection and identification, and solve the prior-art problems of poor algorithm adaptability and a large number of false alarms that degrade detection precision when detecting infrared dim small targets in complex backgrounds. The method first extracts radiation features, multi-order directional derivative features and spectral features, which respectively characterize the radiation, structural and intensity properties of a dim small target, fuses the features, and constructs a feature saliency map that enhances the target while suppressing background noise; it then calculates a segmentation threshold for the image with a CFAR adaptive detection method to obtain a binary segmentation result, and applies morphological processing to screen out false targets caused by isolated points and noise, yielding the final dim small target detection result. The algorithm of the invention has low complexity, adapts well to complex backgrounds, achieves high detection precision, and is convenient to implement in engineering.

Description

Multi-feature fusion method and system for detecting infrared dim small target in complex background
Technical Field
The invention belongs to the technical field of infrared target detection and identification, and relates to a multi-feature fusion method and system for detecting infrared dim and small targets in a complex background.
Background
In traditional single-feature threshold-segmentation detection algorithms for dim small targets in infrared images, the accuracy of the detection result depends on the intensity and number of target pixels. Because small targets occupy a very low proportion of pixels in an infrared image, they are often submerged by background and noise; directly applying a preset segmentation threshold for target detection processes only a single characteristic, yielding a low target detection rate and a high false alarm rate.
At present, infrared target detection methods based on multi-feature fusion, both domestic and international, have produced some usable algorithm models, but several important problems remain unsolved. First, most existing multi-feature-fusion infrared target detection algorithms focus mainly on medium-sized or larger targets and extract features poorly for targets with low target-background contrast and few pixels, making a high target detection rate difficult to obtain. Second, existing algorithms adapt poorly to detecting dim small infrared targets in complex backgrounds, so the detection results contain a large number of false alarms that degrade their precision.
The thesis "Research on infrared small target detection algorithms in complex backgrounds" (Northeastern University, Tian Wei, June 2012) analyzes the three elements of target, background, and noise in infrared images around the problem of infrared small-target detection, extracts a quantitative description of infrared-image region complexity by analyzing infrared small-target images, analyzes and discusses the variance-weighted information entropy, gradient-direction features, and local-contrast features of the image, and, because single-frame detection can hardly guarantee the detection result, proposes a sequence-image detection algorithm based on three-dimensional wavelet transform. However, this scheme still suffers from poor algorithm adaptability and low accuracy when detecting dim small infrared targets in complex backgrounds, with a large number of false alarms in the detection result degrading its precision.
Disclosure of Invention
The invention aims to provide a multi-feature fusion method and system for detecting infrared dim small targets in a complex background, so as to solve the prior-art problems that algorithm adaptability is poor when detecting such targets and that a large number of false alarms in the detection result degrade its precision.
To this end, a technology for detecting infrared dim small targets in complex backgrounds based on multi-feature fusion is provided, improving the interpretability of dim small targets in infrared images and the accuracy of the detection result.
The invention solves the technical problems through the following technical scheme:
the method for detecting the infrared dim small target in the complex background with multi-feature fusion comprises the following steps:
s1, inputting an original infrared small target image, preprocessing the input image by adopting a high-pass filter, and respectively extracting a radiation characteristic saliency map, a multi-order directional derivative characteristic saliency map and a spectrum characteristic saliency map of the image;
s2, fusing the radiation characteristic saliency map, the multi-order directional derivative characteristic saliency map and the spectrum characteristic saliency map by adopting a normalized characteristic fusion mode based on a visual attention mechanism to generate a saliency characteristic fusion map;
s3, for the saliency feature fusion map, firstly, the CFAR algorithm is adopted to detect weak and small targets by traversing each pixel gray level and judging whether the pixel gray level exceeds the self-adaptive segmentation threshold value, then, the detection results are clustered by using a pixel clustering mode, finally, isolated points are screened out, false targets caused by noise are screened out, and target segmentation results are obtained.
In the technical scheme provided by the invention, radiation features, multi-order directional derivative features and spectral features, which respectively characterize the radiation, structural and intensity properties of a dim small target, are first extracted and fused to construct a feature saliency map that enhances the target while suppressing background noise; a segmentation threshold for the image is then calculated with the CFAR adaptive detection method to obtain a binary segmentation result, and morphological processing screens out false targets caused by isolated points and noise, yielding the final dim small target detection result. The algorithm of the invention has low complexity, adapts well to complex backgrounds, achieves high detection precision, and is convenient to implement in engineering.
As a further improvement of the technical solution of the present invention, the method for extracting the radiation characteristic saliency map in step S1 includes:
1) Correct the original infrared small target image for environmental influence to obtain a two-dimensional temperature-field distribution map, with the correction formula:
B(T_obs) = B(T_target)·ε·τ_atm + B(T_atm)·(1 − τ_atm)
2) From the obtained temperature-field distribution map, calculate the zero-range blackbody equivalent brightness-temperature two-dimensional distribution map:
L = [B(T_obs) − B(T_atm)·(1 − τ_atm)] / (ε·τ_atm)
3) Apply an inverse Planck transform to the zero-range blackbody equivalent brightness-temperature map to obtain the zero-range temperature-field inversion result T_target, which is the radiation feature saliency map:
S_radiation = T_target = B⁻¹(L)
wherein B(T_obs) is the blackbody radiance at temperature T_obs, B(T_target) is the blackbody radiance corresponding to the true target temperature T_target, ε is the equivalent blackbody emissivity over the thermal imager's waveband, B(T_atm)·(1 − τ_atm) is the path radiance superimposed on the observed value, and the parameter τ_atm (atmospheric transmittance) is related to the environmental parameters.
As a further improvement of the technical solution of the present invention, the method for extracting the multiple-order directional derivative feature saliency map in step S1 includes:
a) For each pixel point (x0, y0) of the original infrared small target image, extract the second-order directional derivative along the direction vector l:
∂²I/∂l² = 2·K4·cos²α + 2·K5·cosα·cosβ + 2·K6·cos²β
b) Calculate the coefficients from a least-squares surface fit, using the orthogonality of the polynomial basis:
Σ_r Σ_c P_i(r,c)·P_j(r,c) = 0 (i ≠ j)
obtaining:
K_i(x, y) = Σ_r Σ_c P_i(r,c)·I(x+r, y+c) / Σ_r Σ_c P_i²(r,c), i = 4, 5, 6
The three weight coefficient matrices are thus obtained as:
W_i(r,c) = P_i(r,c) / Σ_r Σ_c P_i²(r,c), i = 4, 5, 6
wherein α is the angle between the direction vector l and the x-axis, β is the angle between the direction vector l and the y-axis, and K_i (i = 4, 5, 6) represents a weight coefficient; the parameters r, c represent the buffer radius along the x-axis and y-axis directions respectively, I(x+r, y+c) represents the pixel value at point (x+r, y+c), and P_i(r,c) represents the value of the i-th basis polynomial at point (r,c);
c) Correct the obtained multi-order directional derivative feature maps: traverse the pixel values in each feature map and set any value greater than zero to zero; normalize the feature maps; process the image globally with a filtering window of size 3 × 3; then cross-fuse the maps on the direction channels by taking the element-wise product of mutually orthogonal feature map vectors, suppressing background clutter noise while enhancing dim small targets, to obtain the multi-order directional derivative feature saliency map:
S_MODD = Σ_g N(S_g) ⊙ N(ΔS_g)
wherein g represents the number of groups of orthogonal bases, S_g represents the feature map of one direction, ΔS_g represents the feature map of the direction orthogonal to S_g, N(·) represents the normalization function, and ⊙ denotes the element-wise product.
As a further improvement of the technical solution of the present invention, the method for extracting the spectral feature saliency map described in step S1 includes:
i) in a frequency domain, carrying out Fourier transform on the preprocessed original infrared small target image to obtain:
I_F = F(I_orig)
II) Separate the amplitude spectrum and the phase spectrum of the original data from the spectrum image:
A_f = Abs(I_F)
P_f = Angle(I_F)
III) Fit the background amplitude spectrum of the image with a mean filter h_n(f), adaptively adjusting the template size according to the size of the image-domain target:
L_f = log(A_f)
L_f_smooth = h_n(f) * L_f
wherein L_f is the image obtained by taking the log of the amplitude-spectrum information, L_f_smooth is the mean-filtered result, and h_n(f) is the template matrix, computed as:
h_n(f) = (1/n²) · [n × n matrix of ones]
IV) Remove the background estimate to obtain the significant log-amplitude spectrum:
R_f = L_f − L_f_smooth
V) Combine the significant log-amplitude spectrum with the phase spectrum from step II), apply the inverse FFT, and enhance with high-pass filtering to obtain the spectral feature saliency map:
S_x = g_x * F⁻¹[exp(R_f + i·P_f)]²
as a further improvement of the technical solution of the present invention, the calculation formula of the significant feature fusion map described in step S2 is:
Figure BDA0003331259280000051
wherein S isradiation、SMODDAnd SxRespectively representing a radiation characteristic saliency map, a multi-order directional derivative characteristic saliency map and a spectrum characteristic saliency map.
As a further improvement of the technical solution of the present invention, the adaptive segmentation threshold in step S3 is obtained from:
P_fa = ∫_T^(+∞) p(x) dx
where T is the adaptive segmentation threshold, P_fa is the set false alarm rate, and p(x) is the background probability density distribution function.
A multi-feature fusion system for detecting infrared dim small targets in a complex background includes: a feature saliency map extraction module, a saliency feature fusion module, and a detection result output module,
the feature saliency map extraction module: inputting an original infrared small target image, preprocessing the input image by adopting a high-pass filter, and respectively extracting a radiation characteristic saliency map, a multi-order directional derivative characteristic saliency map and a spectrum characteristic saliency map of the image;
the saliency characteristic fusion module fuses a radiation characteristic saliency map, a multi-order directional derivative characteristic saliency map and a spectrum characteristic saliency map by adopting a normalized characteristic fusion mode based on a visual attention mechanism to generate a saliency characteristic fusion map;
the detection result output module: for the saliency feature fusion map, firstly, a CFAR algorithm is adopted to traverse each pixel gray level and judge whether the pixel gray level exceeds a self-adaptive segmentation threshold value to realize the detection of weak and small targets, then, a pixel clustering mode is used to cluster the detection results, finally, isolated points are screened out, false targets caused by noise are screened out, and target segmentation results are obtained.
As a further improvement of the technical scheme of the invention, the extraction method of the radiation characteristic saliency map comprises the following steps:
1) Correct the original infrared small target image for environmental influence to obtain a two-dimensional temperature-field distribution map, with the correction formula:
B(T_obs) = B(T_target)·ε·τ_atm + B(T_atm)·(1 − τ_atm)
2) From the obtained temperature-field distribution map, calculate the zero-range blackbody equivalent brightness-temperature two-dimensional distribution map:
L = [B(T_obs) − B(T_atm)·(1 − τ_atm)] / (ε·τ_atm)
3) Apply an inverse Planck transform to the zero-range blackbody equivalent brightness-temperature map to obtain the zero-range temperature-field inversion result T_target, which is the radiation feature saliency map:
S_radiation = T_target = B⁻¹(L)
wherein B(T_obs) is the blackbody radiance at temperature T_obs, B(T_target) is the blackbody radiance corresponding to the true target temperature T_target, ε is the equivalent blackbody emissivity over the thermal imager's waveband, B(T_atm)·(1 − τ_atm) is the path radiance superimposed on the observed value, and the parameter τ_atm (atmospheric transmittance) is related to the environmental parameters;
the extraction method of the multi-order directional derivative characteristic saliency map comprises the following steps:
a) For each pixel point (x0, y0) of the original infrared small target image, extract the second-order directional derivative along the direction vector l:
∂²I/∂l² = 2·K4·cos²α + 2·K5·cosα·cosβ + 2·K6·cos²β
b) Calculate the coefficients from a least-squares surface fit, using the orthogonality of the polynomial basis:
Σ_r Σ_c P_i(r,c)·P_j(r,c) = 0 (i ≠ j)
obtaining:
K_i(x, y) = Σ_r Σ_c P_i(r,c)·I(x+r, y+c) / Σ_r Σ_c P_i²(r,c), i = 4, 5, 6
The three weight coefficient matrices are thus obtained as:
W_i(r,c) = P_i(r,c) / Σ_r Σ_c P_i²(r,c), i = 4, 5, 6
wherein α is the angle between the direction vector l and the x-axis, β is the angle between the direction vector l and the y-axis, and K_i (i = 4, 5, 6) represents a weight coefficient; the parameters r, c represent the buffer radius along the x-axis and y-axis directions respectively, I(x+r, y+c) represents the pixel value at point (x+r, y+c), and P_i(r,c) represents the value of the i-th basis polynomial at point (r,c);
c) Correct the obtained multi-order directional derivative feature maps: traverse the pixel values in each feature map and set any value greater than zero to zero; normalize the feature maps; process the image globally with a filtering window of size 3 × 3; then cross-fuse the maps on the direction channels by taking the element-wise product of mutually orthogonal feature map vectors, suppressing background clutter noise while enhancing dim small targets, to obtain the multi-order directional derivative feature saliency map:
S_MODD = Σ_g N(S_g) ⊙ N(ΔS_g)
wherein g represents the number of groups of orthogonal bases, S_g represents the feature map of one direction, ΔS_g represents the feature map of the direction orthogonal to S_g, N(·) represents the normalization function, and ⊙ denotes the element-wise product;
the extraction method of the spectral feature saliency map comprises the following steps:
i) in a frequency domain, carrying out Fourier transform on the preprocessed original infrared small target image to obtain:
I_F = F(I_orig)
II) Separate the amplitude spectrum and the phase spectrum of the original data from the spectrum image:
A_f = Abs(I_F)
P_f = Angle(I_F)
III) Fit the background amplitude spectrum of the image with a mean filter h_n(f), adaptively adjusting the template size according to the size of the image-domain target:
L_f = log(A_f)
L_f_smooth = h_n(f) * L_f
wherein L_f is the image obtained by taking the log of the amplitude-spectrum information, L_f_smooth is the mean-filtered result, and h_n(f) is the template matrix, computed as:
h_n(f) = (1/n²) · [n × n matrix of ones]
IV) Remove the background estimate to obtain the significant log-amplitude spectrum:
R_f = L_f − L_f_smooth
V) Combine the significant log-amplitude spectrum with the phase spectrum from step II), apply the inverse FFT, and enhance with high-pass filtering to obtain the spectral feature saliency map:
S_x = g_x * F⁻¹[exp(R_f + i·P_f)]²
as a further improvement of the technical solution of the present invention, the calculation formula of the saliency feature fusion map is:
Figure BDA0003331259280000082
wherein S isradiation、SMODDAnd SxRespectively representing a radiation characteristic saliency map, a multi-order directional derivative characteristic saliency map and a spectrum characteristic saliency map.
As a further improvement of the technical solution of the present invention, the adaptive segmentation threshold is obtained from:
P_fa = ∫_T^(+∞) p(x) dx
where T is the adaptive segmentation threshold, P_fa is the set false alarm rate, and p(x) is the background probability density distribution function.
The invention has the advantages that:
the technical scheme provided by the invention comprises the steps of firstly extracting radiation characteristics, multi-order directional derivative characteristics and spectrum characteristics respectively representing the radiation characteristics, structural characteristics and intensity characteristics of a weak and small target, fusing a plurality of characteristics, constructing a characteristic saliency map, and inhibiting background noise while enhancing the target; calculating a segmentation threshold of the image by using a CFAR self-adaptive detection method to obtain a binary segmentation result, and performing morphological processing to screen out false targets caused by isolated points and noise to obtain a final weak and small target detection result; the algorithm of the invention has low complexity, strong self-adaption for complex background, higher detection precision and convenient engineering realization.
Drawings
FIG. 1 is a flowchart of a method for detecting infrared dim targets in a complex background with multi-feature fusion according to a first embodiment of the present invention;
FIG. 2 is a flow chart of the extraction of the saliency map of the spectral features according to the first embodiment of the present invention;
fig. 3 is a schematic diagram of weak and small target detection according to a first embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the embodiments of the present invention, and it is obvious that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The technical scheme of the invention is further described by combining the drawings and the specific embodiments in the specification:
example one
As shown in fig. 1, a method for detecting infrared weak and small targets in a complex background with multi-feature fusion includes the following steps:
step 1, correcting the original infrared digital image for environmental influence through the following calculation formula to obtain a zero-line-of-sight blackbody equivalent brightness and temperature two-dimensional distribution map:
Figure BDA0003331259280000091
B(Tobs) Is temperature equal to TobsThe black body radiation, namely the brightness and the temperature observed by the thermal imager can be obtained according to the observation result of the instrument; b (T)target) Is a target true temperature TtargetCorrespond toThe black body radiation temperature is obtained according to an instrument parameter table; epsilon is the equivalent emissivity of the black body in the wave band of the thermal imager; b (T)atm)·(1-τatm) For range radiation superimposed in the observed value, τa mAnd the environment parameters such as observation distance, atmospheric temperature and humidity and the like.
By inverse Planck transformation, the inversion result T of the zero-line-of-sight temperature field can be obtainedtargetI.e. significant radiation characteristic Sradiation
Sradiation=Ttarget=B-1(L)
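Step 1 can be sketched numerically as follows. This is an illustrative sketch only, not the patent's implementation: the 8-12 um band-radiance model, the 200-400 K lookup grid, and the function names `band_radiance` and `radiation_saliency` are all assumptions.

```python
import numpy as np

# Assumed band-radiance model B(T): Planck spectral radiance summed over an
# 8-12 um imager band (the patent does not specify the band or B itself).
def band_radiance(T, wavelengths_um=np.linspace(8.0, 12.0, 50)):
    h, c, k = 6.62607e-34, 2.9979e8, 1.380649e-23
    lam = wavelengths_um * 1e-6                       # wavelengths in metres
    T = np.asarray(T, dtype=float)[..., None]         # broadcast T over the band
    spectral = (2 * h * c**2 / lam**5) / (np.exp(h * c / (lam * k * T)) - 1.0)
    return spectral.sum(axis=-1) * (lam[1] - lam[0])  # crude band integration

def radiation_saliency(T_obs, T_atm, emissivity, tau_atm):
    """Invert B(T_obs) = B(T_target)*eps*tau + B(T_atm)*(1 - tau), then apply
    a numerical inverse Planck transform (lookup table) to get S_radiation."""
    L = (band_radiance(T_obs) - band_radiance(T_atm) * (1.0 - tau_atm)) \
        / (emissivity * tau_atm)                      # zero-range radiance map
    T_grid = np.linspace(200.0, 400.0, 801)           # tabulate B over 200-400 K
    return np.interp(L, band_radiance(T_grid), T_grid)  # S_radiation = T_target
```

As a sanity check on the design: with ε = 1 and τ_atm = 1 the correction is the identity, so the inversion should return the observed temperatures to within the lookup-grid resolution.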
Step 2: On the quantized original infrared digital image, extract for each pixel point (x0, y0) the second-order directional derivative along the direction vector l:
∂²I/∂l² = 2·K4·cos²α + 2·K5·cosα·cosβ + 2·K6·cos²β
where α is the angle between the vector l and the x-axis, and β is the angle between l and the y-axis. The coefficients K_i (i = 4, 5, 6) are related to the pixel coordinates of the input image and are denoted K_i(x, y); they are calculated from a least-squares surface fit using the orthogonality of the polynomial basis:
Σ_r Σ_c P_i(r,c)·P_j(r,c) = 0 (i ≠ j)
which yields:
K_i(x, y) = Σ_r Σ_c P_i(r,c)·I(x+r, y+c) / Σ_r Σ_c P_i²(r,c), i = 4, 5, 6
The three weight templates are then:
W_i(r,c) = P_i(r,c) / Σ_r Σ_c P_i²(r,c), i = 4, 5, 6
correcting the obtained direction characteristic diagram, traversing pixel values in the direction characteristic diagram, and setting the direction characteristic diagram to be zero if the direction characteristic diagram is larger than zero; carrying out normalization processing on the image, and taking a value interval [0,1 ]; the image is globally processed using a filter window of size 3 x 3.
And performing cross fusion on the images on all direction channels to obtain a direction characteristic diagram:
Figure BDA0003331259280000111
wherein g represents the number of sets of orthogonal bases; sgRepresents a characteristic diagram of the direction,. DELTA.SgIs represented by the formulagOrthogonal directional characteristic diagram, N (-) for normalization, SMODDThe fused feature map is shown. And performing point multiplication on mutually orthogonal feature map vectors.
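A minimal sketch of step 2 follows. The 5×5 facet-model polynomial bases (P4 = r² − 2, P5 = rc, P6 = c² − 2) and the four direction channels at 0°, 45°, 90°, 135° are assumptions standing in for the patent's unreproduced equation figures, not the claimed templates themselves.

```python
import numpy as np
from scipy.ndimage import correlate

# Assumed 5x5 discrete orthogonal polynomial bases of the facet model.
r, c = np.meshgrid(np.arange(-2, 3), np.arange(-2, 3), indexing="ij")
P = {4: r**2 - 2, 5: r * c, 6: c**2 - 2}
W = {i: p / np.sum(p.astype(float)**2) for i, p in P.items()}  # weight templates

def modd_saliency(img, angles_deg=(0, 45, 90, 135)):
    """Second-order directional-derivative saliency: estimate K4, K5, K6 by
    correlating with the facet weight templates, build one map per direction,
    keep only negative responses (bright peaks), normalize to [0, 1], and
    fuse each direction with its orthogonal partner by element-wise product."""
    img = img.astype(float)
    K4, K5, K6 = (correlate(img, W[i], mode="nearest") for i in (4, 5, 6))
    maps = []
    for a in np.deg2rad(angles_deg):
        ca, sa = np.cos(a), np.sin(a)
        d2 = 2 * (K4 * ca**2 + K5 * ca * sa + K6 * sa**2)   # d^2 I / d l^2
        d2 = np.where(d2 > 0, 0.0, -d2)                      # clip, flip sign
        maps.append(d2 / d2.max() if d2.max() > 0 else d2)   # normalize
    half = len(maps) // 2                # pairs: (0, 90) and (45, 135) degrees
    return sum(maps[g] * maps[g + half] for g in range(half))
```

The product of orthogonal-direction maps suppresses line-like clutter, which responds strongly in only one direction, while a point-like target responds in both.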
Step 3: Obtain the spectral feature saliency map of the image. In the frequency domain, apply the Fourier transform to the original image:
I_F = F(I_orig)
Separate the amplitude-spectrum information and phase-spectrum information of the original data from the spectrum image:
A_f = Abs(I_F)
P_f = Angle(I_F)
using mean filters hn(f) The background amplitude spectrum of the image is fitted, and the size of the template can be adaptively adjusted according to the size of the image domain target.
L(f)=log(Af)
Lf_smooth=hn(f)*Lf
Wherein L isfFor obtaining the image after log of the amplitude spectrum information, Lf_smoothThe result after mean filtering is taken as the basis. h isn(f) As a template:
Figure BDA0003331259280000113
removing the background portion estimate, i.e. Lf_smoothAnd the rest part is a significant log-amplitude spectrum, and IFFT transformation and high-pass filtering enhancement are carried out by adding the previously extracted phase spectrum:
Rf=Lf-Lf_smooth
Sx=gx*F-1[exp(Rf+Pf)]2
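Step 3 can be sketched as below. A 3×3 mean template for h_n(f) and a Gaussian for the final enhancement filter g_x are assumptions (the spectral-residual literature commonly uses a Gaussian there; the patent calls for a high-pass enhancement).

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_saliency(img, n=3, sigma=2.5):
    """Spectral-feature saliency via the amplitude-spectrum residual:
    separate amplitude and phase, smooth the log-amplitude spectrum with an
    n x n mean filter h_n(f) to estimate the background, subtract it, and
    transform back with the original phase."""
    F = np.fft.fft2(img.astype(float))
    A_f = np.abs(F)                                # amplitude spectrum
    P_f = np.angle(F)                              # phase spectrum
    L_f = np.log(A_f + 1e-12)                      # log amplitude (eps avoids log 0)
    L_smooth = uniform_filter(L_f, size=n)         # h_n(f) * L_f
    R_f = L_f - L_smooth                           # significant log-amplitude spectrum
    S = np.abs(np.fft.ifft2(np.exp(R_f + 1j * P_f))) ** 2
    return gaussian_filter(S, sigma)               # assumed enhancement filter g_x
```

On a flat background, the residual suppresses the dominant DC energy and the reconstructed map peaks at the anomalous (target) pixel.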
and 4, enhancing the significance of the saliency map according to a visual attention mechanism. Saliency map S for radiation characteristicsradiationMultiple order directional derivative feature saliency map SMODDSpectrum feature saliency map SxFusing the three characteristic graphs according to a fusion algorithm to obtain a final significant characteristic fusion graph SFSM
Figure BDA0003331259280000121
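Step 4 can be sketched as a normalized geometric mean; equal weights are assumed here, since the patent's exact weighting was not reproduced in the extracted text.

```python
import numpy as np

def fuse_saliency(S_rad, S_modd, S_x):
    """Geometric-mean fusion of the three feature saliency maps after
    min-max normalization N(.) of each map to [0, 1]."""
    def N(S):
        S = S.astype(float)
        lo, hi = S.min(), S.max()
        return (S - lo) / (hi - lo) if hi > lo else np.zeros_like(S)
    return np.cbrt(N(S_rad) * N(S_modd) * N(S_x))   # S_FSM
```

The product form keeps a pixel salient only when all three features agree, which is what suppresses single-feature false alarms.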
Step 5, fusing the significant features with a graph SFSMDetermining a background distribution model by traversing the gray level of each pixel of the global image by adopting a CFAR (computational fluid dynamics) method, estimating a background probability density distribution function P (x) parameter, and setting a false alarm rate PfaAnd solving the self-adaptive segmentation threshold value T according to a formula:
P_fa = ∫_T^(+∞) P(x) dx
Each pixel is then judged against the adaptive segmentation threshold to detect weak and small targets. The detection results are subsequently clustered by pixel clustering; finally, isolated points are screened out and false targets caused by noise are removed, yielding the target segmentation result.
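A minimal sketch of the CFAR segmentation and isolated-point screening follows. The patent does not fix the background distribution family in this excerpt, so a Gaussian P(x) is assumed; the threshold solves P_fa = ∫_T^∞ P(x) dx by bisection on the survival function, and the clustering step is approximated by requiring each detection to have at least one detected 8-neighbor:

```python
import numpy as np
from math import erfc, sqrt

def gaussian_threshold(mu, sigma, pfa):
    """Solve P_fa = integral_T^inf P(x) dx for a Gaussian background by bisection."""
    lo, hi = mu, mu + 10.0 * sigma
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        # survival function Q(mid) of N(mu, sigma^2)
        if 0.5 * erfc((mid - mu) / (sigma * sqrt(2.0))) > pfa:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def cfar_detect(fused, pfa=1e-4, min_neighbors=1):
    """Threshold the fusion map at the adaptive level T, then screen out
    detections with no detected neighbor (isolated points from noise)."""
    mu, sigma = float(fused.mean()), float(fused.std())
    T = gaussian_threshold(mu, sigma, pfa)
    mask = fused > T
    m = mask.astype(int)
    neighbors = np.zeros_like(m)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr or dc:
                neighbors += np.roll(np.roll(m, dr, axis=0), dc, axis=1)
    return mask & (neighbors >= min_neighbors)
```

A compact multi-pixel target survives the neighbor test while a single hot noise pixel is rejected, mirroring the isolated-point screening described above.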
The invention provides a technology for detecting weak and small infrared targets in a complex background based on multi-feature fusion, aimed at the problem of detecting such targets in infrared images. After the original infrared image data are obtained, an improved high-pass filter is first applied for enhancement preprocessing. The zero-line-of-sight temperature field radiation characteristic, the multi-order directional derivative (MODD) characteristic and the spectrum characteristic of the small infrared target are then extracted separately, and the three characteristics are fused in a normalized manner based on a visual attention mechanism to generate a salient characteristic fusion image. This enhances target saliency, suppresses the background, and improves the detection capability for weak and small infrared targets in complex natural backgrounds. Finally, based on a CFAR algorithm, the image background probability distribution is fitted and adaptive threshold segmentation is carried out, realizing weak and small target detection on the saliency-enhanced characteristic fusion image. The feature fusion mainly uses a top-down visual attention mechanism: after the zero-line-of-sight temperature field inversion radiation characteristic, the multi-order directional derivative characteristic and the spectral characteristic saliency maps of the small infrared target are obtained one by one, the fused characteristic saliency map is generated by computing a weighted geometric average.
The multi-feature fusion exploits the visual saliency advantages of each feature map, extracts the best performance of each saliency map, suppresses background noise and false-alarm information, and directs attention to the salient region of interest where the weak target lies in the fused image, so that the subsequent detection task need not attend to the whole image; this reduces the target detection false alarm rate and improves detection performance. The whole multi-feature-fusion infrared weak and small target detection algorithm module comprises three parts. The first part preprocesses the original image and extracts the target zero-line-of-sight temperature field radiation characteristic, the multi-order directional derivative characteristic and the spectrum characteristic saliency maps. The second part performs top-down multi-feature fusion on the three characteristic saliency maps based on a visual attention mechanism to generate a salient characteristic fusion image, suppressing complex background noise, improving the signal-to-noise ratio of the image and strengthening the representation of target characteristics. The third part adopts a CFAR detection method to perform adaptive threshold segmentation on the generated fusion image and detect target pixels. Compared with traditional single-feature threshold segmentation methods for weak infrared targets, this multi-feature-fusion detection technology does not require a preset segmentation threshold; instead, it extracts the target saliency region by fusing multiple types of features of the weak and small target and realizes detection through a CFAR adaptive threshold detection method.
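The first part above begins with enhancement preprocessing by an "improved high-pass filter", whose exact design is not given in this excerpt. The sketch below shows only the generic idea — subtracting a local mean so that slowly varying background is removed while point-like bright targets survive — with an assumed 5×5 window:

```python
import numpy as np

def highpass_enhance(img):
    """Plain high-pass preprocessing sketch (not the patented 'improved' filter):
    subtract a local mean and keep only the positive (bright) residual."""
    img = img.astype(np.float64)
    k = 2  # assumed 5x5 local-mean window
    acc = np.zeros_like(img)
    for dr in range(-k, k + 1):
        for dc in range(-k, k + 1):
            acc += np.roll(np.roll(img, dr, axis=0), dc, axis=1)
    local_mean = acc / (2 * k + 1) ** 2
    return np.clip(img - local_mean, 0.0, None)  # bright small targets stand out
```

On a slowly varying ramp background, the residual of the background is near zero while a small bright target retains most of its contrast.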
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A multi-feature fusion method for detecting infrared dim small targets in a complex background, characterized by comprising the following steps:
s1, inputting an original infrared small target image, preprocessing the input image by adopting a high-pass filter, and respectively extracting a radiation characteristic saliency map, a multi-order directional derivative characteristic saliency map and a spectrum characteristic saliency map of the image;
s2, fusing the radiation characteristic saliency map, the multi-order directional derivative characteristic saliency map and the spectrum characteristic saliency map by adopting a normalized characteristic fusion mode based on a visual attention mechanism to generate a saliency characteristic fusion map;
s3, for the saliency feature fusion map, a CFAR algorithm is first adopted to traverse the gray level of each pixel and judge whether it exceeds the adaptive segmentation threshold, realizing detection of weak and small targets; the detection results are then clustered by pixel clustering, isolated points are screened out, and false targets caused by noise are removed to obtain the target segmentation result.
2. The method for detecting the infrared weak and small target in the complex background with multi-feature fusion as claimed in claim 1, wherein the method for extracting the radiation feature saliency map in step S1 is as follows:
1) correcting the original infrared small target image for environmental influences to obtain a two-dimensional temperature field distribution diagram, the correction formula being:
B(T_obs) = B(T_target)·ε·τ_atm + B(T_atm)·(1-τ_atm)
2) according to the obtained temperature field two-dimensional distribution graph, the zero-visual-distance black body equivalent brightness temperature two-dimensional distribution graph is obtained through calculation as follows:
L = [B(T_obs) - B(T_atm)·(1-τ_atm)] / (ε·τ_atm)
3) performing an inverse Planck transformation on the zero-visual-range blackbody equivalent brightness temperature two-dimensional distribution graph to obtain the zero-visual-range temperature field inversion result T_target, which is the radiation characteristic saliency map:
S_radiation = T_target = B^(-1)(L)
wherein B(T_obs) is the blackbody radiation at a temperature equal to T_obs, B(T_target) is the blackbody radiation corresponding to the true target temperature T_target, ε is the equivalent blackbody emissivity in the thermal imager waveband, B(T_atm)·(1-τ_atm) is the path radiation superimposed on the observed value, and the parameter τ_atm is related to environmental parameters.
3. The method for detecting infrared weak and small objects in a complex background with multi-feature fusion as claimed in claim 2, wherein the method for extracting the multi-order directional derivative feature saliency map in step S1 comprises:
a) extracting, for each pixel point (x0, y0) on the original infrared small target image, the multi-order directional derivative characteristic along the direction vector
Figure FDA0003331259270000013
as follows:
Figure FDA0003331259270000012
b) calculating, according to least-squares surface fitting and polynomial orthogonality:
Figure FDA0003331259270000021
obtaining:
Figure FDA0003331259270000022
Figure FDA0003331259270000023
Figure FDA0003331259270000024
three weight coefficient matrices are thus obtained as follows:
Figure FDA0003331259270000025
Figure FDA0003331259270000026
Figure FDA0003331259270000027
wherein α is the angle between the direction vector
Figure FDA0003331259270000028
and the x-axis, β is the angle between the direction vector
Figure FDA0003331259270000029
and the y-axis, and K_i (i = 4, 5, 6) represents a weight coefficient; the parameters r and c represent the buffer radii in the x-axis and y-axis directions respectively, I(x + r, y + c) represents the pixel value at the point (x + r, y + c), and P_i(r, c) represents the direction vector of the point (r, c);
c) correcting the obtained multi-order directional derivative characteristic map: the pixel values in the characteristic map are traversed and any value greater than zero is set to zero; the characteristic map is normalized; the whole image is processed with a 3×3 filtering window; and the image is cross-fused over the directional channels by taking the dot product of mutually orthogonal characteristic map vectors, suppressing background clutter noise and enhancing weak and small targets, so that the multi-order directional derivative characteristic saliency map is obtained as follows:
Figure FDA00033312592700000210
wherein g represents the number of sets of orthogonal bases; S_g denotes the directional characteristic map,
Figure FDA0003331259270000031
denotes the characteristic map orthogonal to S_g, and N(·) represents a normalization function.
4. The method for detecting the infrared dim target in the complex background with the multi-feature fusion as claimed in claim 3, wherein the method for extracting the spectral feature saliency map in step S1 is as follows:
I) in the frequency domain, performing a Fourier transform on the preprocessed original infrared small target image to obtain:
I_F = F(I_orig)
II) separating the amplitude spectrum and the phase spectrum of the original data from the spectrum image to obtain:
A_f = Abs(I_F)
P_f = Angle(I_F)
III) a mean filter h_n(f) fits the background amplitude spectrum of the image, with the template size adaptively adjusted according to the size of the target in the image domain, to obtain:
L_f = log(A_f)
L_f_smooth = h_n(f) * L_f
wherein L_f is the image obtained by taking the logarithm of the amplitude spectrum information, L_f_smooth is the result of mean filtering it, and h_n(f) is the template matrix, calculated as follows:
h_n(f) = (1/n^2) * J_n, where J_n is the n×n all-ones matrix (every template entry equals 1/n^2)
IV) removing the background estimate to obtain the salient log-amplitude spectrum:
R_f = L_f - L_f_smooth
V) adding the salient log-amplitude spectrum to the phase spectrum from step II), then performing the IFFT transformation and high-pass filtering enhancement to obtain the spectral characteristic saliency map:
S_x = g_x * F^(-1)[exp(R_f + P_f)]^2
5. The method for detecting infrared weak and small targets in a complex background with multi-feature fusion as claimed in claim 4, wherein the calculation formula of the salient feature fusion map in step S2 is:
S_FSM = N(S_radiation)^w1 · N(S_MODD)^w2 · N(S_x)^w3, with w1 + w2 + w3 = 1 (a weighted geometric mean of the normalized feature saliency maps)
wherein S_radiation, S_MODD and S_x respectively represent the radiation characteristic saliency map, the multi-order directional derivative characteristic saliency map and the spectrum characteristic saliency map.
6. The method for detecting infrared dim targets in complex background with multi-feature fusion according to claim 5, wherein the calculation formula of the adaptive segmentation threshold in step S3 is as follows:
P_fa = ∫_T^(+∞) P(x) dx
wherein T is the adaptive segmentation threshold, P_fa is the set false alarm rate, and P(x) is the background probability density distribution function.
7. A multi-feature fusion system for detecting infrared weak and small targets in a complex background, characterized by comprising: a characteristic saliency map extraction module, a saliency characteristic fusion module and a detection result output module, wherein
the characteristic saliency map extraction module inputs an original infrared small target image, preprocesses the input image with a high-pass filter, and respectively extracts the radiation characteristic saliency map, the multi-order directional derivative characteristic saliency map and the spectrum characteristic saliency map of the image;
the saliency characteristic fusion module fuses a radiation characteristic saliency map, a multi-order directional derivative characteristic saliency map and a spectrum characteristic saliency map by adopting a normalized characteristic fusion mode based on a visual attention mechanism to generate a saliency characteristic fusion map;
the detection result output module, for the saliency feature fusion map, first adopts a CFAR algorithm to traverse the gray level of each pixel and judge whether it exceeds the adaptive segmentation threshold, realizing detection of weak and small targets; the detection results are then clustered by pixel clustering, isolated points are screened out, and false targets caused by noise are removed to obtain the target segmentation result.
8. The system for detecting infrared dim targets in complex background with multi-feature fusion as claimed in claim 7, wherein the method for extracting the saliency map of radiation features is as follows:
1) correcting the original infrared small target image for environmental influences to obtain a two-dimensional temperature field distribution diagram, the correction formula being:
B(T_obs) = B(T_target)·ε·τ_atm + B(T_atm)·(1-τ_atm)
2) according to the obtained temperature field two-dimensional distribution graph, the zero-visual-distance black body equivalent brightness temperature two-dimensional distribution graph is obtained through calculation as follows:
L = [B(T_obs) - B(T_atm)·(1-τ_atm)] / (ε·τ_atm)
3) performing an inverse Planck transformation on the zero-visual-range blackbody equivalent brightness temperature two-dimensional distribution graph to obtain the zero-visual-range temperature field inversion result T_target, which is the radiation characteristic saliency map:
S_radiation = T_target = B^(-1)(L)
wherein B(T_obs) is the blackbody radiation at a temperature equal to T_obs, B(T_target) is the blackbody radiation corresponding to the true target temperature T_target, ε is the equivalent blackbody emissivity in the thermal imager waveband, B(T_atm)·(1-τ_atm) is the path radiation superimposed on the observed value, and the parameter τ_atm is related to environmental parameters;
the extraction method of the multi-order directional derivative characteristic saliency map comprises the following steps:
a) extracting, for each pixel point (x0, y0) on the original infrared small target image, the multi-order directional derivative characteristic along the direction vector
Figure FDA0003331259270000051
as follows:
Figure FDA0003331259270000052
b) calculating, according to least-squares surface fitting and polynomial orthogonality:
Figure FDA0003331259270000053
obtaining:
Figure FDA0003331259270000054
Figure FDA0003331259270000055
Figure FDA0003331259270000056
three weight coefficient matrices are thus obtained as follows:
Figure FDA0003331259270000057
Figure FDA0003331259270000058
Figure FDA0003331259270000061
wherein α is the angle between the direction vector
Figure FDA0003331259270000062
and the x-axis, β is the angle between the direction vector
Figure FDA0003331259270000063
and the y-axis, and K_i (i = 4, 5, 6) represents a weight coefficient; the parameters r and c represent the buffer radii in the x-axis and y-axis directions respectively, I(x + r, y + c) represents the pixel value at the point (x + r, y + c), and P_i(r, c) represents the direction vector of the point (r, c);
c) correcting the obtained multi-order directional derivative characteristic map: the pixel values in the characteristic map are traversed and any value greater than zero is set to zero; the characteristic map is normalized; the whole image is processed with a 3×3 filtering window; and the image is cross-fused over the directional channels by taking the dot product of mutually orthogonal characteristic map vectors, suppressing background clutter noise and enhancing weak and small targets, so that the multi-order directional derivative characteristic saliency map is obtained as follows:
Figure FDA0003331259270000064
wherein g represents the number of sets of orthogonal bases; S_g denotes the directional characteristic map,
Figure FDA0003331259270000065
denotes the characteristic map orthogonal to S_g, and N(·) represents a normalization function;
the extraction method of the spectral feature saliency map comprises the following steps:
I) in the frequency domain, performing a Fourier transform on the preprocessed original infrared small target image to obtain:
I_F = F(I_orig)
II) separating the amplitude spectrum and the phase spectrum of the original data from the spectrum image to obtain:
A_f = Abs(I_F)
P_f = Angle(I_F)
III) a mean filter h_n(f) fits the background amplitude spectrum of the image, with the template size adaptively adjusted according to the size of the target in the image domain, to obtain:
L_f = log(A_f)
L_f_smooth = h_n(f) * L_f
wherein L_f is the image obtained by taking the logarithm of the amplitude spectrum information, L_f_smooth is the result of mean filtering it, and h_n(f) is the template matrix, calculated as follows:
h_n(f) = (1/n^2) * J_n, where J_n is the n×n all-ones matrix (every template entry equals 1/n^2)
IV) removing the background estimate to obtain the salient log-amplitude spectrum:
R_f = L_f - L_f_smooth
V) adding the salient log-amplitude spectrum to the phase spectrum from step II), then performing the IFFT transformation and high-pass filtering enhancement to obtain the spectral characteristic saliency map:
S_x = g_x * F^(-1)[exp(R_f + P_f)]^2
9. The system for detecting infrared weak and small targets in a complex background with multi-feature fusion as claimed in claim 8, wherein the calculation formula of the saliency feature fusion map is as follows:
S_FSM = N(S_radiation)^w1 · N(S_MODD)^w2 · N(S_x)^w3, with w1 + w2 + w3 = 1 (a weighted geometric mean of the normalized feature saliency maps)
wherein S_radiation, S_MODD and S_x respectively represent the radiation characteristic saliency map, the multi-order directional derivative characteristic saliency map and the spectrum characteristic saliency map.
10. The system for detecting infrared dim targets in complex background with multi-feature fusion according to claim 9, wherein the calculation formula of the adaptive segmentation threshold is:
P_fa = ∫_T^(+∞) P(x) dx
wherein T is the adaptive segmentation threshold, P_fa is the set false alarm rate, and P(x) is the background probability density distribution function.
CN202111281572.8A 2021-11-01 2021-11-01 Multi-feature fusion method and system for detecting infrared dim small target in complex background Pending CN113935984A (en)


Publications (1)

Publication Number Publication Date
CN113935984A true CN113935984A (en) 2022-01-14



Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114463365A (en) * 2022-04-12 2022-05-10 中国空气动力研究与发展中心计算空气动力研究所 Infrared weak and small target segmentation method, device and medium
CN114463619A (en) * 2022-04-12 2022-05-10 西北工业大学 Infrared dim target detection method based on integrated fusion features
CN114463619B (en) * 2022-04-12 2022-07-08 西北工业大学 Infrared dim target detection method based on integrated fusion features
CN115631119A (en) * 2022-09-08 2023-01-20 江苏北方湖光光电有限公司 Image fusion method for improving target significance
CN117011196A (en) * 2023-08-10 2023-11-07 哈尔滨工业大学 Infrared small target detection method and system based on combined filtering optimization
CN117011196B (en) * 2023-08-10 2024-04-19 哈尔滨工业大学 Infrared small target detection method and system based on combined filtering optimization


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination