CN114549642B - Low-contrast infrared dim target detection method - Google Patents

Low-contrast infrared dim target detection method

Info

Publication number
CN114549642B
CN114549642B
Authority
CN
China
Prior art keywords
template
layer
pixel
map
gray
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210123360.5A
Other languages
Chinese (zh)
Other versions
CN114549642A (en)
Inventor
穆靖
李范鸣
李伟华
饶俊民
卫红
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Institute of Technical Physics of CAS
Original Assignee
Shanghai Institute of Technical Physics of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Institute of Technical Physics of CAS filed Critical Shanghai Institute of Technical Physics of CAS
Priority to CN202210123360.5A
Publication of CN114549642A
Application granted
Publication of CN114549642B
Active legal status
Anticipated expiration legal status

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/70 - Denoising; Smoothing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/136 - Segmentation; Edge detection involving thresholding
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10048 - Infrared image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a low-contrast infrared dim and small target detection method. Detection relies on the gray-level distribution of infrared dim and small targets and on the gray difference between the target and its surrounding background. A three-layer sliding template is first constructed and used to traverse the original image pixel by pixel, from top to bottom and from left to right. Exploiting the difference in gray distribution between the template layers, a gray difference contrast map and a variance difference contrast map are computed, and the two contrast maps are then multiplied to obtain a weighted target saliency map. Finally, an adaptive threshold segmentation algorithm extracts the target to be detected from the saliency map. By defining the gray difference contrast and the variance difference contrast, the method detects low-contrast infrared dim and small targets against complex near-ground backgrounds and effectively improves the target detection rate; it has low complexity and is easy to parallelize.

Description

Low-contrast infrared dim target detection method
Technical field:
The invention belongs to the technical field of image processing and relates to a detection method for low-contrast dim and small targets in infrared images with complex backgrounds. It is particularly suited to background suppression and target enhancement for infrared images acquired against complex near-ground backgrounds in infrared search and track systems, improving the image signal-to-noise ratio and enabling high-precision, real-time detection of dim and small targets.
Background art:
Infrared imaging equipment offers strong fog-penetrating capability and around-the-clock operation, and is widely used in fields such as security monitoring and industrial manufacturing. Search and track systems equipped with infrared imaging devices are widely applied to long-range target detection, for example drone surveillance and field rescue. Because the detection distance is long, the physical size of the target is small and the resolution of the infrared imaging device is limited, the target occupies only a small imaging area and lacks shape and texture information, so it appears dim and small in the infrared image. In addition, in practical scenes the detection environment is complex and changeable and contains various radiation sources such as trees, rocks and clouds, so the target to be detected is easily submerged in the complex background and appears with low contrast, further increasing the difficulty of detection. In summary, detecting distant infrared dim targets against a complex near-ground background remains a difficult task.
Traditional single-frame infrared dim and small target detection algorithms fall into three main categories: filtering-based algorithms, algorithms based on the human visual contrast mechanism, and matrix-decomposition-based algorithms.
Filtering-based algorithms exploit the low-frequency character of the background and design linear or nonlinear filters to suppress it; they mainly include morphological filtering, difference-of-Gaussians filtering and two-dimensional least-mean-square filtering. Such algorithms are easy to implement but are only suitable for simple, clean backgrounds; when the background contains complex radiation sources, their number of false alarms rises substantially. Moreover, because these algorithms only suppress the background, low-contrast targets are easily missed.
Inspired by the contrast mechanism of the human visual system, researchers proposed a local contrast algorithm in 2014 that enhances the target based on the gray difference between the target and its neighboring background. However, this algorithm cannot suppress highlighted backgrounds and point noise, so its false alarm rate under complex backgrounds is high. Because it combines the radiation characteristics of infrared dim and small targets with the human visual contrast characteristic and is simple to implement, many researchers have followed this line of work and introduced additional priors and assumptions to improve it, such as the ratio-difference joint local contrast measure (RLCM), the image-entropy-weighted local difference measure (WLDM) and the homogeneity-weighted local contrast measure (HWLCM). Other researchers have combined filtering with local contrast to achieve background suppression and target enhancement at the same time, for example the MDTDLMS-RDLCM algorithm, which combines two-dimensional least-mean-square filtering with a ratio-difference joint local contrast, and algorithms that apply morphological filtering as preprocessing before a local contrast operator. These methods exploit the characteristics of the background and the target to build more elaborate local contrast operators and introduce preprocessing, which improves target enhancement but correspondingly raises algorithm complexity and reduces real-time performance.
Matrix-decomposition-based algorithms exploit the low-rank property of the background matrix and the sparsity of the target matrix to convert dim and small target detection into a matrix decomposition problem; a convex optimization algorithm then separates the original infrared image into a background image and a target image. Although such algorithms improve the detection rate, they are sensitive to high-frequency regions and have high computational complexity, which makes parallelization difficult.
In summary, conventional single-frame infrared dim and small target detection algorithms still suffer from high false alarm rates, low detection rates and poor real-time performance. The high false alarm rate and low detection rate arise because current algorithms are sensitive to high-frequency complex backgrounds and cannot extract the characteristics of low-contrast targets well; the poor real-time performance arises because the algorithms are complex and hard to parallelize. Therefore, the three-layer-template local difference measure algorithm proposed by the invention is of great significance for real-time detection of low-contrast infrared dim and small targets under complex backgrounds.
Summary of the invention:
To overcome the defects of the prior art, the invention provides a low-contrast infrared dim and small target detection method with a low false alarm rate and good real-time performance, solving the problems of high false alarm rates and difficult engineering application in existing methods. The method rests on the radiation characteristics of dim and small targets and on their difference from the background. A single-size three-layer sliding template traverses the original image; the gray mean and gray variance of the pixels contained in each template layer are computed, from which the gray difference contrast and the variance difference contrast are calculated in turn, achieving target enhancement and background suppression simultaneously and yielding a weighted saliency map. Finally, a threshold segmentation algorithm produces a binary image, and the number and centroids of the connected domains in that image determine whether the original image contains a target and, if so, its position.
The above object of the present invention is achieved by the following technical solutions:
1. A low-contrast infrared dim and small target detection method, characterized in that a gray difference measure and a variance difference measure are combined into a three-layer-template local difference measure, comprising the following steps:
(1) Define a three-layer sliding template divided into an inner layer, a middle layer and an outer layer. Traverse the original image pixel by pixel, from left to right and from top to bottom, with the three-layer template, extracting the local image overlapping the template centered on each pixel of the original image. When the template is centered on pixel (x, y) of the original image, the pixels contained in the different layer regions of the template are:
Ω_in = {(i, j) | max(|i - x|, |j - y|) ≤ 0.5k}, (1)
Ω_mid = {(i, j) | 0.5k < max(|i - x|, |j - y|) ≤ 1.5k}, (2)
Ω_out = {(i, j) | 1.5k < max(|i - x|, |j - y|) ≤ 2.5k}, (3)
where Ω_in, Ω_mid, Ω_out denote the inner, middle and outer layers of the three-layer template, respectively; (i, j) are the coordinates of any pixel inside the template; k is the width of the inner layer of the template, with values in the range 3-7;
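The layer regions in formulas (1)-(3) are Chebyshev-distance rings around the center pixel. The following sketch (Python/NumPy, purely illustrative and not part of the claimed method; the function name and array layout are assumptions) builds boolean masks for the three layers given the inner-layer width k:

import numpy as np

def template_masks(k):
    # Boolean masks for the inner, middle and outer layers of the
    # three-layer sliding template (formulas (1)-(3)), expressed as
    # offsets (i - x, j - y) around the center pixel.
    r = int(np.floor(2.5 * k))                 # largest offset reached by the outer layer
    di, dj = np.mgrid[-r:r + 1, -r:r + 1]      # row/column offsets from the center
    cheb = np.maximum(np.abs(di), np.abs(dj))  # max(|i - x|, |j - y|)
    inner = cheb <= 0.5 * k
    middle = (cheb > 0.5 * k) & (cheb <= 1.5 * k)
    outer = (cheb > 1.5 * k) & (cheb <= 2.5 * k)
    return inner, middle, outer

For k = 3 this yields a 15×15 window: a 3×3 inner layer, a 9×9 middle ring and a 15×15 outer ring.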
(2) Compute the number of pixels and the gray mean of the local image centered on pixel (x, y) within the inner, middle and outer layers of the template:
M_in = (1/N_in) · Σ_{(i,j)∈Ω_in} g(i, j), (4)
M_mid = (1/N_mid) · Σ_{(i,j)∈Ω_mid} g(i, j), (5)
M_out = (1/N_out) · Σ_{(i,j)∈Ω_out} g(i, j), (6)
where M_in, M_mid, M_out are the gray means of the pixels contained in the inner, middle and outer layers of the template, N_in, N_mid, N_out are the numbers of pixels contained in the inner, middle and outer layers, and g(i, j) is the gray value of the pixel at position (i, j) of the image;
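Since the template slides over every pixel, the per-layer means of formulas (4)-(6) can be computed for all pixels at once by correlating the image with the normalized layer masks. A minimal sketch follows; it reuses template_masks from the sketch above, and the mirror padding of border pixels is an assumption, as the patent does not specify border handling:

import numpy as np
from scipy.ndimage import correlate

def layer_means(img, k):
    # Per-pixel gray means M_in, M_mid, M_out over the three template
    # layers (formulas (4)-(6)), one output map per layer.
    img = img.astype(np.float64)
    means = []
    for mask in template_masks(k):
        kernel = mask.astype(np.float64) / mask.sum()    # averaging kernel for this layer
        means.append(correlate(img, kernel, mode='mirror'))
    return means    # [M_in, M_mid, M_out], each with the same shape as img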
(3) Compute the gray difference contrast from the gray means of the local image at the different template layers to obtain the gray difference feature map I_GSD:
I_GSD = H[(M_in - M_mid) + (M_in - M_out)], (7)
where H(t), defined in formula (8), accounts for the sign of the pixel difference;
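Formula (8), which defines H(t), is not reproduced in this text. The sketch below therefore assumes H(t) acts as a rectifier that keeps positive differences and zeroes negative ones, so that only pixels brighter than both surrounding layers respond; this is an assumption, not the patent's stated definition:

import numpy as np

def gray_difference_map(M_in, M_mid, M_out):
    # Gray difference feature map I_GSD (formula (7)); H(t) is ASSUMED
    # to be max(t, 0) because formula (8) is not reproduced here.
    diff = (M_in - M_mid) + (M_in - M_out)
    return np.maximum(diff, 0.0)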
(4) Compute the total number of pixels and the gray mean of the local image centered on pixel (x, y) over the union of the template middle layer and inner layer:
N_mi = N_in + N_mid, (9)
Ω_mi = Ω_in ∪ Ω_mid, (10)
M_mi = (1/N_mi) · Σ_{(i,j)∈Ω_mi} g(i, j), (11)
where N_mi is the total number of pixels contained in the middle-layer and inner-layer regions of the template, Ω_mi is the union region formed by the middle layer and the inner layer, and M_mi is the gray mean over that union region;
(5) Compute the gray variance of the local image centered on pixel (x, y) over the union region formed by the middle and inner layers of the template and over the outer-layer region, respectively:
V_mi = (1/N_mi) · Σ_{(i,j)∈Ω_mi} [g(i, j) - M_mi]², (12)
V_out = (1/N_out) · Σ_{(i,j)∈Ω_out} [g(i, j) - M_out]², (13)
where V_mi and V_out are the gray variances of the union region and of the outer-layer region of the template, respectively;
(6) Compute the variance difference contrast from the gray variances of the local image at the different template regions to obtain the variance difference feature map I_VD (formula (14)), where ε is a positive real number, set to ε = 1×10^-5 to prevent the denominator from being 0;
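Steps (4)-(6) can be sketched in the same sliding-window style. The variances follow the standard definition in formulas (12)-(13); because formula (14) is not reproduced in this text, the form of I_VD below, the positive part of V_mi - V_out normalized by V_out + ε, is an assumption rather than the patent's exact expression. The helper template_masks from the earlier sketch is reused:

import numpy as np
from scipy.ndimage import correlate

def variance_difference_map(img, k, eps=1e-5):
    # Gray variances V_mi and V_out over the inner+middle union and the
    # outer layer (formulas (12)-(13)), then an ASSUMED variance
    # difference contrast I_VD = max(V_mi - V_out, 0) / (V_out + eps).
    img = img.astype(np.float64)
    inner, middle, outer = template_masks(k)
    regions = {'mi': inner | middle, 'out': outer}    # Omega_mi = Omega_in U Omega_mid
    var = {}
    for name, mask in regions.items():
        kernel = mask.astype(np.float64) / mask.sum()
        mean = correlate(img, kernel, mode='mirror')            # E[g] over the region
        mean_sq = correlate(img ** 2, kernel, mode='mirror')    # E[g^2] over the region
        var[name] = mean_sq - mean ** 2                         # Var[g] = E[g^2] - E[g]^2
    return np.maximum(var['mi'] - var['out'], 0.0) / (var['out'] + eps)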
(7) Multiply the computed I_GSD by I_VD to obtain the saliency map I_map:
I_map = I_GSD × I_VD, (15)
(8) Finally, binarize the saliency map I_map with an adaptive threshold segmentation algorithm, decide whether a real target exists in the original image from the number of connected domains in the binary map and, if so, output the pixel coordinates of the target centroid in the original image. The segmentation threshold is given by formula (16), where max(I_map) is the maximum element of the saliency map I_map and λ is the segmentation coefficient, with values in the range 0.7-0.9.
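Because formula (16) is not reproduced in this text, the segmentation sketch below assumes the threshold Th = λ · max(I_map), with λ in the 0.7-0.9 range given above; the connected-domain analysis uses standard labelling and is likewise illustrative:

import numpy as np
from scipy.ndimage import label, center_of_mass

def detect_targets(saliency, lam=0.7):
    # Adaptive threshold segmentation and connected-domain analysis
    # (step (8)); Th = lam * max(I_map) is an ASSUMED form of formula (16).
    th = lam * saliency.max()
    binary = saliency > th
    labels, num = label(binary)            # connected domains in the binary map
    if num == 0:
        return []                          # no real target in the original image
    return center_of_mass(binary, labels, range(1, num + 1))   # target centroids (row, col)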
Compared with the prior art, the invention has the beneficial effects that:
1) Computing the gray difference contrast simultaneously enhances the target and suppresses smooth background and point noise, effectively improving the signal-to-noise ratio of the image;
2) Computing the variance difference contrast suppresses strong edges and clutter in the background, effectively reducing the false alarm rate;
3) The three-layer template is simple and easy to use, effectively reducing the complexity of the algorithm and facilitating parallelization.
Drawings
FIG. 1 is a block diagram of an implementation of the present invention.
Fig. 2 is a schematic view of the three-layer sliding template used in the present invention.
Fig. 3 is the original infrared image used as input in the present invention; the target to be detected is marked by the box in the figure.
FIG. 4 shows the feature maps obtained by processing the test image with steps (3), (6) and (8): sub-figure 1 is the gray difference feature map I_GSD, sub-figure 2 is the variance difference feature map I_VD, sub-figure 3 is the weighted saliency map I_map, and sub-figure 4 is the binary image after threshold segmentation with segmentation coefficient λ = 0.7.
Detailed Description
The technical solutions in the embodiments of the present invention are described in detail below with reference to the accompanying drawings. Several parameters are involved that need to be tuned for a specific processing environment to achieve good performance.
The test image used in the invention was captured with a 320×256 medium-wave infrared camera developed by the Shanghai Institute of Technical Physics, Chinese Academy of Sciences.
Simulation environment: matlab2018a;
Test image: medium wave infrared image, size 320×256, scene is mountain background;
target information: the size of the missile target in the image is 2×3, the size k=3 of the inner layer of the template, and the threshold segmentation coefficient lambda=0.7.
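For reference, the illustrative sketches given in the description above could be chained as follows with the embodiment's parameters (k = 3, λ = 0.7); the image path and loading step are placeholders, not part of the patent:

import numpy as np

img = np.load('mwir_frame_320x256.npy')            # placeholder for the 320x256 medium-wave test frame

M_in, M_mid, M_out = layer_means(img, k=3)         # step (2)
I_GSD = gray_difference_map(M_in, M_mid, M_out)    # step (3), with the assumed H(t)
I_VD = variance_difference_map(img, k=3)           # steps (4)-(6), with the assumed formula (14)
I_map = I_GSD * I_VD                               # step (7), formula (15)
centroids = detect_targets(I_map, lam=0.7)         # step (8), with the assumed formula (16)
print(centroids)                                   # pixel coordinates of detected target centroids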

Claims (1)

1. A low-contrast infrared dim target detection method, characterized by comprising the following steps:
(1) Define a three-layer sliding template divided into an inner layer, a middle layer and an outer layer; traverse the original image pixel by pixel, from left to right and from top to bottom, with the three-layer template, extracting the local image overlapping the template centered on each pixel of the original image; when the template is centered on pixel (x, y) of the original image, the pixels contained in the different layer regions of the template are:
Ω_in = {(i, j) | max(|i - x|, |j - y|) ≤ 0.5k}, (1)
Ω_mid = {(i, j) | 0.5k < max(|i - x|, |j - y|) ≤ 1.5k}, (2)
Ω_out = {(i, j) | 1.5k < max(|i - x|, |j - y|) ≤ 2.5k}, (3)
where Ω_in, Ω_mid, Ω_out denote the inner, middle and outer layers of the three-layer template, respectively; (i, j) are the coordinates of any pixel inside the template; k is the width of the inner layer of the template, with values in the range 3-7;
(2) Compute the number of pixels and the gray mean of the local image centered on pixel (x, y) within the inner, middle and outer layers of the template:
M_in = (1/N_in) · Σ_{(i,j)∈Ω_in} g(i, j), (4)
M_mid = (1/N_mid) · Σ_{(i,j)∈Ω_mid} g(i, j), (5)
M_out = (1/N_out) · Σ_{(i,j)∈Ω_out} g(i, j), (6)
where M_in, M_mid, M_out are the gray means of the pixels contained in the inner, middle and outer layers of the template, N_in, N_mid, N_out are the numbers of pixels contained in the inner, middle and outer layers, and g(i, j) is the gray value of the pixel at position (i, j) of the image;
(3) Compute the gray difference contrast from the gray means of the local image at the different template layers to obtain the gray difference feature map I_GSD:
I_GSD = H[(M_in - M_mid) + (M_in - M_out)], (7)
where H(t), defined in formula (8), accounts for the sign of the pixel difference;
(4) Compute the total number of pixels and the gray mean of the local image centered on pixel (x, y) over the union of the template middle layer and inner layer:
N_mi = N_in + N_mid, (9)
Ω_mi = Ω_in ∪ Ω_mid, (10)
M_mi = (1/N_mi) · Σ_{(i,j)∈Ω_mi} g(i, j), (11)
where N_mi is the total number of pixels contained in the middle-layer and inner-layer regions of the template, Ω_mi is the union region formed by the middle layer and the inner layer, and M_mi is the gray mean over that union region;
(5) Compute the gray variance of the local image centered on pixel (x, y) over the union region formed by the middle and inner layers of the template and over the outer-layer region, respectively:
V_mi = (1/N_mi) · Σ_{(i,j)∈Ω_mi} [g(i, j) - M_mi]², (12)
V_out = (1/N_out) · Σ_{(i,j)∈Ω_out} [g(i, j) - M_out]², (13)
where V_mi and V_out are the gray variances of the union region and of the outer-layer region of the template, respectively;
(6) Compute the variance difference contrast from the gray variances of the local image at the different template regions to obtain the variance difference feature map I_VD (formula (14)), where ε is a positive real number, set to ε = 1×10^-5 to prevent the denominator from being 0;
(7) Multiply the computed I_GSD by I_VD to obtain the saliency map I_map:
I_map = I_GSD × I_VD, (15)
(8) Finally, binarize the saliency map I_map with an adaptive threshold segmentation algorithm, decide whether a real target exists in the original image from the number of connected domains in the binary map and, if so, output the pixel coordinates of the target centroid in the original image; the segmentation threshold is given by formula (16), where max(I_map) is the maximum element of the saliency map I_map and λ is the segmentation coefficient, with values in the range 0.7-0.9.
CN202210123360.5A 2022-02-10 2022-02-10 Low-contrast infrared dim target detection method Active CN114549642B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210123360.5A CN114549642B (en) 2022-02-10 2022-02-10 Low-contrast infrared dim target detection method

Publications (2)

Publication Number Publication Date
CN114549642A CN114549642A (en) 2022-05-27
CN114549642B (en) 2024-05-10

Family

ID=81674233

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210123360.5A Active CN114549642B (en) 2022-02-10 2022-02-10 Low-contrast infrared dim target detection method

Country Status (1)

Country Link
CN (1) CN114549642B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115170523B (en) * 2022-07-14 2023-05-05 哈尔滨工业大学 Low-complexity infrared dim target detection method based on local contrast
CN115393579B (en) * 2022-10-27 2023-02-10 长春理工大学 Infrared small target detection method based on weighted block contrast

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2339533A1 (en) * 2009-11-20 2011-06-29 Vestel Elektronik Sanayi ve Ticaret A.S. Saliency based video contrast enhancement method
CN110443253A (en) * 2019-08-02 2019-11-12 上海海事大学 Infrared target detection method, apparatus and computer storage medium
CN111738320A (en) * 2020-03-04 2020-10-02 沈阳工业大学 Shielded workpiece identification method based on template matching
CN112541486A (en) * 2020-12-31 2021-03-23 洛阳伟信电子科技有限公司 Infrared weak and small target detection algorithm based on improved Pixel segmentation
CN113822352A (en) * 2021-09-15 2021-12-21 中北大学 Infrared dim target detection method based on multi-feature fusion

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Liu Kun; Liu Weidong. Infrared dim small target detection algorithm based on weighted fusion features and Ostu segmentation. Computer Engineering, 2017, No. 07, full text. *
Huang Li; Wang Wenbo. Infrared dim small target detection algorithm based on difference structure descriptors and adaptive lateral inhibition. Journal of Electronic Measurement and Instrumentation, 2018, No. 07, full text. *

Also Published As

Publication number Publication date
CN114549642A (en) 2022-05-27

Similar Documents

Publication Publication Date Title
WO2019101221A1 (en) Ship detection method and system based on multidimensional scene characteristics
CN114549642B (en) Low-contrast infrared dim target detection method
Nasiri et al. Infrared small target enhancement based on variance difference
CN111027496B (en) Infrared dim target detection method based on space-time joint local contrast
CN104834915B (en) A kind of small infrared target detection method under complicated skies background
CN107403134B (en) Local gradient trilateral-based image domain multi-scale infrared dim target detection method
CN105913404A (en) Low-illumination imaging method based on frame accumulation
CN106886747A (en) Ship Detection under a kind of complex background based on extension wavelet transformation
CN108986130A (en) A kind of method for detecting infrared puniness target under Sky background
CN114612359A (en) Visible light and infrared image fusion method based on feature extraction
CN117237740B (en) SAR image classification method based on CNN and Transformer
CN108614998B (en) Single-pixel infrared target detection method
Zhang et al. Review of dim small target detection algorithms in single-frame infrared images
CN113205494B (en) Infrared small target detection method and system based on adaptive scale image block weighting difference measurement
CN112418090B (en) Real-time detection method for infrared weak and small target under sky background
CN110148149A (en) Underwater vehicle thermal trail segmentation method based on local contrast accumulation
CN106570889A (en) Detecting method for weak target in infrared video
CN114266724A (en) High-voltage line detection method based on radar infrared visible light image fusion
CN106910178B (en) Multi-angle SAR image fusion method based on tone statistical characteristic classification
CN106778822B (en) Image straight line detection method based on funnel transformation
Zou et al. Sonar Image Target Detection for Underwater Communication System Based on Deep Neural Network.
CN112669332A (en) Method for judging sea and sky conditions and detecting infrared target based on bidirectional local maximum and peak local singularity
CN115861669A (en) Infrared dim target detection method based on clustering idea
CN111951299B (en) Infrared aerial target detection method
Huang et al. Infrared small target detection with directional difference of Gaussian filter

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant