CN113822352B - Infrared dim target detection method based on multi-feature fusion - Google Patents

Infrared dim target detection method based on multi-feature fusion

Info

Publication number
CN113822352B
CN113822352B (application CN202111078520.0A)
Authority
CN
China
Prior art keywords
target
image
background
gray
similarity
Prior art date
Legal status
Active
Application number
CN202111078520.0A
Other languages
Chinese (zh)
Other versions
CN113822352A
Inventor
蔺素珍
武杰
禄晓飞
李大威
余东
张海松
樊小宇
赵亚丽
Current Assignee
North University of China
Original Assignee
North University of China
Priority date
Filing date
Publication date
Application filed by North University of China
Priority to CN202111078520.0A
Publication of CN113822352A
Application granted
Publication of CN113822352B
Legal status: Active

Classifications

    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06F: Electric Digital Data Processing
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/25: Fusion techniques
    • G06F18/253: Fusion techniques of extracted features
    • G06F18/22: Matching criteria, e.g. proximity measures


Abstract

The invention provides an infrared dim and small target detection method based on multi-feature fusion. First, exploiting the characteristic that an infrared dim target has a large local gray value, the gray contrast between the target and its neighborhood background is used to enhance the real target and suppress part of the complex background. Second, exploiting the characteristic that the gray distribution of an infrared dim target approximates a two-dimensional Gaussian, the target is detected by computing the covariance of the distance and the gray difference between edge pixels and the center pixel of the target, yielding a first saliency map. Third, exploiting the characteristic that an infrared dim target has low similarity to its neighborhood, the target is detected by computing a similarity factor, yielding a second saliency map. Finally, the two saliency maps are multiplied element-wise to fuse the multiple features of the infrared dim target into a final saliency map, which is segmented with a threshold computed from it to obtain the final detection result. The method effectively suppresses complex clutter in infrared images and improves the accuracy of dim and small target detection.

Description

Infrared dim target detection method based on multi-feature fusion
Technical Field
The invention relates to the field of infrared image target detection, in particular to an infrared dim target detection method based on multi-feature fusion.
Background
Infrared target detection is one of the core technologies of remote early-warning, precision-guidance and similar systems, and infrared dim target detection is a classical problem within it. The main difficulties are: 1) the imaging distance is long, so the target not only lacks feature information in the image but is also easily confused with noise points; 2) high-intensity clutter edges are easily misdetected as targets, increasing false alarms; 3) a dim and small target tends to be drowned in a complex background, causing serious missed detections. Infrared dim target detection against complex backgrounds has therefore long been a very challenging task.
At present, research dedicated to infrared dim target detection remains relatively limited. Related infrared small target detection methods can be broadly classified into data-driven methods, represented by deep learning, and model-driven methods built on mathematical and physical modeling. Data-driven methods see little use in practical engineering because of the lack of infrared dim target image sets, high training cost, and poor real-time performance. Model-driven methods can be further divided into single-frame and multi-frame methods; the effectiveness of multi-frame methods for infrared dim target detection is limited because an accurate motion model of the target cannot be obtained. High-performance single-frame methods are therefore the more important direction in infrared small target detection.
From the standpoint of processing approach, single-frame detection methods for infrared small targets fall into three categories. The first recovers the background using sparse and low-rank matrix decomposition and subtracts the recovered background image from the original image to highlight the target, but it is sensitive to pixel values and time-consuming. The second is background suppression, which mainly uses filtering to suppress uniform background and clutter; however, it is sensitive to background edges, which restricts its use in complex scenes such as air and space target detection. The third is an emerging class of methods that suppress the background while enhancing the target, mainly by exploiting the local contrast sensitivity of human vision and extracting the local contrast between target and background. Most of these methods work well in scenes with stable backgrounds, but when applied to infrared dim targets in complex scenes they cannot effectively suppress high-intensity clutter edges, leading to high false alarm rates. There is therefore considerable room for improvement in infrared dim target detection under complex backgrounds.
Disclosure of Invention
To address the low detection rate and high false alarm rate of infrared dim and small targets in complex backgrounds, the invention provides an infrared dim target detection method based on multi-feature fusion for detecting dim and small targets in infrared images with complex backgrounds, effectively improving detection accuracy and robustness.
The invention adopts the following technical scheme. The infrared dim target detection method based on multi-feature fusion comprises the following steps: first, based on the characteristic that an infrared dim target has a large local gray value, using the gray contrast between the target and its neighborhood background to enhance the real target and suppress part of the complex background; second, based on the characteristic that the gray distribution of an infrared dim target approximates a two-dimensional Gaussian, detecting the target by computing the covariance of the distance and the gray difference between edge pixels and the center pixel of the target, thereby obtaining a first saliency map; third, based on the characteristic that an infrared dim target has low similarity to its neighborhood, detecting the target by computing a similarity factor, thereby obtaining a second saliency map; finally, multiplying the first and second saliency maps element-wise to fuse the multiple features of the infrared dim target into a final saliency map, and segmenting the final saliency map with a computed threshold to obtain the final detection result.
In the infrared dim target detection method based on multi-feature fusion, enhancing the real target and suppressing part of the complex background using the gray contrast between the target and its neighborhood background specifically comprises: converting the infrared image I_in to be detected into a gray image I; constructing a sliding window centered on a pixel s of the gray image I, with center cell S_0 and surrounding cells S_i forming the local background area; taking the several cells with the largest mean gray values in the local background area and computing their average gray value μ_max; comparing the mean gray value μ_S0 of the center cell with μ_max: if μ_S0 > μ_max, enhancing the center-cell pixels by setting the gray value to I(i,j) − μ_max, where I(i,j) denotes the pixel gray value at position (i,j) in the center cell, and otherwise suppressing them by setting the gray value to 0; and moving the sliding window until all pixels of the gray image I have been processed, yielding the background-suppressed image T.
In the infrared dim target detection method based on multi-feature fusion, detecting the target by computing the covariance of the distance and the gray difference between edge pixels and the center pixel, thereby obtaining the first saliency map, specifically comprises: constructing a sliding window centered on a pixel t of the background-suppressed image T; if the pixel values in the sliding window are not all 0, computing the distance and the gray difference from each pixel in the window to the center pixel, and from them the covariance coefficient of the whole window; and moving the sliding window until all pixels of the background-suppressed image T have been processed, yielding a covariance saliency map COV composed of the covariance coefficient of each window.
In the infrared dim target detection method based on multi-feature fusion, detecting the target by computing a similarity factor, exploiting the low similarity between the infrared dim target and its neighborhood, thereby obtaining the second saliency map, specifically comprises: constructing a sliding window centered on a pixel p of the background-suppressed image T, with center cell SM_0 and S surrounding background cells SM_i forming the local background area; computing the similarity factor SF_i between each background cell SM_i and the center cell SM_0; moving the sliding window until all pixels of the background-suppressed image T have been processed, obtaining S similarity matrices, the i-th of which consists of the similarity factors between the center cell and the i-th background cell; and taking the minimum min(SF_i) of the elements at the same position across the S similarity matrices as the value at the corresponding position of the similarity matrix SF, computed as SF = min(SF_i)/max(SF_i), where min(SF_i) and max(SF_i) are respectively the minimum and maximum of the elements at the same position across the S similarity matrices. The similarity matrix SF is the similarity saliency map SF.
In the infrared dim target detection method based on multi-feature fusion, multiplying the first and second saliency maps element-wise to fuse the multiple features of the infrared dim target into a final saliency map, and segmenting the final saliency map with a computed threshold to obtain the final detection result, specifically comprises: performing element-wise multiplication of the covariance saliency map and the similarity saliency map to obtain the final detection saliency map SM = COV ⊙ SF; computing a segmentation threshold Th; and thresholding the final detection saliency map SM to obtain the final detection result map I_out.
In general, the technical scheme provided by the invention makes full use of the local prior information of infrared dim targets, and has the following technical features and beneficial effects:
(1) The shape of an infrared small target is isotropic while background edges are anisotropic; their characteristics differ, so the window must be refined into blocks, i.e. cells.
(2) A gray difference exists between the target and its neighborhood background, so this gray difference is used to enhance the target and suppress the background.
(3) To separate the target, not only must the gray contrast between target and background be considered, but prior knowledge such as the covariance of the gray difference and distance difference between the center pixel and edge pixels of the sliding window, and the similarity factor, must also be used to segment the target and suppress the background.
(4) The saliency map may still contain some clutter or noise, but the target is markedly enhanced, so residual false alarms are removed by segmentation with a constant-false-alarm-rate style threshold.
This method for detecting infrared dim targets against complex backgrounds based on multi-feature fusion effectively suppresses complex clutter in infrared images and improves the accuracy of dim target detection.
Drawings
FIG. 1 is the overall framework diagram of the invention;
FIG. 2 is the basic flow chart of the invention;
FIG. 3 is the sliding-window layout used by the invention for computing gray contrast;
FIG. 4 is the sliding-window layout used by the invention for computing the similarity factor;
FIG. 5(a) is the original infrared image in an embodiment of the invention;
FIG. 5(b) is the final saliency map obtained with the proposed algorithm in the embodiment;
FIG. 5(c) is the detection result of the embodiment.
Detailed Description
The invention will be further described with reference to the accompanying drawings and detailed description below:
Referring to FIG. 1 and FIG. 2, the method for detecting an infrared dim target under a complex background in this embodiment comprises the following steps:
Step 1: inputting an infrared image I in to be detected with the size of H multiplied by W;
step 2: converting the image I in into a grayscale image I;
Step 3.1: a 19×19 sliding window is constructed by taking a pixel point S in the gray image I as the center, and is divided into 9 units, wherein the size of a center unit S 0 is 5×5, the sizes of background area units S1, S3, S6 and S8 are 7×7, the sizes of S2 and S7 are 5×7, and the sizes of S4 and S5 are 7×5, as shown in fig. 3;
Step 3.2: compute the pooled brightness of every background cell, i.e. the mean gray value of all pixels in the cell:
μ_i = (1/N) Σ_{j=1}^{N} H_ij
where H_ij denotes the gray value of the j-th pixel of the i-th background cell in the local background, μ_i denotes the mean gray value of the i-th background cell, and N denotes the number of pixels in the cell;
Step 3.3: taking 4 units with the largest gray average value in 8 units of a background area, and calculating the gray average value mu max of the 4 units, wherein the specific solution is as follows:
wherein MAX 4 (g) represents the maximum four gray values in g, and mu max represents the average value;
Step 3.4: comparing the gray average of the center cells And mu max, if/>Enhancement of the center cell pixels with gray values set to/>The pixel gray value representing the (i, j) position in the center cell), otherwise, the suppression is performed, the gray value is set to 0, specifically:
Step 3.5: moving the sliding window from left to right from top to bottom until the step length is 5 and all pixels of the gray level image I are calculated through the steps 3.1-3.4, so as to obtain a background suppression image T;
Step 4.1: a 5 multiplied by 5 sliding window is constructed by taking a pixel point T in a background inhibition image T as a center, if the pixel value in the sliding window is not 0, the distance from each pixel to a center pixel in the sliding window and the gray level difference are calculated, so that the covariance coefficient of the whole sliding window is calculated, the coefficient is taken as the value of the center pixel of the sliding window, and the specific method for calculating the average value of the distance from each pixel to the center pixel in the sliding window and the average value of the gray level difference is as follows:
Wherein i and j respectively represent the x and y direction coordinates of the pixel point in the sliding window, Taking 3, iij as the pixel gray value of the point with the coordinates of (i, j) as the average value, and adopting the methodTaking I 3,3 as the pixel gray value of the central pixel point, N as the number of pixels in the sliding window, and taking 25 as the pixel gray value;
Step 4.2: compute the covariance coefficient of the sliding window:
Cov(Dis, GV) = (1/(SL×SL)) Σ_{i,j} (Dis_ij − mean(Dis)) (GV_ij − mean(GV))
where Cov(Dis, GV) denotes the covariance coefficient of the window, GV_ij the gray difference between an edge pixel and the window's center pixel, Dis_ij the distance between that pixel and the center pixel, and SL the side length of the sliding window, taken as 5 in the invention;
Step 4.3: move the sliding window from left to right and top to bottom with a step of 1, until all pixels of the background-suppressed image T have been processed by steps 4.1-4.2, yielding the covariance saliency map COV composed of the covariance coefficient of each window;
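Steps 4.1-4.3 can be sketched as below; the covariance is taken per 5×5 window over the (distance to center, gray difference from center) pairs. The function and variable names are illustrative, and note the sign convention: for a bright, roughly Gaussian target the gray difference falls as the distance grows, so the raw coefficient at the target is negative, and a practical implementation may use its magnitude.

```python
import numpy as np

def covariance_map(T, win=5):
    """Sketch of steps 4.1-4.3: per-window covariance of the distance to
    the center pixel and the gray difference from the center pixel,
    computed over every win x win window whose pixels are not all zero.
    The window moves with step length 1."""
    H, W = T.shape
    r = win // 2
    ii, jj = np.mgrid[0:win, 0:win]
    dis = np.sqrt((ii - r) ** 2 + (jj - r) ** 2)  # Dis: distance to center pixel
    cov = np.zeros((H, W))
    for y in range(r, H - r):
        for x in range(r, W - r):
            block = T[y - r:y + r + 1, x - r:x + r + 1].astype(np.float64)
            if not block.any():        # all-zero window: skipped, stays 0
                continue
            gv = block - block[r, r]   # GV: gray difference from center pixel
            # Cov(Dis, GV): mean of centered products over the SL*SL pixels
            cov[y, x] = np.mean((dis - dis.mean()) * (gv - gv.mean()))
    return cov
```

On flat zero background the coefficient stays 0, while at a bright blob it is strongly negative, so the magnitude separates targets from uniform regions.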
Step 5.1: a 25×25 sliding window is constructed by taking a pixel point p in the background suppression image T as a center, wherein the window is 9 units, a center unit is called SM 0, and surrounding units SM ij are local background areas, as shown in fig. 4;
Step 5.2: the similarity factors SF i between the 8 background units and the central unit are calculated respectively, and the specific method is as follows:
wherein T represents a background suppression image, (s, T) represents pixel point coordinates, and T (s, T) represents pixel gray values of the point;
Step 5.3: move the sliding window from left to right and top to bottom with a step of 1, until all pixels of the image T have been processed by step 5.2, obtaining 8 similarity matrices, the i-th (i = 1, 2, ..., 8) of which consists of the similarity factors between the center cell and the i-th background cell;
Step 5.4: taking the minimum value MIN (SF i) of the elements at the same position in the 8 similarity matrixes as the value of the element at the corresponding position of the similarity matrix SF, and specifically solving the similarity saliency map SF as follows:
Step 6: performing point multiplication operation on the covariance saliency map COV and the similarity saliency map SF to obtain a final detection saliency map:
SM=COV⊙SF
Step 7: compute the segmentation threshold Th and threshold the saliency map SM to obtain the final detection result map I_out:
Th = μ + λ × σ
where μ and σ are the mean and standard deviation of the final saliency map SM, and λ is a fixed parameter, taken as 4 in the invention.
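Steps 6-7 can be sketched directly. The `abs()` applied to the product is an added hedge not present in the text, since a bright target can yield a negative covariance coefficient and saliency should be unsigned; λ = 4 follows the text.

```python
import numpy as np

def fuse_and_threshold(cov, sf, lam=4.0):
    """Sketch of steps 6-7: element-wise (Hadamard) product of the two
    saliency maps, then the adaptive threshold Th = mu + lambda*sigma
    with lambda = 4. abs() is an added hedge (see lead-in)."""
    sm = np.abs(cov * sf)            # SM = COV ⊙ SF
    th = sm.mean() + lam * sm.std()  # Th = mu + lambda * sigma
    return (sm > th).astype(np.uint8), sm, th
```

Because the threshold is μ + 4σ of the fused map, only pixels far above the residual clutter level survive, which is the constant-false-alarm-rate style segmentation described in the beneficial effects.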

Claims (2)

1. An infrared dim and small target detection method based on multi-feature fusion, characterized by comprising the following steps: first, based on the characteristic that an infrared dim target has a large local gray value, using the gray contrast between the target and its neighborhood background to enhance the real target and suppress part of the complex background; second, based on the characteristic that the gray distribution of an infrared dim target approximates a two-dimensional Gaussian, detecting the target by computing the covariance of the distance and the gray difference between edge pixels and the center pixel of the target, thereby obtaining a first saliency map; third, based on the characteristic that an infrared dim target has low similarity to its neighborhood, detecting the target by computing a similarity factor, thereby obtaining a second saliency map; and finally, multiplying the first and second saliency maps element-wise to fuse the multiple features of the infrared dim target into a final saliency map, and segmenting the final saliency map with a computed threshold to obtain the final detection result;
The step of enhancing the real target and suppressing part of the complex background using the gray contrast between the target and its neighborhood background specifically comprises: converting the infrared image I_in to be detected into a gray image I; constructing a sliding window centered on a pixel s of the gray image I, with center cell S_0 and surrounding cells S_i forming the local background area; taking the several cells with the largest mean gray values in the local background area and computing their average gray value μ_max; comparing the mean gray value μ_S0 of the center cell with μ_max: if μ_S0 > μ_max, enhancing the center-cell pixels by setting the gray value to I(i,j) − μ_max, where I(i,j) denotes the pixel gray value at position (i,j) in the center cell, and otherwise suppressing them by setting the gray value to 0; and moving the sliding window until all pixels of the gray image I have been processed, yielding the background-suppressed image T;
The step of detecting the target by computing the covariance of the distance and the gray difference between edge pixels and the center pixel, thereby obtaining the first saliency map, specifically comprises: constructing a sliding window centered on a pixel t of the background-suppressed image T; if the pixel values in the sliding window are not all 0, computing the distance and the gray difference from each pixel in the window to the center pixel, and from them the covariance coefficient of the whole window; and moving the sliding window until all pixels of the background-suppressed image T have been processed, yielding a covariance saliency map COV composed of the covariance coefficient of each window;
The step of detecting the target by computing a similarity factor, exploiting the low similarity between the infrared dim target and its neighborhood, thereby obtaining the second saliency map, specifically comprises: constructing a sliding window centered on a pixel p of the background-suppressed image T, with center cell SM_0 and S surrounding background cells SM_i forming the local background area; computing the similarity factor SF_i between each background cell SM_i and the center cell SM_0; moving the sliding window until all pixels of the background-suppressed image T have been processed, obtaining S similarity matrices, the i-th of which consists of the similarity factors between the center cell and the i-th background cell; and taking the minimum min(SF_i) of the elements at the same position across the S similarity matrices as the value at the corresponding position of the similarity matrix SF, computed as SF = min(SF_i)/max(SF_i), where min(SF_i) and max(SF_i) are respectively the minimum and maximum of the elements at the same position across the S similarity matrices; the similarity matrix SF is the similarity saliency map SF.
2. The infrared dim and small target detection method based on multi-feature fusion according to claim 1, characterized in that multiplying the first and second saliency maps element-wise to fuse the multiple features of the infrared dim target into a final saliency map, and segmenting the final saliency map with a computed threshold to obtain the final detection result, specifically comprises: performing element-wise multiplication of the covariance saliency map and the similarity saliency map to obtain the final detection saliency map SM = COV ⊙ SF; computing a segmentation threshold Th; and thresholding the final detection saliency map SM to obtain the final detection result map I_out.
Application CN202111078520.0A (priority and filing date 2021-09-15), granted as CN113822352B (Active): Infrared dim target detection method based on multi-feature fusion

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111078520.0A CN113822352B (en) 2021-09-15 2021-09-15 Infrared dim target detection method based on multi-feature fusion


Publications (2)

Publication Number / Publication Date
CN113822352A 2021-12-21
CN113822352B 2024-05-17

Family

ID=78922528


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114549642B (en) * 2022-02-10 2024-05-10 中国科学院上海技术物理研究所 Low-contrast infrared dim target detection method
CN115908807B (en) * 2022-11-24 2023-06-23 中国科学院国家空间科学中心 Method, system, computer equipment and medium for fast detecting weak and small target
CN115797872B (en) * 2023-01-31 2023-04-25 捷易(天津)包装制品有限公司 Packaging defect identification method, system, equipment and medium based on machine vision
CN116363135B (en) * 2023-06-01 2023-09-12 南京信息工程大学 Infrared target detection method, device, medium and equipment based on Gaussian similarity

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102324021A (en) * 2011-09-05 2012-01-18 电子科技大学 Infrared dim-small target detection method based on shear wave conversion
CN107563370A (en) * 2017-07-07 2018-01-09 西北工业大学 Visual attention mechanism-based marine infrared target detection method
CN108062523A (en) * 2017-12-13 2018-05-22 苏州长风航空电子有限公司 A kind of infrared remote small target detecting method
CN108256519A (en) * 2017-12-13 2018-07-06 苏州长风航空电子有限公司 A kind of notable detection method of infrared image based on global and local interaction
CN108460794A (en) * 2016-12-12 2018-08-28 南京理工大学 A kind of infrared well-marked target detection method of binocular solid and system
KR20180096101A (en) * 2017-02-20 2018-08-29 엘아이지넥스원 주식회사 Apparatus and Method for Intelligent Infrared Image Fusion
CN109272489A (en) * 2018-08-21 2019-01-25 西安电子科技大学 Inhibit the method for detecting infrared puniness target with multiple dimensioned local entropy based on background
WO2019144581A1 (en) * 2018-01-29 2019-08-01 江苏宇特光电科技股份有限公司 Smart infrared image scene enhancement method
CN110706208A (en) * 2019-09-13 2020-01-17 东南大学 Infrared dim target detection method based on tensor mean square minimum error
CN111353496A (en) * 2018-12-20 2020-06-30 中国科学院沈阳自动化研究所 Real-time detection method for infrared small and weak target
CN111899200A (en) * 2020-08-10 2020-11-06 国科天成(北京)科技有限公司 Infrared image enhancement method based on 3D filtering
CN112288668A (en) * 2020-09-22 2021-01-29 西北工业大学 Infrared and visible light image fusion method based on depth unsupervised dense convolution network
CN112418090A (en) * 2020-11-23 2021-02-26 中国科学院西安光学精密机械研究所 Real-time detection method for infrared small and weak target under sky background
CN113111878A (en) * 2021-04-30 2021-07-13 中北大学 Infrared weak and small target detection method under complex background


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Infrared dim and small target detection algorithm based on weighted fusion features and Otsu segmentation; Liu Kun; Liu Weidong; Computer Engineering, No. 7; full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant