AU2020100178A4 - Multiple decision maps based infrared and visible image fusion - Google Patents

Multiple decision maps based infrared and visible image fusion

Info

Publication number
AU2020100178A4
Authority
AU
Australia
Prior art keywords
image
fusion
maps
infrared
saliency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU2020100178A
Inventor
Shuying Huang
Jiaxiang Liu
Wenying Wen
Yong Yang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wen Wenying Dr
Original Assignee
Wen Wenying Dr
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wen Wenying Dr filed Critical Wen Wenying Dr
Priority to AU2020100178A priority Critical patent/AU2020100178A4/en
Application granted granted Critical
Publication of AU2020100178A4 publication Critical patent/AU2020100178A4/en
Ceased legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

Abstract: A VGG-based multiple decision maps fusion method is proposed for infrared and visible image fusion. In our method, the infrared and visible images are first fed into a pre-trained VGG-16 model to extract features. Then, we design a feature representation method to construct saliency maps from those features. Next, multiple decision maps are constructed from the saliency maps with a guided filter. Finally, the final fused image is obtained by weighting the source images based on the multiple decision maps. To the best of our knowledge, this is the first introduction of decision maps in the field of infrared and visible image fusion. Experimental results demonstrate that the proposed method obtains better performance than state-of-the-art infrared and visible image fusion methods.

Description

BACKGROUND AND PURPOSE

[0001] Infrared and visible image fusion is an important technique in computer vision. An infrared image captures target information in low-light and nighttime scenes, but it is not clear enough and contains few details. A visible image can record the texture details of the scene; however, it has poor anti-interference ability. Through image fusion, different images of the same scene can be processed, and significant features can be extracted and integrated into a single image for human and machine perception. The fusion result of infrared and visible images can retain both the infrared target information and the visible texture information.
[0002] Over the past decades, many image fusion methods have been proposed. Classic methods include sparse representation (SR), discrete wavelet transform (DWT), nonsubsampled contourlet transform (NSCT), guided filtering fusion (GFF), convolutional sparse representation, and so on. Most of these methods rely on complex fusion rules and pixel activity level measurements that are costly to compute.
[0003] Deep learning can automatically learn pattern features and incorporate feature learning into the model-building process, and many image fusion methods have been developed on this basis. Liu et al. proposed a convolutional neural network (CNN) based image fusion method, which achieved good experimental results. However, the shallow features of the network are ignored, and some details of the source images are missing. Ma et al. proposed a generative adversarial network (FusionGAN) for image fusion. In FusionGAN, the generator produces fused images, and the discriminator determines whether the result is acceptable. However, because GAN convergence is difficult, the training and testing processes are relatively unstable, and some texture information disappears. Li et al. proposed a CNN-based network called DenseFuse, in which dense blocks are introduced to extract features; DenseFuse generates a fused image through an encoding and decoding network. However, this method requires a long training time, and the extracted features are still not comprehensive enough because the training process is not targeted at infrared and visible images. In general, image fusion based on deep learning requires a long training time and standard reference images, which are difficult to obtain for infrared and visible images.
[0004] We propose a feature representation method. In our method, we first extract the feature maps of different layers of the infrared and visible images through the pre-trained model. Then, based on the feature maps, we design a feature representation method to build saliency maps. Thirdly, we employ the combined saliency map as the guidance image and apply the guided filter to the source images to construct the decision maps for the two kinds of images. Finally, we define a fusion strategy over the multiple decision maps to obtain the fused image.
FEATURE EXTRACTION ALGORITHM DESCRIPTION

[0001] Among deep learning methods, if a deep network were trained to learn the characteristics of infrared and visible images, standard reference images would be needed, which are difficult to obtain in this field. Therefore, instead of using an end-to-end fusion strategy, we utilize the learning ability of a pre-trained model to extract the salient regions of the two source images. Since the VGG-16 depth model is trained on a large number of datasets, it has a strong feature extraction capability. VGG-16 has 13 convolution layers; the ReLU activation functions are not shown for brevity. The VGG-16 convolutional layers span five scales separated by four downsampling operations, and we select the first layer at each scale to extract more spatial features. The convolution operations of VGG-16 are shown as follows. Here, let a_k^{i,c} denote the feature map extracted by the i-th layer from the source image I_k (k = 1 for the infrared image, k = 2 for the visible image), and c is the channel index, c ∈ {1, 2, ..., C}.
a_k^{i,c} = φ_i(I_k)   Equation 1

where φ_i(·) represents a convolution layer in the VGG-16 network, and i ∈ {1, 2, 3, 4, 5} denotes conv_1_1, conv_2_1, conv_3_1, conv_4_1 and conv_5_1 of the different scales, respectively. As we know, in deep learning the neural network learns features spontaneously. The shallow layers of the network tend to detect the edges of the image, and the detected content is comprehensive. As the network gets deeper, the detected features become more abstract and a lot of spatial information is ignored. So the shallower layers conv_1_1 and conv_2_1 extract more spatial information, while the deeper layers conv_3_1, conv_4_1 and conv_5_1 extract more semantic information. In addition, the VGG-16 model contains downsampling operations, which make the spatial size of the deeper feature maps too small; this is not conducive to extracting the edges of the effective information regions. Therefore, conv_1_1 and conv_2_1 are chosen to extract features.
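For concreteness, a minimal Python sketch of this feature-extraction step (Equation 1) follows. It assumes a recent PyTorch/torchvision installation; the layer indices used for conv_1_1 and conv_2_1 inside torchvision's VGG-16, and the grayscale-to-RGB replication, are assumptions of this illustration rather than details taken from the patent.

import torch
import torchvision.models as models

# Load the pre-trained VGG-16 and keep only its convolutional part.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()

# Assumed positions of conv_1_1 and conv_2_1 inside vgg16.features.
LAYER_IDS = {"conv_1_1": 0, "conv_2_1": 5}

def extract_feature_maps(img_gray):
    """Return {layer name: feature map a_k^{i,c}} for one grayscale source image."""
    x = torch.from_numpy(img_gray).float().unsqueeze(0).unsqueeze(0) / 255.0
    x = x.repeat(1, 3, 1, 1)  # replicate to three channels for the RGB-trained model
    feats = {}
    with torch.no_grad():
        for idx, layer in enumerate(vgg):
            x = layer(x)
            for name, lid in LAYER_IDS.items():
                if idx == lid:
                    feats[name] = x.squeeze(0).numpy()  # shape (C, H, W)
            if idx >= max(LAYER_IDS.values()):
                break
    return feats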
FEATURE REPRESENTATION ALGORITHM DESCRIPTION

[0001] To extract the saliency features better, we design a feature representation method based on the feature maps. The saliency maps contain the saliency features of the infrared and visible images and can be used to assist in further obtaining the decision maps. The feature representation adopts the L2-norm to construct saliency maps of the infrared source image I_1 and the visible source image I_2.
[0002] Feature representations of the source images are obtained by using the L2-norm to measure the pixel activity level. Through the pixel activity level, we can determine the saliency features in the source images and obtain the saliency maps. Activity level measurement is an important step in image fusion. It has been shown that the L1-norm of the feature map a_k^{i,c}(x, y) can be used to calculate the activity level of the source images. However, the L1-norm causes sparseness in feature selection and some details are ignored, while the L2-norm makes it easier to select more features. Therefore, the L2-norm of the feature map a_k^{i,c}(x, y) is used to estimate the activity level of the image. The larger the value of the L2-norm, the better the sharpness of the image and the richer the details. The pixel activity level is reflected in the activity level map.
[0003] The activity level map A_k^i can be obtained by the following formula:

A_k^i(x, y) = || a_k^{i,:}(x, y) ||_2   Equation 2

[0004] The activity level map reflects the activity level coefficient of the image. The larger the coefficient, the more active the pixel is. We compare the activity level coefficients of the infrared and visible images to obtain the saliency features. Therefore, the saliency maps S^i(x, y) can be obtained from the activity level maps of the infrared and visible images.
S^i(x, y) = 1,  if A_1^i(x, y) > A_2^i(x, y)
S^i(x, y) = 0,  otherwise                      Equation 3

where A_1^i(x, y) and A_2^i(x, y) are the activity level maps obtained from the i-th convolution layer of the network for the infrared and visible input images, respectively.
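As a small sketch of Equations 2 and 3, the Python fragment below computes an activity level map as the channel-wise L2-norm of a feature map and compares the infrared and visible activity level maps to form a binary saliency map; the function names are illustrative.

import numpy as np

def activity_level_map(feature_map):
    """Equation 2: L2-norm over the channel dimension of a (C, H, W) feature map."""
    return np.linalg.norm(np.asarray(feature_map), axis=0)

def saliency_map(act_ir, act_vis):
    """Equation 3: 1 where the infrared activity level exceeds the visible one, else 0."""
    return (act_ir > act_vis).astype(np.float32)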
[0005] Due to the pooling operations in the network, the saliency maps of the infrared and visible images have different sizes at different layers. Therefore, an upsampling operation is used to ensure that the two saliency maps have the same size. To explore the differences among the saliency maps of different layers, Figure 1 shows the salient area maps of the camp image obtained with different convolution layers of VGG-16. Figure 1(a) is the saliency map obtained from the conv_1_1 layer, which contains more complete spatial information. The saliency map in Figure 1(b), obtained from conv_2_1, contains more texture information, while Figures 1(c) and (d), obtained from conv_3_1 and conv_4_1 respectively, contain more abstract semantic information and fewer details. Therefore, two saliency maps A and B are obtained with conv_1_1 and conv_2_1. Saliency map A records the spatial information of the source image, while saliency map B records the texture details of the source image. Therefore, we combine saliency maps A and B to get the combined saliency map.
[0006] The combined saliency map S_F(x, y) is obtained by an intersection strategy applied to the two saliency maps obtained with conv_1_1 and conv_2_1. The mathematical formula is described as follows:

S_F(x, y) = S_A(x, y) ∩ S_B(x, y)   Equation 4

[0007] To show that the features extracted from the conv_1_1 and conv_2_1 layers are more conducive to infrared and visible image fusion, we set up experiments that verify the effectiveness of the features extracted from conv_1_1 and conv_2_1 using five metrics.
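The intersection step of Equation 4 can be sketched as follows, assuming the two per-layer saliency maps are first brought to a common size; nearest-neighbour upsampling via cv2.resize is one possible choice and is used here only for illustration.

import cv2
import numpy as np

def combined_saliency_map(sal_conv1_1, sal_conv2_1):
    """Equation 4: S_F = S_A ∩ S_B after upsampling S_B to the size of S_A."""
    h, w = sal_conv1_1.shape
    sal_b = cv2.resize(sal_conv2_1, (w, h), interpolation=cv2.INTER_NEAREST)
    # For binary maps the intersection is the element-wise minimum.
    return np.minimum(sal_conv1_1, sal_b)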
MULTIPLE DECISION MAPS AND FUSION STRATEGY

[0001] Most fusion methods fuse the two kinds of source images by using a single decision map to capture the complementary information between the images. Because the information in an infrared image and a visible image is not fully complementary, a fusion result obtained with only one decision map cannot capture the complete and effective information of both images. Therefore, we employ the combined saliency map to construct multiple decision maps to realize the fusion of the source images. As shown in the multiple decision map fusion part, the fusion decision maps are constructed with the guided filtering method, which uses the combined saliency map as the guidance image to filter the two source images. Because the guided filtering algorithm keeps the edges of the output image similar to those of the guidance image, the gradient information of the filtered image is determined by the gradient information of the guidance image. Therefore, the saliency areas in the filtered image are consistent with the combined saliency map. Due to the different acquisition modes of the source images, the information in the images is not completely complementary. Therefore, we propose a fusion strategy over multiple decision maps that extracts effective information from each source image. The multiple decision maps can be obtained by the following formulas. The combined saliency map S_F(x, y) is used as the guidance image, and the infrared image I_1 and the visible image I_2 are used as the input images, respectively.
D_1 = G_{r1, e1}(I_1, S_F)   Equation 5

D_2 = G_{r2, e2}(I_2, S_F)   Equation 6

Here, G_{r, e}(·) represents the guided filtering operation, S_F is the combined saliency map used as the guidance image, and I_1 and I_2 represent the infrared image and the visible image, respectively. r_1, e_1, r_2, and e_2 are the parameters of the guided filter, with r_1 = r_2 = 10 and e_1 = e_2 = 0.1. D_1 and D_2 are the decision maps, which are normalized between zero and one, as shown below.
D_N^i = (D^i - min(D^i)) / (max(D^i) - min(D^i)),  i = 1, 2   Equation 7

[0002] In the fusion strategy, two initial fusion images are obtained from the infrared and visible images based on the multiple decision maps, and then the final fusion result is achieved by weighting these initial fusion images. The fusion strategy is defined as:

F(x, y) = Σ_{i=1,2} α_i [ I_1(x, y) D_N^i(x, y) + I_2(x, y) (1 - D_N^i(x, y)) ]   Equation 8

where I_1(x, y) and I_2(x, y) denote the infrared image and the visible image, D_N^i are the normalized decision maps, and α_i are the weights of the weighted-sum strategy. When i = 1, one decision map reconstructs the first initial fusion image, and when i = 2, the other decision map reconstructs the second initial fusion image. These two initial fusion images are fused by the weighted-sum fusion strategy to obtain the fused image.
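To make Equations 5 through 8 concrete, the sketch below uses a plain box-filter realization of the guided filter (He et al.) with the stated parameters r = 10 and e = 0.1. The equal weights used for the final weighted sum are an assumption of this illustration, since the document does not fix them numerically.

import cv2
import numpy as np

def guided_filter(guide, src, r=10, eps=0.1):
    """Box-filter guided filter G_{r,e}(src, guide); guide is the guidance image."""
    guide = guide.astype(np.float64)
    src = src.astype(np.float64)
    ksize = (2 * r + 1, 2 * r + 1)
    mean = lambda img: cv2.boxFilter(img, -1, ksize)
    mean_g, mean_s = mean(guide), mean(src)
    var_g = mean(guide * guide) - mean_g * mean_g
    cov_gs = mean(guide * src) - mean_g * mean_s
    a = cov_gs / (var_g + eps)
    b = mean_s - a * mean_g
    return mean(a) * guide + mean(b)

def normalize(d):
    """Equation 7: rescale a decision map into [0, 1]."""
    return (d - d.min()) / (d.max() - d.min() + 1e-12)

def fuse(ir, vis, sal_combined, r=10, eps=0.1, weights=(0.5, 0.5)):
    """Equations 5-8: decision maps via guided filtering, then weighted-sum fusion."""
    d1 = normalize(guided_filter(sal_combined, ir, r, eps))   # Equation 5, then 7
    d2 = normalize(guided_filter(sal_combined, vis, r, eps))  # Equation 6, then 7
    fused = np.zeros_like(ir, dtype=np.float64)
    for w, d in zip(weights, (d1, d2)):                       # Equation 8
        fused += w * (ir * d + vis * (1.0 - d))
    return fused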

Claims (4)

1. The procedures of image fusion are as follows:
[0001] The proposed method is introduced in detail. The schematic diagram of the proposed fusion method is illustrated in Figure 1. From Figure 1, we can see that the proposed fusion method consists of four steps.
[0002] Step 1: The two kinds of source images are fed into the pre-trained VGG model for feature extraction.
[0003] Step 2: We design a feature representation method to construct saliency maps by calculating the L2-norm along the channel direction of the feature maps to estimate the activity level of the pixels.
[0004] Step 3: We build multiple decision maps from the saliency maps; the combined saliency map is used as the guidance image to filter the infrared and visible images with a guided filter.
[0005] Step 4: The multiple decision maps are used to obtain the initial fusion results, which are then fused with a weighted-sum fusion strategy to achieve the final fused image, as illustrated in the sketch below.
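A compact driver tying the four steps above together might look like the following. It reuses the hypothetical helpers sketched in the description (extract_feature_maps, activity_level_map, saliency_map, combined_saliency_map, fuse); all of these names are illustrative assumptions rather than part of the patented implementation.

import cv2

def fuse_ir_visible(ir_path, vis_path):
    ir = cv2.imread(ir_path, cv2.IMREAD_GRAYSCALE)
    vis = cv2.imread(vis_path, cv2.IMREAD_GRAYSCALE)

    # Step 1: feature extraction with the pre-trained VGG-16 model.
    f_ir, f_vis = extract_feature_maps(ir), extract_feature_maps(vis)

    # Step 2: activity level maps (channel-wise L2-norm) and per-layer saliency maps.
    sal = {}
    for layer in ("conv_1_1", "conv_2_1"):
        sal[layer] = saliency_map(activity_level_map(f_ir[layer]),
                                  activity_level_map(f_vis[layer]))

    # Step 3: combined saliency map, resized to the source image size, as guidance image.
    s_f = combined_saliency_map(sal["conv_1_1"], sal["conv_2_1"])
    s_f = cv2.resize(s_f, (ir.shape[1], ir.shape[0]), interpolation=cv2.INTER_NEAREST)

    # Step 4: guided-filter decision maps and weighted-sum fusion.
    return fuse(ir.astype("float64"), vis.astype("float64"), s_f)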
2. The procedures of image feature extraction are as follows:
[0001] The VGG-16 convolutional layers span five scales separated by four downsampling operations, and we select the first layer at each scale to extract more spatial features. The convolution operations of VGG-16 are shown as follows. Here, let a_k^{i,c} denote the feature map extracted by the i-th layer from the source image I_k, and c is the channel index, c ∈ {1, 2, ..., C}.

a_k^{i,c} = φ_i(I_k)   Equation 1

where φ_i(·) represents a convolution layer in the VGG-16 network, and i ∈ {1, 2, 3, 4, 5} denotes conv_1_1, conv_2_1, conv_3_1, conv_4_1 and conv_5_1 of the different scales, respectively. conv_1_1 and conv_2_1 are selected to extract features.
3. The procedures of image feature representation are as follows:
[0001] Step 1: The feature representation adopts the L2-norm to construct saliency maps of the infrared source image I_1 and the visible source image I_2.

[0002] The activity level map A_k^i can be obtained by the following formula:

A_k^i(x, y) = || a_k^{i,:}(x, y) ||_2   Equation 2

[0003] Step 2: The activity level map reflects the activity level coefficient of the image. The larger the coefficient, the more active the pixel is. We compare the activity level coefficients of the infrared and visible images to obtain the saliency features. Therefore, the saliency maps S^i(x, y) can be obtained from the activity level maps of the infrared and visible images.
S^i(x, y) = 1,  if A_1^i(x, y) > A_2^i(x, y)
S^i(x, y) = 0,  otherwise                      Equation 3

where A_1^i(x, y) and A_2^i(x, y) are the activity level maps obtained from the i-th convolution layer of the network for the infrared and visible input images, respectively.

[0004] Step 3: The combined saliency map S_F(x, y) is obtained by an intersection strategy applied to the two saliency maps obtained with conv_1_1 and conv_2_1. The mathematical formula is described as follows:

S_F(x, y) = S_A(x, y) ∩ S_B(x, y)   Equation 4
4. The procedures of multiple decision maps and fusion strategy are as follows:
[0001] We employ the combined saliency map to construct multiple decision maps to realize the fusion of the source images. As shown in the multiple decision map fusion part of Figure 1, the fusion decision maps are constructed with the guided filtering method, which uses the combined saliency map as the guidance image to filter the two source images. Because the guided filtering algorithm keeps the edges of the output image similar to those of the guidance image, the gradient information of the filtered image is determined by the gradient information of the guidance image. Therefore, the saliency areas in the filtered image are consistent with the combined saliency map. Due to the different acquisition modes of the source images, the information in the images is not completely complementary. Therefore, we propose a fusion strategy over multiple decision maps that extracts effective information from each source image. The multiple decision maps can be obtained by the following formulas. The combined saliency map S_F(x, y) is used as the guidance image, and the infrared image I_1 and the visible image I_2 are used as the input images, respectively.
D_1 = G_{r1, e1}(I_1, S_F)   Equation 5

D_2 = G_{r2, e2}(I_2, S_F)   Equation 6

Here, G_{r, e}(·) represents the guided filtering operation, S_F is the combined saliency map used as the guidance image, and I_1 and I_2 represent the infrared image and the visible image, respectively. r_1, e_1, r_2, and e_2 are the parameters of the guided filter, with r_1 = r_2 = 10 and e_1 = e_2 = 0.1. D_1 and D_2 are the decision maps, which are normalized between zero and one, as shown below.

D_N^i = (D^i - min(D^i)) / (max(D^i) - min(D^i)),  i = 1, 2   Equation 7

[0002] In the fusion strategy, two initial fusion images are obtained from the infrared and visible images based on the multiple decision maps, and then the final fusion result is achieved by weighting these initial fusion images. The fusion strategy is defined as:

F(x, y) = Σ_{i=1,2} α_i [ I_1(x, y) D_N^i(x, y) + I_2(x, y) (1 - D_N^i(x, y)) ]   Equation 8

where I_1(x, y) and I_2(x, y) denote the infrared image and the visible image, D_N^i are the normalized decision maps, and α_i are the weights of the weighted-sum strategy. When i = 1, one decision map reconstructs the first initial fusion image, and when i = 2, the other decision map reconstructs the second initial fusion image. These two initial fusion images are fused by the weighted-sum fusion strategy to obtain the fused image.
AU2020100178A 2020-02-04 2020-02-04 Multiple decision maps based infrared and visible image fusion Ceased AU2020100178A4 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2020100178A AU2020100178A4 (en) 2020-02-04 2020-02-04 Multiple decision maps based infrared and visible image fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2020100178A AU2020100178A4 (en) 2020-02-04 2020-02-04 Multiple decision maps based infrared and visible image fusion

Publications (1)

Publication Number Publication Date
AU2020100178A4 true AU2020100178A4 (en) 2020-03-19

Family

ID=69804998

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2020100178A Ceased AU2020100178A4 (en) 2020-02-04 2020-02-04 Multiple decision maps based infrared and visible image fusion

Country Status (1)

Country Link
AU (1) AU2020100178A4 (en)

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111582066A (en) * 2020-04-21 2020-08-25 浙江大华技术股份有限公司 Heterogeneous face recognition model training method, face recognition method and related device
CN111652832A (en) * 2020-07-09 2020-09-11 南昌航空大学 Infrared and visible light image fusion method based on sliding window technology
CN111696075A (en) * 2020-04-30 2020-09-22 航天图景(北京)科技有限公司 Intelligent fan blade defect detection method based on double-spectrum image
CN111815550A (en) * 2020-07-04 2020-10-23 淮阴师范学院 Infrared and visible light image fusion method based on gray level co-occurrence matrix
CN111861958A (en) * 2020-07-10 2020-10-30 逢亿科技(上海)有限公司 Image fusion algorithm based on adaptive countermeasure system
CN112070111A (en) * 2020-07-28 2020-12-11 浙江大学 Multi-target detection method and system adaptive to multiband images
CN112115979A (en) * 2020-08-24 2020-12-22 深圳大学 Fusion method and device of infrared image and visible image
CN112288668A (en) * 2020-09-22 2021-01-29 西北工业大学 Infrared and visible light image fusion method based on depth unsupervised dense convolution network
CN112419301A (en) * 2020-12-03 2021-02-26 国网山西省电力公司大同供电公司 Power equipment defect diagnosis device and method based on multi-source data fusion
CN112836708A (en) * 2021-01-25 2021-05-25 绍兴图信物联科技有限公司 Image feature detection method based on Gram matrix and F norm
CN112884690A (en) * 2021-02-26 2021-06-01 中国科学院西安光学精密机械研究所 Infrared and visible light image fusion method based on three-scale decomposition
CN113222839A (en) * 2021-05-08 2021-08-06 华北电力大学 Infrared and visible light image fusion denoising method based on generation countermeasure network
CN113222877A (en) * 2021-06-03 2021-08-06 北京理工大学 Infrared and visible light image fusion method and application thereof in airborne photoelectric video
CN113255797A (en) * 2021-06-02 2021-08-13 通号智慧城市研究设计院有限公司 Dangerous goods detection method and system based on deep learning model
CN113298094A (en) * 2021-06-10 2021-08-24 安徽大学 RGB-T significance target detection method based on modal association and double-perception decoder
CN113592018A (en) * 2021-08-10 2021-11-02 大连大学 Infrared light and visible light image fusion method based on residual dense network and gradient loss
CN113781377A (en) * 2021-11-03 2021-12-10 南京理工大学 Infrared and visible light image fusion method based on antagonism semantic guidance and perception
CN113947555A (en) * 2021-09-26 2022-01-18 国网陕西省电力公司西咸新区供电公司 Infrared and visible light fused visual system and method based on deep neural network
CN114445730A (en) * 2021-11-23 2022-05-06 江苏集萃未来城市应用技术研究所有限公司 Station pedestrian temperature detection system based on infrared light and visible light
CN114529487A (en) * 2022-02-14 2022-05-24 安徽信息工程学院 Multi-source image fusion method and system
CN114926719A (en) * 2022-05-26 2022-08-19 大连理工大学 Hypergraph low-rank representation-based complex dynamic system perception feature fusion method
CN115170810A (en) * 2022-09-08 2022-10-11 南京理工大学 Visible light infrared image fusion target detection example segmentation method
CN115358963A (en) * 2022-10-19 2022-11-18 季华实验室 Image fusion method based on extended Gaussian difference and guided filtering
CN116091372A (en) * 2023-01-03 2023-05-09 江南大学 Infrared and visible light image fusion method based on layer separation and heavy parameters
CN116823694A (en) * 2023-08-31 2023-09-29 佛山科学技术学院 Infrared and visible light image fusion method and system based on multi-focus information integration
CN117351049A (en) * 2023-12-04 2024-01-05 四川金信石信息技术有限公司 Thermal imaging and visible light fusion measuring point registration guiding method, device and medium
CN117809146A (en) * 2023-12-11 2024-04-02 江南大学 Infrared and visible light image fusion method based on feature disentanglement representation
CN117952845A (en) * 2024-01-17 2024-04-30 大连理工大学 Robust infrared and visible light image fusion optimization method
CN118552424A (en) * 2024-07-30 2024-08-27 大连理工大学 Infrared and visible light image fusion method with balanced pixel-level and target-level characteristics
CN118552823B (en) * 2024-07-30 2024-09-24 大连理工大学 Infrared and visible light image fusion method of depth characteristic correlation matrix

Cited By (51)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111582066B (en) * 2020-04-21 2023-10-03 浙江大华技术股份有限公司 Heterogeneous face recognition model training method, face recognition method and related device
CN111582066A (en) * 2020-04-21 2020-08-25 浙江大华技术股份有限公司 Heterogeneous face recognition model training method, face recognition method and related device
CN111696075A (en) * 2020-04-30 2020-09-22 航天图景(北京)科技有限公司 Intelligent fan blade defect detection method based on double-spectrum image
CN111815550A (en) * 2020-07-04 2020-10-23 淮阴师范学院 Infrared and visible light image fusion method based on gray level co-occurrence matrix
CN111815550B (en) * 2020-07-04 2023-09-15 淮阴师范学院 Infrared and visible light image fusion method based on gray level co-occurrence matrix
CN111652832A (en) * 2020-07-09 2020-09-11 南昌航空大学 Infrared and visible light image fusion method based on sliding window technology
CN111652832B (en) * 2020-07-09 2023-05-12 南昌航空大学 Infrared and visible light image fusion method based on sliding window technology
CN111861958A (en) * 2020-07-10 2020-10-30 逢亿科技(上海)有限公司 Image fusion algorithm based on adaptive countermeasure system
CN112070111B (en) * 2020-07-28 2023-11-28 浙江大学 Multi-target detection method and system adapting to multi-band image
CN112070111A (en) * 2020-07-28 2020-12-11 浙江大学 Multi-target detection method and system adaptive to multiband images
CN112115979A (en) * 2020-08-24 2020-12-22 深圳大学 Fusion method and device of infrared image and visible image
CN112115979B (en) * 2020-08-24 2024-03-22 深圳大学 Fusion method and device of infrared image and visible image
CN112288668B (en) * 2020-09-22 2024-04-16 西北工业大学 Infrared and visible light image fusion method based on depth unsupervised dense convolution network
CN112288668A (en) * 2020-09-22 2021-01-29 西北工业大学 Infrared and visible light image fusion method based on depth unsupervised dense convolution network
CN112419301A (en) * 2020-12-03 2021-02-26 国网山西省电力公司大同供电公司 Power equipment defect diagnosis device and method based on multi-source data fusion
CN112836708A (en) * 2021-01-25 2021-05-25 绍兴图信物联科技有限公司 Image feature detection method based on Gram matrix and F norm
CN112836708B (en) * 2021-01-25 2022-05-13 绍兴图信物联科技有限公司 Image feature detection method based on Gram matrix and F norm
CN112884690A (en) * 2021-02-26 2021-06-01 中国科学院西安光学精密机械研究所 Infrared and visible light image fusion method based on three-scale decomposition
CN112884690B (en) * 2021-02-26 2023-01-06 中国科学院西安光学精密机械研究所 Infrared and visible light image fusion method based on three-scale decomposition
CN113222839A (en) * 2021-05-08 2021-08-06 华北电力大学 Infrared and visible light image fusion denoising method based on generation countermeasure network
CN113255797A (en) * 2021-06-02 2021-08-13 通号智慧城市研究设计院有限公司 Dangerous goods detection method and system based on deep learning model
CN113255797B (en) * 2021-06-02 2024-04-05 通号智慧城市研究设计院有限公司 Dangerous goods detection method and system based on deep learning model
CN113222877B (en) * 2021-06-03 2023-04-11 北京理工大学 Infrared and visible light image fusion method and application thereof in airborne photoelectric video
CN113222877A (en) * 2021-06-03 2021-08-06 北京理工大学 Infrared and visible light image fusion method and application thereof in airborne photoelectric video
CN113298094A (en) * 2021-06-10 2021-08-24 安徽大学 RGB-T significance target detection method based on modal association and double-perception decoder
CN113298094B (en) * 2021-06-10 2022-11-04 安徽大学 RGB-T significance target detection method based on modal association and double-perception decoder
CN113592018A (en) * 2021-08-10 2021-11-02 大连大学 Infrared light and visible light image fusion method based on residual dense network and gradient loss
CN113592018B (en) * 2021-08-10 2024-05-10 大连大学 Infrared light and visible light image fusion method based on residual dense network and gradient loss
CN113947555A (en) * 2021-09-26 2022-01-18 国网陕西省电力公司西咸新区供电公司 Infrared and visible light fused visual system and method based on deep neural network
CN113781377A (en) * 2021-11-03 2021-12-10 南京理工大学 Infrared and visible light image fusion method based on antagonism semantic guidance and perception
CN113781377B (en) * 2021-11-03 2024-08-13 南京理工大学 Infrared and visible light image fusion method based on antagonism semantic guidance and perception
CN114445730A (en) * 2021-11-23 2022-05-06 江苏集萃未来城市应用技术研究所有限公司 Station pedestrian temperature detection system based on infrared light and visible light
CN114529487A (en) * 2022-02-14 2022-05-24 安徽信息工程学院 Multi-source image fusion method and system
CN114529487B (en) * 2022-02-14 2024-09-13 安徽信息工程学院 Multi-source image fusion method and system
CN114926719B (en) * 2022-05-26 2024-06-14 大连理工大学 Hypergraph low-rank representation-based complex dynamic system perception feature fusion method
CN114926719A (en) * 2022-05-26 2022-08-19 大连理工大学 Hypergraph low-rank representation-based complex dynamic system perception feature fusion method
CN115170810A (en) * 2022-09-08 2022-10-11 南京理工大学 Visible light infrared image fusion target detection example segmentation method
CN115358963A (en) * 2022-10-19 2022-11-18 季华实验室 Image fusion method based on extended Gaussian difference and guided filtering
CN115358963B (en) * 2022-10-19 2022-12-27 季华实验室 Image fusion method based on extended Gaussian difference and guided filtering
CN116091372B (en) * 2023-01-03 2023-08-15 江南大学 Infrared and visible light image fusion method based on layer separation and heavy parameters
CN116091372A (en) * 2023-01-03 2023-05-09 江南大学 Infrared and visible light image fusion method based on layer separation and heavy parameters
CN116823694B (en) * 2023-08-31 2023-11-24 佛山科学技术学院 Infrared and visible light image fusion method and system based on multi-focus information integration
CN116823694A (en) * 2023-08-31 2023-09-29 佛山科学技术学院 Infrared and visible light image fusion method and system based on multi-focus information integration
CN117351049B (en) * 2023-12-04 2024-02-13 四川金信石信息技术有限公司 Thermal imaging and visible light fusion measuring point registration guiding method, device and medium
CN117351049A (en) * 2023-12-04 2024-01-05 四川金信石信息技术有限公司 Thermal imaging and visible light fusion measuring point registration guiding method, device and medium
CN117809146A (en) * 2023-12-11 2024-04-02 江南大学 Infrared and visible light image fusion method based on feature disentanglement representation
CN117952845A (en) * 2024-01-17 2024-04-30 大连理工大学 Robust infrared and visible light image fusion optimization method
CN117952845B (en) * 2024-01-17 2024-08-06 大连理工大学 Robust infrared and visible light image fusion optimization method
CN118552424A (en) * 2024-07-30 2024-08-27 大连理工大学 Infrared and visible light image fusion method with balanced pixel-level and target-level characteristics
CN118552823B (en) * 2024-07-30 2024-09-24 大连理工大学 Infrared and visible light image fusion method of depth characteristic correlation matrix
CN118552424B (en) * 2024-07-30 2024-09-24 大连理工大学 Infrared and visible light image fusion method with balanced pixel-level and target-level characteristics

Similar Documents

Publication Publication Date Title
AU2020100178A4 (en) Multiple decision maps based infrared and visible image fusion
Han et al. Reinforcement cutting-agent learning for video object segmentation
CN110555434B (en) Method for detecting visual saliency of three-dimensional image through local contrast and global guidance
CN112614077B (en) Unsupervised low-illumination image enhancement method based on generation countermeasure network
CN110175986B (en) Stereo image visual saliency detection method based on convolutional neural network
CN102903085B (en) Based on the fast image splicing method of corners Matching
CN108596108B (en) Aerial remote sensing image change detection method based on triple semantic relation learning
CN107644418B (en) Optic disk detection method and system based on convolutional neural networks
CN103177451B (en) Based on the self-adapting window of image border and the Stereo Matching Algorithm of weight
CN106488122A (en) A kind of dynamic auto focusing algorithm based on improved sobel method
CN106447708A (en) OCT eye fundus image data registration method
CN106127782B (en) A kind of image partition method and system
CN110443874B (en) Viewpoint data generation method and device based on convolutional neural network
CN103020933B (en) A kind of multisource image anastomosing method based on bionic visual mechanism
CN103927785B (en) A kind of characteristic point matching method towards up short stereoscopic image data
CN114004754A (en) Scene depth completion system and method based on deep learning
CN111062331B (en) Image mosaic detection method and device, electronic equipment and storage medium
CN102982334A (en) Sparse parallax obtaining method based on target edge features and gray scale similarity
CN108257165A (en) Image solid matching method, binocular vision equipment
CN111882516B (en) Image quality evaluation method based on visual saliency and deep neural network
Shen et al. A clustering game based framework for image segmentation
CN111476739B (en) Underwater image enhancement method, system and storage medium
CN116091524B (en) Detection and segmentation method for target in complex background
CN107403448A (en) Cost function generation method and cost function generating means
CN108509949A (en) Object detection method based on attention map

Legal Events

Date Code Title Description
FGI Letters patent sealed or granted (innovation patent)
MK22 Patent ceased section 143a(d), or expired - non payment of renewal fee or expiry