CN112419212B - Infrared and visible light image fusion method based on side window guide filtering - Google Patents

Infrared and visible light image fusion method based on side window guide filtering

Info

Publication number
CN112419212B
CN112419212B
Authority
CN
China
Prior art keywords
infrared
image
visible light
images
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011101778.3A
Other languages
Chinese (zh)
Other versions
CN112419212A (en)
Inventor
肖锐
石皓元
李明旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kale Micro Vision Technology Yunnan Co ltd
Original Assignee
Kale Micro Vision Technology Yunnan Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kale Micro Vision Technology Yunnan Co ltd filed Critical Kale Micro Vision Technology Yunnan Co ltd
Priority to CN202011101778.3A priority Critical patent/CN112419212B/en
Publication of CN112419212A publication Critical patent/CN112419212A/en
Application granted granted Critical
Publication of CN112419212B publication Critical patent/CN112419212B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/20 - Image enhancement or restoration using local operators
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10048 - Infrared image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an infrared and visible light image fusion method based on side window guided filtering, which belongs to the technical field of image processing and comprises the following steps: blurring the input infrared and visible light source images with a Gaussian filter to obtain base layers containing large-scale information; subtracting the base layer images from the infrared and visible light source images to obtain detail layer images containing small-scale information; calculating the correlation coefficient of the infrared and visible light source images to obtain the base layer fusion weight coefficients and fusing the base layer images; obtaining initial detail layer weight maps with the maximum absolute value rule, optimizing them with side window guided filtering, and fusing the detail layer images; and adding the fused base layer image and the fused detail layer image to obtain the final fused image. The invention realizes the fusion of infrared and visible light images, and the fused image maintains good contrast.

Description

Infrared and visible light image fusion method based on side window guide filtering
Technical Field
The invention relates to the technical field of image processing, and in particular to a method for fusing infrared and visible light images based on side window guided filtering.
Background
For the same application scene, image sensors operating in different wavebands reflect different scene information. Fusion of images from different spectral bands is one of the research hotspots in computer vision and image processing. Infrared light and visible light have different imaging principles. An infrared imaging sensor captures the thermal radiation emitted by objects; it is extremely sensitive to thermal targets but lacks background texture detail. A visible light image sensor captures more scene detail and texture information, but is susceptible to interference from the imaged scene: lighting conditions, fog, occlusion, and the like can severely degrade image quality. Fusing infrared and visible light images provides complementary information and is more useful for human observation and computer vision analysis. In recent years, infrared and visible light image fusion has been widely used in video fusion, night vision, biometric recognition, remote sensing, military, agriculture, and other fields.
Over the past decades, a large number of image fusion methods have been proposed and applied in different fields; methods based on multi-scale decomposition, sparse representation, and principal component analysis are the most common. Among them, multi-scale decomposition is currently the most widely used approach. However, these methods share common disadvantages such as low computational efficiency, low contrast in the fused image, insufficient target prominence, and susceptibility to halo and artifact phenomena.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention aims to provide an infrared and visible light image fusion method based on side window guided filtering. First, the input infrared and visible light images are decomposed at multiple scales with Gaussian filtering to obtain base layer images containing large-scale information and detail layer images containing small-scale information. Then, different fusion strategies are applied to the images at the different decomposition scales. The base layer fusion weight coefficients are obtained by calculating the correlation coefficient of the input infrared and visible light images, which ensures that the fused image has sufficient contrast. The detail layer weight map is first obtained by the maximum absolute value rule and then optimized by side window guided filtering to yield the detail layer fusion weight map. Finally, the final fused image is obtained by linear addition.
The invention provides an infrared and visible light image fusion method that combines Gaussian-filter multi-scale image decomposition with side window guided filtering for weight map optimization. Its main features are as follows:
1: A Gaussian filter is used as the multi-scale decomposition tool, so that information at different scales can be effectively separated. The input source images are decomposed into a base layer containing large-scale information and a detail layer containing small-scale information, and different fusion strategies are adopted at the different scale layers. The fused image is then reconstructed by linear addition, which enriches the information in the fused image while reducing algorithm complexity and improving efficiency.
2: A side-window-filtering-based scheme is adopted to effectively optimize the weight map for detail layer image fusion. The base layer weight coefficients are obtained by calculating the correlation coefficient of the input infrared and visible light images, which preserves the overall contrast of the fused image, yields a better visual effect, and improves the robustness of the algorithm. Meanwhile, the initial detail layer fusion weights obtained by the maximum absolute value rule are optimized with side window guided filtering. The original input images serve as the guide images for the side window guided filtering, and by setting the filter radius and standard deviation of the side window guided filter, different degrees of smoothing of the initial detail layer weights can be obtained, so that the salient information of the detail layer images is preserved to the greatest extent.
An infrared and visible light image fusion method based on side window guide filtering comprises the following steps:
Step 1: multiscale decomposition to obtain base layer
For the input infrared source image I and visible light source image V, each source image is processed with a Gaussian filter to obtain a base layer image containing large-scale information (the detail layer image containing small-scale information is then obtained in Step 2 by subtracting the base layer from the source image). The processing is as follows:
B_I = G(I, r_G, σ_G)
B_V = G(V, r_G, σ_G)
where G(·) denotes the Gaussian filtering operation, and r_G and σ_G denote the radius and standard deviation of the Gaussian filter, respectively. B_I and B_V are the infrared and visible light base layer images of the multi-scale decomposition, respectively.
The Gaussian filter is a filter commonly used in digital image processing, characterized by simple computation and a good smoothing effect. The input infrared and visible light images are decomposed at multiple scales by Gaussian filtering, and information at different scales can be effectively separated by setting different filter radius and standard deviation parameters. At the same time, the decomposition preserves image edge information at different scales, which benefits the final fusion result.
Step 2: multi-scale decomposition to obtain detail layer
The base layer images are subtracted from the infrared source image I and the visible light source image V to obtain detail layer images containing small-scale information:
D_I = I - B_I
D_V = V - B_V
where D_I and D_V are the infrared and visible light detail layer images of the multi-scale decomposition, respectively.
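As a minimal sketch of Steps 1 and 2, the Python code below performs the Gaussian base/detail split, assuming OpenCV and NumPy are available; the file names, filter radius, and standard deviation are illustrative placeholders, not parameters taken from this patent.

```python
import cv2
import numpy as np

def decompose(src, radius=15, sigma=5.0):
    """Split a grayscale image into a large-scale base layer and a
    small-scale detail layer using Gaussian filtering."""
    src = src.astype(np.float32)
    ksize = 2 * radius + 1                                # kernel size from radius r_G
    base = cv2.GaussianBlur(src, (ksize, ksize), sigma)   # B = G(src, r_G, sigma_G)
    detail = src - base                                   # D = src - B
    return base, detail

# Example usage on registered infrared / visible source images (placeholder paths)
I = cv2.imread("infrared.png", cv2.IMREAD_GRAYSCALE)
V = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)
B_I, D_I = decompose(I)
B_V, D_V = decompose(V)
```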
Step 3: base layer weight coefficient acquisition
The weight coefficients for base layer image fusion are obtained by calculating the correlation coefficient c(I, V) between the input infrared source image I and the visible light source image V. The weight coefficients w_I and w_V for fusing the infrared and visible light base layer images are then assigned according to this correlation coefficient of the source images. Obtaining the base layer fusion coefficients from the correlation coefficient ensures that the final fused image has good contrast and improves the robustness of the whole fusion algorithm.
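As a rough illustration of Step 3, the sketch below computes the correlation coefficient c(I, V) with NumPy and derives a pair of base layer weights from it. The mapping from c(I, V) to the weights shown here is only an assumed placeholder; the patented weight-assignment formula itself is not reproduced in this text.

```python
import numpy as np

def base_layer_weights(I, V):
    """Derive base layer fusion weights (w_I, w_V) from the correlation
    coefficient of the two source images."""
    x = I.astype(np.float64).ravel()
    y = V.astype(np.float64).ravel()
    c = np.corrcoef(x, y)[0, 1]       # c(I, V), in [-1, 1]
    # Placeholder mapping (NOT the patented formula): the two weights sum
    # to 1 and shift with the correlation between the source images.
    w_I = 0.5 * (1.0 + c)
    w_V = 1.0 - w_I
    return w_I, w_V
```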
Step 4: base layer weight coefficient acquisition
For the infrared and visible light detail layer images obtained in Step 2, the initial detail layer weight maps are obtained using the maximum absolute value rule:
P_I(x, y) = 1 if |D_I(x, y)| ≥ |D_V(x, y)|, and 0 otherwise
P_V(x, y) = 1 - P_I(x, y)
where P_I is the initial detail layer weight map of the infrared image and P_V is the initial detail layer weight map of the visible light image. The initial detail layer weights are then optimized with side window guided filtering:
W_I = SWGF(P_I, I, r_SWGF, σ_SWGF)
W_V = SWGF(P_V, V, r_SWGF, σ_SWGF)
where SWGF(·) denotes the side window guided filtering operation, r_SWGF denotes the size of the side window guided filter, and σ_SWGF denotes its standard deviation, which controls the degree of blurring. I and V are the input infrared and visible light source images, used here as the guide images for the side window guided filtering. W_I and W_V are the weight maps for fusing the infrared and visible light detail layer images, respectively.
In extracting the initial detail layer weight maps, the maximum absolute value rule effectively extracts the salient information of the input infrared and visible light images and highlights the targets. Optimizing the initial detail layer weight maps with side window guided filtering greatly improves the visual effect of the fused image and avoids halo and artifact phenomena. The original input images serve as the guide images for the side window guided filtering, and by setting its filter radius and standard deviation parameters, different degrees of smoothing of the initial detail layer weights can be obtained, which benefits the final fusion result.
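A sketch of Step 4 follows. The maximum absolute value rule below is the standard per-pixel comparison of |D_I| and |D_V|; for the refinement step, a conventional guided filter from the opencv-contrib package (cv2.ximgproc.guidedFilter) is used only as a stand-in, since the side window guided filter of Yin et al. is not part of stock OpenCV and would replace that call in a faithful implementation. The radius and eps values are illustrative, not the patent's r_SWGF and σ_SWGF.

```python
import cv2
import numpy as np

def detail_layer_weights(D_I, D_V, I, V, radius=7, eps=1e-3):
    """Initial weight maps by the maximum absolute value rule, refined by an
    edge-preserving filter guided by the source images."""
    # Maximum absolute value rule: per pixel, prefer the stronger detail.
    P_I = (np.abs(D_I) >= np.abs(D_V)).astype(np.float32)
    P_V = 1.0 - P_I
    # Edge-aware refinement of the binary maps (guided filter stand-in for SWGF).
    g_I = I.astype(np.float32) / 255.0
    g_V = V.astype(np.float32) / 255.0
    W_I = cv2.ximgproc.guidedFilter(g_I, P_I, radius, eps)
    W_V = cv2.ximgproc.guidedFilter(g_V, P_V, radius, eps)
    return W_I, W_V
```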
Step 5: preliminary fusion base layer and detail layer
And (3) preliminarily fusing the basic layer and the detail layer by using the weights of the infrared and visible light basic layers and the detail layers obtained in the step (3) and the step (4), wherein the process is as follows:
wherein, Base layer fusion image representing infrared and visible light images,/>A detail layer fusion image representing infrared and visible light images.
Step 6: weighted reconstruction of fused images
The fused base layer and detail layer images obtained in Step 5 are added to obtain the final fused image:
F = B_F + D_F
where F is the final fusion result.
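Steps 5 and 6 reduce to weighted sums followed by an addition. A minimal sketch, assuming the layers and weights computed in the earlier sketches (hypothetical helper names, not part of the patent) and 8-bit source images:

```python
import numpy as np

def reconstruct(B_I, B_V, D_I, D_V, w_I, w_V, W_I, W_V):
    """Fuse base and detail layers, then rebuild the output image."""
    B_F = w_I * B_I + w_V * B_V      # base layer fusion (Step 5)
    D_F = W_I * D_I + W_V * D_V      # detail layer fusion (Step 5)
    F = B_F + D_F                    # final reconstruction (Step 6)
    return np.clip(F, 0, 255).astype(np.uint8)
```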
The beneficial effects of the invention are as follows. For the infrared and visible light image fusion task, the input infrared and visible light source images are decomposed at multiple scales with Gaussian filtering to obtain a base layer containing large-scale information and a detail layer containing small-scale information. During base layer fusion, the correlation coefficient of the source images is used to assign the weights, which ensures the contrast and robustness of the final fused image. During detail layer fusion, side window guided filtering is used to optimize the initial detail layer weights obtained by the maximum absolute value rule, which greatly improves the visual effect of the fused image and avoids halo and artifact phenomena. Given registered infrared and visible light images of the same scene, the invention performs effective multi-scale fusion and produces a high-quality fused image.
Drawings
FIG. 1 is a flowchart of an algorithm;
FIG. 2 (a) is an input infrared image;
Fig. 2 (b) is an input visible light image;
FIG. 3 (a) is a base layer of a multi-scale decomposition of an infrared image;
FIG. 3 (b) is a detail layer of the multi-scale decomposition of an infrared image;
FIG. 4 (a) is a base layer of a multi-scale decomposition of a visible light image;
FIG. 4 (b) is a detail layer of a multi-scale decomposition of a visible light image;
FIG. 5 (a) is an initial weight of an infrared image detail layer;
FIG. 5 (b) is an initial weight of the visible image detail layer;
FIG. 6 (a) is a fusion weighting map of infrared images;
Fig. 6 (b) is a fusion weight map of visible light images;
FIG. 7 (a) is a base layer fusion diagram;
FIG. 7 (b) is a detail layer fusion diagram;
Fig. 8 is an infrared and visible fusion image result.
Detailed Description
The technical scheme of the invention is clearly and completely described below by means of specific embodiments in combination with the accompanying drawings.
Figure 1 shows a flow chart of the present invention.
Fig. 2 is an example of a set of infrared and visible images of the same scene, where fig. 2 (a) is an input infrared image and fig. 2 (b) is an input visible image.
Figs. 3-4 illustrate the layers of the multi-scale decomposition using Gaussian filtering. Fig. 3 (a) and Fig. 4 (a) are the base layers obtained by decomposing the infrared and visible light source images, respectively, and mainly contain the large-scale information of the images. Fig. 3 (b) and Fig. 4 (b) are the detail layers obtained by decomposing the infrared and visible light source images, respectively, and mainly contain the small-scale information of the images. Fig. 5 (a) and Fig. 5 (b) are the initial weights of the infrared and visible light detail layers, respectively, reflecting the regions of each image most salient to human vision. Fig. 6 (a) and Fig. 6 (b) are the detail layer fusion weight maps obtained after side window guided filtering of the initial weight maps of the infrared and visible light images; this overcomes artifacts and halos and gives a better subjective visual effect.
The principle of side window guided filtering is described in Yin H, Gong Y, Qiu G, et al. Side window guided filtering [J]. Signal Processing, 2019: 315-330.
In this embodiment, when the input infrared and visible light source images are decomposed at multiple scales by Gaussian filtering, the filter radius r_G and standard deviation σ_G are set accordingly, and the corresponding base layer and detail layer images are obtained.
The fusion coefficient of the base layer is obtained by calculating the correlation coefficient between the input infrared and visible light source images, and the base layer fusion image is obtained by linear weighted summation of the infrared and visible light base layer images according to the numerical value of the correlation coefficient.
In this embodiment, the initial detail layer weight map is obtained by the maximum absolute value rule, and the detail layer fusion weight map is then obtained by optimizing the initial weight map with side window guided filtering. The input infrared and visible light source images serve as the guide images for the side window guided filtering, with the filter size r_SWGF and standard deviation σ_SWGF set accordingly.
The infrared and visible light detail layer fusion image is obtained by linear weighted summation of the detail layer images with the detail layer fusion weight maps.
Finally, the final fused image is obtained by linearly adding the base layer fusion image and the detail layer fusion image of the infrared and visible light images. The final fusion result is shown in Fig. 8: the fused image has good contrast, the salient information of the infrared and visible light images is well preserved, and the subjective visual effect is good.
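For reference, the embodiment can be chained end to end with the helper functions sketched in the Description above (decompose, base_layer_weights, detail_layer_weights, reconstruct, all hypothetical names, not part of the patent); file names and parameters are illustrative only, and a true side window guided filter would replace the guided-filter stand-in.

```python
import cv2

# Registered infrared / visible source images of the same scene (placeholder paths).
I = cv2.imread("infrared.png", cv2.IMREAD_GRAYSCALE)
V = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)

B_I, D_I = decompose(I)                            # Steps 1-2: multi-scale decomposition
B_V, D_V = decompose(V)
w_I, w_V = base_layer_weights(I, V)                # Step 3: correlation-based base weights
W_I, W_V = detail_layer_weights(D_I, D_V, I, V)    # Step 4: max-abs rule + guided refinement
F = reconstruct(B_I, B_V, D_I, D_V, w_I, w_V, W_I, W_V)  # Steps 5-6: weighted fusion and addition
cv2.imwrite("fused.png", F)
```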

Claims (7)

1. The infrared and visible light image fusion method based on side window guide filtering is characterized by comprising the following steps of:
step one: multi-scale decomposition: a Gaussian filter is applied to the input infrared source image I and visible light source image V to obtain base layer images containing large-scale information;
Step two: multi-scale decomposition: the base layer images are subtracted from the infrared source image I and the visible light source image V to obtain detail layer images containing small-scale information;
step three: obtaining a weighting coefficient of the base layer image fusion by calculating a correlation coefficient between an input infrared source image I and a visible light source image V;
step four: respectively distributing weight coefficients fused by the infrared base layer image and the visible light base layer image according to the correlation coefficient of the source image;
The fourth step comprises the following steps:
wherein w_I and w_V correspond respectively to the weight coefficients for fusing the infrared and visible light base layer images, and c(I, V) represents the correlation coefficient of the input source images;
Step five: obtaining a detail layer initial weight map by utilizing a maximum absolute value rule for the detail layer image containing the small-scale information obtained in the step two;
Step six: the initial detail layer weight map obtained in step five is optimized using side window guided filtering;
in the sixth step, the process is as follows:
W_I = SWGF(P_I, I, r_SWGF, σ_SWGF)
W_V = SWGF(P_V, V, r_SWGF, σ_SWGF)
wherein SWGF(·) represents the side window guided filtering operation, r_SWGF represents the size of the side window guided filter, and σ_SWGF represents the standard deviation of the side window guided filter, which controls the degree of blurring; I and V are the input infrared and visible light source images, used here as the guide images for the side window guided filtering; W_I and W_V correspond respectively to the weight maps for fusing the infrared and visible light detail layer images; P_I represents the initial detail layer weight map of the infrared image, and P_V represents the initial detail layer weight map of the visible light image;
step seven: the base layers and detail layers are preliminarily fused using the infrared and visible light base layer weights and detail layer weights obtained in step four and step six;
Step eight: the base layer and detail layer fusion images obtained in step seven are added to obtain the final fused image result.
2. The method for fusing infrared and visible light images based on side window guided filtering as claimed in claim 1, wherein in the first step, the process is as follows:
B_I = G(I, r_G, σ_G)
B_V = G(V, r_G, σ_G)
wherein G represents the Gaussian filtering operation, and r_G and σ_G represent the radius and standard deviation of the Gaussian filter, respectively; B_I and B_V correspond to the multi-scale decomposed infrared and visible light base layer images, respectively.
3. The method for fusing infrared and visible light images based on side window guided filtering as claimed in claim 1, wherein in the second step, the process is as follows:
D_I = I - B_I
D_V = V - B_V
wherein D_I and D_V correspond to the multi-scale decomposed infrared and visible light detail layer images, respectively; B_I and B_V correspond to the multi-scale decomposed infrared and visible light base layer images, respectively.
4. The method for fusing infrared and visible light images based on side window guided filtering as claimed in claim 1, wherein the third step comprises the following steps:
where c (I, V) represents the correlation coefficient of the input source image.
5. The method for fusing infrared and visible light images based on side window guided filtering as claimed in claim 1, wherein in the fifth step, the process is as follows:
wherein P_I represents the initial detail layer weight map of the infrared image, and P_V represents the initial detail layer weight map of the visible light image; D_I and D_V correspond to the multi-scale decomposed infrared and visible light detail layer images, respectively.
6. The method for fusing infrared and visible light images based on side window guided filtering as claimed in claim 1, wherein in the seventh step, the process is as follows:
B_F = w_I · B_I + w_V · B_V
D_F = W_I · D_I + W_V · D_V
wherein B_F represents the base layer fusion image of the infrared and visible light images, and D_F represents the detail layer fusion image of the infrared and visible light images; B_I and B_V correspond to the multi-scale decomposed infrared and visible light base layer images, respectively; D_I and D_V correspond to the multi-scale decomposed infrared and visible light detail layer images, respectively; w_I and w_V correspond respectively to the weight coefficients for fusing the infrared and visible light base layer images; W_I and W_V correspond respectively to the weight maps for fusing the infrared and visible light detail layer images.
7. The method for fusing infrared and visible light images based on side window guided filtering as claimed in claim 1, wherein in the eighth step, the process is as follows:
F = B_F + D_F
wherein F is the final fusion result; B_F represents the base layer fusion image of the infrared and visible light images, and D_F represents the detail layer fusion image of the infrared and visible light images.
CN202011101778.3A 2020-10-15 2020-10-15 Infrared and visible light image fusion method based on side window guide filtering Active CN112419212B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011101778.3A CN112419212B (en) 2020-10-15 2020-10-15 Infrared and visible light image fusion method based on side window guide filtering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011101778.3A CN112419212B (en) 2020-10-15 2020-10-15 Infrared and visible light image fusion method based on side window guide filtering

Publications (2)

Publication Number Publication Date
CN112419212A CN112419212A (en) 2021-02-26
CN112419212B 2024-05-17

Family

ID=74855280

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011101778.3A Active CN112419212B (en) 2020-10-15 2020-10-15 Infrared and visible light image fusion method based on side window guide filtering

Country Status (1)

Country Link
CN (1) CN112419212B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113129243A (en) * 2021-03-10 2021-07-16 同济大学 Blood vessel image enhancement method and system based on infrared and visible light image fusion
CN113763368B (en) * 2021-09-13 2023-06-23 中国空气动力研究与发展中心超高速空气动力研究所 Multi-type damage detection characteristic analysis method for large-size test piece
CN113793318B (en) * 2021-09-13 2023-04-07 中国空气动力研究与发展中心超高速空气动力研究所 Multi-region complex damage defect characteristic comprehensive analysis method
CN113935922B (en) * 2021-10-21 2024-05-24 燕山大学 Infrared and visible light image characteristic enhancement fusion method
CN114092369A (en) * 2021-11-19 2022-02-25 中国直升机设计研究所 Image fusion method based on visual saliency mapping and least square optimization
CN114757912A (en) * 2022-04-15 2022-07-15 电子科技大学 Material damage detection method, system, terminal and medium based on image fusion
CN115578304B (en) * 2022-12-12 2023-03-10 四川大学 Multi-band image fusion method and system combining saliency region detection
CN117745555A (en) * 2023-11-23 2024-03-22 广州市南沙区北科光子感知技术研究院 Fusion method of multi-scale infrared and visible light images based on double partial differential equation

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104809734A (en) * 2015-05-11 2015-07-29 中国人民解放军总装备部军械技术研究所 Infrared image and visible image fusion method based on guide filtering
KR101788660B1 (en) * 2016-08-12 2017-10-20 포항공과대학교 산학협력단 Apparatus and method for removing haze in a single image
CN107169944A (en) * 2017-04-21 2017-09-15 北京理工大学 A kind of infrared and visible light image fusion method based on multiscale contrast
CN111179209A (en) * 2019-12-20 2020-05-19 上海航天控制技术研究所 Infrared and visible light image information fusion method and device based on feature guidance
CN111223069A (en) * 2020-01-14 2020-06-02 天津工业大学 Image fusion method and system

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
A General Perceptual Infrared and Visible Image Fusion Framework Based on Linear Filter and Side Window Filtering Technology; Huibin Yan et al.; IEEE Access; 20191223; pp. 3029-3041 *
Multi-focus Image Fusion Based on Side Window Filtering Technique and Majority Filter; Rui Tao et al.; 2019 IEEE 5th International Conference on Computer and Communications (ICCC); 20200413; pp. 327-331 *
Side Window Filtering; Hui Yin et al.; 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 20200109; pp. 8750-8758 *
Image fusion based on side window filtering and block Bezier interpolation; Qian Jin et al.; Journal of Changchun University of Science and Technology (Natural Science Edition); 20200630; pp. 7-12 *

Also Published As

Publication number Publication date
CN112419212A (en) 2021-02-26

Similar Documents

Publication Publication Date Title
CN112419212B (en) Infrared and visible light image fusion method based on side window guide filtering
CN111209810B (en) Boundary frame segmentation supervision deep neural network architecture for accurately detecting pedestrians in real time through visible light and infrared images
CN108230264B (en) Single image defogging method based on ResNet neural network
CN112733950A (en) Power equipment fault diagnosis method based on combination of image fusion and target detection
CN105809640B (en) Low illumination level video image enhancement based on Multi-sensor Fusion
WO2021098083A1 (en) Multispectral camera dynamic stereo calibration algorithm based on salient feature
CN109754384B (en) Infrared polarization image fusion method of uncooled infrared focal plane array
CN112184604A (en) Color image enhancement method based on image fusion
Choi et al. Attention-based multimodal image feature fusion module for transmission line detection
CN109977834B (en) Method and device for segmenting human hand and interactive object from depth image
CN116311254B (en) Image target detection method, system and equipment under severe weather condition
CN115330653A (en) Multi-source image fusion method based on side window filtering
CN114612359A (en) Visible light and infrared image fusion method based on feature extraction
CN117392496A (en) Target detection method and system based on infrared and visible light image fusion
CN114972748A (en) Infrared semantic segmentation method capable of explaining edge attention and gray level quantization network
CN117392153B (en) Pancreas segmentation method based on local compensation and multi-scale adaptive deformation
Xing et al. Multi-level adaptive perception guidance based infrared and visible image fusion
CN114067273A (en) Night airport terminal thermal imaging remarkable human body segmentation detection method
CN117727046A (en) Novel mountain torrent front-end instrument and meter reading automatic identification method and system
CN110827375B (en) Infrared image true color coloring method and system based on low-light-level image
CN116029954A (en) Image fusion method and device
CN116543165A (en) Remote sensing image fruit tree segmentation method based on dual-channel composite depth network
CN116109829A (en) Coral reef water area image segmentation method based on fusion network
CN113537397B (en) Target detection and image definition joint learning method based on multi-scale feature fusion
CN113689399B (en) Remote sensing image processing method and system for power grid identification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant