CN113902659A - Infrared and visible light fusion method based on significant target enhancement - Google Patents

Infrared and visible light fusion method based on significant target enhancement

Info

Publication number
CN113902659A
Authority
CN
China
Prior art keywords
image
visible light
gamma
infrared
enhancement
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111083539.4A
Other languages
Chinese (zh)
Inventor
刘日升
刘晋源
仲维
樊鑫
罗钟铉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian University of Technology filed Critical Dalian University of Technology
Priority to CN202111083539.4A priority Critical patent/CN113902659A/en
Publication of CN113902659A publication Critical patent/CN113902659A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G06T5/92 Dynamic range modification of images or parts thereof based on global image properties
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10048 Infrared image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the field of image processing and computer vision, and relates to an infrared and visible light fusion method based on significant target enhancement. The system is easy to construct: acquisition of the input data is completed with a stereo binocular infrared camera and a visible light camera, and the program is simple and easy to implement. Exploiting the different imaging principles of infrared and visible light cameras, the input images are decomposed by filtering into a background layer and a detail layer; a salient-pixel-based enhanced fusion method is designed for the background layer and an image-gradient-based enhanced fusion algorithm for the detail layer. The algorithm effectively enhances the quality of the fused image, retains the salient information of both source images, and achieves real-time performance through GPU acceleration.

Description

Infrared and visible light fusion method based on significant target enhancement
Technical Field
The invention belongs to the field of image processing and computer vision, and relates to an infrared and visible light fusion method based on significant target enhancement.
Background
Binocular stereo vision technology in the visible light band has become increasingly mature. Visible light imaging provides rich color information and detailed texture, so stereo matching information between binocular images can be obtained quickly and accurately, yielding accurate scene depth information. However, visible light imaging suffers from greatly reduced image quality and matching accuracy under insufficient illumination, rainstorm, heavy fog, and similar conditions. Establishing a color image fusion system that exploits the complementarity of information sources in different wavelength bands is therefore an effective way to achieve more credible image perception in extreme environments. For example, if a multiband stereo vision system is formed from a visible light binocular camera and an infrared binocular camera, the advantage that infrared imaging is unaffected by fog, rain, snow, and illumination can be used to compensate for the deficiencies of visible light imaging, thereby obtaining more complete and accurate fused information.
Multi-modal image fusion technology is an image processing framework that exploits the advantages of different sensors and fuses their outputs with specific algorithms or rules to obtain a highly reliable, visually friendly result. Compared with the single-source nature of same-modality fusion, multi-modal image fusion captures more image information and has gradually become an indispensable tool for forest fire monitoring, unmanned driving, military surveillance, and lunar exploration. It aims to exploit the imaging differences and complementarity of sensors in different modalities, extract the image information of each modality to the greatest extent, and fuse the source images of different modalities into a composite image that is rich in information and high in fidelity. Multi-modal image fusion therefore yields a more comprehensive understanding of the images and more accurate localization. In recent years, most fusion methods have been studied and designed in the transform domain and do not consider the multi-scale detail information of images, which leads to loss of detail in the fused image, as in the infrared and visible light fusion system and image fusion device disclosed in patent publication CN208240087U [China]. Therefore, the present method performs mathematical modeling of the infrared and visible light images and solves the resulting optimization problem, realizing detail enhancement and artifact removal while retaining the effective information of the infrared and visible light images.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a real-time multi-modal image fusion algorithm based on salient background and target enhancement. The infrared and visible light images are decomposed by filtering into a background layer and a detail layer; saliency-enhanced pixel-distribution fusion is applied to the background layer and target-gradient-enhanced fusion to the detail layer, and real-time multi-modal image fusion is finally achieved on a GPU.
The specific technical scheme of the invention comprises the following steps:
an infrared and visible light fusion method based on significant target enhancement comprises the following steps:
the first step is as follows: acquiring registered infrared and visible light images:
1-1) respectively calibrating each lens and the respective system of the visible light binocular camera and the infrared binocular camera;
1-2) respectively calibrating each infrared camera and each visible light camera by using the Zhang Zhengyou calibration method to obtain internal parameters such as focal length and principal point position of each camera, and external parameters such as rotation and translation;
1-3) calculating the positional relation of the same plane in the visible light image and the infrared image by using the rotation-translation (RT) obtained from joint calibration and the detected checkerboard corners, and registering the visible light image to the infrared image with a homography matrix.
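As an illustration of step 1-3), the following sketch (in Python with OpenCV, which the patent does not prescribe) estimates a homography from matched checkerboard corners and warps the visible light image onto the infrared image plane; all function and variable names here are illustrative assumptions, not part of the patent.

import cv2
import numpy as np

def register_visible_to_infrared(visible_img, vis_corners, ir_corners, ir_shape):
    # vis_corners / ir_corners: N x 2 float32 arrays of matched checkerboard
    # corner coordinates detected in the visible and infrared images.
    H, _ = cv2.findHomography(vis_corners, ir_corners, cv2.RANSAC)
    # Warp the visible image into the infrared image plane (dsize is width, height).
    registered = cv2.warpPerspective(visible_img, H, (ir_shape[1], ir_shape[0]))
    return registered, H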
The second step: converting the color space of the visible light image from RGB to HSV, extracting the lightness information of the color image as the input of image fusion, and keeping the original hue and saturation of the color image;
2-1) since the visible light image has three RGB channels, converting the RGB color space into the HSV color space, extracting the V (brightness) information of the visible light image to be fused with the infrared image, and retaining the H (hue) and S (saturation) of the visible light image; the specific conversion is as follows:
R′ = R/255, G′ = G/255, B′ = B/255
Cmax=max(R′,G′,B′)
Cmin=min(R′,G′,B′)
Δ=Cmax-Cmin
V=Cmax
2-2) extracting the V (brightness) channel as the visible light input, and retaining H (hue) and S (saturation) in corresponding matrices so that the color information can be restored after fusion.
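A minimal sketch of this color-space step, assuming OpenCV is used (the patent names no library); the helper name is illustrative:

import cv2

def split_hsv(bgr_img):
    # OpenCV stores color images as BGR; convert to HSV and split the channels.
    hsv = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    # V becomes the visible-light input to the fusion; H and S are kept
    # unchanged for color restoration after fusion.
    return v, h, s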
The third step: mutual-guided filtering decomposition is performed on the input infrared image and the color-space-converted visible light image, decomposing each image into a background layer and a detail layer; the structural information of the image is described on the background layer, and the gradient and texture information on the detail layer;
B=M(I,V),D=(I,V)-B
wherein B represents the background layer, D represents the detail layer, and M represents the mutual-guided filtering;
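One possible sketch of this decomposition, using the guided filter from opencv-contrib as a stand-in for the mutual-guided filter (the exact filter, radius, and eps values are assumptions, not taken from the patent):

import cv2

def decompose(ir, vis, radius=15, eps=0.01):
    # ir, vis: single-channel float32 images in [0, 1].
    # Each image is smoothed with the other image as the guide, which is one
    # plausible reading of the mutual-guided filtering M(I, V).
    b_ir = cv2.ximgproc.guidedFilter(vis, ir, radius, eps)
    b_vis = cv2.ximgproc.guidedFilter(ir, vis, radius, eps)
    d_ir = ir - b_ir      # detail layer D = (I, V) - B
    d_vis = vis - b_vis
    return (b_ir, b_vis), (d_ir, d_vis)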
the fourth step: a method based on a saliency map is designed to fuse a background layer, differences are made between each pixel point and all the pixel points of the whole world, absolute values are taken, and then accumulation is carried out, wherein the formula is as follows:
S(p)=|I(p)-I1|+|I(p)-I2|+|I(p)-I3|+…+|I(p)-I(N)|
S(p) = Σj Mj·|I(p) - Ij|
wherein S(p) represents the saliency value of pixel p, N represents the total number of pixels in the image, M represents the histogram statistics (Mj is the number of pixels with gray value Ij), and I represents the pixel values in the image;
based on S(p), the saliency value of each pixel can be obtained, and the saliency value update formula is as follows:
S(p) = Σj Mj·D(I(p), Ij)
wherein D represents the color-distance calculation parameter; according to the obtained saliency values, the weight of the saliency map used for background-layer fusion is obtained:
Wj = Sj/(S1 + S2), j = 1, 2
wherein W represents the weight and Sj represents the saliency value at the corresponding pixel; the decomposed infrared image and visible light image are then fused by linear weighting with the saliency-map weights, calculated by the following formula:
B=0.5*(0.5+I*(W1-W2)*0.5)+0.5*(0.5+V*(W2-W1)*0.5)
wherein I, V represent the input infrared image and visible light image, respectively, and W1, W2 represent the significant weights taken on the infrared image and visible light image, respectively;
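The saliency computation and weighting above could be prototyped as follows; the histogram-accelerated saliency and the Wj = Sj/(S1 + S2) normalization are interpretations of formulas that appear only as images in the original publication, and applying the weighting rule to the background layers is likewise an assumption:

import numpy as np

def saliency_map(img_u8):
    # Histogram-accelerated global contrast: the saliency of gray level i is
    # the sum over gray levels j of hist[j] * |i - j| (one reading of S(p) above).
    hist = np.bincount(img_u8.ravel(), minlength=256).astype(np.float64)
    levels = np.arange(256, dtype=np.float64)
    sal_per_level = np.abs(levels[:, None] - levels[None, :]) @ hist
    sal = sal_per_level[img_u8]
    return sal / (sal.max() + 1e-12)      # normalized to [0, 1]

def fuse_background(b_ir, b_vis, ir_u8, vis_u8):
    # Saliency weights W1, W2 and the linear combination
    # B = 0.5*(0.5 + I*(W1-W2)*0.5) + 0.5*(0.5 + V*(W2-W1)*0.5),
    # applied here to the background layers (values in [0, 1]).
    s1, s2 = saliency_map(ir_u8), saliency_map(vis_u8)
    w1 = s1 / (s1 + s2 + 1e-12)
    w2 = 1.0 - w1
    return 0.5 * (0.5 + b_ir * (w1 - w2) * 0.5) + 0.5 * (0.5 + b_vis * (w2 - w1) * 0.5)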
the fifth step: then, carrying out a pixel fusion strategy of gradient enhancement on a detail layer obtained after object differentiation, designing a gradient enhancement algorithm, and inputting detail images phi I and phi V of an infrared image and a visible light image to realize gradient enhancement; e is the gradient enhancement operator, i and j are the locations of the pixels; max is the operator taking the maximum;
E(ΦI)=max(max(ΦI(i+1,j+1),ΦI(i,j)))
E(ΦV)=max(max(ΦV(i+1,j+1),ΦV(i,j)))
the fusion result D of the layers of detail can be expressed as:
D=E(ΦI)+E(ΦV)
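A direct transcription of the gradient-enhancement operator and the detail-layer fusion rule; boundary handling by edge replication is an added assumption:

import numpy as np

def gradient_enhance(detail):
    # E(Phi)(i, j) = max(Phi(i+1, j+1), Phi(i, j)); the last row and column
    # are handled by edge replication.
    shifted = np.pad(detail, ((0, 1), (0, 1)), mode="edge")[1:, 1:]
    return np.maximum(detail, shifted)

def fuse_detail(d_ir, d_vis):
    # Detail-layer fusion result D = E(Phi_I) + E(Phi_V).
    return gradient_enhance(d_ir) + gradient_enhance(d_vis)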
The sixth step: finally, the background layer and the detail layer are linearly weighted to obtain:
F = B + D
wherein F represents the fusion result, and B and D represent the background-layer fusion result and the detail-layer fusion result;
the seventh step: updating the fused image by storing (lightness V) information, and combining the reserved (hue H) and (saturation S) to restore HSV to RGB color space;
the specific formula is as follows:
C=V×S
X=C×(1-|(H/60°)mod2-1|)
m=V-C
(R′, G′, B′) = (C, X, 0) for 0° ≤ H < 60°; (X, C, 0) for 60° ≤ H < 120°; (0, C, X) for 120° ≤ H < 180°; (0, X, C) for 180° ≤ H < 240°; (X, 0, C) for 240° ≤ H < 300°; (C, 0, X) for 300° ≤ H < 360°
(R, G, B) = ((R′+m)×255, (G′+m)×255, (B′+m)×255)
wherein C is the product of lightness and saturation, and m is the difference between lightness and C.
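For this restoration step, OpenCV's built-in HSV-to-BGR conversion computes the same mapping as the formulas above; a minimal sketch (library choice and names are assumptions):

import cv2
import numpy as np

def restore_color(fused_v, h, s):
    # Write the fused result back into the V channel and convert HSV -> BGR;
    # cv2.cvtColor applies the same piecewise mapping spelled out above.
    v = np.clip(fused_v, 0, 255).astype(np.uint8)
    hsv = cv2.merge([h, s, v])
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)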
Eighth step: color correction and enhancement are performed on the restored image to generate a three-channel image suitable for observation and detection; color enhancement is applied to the R channel, the G channel and the B channel respectively.
In the eighth step, color enhancement is respectively performed on the R channel, the G channel, and the B channel, as shown in the following formula:
Rout = (Rin)^(1/gamma)
Rdisplay = (Rin^(1/gamma))^gamma
Gout = (Gin)^(1/gamma)
Gdisplay = (Gin^(1/gamma))^gamma
Bout = (Bin)^(1/gamma)
Bdisplay = (Bin^(1/gamma))^gamma
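A sketch of the per-channel gamma enhancement; the gamma value is illustrative, since the patent does not state one:

import numpy as np

def gamma_enhance(channel_u8, gamma=2.2):
    # C_out = (C_in)^(1/gamma), computed on values normalized to [0, 1];
    # gamma = 2.2 is an assumed example value.
    c = channel_u8.astype(np.float64) / 255.0
    return (np.power(c, 1.0 / gamma) * 255.0).astype(np.uint8)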
the invention has the beneficial effects that:
the invention designs a method for fusing infrared binocular stereo cameras and visible light binocular stereo cameras in real time. The image is decomposed into a background layer and a detail layer by using a filter decomposition strategy, different target-oriented fusion side rates are respectively carried out on the background layer and the detail layer, bilateral information is effectively obtained, and the pixel distribution of the whole target and the image is enhanced, and the method has the following characteristics:
(1) the system is easy to construct, and the acquisition of input data can be completed by using a stereo binocular camera;
(2) the program is simple and easy to realize;
(3) the image is decomposed by filtering, so that the fusion of different target directions is realized;
(4) the framework is complete, multi-threaded acceleration can be applied, and the program is robust;
(5) significant target enhancement of the fused image is realized through saliency enhancement and image gradient enhancement.
Drawings
Fig. 1 is a flow chart of a visible light and infrared fusion algorithm.
Fig. 2 is a graph showing the results of decomposition of visible light and infrared light into different layers.
Fig. 3 is the result of the fusion of the background and detail layers of visible and infrared light.
Fig. 4 is a final fused image.
Detailed Description
The invention provides a method for real-time image fusion by using an infrared camera and a visible light camera, which is described in detail by combining the accompanying drawings and an embodiment as follows:
the binocular stereo camera is placed on a fixed platform, the image resolution of the experimental camera is 780 multiplied by 340, the field angle is 45.4 degrees, and NVIDIATX2 is used for calculation in order to guarantee real-time performance. On the basis, a real-time infrared and visible light fusion method is designed, and the method comprises the following steps:
1) acquiring registered infrared and visible light images:
1-1) respectively calibrating each lens and the respective system of the visible light binocular camera and the infrared binocular camera;
1-2) calibrating each infrared camera and each visible light camera respectively by using a Zhang Zhengyou calibration method to obtain internal parameters such as focal length and principal point position of each camera and external parameters such as rotation and translation.
1-3) calculating the positional relation of the same plane in the visible light image and the infrared image by using the rotation-translation (RT) obtained from joint calibration and the detected checkerboard corners, and registering the visible light image to the infrared image with a homography matrix.
2) Image color space conversion
2-1) since the visible light image has three RGB channels, converting the RGB color space into the HSV color space, extracting the V (brightness) information of the visible light image to be fused with the infrared image, and retaining the H (hue) and S (saturation) of the visible light image; the specific conversion is as follows:
R′ = R/255, G′ = G/255, B′ = B/255
Cmax=max(R′,G′,B′)
Cmin=min(R′,G′,B′)
Δ=Cmax-Cmin
V=Cmax
2-2) extracting the V (brightness) channel as the visible light input, and retaining H (hue) and S (saturation) in corresponding matrices so that the color information can be restored after fusion.
3) Mutual-guided filtering decomposition is performed on the input infrared image and the color-space-converted visible light image, decomposing each image into a background layer and a detail layer; the structural information of the image is described on the background layer, and the gradient and texture information on the detail layer.
B=M(I,V),D=(I,V)-B
Where B represents the background layer, D represents the detail layer, and M represents the mutual-guided filtering.
4) A saliency-map-based method is designed to fuse the background layer; each pixel is differenced against all the other pixels in the whole image, the absolute values are taken, and the results are accumulated, as given by the following formula:
S(p)=|I(p)-I1|+|I(p)-I2|+|I(p)-I3|+…+|I(p)-I(N)|
S(p) = Σj Mj·|I(p) - Ij|
wherein S(p) represents the saliency value of pixel p, N represents the total number of pixels in the image, M represents the histogram statistics (Mj is the number of pixels with gray value Ij), and I represents the pixel values in the image.
Based on S(p), the saliency value of each pixel can be obtained, and the saliency value update formula is as follows:
S(p) = Σj Mj·D(I(p), Ij)
wherein D represents the color-distance calculation parameter; according to the obtained saliency values, the weight of the saliency map used for background-layer fusion is obtained:
Wj = Sj/(S1 + S2), j = 1, 2
wherein W represents the weight and Sj represents the saliency value at the corresponding pixel; the decomposed infrared image and visible light image are then fused by linear weighting with the saliency-map weights, calculated by the following formula:
B=0.5*(0.5+I*(W1-W2)*0.5)+0.5*(0.5+V*(W2-W1)*0.5)
where I, V represent the input infrared image and visible light image, respectively, and W1, W2 represent the significant weights taken on the infrared image and visible light image, respectively.
5) Then a gradient-enhanced pixel fusion strategy is applied to the detail layers obtained from the decomposition; a gradient enhancement algorithm is designed, and the detail images ΦI and ΦV of the infrared image and the visible light image are taken as input to realize gradient enhancement. E is the gradient enhancement operator, and i and j are the pixel coordinates. Max is the operator that takes the maximum.
E(ΦI)=max(max(ΦI(i+1,j+1),ΦI(i,j)))
E(ΦV)=max(max(ΦV(i+1,j+1),ΦV(i,j)))
The fusion result D of the detail layers can be expressed as:
D=E(ΦI)+E(ΦV)
6) and finally, linearly weighting the background layer and the detail layer to obtain:
F=B+D
where F represents the fusion result, and B and D represent the background layer fusion result and the detail layer fusion result.
7-1) The fused image is written into the lightness (V) channel and combined with the previously retained hue (H) and saturation (S) to restore the image from HSV to the RGB color space. The specific formula is as follows:
C=V×S
X=C×(1-|(H/60°)mod2-1|)
m=V-C
(R′, G′, B′) = (C, X, 0) for 0° ≤ H < 60°; (X, C, 0) for 60° ≤ H < 120°; (0, C, X) for 120° ≤ H < 180°; (0, X, C) for 180° ≤ H < 240°; (X, 0, C) for 240° ≤ H < 300°; (C, 0, X) for 300° ≤ H < 360°
(R, G, B) = ((R′+m)×255, (G′+m)×255, (B′+m)×255)
wherein C is the product of lightness and saturation, and m is the difference between lightness and C.
7-2) Color correction and enhancement are performed on the restored image obtained in step 7-1) to generate a three-channel image suitable for observation and detection; color enhancement is applied to the R channel, the G channel and the B channel respectively, as shown in the following formulas:
Rout = (Rin)^(1/gamma)
Rdisplay = (Rin^(1/gamma))^gamma
Gout = (Gin)^(1/gamma)
Gdisplay = (Gin^(1/gamma))^gamma
Bout = (Bin)^(1/gamma)
Bdisplay = (Bin^(1/gamma))^gamma
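Assuming the helper functions sketched in the disclosure above, one frame of the embodiment could be processed roughly as follows; this is an illustrative outline only (calibration, registration and GPU acceleration are omitted), not the patented implementation:

import cv2
import numpy as np

def fuse_frame(ir_u8, vis_bgr_u8):
    # ir_u8: registered single-channel infrared frame; vis_bgr_u8: visible frame.
    v, h, s = split_hsv(vis_bgr_u8)                      # step 2: HSV split
    ir = ir_u8.astype(np.float32) / 255.0
    vis = v.astype(np.float32) / 255.0
    (b_ir, b_vis), (d_ir, d_vis) = decompose(ir, vis)    # step 3: decomposition
    b = fuse_background(b_ir, b_vis, ir_u8, v)           # step 4: background fusion
    d = fuse_detail(d_ir, d_vis)                         # step 5: detail fusion
    f = np.clip(b + d, 0.0, 1.0) * 255.0                 # step 6: F = B + D
    bgr = restore_color(f, h, s)                         # step 7: HSV -> RGB
    b_ch, g_ch, r_ch = cv2.split(bgr)                    # step 8: per-channel gamma
    return cv2.merge([gamma_enhance(b_ch), gamma_enhance(g_ch), gamma_enhance(r_ch)])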

Claims (7)

1. An infrared and visible light fusion method based on significant target enhancement, characterized by comprising the following steps:
the first step is as follows: acquiring registered infrared and visible light images:
the second step: converting the color space of the visible light image from RGB to HSV, extracting the lightness information of the color image as the input of image fusion, and keeping the original hue and saturation of the color image;
the third step: mutual-guided filtering decomposition is performed on the input infrared image and the color-space-converted visible light image, decomposing each image into a background layer and a detail layer; the structural information of the image is described on the background layer, and the gradient and texture information on the detail layer;
B=M(I,V),D=(I,V)-B
wherein B represents the background layer, D represents the detail layer, and M represents the mutual-guided filtering;
the fourth step: a saliency-map-based method is designed to fuse the background layer; each pixel is differenced against all the other pixels in the whole image, the absolute values are taken, and the results are accumulated, as given by the following formula:
S(p)=|I(p)-I1|+|I(p)-I2|+|I(p)-I3|+…+|I(p)-I(N)|
S(p) = Σj Mj·|I(p) - Ij|
wherein S(p) represents the saliency value of pixel p, N represents the total number of pixels in the image, M represents the histogram statistics (Mj is the number of pixels with gray value Ij), and I represents the pixel values in the image;
based on S(p), the saliency value of each pixel can be obtained, and the saliency value update formula is as follows:
S(p) = Σj Mj·D(I(p), Ij)
wherein D represents the color-distance calculation parameter; according to the obtained saliency values, the weight of the saliency map used for background-layer fusion is obtained:
Wj = Sj/(S1 + S2), j = 1, 2
wherein W represents the weight and Sj represents the saliency value at the corresponding pixel; the decomposed infrared image and visible light image are then fused by linear weighting with the saliency-map weights, calculated by the following formula:
B=0.5*(0.5+I*(W1-W2)*0.5)+0.5*(0.5+V*(W2-W1)*0.5)
wherein I, V represent the input infrared image and visible light image, respectively, and W1, W2 represent the significant weights taken on the infrared image and visible light image, respectively;
the fifth step: a gradient-enhanced pixel fusion strategy is applied to the detail layers obtained from the decomposition; a gradient enhancement algorithm is designed, and the detail images ΦI and ΦV of the infrared image and the visible light image are taken as input to realize gradient enhancement; E is the gradient enhancement operator, i and j are the pixel coordinates; max is the operator taking the maximum;
E(ΦI)=max(max(ΦI(i+1,j+1),ΦI(i,j)))
E(ΦV)=max(max(ΦV(i+1,j+1),ΦV(i,j)))
the fusion result D of the detail layers can be expressed as:
D=E(ΦI)+E(ΦV)
the sixth step: finally, the background layer and the detail layer are linearly weighted to obtain:
F = B + D
wherein F represents the fusion result, and B and D represent the background-layer fusion result and the detail-layer fusion result;
the seventh step: the fused image is written into the lightness (V) channel and combined with the retained hue (H) and saturation (S) to restore the image from HSV to the RGB color space;
eighth step: color correction and enhancement are performed on the restored image to generate a three-channel image suitable for observation and detection; color enhancement is applied to the R channel, the G channel and the B channel respectively.
2. The infrared and visible light fusion method based on significant target enhancement as claimed in claim 1, wherein said first step is specifically operated as follows:
1-1) respectively calibrating each lens and the respective system of the visible light binocular camera and the infrared binocular camera;
1-2) respectively calibrating each infrared camera and each visible light camera by using the Zhang Zhengyou calibration method to obtain internal parameters such as focal length and principal point position of each camera, and external parameters such as rotation and translation;
1-3) calculating the positional relation of the same plane in the visible light image and the infrared image by using the rotation-translation (RT) obtained from joint calibration and the detected checkerboard corners, and registering the visible light image to the infrared image with a homography matrix.
3. The infrared and visible light fusion method based on significant target enhancement as claimed in claim 1 or 2, wherein the second step is specifically operated as follows:
2-1) since the visible light image has three RGB channels, converting the RGB color space into the HSV color space, extracting the V (brightness) information of the visible light image to be fused with the infrared image, and retaining the H (hue) and S (saturation) of the visible light image; the specific conversion is as follows:
R′ = R/255, G′ = G/255, B′ = B/255
Cmax=max(R′,G′,B′)
Cmin=min(R′,G′,B′)
Δ=Cmax-Cmin
V=Cmax
2-2) extracting the V (brightness) channel as the visible light input, and retaining H (hue) and S (saturation) in corresponding matrices so that the color information can be restored after fusion.
4. The infrared and visible light fusion method based on significant target enhancement as claimed in claim 1 or 2, wherein the seventh step is specifically operated as follows:
the specific formula is as follows:
C=V×S
X=C×(1-|(H/60°)mod2-1|)
m=V-C
(R′, G′, B′) = (C, X, 0) for 0° ≤ H < 60°; (X, C, 0) for 60° ≤ H < 120°; (0, C, X) for 120° ≤ H < 180°; (0, X, C) for 180° ≤ H < 240°; (X, 0, C) for 240° ≤ H < 300°; (C, 0, X) for 300° ≤ H < 360°
(R, G, B) = ((R′+m)×255, (G′+m)×255, (B′+m)×255)
wherein C is the product of lightness and saturation, and m is the difference between lightness and C.
5. The infrared and visible light fusion method based on significant target enhancement as claimed in claim 1 or 2, characterized in that in the eighth step, color enhancement is performed on the R channel, the G channel, and the B channel respectively, as shown in the following formulas:
Rout = (Rin)^(1/gamma)
Rdisplay = (Rin^(1/gamma))^gamma
Gout = (Gin)^(1/gamma)
Gdisplay = (Gin^(1/gamma))^gamma
Bout = (Bin)^(1/gamma)
Bdisplay = (Bin^(1/gamma))^gamma
6. The infrared and visible light fusion method based on significant target enhancement as claimed in claim 3, wherein in the eighth step, color enhancement is performed on the R channel, the G channel, and the B channel respectively, as shown in the following formulas:
Rout = (Rin)^(1/gamma)
Rdisplay = (Rin^(1/gamma))^gamma
Gout = (Gin)^(1/gamma)
Gdisplay = (Gin^(1/gamma))^gamma
Bout = (Bin)^(1/gamma)
Bdisplay = (Bin^(1/gamma))^gamma
7. The infrared and visible light fusion method based on significant target enhancement as claimed in claim 4, wherein in the eighth step, color enhancement is performed on the R channel, the G channel, and the B channel respectively, as shown in the following formulas:
Rout = (Rin)^(1/gamma)
Rdisplay = (Rin^(1/gamma))^gamma
Gout = (Gin)^(1/gamma)
Gdisplay = (Gin^(1/gamma))^gamma
Bout = (Bin)^(1/gamma)
Bdisplay = (Bin^(1/gamma))^gamma
CN202111083539.4A 2021-09-16 2021-09-16 Infrared and visible light fusion method based on significant target enhancement Pending CN113902659A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111083539.4A CN113902659A (en) 2021-09-16 2021-09-16 Infrared and visible light fusion method based on significant target enhancement

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111083539.4A CN113902659A (en) 2021-09-16 2021-09-16 Infrared and visible light fusion method based on significant target enhancement

Publications (1)

Publication Number Publication Date
CN113902659A (en) 2022-01-07

Family

ID=79028450

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111083539.4A Pending CN113902659A (en) 2021-09-16 2021-09-16 Infrared and visible light fusion method based on significant target enhancement

Country Status (1)

Country Link
CN (1) CN113902659A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114881899A (en) * 2022-04-12 2022-08-09 北京理工大学 Rapid color-preserving fusion method and device for visible light and infrared image pair

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108090886A (en) * 2018-01-11 2018-05-29 南京大学 A kind of display of high dynamic range infrared image and detail enhancing method
CN110232378A (en) * 2019-05-30 2019-09-13 苏宁易购集团股份有限公司 A kind of image interest point detecting method, system and readable storage medium storing program for executing
CN111062905A (en) * 2019-12-17 2020-04-24 大连理工大学 Infrared and visible light fusion method based on saliency map enhancement

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108090886A (en) * 2018-01-11 2018-05-29 南京大学 A kind of display of high dynamic range infrared image and detail enhancing method
CN110232378A (en) * 2019-05-30 2019-09-13 苏宁易购集团股份有限公司 A kind of image interest point detecting method, system and readable storage medium storing program for executing
CN111062905A (en) * 2019-12-17 2020-04-24 大连理工大学 Infrared and visible light fusion method based on saliency map enhancement

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114881899A (en) * 2022-04-12 2022-08-09 北京理工大学 Rapid color-preserving fusion method and device for visible light and infrared image pair
CN114881899B (en) * 2022-04-12 2024-06-04 北京理工大学 Quick color-preserving fusion method and device for visible light and infrared image pair

Similar Documents

Publication Publication Date Title
CN111062905B (en) Infrared and visible light fusion method based on saliency map enhancement
CN111161356B (en) Infrared and visible light fusion method based on double-layer optimization
CN110211043B (en) Registration method based on grid optimization for panoramic image stitching
CN107194991B (en) Three-dimensional global visual monitoring system construction method based on skeleton point local dynamic update
CN111047510A (en) Large-field-angle image real-time splicing method based on calibration
CN111080709B (en) Multispectral stereo camera self-calibration algorithm based on track feature registration
CN112258579A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN111693025B (en) Remote sensing image data generation method, system and equipment
CN111107337B (en) Depth information complementing method and device, monitoring system and storage medium
CN110969667A (en) Multi-spectrum camera external parameter self-correction algorithm based on edge features
CN106355621A (en) Method for acquiring depth information on basis of array images
CN113902657A (en) Image splicing method and device and electronic equipment
CN111462128A (en) Pixel-level image segmentation system and method based on multi-modal spectral image
CN112016478B (en) Complex scene recognition method and system based on multispectral image fusion
CN110322485A (en) A kind of fast image registration method of isomery polyphaser imaging system
CN115035235A (en) Three-dimensional reconstruction method and device
CN115170810B (en) Visible light infrared image fusion target detection example segmentation method
CN107958489B (en) Curved surface reconstruction method and device
KR20150065302A (en) Method deciding 3-dimensional position of landsat imagery by Image Matching
CN113902659A (en) Infrared and visible light fusion method based on significant target enhancement
CN104794680B (en) Polyphaser image mosaic method and device based on same satellite platform
CN103646397A (en) Real-time synthetic aperture perspective imaging method based on multi-source data fusion
CN106971385B (en) A kind of aircraft Situation Awareness multi-source image real time integrating method and its device
CN117237553A (en) Three-dimensional map mapping system based on point cloud image fusion
Wang et al. Automated mosaicking of UAV images based on SFM method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination