CN106600572A - Adaptive low-illumination visible image and infrared image fusion method - Google Patents


Info

Publication number
CN106600572A
CN106600572A (application CN201611142487.2A)
Authority
CN
China
Prior art keywords: image, fusion, infrared image, low, low frequency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611142487.2A
Other languages
Chinese (zh)
Inventor
朴燕
刘硕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changchun University of Science and Technology
Original Assignee
Changchun University of Science and Technology
Application filed by Changchun University of Science and Technology filed Critical Changchun University of Science and Technology
Priority to CN201611142487.2A priority Critical patent/CN106600572A/en
Publication of CN106600572A publication Critical patent/CN106600572A/en
Pending legal-status Critical Current


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10048: Infrared image

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention relates to an adaptive low-illumination visible and infrared image fusion method, belonging to the field of digital image processing. The method comprises the steps of image preprocessing, NSCT transformation, frequency-domain coefficient fusion, and inverse NSCT transformation. It performs multi-scale decomposition of the luminance image and the infrared image, extracts high-frequency and low-frequency components, and selects different fusion rules for frequency-domain fusion according to the characteristics of those components. The inverse NSCT step performs multi-scale reconstruction of the fused high-frequency and low-frequency components to obtain a fused grayscale image, which is then weighted with the original color visible image to obtain a color fusion image. The method effectively retains more detail information from the original images, improves the contrast and resolution of the fused image, and can be widely used in intelligent traffic, video monitoring, medical diagnosis, target detection, and national defense security.

Description

An adaptive low-illumination visible and infrared image fusion method
Technical field
The invention belongs to the field of digital image processing.
Background technology
With the rapid development of sensor technology, image fusion has increasingly become a focus of current research. Different types of sensors usually capture markedly different image information from the same scene, and the information obtained by a single sensor is limited and can hardly meet practical application demands. Image fusion technology merges the image information obtained by two or more sensors to produce a new image. The fused image has advantages such as rich detail information and high contrast, and is widely used in fields such as medical diagnosis, target recognition and tracking, video monitoring, intelligent transportation, and national defense security, giving it high social application value. Problems such as the determination of fusion criteria, the improvement of fused image quality, and the registration of the original images remain difficulties in the image fusion field.
Image fusion methods are broadly divided into methods based on pixel analysis, methods based on feature computation, and methods based on decision analysis. Pixel-level methods analyze and combine the corresponding pixels of the original images directly to obtain fused pixel values; they are the most widely used and simplest methods in the image fusion field. Fusion methods based on the traditional wavelet transform have poor directional selectivity and easily introduce blocking artifacts, resulting in low contrast in the fused image, which is unfavorable for human observation and discrimination. Fusion methods based on the traditional contourlet transform, owing to improper selection of the fusion criterion, suffer from problems such as a small luminance dynamic range and a lack of detail information in the fused image, seriously affecting the function of the vision system.
In recent years, image fusion has attracted more and more research attention. Low-illumination visible images are often dark and unclear, making targets difficult to observe and identify. At present, pixel-level fusion methods have not yet formed a unified mathematical model, and the quality of fused images needs further improvement. Traditional image fusion methods suffer from low contrast, small luminance dynamic range, and little retained detail, and most methods target only the fusion of grayscale images, making them difficult to apply widely to various fields.
The content of the invention
The present invention proposes an adaptive low-illumination visible and infrared image fusion method to solve the problems of low contrast and small luminance dynamic range in fused images.
The technical scheme adopted by the present invention comprises the following steps:
(1) Collection of the original low-illumination visible image and infrared image:
Under low-illumination conditions, the original color visible image and infrared image are collected by the infrared camera and visible-light camera on a pan-tilt head; their resolution is 640*480.
(2) Scale-invariant feature point matching of the obtained original infrared image and color visible image:
Image registration is carried out according to the scale-invariant features of the original infrared image and the color visible image, ensuring that the pixels of any given scene position occupy the same position in both collected images.
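The registration in step (2) ultimately amounts to fitting a geometric mapping to the matched scale-invariant feature points. As a minimal sketch of that model-fitting stage (the feature detector itself is out of scope here), the following estimates a planar homography from already-matched point pairs using the standard direct linear transform; the point coordinates are hypothetical and the homography model is an illustrative assumption, not the patent's stated registration model.

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 homography H mapping src -> dst (DLT, >= 4 pairs)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography vector is the null vector of A (smallest singular value).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, p):
    """Apply H to a 2D point in homogeneous coordinates."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# Hypothetical matched feature points (e.g. from scale-invariant matching).
src = [(10, 10), (200, 15), (15, 180), (210, 190)]
# A pure translation by (+5, +3) for illustration.
dst = [(15, 13), (205, 18), (20, 183), (215, 193)]
H = fit_homography(src, dst)
print(np.round(warp_point(H, (100, 100))))  # ~ [105. 103.]
```

In practice the matched pairs would come from a feature matcher and be fit robustly (e.g. with outlier rejection); the DLT above shows only the core estimation.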
(3) For the registered color visible image, the luminance image is extracted using the HIS transform; according to the characteristics of the infrared sensor, the contrast of the registered infrared image is enhanced:
After registration, the color visible image is three-dimensional while the infrared image is two-dimensional, so their spatial dimensions differ. Therefore, the HIS color-space transform is used to extract the luminance image I_visible of the color visible image, computed as shown in formula (1):
where I_r, I_g, I_b denote the pixel values of the R, G, B channels of the low-illumination visible image.
Inverting the pixel brightness values of the infrared image helps to enhance its contrast:
I_IR = L - I_ir (2)
where I_IR is the inverted infrared image, I_ir is the registered infrared image, and L is the number of gray levels of the infrared image; when the infrared image pixels are 8-bit, L = 2^8 = 256.
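Formula (1) is not reproduced above; assuming the usual HIS intensity definition I = (R + G + B) / 3, the preprocessing of step (3) can be sketched as follows. The inversion follows formula (2) literally with L = 256; note that a common variant uses L - 1 = 255 to keep 8-bit values in range.

```python
import numpy as np

def extract_luminance(rgb):
    """HIS intensity of a color visible image (assumed I = (R+G+B)/3)."""
    rgb = rgb.astype(np.float64)
    return (rgb[..., 0] + rgb[..., 1] + rgb[..., 2]) / 3.0

def invert_infrared(ir, bits=8):
    """Formula (2): I_IR = L - I_ir, with L = 2^bits gray levels."""
    L = 2 ** bits
    return L - ir.astype(np.float64)

vis = np.zeros((2, 2, 3), dtype=np.uint8)
vis[0, 0] = (30, 60, 90)                  # one lit pixel in a dark scene
ir = np.array([[200, 10], [10, 10]], dtype=np.uint8)

print(extract_luminance(vis)[0, 0])       # 60.0
print(invert_infrared(ir)[0, 0])          # 56.0  (256 - 200)
```

After inversion, the formerly bright thermal targets become dark and the dim background becomes bright, which is the contrast adjustment the text describes.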
(4) Using the NSCT, the luminance image I_visible and the infrared image I_IR are decomposed at multiple scales to obtain their corresponding low-frequency and high-frequency components:
The NSCT comprises two parts: a nonsubsampled pyramid filter bank, which realizes the multi-scale decomposition, and a nonsubsampled directional filter bank, which realizes the directional decomposition in the frequency domain.
In the nonsubsampled pyramid filter bank, the k-th-level nonsubsampled pyramid filter can be obtained from the following formula:
The nonsubsampled pyramid filters must also satisfy the Bezout identity:
H0(z)G0(z) + H1(z)G1(z) = 1 (4)
where H0(z), G0(z) are the low-pass decomposition and synthesis filters of the nonsubsampled pyramid, and H1(z), G1(z) are its high-pass decomposition and synthesis filters.
The nonsubsampled directional filter bank is composed of single fan filters; performing an upsampling operation on the nonsubsampled directional filters can effectively eliminate aliasing.
Using the nonsubsampled pyramid and the nonsubsampled directional filter bank, the luminance image and the infrared image are transformed at multiple scales to extract the corresponding high-frequency and low-frequency components, where J >= d >= 1 and J is the total number of decomposition levels of the image. The larger the total number of levels, the longer the algorithm runs; to guarantee the multi-scale and multi-directional character of the decomposition, J = 2 and k = [2, 16].
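No standard Python library implements the full NSCT (nonsubsampled pyramid plus nonsubsampled directional filter bank). As a hedged stand-in that shares its key properties of an undecimated, shift-invariant, perfectly invertible multi-scale split, the following a-trous-style pyramid separates an image into J = 2 high-frequency bands plus one low-frequency band using Gaussian filters; the directional decomposition is omitted, so this is an illustration of the pyramid stage only, not the patent's transform.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def undecimated_pyramid(img, levels=2):
    """Split img into `levels` high-frequency bands plus one low-frequency band.
    All bands keep the full image size (no subsampling), and
    img == low + sum(highs) holds exactly by construction."""
    img = img.astype(np.float64)
    highs, low = [], img
    for j in range(levels):
        blurred = gaussian_filter(low, sigma=2.0 ** j)
        highs.append(low - blurred)   # detail removed at this scale
        low = blurred
    return highs, low

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, (32, 32))
highs, low = undecimated_pyramid(img, levels=2)
recon = low + sum(highs)
print(np.max(np.abs(recon - img)) < 1e-9)  # True: perfect reconstruction
```

The inverse transform of step (6) is the trivial sum of the (fused) bands, which is exactly why undecimated decompositions avoid the blocking artifacts mentioned in the background section.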
(5) Using different fusion criteria, the high-frequency and low-frequency components are fused separately to obtain the fused low-frequency and high-frequency components:
The low-frequency component reflects the contour information of the original image, while the high-frequency components represent its texture details. According to these characteristics, different criteria are adopted for each, yielding the fused high-frequency and low-frequency components.
1) Low-frequency component fusion
For the corresponding low-frequency components obtained in step (4), the fused low-frequency component is obtained with an adaptive-threshold fusion criterion:
where the left-hand side is the fused low-frequency component, I_TH is the luminance threshold, w_th is the weight coefficient, and the remaining terms are the low-frequency components of the luminance image and the infrared image respectively.
The low-frequency components of the luminance image and the infrared image are fused with the adaptive-threshold criterion: when the low-frequency component of the luminance image exceeds the threshold, it is selected as the fused low-frequency component; when it is less than or equal to the threshold, the arithmetic mean of the luminance-image and infrared-image low-frequency components is computed as the fused low-frequency component.
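The selection rule just described can be written directly with numpy. Formula (5) for the threshold I_TH is not reproduced above, so the threshold is passed in as a parameter here; the arithmetic-mean branch follows the text (a plain average, weight 0.5 each).

```python
import numpy as np

def fuse_low_frequency(low_vis, low_ir, threshold):
    """Adaptive-threshold low-frequency fusion: take the luminance-image
    coefficient where it exceeds the threshold, otherwise the arithmetic
    mean of the luminance-image and infrared-image coefficients."""
    low_vis = np.asarray(low_vis, dtype=np.float64)
    low_ir = np.asarray(low_ir, dtype=np.float64)
    return np.where(low_vis > threshold, low_vis, (low_vis + low_ir) / 2.0)

low_vis = np.array([[200.0, 10.0], [90.0, 40.0]])
low_ir = np.array([[50.0, 30.0], [70.0, 60.0]])
print(fuse_low_frequency(low_vis, low_ir, threshold=100.0))
# [[200.  20.]
#  [ 80.  50.]]
```

The bright pixel (200) is kept directly, matching the text's point that high-luminance low-frequency coefficients in the visible image mostly come from background light worth preserving; everywhere else both sensors contribute equally.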
2) High-frequency component fusion
For the corresponding high-frequency components obtained in step (4), a fusion criterion that selects the larger pulse-coupled neural network (PCNN) firing count is used to obtain the fused high-frequency components. A PCNN is a feedback network composed of several connected neurons; each neuron consists mainly of a receptive part, a modulation part, and a pulse generator. The high-frequency coefficients are the feedback input that triggers the PCNN, as shown in formula (7):
where, in formula (7), the two leading terms are the output of the feedback part and the input signal, (i, j) denotes the pixel position, k denotes the direction index of the d-th-level high-frequency component, and n' is the current iteration number.
The coupling part of the PCNN can be obtained from formula (8):
where the left-hand side is the output of the coupling part, m and n are the range of connected neurons, V_L is a normalization coefficient, and W_ij,mn is the weight coefficient connecting to the other neurons.
The internal operation of the PCNN is computed by formulas (9) and (10):
where the terms are the internal state and the threshold, and beta, a_theta, and V_theta are fixed coefficients.
Each iteration proceeds as shown in the following formula:
where X denotes the original luminance image or infrared image, N is the total number of iterations, and the remaining term denotes the total firing count.
By comparing firing counts, the high-frequency component with the larger firing count is selected as the fused high-frequency component:
For the high-frequency components of the luminance image and the infrared image, the criterion of taking the larger PCNN firing count is adopted: the corresponding high-frequency components trigger the PCNN via formula (7), the corresponding firing counts are computed according to formulas (9)-(12), and by formula (13) the component with the larger firing count is selected as the fused high-frequency component.
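Formulas (7)-(13) are not reproduced above, so the sketch below uses the standard simplified PCNN equations from the literature as a stand-in: feed F = the coefficient magnitude, linking L via a 3x3 weight kernel, internal state U = F(1 + beta*L), and a dynamic threshold with decay a_theta and amplitude V_theta. All parameter values and the kernel are illustrative assumptions, not the patent's; only the selection rule (keep the coefficient whose firing count is larger) follows the text.

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_fire_counts(coeff, n_iter=30, beta=0.2, a_theta=0.2,
                     v_theta=20.0, v_l=1.0):
    """Per-pixel total firing count of a simplified pulse-coupled neural
    network fed with the absolute value of a high-frequency coefficient map."""
    S = np.abs(np.asarray(coeff, dtype=np.float64))
    W = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])
    Y = np.zeros_like(S)
    theta = np.full_like(S, S.max() + 1.0)   # start above every input
    counts = np.zeros_like(S)
    for _ in range(n_iter):
        L = v_l * convolve(Y, W, mode='constant')  # linking from neighbors
        U = S * (1.0 + beta * L)                   # internal state
        theta = np.exp(-a_theta) * theta           # threshold decays...
        Y = (U > theta).astype(np.float64)         # ...until neurons fire
        theta += v_theta * Y                       # firing raises threshold
        counts += Y
    return counts

def fuse_high_frequency(h_vis, h_ir, **kw):
    """Keep, per pixel, the coefficient whose PCNN firing count is larger."""
    c_vis = pcnn_fire_counts(h_vis, **kw)
    c_ir = pcnn_fire_counts(h_ir, **kw)
    return np.where(c_vis >= c_ir, h_vis, h_ir)

h_vis = np.array([[8.0, 0.5], [0.5, 0.5]])
h_ir = np.array([[0.5, 6.0], [0.5, 4.0]])
fused = fuse_high_frequency(h_vis, h_ir)
print(fused)
```

Larger coefficients cross the decaying threshold earlier and so fire more often, which is why the firing count serves as an activity measure for selecting the stronger texture detail.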
(6) The fused low-frequency and high-frequency components undergo the inverse NSCT to obtain the fused grayscale image I_Gray-F:
Because the original inputs are a low-illumination grayscale visible image and an infrared image, the multi-scale inverse transform yields a grayscale fusion image that maintains the infrared information in low-illumination regions while retaining the texture information of the low-illumination grayscale image, improving the contrast and definition of the original visible grayscale image and making it easier for the human eye to judge and recognize people or other objects in the scene.
(7) The fused grayscale image obtained in step (6) and the registered color visible image are combined by weighted summation to obtain the final color fusion image I_F:
where C denotes the R, G, B color channels, with the original visible pixel value of channel C and the final fused pixel value of channel C as written in formula (14); w is the weight coefficient, generally w = 0.5.
Under low-illumination conditions, the average brightness of the color visible image is relatively low. To ensure that the color fusion image retains more of the color information of the visible image, the grayscale fusion image obtained in step (6) and the original color visible image are combined by the weighted summation of formula (14) to obtain the final color fusion image.
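The weighted summation of formula (14) injects the fused grayscale image into each color channel of the registered visible image. A minimal sketch, assuming the plain per-channel form I_F^C = w * I_vis^C + (1 - w) * I_Gray-F with w = 0.5 (formula (14) itself is not reproduced above, so this form is an assumption):

```python
import numpy as np

def color_fuse(vis_rgb, gray_fused, w=0.5):
    """Assumed form of formula (14): for each channel C,
    I_F^C = w * I_vis^C + (1 - w) * I_Gray-F."""
    vis_rgb = vis_rgb.astype(np.float64)
    gray = gray_fused.astype(np.float64)[..., None]  # broadcast over R, G, B
    return w * vis_rgb + (1.0 - w) * gray

vis = np.zeros((1, 1, 3))
vis[0, 0] = (40.0, 80.0, 120.0)
gray = np.array([[200.0]])
print(color_fuse(vis, gray)[0, 0])  # [120. 140. 160.]
```

Because the same grayscale value is added to all three channels, the channel ratios of the visible image (its color information) are partly preserved while the overall brightness is lifted by the fused luminance.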
The present invention has the following beneficial effects:
(1) The present invention performs frequency-domain fusion on 640*480 low-illumination visible images and infrared images, improving the contrast and definition of the new image.
(2) Images obtained by visible-light sensors generally have rich color and detail information, but under low illumination or other bad weather (e.g. haze, dust storms) much scene information is lost, so they cannot work around the clock. Infrared sensors capture hidden thermal targets through their heat radiation and are little affected by scene brightness and severe weather, but infrared images generally have low contrast and no color information, so a single sensor can hardly satisfy practical engineering applications. The present invention uses the NSCT to decompose the original images at multiple scales and extract the corresponding high-frequency and low-frequency components; fuses the low-frequency components with an adaptive-threshold criterion; fuses the high-frequency components with a criterion that takes the larger PCNN firing count; and then obtains a new fused image through the inverse NSCT and weighted summation. This effectively removes blocking artifacts, highlights the details of scene targets, retains more detail and color information, and improves the contrast and definition of the fused image, benefiting human observation and discrimination. It is applicable to low-illumination and severe weather conditions and has broad social application value.
(3) The present invention applies not only to the fusion of grayscale images but also to the fusion of color images, and to original images with resolution greater than 640*480.
(4) The present invention has broad application value in video monitoring, intelligent transportation, medical diagnosis, machine vision, national defense security, and other fields.
Description of the drawings
Fig. 1 is the algorithm flow chart in the application example of the present invention;
Fig. 2 is a schematic diagram of the pan-tilt head of the image acquisition system in the application example of the present invention;
Fig. 3(a) is the original low-illumination color visible image of scene 1 in the application example of the present invention;
Fig. 3(b) is the original infrared image of scene 1 in the application example of the present invention;
Fig. 4(a) is the luminance image of scene 1 after feature registration;
Fig. 4(b) is the infrared image of scene 1 after feature registration;
Fig. 5 is the color fusion image of scene 1 after fusion;
Fig. 6(a) is the original low-illumination color visible image of scene 2 in the application example of the present invention;
Fig. 6(b) is the original low-illumination infrared image of scene 2 in the application example of the present invention;
Fig. 7(a) is the luminance image of scene 2 after feature registration;
Fig. 7(b) is the infrared image of scene 2 after feature registration;
Fig. 8 is the color fusion image of scene 2 after fusion.
Specific embodiment
The method comprises the following steps:
(1) Collection of the original low-illumination visible image and infrared image:
As shown in Fig. 2, the pan-tilt system for image acquisition of the present invention collects, under low-illumination conditions, the original color visible image and infrared image through the infrared camera and visible-light camera on the head; their resolution is 640*480.
(2) Scale-invariant feature point matching of the obtained original infrared image and color visible image:
Because of differences in camera position, lens focal length, and other external influences, the same scene position occupies different positions in the color visible image and infrared image collected in step (1). Therefore, image registration is carried out according to the scale-invariant features of the original infrared image and the color visible image, ensuring that the pixels of any given scene position occupy the same position in both collected images.
(3) For the registered color visible image, the luminance image is extracted using the HIS transform; according to the characteristics of the infrared sensor, the contrast of the registered infrared image is enhanced:
After registration, the color visible image is three-dimensional while the infrared image is two-dimensional; this difference in spatial dimensions would cause fusion errors. Therefore, the HIS color-space transform is used to extract the luminance image I_visible of the color visible image, computed as shown in formula (1):
where I_r, I_g, I_b denote the pixel values of the R, G, B channels of the low-illumination visible image.
The infrared sensor images the infrared radiation of objects in the scene, so objects such as people often have high brightness values in the image, whereas under nighttime low-illumination conditions the objects observed by the human eye are often dim. Therefore, inverting the pixel brightness values of the infrared image helps to enhance its contrast:
I_IR = L - I_ir (2)
where I_IR is the inverted infrared image, I_ir is the registered infrared image, and L is the number of gray levels of the infrared image; when the infrared image pixels are 8-bit, L = 2^8 = 256.
(4) Using the NSCT, the luminance image I_visible and the infrared image I_IR are decomposed at multiple scales to obtain their corresponding low-frequency and high-frequency components:
The NSCT not only has the multi-resolution, localized, and multi-directional characteristics of the contourlet transform but is also shift-invariant and can eliminate Gibbs phenomena. It comprises two parts: a nonsubsampled pyramid filter bank, which realizes the multi-scale decomposition, and a nonsubsampled directional filter bank, which realizes the directional decomposition in the frequency domain.
In the nonsubsampled pyramid filter bank, the k-th-level nonsubsampled pyramid filter can be obtained from the following formula:
The nonsubsampled pyramid filters must also satisfy the Bezout identity:
H0(z)G0(z) + H1(z)G1(z) = 1 (4)
where H0(z), G0(z) are the low-pass decomposition and synthesis filters of the nonsubsampled pyramid, and H1(z), G1(z) are its high-pass decomposition and synthesis filters.
The nonsubsampled directional filter bank is composed of single fan filters. To realize directional shift invariance of the high-frequency components, the sampling elements are eliminated; since the directional response in the higher pyramid subbands easily causes aliasing between lower and higher frequencies, an upsampling operation is applied to the nonsubsampled directional filters, which effectively eliminates aliasing.
Using the nonsubsampled pyramid and the nonsubsampled directional filter bank, the luminance image and the infrared image are transformed at multiple scales to extract the corresponding high-frequency and low-frequency components, where J >= d >= 1 and J is the total number of decomposition levels of the image. The larger the total number of levels, the longer the algorithm runs; to guarantee the multi-scale and multi-directional character of the decomposition, J = 2 and k = [2, 16].
(5) Using different fusion criteria, the high-frequency and low-frequency components are fused separately to obtain the fused low-frequency and high-frequency components:
The low-frequency component reflects the contour information of the original image, while the high-frequency components represent its texture details. According to these characteristics, different criteria are adopted for each, yielding the fused high-frequency and low-frequency components.
1) Low-frequency component fusion
For the corresponding low-frequency components obtained in step (4), the fused low-frequency component is obtained with an adaptive-threshold fusion criterion:
where the left-hand side is the fused low-frequency component, I_TH is the luminance threshold, w_th is the weight coefficient, and the remaining terms are the low-frequency components of the luminance image and the infrared image respectively.
The low-frequency components of the luminance image and the infrared image are fused with the adaptive-threshold criterion. In low-illumination visible images the overall average brightness is relatively low, and most high-luminance pixels come from background light, such as car lights, street lamps, or other lighting equipment. Experiments show that the largest 0.13% of the corresponding low-frequency component differences derive from background light; the threshold is given by formula (5), and w = 0.75 is taken in the example. From formula (6), when the low-frequency component of the luminance image exceeds the threshold, it is selected as the fused low-frequency component; when it is less than or equal to the threshold, the arithmetic mean of the luminance-image and infrared-image low-frequency components is computed as the fused low-frequency component.
2) High-frequency component fusion
For the corresponding high-frequency components obtained in step (4), a fusion criterion that selects the larger pulse-coupled neural network (PCNN) firing count is used to obtain the fused high-frequency components. A PCNN is a feedback network composed of several connected neurons; each neuron consists mainly of a receptive part, a modulation part, and a pulse generator. The high-frequency coefficients are the feedback input that triggers the PCNN, as shown in formula (7):
where, in formula (7), the two leading terms are the output of the feedback part and the input signal, (i, j) denotes the pixel position, k denotes the direction index of the d-th-level high-frequency component, and n' is the current iteration number.
The coupling part of the PCNN can be obtained from formula (8):
where the left-hand side is the output of the coupling part, m and n are the range of connected neurons, V_L is a normalization coefficient, and W_ij,mn is the weight coefficient connecting to the other neurons.
The internal operation of the PCNN is computed by formulas (9) and (10):
where the terms are the internal state and the threshold, and beta, a_theta, and V_theta are fixed coefficients.
Each iteration proceeds as shown in the following formula:
where X denotes the original luminance image or infrared image, N is the total number of iterations, and the remaining term denotes the total firing count.
By comparing firing counts, the high-frequency component with the larger firing count is selected as the fused high-frequency component:
For the high-frequency components of the luminance image and the infrared image, the criterion of taking the larger PCNN firing count is adopted: the corresponding high-frequency components trigger the PCNN via formula (7), the corresponding firing counts are computed according to formulas (9)-(12), and by formula (13) the component with the larger firing count is selected as the fused high-frequency component.
(6) The fused low-frequency and high-frequency components undergo the inverse NSCT to obtain the fused grayscale image I_Gray-F:
Because the original inputs are a low-illumination grayscale visible image and an infrared image, the multi-scale inverse transform yields a grayscale fusion image that maintains the infrared information in low-illumination regions while retaining the texture information of the low-illumination grayscale image, improving the contrast and definition of the original visible grayscale image and making it easier for the human eye to judge and recognize people or other objects in the scene.
(7) The fused grayscale image obtained in step (6) and the registered color visible image are combined by weighted summation to obtain the final color fusion image I_F:
where C denotes the R, G, B color channels, with the original visible pixel value of channel C and the final fused pixel value of channel C as written in formula (14); w is the weight coefficient, generally w = 0.5.
Under low-illumination conditions, the average brightness of the color visible image is relatively low. To ensure that the color fusion image retains more of the color information of the visible image, the grayscale fusion image obtained in step (6) and the original color visible image are combined by the weighted summation of formula (14) to obtain the final color fusion image.
The present invention is further illustrated below with a concrete application example and the accompanying drawings.
(1) Collection of the original low-illumination visible image and infrared image:
Under a low-illumination environment, the original low-illumination visible image and infrared image are collected by the infrared camera and visible-light camera of the pan-tilt system shown in Fig. 2; their resolution is 640*480.
(2) Scale-invariant feature point matching of the obtained original infrared image and color visible image.
(3) For the registered color visible image, the luminance image is extracted using the HIS transform; according to the characteristics of the infrared sensor, the contrast of the registered infrared image is enhanced.
(4) Using the NSCT, the luminance image I_visible and the infrared image I_IR are decomposed at multiple scales to obtain the corresponding low-frequency and high-frequency components.
(5) Using different fusion criteria, the high-frequency and low-frequency components are fused separately to obtain the fused low-frequency and high-frequency components:
The low-frequency component reflects the contour information of the original image, while the high-frequency components represent its texture details; according to these characteristics, different criteria are adopted for each, yielding the fused high-frequency and low-frequency components.
For the corresponding low-frequency components of the luminance image and the infrared image, the adaptive-threshold fusion criterion is used. Since the overall average brightness of low-illumination visible images is relatively low, it follows from formulas (5) and (6) that when the low-frequency component of the luminance image exceeds the threshold it is selected as the fused low-frequency component, and when it is less than or equal to the threshold the arithmetic mean of the luminance-image and infrared-image low-frequency components is computed as the fused low-frequency component.
For the corresponding high-frequency components of the luminance image and the infrared image, the criterion of taking the larger PCNN firing count is used: the corresponding high-frequency components trigger the PCNN via formula (7), the corresponding firing counts are computed according to formulas (8)-(12), and by formula (13) the component with the larger firing count is selected as the fused high-frequency component.
(6) The fused low-frequency and high-frequency components undergo the inverse NSCT to obtain the fused grayscale image I_Gray-F.
(7) The fused grayscale image obtained in step (6) and the registered color visible image are combined by weighted summation to obtain the final color fusion image I_F.
Under low-illumination conditions, the average brightness of a color visible image is relatively low, its dynamic range is small, and scene target information is unclear; as shown in Fig. 3(a) and Fig. 6(a), it is difficult to observe pedestrians, vehicle contours, tree trunks, and the like in the scene. By contrast, an infrared image, acquired from the infrared radiation of scene targets, clearly reveals target information that the visible-light sensor cannot capture, but lacks color information, as shown in Fig. 3(b) and Fig. 6(b);
As shown in Fig. 4, the fused image effectively retains the color information of the street lamps, automobile tail lights, and sky regions in the visible image, while also retaining the texture-detail information of the infrared image, such as the vehicle contours (e.g., the wheel portions), the pedestrians, and the support poles of the street lamps. As shown in Fig. 6, the fused image effectively retains the color information of the automobile lights, street lamps, and buildings, while also retaining infrared texture details such as the pedestrians and tree trunks. Therefore, by the method of the present invention, the infrared image and the color visible image are effectively fused: the color information of the visible image is preserved, and the texture-detail information of scene targets in the infrared image is obtained. The method can be widely applied to fields such as video surveillance, intelligent transportation, and national defense security.
The foregoing is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above embodiment. Any technical solution that follows the principles of the present invention belongs to its protection scope; for those skilled in the art, improvements and modifications made without departing from the present invention shall also be regarded as within its protection scope.

Claims (6)

1. An adaptive low-illumination visible image and infrared image fusion method, characterized by comprising the following steps:
(1) Acquisition of the original low-illumination visible image and infrared image:
Under low-illumination conditions, the original color visible image and the original infrared image are acquired by the visible-light camera and the infrared camera mounted on a pan-tilt head, respectively;
(2) Scale-invariant feature point matching of the acquired original infrared image and color visible image:
Image registration is performed according to the scale-invariant features of the original infrared image and the color visible image, ensuring that any given position in the scene appears at the same position in both acquired images;
(3) For the registered color visible image, its luminance image is extracted using the HIS transform; according to the characteristics of the infrared sensor, the contrast of the registered infrared image is enhanced;
(4) Using the NSCT, the luminance image I_{visible} and the infrared image I_{IR} are decomposed at multiple scales to obtain the corresponding low-frequency and high-frequency components:
The NSCT comprises two parts: a non-subsampled pyramid filter bank, which realizes the multiscale decomposition, and a non-subsampled directional filter bank, which realizes the directional decomposition in the frequency domain;
(5) Using different fusion criteria, the high-frequency components I^{d,k} and the low-frequency components I^J are fused separately to obtain the fused low-frequency and high-frequency components:
The low-frequency components reflect the contour information of the original images, while the high-frequency components represent the texture-detail information. According to the characteristics of the high-frequency and low-frequency components, different fusion criteria are applied, yielding the fused high-frequency and low-frequency components;
(6) Applying the inverse NSCT to the fused low-frequency and high-frequency components to obtain the fused grayscale image I_{Gray-F}:
Since the source images are a low-illumination grayscale visible image and an infrared image, the grayscale fused image obtained through the multiscale inverse transform retains the infrared information of low-illumination regions while preserving the texture of the low-illumination grayscale image, improving the contrast and definition of the original visible grayscale image and making it easier for the human eye to accurately judge and recognize people or other targets in the scene;
(7) Performing a weighted summation of the fused grayscale image obtained in step (6) and the registered color visible image to obtain the final color fusion image I_F:
I_F^C = I_{Gray-F} + w × I_{Color}^C    (14)
wherein C denotes the R, G, or B color channel, I_{Color}^C denotes the original visible-light pixel value of color channel C, I_F^C denotes the final fused-image pixel value of channel C, and w is a weight coefficient;
Under low-illumination conditions, the average brightness of the color visible image is relatively low. To ensure that the color fusion image retains more of the color information of the visible image, the grayscale fused image obtained in step (6) and the original color visible image are combined by the weighted summation of formula (14), yielding the final color fusion image.
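As a minimal numpy sketch, the per-channel weighted summation of formula (14) can be written as follows; the weight value w = 0.5 and the clipping to the 8-bit range are illustrative assumptions, since the claim fixes neither:

```python
import numpy as np

def color_fusion(gray_fused, color_visible, w=0.5):
    """Formula (14): I_F^C = I_Gray-F + w * I_Color^C for C in {R, G, B}.

    gray_fused    -- fused grayscale image, shape (H, W)
    color_visible -- registered color visible image, shape (H, W, 3)
    w             -- weight coefficient (0.5 is an assumed example value)
    """
    gray = gray_fused.astype(np.float64)
    color = color_visible.astype(np.float64)
    # Add the weighted visible-light channels to the grayscale fusion result
    fused = gray[..., np.newaxis] + w * color
    # Clip back to the 8-bit range (an assumption; the claim does not say
    # how out-of-range sums are handled)
    return np.clip(fused, 0, 255).astype(np.uint8)
```

Note that without the clipping step the sum can exceed 255 wherever the grayscale fusion result is already bright.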
2. The adaptive low-illumination visible image and infrared image fusion method according to claim 1, characterized in that: in step (1), the original color visible image and infrared image are acquired at a resolution of 640*480.
3. The adaptive low-illumination visible image and infrared image fusion method according to claim 1, characterized in that: in step (3), after registration the color visible image is three-dimensional while the infrared image is two-dimensional, so the spatial dimensionalities of the two images differ; therefore, the HIS color-space transform is used to extract the luminance image I_{visible} of the color visible image, computed as shown in (1):
I_{visible} = (I_r + I_g + I_b) / 3    (1)
wherein I_r, I_g, and I_b denote the pixel values of the R, G, and B channels of the low-illumination visible image;
The pixel brightness values of the infrared image are inverted, which helps enhance the contrast of the infrared image:
I_{IR} = L − I_{ir}    (2)
wherein I_{IR} is the inverted infrared image, I_{ir} is the registered infrared image, and L is the number of gray levels of the infrared image; when the pixels of the infrared image are 8-bit, L = 2^8 = 256.
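A minimal numpy sketch of formulas (1) and (2); the array shapes and dtypes are assumptions, as the claim only defines the pixel-wise arithmetic:

```python
import numpy as np

def extract_luminance(color_visible):
    # Formula (1): I_visible = (I_r + I_g + I_b) / 3
    rgb = color_visible.astype(np.float64)
    return (rgb[..., 0] + rgb[..., 1] + rgb[..., 2]) / 3.0

def invert_infrared(ir_registered, L=256):
    # Formula (2): I_IR = L - I_ir; for 8-bit pixels the claim takes
    # L = 2^8 = 256 (the number of gray levels)
    return L - ir_registered.astype(np.float64)
```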
4. The adaptive low-illumination visible image and infrared image fusion method according to claim 1, characterized in that, in step (4):
In the non-subsampled pyramid filter bank with k decomposition levels, the equivalent filter of the s-th channel is obtained by the following formula:
H_s^{eq}(z) = H_1(z^{2^{s-1}}) ∏_{j=0}^{s-2} H_0(z^{2^j}),    for 2^k > s ≥ 1
H_s^{eq}(z) = ∏_{j=0}^{s-2} H_0(z^{2^j}),                     for s = 2^k        (3)
The non-subsampled pyramid filters must also satisfy the Bezout identity:
H_0(z) G_0(z) + H_1(z) G_1(z) = 1    (4)
wherein H_0(z) and G_0(z) are the low-pass decomposition and synthesis filters of the non-subsampled pyramid, and H_1(z) and G_1(z) are its high-pass decomposition and synthesis filters;
The non-subsampled directional filter bank is constructed from a single fan filter; applying an upsampling operation to the non-subsampled directional filters effectively eliminates aliasing.
According to the non-subsampled pyramid and the non-subsampled directional filter bank, the multiscale transform is applied to the luminance image and the infrared image, extracting the corresponding high-frequency components I^{d,k} and low-frequency components I^J, wherein J ≥ d ≥ 1 and J is the total number of decomposition levels of the image. The larger the total number of decomposition levels, the longer the algorithm takes to run; to ensure the multiscale and multidirectional characteristics of the image decomposition, J = 2 and k = [2, 16] are used.
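The Bezout identity (4) can be checked numerically for a simple, assumed Haar-like filter pair (the claim does not list concrete filter coefficients; in the z^{-1} convention used below, an array [a, b] stands for a + b·z^{-1}):

```python
import numpy as np

# Assumed analysis/synthesis filters; any pair satisfying (4) would do.
H0 = np.array([0.5, 0.5])   # low-pass analysis:  0.5 + 0.5 z^-1
H1 = np.array([0.5, -0.5])  # high-pass analysis: 0.5 - 0.5 z^-1
G0 = np.array([1.0])        # low-pass synthesis
G1 = np.array([1.0])        # high-pass synthesis

# Polynomial products via convolution of the coefficient sequences
prod = np.convolve(H0, G0) + np.convolve(H1, G1)
print(prod)  # [1. 0.] -> the constant polynomial 1, so identity (4) holds
```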
5. The adaptive low-illumination visible image and infrared image fusion method according to claim 1, characterized in that the low-frequency fusion method of step (5) is as follows:
For the corresponding low-frequency components obtained in step (4), the fused low-frequency component is obtained using the adaptive-threshold fusion criterion:
I_{TH} = w_{th} × max(I_{Visible}^J − I_{IR}^J)    (5)

I_F^{low} = I_{Visible}^J,                    if I_{Visible}^J ≥ I_{TH}
I_F^{low} = (I_{IR}^J + I_{Visible}^J) / 2,   otherwise        (6)
wherein I_F^{low} is the fused low-frequency component, I_{TH} is the luminance threshold, w_{th} is a weight coefficient, and I_{Visible}^J and I_{IR}^J are the low-frequency components corresponding to the luminance image and the infrared image, respectively;
For the low-frequency components of the luminance image and the infrared image, the adaptive-threshold fusion criterion is applied: when the low-frequency component of the luminance image is greater than the threshold, it is selected as the fused low-frequency component; when it is less than or equal to the threshold, the arithmetic mean of the luminance-image and infrared-image low-frequency components is computed as the fused low-frequency component.
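A sketch of the adaptive-threshold rule of formulas (5)-(6) in numpy; the weight w_th = 0.5 is an assumed value, as the claim leaves it unspecified:

```python
import numpy as np

def fuse_low_frequency(low_vis, low_ir, w_th=0.5):
    # Formula (5): I_TH = w_th * max(I_Visible^J - I_IR^J)
    threshold = w_th * np.max(low_vis - low_ir)
    # Formula (6): keep the visible low-frequency coefficient where it
    # reaches the threshold; otherwise take the arithmetic mean
    return np.where(low_vis >= threshold,
                    low_vis,
                    (low_ir + low_vis) / 2.0)
```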
6. The adaptive low-illumination visible image and infrared image fusion method according to claim 1, characterized in that the high-frequency fusion method of step (5) is as follows:
For the corresponding high-frequency components obtained in step (4), the fused high-frequency component is obtained using the maximum-firing-times fusion criterion of a pulse coupled neural network (PCNN). The PCNN is a feedback network composed of several interconnected neurons, each consisting mainly of a receiving part, a modulating part, and a pulse generator. The high-frequency coefficients serve as the feed input that drives the PCNN, as shown in formula (7):
F_{i,j}^{d,k}(n′) = I_{i,j}^{d,k}    (7)
wherein F_{i,j}^{d,k}(n′) is the output of the feedback part, I_{i,j}^{d,k} is the input signal, (i, j) denotes the pixel position, k denotes the direction number of the d-th level high-frequency component, and n′ is the current iteration number;
The linking part of the PCNN is obtained by formula (8):
L_{i,j}^{d,k}(n′) = exp(−a_L) L_{i,j}^{d,k}(n′−1) + V_L Σ_{m,n} W_{ij,mn} Y_{ij,mn}^{d,k}(n′−1)    (8)
wherein L_{i,j}^{d,k}(n′) is the output of the linking part, m and n give the range of connected neurons, V_L is a normalization coefficient, and W_{ij,mn} are the weight coefficients of the connected neurons;
The internal operations of the PCNN are computed by formulas (9) and (10):
U_{i,j}^{d,k}(n′) = F_{i,j}^{d,k}(n′) (1 + β L_{i,j}^{d,k}(n′))    (9)

θ_{i,j}^{d,k}(n′) = exp(−a_θ) θ_{i,j}^{d,k}(n′−1) + V_θ Y_{i,j}^{d,k}(n′−1)    (10)
wherein U_{i,j}^{d,k}(n′) is the internal state, β, a_θ, and V_θ are fixed coefficients, and θ_{i,j}^{d,k}(n′) is the threshold;
Each iteration is given by the following formulas:
Y_{i,j}^{d,k}(n′) = 1, if U_{i,j}^{d,k}(n′) > θ_{i,j}^{d,k}(n′); otherwise 0    (11)

T_{X,ij}^{d,k} = Σ_{n′=1}^{N} Y_{i,j}^{d,k}(n′),    X = Visible or IR    (12)
wherein X denotes the original luminance image or the infrared image, N is the total number of iterations, and T_{X,ij}^{d,k} denotes the total firing times;
By comparing firing times, the high-frequency component with the larger firing count is selected as the fused high-frequency component:
I_F^{high} = I_{Visible}^{d,k},  if T_{Visible,ij}^{d,k} ≥ T_{IR,ij}^{d,k};  otherwise I_{IR}^{d,k}    (13)
For the high-frequency components of the luminance image and the infrared image, the maximum-firing-times criterion based on the PCNN is applied: each high-frequency component drives the PCNN through formula (7), the corresponding firing times are computed according to formulas (8)-(12), and the firing times are compared through formula (13), selecting the high-frequency component with the larger firing count as the fused high-frequency component.
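The firing-count computation of formulas (7)-(13) can be sketched as follows (numpy). The parameter values a_L, V_L, β, a_θ, V_θ, the 3×3 linking weights, and the iteration count N are illustrative assumptions: the claim calls them fixed coefficients but does not list numbers.

```python
import numpy as np

def pcnn_firing_times(band, N=20, a_L=1.0, V_L=1.0,
                      beta=0.2, a_theta=0.2, V_theta=20.0):
    """Total firing times T of formulas (7)-(12) for one high-frequency band."""
    F = np.abs(band).astype(np.float64)      # (7): feed input = |coefficient|
    L = np.zeros_like(F)
    Y = np.zeros_like(F)
    theta = np.ones_like(F)
    T = np.zeros_like(F)
    # Assumed 3x3 linking weights W_ij,mn (self-connection excluded)
    W = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])
    for _ in range(N):
        pad = np.pad(Y, 1)                   # previous outputs Y(n'-1)
        link = sum(W[a, b] * pad[a:a + F.shape[0], b:b + F.shape[1]]
                   for a in range(3) for b in range(3))
        L = np.exp(-a_L) * L + V_L * link    # (8): linking input
        U = F * (1.0 + beta * L)             # (9): internal state
        theta = np.exp(-a_theta) * theta + V_theta * Y  # (10): threshold decay
        Y = (U > theta).astype(np.float64)   # (11): fire where U > theta
        T += Y                               # (12): accumulate firing times
    return T

def fuse_high_frequency(band_vis, band_ir):
    # (13): select the coefficient whose neuron fires more often
    T_vis = pcnn_firing_times(band_vis)
    T_ir = pcnn_firing_times(band_ir)
    return np.where(T_vis >= T_ir, band_vis, band_ir)
```

Stronger coefficients fire earlier and more often, so the selection favors the band with more salient detail at each pixel.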
CN201611142487.2A 2016-12-12 2016-12-12 Adaptive low-illumination visible image and infrared image fusion method Pending CN106600572A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611142487.2A CN106600572A (en) 2016-12-12 2016-12-12 Adaptive low-illumination visible image and infrared image fusion method

Publications (1)

Publication Number Publication Date
CN106600572A true CN106600572A (en) 2017-04-26

Family

ID=58597724

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611142487.2A Pending CN106600572A (en) 2016-12-12 2016-12-12 Adaptive low-illumination visible image and infrared image fusion method

Country Status (1)

Country Link
CN (1) CN106600572A (en)

Cited By (49)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107169944A (en) * 2017-04-21 2017-09-15 北京理工大学 A kind of infrared and visible light image fusion method based on multiscale contrast
CN107203987A (en) * 2017-06-07 2017-09-26 云南师范大学 A kind of infrared image and low-light (level) image real time fusion system
CN107451984A (en) * 2017-07-27 2017-12-08 桂林电子科技大学 A kind of infrared and visual image fusion algorithm based on mixing multiscale analysis
CN107705268A (en) * 2017-10-20 2018-02-16 天津工业大学 One kind is based on improved Retinex and the enhancing of Welsh near-infrared images and colorization algorithm
CN107798854A (en) * 2017-11-12 2018-03-13 佛山鑫进科技有限公司 A kind of ammeter long-distance monitoring method based on image recognition
CN107909562A (en) * 2017-12-05 2018-04-13 华中光电技术研究所(中国船舶重工集团公司第七七研究所) A kind of Fast Image Fusion based on Pixel-level
CN108053371A (en) * 2017-11-30 2018-05-18 努比亚技术有限公司 A kind of image processing method, terminal and computer readable storage medium
CN108182698A (en) * 2017-12-18 2018-06-19 凯迈(洛阳)测控有限公司 A kind of fusion method of airborne photoelectric infrared image and visible images
CN108427922A (en) * 2018-03-06 2018-08-21 深圳市创艺工业技术有限公司 A kind of efficient indoor environment regulating system
CN108428224A (en) * 2018-01-09 2018-08-21 中国农业大学 Animal body surface temperature checking method and device based on convolutional Neural net
CN108460786A (en) * 2018-01-30 2018-08-28 中国航天电子技术研究院 A kind of high speed tracking of unmanned plane spot
CN108564543A (en) * 2018-04-11 2018-09-21 长春理工大学 A kind of underwater picture color compensation method based on electromagnetic theory
CN108665487A (en) * 2017-10-17 2018-10-16 国网河南省电力公司郑州供电公司 Substation's manipulating object and object localization method based on the fusion of infrared and visible light
CN108710910A (en) * 2018-05-18 2018-10-26 中国科学院光电研究院 A kind of target identification method and system based on convolutional neural networks
CN108717689A (en) * 2018-05-16 2018-10-30 北京理工大学 Middle LONG WAVE INFRARED image interfusion method and device applied to naval vessel detection field under sky and ocean background
CN108961180A (en) * 2018-06-22 2018-12-07 理光软件研究所(北京)有限公司 infrared image enhancing method and system
CN109100364A (en) * 2018-06-29 2018-12-28 杭州国翌科技有限公司 A kind of tunnel defect monitoring system and monitoring method based on spectrum analysis
CN109785277A (en) * 2018-12-11 2019-05-21 南京第五十五所技术开发有限公司 A kind of infrared and visible light image fusion method in real time
CN109949353A (en) * 2019-03-25 2019-06-28 北京理工大学 A kind of low-light (level) image natural sense colorization method
WO2019153920A1 (en) * 2018-02-09 2019-08-15 华为技术有限公司 Method for image processing and related device
CN110210541A (en) * 2019-05-23 2019-09-06 浙江大华技术股份有限公司 Image interfusion method and equipment, storage device
CN110223262A (en) * 2018-12-28 2019-09-10 中国船舶重工集团公司第七一七研究所 A kind of rapid image fusion method based on Pixel-level
CN110246108A (en) * 2018-11-21 2019-09-17 浙江大华技术股份有限公司 A kind of image processing method, device and computer readable storage medium
CN110363732A (en) * 2018-04-11 2019-10-22 杭州海康威视数字技术股份有限公司 A kind of image interfusion method and its device
CN110363731A (en) * 2018-04-10 2019-10-22 杭州海康威视数字技术股份有限公司 A kind of image interfusion method, device and electronic equipment
CN110490914A (en) * 2019-07-29 2019-11-22 广东工业大学 It is a kind of based on brightness adaptively and conspicuousness detect image interfusion method
WO2020051897A1 (en) * 2018-09-14 2020-03-19 浙江宇视科技有限公司 Image fusion method and system, electronic device, and computer readable storage medium
CN111008946A (en) * 2019-11-07 2020-04-14 武汉多谱多勒科技有限公司 Infrared and visible light image intelligent fusion device and method used in fire fighting site
CN111160171A (en) * 2019-12-19 2020-05-15 哈尔滨工程大学 Radiation source signal identification method combining two-domain multi-features
WO2020112442A1 (en) * 2018-11-27 2020-06-04 Google Llc Methods and systems for colorizing infrared images
CN111325701A (en) * 2018-12-14 2020-06-23 杭州海康威视数字技术股份有限公司 Image processing method, device and storage medium
CN111385466A (en) * 2018-12-30 2020-07-07 浙江宇视科技有限公司 Automatic focusing method, device, equipment and storage medium
CN111476732A (en) * 2020-04-03 2020-07-31 江苏宇特光电科技股份有限公司 Image fusion and denoising method and system
CN111612736A (en) * 2020-04-08 2020-09-01 广东电网有限责任公司 Power equipment fault detection method, computer and computer program
WO2020237931A1 (en) * 2019-05-24 2020-12-03 Zhejiang Dahua Technology Co., Ltd. Systems and methods for image processing
CN112132753A (en) * 2020-11-06 2020-12-25 湖南大学 Infrared image super-resolution method and system for multi-scale structure guide image
CN112258442A (en) * 2020-11-12 2021-01-22 Oppo广东移动通信有限公司 Image fusion method and device, computer equipment and storage medium
CN112307901A (en) * 2020-09-28 2021-02-02 国网浙江省电力有限公司电力科学研究院 Landslide detection-oriented SAR and optical image fusion method and system
US10942274B2 (en) 2018-04-11 2021-03-09 Microsoft Technology Licensing, Llc Time of flight and picture camera
CN112487947A (en) * 2020-11-26 2021-03-12 西北工业大学 Low-illumination image target detection method based on image fusion and target detection network
CN112767289A (en) * 2019-10-21 2021-05-07 浙江宇视科技有限公司 Image fusion method, device, medium and electronic equipment
CN112767291A (en) * 2021-01-04 2021-05-07 浙江大华技术股份有限公司 Visible light image and infrared image fusion method and device and readable storage medium
CN113538303A (en) * 2020-04-20 2021-10-22 杭州海康威视数字技术股份有限公司 Image fusion method
CN113822833A (en) * 2021-09-26 2021-12-21 沈阳航空航天大学 Infrared and visible light image frequency domain fusion method based on convolutional neural network and regional energy
CN114581315A (en) * 2022-01-05 2022-06-03 中国民用航空飞行学院 Low-visibility approach flight multi-mode monitoring image enhancement method
CN114862730A (en) * 2021-02-04 2022-08-05 四川大学 Infrared and visible light image fusion method based on multi-scale analysis and VGG-19
CN115086573A (en) * 2022-05-19 2022-09-20 北京航天控制仪器研究所 External synchronous exposure-based heterogeneous video fusion method and system
CN116681636A (en) * 2023-07-26 2023-09-01 南京大学 Light infrared and visible light image fusion method based on convolutional neural network
US11798147B2 (en) 2018-06-30 2023-10-24 Huawei Technologies Co., Ltd. Image processing method and device

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101093580A (en) * 2007-08-29 2007-12-26 华中科技大学 Image interfusion method based on wave transform of not sub sampled contour
CN101504766A (en) * 2009-03-25 2009-08-12 湖南大学 Image amalgamation method based on mixed multi-resolution decomposition
CN101546428A (en) * 2009-05-07 2009-09-30 西北工业大学 Image fusion of sequence infrared and visible light based on region segmentation
CN102646272A (en) * 2012-02-23 2012-08-22 南京信息工程大学 Wavelet meteorological satellite cloud image merging method based on local variance and weighing combination
CN102722877A (en) * 2012-06-07 2012-10-10 内蒙古科技大学 Multi-focus image fusing method based on dual-channel PCNN (Pulse Coupled Neural Network)
CN103177433A (en) * 2013-04-09 2013-06-26 南京理工大学 Infrared and low light image fusion method
CN104200452A (en) * 2014-09-05 2014-12-10 西安电子科技大学 Method and device for fusing infrared and visible light images based on spectral wavelet transformation
CN104282007A (en) * 2014-10-22 2015-01-14 长春理工大学 Contourlet transformation-adaptive medical image fusion method based on non-sampling
CN105069768A (en) * 2015-08-05 2015-11-18 武汉高德红外股份有限公司 Visible-light image and infrared image fusion processing system and fusion method
CN105809640A (en) * 2016-03-09 2016-07-27 长春理工大学 Multi-sensor fusion low-illumination video image enhancement method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHUO LIU et al.: "Research on fusion technology based on low-light visible image and infrared image", Optical Engineering *

Cited By (78)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107169944A (en) * 2017-04-21 2017-09-15 北京理工大学 A kind of infrared and visible light image fusion method based on multiscale contrast
CN107203987A (en) * 2017-06-07 2017-09-26 云南师范大学 A kind of infrared image and low-light (level) image real time fusion system
CN107451984B (en) * 2017-07-27 2021-06-22 桂林电子科技大学 Infrared and visible light image fusion algorithm based on mixed multi-scale analysis
CN107451984A (en) * 2017-07-27 2017-12-08 桂林电子科技大学 A kind of infrared and visual image fusion algorithm based on mixing multiscale analysis
CN108665487B (en) * 2017-10-17 2022-12-13 国网河南省电力公司郑州供电公司 Transformer substation operation object and target positioning method based on infrared and visible light fusion
CN108665487A (en) * 2017-10-17 2018-10-16 国网河南省电力公司郑州供电公司 Substation's manipulating object and object localization method based on the fusion of infrared and visible light
CN107705268A (en) * 2017-10-20 2018-02-16 天津工业大学 One kind is based on improved Retinex and the enhancing of Welsh near-infrared images and colorization algorithm
CN107798854A (en) * 2017-11-12 2018-03-13 佛山鑫进科技有限公司 A kind of ammeter long-distance monitoring method based on image recognition
CN108053371A (en) * 2017-11-30 2018-05-18 努比亚技术有限公司 A kind of image processing method, terminal and computer readable storage medium
CN107909562B (en) * 2017-12-05 2021-06-08 华中光电技术研究所(中国船舶重工集团公司第七一七研究所) Fast image fusion algorithm based on pixel level
CN107909562A (en) * 2017-12-05 2018-04-13 华中光电技术研究所(中国船舶重工集团公司第七七研究所) A kind of Fast Image Fusion based on Pixel-level
CN108182698A (en) * 2017-12-18 2018-06-19 凯迈(洛阳)测控有限公司 A kind of fusion method of airborne photoelectric infrared image and visible images
CN108428224A (en) * 2018-01-09 2018-08-21 中国农业大学 Animal body surface temperature checking method and device based on convolutional Neural net
CN108428224B (en) * 2018-01-09 2020-05-22 中国农业大学 Animal body surface temperature detection method and device based on convolutional neural network
CN108460786A (en) * 2018-01-30 2018-08-28 中国航天电子技术研究院 A kind of high speed tracking of unmanned plane spot
US11250550B2 (en) 2018-02-09 2022-02-15 Huawei Technologies Co., Ltd. Image processing method and related device
CN110136183B (en) * 2018-02-09 2021-05-18 华为技术有限公司 Image processing method and device and camera device
JP2021513278A (en) * 2018-02-09 2021-05-20 華為技術有限公司Huawei Technologies Co.,Ltd. Image processing methods and related devices
WO2019153920A1 (en) * 2018-02-09 2019-08-15 华为技术有限公司 Method for image processing and related device
CN110136183A (en) * 2018-02-09 2019-08-16 华为技术有限公司 A kind of method and relevant device of image procossing
CN108427922A (en) * 2018-03-06 2018-08-21 深圳市创艺工业技术有限公司 A kind of efficient indoor environment regulating system
CN110363731B (en) * 2018-04-10 2021-09-03 杭州海康微影传感科技有限公司 Image fusion method and device and electronic equipment
CN110363731A (en) * 2018-04-10 2019-10-22 杭州海康威视数字技术股份有限公司 A kind of image interfusion method, device and electronic equipment
CN110363732A (en) * 2018-04-11 2019-10-22 杭州海康威视数字技术股份有限公司 A kind of image interfusion method and its device
CN108564543A (en) * 2018-04-11 2018-09-21 长春理工大学 A kind of underwater picture color compensation method based on electromagnetic theory
US10942274B2 (en) 2018-04-11 2021-03-09 Microsoft Technology Licensing, Llc Time of flight and picture camera
CN108717689A (en) * 2018-05-16 2018-10-30 北京理工大学 Middle LONG WAVE INFRARED image interfusion method and device applied to naval vessel detection field under sky and ocean background
CN108710910B (en) * 2018-05-18 2020-12-04 中国科学院光电研究院 Target identification method and system based on convolutional neural network
CN108710910A (en) * 2018-05-18 2018-10-26 中国科学院光电研究院 A kind of target identification method and system based on convolutional neural networks
CN108961180B (en) * 2018-06-22 2020-09-25 理光软件研究所(北京)有限公司 Infrared image enhancement method and system
CN108961180A (en) * 2018-06-22 2018-12-07 理光软件研究所(北京)有限公司 infrared image enhancing method and system
CN109100364A (en) * 2018-06-29 2018-12-28 杭州国翌科技有限公司 A kind of tunnel defect monitoring system and monitoring method based on spectrum analysis
US11798147B2 (en) 2018-06-30 2023-10-24 Huawei Technologies Co., Ltd. Image processing method and device
WO2020051897A1 (en) * 2018-09-14 2020-03-19 浙江宇视科技有限公司 Image fusion method and system, electronic device, and computer readable storage medium
CN110246108B (en) * 2018-11-21 2023-06-20 浙江大华技术股份有限公司 Image processing method, device and computer readable storage medium
CN110246108A (en) * 2018-11-21 2019-09-17 浙江大华技术股份有限公司 A kind of image processing method, device and computer readable storage medium
US11875520B2 (en) 2018-11-21 2024-01-16 Zhejiang Dahua Technology Co., Ltd. Method and system for generating a fusion image
WO2020112442A1 (en) * 2018-11-27 2020-06-04 Google Llc Methods and systems for colorizing infrared images
US11483451B2 (en) 2018-11-27 2022-10-25 Google Llc Methods and systems for colorizing infrared images
CN109785277A (en) * 2018-12-11 2019-05-21 南京第五十五所技术开发有限公司 A kind of infrared and visible light image fusion method in real time
CN109785277B (en) * 2018-12-11 2022-10-04 南京第五十五所技术开发有限公司 Real-time infrared and visible light image fusion method
CN111325701A (en) * 2018-12-14 2020-06-23 杭州海康威视数字技术股份有限公司 Image processing method, device and storage medium
CN111325701B (en) * 2018-12-14 2023-05-09 杭州海康微影传感科技有限公司 Image processing method, device and storage medium
CN110223262A (en) * 2018-12-28 2019-09-10 中国船舶重工集团公司第七一七研究所 A kind of rapid image fusion method based on Pixel-level
CN111385466A (en) * 2018-12-30 2020-07-07 浙江宇视科技有限公司 Automatic focusing method, device, equipment and storage medium
CN111385466B (en) * 2018-12-30 2021-08-24 浙江宇视科技有限公司 Automatic focusing method, device, equipment and storage medium
CN109949353A (en) * 2019-03-25 2019-06-28 北京理工大学 A kind of low-light (level) image natural sense colorization method
CN110210541B (en) * 2019-05-23 2021-09-03 浙江大华技术股份有限公司 Image fusion method and device, and storage device
CN110210541A (en) * 2019-05-23 2019-09-06 浙江大华技术股份有限公司 Image interfusion method and equipment, storage device
WO2020237931A1 (en) * 2019-05-24 2020-12-03 Zhejiang Dahua Technology Co., Ltd. Systems and methods for image processing
EP3948766A4 (en) * 2019-05-24 2022-07-06 Zhejiang Dahua Technology Co., Ltd. Systems and methods for image processing
CN110490914A (en) * 2019-07-29 2019-11-22 广东工业大学 It is a kind of based on brightness adaptively and conspicuousness detect image interfusion method
CN110490914B (en) * 2019-07-29 2022-11-15 广东工业大学 Image fusion method based on brightness self-adaption and significance detection
CN112767289A (en) * 2019-10-21 2021-05-07 浙江宇视科技有限公司 Image fusion method, device, medium and electronic equipment
CN112767289B (en) * 2019-10-21 2024-05-07 浙江宇视科技有限公司 Image fusion method, device, medium and electronic equipment
CN111008946A (en) * 2019-11-07 2020-04-14 武汉多谱多勒科技有限公司 Infrared and visible light image intelligent fusion device and method used in fire fighting site
CN111160171B (en) * 2019-12-19 2022-04-12 哈尔滨工程大学 Radiation source signal identification method combining two-domain multi-features
CN111160171A (en) * 2019-12-19 2020-05-15 哈尔滨工程大学 Radiation source signal identification method combining two-domain multi-features
CN111476732A (en) * 2020-04-03 2020-07-31 江苏宇特光电科技股份有限公司 Image fusion and denoising method and system
CN111612736A (en) * 2020-04-08 2020-09-01 广东电网有限责任公司 Power equipment fault detection method, computer and computer program
CN113538303B (en) * 2020-04-20 2023-05-26 杭州海康威视数字技术股份有限公司 Image fusion method
CN113538303A (en) * 2020-04-20 2021-10-22 杭州海康威视数字技术股份有限公司 Image fusion method
CN112307901B (en) * 2020-09-28 2024-05-10 国网浙江省电力有限公司电力科学研究院 SAR and optical image fusion method and system for landslide detection
CN112307901A (en) * 2020-09-28 2021-02-02 国网浙江省电力有限公司电力科学研究院 Landslide detection-oriented SAR and optical image fusion method and system
CN112132753A (en) * 2020-11-06 2020-12-25 湖南大学 Infrared image super-resolution method and system for multi-scale structure guide image
CN112132753B (en) * 2020-11-06 2022-04-05 湖南大学 Infrared image super-resolution method and system for multi-scale structure guide image
CN112258442A (en) * 2020-11-12 2021-01-22 Oppo广东移动通信有限公司 Image fusion method and device, computer equipment and storage medium
CN112487947A (en) * 2020-11-26 2021-03-12 西北工业大学 Low-illumination image target detection method based on image fusion and target detection network
CN112767291A (en) * 2021-01-04 2021-05-07 浙江大华技术股份有限公司 Visible light image and infrared image fusion method and device and readable storage medium
CN112767291B (en) * 2021-01-04 2024-05-28 浙江华感科技有限公司 Visible light image and infrared image fusion method, device and readable storage medium
CN114862730B (en) * 2021-02-04 2023-05-23 四川大学 Infrared and visible light image fusion method based on multi-scale analysis and VGG-19
CN114862730A (en) * 2021-02-04 2022-08-05 四川大学 Infrared and visible light image fusion method based on multi-scale analysis and VGG-19
CN113822833A (en) * 2021-09-26 2021-12-21 沈阳航空航天大学 Infrared and visible light image frequency domain fusion method based on convolutional neural network and regional energy
CN113822833B (en) * 2021-09-26 2024-01-16 沈阳航空航天大学 Infrared and visible light image frequency domain fusion method based on convolutional neural network and regional energy
CN114581315A (en) * 2022-01-05 2022-06-03 中国民用航空飞行学院 Low-visibility approach flight multi-mode monitoring image enhancement method
CN115086573A (en) * 2022-05-19 2022-09-20 北京航天控制仪器研究所 External synchronous exposure-based heterogeneous video fusion method and system
CN116681636A (en) * 2023-07-26 2023-09-01 南京大学 Light infrared and visible light image fusion method based on convolutional neural network
CN116681636B (en) * 2023-07-26 2023-12-12 南京大学 Light infrared and visible light image fusion method based on convolutional neural network

Similar Documents

Publication Publication Date Title
CN106600572A (en) Adaptive low-illumination visible image and infrared image fusion method
CN106296612B (en) A kind of stagewise monitor video sharpening system and method for image quality evaluation and weather conditions guidance
CN103020920B (en) Method for enhancing low-illumination images
CN106023129A (en) Infrared and visible light image fused automobile anti-blooming video image processing method
CN1873693B (en) Method based on Contourlet transformation, modified type pulse coupling neural network, and image amalgamation
CN106530246A (en) Image dehazing method and system based on dark channel and non-local prior
CN103034983B (en) A kind of defogging method capable based on anisotropic filtering
CN108269244B (en) Image defogging system based on deep learning and prior constraint
Zin et al. Fusion of infrared and visible images for robust person detection
CN104700381A (en) Infrared and visible light image fusion method based on salient objects
CN103034843B (en) Method for detecting vehicle at night based on monocular vision
CN103914820B (en) Image haze removal method and system based on image layer enhancement
CN101420533B (en) Embedded image fusion system and method based on video background detection
CN106815826A (en) Night vision image Color Fusion based on scene Recognition
CN111539303B (en) Monocular vision-based vehicle driving deviation early warning method
CN103793896A (en) Method for real-time fusion of infrared image and visible image
CN109214331A (en) A kind of traffic haze visibility detecting method based on image spectrum
CN111311503A (en) Night low-brightness image enhancement system
CN107147877A (en) FX night fog day condition all-weather colorful video imaging system and its construction method
CN109583408A (en) A kind of vehicle key point alignment schemes based on deep learning
CN113762134A (en) Method for detecting surrounding obstacles in automobile parking based on vision
CN107301625A (en) Image defogging algorithm based on brightness UNE
CN111432172A (en) Fence alarm method and system based on image fusion
CN104751138B (en) A kind of vehicle mounted infrared image colorization DAS (Driver Assistant System)
CN115100618B (en) Multi-source heterogeneous perception information multi-level fusion characterization and target identification method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20170426)