CN103761724A - Visible light and infrared video fusion method based on surreal luminance contrast pass algorithm - Google Patents

Visible light and infrared video fusion method based on surreal luminance contrast pass algorithm

Info

Publication number: CN103761724A
Authority: CN (China)
Prior art keywords: fusion, image, visible light, infrared, visible
Prior art date: 2014-01-28
Legal status: Pending
Application number: CN201410041858.2A
Other languages: Chinese (zh)
Inventors: 宋华军 (Song Huajun), 任鹏 (Ren Peng), 百晓 (Bai Xiao), 祝艳宏 (Zhu Yanhong), 肖渤涛 (Xiao Botao), 孙文健 (Sun Wenjian), 王玉霞 (Wang Yuxia), 邸萌萌 (Di Mengmeng)
Current Assignee: China University of Petroleum (East China)
Original Assignee: China University of Petroleum (East China)
Priority date: 2014-01-28
Filing date: 2014-01-28
Publication date: 2014-04-30
Application filed by China University of Petroleum (East China)
2014-01-28: Priority to CN201410041858.2A
2014-04-30: Publication of CN103761724A
Legal status: Pending

Landscapes

  • Image Processing (AREA)

Abstract

The invention provides a visible light and infrared video fusion method based on a surreal luminance-contrast transfer algorithm, and relates to visible light and infrared video fusion. The method addresses two shortcomings of existing infrared and visible light image fusion methods: they cannot meet the real-time requirements of video processing, and they cannot achieve a satisfactory visible light and infrared video fusion result in smoky environments. The method comprises the following steps: the visible light video is surreally fused with background information to obtain a fused visible light video sequence, and the luminance component of that sequence is computed; a two-level lifting wavelet transform is applied to the luminance component and to the infrared image to obtain their sparse coefficients; the two sets of sparse coefficients are fused according to defined rules to obtain fused sparse coefficients; an inverse lifting wavelet transform then yields a grayscale fusion image, whose luminance is adjusted; finally, the adjusted image is converted back into color space, and the resulting fused images form the output video sequence. The method is applicable to the technical field of image processing.

Description

Visible light and infrared video fusion method based on a surreal luminance-contrast transfer algorithm
Technical field
The present invention relates to the technical field of image processing, and in particular to a visible light and infrared video fusion method based on a surreal luminance-contrast transfer algorithm.
Background technology
Image fusion is the process of merging multiple images of the same scene, taken by different sensors, into a single high-quality image by some computational method, so that the strengths of the individual image sensors are exploited and the interpretability of the image information is improved. To image a given scene, sensors such as visible light cameras and thermal infrared imagers can be used. Images taken by a visible light camera have vivid colors and high contrast and match human visual characteristics, but they degrade badly in smoke or poor lighting. A thermal infrared imager forms images from the infrared radiation emitted by objects and therefore works normally in smoke or poor lighting, but its images usually have low contrast and low definition. If a scene or target must be observed around the clock, a single sensor cannot do the job; the image information captured by multiple sensors must be fused, so that the fused image combines the respective imaging advantages of all the sensors.
The classical image fusion methods at present mainly include the following. (1) Liu Kun et al. studied pixel-level multi-sensor image fusion (see Liu Kun, Guo Lei, Li Huihui. Research on pixel-level multi-sensor image fusion [J]. Computer Engineering and Applications, 2007, 43(12): 59-61), mainly for fusing infrared and visible light images. (2) Zhao Gaopeng et al. proposed an infrared and visible light image fusion method based on the lifting wavelet (see Zhao Gaopeng, Bao Yuming, Liu Di. Infrared and visible light image fusion method based on lifting wavelet [J]. Computer Engineering and Design, 2009, 30(7): 1697-1699); using the lifting wavelet as the analysis tool, it achieves higher efficiency than ordinary wavelets. The wavelet transform, however, lacks anisotropy when describing high-dimensional texture details in images. (3) Song Jiangshan, Arash et al. proposed image fusion methods based on the curvelet transform (see Song Jiangshan, Xu Jianqiang, Guan Wenchun. Improved curvelet transform image fusion method [J]. Chinese Optics and Applied Optics, 2009, 2(2): 145-149, and Arash Golibagh Mahyari, et al. A novel image fusion method using curvelet transform based on linear dependency test [C]. International Conference on Digital Image Processing, 2009: 351-354); compared with the traditional wavelet transform they obtain better results and suppress noise effectively, but the curvelet transform has high complexity and is slow. (4) Zejing Guang et al. proposed an image fusion method based on the contourlet transform (see Zejing Guang, Zhenbing Zhao, Qiang Gao. Infrared and visible images fusion based on contourlet-domain hidden Markov tree model [C]. International Congress on Image and Signal Processing, 2011: 1916-1920), applying this newer multiscale analysis tool to image fusion with a clear improvement over the traditional wavelet transform; however, the ordinary contourlet transform is not shift-invariant. (5) Huang Keyu, W. Kong et al. proposed the non-subsampled contourlet transform (see Huang Keyu, Li Min, He Yujie, et al. An image fusion algorithm based on non-subsampled contourlet transform [J]. Modern Electronics Technique, 2011, 34(24): 96-98, and W. Kong, Y. Lei, X. Ni. Fusion technique for grey-scale visible light and infrared images based on non-subsampled contourlet transform and intensity-hue-saturation transform [J]. IET Signal Processing, 2011, 5(1): 75-80), which performs well in image fusion. Nevertheless, all five of these classical methods fuse visible light and infrared images poorly when the scene contains large areas of smoke. (6) Li Guangxin et al. proposed an infrared and color visible light luminance-contrast transfer fusion algorithm (see Li Guangxin, Wu Weiping, Hu Jun. Infrared and color visible light image luminance-contrast transfer fusion algorithm. Chinese Optics, 2011, 4(2): 162-167), but the heavy computation of the orthogonal wavelet transform makes the method too slow to meet the real-time requirements of video processing. (7) Cai Qinhui et al. proposed a surreal image fusion method (see Cai Qinhui. Surreal image fusion and its application in video. Hangzhou: Zhejiang University master's thesis, 2007, 5: 11-21), but the method requires gradient-domain image reconstruction, which is relatively complex to compute and likewise cannot meet the real-time requirements of video processing.
Summary of the invention
The present invention addresses the problem that existing infrared and visible light image fusion methods cannot both meet the real-time requirements of video processing and achieve a satisfactory visible light and infrared video fusion result in smoky environments, and proposes a visible light and infrared video fusion method based on a surreal luminance-contrast transfer algorithm.
The visible light and infrared video fusion method based on the surreal luminance-contrast transfer algorithm comprises the following implementation steps:
Step 1: Extract the background information background of the captured scene from n frames of visible light images by the weighted-average method, where n is a positive integer greater than 300 and less than 3000;
Step 2: If the background information background needs to be updated, extract it again from another m frames of visible light images by the weighted-average method, where m is a positive integer greater than 300 and less than 3000;
Step 3: Surreally fuse the visible light video with the background information background by weighted averaging to obtain the fused visible light video sequence visible, using the following empirical formula:
visible = 0.15 × background + 0.9 × visible;
Step 4: Compute the luminance component Y_vis of the visible light video sequence visible with the following empirical formula:
Y_vis = 0.299 × R + 0.587 × G + 0.114 × B, where R, G, and B denote the red, green, and blue channels of the visible light video sequence, respectively;
Step 5: Apply a two-level lifting wavelet transform separately to the luminance component Y_vis of the visible light video sequence visible and to the infrared image IR, obtaining the sparse coefficients Ytmp of the luminance component and the sparse coefficients IRtmp of the infrared image;
Step 6: Fuse the sparse coefficients Ytmp of the luminance component with the sparse coefficients IRtmp of the infrared image according to the following rules to obtain the fused sparse coefficients Ftmp:
a. For the low-frequency component of the sparse coefficients, directly take the low-frequency component of the infrared image, so as to incorporate more target information;
b. For the high-frequency components of the second decomposition level, use the choose-max-absolute-value fusion criterion, which better selects the pixels where detail varies markedly in the image;
c. For the high-frequency components of the first decomposition level, use the weighted-average fusion criterion;
Step 7: Apply the two-level inverse lifting wavelet transform to the fused sparse coefficients Ftmp to obtain the grayscale fusion image F;
Step 8: Adjust the grayscale fusion image F with the existing luminance-contrast transfer algorithm to obtain the adjusted image F*, and compute the difference Imid between F* and the luminance component Y_vis;
Step 9: Use the following formulas to obtain the three components Rc, Gc, and Bc of the fusion result, which together constitute the final output video sequence:
Rc = R + (F* - Y_vis)
Gc = G + (F* - Y_vis)
Bc = B + (F* - Y_vis).
The present invention has the following beneficial effects:
1. The invention uses the lifting wavelet instead of the orthogonal wavelet in the luminance-contrast transfer algorithm, which raises the algorithm's computation speed and enables real-time visible light and infrared video fusion;
2. The invention incorporates the surreal gradient-domain image fusion method into visible light and infrared video fusion as a preprocessing step for the visible light video. Because this method markedly improves the quality of the visible light video when environmental conditions are poor, especially in smoky environments, it improves the clarity of targets in the fused video.
Brief description of the drawings
Fig. 1 is the overall flowchart of the present invention;
Fig. 2 compares the running times of the lifting wavelet transform and the biorthogonal wavelet transform;
Fig. 3 shows infrared video frames for comparison, where figures (a)-(d) are the 330th, 950th, 1660th, and 1980th frames, respectively;
Fig. 4 shows the corresponding visible light video frames of the 330th, 950th, 1660th, and 1980th frames;
Fig. 5 compares the results of fusing the 330th, 950th, 1660th, and 1980th frames with the lifting wavelet luminance-contrast transfer algorithm;
Fig. 6 compares the results of fusing the 330th, 950th, 1660th, and 1980th frames with the method of the present invention;
Fig. 7 compares the entropy of the lifting wavelet luminance-contrast transfer algorithm and the method of the present invention;
Fig. 8 compares the edge strength of the two methods;
Fig. 9 compares the average gradient of the two methods.
Specific embodiments
Embodiment 1: With reference to Fig. 1, the specific implementation steps of the present invention are as follows:
Step 1: Extract the background information background of the captured scene from n frames of visible light images by the weighted-average method, where n is a positive integer greater than 300 and less than 3000;
Step 2: If the background information background needs to be updated, extract it again from another m frames of visible light images by the weighted-average method, where m is a positive integer greater than 300 and less than 3000;
Step 3: Surreally fuse the visible light video with the background information background by weighted averaging to obtain the fused visible light video sequence visible, using the following empirical formula:
visible = 0.15 × background + 0.9 × visible;
Step 4: Compute the luminance component Y_vis of the visible light video sequence visible with the following empirical formula:
Y_vis = 0.299 × R + 0.587 × G + 0.114 × B, where R, G, and B denote the red, green, and blue channels of the visible light video sequence, respectively;
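By way of illustration, steps 1 through 4 amount to a handful of array operations. The following Python/NumPy sketch shows one possible implementation; the function names, the uniform frame weights, and the (n, H, W, 3) frame-stack layout are assumptions made for illustration, since the patent specifies only that a weighted-mean method is used:

```python
import numpy as np

def extract_background(frames, weights=None):
    """Steps 1-2: weighted-average background from a stack of visible-light
    frames with shape (n, H, W, 3), n between 300 and 3000.  Uniform weights
    are assumed when none are given; the patent fixes only the method
    (weighted mean), not the weights themselves."""
    frames = np.asarray(frames, dtype=np.float64)
    if weights is None:
        weights = np.full(len(frames), 1.0 / len(frames))
    return np.tensordot(weights, frames, axes=1)

def surreal_fuse(frame, background):
    """Step 3: surreal fusion of one visible frame with the background;
    0.15 and 0.9 are the empirical coefficients given in the patent."""
    return 0.15 * background + 0.9 * frame

def luminance(rgb):
    """Step 4: Y_vis = 0.299*R + 0.587*G + 0.114*B."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
```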
Step 5: Apply a two-level lifting wavelet transform separately to the luminance component Y_vis of the visible light video sequence visible and to the infrared image IR, obtaining the sparse coefficients Ytmp of the luminance component and the sparse coefficients IRtmp of the infrared image;
Step 6: Fuse the sparse coefficients Ytmp of the luminance component with the sparse coefficients IRtmp of the infrared image according to the following rules to obtain the fused sparse coefficients Ftmp:
a. For the low-frequency component of the sparse coefficients, directly take the low-frequency component of the infrared image, so as to incorporate more target information;
b. For the high-frequency components of the second decomposition level, use the choose-max-absolute-value fusion criterion, which better selects the pixels where detail varies markedly in the image;
c. For the high-frequency components of the first decomposition level, use the weighted-average fusion criterion;
Step 7: Apply the two-level inverse lifting wavelet transform to the fused sparse coefficients Ftmp to obtain the grayscale fusion image F;
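A minimal sketch of steps 5 through 7 follows. It uses PyWavelets' 'bior2.2' filter bank, the CDF 5/3 biorthogonal wavelet whose lifting factorization underlies the lifting transform described here, as a stand-in for a true lifting implementation; PyWavelets computes the transform by convolution, so the coefficients match but the speed advantage of the lifting scheme is not reproduced. Equal weights are assumed for the level-1 weighted average, which the patent does not fix:

```python
import numpy as np
import pywt

def fuse_gray(y_vis, ir):
    """Steps 5-7: two-level decomposition, coefficient fusion, inverse
    transform.  y_vis and ir are assumed to be registered grayscale
    images of equal size."""
    # Step 5: two-level decomposition.  wavedec2 returns
    # [LL2, level-2 details, level-1 details], coarsest level first,
    # each detail entry being (horizontal, vertical, diagonal) bands.
    ytmp = pywt.wavedec2(y_vis, 'bior2.2', level=2)
    irtmp = pywt.wavedec2(ir, 'bior2.2', level=2)

    # Rule (a): take the low-frequency band entirely from the infrared image.
    ftmp = [irtmp[0]]
    # Rule (b): level-2 detail bands, keep the coefficient with the
    # larger absolute value.
    ftmp.append(tuple(np.where(np.abs(y) >= np.abs(r), y, r)
                      for y, r in zip(ytmp[1], irtmp[1])))
    # Rule (c): level-1 detail bands, weighted average (equal weights assumed).
    ftmp.append(tuple(0.5 * (y + r) for y, r in zip(ytmp[2], irtmp[2])))

    # Step 7: inverse transform yields the grayscale fusion image F.
    return pywt.waverec2(ftmp, 'bior2.2')
```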
Step 8: Adjust the grayscale fusion image F with the existing luminance-contrast transfer algorithm to obtain the adjusted image F*, and compute the difference Imid between F* and the luminance component Y_vis;
Step 9: Use the following formulas to obtain the three components Rc, Gc, and Bc of the fusion result, which together constitute the final output video sequence:
Rc = R + (F* - Y_vis)
Gc = G + (F* - Y_vis)
Bc = B + (F* - Y_vis).
Embodiment 2: This embodiment further specifies Embodiment 1. The existing luminance-contrast transfer algorithm of step 8 is the method of the document "Infrared and color visible light image luminance-contrast transfer fusion algorithm" (Chinese Optics, 2011, 4(2): 162-167); F* is computed according to the luminance-contrast transfer formula:
F* = (σ_Ref ÷ σ_F) × (F - μ_F) + μ_Ref
where F* is the grayscale fusion image after the luminance-contrast adjustment, and (μ_F, σ_F) and (μ_Ref, σ_Ref) are the first-order statistic (mean) and second-order statistic (standard deviation) of the grayscale fusion image F before adjustment and of the grayscale reference image Ref, respectively.
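Steps 8 and 9 likewise reduce to a few array operations; a sketch under the same assumptions follows. The choice of the grayscale reference image Ref is left as a parameter, following the cited luminance-contrast transfer document, and the function names are illustrative:

```python
import numpy as np

def contrast_transfer(F, ref):
    """Step 8: F* = (sigma_Ref / sigma_F) * (F - mu_F) + mu_Ref, i.e. match
    the mean and standard deviation of F to those of the reference image."""
    return (ref.std() / F.std()) * (F - F.mean()) + ref.mean()

def recolor(rgb, f_star, y_vis):
    """Step 9: add Imid = F* - Y_vis back onto each color channel of the
    surreally fused visible frame to obtain the final color result."""
    imid = f_star - y_vis
    return np.stack([rgb[..., c] + imid for c in range(3)], axis=-1)
```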
To verify the beneficial effects of the present invention, the following simulation experiments were conducted:
1. Experimental conditions and methods
Hardware platform: PC (Windows XP, Intel i5 dual-core processor, 2.3 GHz clock frequency, 2 GB RAM);
Software platform: simulation experiment 1 uses Matlab 7.0, and simulation experiment 2 uses VC. Experimental subjects: simulation experiment 1 uses the 512 × 512 double-precision grayscale Lena image; simulation experiment 2 uses a 2000-frame video sequence;
2. Simulation content
Simulation experiment 1: To improve the efficiency of the fusion algorithm, given the high efficiency that video fusion demands, the biorthogonal 5-3 wavelet used in the existing method (see the document "Infrared and color visible light image luminance-contrast transfer fusion algorithm", Chinese Optics, 2011, 4(2): 162-167) is replaced by the lifting wavelet, in order to verify the lifting wavelet's contribution to efficiency. In this experiment, the 512 × 512 double-precision grayscale Lena image is decomposed and reconstructed 10 times with a two-level transform, using the biorthogonal 5-3 wavelet and the lifting wavelet respectively, and the total running time is recorded; the test is repeated 5 times. The results are shown in Table 1, and Fig. 2 plots the running-time comparison of the lifting wavelet transform and the biorthogonal wavelet transform.
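A small timing harness in the spirit of this protocol (ten two-level decompose/reconstruct cycles per trial, five trials) might look as follows. It uses PyWavelets rather than the Matlab 7.0 environment of the original experiment, so absolute times will differ; and since PyWavelets does not expose a lifting implementation, it reproduces the measurement protocol rather than the biorthogonal-versus-lifting comparison itself:

```python
import time
import numpy as np
import pywt

def time_wavelet(img, wavelet='bior2.2', cycles=10, trials=5):
    """Total seconds for `cycles` two-level decompositions plus
    reconstructions, repeated over `trials` independent measurements."""
    results = []
    for _ in range(trials):
        t0 = time.perf_counter()
        for _ in range(cycles):
            coeffs = pywt.wavedec2(img, wavelet, level=2)
            pywt.waverec2(coeffs, wavelet)
        results.append(time.perf_counter() - t0)
    return results

# Random data stands in for the 512 x 512 double-precision grayscale Lena image.
print(time_wavelet(np.random.rand(512, 512)))
```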
Simulation experiment 2: To demonstrate the necessity of performing the multiscale analysis with the lifting wavelet, the fraction of per-frame processing time spent on wavelet decomposition and reconstruction within the fusion algorithm was analyzed experimentally. Five runs were performed on a 2000-frame video sequence, measuring the average time the algorithm of the present invention needs per frame and the average time spent on the lifting wavelet transform and its inverse; the results are shown in Table 2. Figs. 3(a)-3(d) and Figs. 4(a)-4(d) show, from left to right, the 330th, 950th, 1660th, and 1980th frames of the infrared and visible light videos; Figs. 5(a)-5(d) show the fusion results of those frames using the existing lifting wavelet luminance-contrast transfer algorithm; Figs. 6(a)-6(d) show the fusion results of the same frames using the algorithm of the present invention. Fig. 7 compares the entropy of the two methods, Fig. 8 compares their edge strength, and Fig. 9 compares their average gradient.
3. Experimental results
Simulation experiment 1 yields the running times of the lifting wavelet and the biorthogonal wavelet and the corresponding efficiency gains, as shown in Table 1:
Table 1. Running time and efficiency of the lifting wavelet vs. the biorthogonal wavelet

Unit (seconds)       | Run 1   | Run 2   | Run 3   | Run 4   | Run 5   | Average
Lifting wavelet      | 8.063   | 7.812   | 7.735   | 8.031   | 8.063   | 7.9408
Biorthogonal wavelet | 38.256  | 37.907  | 37.336  | 38.0231 | 37.765  | 37.85742
Efficiency gain      | 374.46% | 385.24% | 382.69% | 373.45% | 368.37% | 376.75%
The data in Table 1 show that the running time fluctuates slightly with the CPU's load, and that in the decomposition and reconstruction of two-dimensional images the lifting wavelet is far more efficient than the biorthogonal 5-3 wavelet.
From the running-time comparison in Fig. 2 it can be concluded that replacing the biorthogonal 5-3 wavelet with the lifting wavelet significantly improves the efficiency of wavelet decomposition and reconstruction of images.
The average per-frame processing time obtained in simulation experiment 2 and the average time spent on the lifting wavelet transform and its inverse within the algorithm are shown in Table 2.
Table 2. Wavelet transform and reconstruction time, total per-frame processing time, and their ratio
[The data of Table 2 appear only as an image in the original publication.]
As Table 2 shows, the wavelet transform and reconstruction account for nearly half of the algorithm's total running time. The present invention uses the lifting wavelet transform; since the biorthogonal 5-3 wavelet is far less efficient than the lifting wavelet, it can be inferred that if the biorthogonal 5-3 wavelet were used for image decomposition and reconstruction, most of the algorithm's time would be consumed by the wavelet transform and reconstruction, making real-time processing hard to guarantee. Adopting the lifting wavelet is therefore both necessary and effective.
The evaluation data of the fused images are shown in Table 3:

Table 3. Evaluation data of the fused images
           | Lifting wavelet luminance-contrast transfer | Present invention
           | Entropy | Edge strength | Avg. gradient | Entropy | Edge strength | Avg. gradient
Frame 330  | 7.1051  | 93.3333       | 6.0294        | 7.0383  | 115.5556      | 6.0666
Frame 950  | 7.1058  | 92.4444       | 3.7423        | 7.1525  | 114.6667      | 3.944
Frame 1660 | 6.9089  | 91.5556       | 2.9017        | 7.0537  | 113.7778      | 3.2072
Frame 1980 | 6.7036  | 90.6667       | 2.4488        | 6.8709  | 112.8889      | 2.7601
Compared with the existing lifting wavelet luminance-contrast transfer algorithm, the objective indicators in Table 3 show that when the smoke in the image is dense, the fused images obtained by the present invention are better in entropy, edge strength, average gradient, and related indices, which demonstrates that introducing the surreal image fusion method into infrared and visible light image fusion is effective. Subjectively, the comparison of the fusion results in Fig. 5 and Fig. 6 shows that with the present invention, objects such as houses and trees are clearer in the fused result, the people in the scene stand out more, targets are more salient, and the background is clearer; the advantage is especially pronounced in dense smoke. This indicates that the fusion effect of the present invention is better.
Fig. 7, Fig. 8, and Fig. 9 plot the entropy, edge strength, and average gradient of the fusion results of the lifting wavelet luminance-contrast transfer algorithm and of the method of the present invention. Fig. 7 shows that, except for the 330th frame, the fusion results of the present invention carry more information than those of the plain lifting wavelet luminance-contrast transfer algorithm. At the 330th frame there is little smoke and the image quality is good, and the entropy computed by the present invention is slightly lower there. The reason is that, for the sake of efficiency, the surreal fusion step uses a simple weighted-average fusion criterion; adopting a more elaborate gradient-domain fusion criterion would remove this shortfall. Since the image quality is good at that point, the slight loss in the information index is acceptable and is outweighed by the overall performance gain. Fig. 8 shows that after introducing the surreal fusion, the edge strength of the fusion results rises markedly, which indicates that as long as a high-quality background can be extracted and fused with the video sequence, the edge information of the video sequence to be processed can be significantly enhanced. Figs. 8 and 9 show that the images fused by the present invention have higher edge strength and larger average gradient, that is, the fused images are sharper and express detail better; this is consistent with the subjective evaluation and again confirms the effectiveness of introducing the surreal fusion.
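The patent does not spell out how entropy, edge strength, and average gradient are computed. The sketch below uses formulations that are common in the image fusion literature (Shannon entropy of the gray-level histogram, mean Sobel gradient magnitude for edge strength, and the mean root-mean-square of horizontal and vertical differences for average gradient); these should be read as assumptions rather than the exact definitions behind Table 3 and Figs. 7 to 9:

```python
import numpy as np
from scipy import ndimage

def entropy(img):
    """Shannon entropy (bits) of an 8-bit grayscale image's histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def edge_strength(img):
    """Mean Sobel gradient magnitude, one common edge-strength measure."""
    gx = ndimage.sobel(img.astype(float), axis=1)
    gy = ndimage.sobel(img.astype(float), axis=0)
    return np.hypot(gx, gy).mean()

def average_gradient(img):
    """Mean of sqrt((dx^2 + dy^2) / 2), a standard sharpness index
    in the fusion literature."""
    f = img.astype(float)
    dx = np.diff(f, axis=1)[:-1, :]   # crop so dx and dy align
    dy = np.diff(f, axis=0)[:, :-1]
    return np.sqrt((dx ** 2 + dy ** 2) / 2).mean()
```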
In summary, the present invention achieves fairly ideal results in both objective indicators and visual effect, which proves that its video fusion performance is superior to existing fusion methods.

Claims (3)

1. A visible light and infrared video fusion method based on a surreal luminance-contrast transfer algorithm, characterized in that it is carried out according to the following steps:
Step 1: Extract the background information background of the captured scene from n frames of visible light images by the weighted-average method, where n is a positive integer greater than 300 and less than 3000;
Step 2: If the background information background needs to be updated, extract it again from another m frames of visible light images by the weighted-average method, where m is a positive integer greater than 300 and less than 3000;
Step 3: Surreally fuse the visible light video with the background information background by weighted averaging to obtain the fused visible light video sequence visible, using the following empirical formula:
visible = 0.15 × background + 0.9 × visible;
Step 4: Compute the luminance component Y_vis of the visible light video sequence visible with the following empirical formula:
Y_vis = 0.299 × R + 0.587 × G + 0.114 × B, where R, G, and B denote the red, green, and blue channels of the visible light video sequence, respectively;
Step 5: Apply a two-level lifting wavelet transform separately to the luminance component Y_vis of the visible light video sequence visible and to the infrared image IR, obtaining the sparse coefficients Ytmp of the luminance component and the sparse coefficients IRtmp of the infrared image;
Step 6: Fuse the sparse coefficients Ytmp of the luminance component with the sparse coefficients IRtmp of the infrared image according to a fusion rule to obtain the fused sparse coefficients Ftmp;
Step 7: Apply the two-level inverse lifting wavelet transform to the fused sparse coefficients Ftmp to obtain the grayscale fusion image F;
Step 8: Adjust the grayscale fusion image F with the existing luminance-contrast transfer algorithm to obtain the adjusted image F*, and compute the difference Imid between F* and the luminance component Y_vis;
Step 9: Use the following formulas to obtain the three components Rc, Gc, and Bc of the fusion result, which together constitute the final output video sequence:
Rc = R + (F* - Y_vis)
Gc = G + (F* - Y_vis)
Bc = B + (F* - Y_vis).
2. The visible light and infrared video fusion method based on the surreal luminance-contrast transfer algorithm of claim 1, characterized in that the fusion rules of step 6 are:
a. For the low-frequency component of the sparse coefficients, directly take the low-frequency component of the infrared image;
b. For the high-frequency components of the second decomposition level, use the choose-max-absolute-value fusion criterion;
c. For the high-frequency components of the first decomposition level, use the weighted-average fusion criterion.
3. The visible light and infrared video fusion method based on the surreal luminance-contrast transfer algorithm of claim 1, characterized in that the formula of the existing luminance-contrast transfer algorithm of step 8 is:
F* = (σ_Ref ÷ σ_F) × (F - μ_F) + μ_Ref
where F* is the grayscale fusion image after the luminance-contrast adjustment, and (μ_F, σ_F) and (μ_Ref, σ_Ref) are the first-order statistic (mean) and second-order statistic (standard deviation) of the grayscale fusion image F before adjustment and of the grayscale reference image Ref, respectively.
CN201410041858.2A 2014-01-28 2014-01-28 Visible light and infrared video fusion method based on surreal luminance contrast pass algorithm Pending CN103761724A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410041858.2A CN103761724A (en) 2014-01-28 2014-01-28 Visible light and infrared video fusion method based on surreal luminance contrast pass algorithm


Publications (1)

Publication Number Publication Date
CN103761724A true CN103761724A (en) 2014-04-30

Family

ID=50528957

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410041858.2A Pending CN103761724A (en) 2014-01-28 2014-01-28 Visible light and infrared video fusion method based on surreal luminance contrast pass algorithm

Country Status (1)

Country Link
CN (1) CN103761724A (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101339653A (en) * 2008-01-30 2009-01-07 西安电子科技大学 Infrared and colorful visual light image fusion method based on color transfer and entropy information
CN101714251A (en) * 2009-12-22 2010-05-26 上海电力学院 Infrared and visual pseudo-color image fusion and enhancement method
CN102609927A (en) * 2012-01-12 2012-07-25 北京理工大学 Foggy visible light/infrared image color fusion method based on scene depth

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Li Guangxin (李光鑫) et al.: "Infrared and color visible light image luminance-contrast transfer fusion algorithm" (红外和彩色可见光图像亮度-对比度传递融合算法), Chinese Optics (中国光学) *
Cai Qinhui (蔡勤慧): "Surreal image fusion and its application in video" (超现实的图像融合及在视频中的应用), China Master's Theses Full-text Database, Information Science and Technology (中国优秀硕士学位论文全文数据库 信息科技辑) *
Zhao Gaopeng (赵高鹏) et al.: "Infrared and visible light image fusion method based on lifting wavelet" (基于提升小波的红外和可见光图像融合方法), Computer Engineering and Design (计算机工程与设计) *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105096289A (en) * 2015-09-21 2015-11-25 河南科技学院 Image processing method and mobile terminal
CN105096289B (en) * 2015-09-21 2018-09-11 河南科技学院 A kind of method and mobile terminal of image procossing
CN106447641A (en) * 2016-08-29 2017-02-22 努比亚技术有限公司 Image generation device and method
CN111028188A (en) * 2016-09-19 2020-04-17 杭州海康威视数字技术股份有限公司 Image acquisition equipment for light splitting fusion
CN111028188B (en) * 2016-09-19 2023-05-02 杭州海康威视数字技术股份有限公司 Light-splitting fusion image acquisition equipment
CN106548467A (en) * 2016-10-31 2017-03-29 广州飒特红外股份有限公司 The method and device of infrared image and visual image fusion
CN106548467B (en) * 2016-10-31 2019-05-14 广州飒特红外股份有限公司 The method and device of infrared image and visual image fusion
CN106611409A (en) * 2016-11-18 2017-05-03 哈尔滨工程大学 Small target enhancing detection method based on secondary image fusion
CN106611409B (en) * 2016-11-18 2019-07-16 哈尔滨工程大学 A kind of Small object enhancing detection method based on secondary image fusion
CN115578621A (en) * 2022-11-01 2023-01-06 中国矿业大学 Image identification method based on multi-source data fusion
CN115578621B (en) * 2022-11-01 2023-06-20 中国矿业大学 Image recognition method based on multi-source data fusion
CN115527293A (en) * 2022-11-25 2022-12-27 广州万协通信息技术有限公司 Method for opening door by security chip based on human body characteristics and security chip device


Legal Events

Date Code Title Description
C06  Publication
PB01 Publication
C10  Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20140430)