CN102609927A - Foggy visible light/infrared image color fusion method based on scene depth - Google Patents
- Publication number
- CN102609927A CN102609927A CN2012100093825A CN201210009382A CN102609927A CN 102609927 A CN102609927 A CN 102609927A CN 2012100093825 A CN2012100093825 A CN 2012100093825A CN 201210009382 A CN201210009382 A CN 201210009382A CN 102609927 A CN102609927 A CN 102609927A
- Authority
- CN
- China
- Prior art keywords
- depth
- image
- color
- field
- scene
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Processing (AREA)
Abstract
The invention relates to a foggy-weather visible/infrared image color fusion method based on scene depth, and belongs to the field of multi-band image fusion. The method aims to solve three problems of existing visible/infrared color fusion techniques: there is no specific criterion for choosing the fusion coefficients, the colors of the fused image look unnatural, and images degraded by fog or artificial smoke cannot be processed effectively. The method comprises the following steps: first, a scene depth image is computed from the foggy visible image and normalized to obtain the scene depth information; then, the normalized depth is used as the weighting factor for fusing the visible and infrared images, and a nonlinear weighted color fusion of the two images is performed; finally, a mapping relationship is established and the nonlinear weighted fusion result is corrected by a nonlinear color transfer technique, yielding a color fusion image with prominent hot targets and natural, realistic colors.
Description
Technical field
The present invention relates to a foggy-weather visible/infrared image color fusion method based on scene depth, and belongs to the field of multi-band image fusion.
Background art
Infrared and visible-light image sensors are widely used in the military and security-monitoring fields. A visible image is a reflectance image rich in high-frequency components; under sufficient illumination it conveys the detailed information of the scene. An infrared image is a radiance image: its gray levels are determined by the temperature difference between target and background and do not reflect the true appearance of the scene, but it highlights hot or cold targets, and infrared radiation can penetrate smoke to detect occluded targets, which is especially valuable when smoke is present in the scene.
Fusion of visible and infrared images can effectively combine and exploit the characteristic information of both, highlighting targets and enhancing scene detail, which helps detect targets faster and more accurately under concealment, camouflage and deception. In 1996, the TNO Human Factors Research Institute of the Netherlands proposed the TNO fusion architecture (the TNO method), which uses color to highlight the regions where the infrared and low-light images differ most. The TNO method is simple and easy to implement in real time, but the dynamic range of the fused image is reduced and the color contrast is too strong, which is unfavorable for prolonged observation. In the mid-1990s, the MIT Lincoln Laboratory proposed a color fusion method based on biological vision (the MIT method). The MIT architecture derives from biological color-vision models, namely the primate color-vision model and the rattlesnake's response model to visible light and heat, and constructs a feed-forward center-surround separation network (CSSN), also known as opponent processing. Linear combination methods based on the TNO and opponent-processing approaches are currently the most common fusion methods: the fused image is a linear combination of the visible and infrared images, but there is no definite criterion for choosing the combination coefficients, which are generally set empirically.
Color transfer was first proposed by Reinhard et al. in 2001. It uses a well-saturated, high-contrast daytime natural image as a reference and establishes a mapping between the reference image and a source image with poor color, adjusting the mean and variance of each color component of the source so that the processed image acquires colors similar to those of the reference. In 2003, Toet introduced color transfer into multi-band image fusion to adjust the colors of the fused image; however, the effectiveness of this transfer based on global mean and variance depends on how similar the scene colors of the reference and source images are, and no mature criterion for choosing the reference image exists at present.
With the development of single-image defogging techniques, the scene depth of an image can be computed in several ways from the physical model of foggy-weather imaging. In 2008, Raanan Fattal assumed that scene-point reflectance and the scattering power of suspended atmospheric particles are two independent quantities, and computed scene depth by independent component analysis. In 2009, Kaiming He obtained the transmission parameter, which is related to scene depth, from the dark channel prior. Also in 2009, Peter Carr assumed that in outdoor foggy images the depth of scene points near the top of the image is usually greater than that of points near the bottom, and computed scene depth with image segmentation and alpha expansion. In China, in 2007, Sun Yubao constructed an energy functional involving scene depth and image gradient, derived the corresponding PDE, and estimated the depth of scene points through simple user interaction. In 2008, Chen Gongli used a clear-day reference image, a foggy reference image and prior knowledge of the scene to compute some parameters of the foggy-weather physical model, and estimated from them the depth ratio of each point to a fixed point in the scene. The computation of scene depth information is therefore relatively mature, and computing scene depth from a single foggy image is feasible.
Summary of the invention
The object of the invention is to solve the problems that in existing visible/infrared image fusion techniques there is no specific criterion for choosing the fusion coefficients, the colors of the fused image are unnatural, and fog-degraded images cannot be fused effectively, by proposing a foggy-weather visible/infrared image color fusion method based on scene depth.
The foggy-weather visible/infrared image color fusion method based on scene depth comprises the following steps:
Step 1: compute the scene depth information from the visible image;
Step 2: compute the maximum of the scene depth obtained in step 1, then divide the depth image by this maximum to obtain the normalized scene depth information;
Step 3: using the normalized depth information obtained in step 2, perform the nonlinear weighted color fusion of the visible and infrared images in the YUV color space;
Step 4: choose a reference image, convert it to the YUV color space, and compute its mean and variance in the Y, U and V channels respectively;
Step 5: compute the mean matrices and variance matrices, and perform the color transfer;
Step 6: convert the color-transferred YUV fused image to the RGB color space.
The scene depth information in step 1 is a gray-level image of the same size as the visible image, in which pixels with larger gray values correspond to pixels of greater depth in the visible image, and regions where the fog in the visible image is thicker have greater gray values than regions where it is thinner.
The normalized depth values in step 2 lie between 0 and 1.
The nonlinear weighted color fusion in step 3 proceeds as follows: first, create a new YUV image whose Y, U and V channels are all zero; then assign the visible-image pixel values to the U channel of the new image and the infrared-image pixel values to its V channel, obtaining the chrominance channels of the fused image; finally, combine the visible and infrared images nonlinearly to obtain the Y channel of the new image, where the combination coefficient of the infrared image is the normalized depth of step 2 and the combination coefficient of the visible image is 1 minus the normalized depth.
The mean and variance matrices in step 5 are computed as follows: first, multiply the variances by the normalized depth obtained in step 2 to obtain the variance matrices of the Y, U and V channels, and multiply the means by 1 minus the normalized depth to obtain the mean matrices of the Y, U and V channels; then establish a mapping that transfers the mean and variance information of the reference image to the fused image through the variance and mean matrices of the Y, U and V channels, obtaining the color-transferred fused image in YUV space.
The visible and infrared images used for color fusion are gray-level images.
The visible and infrared images used for color fusion are images captured in foggy weather.
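The six steps above can be sketched end to end as follows. This is a minimal NumPy illustration, not the patent's implementation: the synthetic images, the placeholder depth map (the patent derives depth from the visible image, e.g. via the dark channel prior), and the BT.601 full-range YUV-to-RGB matrix are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
vis = rng.random((32, 32))        # foggy visible image (grayscale, [0, 1])
ir = rng.random((32, 32))         # infrared image (grayscale, [0, 1])
ref = rng.random((32, 32, 3))     # reference image already in YUV (stand-in)

# Steps 1-2: scene depth and its normalization (placeholder depth here).
depth = vis
normd = depth / depth.max()

# Step 3: nonlinear weighted fusion in YUV; Y is the depth-weighted mix,
# U takes the visible image, V takes the infrared image.
F = np.stack([normd * ir + (1 - normd) * vis, vis, ir], axis=-1)

# Steps 4-5: depth-weighted color transfer from the reference image.
R = np.empty_like(F)
for c in range(3):
    sigma_d = normd * ref[..., c].std()          # variance matrix
    mu_d = (1 - normd) * ref[..., c].mean()      # mean matrix
    R[..., c] = sigma_d / F[..., c].std() * (F[..., c] - F[..., c].mean()) + mu_d

# Step 6: YUV -> RGB (BT.601 full-range matrix, an assumed variant).
M = np.array([[1.0, 0.0, 1.13983],
              [1.0, -0.39465, -0.58060],
              [1.0, 0.0, 2.03211]])
rgb = np.clip(R @ M.T, 0.0, 1.0)
```

Each step is expanded with more detail in the embodiment below; this sketch only shows how the pieces chain together.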
Beneficial effect of the present invention:
Compared with the prior art, the main improvements of the foggy-weather visible/infrared image color fusion method based on scene depth provided by the invention are as follows:
1. The color fusion method based on scene depth imposes no strict requirements on the quality of the visible image, so the method is better suited to the fusion of fog-degraded images.
2. Scene detail is associated with scene depth. According to the characteristics of human vision and of fog-affected visible images, the richness of scene detail is inversely related to depth; fusion guided by depth information preserves the detail of the visible image while still highlighting hot or cold targets in the infrared image.
3. The mean and variance of the reference image are linked to the scene depth: the depth-related variance matrices and mean matrices pass the color characteristics of the reference image to the fused image.
Description of drawings
Fig. 1 is the flow chart of the natural-color fusion method for foggy-weather visible/infrared images based on scene depth according to the invention;
Fig. 2 shows the actual scene images affected by artificial smoke in the specific embodiment, where (a) is the color visible image and (b) is the mid-wave infrared image;
Fig. 3 shows the scene depth information computed from Fig. 2(a) in the specific embodiment, where (a) is the normalized scene depth and (b) is 1 minus (a);
Fig. 4 is the color reference image in the specific embodiment, whose scene is similar to that of Fig. 1;
Fig. 5 is the natural-color fusion image in RGB space based on the scene depth information of Fig. 3 in the specific embodiment.
Embodiment
To further illustrate the objects and advantages of the invention, the invention is described below with reference to the accompanying drawings and a specific embodiment.
The working principle of the method is as follows. In foggy weather the scene detail in the visible image is submerged, while infrared radiation penetrates the smoke and the infrared image highlights the occluded targets. According to the physical model of foggy-weather imaging, the degradation of the visible image is exponential in scene depth: as depth increases, the image degrades exponentially faster, so it is reasonable to take the scene depth information into account during fusion. In this embodiment, a mid-wave infrared image and a visible image of the same scene degraded by artificial smoke (Fig. 2) are fused by the natural-color fusion method for foggy-weather visible/infrared images based on scene depth. The implementation comprises the steps shown in Fig. 1.
a. Compute the scene depth from the visible image.
Given the visible image, Fig. 2(a), the depth image of the scene is computed from the imaging physical model and the dark channel prior, as shown in Fig. 3(a). The depth image is a gray-level image of the same size as the visible image; pixels with larger gray values in the depth image correspond to pixels of greater depth in the visible image, and regions where the fog is thicker have greater gray values than regions where it is thinner.
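The patent cites the dark channel prior for this step. A minimal NumPy sketch of that idea might look as follows; the patch size, the crude atmospheric-light estimate, and the clipping bounds are illustrative assumptions, not the patent's exact procedure.

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over the color channels, followed by a
    minimum filter over a patch x patch neighborhood."""
    mins = img.min(axis=2)
    h, w = mins.shape
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def estimate_depth(img, omega=0.95):
    """Relative scene depth from the dark channel prior.

    Under the haze model I = J*t + A*(1 - t) with t = exp(-beta*d),
    the transmission is approximated as t = 1 - omega * dark_channel(I/A),
    so depth is proportional to -log(t) (up to the unknown scale 1/beta).
    A is crudely taken from the brightest dark-channel pixel.
    """
    dc = dark_channel(img)
    A = float(img.reshape(-1, 3)[dc.argmax()].max())
    t = 1.0 - omega * dark_channel(img / max(A, 1e-6))
    t = np.clip(t, 0.05, 1.0)     # avoid log(0); bounds are an assumption
    return -np.log(t)
```

The returned map is only defined up to scale, which is why the method normalizes it in step b before using it as a weight.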
b. Normalize the depth image.
First compute the maximum gray value of the depth image, then divide each gray value of the depth image by this maximum to obtain the normalized depth information, denoted normd; the normalized depth values lie between 0 and 1.
c. Color fusion in YUV space.
First, create a new YUV image F whose Y, U and V channels are all zero. Then assign the visible-image pixel values to the U channel of F and the infrared-image pixel values to its V channel, obtaining the chrominance channels of the fused image. Finally, combine the visible and infrared images nonlinearly to obtain the Y channel of F, where the combination coefficient of the infrared image is the normalized depth normd obtained in step b and the combination coefficient of the visible image is 1 minus normd, formulated as:

F_Y = normd × IR + (1 − normd) × Vis,  F_U = Vis,  F_V = IR

where F denotes the YUV-space fused image, IR the infrared image, Vis the visible image, normd the normalized depth, and the subscripts Y, U, V the three channels of the YUV color space.
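The channel assignment and the depth-weighted Y-channel combination of step c can be sketched as follows (a minimal NumPy illustration; the function name and the assumption that all inputs are float images in [0, 1] are ours):

```python
import numpy as np

def fuse_yuv(vis, ir, normd):
    """Depth-weighted YUV fusion per step c.

    vis, ir : grayscale visible and infrared images (same shape, [0, 1])
    normd   : normalized depth map in [0, 1] from step b
    Returns an (H, W, 3) array whose last axis holds the Y, U, V channels.
    """
    F = np.zeros(vis.shape + (3,))
    F[..., 0] = normd * ir + (1.0 - normd) * vis  # Y: nonlinear combination
    F[..., 1] = vis                                # U: visible image
    F[..., 2] = ir                                 # V: infrared image
    return F
```

With normd equal to 0 everywhere the Y channel reduces to the visible image, and with normd equal to 1 it reduces to the infrared image, matching the two limiting cases of the formula.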
d. Nonlinear color transfer in YUV space.
First, select a color image whose scene is similar to that of the visible image as the reference image, as shown in Fig. 4. Second, convert the reference image to the YUV color space and compute its mean and variance in the Y, U and V channels, denoted μ_Y^Ref, μ_U^Ref, μ_V^Ref and σ_Y^Ref, σ_U^Ref, σ_V^Ref. Third, multiply the variances by the normalized depth obtained in step b to obtain the variance matrices σ_Y^d, σ_U^d, σ_V^d of the Y, U and V channels, and multiply the means by 1 minus the normalized depth to obtain the mean matrices μ_Y^d, μ_U^d, μ_V^d, formulated as:

σ_c^d = normd × σ_c^Ref,  μ_c^d = (1 − normd) × μ_c^Ref,  c ∈ {Y, U, V}
Finally, establish the mapping that transfers the mean and variance information of the reference image to the fused image through the variance and mean matrices of the Y, U and V channels, obtaining the color-transferred fused image in YUV space, formulated as:

R_c = (σ_c^d / σ_c^F) × (F_c − μ_c^F) + μ_c^d,  c ∈ {Y, U, V}
where R denotes the color-transferred image, the subscripts Y, U, V denote the three channels of the YUV color space, F denotes the fused image obtained in step c, and μ_Y^F, μ_U^F, μ_V^F and σ_Y^F, σ_U^F, σ_V^F are respectively the mean and variance of F in the Y, U and V channels.
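Step d can be sketched in NumPy as below. This assumes the standard Reinhard-style mapping R_c = (σ_c^d / σ_c^F)(F_c − μ_c^F) + μ_c^d with the depth-weighted matrices described in this step; the function name, the use of standard deviation for "variance", and the eps guard against a zero-variance channel are our assumptions.

```python
import numpy as np

def depth_weighted_transfer(F, normd, mu_ref, sigma_ref, eps=1e-6):
    """Depth-weighted color transfer of the reference statistics onto F.

    F         : (H, W, 3) fused YUV image from step c
    normd     : (H, W) normalized depth map from step b
    mu_ref    : per-channel means of the reference image (length 3)
    sigma_ref : per-channel standard deviations of the reference (length 3)
    """
    R = np.empty_like(F)
    for c in range(3):
        mu_F = F[..., c].mean()
        sigma_F = F[..., c].std()
        sigma_d = normd * sigma_ref[c]       # variance matrix, depth-weighted
        mu_d = (1.0 - normd) * mu_ref[c]     # mean matrix, (1 - depth)-weighted
        # Per-pixel transfer: rescale around the fused-image mean, then
        # shift toward the reference mean, both modulated by depth.
        R[..., c] = sigma_d / max(sigma_F, eps) * (F[..., c] - mu_F) + mu_d
    return R
```

Because σ_c^d and μ_c^d vary per pixel, nearby (shallow) regions keep more of the reference mean while distant (deep) regions receive more of the reference contrast, which is what makes the transfer nonlinear rather than global.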
e. Convert the color-transferred YUV image to the RGB color space.
Using the RGB/YUV color-space conversion formulas, convert the color-transferred image obtained in step d to the RGB color space.
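Step e only requires a standard YUV-to-RGB conversion. The patent does not name a specific YUV variant, so the BT.601 full-range matrix below is an assumed choice:

```python
import numpy as np

# BT.601 full-range YUV -> RGB matrix (an assumption; the patent does not
# specify which YUV variant it uses).
YUV2RGB = np.array([[1.0,  0.0,      1.13983],
                    [1.0, -0.39465, -0.58060],
                    [1.0,  0.0,      2.03211]])

def yuv_to_rgb(yuv):
    """Convert an (H, W, 3) YUV array (Y in [0, 1], U and V centered on 0)
    to RGB, clipping to the displayable [0, 1] range."""
    return np.clip(yuv @ YUV2RGB.T, 0.0, 1.0)
```

An achromatic pixel (U = V = 0) maps to R = G = B = Y, which is a quick sanity check on the matrix.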
Fig. 5 shows the resulting color fusion image in RGB space. It can be seen that the fused image obtained by the natural-color fusion method for foggy-weather visible/infrared images based on scene depth has natural colors, good contrast and prominent hot targets, and conforms to the characteristics of human vision.
In this specific embodiment, the visible image of step a may be an image of a scene with artificial smoke or an image in natural fog. The infrared image used in the fusion of step c may be a short-wave, mid-wave or long-wave infrared image.
The detailed description above further explains the objects, technical solutions and beneficial effects of the invention. It should be understood that the above is merely a specific embodiment of the invention and is not intended to limit its scope of protection; any modifications, equivalent substitutions, improvements and the like made within the spirit and principles of the invention shall be included within its scope of protection.
Claims (7)
1. A foggy-weather visible/infrared image color fusion method based on scene depth, characterized by comprising the steps of:
Step 1: compute the scene depth information from the visible image;
Step 2: compute the maximum of the scene depth obtained in step 1, then divide the depth image by this maximum to obtain the normalized scene depth information;
Step 3: using the normalized depth information obtained in step 2, perform the nonlinear weighted color fusion of the visible and infrared images in the YUV color space;
Step 4: choose a reference image, convert it to the YUV color space, and compute its mean and variance in the Y, U and V channels respectively;
Step 5: compute the mean matrices and variance matrices, and perform the color transfer;
Step 6: convert the color-transferred YUV fused image to the RGB color space.
2. The foggy-weather visible/infrared image color fusion method based on scene depth of claim 1, characterized in that the scene depth information in step 1 is a gray-level image of the same size as the visible image, in which pixels with larger gray values correspond to pixels of greater depth in the visible image, and regions where the fog in the visible image is thicker have greater gray values than regions where it is thinner.
3. The foggy-weather visible/infrared image color fusion method based on scene depth of claim 1, characterized in that the normalized depth values in step 2 lie between 0 and 1.
4. The foggy-weather visible/infrared image color fusion method based on scene depth of any one of claims 1 to 3, characterized in that the nonlinear weighted color fusion in step 3 proceeds as follows: first, create a new YUV image whose Y, U and V channels are all zero; then assign the visible-image pixel values to the U channel of the new image and the infrared-image pixel values to its V channel, obtaining the chrominance channels of the fused image; finally, combine the visible and infrared images nonlinearly to obtain the Y channel of the new image, where the combination coefficient of the infrared image is the normalized depth of step 2 and the combination coefficient of the visible image is 1 minus the normalized depth.
5. The foggy-weather visible/infrared image color fusion method based on scene depth of any one of claims 1 to 3, characterized in that the mean and variance matrices in step 5 are computed as follows: first, multiply the variances by the normalized depth obtained in step 2 to obtain the variance matrices of the Y, U and V channels, and multiply the means by 1 minus the normalized depth obtained in step 2 to obtain the mean matrices of the Y, U and V channels; then establish a mapping that transfers the mean and variance information of the reference image to the fused image through the variance and mean matrices of the Y, U and V channels, obtaining the color-transferred fused image in YUV space.
6. The foggy-weather visible/infrared image color fusion method based on scene depth of any one of claims 1 to 3, characterized in that the visible and infrared images used for color fusion are gray-level images.
7. The foggy-weather visible/infrared image color fusion method based on scene depth of any one of claims 1 to 3, characterized in that the visible and infrared images used for color fusion are images captured in foggy weather.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2012100093825A CN102609927A (en) | 2012-01-12 | 2012-01-12 | Foggy visible light/infrared image color fusion method based on scene depth |
Publications (1)
Publication Number | Publication Date |
---|---|
CN102609927A true CN102609927A (en) | 2012-07-25 |
Family
ID=46527273
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2012100093825A Pending CN102609927A (en) | 2012-01-12 | 2012-01-12 | Foggy visible light/infrared image color fusion method based on scene depth |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102609927A (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070247517A1 (en) * | 2004-08-23 | 2007-10-25 | Sarnoff Corporation | Method and apparatus for producing a fused image |
CN101465071A (en) * | 2009-01-08 | 2009-06-24 | 上海交通大学 | Multi-platform target tracking and distribution interactive simulation system |
Non-Patent Citations (1)
Title |
---|
ZHANG BEI et al.: "Local Color Transfer Based on Dark Channel Dehazing for Visible/Infrared Image Fusion", Proc. SPIE 8056, Visual Information Processing XX, 80560O, vol. 8056, 3 June 2011, pages 1-7 *
Cited By (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103761724A (en) * | 2014-01-28 | 2014-04-30 | 中国石油大学(华东) | Visible light and infrared video fusion method based on surreal luminance contrast pass algorithm |
CN104268847A (en) * | 2014-09-23 | 2015-01-07 | 西安电子科技大学 | Infrared light image and visible light image fusion method based on interactive non-local average filtering |
CN104268847B (en) * | 2014-09-23 | 2017-04-05 | 西安电子科技大学 | A kind of infrared and visible light image fusion method based on interaction non-local mean filtering |
CN105635718A (en) * | 2014-10-27 | 2016-06-01 | 聚晶半导体股份有限公司 | Image capture device |
CN104601953A (en) * | 2015-01-08 | 2015-05-06 | 中国航空无线电电子研究所 | Video image fusion-processing system |
CN104601953B (en) * | 2015-01-08 | 2017-12-15 | 中国航空无线电电子研究所 | A kind of video image fusion processing system |
CN104751432B (en) * | 2015-03-09 | 2017-06-16 | 电子科技大学 | A kind of visible ray and infrared image fusion method based on Image Reconstruction |
CN104751432A (en) * | 2015-03-09 | 2015-07-01 | 电子科技大学 | Image reconstruction based visible light and infrared image fusion method |
CN104899558A (en) * | 2015-05-25 | 2015-09-09 | 东华大学 | Scene recognition and colorization processing method for vehicle-mounted infrared image |
CN108353128A (en) * | 2015-10-27 | 2018-07-31 | 富士胶片株式会社 | Camera system and object test equipment and its working method |
CN108353128B (en) * | 2015-10-27 | 2020-05-19 | 富士胶片株式会社 | Imaging system, object detection device, and method for operating object detection device |
WO2017071559A1 (en) * | 2015-10-30 | 2017-05-04 | 努比亚技术有限公司 | Image processing apparatus and method |
CN105430295B (en) * | 2015-10-30 | 2019-07-12 | 努比亚技术有限公司 | Image processing apparatus and method |
CN105430295A (en) * | 2015-10-30 | 2016-03-23 | 努比亚技术有限公司 | Device and method for image processing |
CN107705263A (en) * | 2017-10-10 | 2018-02-16 | 福州图森仪器有限公司 | A kind of adaptive Penetrating Fog method and terminal based on RGB IR sensors |
CN110930311A (en) * | 2018-09-19 | 2020-03-27 | 杭州萤石软件有限公司 | Method and device for improving signal-to-noise ratio of infrared image and visible light image fusion |
CN110930311B (en) * | 2018-09-19 | 2023-04-25 | 杭州萤石软件有限公司 | Method and device for improving signal-to-noise ratio of infrared image and visible light image fusion |
CN110223262A (en) * | 2018-12-28 | 2019-09-10 | 中国船舶重工集团公司第七一七研究所 | A kind of rapid image fusion method based on Pixel-level |
CN110211083A (en) * | 2019-06-10 | 2019-09-06 | 北京宏大天成防务装备科技有限公司 | A kind of image processing method and device |
CN112241668A (en) * | 2019-07-18 | 2021-01-19 | 杭州海康威视数字技术股份有限公司 | Image processing method, device and equipment |
CN111104917A (en) * | 2019-12-24 | 2020-05-05 | 杭州魔点科技有限公司 | Face-based living body detection method and device, electronic equipment and medium |
CN111815548A (en) * | 2020-07-07 | 2020-10-23 | 昆明物理研究所 | Medium-long wave dual-waveband infrared image fusion method |
CN111815548B (en) * | 2020-07-07 | 2023-11-03 | 昆明物理研究所 | Medium-long wave double-band infrared image fusion method |
CN112233024A (en) * | 2020-09-27 | 2021-01-15 | 昆明物理研究所 | Medium-long wave dual-waveband infrared image fusion method based on difference characteristic color mapping |
CN112233024B (en) * | 2020-09-27 | 2023-11-03 | 昆明物理研究所 | Medium-long wave double-band infrared image fusion method based on difference characteristic color mapping |
CN113205470A (en) * | 2021-03-19 | 2021-08-03 | 昆明物理研究所 | Infrared medium-short wave double-color fusion method based on hue saturation mapping |
CN113205470B (en) * | 2021-03-19 | 2022-08-30 | 昆明物理研究所 | Infrared medium-short wave double-color fusion method based on hue saturation mapping |
CN116309216A (en) * | 2023-02-27 | 2023-06-23 | 南京博视医疗科技有限公司 | Pseudo-color image fusion method and image fusion system based on multiple wave bands |
CN116309216B (en) * | 2023-02-27 | 2024-01-09 | 南京博视医疗科技有限公司 | Pseudo-color image fusion method and image fusion system based on multiple wave bands |
CN118154489A (en) * | 2024-05-13 | 2024-06-07 | 国科天成科技股份有限公司 | Infrared image enhancement system and method based on atmospheric scattering model |
Legal Events
- C06 / PB01: Publication (application publication date: 2012-07-25)
- C10 / SE01: Entry into substantive examination / entry into force of the request for substantive examination
- C12 / RJ01: Rejection of the invention patent application after publication