CN112184604A - Color image enhancement method based on image fusion - Google Patents
Color image enhancement method based on image fusion
- Publication number
- CN112184604A (application number CN202010966547.2A)
- Authority
- CN
- China
- Prior art keywords
- image
- layer
- pyramid
- fusion
- visible light
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
(All under G—PHYSICS / G06—COMPUTING; CALCULATING OR COUNTING / G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL.)
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T5/90—Dynamic range modification of images or parts thereof
- G06T2207/10004—Still image; Photographic image
- G06T2207/10024—Color image
- G06T2207/10048—Infrared image
- G06T2207/20016—Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
- G06T2207/20024—Filtering details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Abstract
The invention relates to a color image enhancement method based on image fusion. First, a visible light image and a near-infrared image are each filtered with a weighted least squares filter to obtain their base layers; the base layers are then subtracted from the original images to obtain the detail layers. The base layers are decomposed into Laplacian pyramids and fused during pyramid reconstruction to obtain a fused base layer. The local energies and weights of the detail layers are computed, and the detail layers are weighted and fused to obtain a fused detail layer. Finally, the fused base layer and the fused detail layer are added to produce the final fused image. The method effectively suppresses noise and clutter in the fused image while accounting for both detail information and illumination information during fusion; it investigates and demonstrates how the near-infrared image improves the color quality of the visible light image; and it reveals how the illumination information of the near-infrared and visible light images varies across scales.
Description
Technical Field
The invention belongs to the technical field of computer vision, image processing and color correction, and particularly relates to a method for performing overall contrast enhancement and detail texture enhancement on a color image by using a near-infrared image.
Background
Image fusion techniques have developed rapidly in recent years and are widely used in remote sensing, medical imaging, object recognition, computer vision, traffic, and other fields. Image fusion combines multiple images acquired by the same sensor, or by different sensors, into a single fused image of higher quality. This invention addresses the latter case: a visible-band color image is fused with a near-infrared image to improve both the detail contrast and the overall color of the color image.
Like the human visual system, most imaging devices operate in the visible band, whose spectral range is roughly 380 nm to 750 nm. Visible light images provide abundant detail and color information, closely match what the human eye observes, and satisfy most everyday needs, but they are easily degraded by illumination, weather, and other environmental conditions. The infrared band performs better under low illumination and in cloudy, rainy, or foggy weather, and infrared images can capture information that the visible band cannot under poor imaging conditions. However, infrared images lack target color information and tend to lose object texture, which hinders observation.
In summary, a single imaging device often fails, or falls short of the desired effect, in certain situations. Detection schemes that combine multiple sensors therefore obtain richer scene information, and image fusion merges this multi-source information into a single higher-quality image. Because the near-infrared band has a wavelength only slightly longer than the visible band, it is easy to acquire and compensates well for the shortcomings of visible light, making it well suited to image fusion.
In the last few years, many methods have been proposed for fusing visible and infrared images. Most, however, focus on enhancing detail information and edge targets, and overlook the fact that in many scenes the loss of illumination information degrades the color of the visible light image. Moreover, because infrared light discriminates object properties better in some scenes, this information has been underexploited in previous work.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a fusion method based on multi-scale decomposition. Unlike traditional multi-scale decomposition methods, it considers the edge details and the illumination information of the image simultaneously. Each image is decomposed into a base layer and a detail layer by weighted least squares filtering. In the base layers, Laplacian pyramids of the two base layers are constructed by pyramid decomposition and fused layer by layer, which better recovers the color information of the image; in the detail layers, a local-energy fusion rule weights and fuses the details of the two images. Adding the resulting fused base layer and fused detail layer yields an image with recovered color and enhanced detail.
The technical scheme adopted by the invention is as follows: a color image enhancement method based on image fusion, comprising the following steps:
Step (1): filtering the visible light image and the near-infrared image separately with a weighted least squares (WLS) filter to obtain their filtered images, i.e., the base layers of the images;
Step (2): subtracting each base layer obtained in step (1) from its original image to obtain the detail layers of the images;
Step (3): performing Laplacian pyramid decomposition on the base layers of the visible light and near-infrared images, in order to make full use of illumination information;
Step (4): starting from the top of the pyramids, fusing the Laplacian pyramids of the visible light and near-infrared images layer by layer and reconstructing down to the next layer until the bottom layer is reached, to obtain the fused base layer;
Step (5): computing the local energies and weights of the visible light and near-infrared detail layers and fusing them by weighting to obtain the fused detail layer;
Step (6): adding the fused base layer and the fused detail layer to obtain the final fused image.
Preferably, in step (1), the visible light image I_VIS and the near-infrared image I_NIR are filtered by weighted least squares as:

I = W_λ(g) = (1 + λL_g)^(-1) g,

where g is the image to be filtered, λ is the smoothing factor, and L_g = D_x^T A_x D_x + D_y^T A_y D_y is the optimization matrix, in which D_x, D_y are the forward difference operators in the two directions and A_x, A_y are the filtering weight matrices. Weighted least squares filtering of the visible light and near-infrared images yields the corresponding base layer images, denoted B_VIS and B_NIR, respectively.
Preferably, in step (2), each base layer obtained in step (1) is subtracted from its original image to obtain the detail layer:

D = I - B,

where I is the original image and B is the base layer from step (1). The detail layers of the visible light and near-infrared images are D_VIS = I_VIS - B_VIS and D_NIR = I_NIR - B_NIR, respectively.
Preferably, in step (3), the Laplacian pyramids of the visible light image and the near-infrared image are constructed as follows:

First, the Gaussian pyramid of each image is built by repeatedly applying Gaussian filtering and downsampling:

I_k = GP(I_{k-1}) = ↓(ω * I_{k-1}), k = 1, …, N,

where GP denotes the pyramid sampling function, N is the number of pyramid layers, k indexes the k-th layer, ω is the Gaussian filter kernel, ↓ denotes image downsampling, and I_k is the k-th layer of the Gaussian pyramid.

Applying the Gaussian pyramid sampling operator to the base layers of the visible light and near-infrared images yields the corresponding Gaussian pyramids, denoted G_VIS,k and G_NIR,k.

From the Gaussian pyramid, the Laplacian pyramid is then constructed as:

L_k = G_k - ↑G_{k+1}, k = 0, …, N-1; L_N = G_N,

where G_k is the k-th layer of the Gaussian pyramid, ↑ denotes image upsampling, and the N-th (top) layer of the Laplacian pyramid equals the N-th layer of the Gaussian pyramid. The Laplacian pyramids of the visible light and near-infrared images constructed in this way are denoted L_VIS,k and L_NIR,k, respectively.
Preferably, in step (4), starting from the top layer of the pyramids, the Laplacian pyramid L_VIS,k of the three-channel visible light image and the Laplacian pyramid L_NIR,k of the single-channel near-infrared image are stacked along the channel dimension into a four-channel L_VIS-NIR,k, i.e., the joint base layer. With L_VIS,k as the constraint image, global optimization is performed on the joint base layer L_VIS-NIR,k, and the 4-dimensional information is mapped to a 3-dimensional RGB image, giving the fused base layer image B_k.

From the resulting fused image B_k, reconstruction proceeds to the next finer pyramid layer: the (k-1)-th layer of the visible light Laplacian pyramid becomes L_VIS,k-1 = L_VIS,k-1 + ↑B_k, where ↑ denotes image upsampling; correspondingly, the (k-1)-th layer of the near-infrared Laplacian pyramid becomes L_NIR,k-1 = L_NIR,k-1 + ↑L_NIR,k.

This process is repeated until the pyramid is reconstructed back to the bottom layer; the result B_0 obtained at k = 0 is the final fused base layer of the image, denoted B.
Preferably, in step (5), the local energies and weights of the visible light and near-infrared detail layers are computed and the layers are fused by weighting, as follows:

The local pixel energy at point (m, n) of a detail layer is defined as

E(m, n) = Σ_{(i,j)∈ω} f(i, j)²,

where f denotes the image and ω denotes a k×k neighborhood centered at (m, n). Substituting D_VIS and D_NIR for f gives the local energy images of the visible light and near-infrared detail layers, E_VIS and E_NIR, respectively.

From E_VIS and E_NIR computed above, the correlation (matching degree) between the two pixel energies is computed as

M(m, n) = 2 Σ_{(i,j)∈ω} D_VIS(i, j) D_NIR(i, j) / (E_VIS(m, n) + E_NIR(m, n)).

Based on the local energies and the energy correlation, fusion weights for the visible light and near-infrared images are set according to the requirements of the actual scene; the two detail layers are then weighted by these fusion weights to give the final fused detail layer D.
Finally, the fused base layer B obtained in step (4) and the fused detail layer D obtained in step (5) are added to obtain the final result.
The invention discloses a color image enhancement method that fuses a visible light image and a near-infrared image based on multi-scale decomposition. Compared with the prior art, the invention has the following advantages: the multi-scale decomposition based on edge-preserving filtering effectively suppresses noise and clutter in the fused image while accounting for both detail information and illumination information during fusion; the improvement of visible-light color quality by the near-infrared image is investigated and demonstrated; and the variation of the illumination information of near-infrared and visible light images across scales is revealed.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention;
FIG. 2 shows the results of the image decomposition and fusion process according to an embodiment of the present invention: (a) original visible light image; (b) original near-infrared image; (c) visible light base layer image; (d) near-infrared base layer image; (e) visible light detail layer image; (f) near-infrared detail layer image; (g) fused base layer image; (h) fused detail layer image; (i) final fused image.
Detailed Description
To aid understanding of the technical solutions, the processes of the present invention are further described below with reference to the accompanying drawings and examples. It should be understood that the embodiments described here are merely illustrative and explanatory and do not limit the invention.
The embodiment provides a color image enhancement method based on fusion of a visible light image and a near-infrared image, as shown in FIG. 1, comprising the following steps:
Step (1): filter the visible light image and the near-infrared image separately with a weighted least squares (WLS) filter to obtain their filtered images, i.e., the base layers of the images. FIG. 2(a) is the original visible light image, FIG. 2(b) the original near-infrared image, FIG. 2(c) the visible light base layer image, and FIG. 2(d) the near-infrared base layer image.
Step (2): subtract each base layer obtained in step (1) from its original image to obtain the detail layers. FIG. 2(e) is the visible light detail layer image and FIG. 2(f) the near-infrared detail layer image.
Step (3): to make full use of illumination information, perform Laplacian pyramid decomposition on the base layers of the visible light and near-infrared images.
Step (4): starting from the top of the pyramids, fuse the Laplacian pyramids of the visible light and near-infrared images layer by layer, reconstructing down to the next layer until the bottom layer is reached, to obtain the fused base layer.
Step (5): compute the local energies and weights of the visible light and near-infrared detail layers and fuse them by weighting to obtain the fused detail layer.
Step (6): add the fused base layer and the fused detail layer to obtain the final fused image.
Further, the specific process of step (1) is as follows:

The visible light image I_VIS and the near-infrared image I_NIR are each filtered by weighted least squares:

I = W_λ(g) = (1 + λL_g)^(-1) g,

where g is the image to be filtered, λ is the smoothing factor, and L_g = D_x^T A_x D_x + D_y^T A_y D_y is the optimization matrix, in which D_x, D_y are the forward difference operators in the two directions and A_x, A_y are the filtering weight matrices. Weighted least squares filtering of the visible light and near-infrared images yields the corresponding base layer images B_VIS and B_NIR, shown in FIG. 2(c, d). The factor λ controls the degree of smoothing and can be tuned to different situations for best results.
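The WLS filtering step can be sketched as a sparse linear solve of (1 + λL_g)u = g. This is a minimal illustration, not the patent's exact implementation: the gradient-based smoothness weights follow the common WLS formulation, and the parameter values (lam, alpha, eps) are illustrative choices.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def wls_filter(g, lam=1.0, alpha=1.2, eps=1e-4):
    """Edge-preserving WLS smoothing: solve (1 + lam*L_g) u = g, where
    L_g = Dx'AxDx + Dy'AyDy and the weights A shrink across strong edges.
    Parameter values are illustrative, not the patent's."""
    h, w = g.shape
    n = h * w
    l = np.log(g + eps)                      # log-luminance guides the weights
    idx = np.arange(n).reshape(h, w)
    # smoothness weights between vertical / horizontal neighbours
    wv = 1.0 / (np.abs(np.diff(l, axis=0)) ** alpha + eps)
    wh = 1.0 / (np.abs(np.diff(l, axis=1)) ** alpha + eps)
    # assemble the symmetric off-diagonal part of L_g (negative weights)
    r = np.concatenate([idx[:-1, :].ravel(), idx[1:, :].ravel(),
                        idx[:, :-1].ravel(), idx[:, 1:].ravel()])
    c = np.concatenate([idx[1:, :].ravel(), idx[:-1, :].ravel(),
                        idx[:, 1:].ravel(), idx[:, :-1].ravel()])
    v = -np.concatenate([wv.ravel(), wv.ravel(), wh.ravel(), wh.ravel()])
    Lg = sparse.coo_matrix((v, (r, c)), shape=(n, n)).tocsc()
    Lg = Lg - sparse.diags(np.asarray(Lg.sum(axis=1)).ravel())  # diag = +sum of weights
    u = spsolve(sparse.identity(n, format="csc") + lam * Lg, g.ravel())
    return u.reshape(h, w)
```

Applied to I_VIS (per channel) and I_NIR, the outputs play the role of B_VIS and B_NIR; since L_g is a graph Laplacian, constant regions pass through unchanged while texture is smoothed away.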
Further, the specific process of step (2) is as follows:

Each base layer obtained in step (1) is subtracted from its original image to obtain the detail layer:

D = I - B,

where I is the original image and B is the base layer from step (1). The detail layers of the visible light and near-infrared images are D_VIS = I_VIS - B_VIS and D_NIR = I_NIR - B_NIR, shown in FIG. 2(e, f).
Further, the specific process of step (3) is as follows:

Filtering removes the high-frequency information from the base layer obtained in step (1), so at a large scale the base layer can be understood as containing mainly low-frequency information, i.e., the illumination and color information of the image. To make full use of the illumination information at different scales, Gaussian and Laplacian pyramids are constructed for the base layers. The Gaussian pyramid is built by repeated Gaussian filtering and downsampling:

I_k = GP(I_{k-1}) = ↓(ω * I_{k-1}), k = 1, …, N,

where GP denotes the pyramid sampling function, N is the number of pyramid layers, k indexes the k-th layer, ω is the Gaussian filter kernel, ↓ denotes image downsampling, and I_k is the k-th layer of the Gaussian pyramid.

From the Gaussian pyramid, the Laplacian pyramid is constructed as

L_k = G_k - ↑G_{k+1}, k = 0, …, N-1; L_N = G_N,

where G_k is the k-th layer of the Gaussian pyramid, ↑ denotes image upsampling, and the N-th layer of the Laplacian pyramid equals the N-th layer of the Gaussian pyramid. The Laplacian pyramids of the visible light and near-infrared images are denoted L_VIS,k and L_NIR,k, respectively.
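The pyramid construction above can be sketched as follows. The choice of Gaussian sigma and of bilinear interpolation for ↑ are illustrative assumptions; any consistent blur/resample pair works, and the reconstruction G_k = L_k + ↑G_{k+1} is exact by construction.

```python
import numpy as np
from scipy import ndimage

def gaussian_pyramid(img, levels):
    """Step (3): G_0 is the base layer; each next layer is Gaussian-filtered
    and downsampled (the GP sampling function of the text)."""
    pyr = [np.asarray(img, dtype=float)]
    for _ in range(levels - 1):
        blurred = ndimage.gaussian_filter(pyr[-1], sigma=1.0)  # ω * I_{k-1}
        pyr.append(blurred[::2, ::2])                          # ↓ downsampling
    return pyr

def upsample(img, shape):
    """↑: bilinear zoom back to `shape` (one illustrative choice of interpolator)."""
    factors = (shape[0] / img.shape[0], shape[1] / img.shape[1])
    return ndimage.zoom(img, factors, order=1)

def laplacian_pyramid(img, levels):
    """L_k = G_k - ↑G_{k+1} for k < N, and L_N = G_N (the top layer keeps
    the coarse illumination information)."""
    G = gaussian_pyramid(img, levels)
    L = [G[k] - upsample(G[k + 1], G[k].shape) for k in range(levels - 1)]
    L.append(G[-1])
    return L
```

Running this on B_VIS and B_NIR gives L_VIS,k and L_NIR,k; summing each band back with the same upsample operator recovers the base layer exactly.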
Further, the specific process of step (4) is as follows:

Starting from the top layer of the pyramids, the Laplacian pyramid L_VIS,k of the three-channel visible light image and the Laplacian pyramid L_NIR,k of the single-channel near-infrared image are stacked along the channel dimension into a four-channel L_VIS-NIR,k, i.e., the joint base layer. With L_VIS,k as the constraint image, global optimization is performed on the joint base layer L_VIS-NIR,k, mapping the 4-dimensional information to a 3-dimensional RGB image to obtain the fused base layer image B_k.

Specifically, for every pixel (m, n), the four channel values of L_VIS-NIR,k form a 4-dimensional vector; a transformation matrix is computed by constrained target optimization, and the 4-dimensional features are remapped to 3-dimensional RGB information. With the target image B_k and the constraint image L_VIS,k, the transformation relation is:

B_k(m, n) = L_VIS-NIR,k(m, n) A;
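The linear remapping B_k(m, n) = L_VIS-NIR,k(m, n) A can be sketched with a plain least-squares fit of the 4×3 matrix A. This is only a stand-in: the patent obtains A by a global optimization constrained by L_VIS,k, whereas a pure least-squares fit against the constraint alone simply reproduces it.

```python
import numpy as np

def fuse_level_4d_to_rgb(L_vis, L_nir):
    """Sketch of B_k(m,n) = L_{VIS-NIR,k}(m,n) @ A. The patent obtains the
    4x3 matrix A by a global optimisation constrained by L_VIS,k; here A is
    fitted by plain least squares against the constraint image, which is a
    simplified stand-in for that optimisation."""
    h, w, _ = L_vis.shape                          # (h, w, 3) visible Laplacian layer
    X = np.concatenate([L_vis, L_nir[..., None]], axis=2).reshape(-1, 4)  # joint 4-channel layer
    Y = L_vis.reshape(-1, 3)                       # constraint image
    A, *_ = np.linalg.lstsq(X, Y, rcond=None)      # 4x3 transformation matrix
    return (X @ A).reshape(h, w, 3)
```

With the constraint alone the fit reproduces L_VIS,k exactly; the patent's optimization additionally pulls illumination content from the NIR channel into the result, which this simplified sketch does not model.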
from the resulting fused image BkAnd reconstructing the image to the upper layer of the pyramid, wherein the k-1 layer of the visible light image pyramid is LVIS,k-1=LVIS,k-1+[Bk]↑Where ↓ represents image upsampling. The corresponding near infrared pyramid k-1 layer is LNIR,k-1=LNIR,k-1+[LNIR,k]↑。
The above process is repeated until the pyramid is reconstructed back to the bottom layer, i.e. when k is 0, the result B is obtained0Namely the final fusion basic layer of the image, and is marked as B. FIG. 2(g) is a fused base layer image;
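The top-down fuse-and-reconstruct loop of step (4) can be sketched as below. The per-level fusion is passed in as a callable `fuse_level`, a stand-in for the patent's 4D-to-3D constrained optimization; the helper pyramid here omits the Gaussian blur for brevity, which does not affect the reconstruction identity.

```python
import numpy as np
from scipy import ndimage

def upsample(img, shape):
    """↑: bilinear zoom to `shape` (illustrative interpolator)."""
    return ndimage.zoom(img, (shape[0] / img.shape[0], shape[1] / img.shape[1]), order=1)

def lap_pyramid(img, levels):
    """Laplacian pyramid as in step (3); Gaussian smoothing omitted for brevity."""
    G = [np.asarray(img, dtype=float)]
    for _ in range(levels - 1):
        G.append(G[-1][::2, ::2])
    return [G[k] - upsample(G[k + 1], G[k].shape) for k in range(levels - 1)] + [G[-1]]

def fuse_base_layers(L_vis, L_nir, fuse_level):
    """Step (4) reconstruction loop: fuse at the top, then at each finer level
    add the upsampled result to the next Laplacian band and fuse again.
    `fuse_level` stands in for the patent's 4D->3D constrained optimisation."""
    Bn = L_nir[-1]
    B = fuse_level(L_vis[-1], Bn)
    for k in range(len(L_vis) - 2, -1, -1):
        Bv = L_vis[k] + upsample(B, L_vis[k].shape)   # carry the fused result down the VIS pyramid
        Bn = L_nir[k] + upsample(Bn, L_nir[k].shape)  # NIR pyramid reconstructed in parallel
        B = fuse_level(Bv, Bn)
    return B
```

A useful sanity check on the loop: if `fuse_level` always returns its first (or second) argument, the output equals the visible (or near-infrared) base layer exactly, confirming the reconstruction is lossless.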
further, in the step (5), the local energy and the weight of the visible light image and the near-infrared image detail layer are calculated, and weighted fusion is performed, and the specific steps are as follows:
defining the pixel local energy at the (m, n) point in the detail layer as:
E(m,n)=∑(m,n)∈wf2(m, n), f denotes the image, ω denotes the neighborhood range of k x k. Will DVIS,DNIRRespectively substituting the formula f into the formula f, the local energy images corresponding to the visible light image detail layer and the infrared image detail layer are respectively EVIS,ENIR。
Calculated according to the above formulaVIS,ENIRAnd further calculating the correlation between the two pixel energies, wherein the specific calculation formula is as follows:
in the present embodiment, a correlation threshold parameter T is set, and when M (M, n) < T:
when the correlation between the two is low, the difference between the two details at the pixel point is large, and the detail value is large;
when M (M, n) > T:
since the two are close in detail at the pixel point when the correlation between the two is high, the two are weighted and valued, W in the above equationmaxAnd WminRepresents a weighted weight, where WmaxWeight, W, representing the larger energy of the regionminThe weight of the smaller region energy is represented by the following specific calculation formula:
the two detail layers are weighted according to the fusion weight to obtain the final fused detail layer D, as shown in fig. 2 (h).
Further, according to step (6), the fused base layer B obtained in step (4) and the fused detail layer D obtained in step (5) are added to obtain the final fused image, as shown in FIG. 2(i).
Claims (7)
1. A color image enhancement method based on image fusion, characterized by comprising the following steps:
Step (1): filtering the visible light image and the near-infrared image separately with a weighted least squares (WLS) filter to obtain their filtered images, i.e., the base layers of the images;
Step (2): subtracting each base layer obtained in step (1) from its original image to obtain the detail layers of the images;
Step (3): performing Laplacian pyramid decomposition on the base layers of the visible light and near-infrared images, in order to make full use of illumination information;
Step (4): starting from the top of the pyramids, fusing the Laplacian pyramids of the visible light and near-infrared images layer by layer and reconstructing down to the next layer until the bottom layer is reached, to obtain the fused base layer;
Step (5): computing the local energies and weights of the visible light and near-infrared detail layers and fusing them by weighting to obtain the fused detail layer;
Step (6): adding the fused base layer and the fused detail layer to obtain the final fused image.
2. The color image enhancement method based on image fusion according to claim 1, characterized in that in step (1) the visible light image I_VIS and the near-infrared image I_NIR are filtered by weighted least squares as:
I = W_λ(g) = (1 + λL_g)^(-1) g,
where g is the image to be filtered, λ is the smoothing factor, and L_g = D_x^T A_x D_x + D_y^T A_y D_y is the optimization matrix, in which D_x, D_y are the forward difference operators in the two directions and A_x, A_y are the filtering weight matrices; weighted least squares filtering of the visible light and near-infrared images yields the corresponding base layer images, denoted B_VIS and B_NIR, respectively.
3. The method according to claim 2, characterized in that in step (2) each base layer obtained in step (1) is subtracted from its original image to obtain the detail layer:
D = I - B,
where I is the original image and B is the base layer from step (1); the detail layers of the visible light and near-infrared images are D_VIS = I_VIS - B_VIS and D_NIR = I_NIR - B_NIR, respectively.
4. The color image enhancement method based on image fusion according to claim 3, characterized in that in step (3) the Laplacian pyramids of the visible light image and the near-infrared image are constructed as follows:
first, the Gaussian pyramid of each image is built by applying Gaussian filtering and downsampling:
I_k = GP(I_{k-1}) = ↓(ω * I_{k-1}), k = 1, …, N,
where GP denotes the pyramid sampling function, N is the number of pyramid layers, k indexes the k-th layer, ω is the Gaussian filter kernel, ↓ denotes image downsampling, and I_k is the k-th layer of the Gaussian pyramid;
the Gaussian pyramid sampling operator is applied to the base layers of the visible light and near-infrared images to build the corresponding Gaussian pyramids, denoted G_VIS,k and G_NIR,k;
from the Gaussian pyramid, the Laplacian pyramid is constructed as
L_k = G_k - ↑G_{k+1}, k = 0, …, N-1; L_N = G_N,
where G_k is the k-th layer of the Gaussian pyramid, ↑ denotes image upsampling, and the N-th layer of the Laplacian pyramid equals the N-th layer of the Gaussian pyramid; the Laplacian pyramids of the visible light and near-infrared images constructed in this way are denoted L_VIS,k and L_NIR,k, respectively.
5. The color image enhancement method based on image fusion according to claim 4, characterized in that in step (4), starting from the top layer of the pyramids, the Laplacian pyramid L_VIS,k of the three-channel visible light image and the Laplacian pyramid L_NIR,k of the single-channel near-infrared image are stacked along the channel dimension into a four-channel L_VIS-NIR,k, i.e., the joint base layer; with L_VIS,k as the constraint image, global optimization is performed on the joint base layer L_VIS-NIR,k and the 4-dimensional information is mapped to a 3-dimensional RGB image, giving the fused base layer image B_k;
from the resulting fused image B_k, reconstruction proceeds to the next finer pyramid layer: the (k-1)-th layer of the visible light Laplacian pyramid becomes L_VIS,k-1 = L_VIS,k-1 + ↑B_k, where ↑ denotes image upsampling, and the (k-1)-th layer of the near-infrared Laplacian pyramid becomes L_NIR,k-1 = L_NIR,k-1 + ↑L_NIR,k;
this process is repeated until the pyramid is reconstructed back to the bottom layer; the result B_0 obtained at k = 0 is the final fused base layer of the image, denoted B.
6. The color image enhancement method based on image fusion according to claim 5, characterized in that in step (5) the local energies and weights of the visible light and near-infrared detail layers are computed and the layers are fused by weighting, as follows:
the local pixel energy at point (m, n) of a detail layer is defined as
E(m, n) = Σ_{(i,j)∈ω} f(i, j)²,
where f denotes the image and ω denotes a k×k neighborhood; substituting D_VIS and D_NIR for f gives the local energy images E_VIS and E_NIR of the visible light and near-infrared detail layers, respectively;
from E_VIS and E_NIR computed above, the correlation between the two pixel energies is computed as
M(m, n) = 2 Σ_{(i,j)∈ω} D_VIS(i, j) D_NIR(i, j) / (E_VIS(m, n) + E_NIR(m, n));
based on the local energies and the energy correlation, fusion weights for the visible light and near-infrared images are set according to the requirements of the actual scene;
the two detail layers are weighted by the fusion weights to give the final fused detail layer D.
7. The method for enhancing color image based on image fusion according to claim 6, wherein the fusion weight of the visible light image and the near infrared image is set according to the actual scene requirement, and the method comprises the following steps:
setting a correlation threshold parameter $T$; when $M(m,n) < T$:
since the correlation between the two is low, their details differ strongly at the pixel point, so the detail with the larger value is selected;
when $M(m,n) > T$:
since the correlation between the two is high, their details at the pixel point are close, so the two are combined by weighted averaging. In the above formula, $W_{max}$ and $W_{min}$ denote the weights, where $W_{max}$ is the weight assigned to the detail with the larger regional energy and $W_{min}$ the weight assigned to the smaller; the specific calculation formula is as follows:
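The rule of claims 6-7 can be sketched as follows. The patent's own correlation measure $M$ and weight formulas are not reproduced in this excerpt, so a normalized energy-match measure and the classic linear weight ramp are assumed as stand-ins; only the names `T`, `W_max`, and `W_min` follow the claim:

```python
import numpy as np

def fuse_details(d_vis, d_nir, e_vis, e_nir, T=0.75):
    """Fuse two detail layers given their local energy maps.

    M < T : low correlation -> keep the detail with the larger local energy.
    M >= T: high correlation -> weighted average with W_max on the
            stronger-energy detail and W_min on the weaker one.
    """
    eps = 1e-12
    # Assumed match measure M in [0, 1]; M = 1 when the local energies
    # are equal (the patent's exact formula is not shown in this excerpt).
    m = 2.0 * np.sqrt(e_vis * e_nir) / (e_vis + e_nir + eps)
    vis_stronger = e_vis >= e_nir
    d_max = np.where(vis_stronger, d_vis, d_nir)  # detail with larger energy
    d_min = np.where(vis_stronger, d_nir, d_vis)  # detail with smaller energy
    # Assumed linear ramp: W_min grows from 0 (at M = T) to 0.5 (at M = 1).
    w_min = 0.5 - 0.5 * (1.0 - m) / (1.0 - T)
    w_max = 1.0 - w_min
    fused_avg = w_max * d_max + w_min * d_min
    return np.where(m < T, d_max, fused_avg)
```

With equal energies the measure saturates at 1 and the rule degenerates to a plain average, which matches the intent of the high-correlation branch.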
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010966547.2A CN112184604B (en) | 2020-09-15 | 2020-09-15 | Color image enhancement method based on image fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112184604A true CN112184604A (en) | 2021-01-05 |
CN112184604B CN112184604B (en) | 2024-02-20 |
Family
ID=73921151
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010966547.2A Active CN112184604B (en) | 2020-09-15 | 2020-09-15 | Color image enhancement method based on image fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112184604B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112785543A (en) * | 2021-03-01 | 2021-05-11 | 天地伟业技术有限公司 | Traffic camera image enhancement method with two-channel fusion |
CN113129243A (en) * | 2021-03-10 | 2021-07-16 | 同济大学 | Blood vessel image enhancement method and system based on infrared and visible light image fusion |
CN113724164A (en) * | 2021-08-31 | 2021-11-30 | 南京邮电大学 | Visible light image noise removing method based on fusion reconstruction guidance filtering |
CN114022353A (en) * | 2022-01-07 | 2022-02-08 | 成都国星宇航科技有限公司 | Method and device for fusing space-time image texture and image color |
CN115578621A (en) * | 2022-11-01 | 2023-01-06 | 中国矿业大学 | Image identification method based on multi-source data fusion |
CN117115442A (en) * | 2023-08-17 | 2023-11-24 | 浙江航天润博测控技术有限公司 | Semantic segmentation method based on visible light-infrared photoelectric reconnaissance image fusion |
CN117723564A (en) * | 2024-02-18 | 2024-03-19 | 青岛华康塑料包装有限公司 | Packaging bag printing quality detection method and system based on image transmission |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011008239A1 (en) * | 2009-06-29 | 2011-01-20 | Thomson Licensing | Contrast enhancement |
CN108040243A (en) * | 2017-12-04 | 2018-05-15 | 南京航空航天大学 | Multispectral 3-D visual endoscope device and image interfusion method |
CN111429391A (en) * | 2020-03-23 | 2020-07-17 | 西安科技大学 | Infrared and visible light image fusion method, fusion system and application |
Non-Patent Citations (1)
Title |
---|
CHEN HAO; WANG YANJIE: "Research on image fusion algorithm based on Laplacian pyramid transform", Laser & Infrared (激光与红外), no. 04, 20 April 2009 (2009-04-20) * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112184604A (en) | Color image enhancement method based on image fusion | |
CN107578418B (en) | Indoor scene contour detection method fusing color and depth information | |
CN105809640B (en) | Low illumination level video image enhancement based on Multi-sensor Fusion | |
CN110956094A (en) | RGB-D multi-mode fusion personnel detection method based on asymmetric double-current network | |
CN111209952A (en) | Underwater target detection method based on improved SSD and transfer learning | |
CN112733950A (en) | Power equipment fault diagnosis method based on combination of image fusion and target detection | |
CN111062892A (en) | Single image rain removing method based on composite residual error network and deep supervision | |
CN112419212B (en) | Infrared and visible light image fusion method based on side window guide filtering | |
CN102982520B (en) | Robustness face super-resolution processing method based on contour inspection | |
Tang et al. | Single image dehazing via lightweight multi-scale networks | |
CN106846289A (en) | A kind of infrared light intensity and polarization image fusion method based on conspicuousness migration with details classification | |
Wang et al. | MAGAN: Unsupervised low-light image enhancement guided by mixed-attention | |
CN113160053B (en) | Pose information-based underwater video image restoration and splicing method | |
CN115423734B (en) | Infrared and visible light image fusion method based on multi-scale attention mechanism | |
CN103020933A (en) | Multi-source image fusion method based on bionic visual mechanism | |
CN112669249A (en) | Infrared and visible light image fusion method combining improved NSCT (non-subsampled Contourlet transform) transformation and deep learning | |
CN114782298B (en) | Infrared and visible light image fusion method with regional attention | |
CN114973028B (en) | Aerial video image real-time change detection method and system | |
CN113298147B (en) | Image fusion method and device based on regional energy and intuitionistic fuzzy set | |
CN106886747A (en) | Ship Detection under a kind of complex background based on extension wavelet transformation | |
CN114612359A (en) | Visible light and infrared image fusion method based on feature extraction | |
CN113379861B (en) | Color low-light-level image reconstruction method based on color recovery block | |
Jia et al. | Research on the decomposition and fusion method for the infrared and visible images based on the guided image filtering and Gaussian filter | |
CN114331937A (en) | Multi-source image fusion method based on feedback iterative adjustment under low illumination condition | |
CN112330639A (en) | Significance detection method for color-thermal infrared image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||