CN102722864B - Image enhancement method - Google Patents
Abstract
The invention provides an image enhancement method comprising the steps of: collecting a visible-light image and an infrared image; applying a reversible transform to the luminance dimension of each image to obtain contrast information and texture information for both; computing a mask from the saturation and luminance of the visible-light image; using the mask to fuse the visible-light and infrared images and compute the contrast information of the enhanced visible-light image; using the mask to fuse the visible-light and infrared images and compute the texture information of the enhanced visible-light image; applying the inverse of the reversible transform to obtain the luminance of the enhanced visible-light image; and mixing that luminance with the saturation and hue of the original visible-light image to obtain the enhanced visible-light image. The method of the embodiments of the invention automatically recovers high-dynamic-range scene information and achieves high-quality visible-light image enhancement in terms of both contrast and texture.
Description
Technical field
The present invention relates to the field of computer vision, and in particular to an image enhancement method.
Background technology
Because the dynamic range a visible-light camera can capture is limited, photographing a scene with strong light-dark contrast may yield an image in which some regions are over-exposed and others are under-exposed. To obtain a high-dynamic-range image, the usual approach is to capture several images at different exposures, perform tone adjustment, and map the high-dynamic-range luminance to a corresponding low-dynamic-range image; because it requires images taken at different exposure times, this approach is generally only applicable to static scenes. Since the raw format preserves more of the scene's dynamic range than the conventional JPEG format, another common method is to adjust raw-format images by hand, but this requires substantial manual work, the dynamic range captured in raw format is still limited, and the full high-dynamic information of the real scene cannot be recovered.
Summary of the invention
The present invention is intended to solve at least one of the technical problems described above.
To this end, an object of the invention is to propose an image enhancement method that uses an infrared image to recover the real scene in terms of both contrast and texture.
An image enhancement method according to an embodiment of the present invention comprises the steps of:
A. collecting a visible-light image and an infrared image, wherein the visible-light image is aligned with the optical center of the infrared image;
B. applying a reversible transform to the luminance dimension of the visible-light image to obtain visible-light contrast information V_L and visible-light texture information V_D, and applying the reversible transform to the luminance dimension of the infrared image to obtain infrared contrast information N_L and infrared texture information N_D;
C. computing a mask W from the saturation S and luminance V of the visible-light image, the mask W being used to fuse the visible-light image and the infrared image;
D. computing the contrast information V'_L of the enhanced visible-light image from the visible-light contrast information V_L, the infrared contrast information N_L, and the mask W;
E. computing the texture information V'_D of the enhanced visible-light image from the visible-light texture information V_D, the infrared texture information N_D, and the mask W;
F. applying the inverse of the reversible transform to the contrast information V'_L and the texture information V'_D to obtain the luminance V' of the enhanced visible-light image; and
G. mixing the luminance V' of the enhanced visible-light image with the saturation S and hue H of the visible-light image to obtain the enhanced visible-light image.
By using the infrared image to recover the high dynamic range of the real scene in terms of both contrast and texture, the method of the present invention enhances the visible-light image and automatically recovers high-dynamic-scene information without manual intervention. An advantage of the invention is that it combines computational photography with infrared imaging to achieve high-quality visible-light image enhancement in both contrast and texture.
In one embodiment of the invention, the reversible transform is a wavelet transform, and the inverse of the reversible transform is an inverse wavelet transform.
In one embodiment of the invention, the mask value W is smaller in regions of the visible-light image where the saturation S is too low or the luminance V is too high or too low.
In one embodiment of the invention, step C comprises: C1. computing an initial mask W~ from the saturation S and luminance V of the visible-light image; and C2. optimizing the initial mask W~ to obtain the mask W.
In one embodiment of the invention, the formula for computing the initial mask W~ in step C1 is W~ = W_S · W_V, where W_S = e^(-α|S-1|), W_V = e^(-β|V-0.5|), and α and β are positive coefficients.
In one embodiment of the invention, the mask W in step C2 is computed by minimizing
(W - W~)^T (W - W~) + λ W^T L W    (1)
where L is a Laplacian matrix and λ is a regularization coefficient, the (i, j)-th element of the matrix L being defined as
L(i, j) = Σ_{k | (i, j) ∈ w_k} [ δ_ij - (1/|w_k|)(1 + (V_i - μ_k)(V_j - μ_k)/(Σ_k + ε/|w_k|)) ]    (2)
where V_i and V_j are the luminance values of the visible-light image at pixels i and j, δ_ij is the impulse function (1 when i = j, 0 otherwise), μ_k is the mean of the luminance values in window w_k, Σ_k is the variance of the luminance values in window w_k, ε is a regularization parameter, and |w_k| is the number of elements in window w_k. Differentiating equation (1) with respect to W yields the linear equation
(U + λL) W = W~    (3)
and the final mask W is obtained by solving linear equation (3).
In one embodiment of the invention, the formula for computing the contrast information V'_L of the enhanced visible-light image in step D is V'_L = W · V_L + (1 - W) · V_L', where V_L' is new luminance-map information obtained by matching the gradient amplitudes of the visible-light image and the infrared image.
In one embodiment of the invention, the new luminance-map information V_L' is computed as follows: let V_G denote the gradient magnitude of the visible-light image and N_G the gradient magnitude of the infrared image; compute the probability histograms of V_G and N_G, fit each histogram with a Laplacian curve, obtain the cumulative probability function of each curve, and match the two cumulative functions to obtain the matched result V'_G; derive the new gradient of the visible-light luminance dimension from V'_G; and solve a Poisson equation with the obtained gradient to recover the new luminance-map information V_L'.
In one embodiment of the invention, the formula for computing the texture information V'_D of the enhanced visible-light image in step E is V'_D = W · V_D + (1 - W) · N_D.
Additional aspects and advantages of the present invention will be set forth in part in the description that follows; in part they will become apparent from the description, or may be learned by practice of the invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a flowchart of an image enhancement method according to an embodiment of the present invention.
Embodiment
Embodiments of the present invention are described in detail below; examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals denote the same or similar elements, or elements having the same or similar functions, throughout. The embodiments described below with reference to the drawings are exemplary, are intended only to explain the present invention, and are not to be construed as limiting it. On the contrary, the embodiments of the invention cover all changes, modifications, and equivalents falling within the spirit and scope of the appended claims.
The image enhancement method of the present invention uses the infrared image to recover the high dynamic range of the real scene in terms of both contrast and texture, thereby enhancing the original visible-light image; the method can automatically recover high-dynamic-scene information without manual intervention, and combines computational photography with infrared imaging to achieve high-quality visible-light image enhancement in both contrast and texture.
Fig. 1 is a flowchart of an image enhancement method according to an embodiment of the present invention.
As shown in Fig. 1, the image enhancement method comprises the following steps.
Step S101: obtain aligned visible-light and infrared images.
Specifically, hardware is used to align the optical centers of the visible-light camera and the infrared camera, for example by using a beam splitter so that the two cameras photograph an identical scene; alternatively, pre-aligned visible-light and infrared images of the same scene captured by existing cameras may be used.
Step S102: extract the contrast and texture information of the visible-light and infrared images.
Specifically, a reversible transform is applied to the luminance dimension of the visible-light image to obtain the low-frequency visible-light contrast information V_L and the high-frequency visible-light texture information V_D, and to the luminance dimension of the infrared image to obtain the low-frequency infrared contrast information N_L and the high-frequency infrared texture information N_D. In a preferred embodiment of the invention, the reversible transform is a wavelet transform.
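As an illustration of the reversible transform in step S102, the following sketch splits a luminance map into a low-frequency (contrast) band and a high-frequency (texture) band with a one-level Haar wavelet. This is a minimal stand-in: the patent does not fix a particular wavelet, and all function names here are illustrative.

```python
import numpy as np

def haar_split(x):
    """One-level 1-D Haar analysis along the last axis: the low band
    holds local averages (contrast information), the high band holds
    local differences (texture information)."""
    a, b = x[..., 0::2], x[..., 1::2]
    return (a + b) / np.sqrt(2.0), (a - b) / np.sqrt(2.0)

def haar_merge(lo, hi):
    """Inverse of haar_split: the transform is orthonormal, so
    reconstruction is exact up to floating-point error."""
    a, b = (lo + hi) / np.sqrt(2.0), (lo - hi) / np.sqrt(2.0)
    out = np.empty(lo.shape[:-1] + (2 * lo.shape[-1],))
    out[..., 0::2], out[..., 1::2] = a, b
    return out

V = np.random.rand(8, 64)          # toy visible-light luminance map
V_L, V_D = haar_split(V)           # contrast band / texture band
print(np.allclose(haar_merge(V_L, V_D), V))  # True: reversible
```

Because the split is exactly invertible, the enhanced bands computed in steps S104 and S105 can later be recombined in step S106 without loss from the transform itself.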
Step S103: compute the mask W from the saturation S and luminance V of the visible-light image; the mask W is used to fuse the visible-light image and the infrared image.
Specifically, the saturation S and luminance V of the image take values from 0 to 1. In general, regions of the visible-light image where the saturation S is too low (S close to 0) and where the luminance V is too high (V close to 1) or too low (V close to 0) lack texture information and have insufficient contrast. The initial mask W~ assigns smaller weights to these regions; its value can be obtained as W~ = W_S · W_V, where W_S = e^(-α|S-1|), W_V = e^(-β|V-0.5|), and α and β are positive coefficients. Note that the function used to compute the initial mask W~ is not unique: any other function that satisfies the condition "assign smaller initial-mask weights where the saturation S of the visible-light image is too low and where the luminance V is too high or too low" may also be used.
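The initial mask of step C1 can be sketched as follows. The values of α and β are illustrative choices: the patent only requires them to be positive.

```python
import numpy as np

def initial_mask(S, V, alpha=4.0, beta=4.0):
    """Initial mask of step C1: W~ = W_S * W_V with
    W_S = exp(-alpha*|S-1|) and W_V = exp(-beta*|V-0.5|).
    Small W~ marks regions where the visible image is unreliable."""
    W_s = np.exp(-alpha * np.abs(S - 1.0))
    W_v = np.exp(-beta * np.abs(V - 0.5))
    return W_s * W_v

# A well-exposed, saturated pixel keeps weight ~1; a desaturated or
# washed-out pixel is pushed toward 0 so infrared data takes over.
S = np.array([1.0, 0.0, 1.0])
V = np.array([0.5, 0.5, 1.0])
print(initial_mask(S, V).round(3))  # -> [1.    0.018 0.135]
```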
After the initial mask W~ is obtained, it is further optimized into a high-quality mask W by minimizing
(W - W~)^T (W - W~) + λ W^T L W    (1)
where L is a Laplacian matrix and λ is a regularization coefficient. The (i, j)-th element of the matrix L is defined as
L(i, j) = Σ_{k | (i, j) ∈ w_k} [ δ_ij - (1/|w_k|)(1 + (V_i - μ_k)(V_j - μ_k)/(Σ_k + ε/|w_k|)) ]    (2)
where V_i and V_j are the luminance values of the image at pixels i and j, δ_ij is the impulse function (1 when i = j, 0 otherwise), μ_k and Σ_k are respectively the mean and variance of the luminance values in window w_k, ε is a regularization parameter, and |w_k| is the number of elements in window w_k.
Differentiating equation (1) with respect to W yields
(U + λL) W = W~    (3)
where U denotes the identity matrix; the optimized high-quality mask W is obtained by solving linear equation (3).
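The structure of the step-C2 solve can be shown in miniature. For simplicity the sketch below uses a 1-D second-difference Laplacian in place of the matting-style Laplacian of equation (2); the linear system being solved has the same form as equation (3).

```python
import numpy as np

def refine_mask(w0, lam=0.5):
    """Solve (U + lam*L) W = W~ as in equation (3).  L here is a
    plain 1-D second-difference Laplacian, a simplified stand-in for
    the windowed Laplacian of equation (2)."""
    n = w0.size
    L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    return np.linalg.solve(np.eye(n) + lam * L, w0)

w0 = np.array([0.0, 0.0, 1.0, 1.0, 1.0])  # hard initial mask
W = refine_mask(w0)
print(W.round(3))  # smoothed mask values between 0 and 1
```

The regularization term λ W^T L W penalizes mask variation, so the solved W is a smoothed version of W~; with the true Laplacian of equation (2) the smoothing additionally respects luminance edges.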
Step S104: compute the contrast information V'_L of the enhanced visible-light image from the visible-light contrast information V_L, the infrared contrast information N_L, and the mask W. This specifically comprises the following.
Experiments have verified that the gradients of both natural images and near-infrared images follow a Laplacian distribution. The present invention therefore adjusts the contrast of the visible-light image by matching the gradient amplitudes of the two images. Let V_G denote the gradient amplitude of the visible-light image and N_G the gradient amplitude of the infrared image. Compute the probability histograms of V_G and N_G, fit each histogram with a Laplacian curve, obtain the cumulative probability function of each curve, and match the two cumulative functions to obtain the matched result V'_G; the new gradient of the visible-light luminance dimension is then obtained from V'_G. Solving a Poisson equation with this gradient yields the new visible-light luminance map V_L'. Combined with the mask W obtained in step S103, the final contrast information of the enhanced visible-light image after migration is
V'_L = W · V_L + (1 - W) · V_L'    (4)
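The gradient-amplitude matching of step S104 can be sketched with empirical quantile matching: each visible gradient magnitude in V_G is replaced by the same-rank value from the infrared distribution N_G. The patent fits Laplacian curves to the two histograms and matches their cumulative probability functions; direct quantile matching is a non-parametric stand-in for that fit, and the subsequent Poisson solve for V_L' is omitted here.

```python
import numpy as np

def match_gradients(v_g, n_g):
    """Replace each value of v_g by the equal-rank quantile of n_g,
    an empirical version of the CDF matching in step S104."""
    order = np.argsort(v_g.ravel())
    idx = np.linspace(0, n_g.size - 1, v_g.size).round().astype(int)
    matched = np.empty(v_g.size)
    matched[order] = np.sort(n_g.ravel())[idx]
    return matched.reshape(v_g.shape)

rng = np.random.default_rng(0)
v_g = np.abs(rng.laplace(0.0, 0.2, 1000))  # weak visible gradients
n_g = np.abs(rng.laplace(0.0, 1.0, 1000))  # stronger IR gradients
v_g2 = match_gradients(v_g, n_g)
print(v_g2.std() > v_g.std())  # True: contrast stretched toward IR
```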
Step S105: compute the texture information V'_D of the enhanced visible-light image from the visible-light texture information V_D, the infrared texture information N_D, and the mask W.
Using the mask W obtained in step S103, the texture information of the visible-light and near-infrared images is fused; after this texture migration from the near-infrared image, the texture information of the enhanced visible-light image is obtained. The formula of the migration step is V'_D = W · V_D + (1 - W) · N_D.
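Both fusion steps reduce to the same masked blend, shown here for the texture bands of step S105 (equation form V'_D = W·V_D + (1-W)·N_D); the function name is illustrative.

```python
import numpy as np

def migrate(visible, infrared, W):
    """Masked migration: where W is close to 1 the visible-image
    band is kept; where W is close to 0 the infrared band is
    migrated in."""
    return W * visible + (1.0 - W) * infrared

W   = np.array([1.0, 0.5, 0.0])   # reliable -> unreliable regions
V_D = np.array([0.2, 0.2, 0.2])   # visible texture band
N_D = np.array([0.8, 0.8, 0.8])   # infrared texture band
print(migrate(V_D, N_D, W))  # -> [0.2 0.5 0.8]
```

The same blend, applied to V_L and the Poisson-recovered V_L', gives the contrast fusion of equation (4).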
Step S106: fuse the contrast information V'_L and the texture information V'_D of the enhanced visible-light image to obtain the high-quality enhanced luminance information V'.
Specifically, the inverse of the reversible transform used in step S102 is applied to the contrast information V'_L migrated through the contrast mask and the texture information V'_D migrated through the texture mask, yielding the luminance map V' of the enhanced visible-light image. In a preferred embodiment of the invention, the inverse reversible transform is an inverse wavelet transform.
Step S107: merge the luminance information V' of the enhanced visible-light image with the saturation S and hue H of the original visible-light image to obtain the final visible-light image enhanced by the infrared image.
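The recombination of step S107 can be sketched per pixel with the standard-library HSV-to-RGB conversion; a vectorized conversion would be used in practice, and the function name is illustrative.

```python
import colorsys
import numpy as np

def recombine(H, S, V_new):
    """Attach the enhanced luminance V' to the hue H and saturation
    S of the original visible image, then convert each pixel from
    HSV back to RGB with colorsys (stdlib)."""
    out = np.empty(H.shape + (3,))
    for idx in np.ndindex(H.shape):
        out[idx] = colorsys.hsv_to_rgb(H[idx], S[idx], V_new[idx])
    return out

H = np.full((2, 2), 0.0)       # hue of the original image (red)
S = np.full((2, 2), 1.0)       # saturation of the original image
V_new = np.full((2, 2), 0.8)   # enhanced luminance from step S106
print(recombine(H, S, V_new)[0, 0])  # -> [0.8 0.  0. ]
```

Only the luminance channel is replaced; hue and saturation come unchanged from the original visible-light image, which keeps the colors of the enhanced result natural.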
In the description of this specification, references to the terms "an embodiment", "some embodiments", "an example", "a specific example", or "some examples" mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example, and the particular features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described, those of ordinary skill in the art will appreciate that various changes, modifications, substitutions, and variations can be made to these embodiments without departing from the principles and spirit of the invention; the scope of the invention is defined by the claims and their equivalents.
Claims (6)
1. An image enhancement method, characterized by comprising the steps of:
A. collecting a visible-light image and an infrared image, wherein said visible-light image is aligned with the optical center of said infrared image;
B. applying a reversible transform to the luminance dimension of said visible-light image to obtain visible-light contrast information V_L and visible-light texture information V_D, and applying the reversible transform to the luminance dimension of said infrared image to obtain infrared contrast information N_L and infrared texture information N_D;
C. computing a mask W from the saturation S and luminance V of said visible-light image, said mask W being used to fuse said visible-light image and said infrared image, wherein step C specifically comprises: C1. computing an initial mask W~ from the saturation S and luminance V of said visible-light image, and C2. optimizing said initial mask W~ to obtain the mask W;
D. computing the contrast information V'_L of the enhanced visible-light image from said visible-light contrast information V_L, said infrared contrast information N_L, and said mask W;
E. computing the texture information V'_D of the enhanced visible-light image from said visible-light texture information V_D, said infrared texture information N_D, and said mask W;
F. applying the inverse of the reversible transform to the contrast information V'_L and the texture information V'_D of said enhanced visible-light image to obtain the luminance V' of the enhanced visible-light image; and
G. mixing the luminance V' of said enhanced visible-light image with the saturation S and hue H of said visible-light image to obtain the enhanced visible-light image,
wherein the formula for computing the initial mask W~ in step C1 is W~ = W_S · W_V, where W_S = e^(-α|S-1|), W_V = e^(-β|V-0.5|), and α and β are positive coefficients,
wherein the mask W in step C2 is computed by minimizing
(W - W~)^T (W - W~) + λ W^T L W    (1)
where L is a Laplacian matrix and λ is a regularization coefficient, the (i, j)-th element of the matrix L being defined as
L(i, j) = Σ_{k | (i, j) ∈ w_k} [ δ_ij - (1/|w_k|)(1 + (V_i - μ_k)(V_j - μ_k)/(Σ_k + ε/|w_k|)) ]    (2)
where V_i and V_j are the luminance values of said visible-light image at pixels i and j, δ_ij is the impulse function (1 when i = j, 0 otherwise), μ_k is the mean of the luminance values in window w_k, Σ_k is the variance of the luminance values in window w_k, ε is a regularization parameter, and |w_k| is the number of elements in window w_k; and
differentiating the formula for said mask W yields the linear equation
(U + λL) W = W~    (3)
where U is the identity matrix, the final mask W being obtained by solving said linear equation.
2. The image enhancement method according to claim 1, characterized in that said reversible transform is a wavelet transform and said inverse reversible transform is an inverse wavelet transform.
3. The image enhancement method according to claim 1, characterized in that in said visible-light image, the value of said mask W is smaller in regions where said saturation S is too low and where said luminance V is too high or too low.
4. The image enhancement method according to claim 1, characterized in that the formula for computing the contrast information V'_L of the enhanced visible-light image in step D is V'_L = W · V_L + (1 - W) · V_L', where V_L' is new luminance-map information obtained by matching the gradient amplitudes of said visible-light image and said infrared image.
5. The image enhancement method according to claim 4, characterized in that the new luminance-map information V_L' is computed as follows: let V_G denote the gradient magnitude of said visible-light image and N_G the gradient magnitude of said infrared image; compute the probability histograms of V_G and N_G, fit each histogram with a Laplacian curve, obtain the cumulative probability function of each curve, and match the two cumulative functions to obtain the matched result V'_G; derive the new gradient of the visible-light luminance dimension from V'_G; and solve a Poisson equation with the obtained gradient to obtain said new luminance-map information V_L'.
6. The image enhancement method according to claim 1, characterized in that the formula for computing the texture information V'_D of the enhanced visible-light image in step E is V'_D = W · V_D + (1 - W) · N_D.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210157662.0A CN102722864B (en) | 2012-05-18 | 2012-05-18 | Image enhancement method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102722864A CN102722864A (en) | 2012-10-10 |
CN102722864B true CN102722864B (en) | 2014-11-26 |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1545064A (en) * | 2003-11-27 | 2004-11-10 | 上海交通大学 | Infrared and visible light image merging method |
CN101853492A (en) * | 2010-05-05 | 2010-10-06 | 浙江理工大学 | Method for fusing night-viewing twilight image and infrared image |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8503778B2 (en) * | 2009-05-14 | 2013-08-06 | National University Of Singapore | Enhancing photograph visual quality using texture and contrast data from near infra-red images |
Non-Patent Citations (2)
Title |
---|
Xiaopeng Zhang, et al. "Enhancing Photographs with Near Infrared Images." IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2008), 2008-06-28; abstract, Section 3 "Visible Image Enhancement", Section 4 "Experiments and Results". * |
Wang Shuang, et al. "Dual-tree complex wavelet image fusion algorithm based on directional contrast and local variance." Chinese Journal of Stereology and Image Analysis, Vol. 14, No. 2, 2009-06-30, pp. 133-137. * |
Legal Events
Code | Title |
---|---|
C06 / PB01 | Publication |
C10 / SE01 | Entry into substantive examination |
C14 / GR01 | Grant of patent or utility model |