CN105701783A - Single image defogging method based on ambient light model and apparatus thereof - Google Patents


Info

Publication number
CN105701783A
CN105701783A (application CN201610023592.8A)
Authority
CN
China
Prior art keywords
ambient light
original image
image
estimate
veil
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610023592.8A
Other languages
Chinese (zh)
Other versions
CN105701783B (en)
Inventor
王维东 (Wang Weidong)
聂涛 (Nie Tao)
沈翰祺 (Shen Hanqi)
黄露 (Huang Lu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201610023592.8A priority Critical patent/CN105701783B/en
Publication of CN105701783A publication Critical patent/CN105701783A/en
Application granted granted Critical
Publication of CN105701783B publication Critical patent/CN105701783B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T5/73
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details

Abstract

The invention discloses a single-image defogging method based on an ambient light model, and an apparatus thereof. The method comprises the following steps: multiplying each channel of an original image by a weighting coefficient and summing the results to obtain a multi-channel weighted luminance map I(x) of the image; using I(x) to estimate an ambient light map that varies with position, and using the individual channels Ic(x) of the original image or the weighted luminance map I(x) to estimate an ambient light veil; using the ambient light map and the ambient light veil to estimate the scene reflectivity ρ(x); and multiplying ρ(x) by a uniform illuminance constant Auni to obtain the defogged clear image. The invention adopts a physical model based on ambient light: the ambient light and the ambient light veil are estimated, the scene reflectivity is derived from them, and finally the defogged image is recovered channel by channel. This addresses the limited applicability of the classical atmospheric scattering model, in particular its poor defogging performance in night fog and when additional light sources are present in the scene.

Description

Single image defogging method and apparatus based on an ambient light model
Technical field
The present invention relates to a digital image processing method, and in particular to a single image defogging method and apparatus based on an ambient light model.
Background technology
In foggy conditions visibility drops sharply, so captured images are severely degraded: important scene information is lost, the information contained in the image can no longer be retrieved accurately, the practical value of the image is reduced, and subsequent image processing and analysis become much harder. Likewise, images shot outdoors often appear low in contrast and blurred because of scattering and absorption in the atmosphere. This directly affects application systems built on computer vision algorithms, such as video surveillance, remote sensing, and target recognition. In video surveillance, for example, dense fog at the monitored site blurs the fine details of the scene and hinders subsequent analysis. If the police need such footage for crime scene analysis, a suspect cannot be identified from blurred facial features, and strong physical evidence cannot be obtained afterwards; similarly, after a traffic accident, the police cannot determine the cause of the accident or the responsible party from indistinct vehicle features and license plate numbers.
It is therefore particularly important to defog images so as to remove the influence of fog, enrich the image information, raise contrast and sharpness, enhance the visual quality of the image, and thereby provide reliable input for subsequent image processing and data analysis.
Image defogging methods fall broadly into two categories. The first is based on image enhancement: it does not consider the physical cause of the degradation, and simply enhances the regions of interest. Common examples are histogram equalization, Retinex algorithms, and homomorphic filtering. Although these methods improve the sharpness of some images, they are ill-suited to scenes with large depth variation, and they cannot recover the true colors of the scene after defogging.
The second category is defogging based on a physical model. In 1976 McCartney proposed an atmospheric scattering model explaining how a scene is imaged through atmospheric particles; many recent defogging algorithms are built on this model, whose expression is I(x) = A·ρ(x)·e^(−βd(x)) + A·[1 − e^(−βd(x))]. The model divides the light received by the camera into two parts: attenuated reflected light and airlight. The attenuated reflected light is the portion of the light reflected by object surfaces that survives scattering by atmospheric particles and reaches the camera; its intensity decays exponentially with propagation distance. The airlight is the portion of naturally scattered light that the particles redirect into the camera; its intensity grows with propagation distance.
The above model, however, assumes that the fog is uniform and thick enough for sunlight to be fully scattered within it and become isotropic, so that the airlight A is a constant. In practice sunlight is not fully scattered, so A is not constant, nor can the sunlight reaching the camera directly be ignored. When the scene contains luminous objects whose emission far exceeds the reflected light, the model no longer applies at all; it also does not consider occlusion within the scene. Studying the atmospheric scattering mechanism, proposing an ambient light model in which the ambient light varies with position, and applying this model to single image defogging therefore has good application prospects.
Summary of the invention
It is an object of the invention to provide a single image defogging method and apparatus based on an ambient light model, applicable to more complex scenarios and achieving a better defogging effect.
The technical scheme provided by the invention is as follows.
A single image defogging method based on an ambient light model, comprising:
(1) using the channels of the original image to compute its multi-channel weighted luminance map I(x);
(2) using I(x) to estimate the ambient light map Êl(x), which varies with position (the coordinate x in the original image), and using the individual channels Ic(x) of the original image or the weighted luminance map I(x) to estimate the ambient light veil V̂(x);
(3) using the ambient light map Êl(x) and the ambient light veil V̂(x) to estimate the scene reflectivity ρ(x);
(4) multiplying ρ(x) by the uniform illuminance constant Auni to obtain the defogged clear image.
Preferably, the multi-channel weighted luminance map I(x) is obtained by multiplying each channel of the original image by a weighting coefficient and summing; the coefficients may take the values specified by the various graphics standards, or specific values, or indeed arbitrary values. For example, I(x) may be computed with the coefficients
I(x) = 0.299R + 0.587G + 0.114B
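As a minimal sketch, the weighted sum above can be computed in a few lines (the function name and the H × W × 3 array layout are our own assumptions, not from the patent):

```python
import numpy as np

def weighted_luminance(img, weights=(0.299, 0.587, 0.114)):
    """Multi-channel weighted luminance map I(x).

    img: H x W x 3 array in R, G, B order. The default weights are the
    BT.601 coefficients quoted above; the text allows any coefficients.
    """
    w = np.asarray(weights, dtype=float)
    return img.astype(float) @ w  # per-pixel sum of weight * channel
```

A uniform gray pixel maps to its own gray value because the default weights sum to 1.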
Using the weighted luminance map I(x) so obtained, the ambient light map Êl(x) is estimated according to the ambient light model
I(x) = Êl(x)·ρ(x)·t(x) + Êl(x)·[1 − t(x)] = Êl(x)·ρ(x)·t(x) + V̂(x)
where Êl(x) is the ambient light, comprising airlight, direct sunlight, scattered light and the direct light of other light sources in the scene; ρ(x) is the reflectivity of the scene objects; t(x) is the transmittance of the scene light after scattering attenuation on its way to the imaging device, t(x) = e^(−βd(x)), with β the total scattering coefficient of the fog and d(x) the scene depth; and V̂(x) is the ambient light veil, whose physical meaning is the extra brightness gain superimposed on the image of the scene by the ambient light Êl(x) scattered through the fog.
Preferably, the position-dependent ambient light map Êl(x) is estimated as follows.
First low-pass filter the weighted luminance map I(x) to obtain an initial ambient light map Êl_init(x), then find the offset El_offset satisfying
El_offset = max( I(x) − Êl_init(x) ), ∀x ∈ image;
and obtain the estimate by
Êl(x) = Êl_init(x) + El_offset.
Alternatively, find the ratio λ_E satisfying
λ_E = max( I(x) ÷ Êl_init(x) ), ∀x ∈ image;
and obtain the estimate by
Êl(x) = Êl_init(x) · λ_E.
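A sketch of both variants of this estimate, assuming a Gaussian blur as the low-pass filter (the scale `sigma`, the function name, and the epsilon guard are our own illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_ambient_light(I, sigma=15.0, mode="offset"):
    """Sketch of the ambient light map estimate El_hat(x) described above.

    I: 2-D float luminance map. sigma is an assumed low-pass scale.
    """
    El_init = gaussian_filter(I, sigma)              # low-pass filtered I(x)
    if mode == "offset":
        El_offset = np.max(I - El_init)              # El_offset = max(I - El_init)
        return El_init + El_offset                   # El_hat = El_init + El_offset
    lam_E = np.max(I / np.maximum(El_init, 1e-6))    # lambda_E = max(I / El_init)
    return El_init * lam_E                           # El_hat = El_init * lambda_E
```

Both variants guarantee Êl(x) ≥ I(x) at every pixel, which keeps the later division for ρ(x) well behaved.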
As a preferred implementation, the low-pass filtering may be carried out by convolving I(x) with Gaussian blur kernels of a large and a small scale, the results being denoted Êl_large(x) and Êl_small(x) respectively.
Êl_small(x) is then divided by Êl_large(x), the global maximum of the quotient is taken as the ratio λ_E, and finally Êl_large(x) is multiplied by λ_E to obtain the ambient light map Êl(x).
Alternatively, after convolving I(x) with the large- and small-scale Gaussian kernels to obtain Êl_large(x) and Êl_small(x),
Êl_large(x) is subtracted from Êl_small(x), the global maximum of the difference is taken as the offset El_offset, and finally El_offset is added to Êl_large(x) to obtain the ambient light map Êl(x).
Preferably, when the weighted luminance map I(x) is used to estimate the ambient light veil V̂(x), the method is:
Build a filter bank from several low-pass filters (typically 3 to 5 filters with low, lower-middle, middle, upper-middle and high cutoff frequencies, though not limited to this), apply to I(x) either a single low-pass filter or the combined filtering of several different low-pass filters, and take the weighted average of the filter outputs as the initial veil V̂_init(x). Then find the offset V_offset satisfying
V_offset = max( V̂_init(x) − I(x) ), ∀x ∈ image;
and obtain the estimate by
V̂(x) = V̂_init(x) − V_offset.
Alternatively, find the ratio λ_V satisfying
λ_V = min( I(x) ÷ V̂_init(x) ), ∀x ∈ image;
and obtain the estimate by
V̂(x) = V̂_init(x) · λ_V.
Concretely this may be done as follows:
Using combined filtering at different scales, convolve I(x) with Gaussian blur kernels of a large, a medium and a small scale, and take a weighted average of the results as the initial veil V̂_init(x). Then convolve I(x) with the small-scale Gaussian kernel and denote the result V̂_small(x); divide V̂_small(x) by V̂_init(x), take the global minimum of the quotient as the ratio λ_V, and finally multiply V̂_init(x) by λ_V to obtain the estimate of the veil V̂(x).
Alternatively, with the same multi-scale combined filtering, obtain V̂_init(x) as the weighted average of the three convolutions and V̂_small(x) as the convolution of I(x) with the small-scale kernel; subtract V̂_small(x) from V̂_init(x), take the global maximum of the difference as the offset V_offset, and finally subtract V_offset from V̂_init(x) to obtain the estimate of the veil V̂(x).
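The multi-scale combined-filter variant with the ratio λ_V can be sketched as follows (the particular Gaussian scales and averaging weights are illustrative assumptions, not values from the patent):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_veil(I, sigmas=(2.0, 8.0, 32.0), weights=(0.25, 0.35, 0.4)):
    """Multi-scale combined-filter sketch of the veil estimate V_hat(x).

    Blur I(x) at small, medium and large scales, take a weighted average
    as V_init, then scale by lambda_V = min(I / V_init) so that
    V_hat(x) <= I(x) at every pixel.
    """
    V_init = sum(w * gaussian_filter(I, s) for w, s in zip(weights, sigmas))
    lam_V = np.min(I / np.maximum(V_init, 1e-6))   # lambda_V = min(I / V_init)
    return V_init * lam_V                          # V_hat = V_init * lambda_V
```

The scaling step mirrors the offset/ratio lift of the ambient light map, but in the opposite direction: the veil is pushed below the luminance rather than above it.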
The dark channel method may also be used to estimate the veil V̂(x). Using the individual channels Ic(x) of the original image, the method is:
Take the per-pixel minimum over the R, G and B channels of the original image, then apply a minimum filter to this minimum-channel map within each local region (the original image may be divided into multiple local regions), and take the filter output as the estimate of the veil V̂(x); that is, the estimate is completed as
min_{y∈Ω(x)} ( min_c I_c(y) ) = t(x) · min_{y∈Ω(x)} ( El(y)·ρ(y) ) + V(x) ≈ 0 + V(x) = V̂(x)
where c ranges over the R, G, B color channels, Ω(x) is a local region of size a × a, and a is the side length of the region in pixels, typically 7 to 11 but not limited to this.
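A minimal sketch of the dark channel estimate above (the function name is our own; the window size `a` follows the 7–11 pixel range quoted in the text):

```python
import numpy as np
from scipy.ndimage import minimum_filter

def veil_dark_channel(img, a=9):
    """Dark-channel estimate of the veil V_hat(x), per the formula above:
    per-pixel minimum over R, G, B, then an a x a minimum filter.

    img: H x W x 3 float array in R, G, B order.
    """
    min_channel = img.min(axis=2)                 # min_c Ic(y)
    return minimum_filter(min_channel, size=a)    # min over the a x a region
```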
Before estimating the scene reflectivity ρ(x), the estimated ambient light map Êl(x) and veil V̂(x) are refined with a filter that has edge-preserving ability.
The refinement may be performed with a bilateral filter (a filter with edge-preserving ability) applied to Êl(x) and V̂(x): in regions where Êl(x) and V̂(x) change strongly the filter weights are small and the smoothing effect is weak, so the edge information of Êl(x) and V̂(x) is preserved; in regions where they change little the weights are large, the filter smooths, and halo effects are removed.
Alternatively a guided filter (another filter with edge-preserving ability) may be applied to Êl(x) and V̂(x), with the original image as the guide image and Êl(x) or V̂(x) as the input image, under the criterion of minimizing the gap between the filter output and the input image: in regions where Êl(x) and V̂(x) change little it acts as a weighted mean filter, and in regions where they change strongly the filtering strength is small, preserving the edge information.
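The bilateral refinement can be sketched with a small pure-numpy filter; the parameter values are illustrative assumptions. The spatial Gaussian weight is multiplied by a range (intensity-difference) weight, so across an edge the weight collapses and the edge survives, while flat regions are smoothed, matching the behaviour described above:

```python
import numpy as np

def bilateral_filter(img, radius=3, sigma_s=2.0, sigma_r=10.0):
    """Minimal bilateral filter sketch for refining El_hat / V_hat.

    img: 2-D float map. Each output pixel is a normalized average of its
    neighbours, weighted by spatial distance AND intensity difference.
    """
    H, W = img.shape
    pad = np.pad(img, radius, mode="edge")
    acc = np.zeros_like(img, dtype=float)
    norm = np.zeros_like(img, dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            nb = pad[radius + dy:radius + dy + H, radius + dx:radius + dx + W]
            w = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2)
                       - (nb - img) ** 2 / (2 * sigma_r ** 2))
            acc += w * nb
            norm += w
    return acc / norm
```

A production implementation would normally use a library routine instead of this O(radius²) loop.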
Preferably, the scene reflectivity ρ(x) is estimated by the formula
ρ(x) = ( I(x) − V̂(x) ) / ( Êl(x) − V̂(x) );
or by
ρ(x) = ( I(x) − k·V̂(x) ) / ( Êl(x) − V̂(x) );
where k is a compensation coefficient, typically 0.8 to 1.0 but not limited to this.
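The reflectivity formula above is a single vectorized expression; the epsilon guard against a zero denominator is our own addition, not part of the patent:

```python
import numpy as np

def reflectivity(I, El, V, k=1.0):
    """rho(x) = (I - k*V) / (El - V), per the formulas above.

    k is the compensation coefficient (typically 0.8..1.0).
    """
    return (I - k * V) / np.maximum(El - V, 1e-6)
```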
Preferably, the defogged clear image is obtained by multiplying ρ(x) by the uniform illuminance constant Auni, i.e. by applying to the original image the defogging formula
I_res(x) = Auni · ρ(x) = Auni · ( I(x) − V̂(x) ) / ( Êl(x) − V̂(x) );
or
I_res(x) = Auni · ρ(x) = Auni · ( I(x) − k·V̂(x) ) / ( Êl(x) − V̂(x) );
where Auni is the uniform illuminance constant, taken as 0.8 to 1.0 times the maximum number representable at the current luminance bit depth; for example, if the maximum number is 255 it may take a value of 200 to 255, but it is not limited to this; I_res(x) is the defogged clear image, I(x) is the weighted luminance map, Êl(x) is the estimated ambient light map and V̂(x) the estimated veil.
If required, the result may also be adjusted with the formula
I′_res(x) = I_res(x) ± I_offset
where I_offset is an intensity offset, taken as 0 to 0.08 times the maximum number representable at the current luminance bit depth; for example, if the maximum number is 255 it may take a value of 0 to 20, but it is not limited to this; I′_res(x) is the adjusted clear image.
The maximum number representable at the luminance bit depth is independent of the image content; it is simply the largest brightness value representable at the bit depth used during processing. For example, an 8-bit image has a maximum of 255 and a 16-bit image a maximum of 65535, but it is not limited to these.
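The two formulas above combine into one per-channel defogging step; the final clip to the representable range and the epsilon guard are our own safeguards, not from the patent:

```python
import numpy as np

def defog_channel(I, El, V, A_uni=230.0, k=1.0, I_offset=0.0, max_val=255.0):
    """I_res = A_uni * (I - k*V)/(El - V), then an optional +/- I_offset.

    A_uni ~ 0.8..1.0 and I_offset ~ 0..0.08 of the bit-depth maximum
    (e.g. 200..255 and 0..20 for 8-bit images), per the text above.
    """
    rho = (I - k * V) / np.maximum(El - V, 1e-6)   # scene reflectivity
    I_res = A_uni * rho + I_offset                 # defogged channel
    return np.clip(I_res, 0.0, max_val)            # clip: our own safeguard
```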
Preferably, using the multi-channel information of the image, the defogging formulas I_res(x) = Auni·ρ(x) and I′_res(x) = I_res(x) ± I_offset are applied to the channels carrying luminance information, channel by channel, and all channel information is finally combined into the defogged clear image. The multi-channel information of the image refers to the several color components describing the color characteristics of the image under a given color mode. The concrete processing is as follows:
For RGB mode, the defogging formulas I_res(x) = Auni·ρ(x) and I′_res(x) = I_res(x) ± I_offset are applied to the R, G and B channels separately to restore the defogged image of each channel, and the three defogged channels are finally combined into the defogged clear image;
For Lab mode, the defogging formulas are applied to the lightness information of the L channel while the two color channels a and b remain unchanged, and the L, a, b channels are finally combined to complete the defogging;
For YUV mode, the defogging formulas are applied to the luminance information of the Y channel, the two chrominance channels U and V are each multiplied by a chrominance amplification factor, and the Y, U, V channels are finally combined to complete the defogging;
For HSI mode, the defogging formulas are applied to the luminance information of the I channel, the hue information of the H channel remains unchanged, the saturation information of the S channel is corrected according to the saturation correction formula, and the H, S, I channels are finally combined to complete the defogging.
Processing is not limited to these color modes.
The saturation correction formula is as follows:
S2(x) = S1(x)^(σ1/σ2) · e^( a1 − a2·σ1/σ2 )
where S1(x) and S2(x) are the saturations of the original and defogged images respectively, σ1 and σ2 are the standard deviations of the logarithm of the luminance component of the original and defogged images, and a1 and a2 are the corresponding means of the logarithm of the luminance component.
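The correction formula can be sketched directly from its definition; the epsilon guard and the assumption that S1 is normalized to (0, 1] are our own additions:

```python
import numpy as np

def correct_saturation(S1, lum_orig, lum_defog, eps=1e-6):
    """S2 = S1**(sigma1/sigma2) * exp(a1 - a2*sigma1/sigma2).

    a_i / sigma_i are the mean / std of the log-luminance of the original
    and defogged images, as in the formula above.
    """
    l1 = np.log(lum_orig + eps)
    l2 = np.log(lum_defog + eps)
    a1, s1 = l1.mean(), l1.std()
    a2, s2 = l2.mean(), l2.std()
    r = s1 / max(s2, eps)          # sigma1 / sigma2
    return S1 ** r * np.exp(a1 - a2 * r)
```

When the defogged luminance statistics match the originals, r = 1 and a1 = a2, so the saturation is left unchanged, as expected.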
The invention also discloses a single image defogging apparatus based on the ambient light model, comprising:
an original image processing module for computing the multi-channel weighted luminance map I(x) from the channels of the original image;
an ambient light map estimation module for using I(x) to estimate the ambient light map Êl(x), which varies with position (the coordinate x in the original image);
an ambient light veil estimation module for using the individual channels Ic(x) of the original image or the weighted luminance map I(x) to estimate the veil V̂(x);
a reflectivity estimation module for using Êl(x) and V̂(x) to estimate the scene reflectivity ρ(x);
and a defogging execution module for multiplying ρ(x) by the uniform illuminance constant Auni to obtain the defogged clear image.
Preferably, the original image processing module includes a processing unit that multiplies each channel of the input original image by its weighting coefficient and sums the results to obtain I(x).
Preferably, the ambient light map estimation module includes: a λ_E estimation unit for estimating the ratio λ_E; an ambient light estimation unit for estimating Êl(x); and a refinement unit for refining the estimated Êl(x). Alternatively it includes: an El_offset estimation unit for finding the offset El_offset; an ambient light estimation unit for further estimating Êl(x); and a refinement unit for refining the estimated Êl(x).
Preferably, the veil estimation module includes: a combined filtering unit for filtering I(x) with filters of different scales; a λ_V estimation unit for estimating the ratio λ_V; a veil estimation unit for estimating V̂(x); and a refinement unit for refining the estimated V̂(x). Alternatively it includes: a V_offset estimation unit for finding the offset V_offset; a veil estimation unit for estimating V̂(x); and a refinement unit for refining the estimated V̂(x). Alternatively it includes: a dark channel unit for estimating V̂(x); and a refinement unit for refining the estimated V̂(x).
Preferably, the reflectivity estimation module includes a ρ(x) evaluation unit for estimating the scene reflectivity ρ(x) from the estimated Êl(x) and V̂(x).
Preferably, the defogging execution module includes a defogging processing unit for restoring the defogged image channel by channel from the scene reflectivity ρ(x).
Compared with the prior art, the invention has the following advantages:
The invention adopts a physical model based on ambient light: the ambient light and the ambient light veil are estimated, the scene reflectivity is derived from them, and the defogged image is finally recovered channel by channel. This solves the limited applicability of the classical model, in particular its poor defogging performance in night fog and when additional light sources are present in the scene. The invention applies not only to thick fog but also to light fog; the defogged image remains fairly clear even in night fog and with luminous sources in the scene, and the image contrast is greatly improved.
Brief description of the drawings
Fig. 1 is a flowchart of the single image defogging process based on the ambient light model according to an embodiment of the invention;
Fig. 2 is a detailed flowchart of the single image defogging method based on the ambient light model according to the embodiment;
Fig. 3(1) is a flowchart of the estimation method for the ambient light map Êl(x) according to the embodiment;
Fig. 3(2) illustrates a concrete estimation method for Êl(x) according to the embodiment;
Fig. 4(1) is a flowchart of the estimation method for the ambient light veil V̂(x) according to the embodiment;
Fig. 4(2) illustrates a concrete estimation method for V̂(x) according to the embodiment;
Fig. 5 is a structural block diagram of the single image defogging apparatus based on the ambient light model;
Fig. 6 is a detailed structural block diagram of the apparatus.
Detailed description of the embodiments
To make the purpose, technical scheme and advantages of the embodiments of the invention clear, the technical scheme is described below fully and clearly with reference to the accompanying drawings; obviously, the described embodiments are only a part of the embodiments of the invention, not all of them.
First, a brief outline of the single image defogging method provided by the embodiment: the original image captured in a foggy scene is processed by weighting and summing its channels to obtain the multi-channel weighted luminance map I(x); I(x) is used to estimate the ambient light map Êl(x), which is then refined with a bilateral or guided filter; I(x) is likewise used to estimate the ambient light veil V̂(x), which is refined in the same way; the reflectivity ρ(x) of the scene is estimated from Êl(x) and V̂(x); and finally channel-by-channel processing yields the defogged clear image.
As shown in Fig. 1, the method mainly comprises the following steps (S102 to S110):
Step S102: obtain the original image and compute the multi-channel weighted luminance map I(x).
Step S104: estimate the ambient light map Êl(x) from I(x).
Step S106: estimate the ambient light veil V̂(x) from the individual channels Ic(x) of the original image or from I(x).
Step S108: estimate the scene reflectivity ρ(x) from the estimated Êl(x) and V̂(x).
Step S110: process the image channel by channel to obtain the defogged clear image.
In the present embodiment, the weighted luminance map I(x) may be obtained by multiplying each channel of the original image by its weighting coefficient and summing; the coefficients may take the values specified by the various graphics standards, or specific values, or arbitrary values. For example, the following coefficients may be used:
I(x) = 0.299R + 0.587G + 0.114B
In the present embodiment, the ambient light map Êl(x) is estimated from I(x) by using the relation between I(x) and Êl(x) in the ambient light model:
I(x) = Êl(x)·ρ(x)·t(x) + Êl(x)·[1 − t(x)] = Êl(x)·ρ(x)·t(x) + V̂(x)
where Êl(x) is the ambient light, comprising airlight, direct sunlight, scattered light and the direct light of other light sources in the scene; ρ(x) is the reflectivity of the scene objects; t(x) is the transmittance of the scene light after scattering attenuation on its way to the imaging device, t(x) = e^(−βd(x)), with β the total scattering coefficient of the fog and d(x) the scene depth; and V̂(x) is the ambient light veil, the extra brightness gain superimposed on the image of the scene by the ambient light El(x) scattered through the fog.
In the present embodiment, Êl(x) is estimated from I(x) as follows:
First low-pass filter I(x) to obtain the initial ambient light map Êl_init(x), then find the offset El_offset satisfying
El_offset = max( I(x) − Êl_init(x) ), ∀x ∈ image;
and obtain the estimate by
Êl(x) = Êl_init(x) + El_offset.
Alternatively, find the ratio λ_E satisfying
λ_E = max( I(x) ÷ Êl_init(x) ), ∀x ∈ image;
and obtain the estimate by
Êl(x) = Êl_init(x) · λ_E.
Finally, the estimated Êl(x) is refined with a bilateral or guided filter.
In the present embodiment, the veil V̂(x) is estimated from the channels Ic(x) of the original image or from I(x) as follows:
Build a filter bank from several low-pass filters (typically 3 to 5 filters with low, lower-middle, middle, upper-middle and high cutoff frequencies, though not limited to this), apply to I(x) a single low-pass filter or the combined filtering of several different low-pass filters, and take the weighted average of the outputs as the initial veil V̂_init(x). Then find the offset V_offset satisfying
V_offset = max( V̂_init(x) − I(x) ), ∀x ∈ image;
and obtain the estimate by
V̂(x) = V̂_init(x) − V_offset.
Alternatively, find the ratio λ_V satisfying
λ_V = min( I(x) ÷ V̂_init(x) ), ∀x ∈ image;
and obtain the estimate by
V̂(x) = V̂_init(x) · λ_V.
The dark channel method may also be used to estimate the veil V̂(x): take the per-pixel minimum over the R, G and B channels of the original image, apply a minimum filter to this minimum-channel map in each local region, and take the filter output as the estimate of the veil, i.e.
min_{y∈Ω(x)} ( min_c I_c(y) ) = t(x) · min_{y∈Ω(x)} ( El(y)·ρ(y) ) + V(x) ≈ 0 + V(x) = V̂(x)
where c ranges over the R, G, B color channels, Ω(x) is a local region of size a × a, and a is the region side length, typically 7 to 11 but not limited to this.
Finally, the estimated veil V̂(x) is refined with a bilateral or guided filter.
In the present embodiment, the estimated environment light map Êl(x) and environment light veil V̂(x) are used to estimate the scene object reflectance ρ(x), which can be realized in the following way: the formula
ρ(x) = ( I(x) − V̂(x) ) / ( Êl(x) − V̂(x) )   or   ρ(x) = ( I(x) − k·V̂(x) ) / ( Êl(x) − V̂(x) )
finally yields the object surface reflectance ρ(x), where k is a compensation coefficient whose common value is typically 0.8 to 1.0, but it is not limited to this.
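For illustration, the reflectance formula with the compensation coefficient k can be sketched per pixel as follows (a toy example, not the patented implementation):

```python
def reflectance(i, el, v, k=1.0):
    """rho = (I - k*V) / (El - V); k is typically in [0.8, 1.0] per the embodiment."""
    return (i - k * v) / (el - v)
```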
In the present embodiment, the channels are processed one by one to finally obtain the clear image after defogging, which can be realized in the following way: the following defogging formula is applied to the original image multichannel weighted luminance map I(x):
I_res(x) = A_uni · ρ(x) = A_uni · ( I(x) − V̂(x) ) / ( Êl(x) − V̂(x) )
or
I_res(x) = A_uni · ρ(x) = A_uni · ( I(x) − k·V̂(x) ) / ( Êl(x) − V̂(x) )
where A_uni is the uniform illumination constant, whose value is 0.8 to 1.0 times the maximum number representable by the current luminance bit depth; in the present embodiment the value is 200 to 255, but it is not limited to this. I_res(x) is the clear image after defogging, I(x) is the original image multichannel weighted luminance map, Êl(x) is the environment light map estimate, and V̂(x) is the environment light veil estimate.
If required, the result can also be adjusted with the following formula:
I′_res(x) = I_res(x) ± I_offset
where I_offset is the intensity offset, whose value is 0 to 0.08 times the maximum number representable by the current luminance bit depth; in the present embodiment the value is 0 to 20, but it is not limited to this. I′_res(x) is the clear image after result adjustment.
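Combining the defogging formula and the optional result adjustment, a minimal per-pixel sketch (the A_uni, k and I_offset defaults are illustrative choices from the stated ranges, and the final clamp to the bit depth is an assumption, not stated in the text):

```python
def defog_pixel(i, el, v, a_uni=230.0, k=1.0, i_offset=0.0, bit_max=255.0):
    """I_res = A_uni * (I - k*V) / (El - V), then shift by I_offset and clamp."""
    rho = (i - k * v) / (el - v)
    res = a_uni * rho + i_offset
    return min(max(res, 0.0), bit_max)
```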
In the present embodiment, processing the channels one by one to finally obtain the clear image after defogging can also be realized in the following way:
using the multichannel information of the image, the defogging formulas I_res(x) = A_uni·ρ(x) and I′_res(x) = I_res(x) ± I_offset are applied to the channels containing luminance information, channel by channel, and all channel information is finally integrated to obtain the clear image after defogging.
The single image defogging method provided by the above embodiment is described in more detail below in conjunction with Fig. 1 (Fig. 1 is a flow chart of the single image defogging method according to an embodiment of the present invention) and the preferred embodiments. As shown in Fig. 2, the flow comprises the following steps (step S202 to step S212); the environment light map Êl(x) estimation flow (step S206 to step S20618) is further detailed in conjunction with Fig. 3(1) and Fig. 3(2), and the environment light veil V̂(x) estimation flow (step S208 to step S20826) is further detailed in conjunction with Fig. 4(1) and Fig. 4(2).
Step S202: read in the original image.
Step S204: obtain the original image multichannel weighted luminance map I(x). Using the original image obtained in step S202, each of its channels is multiplied by a weight coefficient and the products are summed to obtain the multichannel weighted luminance map I(x). The weight coefficients may take the values specified by various graphics standards, or specific values, or arbitrary values. For example, the following weight coefficients may be selected to compute I(x):
I(x) = 0.299R + 0.587G + 0.114B
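The weighted-luminance step can be sketched directly from the BT.601-style weights quoted above:

```python
def weighted_luminance(r, g, b):
    """Multichannel weighted luminance: I = 0.299 R + 0.587 G + 0.114 B."""
    return 0.299 * r + 0.587 * g + 0.114 * b
```

Since the three weights sum to 1, a pure white pixel maps to full-scale luminance and a pure black pixel to zero.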
Step S206: use the original image multichannel weighted luminance map I(x) obtained in step S204 to estimate the environment light map Êl(x).
The global estimation method of steps S2062 to S2066 shown in Fig. 3(1) may be used to estimate the environment light map Êl(x).
Step S2062: apply low-pass filtering to the original image multichannel weighted luminance map I(x) to obtain the initial environment light map Êl_init(x). Take the difference between I(x) and the initial environment light map Êl_init(x) and take the global maximum, using the result as the offset El_offset; the formula is:
El_offset = max( I(x) − Êl_init(x) ), ∀x ∈ image;
or divide I(x) by the initial environment light map Êl_init(x) and take the global maximum, using the result as the deviation ratio λ_E; the formula is:
λ_E = max( I(x) ÷ Êl_init(x) ), ∀x ∈ image
Step S2064: use the initial environment light map Êl_init(x) obtained in step S2062 and the offset El_offset or deviation ratio λ_E to estimate the environment light map Êl(x). Add the offset El_offset to the initial environment light map Êl_init(x); the formula is:
Êl(x) = Êl_init(x) + El_offset
or multiply the initial environment light map Êl_init(x) by the deviation ratio λ_E; the formula is:
Êl(x) = Êl_init(x) · λ_E
Step S2066: refine the environment light map Êl(x) estimated in step S2064 with a guided filter or a bilateral filter.
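Steps S2062–S2066 can be sketched in 1-D as follows (a moving average stands in for the low-pass filter and the refinement step is omitted; an illustrative sketch, not the patented implementation). By construction, Êl(x) ≥ I(x) at every position, since the global offset lifts the smoothed map above the brightest residual.

```python
def box_filter(signal, radius):
    """Moving-average low-pass filter with clamped borders."""
    n = len(signal)
    return [sum(signal[max(0, i - radius):min(n, i + radius + 1)])
            / (min(n, i + radius + 1) - max(0, i - radius)) for i in range(n)]

def estimate_ambient_map(luminance, radius=2):
    """El_init = low-pass(I); El = El_init + max(I - El_init)."""
    el_init = box_filter(luminance, radius)
    el_offset = max(i - e for i, e in zip(luminance, el_init))
    return [e + el_offset for e in el_init]
```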
The two estimation examples of steps S2068 to S20618 shown in Fig. 3(2) further illustrate the estimation method of the environment light map Êl(x).
Step S2068: convolve the original image multichannel weighted luminance map I(x) obtained in step S204 with a large-scale Gaussian blur kernel F_l(x) and a small-scale Gaussian blur kernel F_s(x) respectively, denoting the results Ī_large(x) and Ī_small(x). Then divide Ī_small(x) by Ī_large(x) and take the global maximum of the quotient as the deviation ratio λ_E; the formula is:
λ_E = [ I(x) ∗ F_s(x) / ( I(x) ∗ F_l(x) ) ]_max = [ Ī_small(x) / Ī_large(x) ]_max
Step S20610: multiply the convolution result Ī_large(x) of the large-scale Gaussian blur kernel F_l(x) and the original image multichannel weighted luminance map I(x) from step S2068 by the deviation ratio λ_E computed in step S2068, finally estimating the environment light map Êl(x); the computing formula is:
Êl(x) = λ_E · I(x) ∗ F_l(x) = λ_E · Ī_large(x)
Step S20612: refine the environment light map Êl(x) estimated in step S20610 with a bilateral filter or a guided filter.
Step S20614: convolve the original image multichannel weighted luminance map I(x) obtained in step S204 with the large-scale Gaussian blur kernel F_l(x) and the small-scale Gaussian blur kernel F_s(x) respectively, denoting the results Ī_large(x) and Ī_small(x). Subtract Ī_large(x) from Ī_small(x) and take the global maximum to obtain the offset El_offset; the computing formula is:
El_offset = [ I(x) ∗ F_s(x) − I(x) ∗ F_l(x) ]_max = [ Ī_small(x) − Ī_large(x) ]_max
Step S20616: add the offset El_offset to the filter result Ī_large(x) obtained in step S20614, thereby estimating the environment light map Êl(x); the computing formula is:
Êl(x) = Ī_large(x) + El_offset = I(x) ∗ F_l(x) + El_offset
Step S20618: refine the environment light map Êl(x) estimated in step S20616 with a bilateral filter or a guided filter.
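Steps S2068–S20612 can be sketched in 1-D with small- and large-scale Gaussian blurs (the σ values are illustrative assumptions and the refinement step is omitted). Because λ_E = max(Ī_small/Ī_large), the result satisfies Êl(x) ≥ Ī_small(x) everywhere.

```python
import math

def gaussian_blur(signal, sigma):
    """1-D Gaussian blur with border clamping."""
    radius = max(1, int(3 * sigma))
    kernel = [math.exp(-0.5 * (t / sigma) ** 2) for t in range(-radius, radius + 1)]
    norm = sum(kernel)
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for t in range(-radius, radius + 1):
            j = min(max(i + t, 0), n - 1)  # clamp index at the borders
            acc += kernel[t + radius] / norm * signal[j]
        out.append(acc)
    return out

def estimate_ambient_two_scale(luminance, sigma_s=1.0, sigma_l=4.0):
    """lambda_E = max(I*Fs / I*Fl); El = lambda_E * (I*Fl)."""
    small = gaussian_blur(luminance, sigma_s)
    large = gaussian_blur(luminance, sigma_l)
    lam = max(s / l for s, l in zip(small, large))
    return [lam * l for l in large]
```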
Step S208: use the original image multichannel weighted luminance map I(x) obtained in step S204 to estimate the environment light veil V̂(x).
The group method of steps S2082 to S2086 shown in Fig. 4(1) may be used to estimate the environment light veil V̂(x).
Step S2082: construct a filter bank from multiple low-pass filters (typically 3 to 5 filters with low, lower, medium, higher and high cutoff frequencies, but not limited to this); filter I(x) either with a single low-pass filter or with the combined filtering of several different low-pass filters, then take the weighted average of the filter results to obtain the initial environment light veil V̂_init(x). Then determine the offset V_offset satisfying the following formula:
V_offset = max( I(x) − V̂_init(x) ), ∀x ∈ image
or determine the deviation ratio λ_V satisfying the following formula:
λ_V = min( I(x) ÷ V̂_init(x) ), ∀x ∈ image;
Step S2084: use the initial environment light veil V̂_init(x) obtained in step S2082 and the offset V_offset or deviation ratio λ_V to estimate the environment light veil V̂(x); the computing formula is
V̂(x) = V̂_init(x) − V_offset   or   V̂(x) = V̂_init(x) · λ_V
Step S2086: refine the environment light veil estimate V̂(x) obtained in step S2084 with a bilateral filter or a guided filter.
Steps S2088 to S20810 estimate the environment light veil V̂(x) with the dark channel method.
Step S2088: take the per-pixel minimum of the R, G, B channels of the original image, then apply a minimum filter to this minimum-channel value within each local region, using the filter result as the estimate of the environment light veil V̂(x), i.e. complete the estimation as follows:
min_{y∈Ω(x)} ( min_c ( I_c(y) ) ) = t(x) · min_{y∈Ω(x)} ( El(x) · ρ(x) ) + V(x) ≈ 0 + V(x) = V̂(x)
where c indexes the three color channels R, G, B, Ω(x) is a local region of size a × a, and a is the side length of the local region; a common value is 7 to 11, but it is not limited to this.
Step S20810: refine the environment light veil estimate V̂(x) obtained in step S2088 with a bilateral filter or a guided filter.
The two estimation examples of steps S20812 to S20826 shown in Fig. 4(2) further illustrate the estimation method of the environment light veil V̂(x).
Step S20812: using the method of multi-scale combined filtering, convolve the original image multichannel weighted luminance map I(x) with Gaussian blur kernels of three scales (large, medium and small) respectively, and take the weighted average to obtain the initial environment light veil V̂_init(x); the formula is:
V̂_init(x) = Σ_{i=1}^{3} ω_i [ I(x) ∗ F_i(x) ]
where ω_i is a weight, generally taken as 1/3 but not limited to this, and F_i(x) are the Gaussian blur kernels of different scales.
Step S20814: convolve I(x) with the small-scale Gaussian blur kernel F_s(x), denoting the result Ī_small(x). Then divide Ī_small(x) by the initial environment light veil V̂_init(x) obtained in step S20812 and take the global minimum to obtain the deviation ratio λ_V, as given by:
λ_V = [ I(x) ∗ F_s(x) / Σ_{i=1}^{3} ω_i ( I(x) ∗ F_i(x) ) ]_min = [ Ī_small(x) / V̂_init(x) ]_min
Step S20816: multiply the λ_V obtained in step S20814 by the V̂_init(x) obtained in step S20812 to calculate the environment light veil estimate V̂(x); the formula is V̂(x) = λ_V · V̂_init(x).
Step S20818: refine the environment light veil estimate V̂(x) with a bilateral filter or a guided filter.
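Steps S20812–S20816 can be sketched in 1-D as follows (the Gaussian σ values and equal weights are illustrative assumptions; the refinement step is omitted). Taking λ_V = min(Ī_small/V̂_init) guarantees V̂(x) ≤ Ī_small(x) everywhere.

```python
import math

def gaussian_blur(signal, sigma):
    """1-D Gaussian blur with border clamping."""
    radius = max(1, int(3 * sigma))
    kernel = [math.exp(-0.5 * (t / sigma) ** 2) for t in range(-radius, radius + 1)]
    norm = sum(kernel)
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for t in range(-radius, radius + 1):
            j = min(max(i + t, 0), n - 1)
            acc += kernel[t + radius] / norm * signal[j]
        out.append(acc)
    return out

def estimate_veil_three_scale(luminance, sigmas=(1.0, 2.0, 4.0)):
    """V_init = weighted average of three Gaussian blurs; V = lambda_V * V_init."""
    blurred = [gaussian_blur(luminance, s) for s in sigmas]
    v_init = [sum(vals) / len(vals) for vals in zip(*blurred)]
    small = blurred[0]  # smallest-scale blur plays the role of I * Fs
    lam = min(s / v for s, v in zip(small, v_init))
    return [lam * v for v in v_init]
```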
Step S20820: using the method of multi-scale combined filtering, convolve the original image multichannel weighted luminance map I(x) with Gaussian blur kernels of three scales (large, medium and small) respectively, then take the weighted average to obtain the initial environment light veil V̂_init(x); the formula is:
V̂_init(x) = Σ_{i=1}^{3} ω_i [ I(x) ∗ F_i(x) ]
where ω_i is a weight, generally taken as 1/3 but not limited to this, and F_i(x) are the Gaussian blur kernels of different scales.
Step S20822: convolve I(x) with the small-scale Gaussian blur kernel, denoting the result Ī_small(x); take the difference between the V̂_init(x) obtained in step S20820 and Ī_small(x) and take the global maximum to obtain the offset V_offset; the computing formula is:
V_offset = [ Σ_{i=1}^{3} ω_i ( I(x) ∗ F_i(x) ) − I(x) ∗ F_s(x) ]_max = [ V̂_init(x) − Ī_small(x) ]_max
Step S20824: subtract the offset V_offset obtained in step S20822 from the V̂_init(x) obtained in step S20820 to obtain the environment light veil estimate V̂(x); the computing formula is:
V̂(x) = V̂_init(x) − V_offset
Step S20826: refine the environment light veil estimate V̂(x) with a bilateral filter or a guided filter.
Step S210: use the environment light map Êl(x) and environment light veil V̂(x) estimated in the above steps to estimate the scene object reflectance ρ(x), computed with the following formula:
ρ(x) = ( I(x) − V̂(x) ) / ( Êl(x) − V̂(x) )   or   ρ(x) = ( I(x) − k·V̂(x) ) / ( Êl(x) − V̂(x) )
finally yielding the object surface reflectance ρ(x), where k is a compensation coefficient whose common value is typically 0.8 to 1.0, but it is not limited to this.
Step S212: use the object surface reflectance ρ(x) obtained in step S210 to restore the defogged image; defogging may be performed on the original image with the following formula:
I_res(x) = A_uni · ρ(x) = A_uni · ( I(x) − V̂(x) ) / ( Êl(x) − V̂(x) )
or defogging may be performed with:
I_res(x) = A_uni · ρ(x) = A_uni · ( I(x) − k·V̂(x) ) / ( Êl(x) − V̂(x) )
where I(x) is the original image multichannel weighted luminance map, I_res(x) is the luminance information of the clear image obtained after defogging, and Êl(x) and V̂(x) are the estimates obtained above; A_uni is the uniform illumination constant, whose value is 0.8 to 1.0 times the maximum number representable by the current luminance bit depth; in the present embodiment the value is 200 to 255, but it is not limited to this.
If required, the result can also be adjusted with the following formula:
I′_res(x) = I_res(x) ± I_offset
where I_offset is the intensity offset, whose value is 0 to 0.08 times the maximum number representable by the current luminance bit depth; in the present embodiment the value is 0 to 20, but it is not limited to this. I′_res(x) is the clear image after result adjustment.
The multichannel information of the image may be used, performing defogging channel by channel, and all channel information is finally integrated to obtain the clear image after defogging.
When the R, G, B three-channel mode is used, the defogging formula is applied to each of the R, G, B channels to restore the defogged image of each channel; the defogged images of the three channels are finally integrated into the clear image after defogging.
Or, when the L, a, b three-channel mode is used, the defogging formula is applied to the lightness information of the L channel while the two color channels a and b remain unchanged; the L, a, b three-channel information is finally integrated to complete defogging.
Or, when the Y, U, V three-channel mode is used, the defogging formula is applied to the luminance information of the Y channel, and when the two chrominance channels U and V are processed, each is multiplied by a chrominance amplification factor whose magnitude is determined by the estimated environment light veil; the Y, U, V channel information is finally integrated to complete defogging.
Or, when the H, S, I three-channel mode is used, the defogging formula is applied to the luminance information of the I channel, the hue information of the H channel remains unchanged, and the saturation information of the S channel is corrected according to the saturation correction formula; the H, S, I channel information is finally integrated to complete defogging.
However, processing is not limited to these several modes.
The saturation correction formula is as follows:
S_2(x) = S_1(x)^{σ_1/σ_2} · e^{ a_1 − a_2·σ_1/σ_2 }
where S_1(x) and S_2(x) are the saturations of the original image and the defogged image respectively, σ_1 and σ_2 are the standard deviations of the logarithm of the luminance component of the original image and the defogged image respectively, and a_1 and a_2 are the means of the logarithm of the luminance component of the original image and the defogged image respectively.
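Assuming the correction formula reads S_2(x) = S_1(x)^{σ_1/σ_2} · e^{a_1 − a_2·σ_1/σ_2} (the rendering in the source is partly garbled), a sketch with hypothetical helper names:

```python
import math

def log_stats(luminance):
    """Mean and (population) standard deviation of the log-luminance."""
    logs = [math.log(v) for v in luminance]
    mean = sum(logs) / len(logs)
    var = sum((x - mean) ** 2 for x in logs) / len(logs)
    return mean, math.sqrt(var)

def correct_saturation(s1, lum_orig, lum_defog):
    """S2 = S1**(sig1/sig2) * exp(a1 - a2*sig1/sig2), per the assumed formula."""
    a1, sig1 = log_stats(lum_orig)
    a2, sig2 = log_stats(lum_defog)
    r = sig1 / sig2
    return [s ** r * math.exp(a1 - a2 * r) for s in s1]
```

A sanity property of this form: when the defogged luminance statistics equal the original ones, the saturation is left unchanged.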
Fig. 5 is a structural block diagram of the single image defogging apparatus according to an embodiment of the present invention. The apparatus is used to realize the single image defogging method based on the environment light model provided by the above embodiment. As shown in Fig. 5, the apparatus mainly includes: an original image processing module 10, an environment light map estimation module 20, an environment light veil estimation module 30, a reflectance ρ(x) estimation module 40 and a defogging execution module 50. The original image processing module 10 is used to solve the multichannel weighted luminance map I(x) of the original image. The environment light map estimation module 20 is connected to the original image processing module 10 and is used to estimate the environment light map Êl(x). The environment light veil estimation module 30 is connected to the original image processing module 10 and is used to estimate the environment light veil V̂(x). The reflectance ρ(x) estimation module 40 is connected to the environment light map estimation module 20 and the environment light veil estimation module 30, and is used to estimate the reflectance ρ(x) from the environment light map Êl(x) and the environment light veil V̂(x). The defogging execution module 50 is connected to the reflectance ρ(x) estimation module 40 and is used to multiply the reflectance ρ(x) by the uniform illumination constant A_uni to obtain the clear image after defogging.
Fig. 6 is a concrete structural block diagram of a single image defogging apparatus based on the environment light model according to a preferred embodiment of the present invention. As shown in Fig. 6, in the single image defogging apparatus based on the environment light model provided by the preferred embodiment, preferably, the original image processing module 10 may further include: a processing unit 12, which multiplies each channel of the original image by a weight coefficient and sums the products to obtain the multichannel weighted luminance map I(x) of the image. The weight coefficients may take the values specified by various graphics standards, or specific values, or arbitrary values; the multichannel weighted luminance map I(x) may be solved with the following weight coefficients:
I(x) = 0.299R + 0.587G + 0.114B
In the single image defogging apparatus based on the environment light model provided by the preferred embodiment, the environment light map estimation module 20 estimates the environment light map Êl(x). The estimation method is: first apply low-pass filtering to the original image multichannel weighted luminance map I(x) to obtain the initial environment light map Êl_init(x), then determine the offset El_offset satisfying the following formula:
El_offset = max( I(x) − Êl_init(x) ), ∀x ∈ image;
and obtain the estimate by:
Êl(x) = Êl_init(x) + El_offset;
or determine the deviation ratio λ_E satisfying the following formula:
λ_E = max( I(x) ÷ Êl_init(x) ), ∀x ∈ image;
and obtain the estimate by:
Êl(x) = Êl_init(x) · λ_E;
Finally, a guided filter or a bilateral filter is used to refine the estimated environment light map Êl(x).
Preferably, the environment light map estimation module 20 may further include: a λ_E estimation unit 22 for estimating the deviation ratio λ_E according to:
λ_E = [ I(x) ∗ F_s(x) / ( I(x) ∗ F_l(x) ) ]_max = [ Ī_small(x) / Ī_large(x) ]_max
First, the original image multichannel weighted luminance map I(x) is filtered with the small-scale Gaussian filter F_s(x) and the large-scale Gaussian filter F_l(x) respectively, the results being denoted Ī_small(x) and Ī_large(x); then Ī_small(x) is divided by Ī_large(x) and the global maximum of the quotient is taken as the deviation ratio λ_E. An environment light map estimation unit 24, connected to the λ_E estimation unit 22, is used to further estimate the environment light map Êl(x) with:
Êl(x) = λ_E · I(x) ∗ F_l(x) = λ_E · Ī_large(x)
The concrete operation is to multiply Ī_large(x) by the estimated deviation ratio λ_E, finally estimating the environment light map Êl(x). A refinement unit 26, connected to the environment light map estimation unit 24, refines the estimated Êl(x) with a bilateral filter or a guided filter.
Or it includes an El_offset estimation unit 28 for determining the offset El_offset according to:
El_offset = [ Ī_small(x) − Ī_large(x) ]_max = [ I(x) ∗ F_s(x) − I(x) ∗ F_l(x) ]_max
First, the original image multichannel weighted luminance map I(x) is filtered with the large-scale Gaussian filter F_l(x) and the small-scale Gaussian filter F_s(x) respectively, the filter results being denoted Ī_large(x) and Ī_small(x); Ī_large(x) is subtracted from Ī_small(x) and the global maximum of the difference is taken as the offset El_offset. An environment light map estimation unit 210, connected to the El_offset estimation unit 28, is used to further estimate the environment light map Êl(x) with:
Êl(x) = I(x) ∗ F_l(x) + El_offset = Ī_large(x) + El_offset
The concrete operation is to add the solved offset El_offset to the filter result Ī_large(x), thereby estimating Êl(x). A refinement unit 212, connected to the environment light map estimation unit 210, refines the estimated Êl(x) with a bilateral filter or a guided filter.
In the single image defogging apparatus based on the environment light model provided by the preferred embodiment, the environment light veil estimation module 30 estimates the environment light veil V̂(x). The estimation method is: first, a filter bank is constructed from multiple low-pass filters (typically 3 to 5 filters with low, lower, medium, higher and high cutoff frequencies, but not limited to this); I(x) is filtered either with a single low-pass filter or with the combined filtering of several different low-pass filters, and the weighted average of the filter results is taken to obtain the initial environment light veil V̂_init(x). Then the offset V_offset satisfying the following formula is determined:
V_offset = max( I(x) − V̂_init(x) ), ∀x ∈ image;
and the estimate is obtained by:
V̂(x) = V̂_init(x) − V_offset;
or the deviation ratio λ_V satisfying the following formula is determined:
λ_V = min( I(x) ÷ V̂_init(x) ), ∀x ∈ image;
and the estimate is obtained by:
V̂(x) = V̂_init(x) · λ_V;
Finally, a bilateral filter or a guided filter is used to refine the estimated environment light veil V̂(x).
Preferably, the environment light veil estimation module 30 may further include: a combined filtering unit 32 for convolving the original image multichannel weighted luminance map I(x) with Gaussian blur kernels of three different scales and taking the weighted average to obtain the initial environment light veil V̂_init(x); the formula is:
V̂_init(x) = Σ_{i=1}^{3} ω_i [ I(x) ∗ F_i(x) ]
where ω_i is a weight and F_i(x) are the Gaussian blur kernels of different scales. A λ_V estimation unit 34, whose λ_V computing formula is:
λ_V = [ I(x) ∗ F_s(x) / Σ_{i=1}^{3} ω_i ( I(x) ∗ F_i(x) ) ]_min = [ Ī_small(x) / V̂_init(x) ]_min
I(x) is convolved with the small-scale Gaussian blur kernel F_s(x), the result being denoted Ī_small(x); then Ī_small(x) is divided by the initial environment light veil V̂_init(x) and the global minimum is taken to obtain the deviation ratio λ_V. An environment light veil estimation unit 36, connected to the λ_V estimation unit 34, finally multiplies the deviation ratio λ_V by the initial environment light veil V̂_init(x) to estimate the environment light veil V̂(x); the computing formula is
V̂(x) = λ_V · Σ_{i=1}^{3} ω_i [ I(x) ∗ F_i(x) ] = λ_V · V̂_init(x)
A refinement unit 38, connected to the environment light veil estimation unit 36, refines the estimated environment light veil V̂(x) with a bilateral filter or a guided filter.
Or it includes a V_offset estimation unit 310 for determining the offset V_offset with the following formula:
V_offset = [ V̂_init(x) − Ī_small(x) ]_max = [ Σ_{i=1}^{3} ω_i ( I(x) ∗ F_i(x) ) − I(x) ∗ F_s(x) ]_max
Using the method of multi-scale combined filtering, the original image multichannel weighted luminance map I(x) is convolved with Gaussian blur kernels of three scales (large, medium and small) respectively, and the weighted average is taken to obtain the initial environment light veil V̂_init(x); then I(x) is convolved with the small-scale Gaussian blur kernel, the result being denoted Ī_small(x), and the difference between V̂_init(x) and Ī_small(x) is taken along with its global maximum to obtain the offset V_offset. An environment light veil estimation unit 312, connected to the V_offset estimation unit 310, subtracts the offset V_offset from the initial environment light veil V̂_init(x) to estimate the environment light veil V̂(x); the computing formula is:
V̂(x) = Σ_{i=1}^{3} ω_i [ I(x) ∗ F_i(x) ] − V_offset = V̂_init(x) − V_offset
A refinement unit 314, connected to the environment light veil estimation unit 312, refines the estimated environment light veil V̂(x) with a bilateral filter or a guided filter.
Or it includes a dark channel unit 316 for obtaining the dark channel image of the original image as follows and estimating the environment light veil V̂(x): take the per-pixel minimum of the R, G, B channels of the original image, then apply a minimum filter to this minimum-channel value within each local region, using the filter result as the estimate of the environment light veil V̂(x), i.e. the estimation is completed as follows:
min_{y∈Ω(x)} ( min_c ( I_c(y) ) ) = t(x) · min_{y∈Ω(x)} ( El(x) · ρ(x) ) + V(x) ≈ 0 + V(x) = V̂(x)
where c indexes the three color channels R, G, B, Ω(x) is a local region of size a × a, and a is the side length of the local region; a common value is 7 to 11, but it is not limited to this.
A refinement unit 318, connected to the dark channel unit 316, refines the estimated environment light veil V̂(x) with a bilateral filter or a guided filter.
In the single image defogging apparatus based on the environment light model provided by the preferred embodiment, preferably, the reflectance ρ(x) estimation module may further include: a reflectance ρ(x) evaluation unit 42 for estimating the scene object reflectance ρ(x) from the environment light map Êl(x) and the environment light veil V̂(x), computed with the following formula:
ρ(x) = ( I(x) − V̂(x) ) / ( Êl(x) − V̂(x) )   or   ρ(x) = ( I(x) − k·V̂(x) ) / ( Êl(x) − V̂(x) )
where k is a compensation coefficient whose common value is typically 0.8 to 1.0, but it is not limited to this.
In the single image defogging apparatus based on the environment light model provided by the preferred embodiment, preferably, the defogging execution module may further include: a defogging processing unit 52 for multiplying the scene object reflectance ρ(x) by the uniform illumination constant A_uni and performing defogging channel by channel to restore the defogged image; the computing formula is:
I_res(x) = A_uni · ρ(x) = A_uni · ( I(x) − V̂(x) ) / ( Êl(x) − V̂(x) )
or defogging is performed with:
I_res(x) = A_uni · ρ(x) = A_uni · ( I(x) − k·V̂(x) ) / ( Êl(x) − V̂(x) )
where I(x) denotes the original image multichannel weighted luminance map, I_res(x) is the luminance information obtained after defogging, and Êl(x) and V̂(x) are the estimates obtained above; A_uni is the uniform illumination constant, whose value is 0.8 to 1.0 times the maximum number representable by the current luminance bit depth; in the present embodiment the value is 200 to 255, but it is not limited to this.
If required, the result can also be adjusted with the following formula:
I′_res(x) = I_res(x) ± I_offset
where I_offset is the intensity offset, whose value is 0 to 0.08 times the maximum number representable by the current luminance bit depth; in the present embodiment the value is 0 to 20, but it is not limited to this. I′_res(x) is the clear image after result adjustment.
The multichannel information of the image may be used, performing defogging channel by channel, and all channel information is finally integrated to obtain the clear image after defogging.
When the R, G, B three-channel mode is used, the defogging formula is applied to each of the R, G, B channels to restore the defogged image of each channel; the defogged images of the three channels are finally integrated into the clear image after defogging.
Or, when the L, a, b three-channel mode is used, the defogging formula is applied to the lightness information of the L channel while the two color channels a and b remain unchanged; the L, a, b three-channel information is finally integrated to complete defogging.
Or, when the Y, U, V three-channel mode is used, the defogging formula is applied to the luminance information of the Y channel, and when the two chrominance channels U and V are processed, each is multiplied by a chrominance amplification factor whose magnitude is determined by the estimated environment light veil; the Y, U, V channel information is finally integrated to complete defogging.
Or, when the H, S, I three-channel mode is used, the defogging formula is applied to the luminance information of the I channel, the hue information of the H channel remains unchanged, and the saturation information of the S channel is corrected according to the saturation correction formula; the H, S, I channel information is finally integrated to complete defogging.
However, processing is not limited to these several modes.
The saturation correction formula is as follows:
S_2(x) = S_1(x)^{σ_1/σ_2} · e^{ a_1 − a_2·σ_1/σ_2 }
where S_1(x) and S_2(x) are the saturations of the original image and the defogged image respectively, σ_1 and σ_2 are the standard deviations of the logarithm of the luminance component of the original image and the defogged image respectively, and a_1 and a_2 are the means of the logarithm of the luminance component of the original image and the defogged image respectively.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it; any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (9)

1. A single image defogging method based on an environment light model, characterized by comprising:
(1) using each channel information of an original image to solve an original image multichannel weighted luminance map I(x);
(2) using the original image multichannel weighted luminance map I(x) to estimate an environment light map Êl(x) that varies with position; using each channel information I_c(x) of the original image or the multichannel weighted luminance map I(x) to estimate an environment light veil V̂(x);
(3) using the environment light map Êl(x) and the environment light veil V̂(x) to estimate a scene object reflectance ρ(x);
(4) multiplying the scene object reflectance ρ(x) by a uniform illumination constant A_uni to obtain a clear image after defogging.
2. The single image defogging method based on an environment light model according to claim 1, characterized in that the solving method of the original image multichannel weighted luminance map I(x) is: each channel of the original image is multiplied by a weight coefficient, and the products are summed to obtain the multichannel weighted luminance map I(x) of the image.
3. The single image defogging method based on an environment light model according to claim 1, characterized in that the estimation method of the environment light map Êl(x) that varies with position is:
first, low-pass filtering the original image multichannel weighted luminance map I(x) to obtain an initial environment light map Êl_init(x); then determining an offset El_offset satisfying:
El_offset = max( I(x) − Êl_init(x) ), ∀x ∈ image;
and obtaining the estimate by Êl(x) = Êl_init(x) + El_offset;
or determining a deviation ratio λ_E satisfying:
λ_E = max( I(x) ÷ Êl_init(x) ), ∀x ∈ image;
and obtaining the estimate by Êl(x) = Êl_init(x) · λ_E.
4. The single image defogging method based on an ambient light model according to claim 1, characterized in that the method of estimating the ambient light mask from the multi-channel weighted luminance map I(x) is:
I(x) is low-pass filtered, either with a single low-pass filter or with a combination of several different low-pass filters applied one by one, and the filter results are combined by weighted averaging to obtain an initial ambient light mask; then the offset V_offset satisfying the following formula is determined:
The estimate is then obtained from the following equation:
Alternatively, the deviation ratio λ_V satisfying the following formula is determined:
The estimate is then obtained from the following equation:
The method of estimating the ambient light mask from each channel I_c(x) of the original image is:
Take the minimum of the R, G, B channel values of the original image at each pixel, then apply minimum filtering to this minimum-channel map over each local region of the original image, and use the filter result as the estimate of the ambient light mask, i.e., complete the estimation as follows:
where c ranges over the R, G, B color channels, Ω(x) is a local region of size a × a, and a is the side length of the region.
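The per-channel minimum construction of claim 4 (take the smallest R, G, B value at each pixel, then minimum-filter it over each a × a region Ω(x)) can be sketched as below; the window size and the brute-force filtering loop are illustrative choices, and `scipy.ndimage.minimum_filter` would do the same job more efficiently.

```python
import numpy as np

def ambient_light_mask(img, a=15):
    """Estimate the ambient light mask: take the per-pixel minimum over
    the R, G, B channels, then apply an a-by-a minimum filter over each
    local region Omega(x). The window size a=15 is an illustrative choice."""
    img = np.asarray(img, dtype=np.float64)
    min_channel = img.min(axis=-1)          # smallest of R, G, B per pixel
    pad = a // 2
    padded = np.pad(min_channel, pad, mode='edge')
    out = np.empty_like(min_channel)
    h, w = min_channel.shape
    for i in range(h):                      # brute-force local minimum filter
        for j in range(w):
            out[i, j] = padded[i:i + a, j:j + a].min()
    return out
```

On a fog-free dark pixel the mask is near zero; where fog scatters ambient light into the image, all three channels are lifted and the mask grows, which is what makes it usable as a brightness-gain estimate.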
5. The single image defogging method based on an ambient light model according to claim 1, characterized in that, before the scene object reflectivity ρ(x) is estimated, a filter with edge-preserving capability is used to refine the estimated ambient light map and ambient light mask.
6. The single image defogging method based on an ambient light model according to claim 1, characterized in that the estimation formula for the scene object reflectivity ρ(x) is:
where the ambient light model is used to estimate the scene object reflectivity ρ(x), and the relation among the multi-channel weighted luminance map I(x) of the original image, the ambient light map, the ambient light mask, and the scene object reflectivity ρ(x) is given by the following equation:
where t(x) denotes the transmittance of scene light reaching the imaging device after scattering attenuation, t(x) = e^(−βd(x)), β is the total scattering coefficient of the fog, and d(x) is the depth information; the ambient light map varies with image position, and its physical meaning is the superposition of the sunlight and the various direct or indirect light sources in the scene; the physical meaning of the ambient light mask is the extra brightness gain superimposed on the scene image by the ambient light scattered through the fog;
Or:
where k is a penalty coefficient with a value of 0.8 to 1.0.
7. The single image defogging method based on an ambient light model according to claim 6, characterized in that the defogged clear image is I_res(x), computed as:
Or:
where A_uni is the uniform illumination constant, whose value is 0.8 to 1.0 times the maximum number representable by the current luminance bit depth;
After the defogged clear image I_res(x) is obtained, the result is adjusted using the following equation:
I′_res(x) = I_res(x) ± I_offset
where I_offset is the luminance offset, whose value is 0 to 0.08 times the maximum number representable by the current luminance bit depth, and I′_res(x) is the clear image after this adjustment.
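The output computation of claims 7 and 8, I_res(x) = A_uni · ρ(x) followed by the adjustment I′_res(x) = I_res(x) ± I_offset, can be sketched as below. The particular fractions chosen for A_uni and I_offset, the choice of the + sign, and the final clipping step are illustrative assumptions within the ranges the claim states.

```python
import numpy as np

def defog_output(rho, bit_depth=8, a_frac=0.9, offset_frac=0.04):
    """Form I_res(x) = A_uni * rho(x), then apply the adjustment
    I'_res(x) = I_res(x) + I_offset. Per the claim, A_uni is 0.8-1.0x
    and I_offset is 0-0.08x the maximum value representable at the
    given luminance bit depth; the fractions here are sample points
    in those ranges, and the clipping is an added safety step."""
    max_val = (1 << bit_depth) - 1          # e.g. 255 for 8-bit luminance
    a_uni = a_frac * max_val                # uniform illumination constant
    i_offset = offset_frac * max_val        # luminance offset
    i_res = a_uni * np.asarray(rho, dtype=np.float64)
    return np.clip(i_res + i_offset, 0, max_val)
```

For multi-channel output (claim 8), the same function would be applied channel by channel to whichever channels carry luminance information, and the results recombined.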
8. The single image defogging method based on an ambient light model according to claim 1, characterized in that the defogged clear image is obtained as follows: using the multi-channel information of the image, the defogging formulas I_res(x) = A_uni · ρ(x) and I′_res(x) = I_res(x) ± I_offset are applied to the channels containing luminance information, the channels are processed one by one, and finally all channel information is combined to obtain the defogged clear image; here, the multi-channel information of the image refers to the multiple color components used to describe the color characteristics of the image under a specific color mode.
9. A single image defogging apparatus based on an ambient light model, characterized by comprising:
an original image processing module for obtaining the multi-channel weighted luminance map of the original image from each of its channels;
an ambient light estimation module for estimating, from the multi-channel weighted luminance map I(x), the ambient light map that varies with position (i.e., with coordinates in the original image);
an ambient light mask estimation module for estimating the ambient light mask from each channel I_c(x) of the original image or from the multi-channel weighted luminance map I(x);
a reflectivity estimation module for estimating the scene object reflectivity ρ(x) from the ambient light map and the ambient light mask;
a defogging execution module for multiplying the scene object reflectivity ρ(x) by the uniform illumination constant A_uni to obtain the defogged clear image.
CN201610023592.8A 2016-01-14 2016-01-14 Single image defogging method and apparatus based on an ambient light model Active CN105701783B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610023592.8A CN105701783B (en) 2016-01-14 2016-01-14 Single image defogging method and apparatus based on an ambient light model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610023592.8A CN105701783B (en) 2016-01-14 2016-01-14 Single image defogging method and apparatus based on an ambient light model

Publications (2)

Publication Number Publication Date
CN105701783A true CN105701783A (en) 2016-06-22
CN105701783B CN105701783B (en) 2018-08-07

Family

ID=56227398

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610023592.8A Active CN105701783B (en) Single image defogging method and apparatus based on an ambient light model

Country Status (1)

Country Link
CN (1) CN105701783B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107292853A (en) * 2017-07-27 2017-10-24 广东欧珀移动通信有限公司 Image processing method, device, computer-readable recording medium and mobile terminal
CN107590782A (en) * 2017-08-21 2018-01-16 西北工业大学 Thick cloud removal method for high-resolution optical images based on a fully convolutional network
CN108364261A (en) * 2017-12-13 2018-08-03 湖北工业大学 Gradient-guided TV-Retinex single-frame image defogging method
CN108805826A (en) * 2018-05-07 2018-11-13 珠海全志科技股份有限公司 Improve the method for defog effect
CN109816605A (en) * 2019-01-16 2019-05-28 大连海事大学 MSRCR image defogging method based on multi-channel convolution
CN109996377A (en) * 2017-12-29 2019-07-09 杭州海康威视数字技术股份有限公司 Street lamp control method, device, and electronic equipment
CN110874826A (en) * 2019-11-18 2020-03-10 北京邮电大学 Workpiece image defogging method and device applied to ion beam precise film coating
CN111738928A (en) * 2020-04-30 2020-10-02 南京图格医疗科技有限公司 Endoscope defogging method and device based on probability optimization and neural network

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101901473A (en) * 2009-05-31 2010-12-01 汉王科技股份有限公司 Adaptive defogging enhancement method for a single-frame image
US20110135200A1 (en) * 2009-12-04 2011-06-09 Chao-Ho Chen Method for determining if an input image is a foggy image, method for determining a foggy level of an input image and cleaning method for foggy images
CN102831591A (en) * 2012-06-27 2012-12-19 北京航空航天大学 Gaussian filter-based real-time defogging method for single image
CN103177424A (en) * 2012-12-07 2013-06-26 西安电子科技大学 Low-luminance image enhancement and denoising method
CN104240192A (en) * 2013-07-04 2014-12-24 西南科技大学 Rapid single-image defogging algorithm
CN104574412A (en) * 2015-01-22 2015-04-29 浙江大学 Remote sensing image defogging method under inhomogeneous cloud and fog condition


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JIN-HWAN KIM: "Single Image Dehazing Based on Contrast Enhancement", 2011 IEEE International Conference on Acoustics, Speech and Signal Processing *
WANG YONGCHAO: "Research on Image Dehazing Algorithms Based on the Dark Channel Prior", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107292853A (en) * 2017-07-27 2017-10-24 广东欧珀移动通信有限公司 Image processing method, device, computer-readable recording medium and mobile terminal
CN107590782A (en) * 2017-08-21 2018-01-16 西北工业大学 Thick cloud removal method for high-resolution optical images based on a fully convolutional network
CN107590782B (en) * 2017-08-21 2020-05-12 西北工业大学 High-resolution optical image thick cloud removing method based on full convolution network
CN108364261A (en) * 2017-12-13 2018-08-03 湖北工业大学 Gradient-guided TV-Retinex single-frame image defogging method
CN109996377A (en) * 2017-12-29 2019-07-09 杭州海康威视数字技术股份有限公司 Street lamp control method, device, and electronic equipment
CN108805826A (en) * 2018-05-07 2018-11-13 珠海全志科技股份有限公司 Improve the method for defog effect
CN109816605A (en) * 2019-01-16 2019-05-28 大连海事大学 MSRCR image defogging method based on multi-channel convolution
CN109816605B (en) * 2019-01-16 2022-10-04 大连海事大学 MSRCR image defogging method based on multi-channel convolution
CN110874826A (en) * 2019-11-18 2020-03-10 北京邮电大学 Workpiece image defogging method and device applied to ion beam precise film coating
CN110874826B (en) * 2019-11-18 2020-07-31 北京邮电大学 Workpiece image defogging method and device applied to ion beam precise film coating
CN111738928A (en) * 2020-04-30 2020-10-02 南京图格医疗科技有限公司 Endoscope defogging method and device based on probability optimization and neural network

Also Published As

Publication number Publication date
CN105701783B (en) 2018-08-07

Similar Documents

Publication Publication Date Title
CN105701783A (en) Single image defogging method based on ambient light model and apparatus thereof
US10290081B2 (en) System for image dehazing by modifying lower bound of transmittance and method therefor
CN106296612B (en) Staged surveillance video sharpening system and method guided by image quality evaluation and weather conditions
Galdran et al. Enhanced variational image dehazing
US8417053B2 (en) Cleaning method for foggy images
Negru et al. Exponential contrast restoration in fog conditions for driving assistance
CN108734670B (en) Method for restoring a single nighttime low-illumination haze image
CN102831591B (en) Gaussian filter-based real-time defogging method for single image
US9384532B2 (en) Apparatus for improving fogged image using user-controllable root operator
CN106709893B (en) All-weather haze image sharpening and restoration method
US9418402B2 (en) System for improving foggy luminance image using fog reduction estimation model
CN107301624B (en) Convolutional neural network defogging method based on region division and dense fog pretreatment
CN106846263A (en) Image defogging method based on fused channels and immune to sky regions
CN107578386A (en) Optimized defogging method for images captured by unmanned aerial vehicles
CN102665034A (en) Night effect removal method for camera-collected video
CN105913390B (en) Image defogging method and system
CN104021527B (en) Rain and snow removal method in image
CN104867121A (en) Fast image defogging method based on dark channel prior and Retinex theory
CN103914820A (en) Image haze removal method and system based on image layer enhancement
CN106251296A (en) Image defogging method and system
CN103578083A (en) Single image defogging method based on joint mean shift
CN104272347A (en) Image processing apparatus for removing haze contained in still image and method thereof
Halmaoui et al. Contrast restoration of road images taken in foggy weather
CN106657948A (en) Low-illumination Bayer image enhancement method and device
Choi et al. Fog detection for de-fogging of road driving images

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant