CN104200445A - Image defogging method with optimal contrast ratio and minimal information loss - Google Patents

Publication number
CN104200445A
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410504518.9A
Other languages
Chinese (zh)
Other versions
CN104200445B (en)
Inventor
谢从华
黄晓华
高蕴梅
乔伟伟
常晋义
Current Assignee
Changshu intellectual property operation center Co.,Ltd.
Original Assignee
Changshu Institute of Technology
Priority date
Filing date
Publication date
Application filed by Changshu Institute of Technology filed Critical Changshu Institute of Technology
Priority to CN201410504518.9A priority Critical patent/CN104200445B/en
Publication of CN104200445A publication Critical patent/CN104200445A/en
Application granted granted Critical
Publication of CN104200445B publication Critical patent/CN104200445B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention relates to an image defogging method with optimal contrast and minimal information loss, realised with a Gaussian mixture model, a quadtree, and maximum-contrast and minimum-information-loss functions. The method comprises the following steps: first, the image is segmented into two classes, a sky region and a non-sky region, using a Gaussian mixture model and the expectation-maximisation algorithm; second, the atmospheric light intensity of the atmospheric scattering model is estimated from the sky region of the image by quadtree iteration; then the non-sky region of the image is partitioned into grid cells, and the transmission of the atmospheric scattering model in each grid cell is computed with the maximum-contrast and minimum-information-loss functions; next, the transmission of the sky region is estimated from a constant coefficient and the optimal transmission of the non-sky region; finally, the restored sky and non-sky regions are merged and output.

Description

An image defogging method with optimal contrast and minimal information loss
Technical field
The present invention relates to computer image processing, and in particular to an image defogging method with optimal contrast and minimal information loss.
Background technology
Since 2012, nearly half of the 74 key cities monitored nationwide have suffered severe haze pollution. Haze shrouds the sky above us and blocks the line of sight: images captured by cameras, video-surveillance systems and other imaging equipment become dim in colour and low in contrast, and image quality degrades severely. This directly harms the visual quality of the images and greatly limits their range of application, so in many application scenarios images contaminated by haze must be defogged.
Atmospheric media consist mainly of air molecules, water vapour and aerosols. Suspended particulates in the air are the main cause of haze and also the root cause of image degradation. Under hazy weather, light reflected from object surfaces is affected by the suspended particles in the air on its way to the imaging device, so the device cannot capture a clear picture. The particles scatter the light, and the scattering loss attenuates the transmitted light, which lowers the contrast of the image.
Research on image defogging can be traced back to the work of L. Bissonnette et al. in 1992 on images taken in fog and rain. Over more than two decades of development the field has made substantial progress, with new ideas and methods continually emerging and being applied in practical engineering. The approaches fall into two broad directions: model-based and non-model-based.
Non-model-based methods address the low brightness and low contrast of hazy images with conventional image-enhancement techniques as the basic processing means. They do not analyse the cause of the image degradation; instead they emphasise the information of interest in the image while attenuating or removing unwanted information. They comprise spatial-domain and frequency-domain methods; typical examples include histogram equalisation, the curvelet transform, homomorphic filtering, methods based on the atmospheric modulation transfer function, wavelet methods and Retinex algorithms. Because non-model-based methods merely enhance the contrast of the image, ignore the degradation mechanism of hazy images, and ignore the fact that haze density is proportional to the scene depth of the target, they only improve the visual appearance of the image to some extent and are not true image defogging.
Model-based defogging methods analyse the cause of the image degradation, model the atmospheric scattering, and restore the image accordingly. They fall into three classes: (1) restoration based on partial differential equations; (2) restoration based on depth relationships; (3) restoration based on prior information.
Image defogging methods based on partial differential equations suit applications with high requirements on colour sharpness and contrast. From the atmospheric scattering model they build energy-optimisation models for global and local defogging of outdoor images, and derive partial differential equations involving the image gradient and the scene depth of field. Their weakness is that obtaining the depth information and gradually adjusting the atmospheric-scattering coefficients both require interactive operation by the user.
Methods based on depth relationships defog an image using a depth-relationship map. The depth map of the background is computed from scene images collected under different weather conditions, and the depth of foreground target objects is then obtained with relevant heuristic information. Although the defogging results of these methods are fairly satisfactory, their requirements on reference images are too strict to be met in practical applications.
Restoration methods based on prior information need multiple images or additional auxiliary information. They divide into two classes according to whether the scene depth is known. One class assumes the scene depth is known and predicts the optical path through the scene with a simple Gaussian function to restore the scene contrast, but it needs a radar device to obtain the scene depth. The other class extracts the scene depth from auxiliary information, from different angles such as a binary scattering model, the polarisation characteristics of different scattered lights, and interactive depth-of-field estimation. Polarised-light methods, however, only apply to thin mist with weak atmospheric scattering and are unsuitable for foggy weather. Some methods need images of the same scene under different weather conditions, or user interaction, and cannot meet the practical requirements of changing scenes.
Defogging algorithms based on depth information have since developed to the point of single-image defogging. Tan proposed a single-image defogging method that expands the local contrast of the hazy image, but it over-saturates colours and causes halo artifacts. Fattal proposed a method based on independent component analysis: by assuming that transmission and surface shading are uncorrelated in a local region, it estimates the reflectance of the scenery, infers the transmission of the scene light propagating through the air, and finally restores the scene; but the method is only suitable for thin mist. He proposed the classical single-image dehazing based on the dark channel prior, but that method is unsuitable for objects whose brightness is similar to the atmospheric light.
These existing methods each suit particular classes of images and have their own advantages. But some of them over-stretch the contrast, some cannot estimate the depth of field correctly and leave heavily hazed images, some only consider maximising the contrast and lose important information of the original image, and some distort the colour of the sky part of the image.
Summary of the invention
To solve the above technical problems, the present invention proposes an image defogging method with optimal contrast and minimal information loss.
To achieve the above object, the present invention proposes an image defogging method based on optimal contrast and minimal information loss, characterised in that the method realises defogging with a Gaussian mixture model, a quadtree, a maximum-contrast function and a minimum-information-loss function. The concrete steps of the method comprise:
Step 1, establish a haze image model based on the McCartney atmospheric scattering model;
Step 2, use a Gaussian mixture model and the expectation-maximisation algorithm to segment the image into two classes, a sky region and a non-sky region;
Step 3, estimate the atmospheric light intensity of the sky region of the image by quadtree iteration;
Step 4, partition the non-sky region of the image into grid cells, and estimate the transmission of each grid cell by the optimal-contrast and minimal-information-loss criterion;
Step 5, estimate the transmission of the sky region as a constant proportionality coefficient times the average transmission of the non-sky region of the image;
Step 6, according to the atmospheric scattering model and the estimated model parameters, restore and merge the sky and non-sky regions into the output image.
In said step 2, the concrete sub-steps of segmenting the image into a sky region and a non-sky region with the Gaussian mixture model and the expectation-maximisation algorithm are:
Step 21): use fuzzy c-means clustering (FCM) to partition the haze image I into 2 initial clusters, and initialise the weight, mean and variance parameters;
Step 22): compute the posterior probability that each pixel belongs to each of the 2 components, and update the weight, mean and covariance of each component from the posterior probabilities;
Step 23): update the posterior probabilities with the new weights, means and covariances, re-assign each pixel to the class with the largest posterior probability, and compute the log-likelihood of the image;
Step 24): if the log-likelihood has converged, stop iterating; otherwise return to step 22);
Step 25): use the Bayesian criterion to assign each pixel to the component with the largest posterior probability. Of the 2 classes after segmentation, the component with the larger mean is the sky image and the other part is the non-sky image.
The advantage of the invention is an image defogging method based on optimal contrast and minimal information loss. The image is segmented into a sky region and a non-sky region by a Gaussian mixture model and the expectation-maximisation algorithm, and different parameter-estimation methods for the atmospheric scattering illumination model are applied to the different regions, giving better accuracy and higher speed. Estimating the atmospheric light intensity of the sky region by quadtree iteration has advantages such as better locality and higher speed. For the transmission estimate of the non-sky region, partitioning by grid gives the model parameters better locality, and estimating the transmission of the non-sky region by the optimal-contrast and minimal-information-loss criterion maximises the contrast while keeping the information loss minimal, preserving the edges and texture details of the original image as far as possible. For the transmission estimate of the sky region, using a constant proportionality coefficient times the average transmission of the non-sky region avoids colour distortion or over-saturation of the sky part and also achieves a natural transition to the non-sky region.
Brief description of the drawings
The present invention is further illustrated below in conjunction with the drawings and specific embodiments; the above and/or other advantages of the present invention will become apparent.
Fig. 1 is the flow chart of the present invention.
Embodiment
The method comprises three parts: segmentation into sky and non-sky regions; defogging of the non-sky region based on contrast maximisation and information-loss minimisation; and defogging of the sky region estimated by the constant-coefficient method. The concrete workflow is shown in Fig. 1.
Step 1, establish the haze image model based on the McCartney atmospheric scattering model
Assume the image formation model under haze is: the haze image equals the original scene image attenuated by transmission through the haze, superposed with the atmospheric light from infinity scattered by the haze:
I_c(p) = t(p)·J_c(p) + (1 − t(p))·A_c    (1)
where J_c(p) and I_c(p) denote pixel p of the original and observed images respectively, and c ∈ {r, g, b} indexes the red, green and blue colour channels. A_c is the light intensity at infinity along the observer's line of sight, usually assumed to be a global constant independent of the location p. t(p) ∈ [0, 1] is the transmission along the ray, determined by the distance between the camera and the scene point; it reflects the ability of light to penetrate the fog. The larger its value, the more of the light reflected from the scene surface penetrates the fog and reaches the observer's field of view. t(p) decays exponentially with scene depth and is computed by:
t(p) = e^(−ρ·d(p))    (2)
where ρ is the total scattering coefficient, which characterises the scattering power of a unit volume of particles for the incident light; the larger its value, the more severely the incident light is scattered. It is usually assumed to be 1. d(p) is the depth of field from the camera to pixel p.
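As a concrete illustration, the model of formulas (1) and (2) can be sketched in a few lines of NumPy; the function and variable names are ours, not the patent's:

```python
import numpy as np

def transmission(depth, rho=1.0):
    """Transmission t(p) = exp(-rho * d(p)) from formula (2)."""
    return np.exp(-rho * np.asarray(depth, dtype=float))

def hazy_image(J, t, A):
    """Synthesize a hazy observation per formula (1): I = t*J + (1 - t)*A.
    J: clear image (H, W, 3) in [0, 255]; t: transmission map (H, W);
    A: atmospheric light per channel (3,)."""
    t = t[..., None]  # broadcast the transmission over the colour channels
    return t * J + (1.0 - t) * np.asarray(A, dtype=float)
```

Synthesising a hazy image this way is also a convenient means of testing the later recovery steps against a known ground truth.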
Step 2, use the Gaussian mixture model and the expectation-maximisation algorithm to segment the image into a sky region and a non-sky region
The atmospheric light intensity A is usually represented by the sky colour of the brightest spot in the image, because weather such as haze makes the sky part of the image brighter while the background is darker than the sky. We therefore fit the content of the haze image with a Gaussian mixture model whose two components are the brighter sky and the darker targets and background. The parameters of the mixture are then solved with the expectation-maximisation algorithm, and the sky image is segmented out with the Bayesian criterion.
Let the N pixels {p_1, p_2, …, p_N} of the haze image I be independent and identically distributed, where pixel p_i (1 ≤ i ≤ N) has red, green and blue values I_r(p_i), I_g(p_i) and I_b(p_i), i.e. p_i = (I_r(p_i), I_g(p_i), I_b(p_i))^T. The grey-level information of the haze image is modelled as a mixture of a darker and a brighter Gaussian density, so the mixture probability density P of pixel p_i is expressed as:
P(p_i) = Σ_{r=1}^{2} α_r · f_r(p_i, θ_r)    (3)
where α_r is the weight of the r-th density component in the mixture, satisfying α_r ≥ 0; θ_r = {μ_r, σ_r} are the parameters of the r-th component, with μ_r and σ_r its mean and variance; and f_r(p_i, θ_r) is the r-th Gaussian density component.
The steps for estimating the mixture-model parameters by expectation maximisation (EM) are as follows:
(2-1) Use fuzzy c-means clustering (FCM) to partition the haze image I into 2 initial clusters, and initialise the weight, mean and variance parameters.
(2-2) Compute the posterior probability that pixel p_i belongs to each component r (r = 1, 2), and update the weight, mean and covariance of each component from the posterior probabilities.
(2-3) Update the posterior probabilities with these new weights, means and covariances, re-assign each p_i to the class with the largest posterior probability, and compute the log-likelihood of the image.
(2-4) If the log-likelihood has converged, stop iterating; otherwise return to (2-2).
(2-5) Use the Bayesian criterion to assign each pixel p_i to the component with the largest posterior probability. Of the 2 classes after segmentation, the component with the larger mean μ_r is the sky image.
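A minimal sketch of steps (2-1) to (2-5), simplified to a one-dimensional two-component mixture over pixel intensities and initialised with percentiles instead of fuzzy c-means (the patent works on RGB vectors; all names here are illustrative):

```python
import numpy as np

def segment_sky(gray, iters=50, tol=1e-6):
    """Fit a 2-component 1-D Gaussian mixture to pixel intensities with EM
    and label the brighter component as sky. A simplified stand-in for the
    patent's RGB mixture with fuzzy-c-means initialisation."""
    x = gray.ravel().astype(float)
    # crude initialisation: two percentiles instead of FCM cluster centres
    mu = np.array([np.percentile(x, 25), np.percentile(x, 75)])
    var = np.array([x.var() + 1e-6] * 2)
    w = np.array([0.5, 0.5])
    ll_old = -np.inf
    for _ in range(iters):
        # E step: posterior responsibility of each component (Bayes' rule)
        pdf = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        total = pdf.sum(axis=1, keepdims=True) + 1e-300
        r = pdf / total
        # M step: re-estimate weights, means and variances
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n + 1e-6
        ll = np.log(total).sum()
        if ll - ll_old < tol:  # stop when the log-likelihood converges
            break
        ll_old = ll
    sky_comp = int(np.argmax(mu))  # the component with the larger mean is sky
    labels = r.argmax(axis=1).reshape(gray.shape)
    return labels == sky_comp
```

The final Bayesian assignment is the `argmax` over the responsibilities, exactly as in step (2-5).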
Step 3, estimate the atmospheric light intensity of the sky region by quadtree iteration:
Iterative quadtree localisation is applied to the segmented sky image to estimate the atmospheric light intensity. The sky image is evenly divided into four blocks at the midpoints of its height and width; the block with the largest mean brightness is taken as the block to divide next; and the iteration continues until the block is smaller than a specified threshold (typically 5 × 5 or 7 × 7). The mean colour of the final block is taken as the atmospheric light intensity.
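The quadtree search of step 3 can be sketched as follows; the block-size threshold and all names are assumptions in the spirit of the text:

```python
import numpy as np

def estimate_airlight(sky, min_size=7):
    """Quadtree search for the atmospheric light (step 3): repeatedly keep
    the quadrant with the highest mean brightness until the block is smaller
    than min_size, then return its mean colour. sky: (H, W, 3) array."""
    block = sky.astype(float)
    while min(block.shape[0], block.shape[1]) > min_size:
        h2, w2 = block.shape[0] // 2, block.shape[1] // 2
        quads = [block[:h2, :w2], block[:h2, w2:], block[h2:, :w2], block[h2:, w2:]]
        # descend into the quadrant whose mean brightness is largest
        block = max(quads, key=lambda q: q.mean())
    return block.reshape(-1, 3).mean(axis=0)
```

Because each step quarters the search area, the cost is logarithmic in the image size, which is the locality and speed advantage the text claims.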
Step 4, partition the non-sky region into grid cells and estimate the transmission of each grid cell by the optimal-contrast and minimal-information-loss criterion
Let the transmission of the sky region be t_1 and that of the non-sky region be t_2. Assuming the depth of field is locally similar, divide the image into grids of size 16 × 16 and assume the transmission within each grid is identical. By the haze image model of formula (1), when the transmission is fixed, the original image J_c(p) of the non-sky region is estimated by:
J_c(p) = (I_c(p) − A_c)/t_2 + A_c    (7)
Recovering the original image therefore depends only on the transmission t_2. Since the contrast of a haze image is low, we estimate the transmission by the criterion of maximising the contrast of each grid while minimising the loss of information.
Each colour channel J_c(p) of the original image must satisfy 0 ≤ J_c(p) ≤ 255; substituting into formula (7) gives
0 ≤ (I_c(p) − A_c)/t_2 + A_c ≤ 255    (8)
Solving formula (8), the transmission t_2 must satisfy:
t_2 ≥ max{ max_{c∈{r,g,b}} max_{p∈B} (A_c − I_c(p))/A_c ,  max_{c∈{r,g,b}} max_{p∈B} (I_c(p) − A_c)/(255 − A_c) }    (9)
(4-1) Non-sky transmission estimate based on contrast maximisation
The contrast of a grid region is the mean squared difference between each pixel and the mean of its grid:
C_MSE = Σ_{p∈B} (I_c(p) − Ī_c)² / (t_2² · N_B)    (10)
where Ī_c is the mean of I_c(p) over the grid B containing p, and N_B is the number of pixels in grid B. By formula (10), the contrast grows as the transmission t_2 shrinks. Therefore, under the constraint of formula (9), to maximise the contrast the transmission of the non-sky region takes its minimum feasible value:
t_2 = max{ max_{c∈{r,g,b}} max_{p∈B} (A_c − I_c(p))/A_c ,  max_{c∈{r,g,b}} max_{p∈B} (I_c(p) − A_c)/(255 − A_c) }    (11)
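The feasible lower bound of formulas (9)/(11) amounts to one vectorised expression per inequality. A sketch, assuming 0 < A_c < 255 for every channel (names ours):

```python
import numpy as np

def t2_lower_bound(block, A):
    """Smallest transmission keeping every restored pixel of a grid block
    inside [0, 255], per formulas (9)/(11). block: (N, 3) pixel values,
    A: atmospheric light (3,), assumed strictly between 0 and 255."""
    I = block.astype(float)
    A = np.asarray(A, dtype=float)
    low = ((A - I) / A).max()             # from (I - A)/t + A >= 0
    high = ((I - A) / (255.0 - A)).max()  # from (I - A)/t + A <= 255
    return max(low, high)
```

Restoring the block with exactly this t_2 drives its darkest or brightest pixel to the boundary of [0, 255], which is why maximising contrast alone risks the truncation losses addressed in (4-2).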
(4-2) Transmission optimisation based on minimal information loss
Maximising the contrast truncates the grey values of the darkest part of the image, (0, α_c), and of the brightest part, (β_c, 255), c ∈ {r, g, b}, and the truncated values are lost. α_c and β_c are the underflow and overflow parameters of the truncated regions of the observed image I_c: the underflow parameter α_c corresponds to grey value 0 of the original image J_c, and the overflow parameter β_c to grey value 255. Substituting each into formula (7) gives:
α_c = (1 − t_2)·A_c    (12)
β_c = 255·t_2 + (1 − t_2)·A_c    (13)
The information-loss function measures the underflow and overflow regions produced by the maximum-contrast transform of the original image J(p). Computed from the histogram, it is:
E_loss = Σ_{c∈{r,g,b}} { Σ_{i=0}^{α_c} ((i − A_c)/t_2 + A_c)²·h_c(i) + Σ_{i=β_c}^{255} ((i − A_c)/t_2 + A_c − 255)²·h_c(i) }    (14)
where h_c(i) is the histogram count of grey level i in colour channel c, and A_c is the atmospheric light intensity of channel c. To satisfy the maximum-contrast and minimum-loss requirements simultaneously, the problem is converted, by the method of Lagrange multipliers, into minimising the function:
E(t_2, λ) = −C_MSE + λ·E_loss    (15)
where λ is a weight controlling the relative importance of contrast and information loss. Taking the partial derivatives of formula (15) with respect to t_2 and λ and setting them to zero gives the system of equations:
∂E(t_2, λ)/∂t_2 = 0,  ∂E(t_2, λ)/∂λ = 0    (16)
The solution t_2 of formula (16) is the optimal transmission of the non-sky region.
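The patent solves formula (16) analytically via the Lagrangian; an equivalent and simpler sketch searches a 1-D grid of candidate t_2 for the minimiser of formula (15), with the information loss computed per pixel rather than from the histogram (the weight lam and all names are our assumptions):

```python
import numpy as np

def block_transmission(block, A, lam=5.0, steps=100):
    """Estimate one transmission value per grid block (step 4) by minimising
    E(t) = -C_MSE + lam * E_loss over a grid of candidate t in (0, 1].
    block: (N, 3) hazy pixel values, A: atmospheric light (3,)."""
    I = block.astype(float)
    best_t, best_e = 1.0, np.inf
    for t in np.linspace(0.05, 1.0, steps):
        J = (I - A) / t + A  # restored block, formula (7)
        # mean-squared contrast of the restored block (to be maximised)
        c_mse = ((J - J.mean(axis=0)) ** 2).mean()
        # squared truncation error outside [0, 255] (information loss)
        e_loss = (np.minimum(J, 0) ** 2 + np.maximum(J - 255, 0) ** 2).mean()
        e = -c_mse + lam * e_loss
        if e < best_e:
            best_t, best_e = t, e
    return best_t
```

Smaller t stretches the contrast harder but pushes more pixels out of [0, 255]; the search settles where the truncation penalty starts to outweigh the contrast gain, which is the trade-off formulas (15)/(16) formalise.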
Step 5, estimate the transmission of the sky region from a constant coefficient and the average transmission of the non-sky region of the image
Because the depth of field of the sky region is obviously larger than that of the non-sky region, its transmission should be smaller. Many defogging methods estimate the sky transmission with the same estimator used for the non-sky region, which causes noise and colour distortion in the sky. By formula (2), the ratio of the transmission t_1 at a point p_1 of the sky region to the transmission t_2 at a point p_2 of the non-sky region is:
t_1/t_2 = e^(−d(p_1)) / e^(−d(p_2)) = e^(d(p_2) − d(p_1)) = b    (17)
Point p_1 has depth of field d(p_1) and point p_2 has depth of field d(p_2); the difference of the two distances is d(p_2) − d(p_1). When the depth difference between sky and non-sky pixels is 1, 2, 5, 10, 15 or 20, the corresponding transmission ratio b is as shown in Table 1. Table 1 shows that the larger the distance difference, the smaller the coefficient. Under hazy weather conditions visibility is generally low, so the depth difference between sky and non-sky pixels is relatively small and the coefficient b relatively large.
Table 1. Distance difference between sky and non-sky pixels and the corresponding transmission ratio

d(p_2) − d(p_1)        b
-1                     0.36787944117144
-2                     0.13533528323661
-5                     0.00673794699909
-10                    0.00004539992976
-15                    0.00000030590232
-20                    0.00000000206115
Assuming the transmission is identical at every point of the sky region, the sky transmission is obtained by multiplying the mean transmission over all grids of the non-sky region by the small ratio coefficient:
t_1 = b · t̄_2    (18)
where t̄_2 is the mean transmission of the non-sky grids.
Step 6, according to the atmospheric scattering model and the estimated model parameters, restore and merge the sky and non-sky regions into the output image.
Substitute the optimal non-sky transmission t_2 solved from formula (16) and the atmospheric light A_c estimated in step 3 into formula (1) to restore the non-sky region of the image. Substitute the optimal sky transmission t_1 of formula (18) and the same atmospheric light A_c into formula (1) to restore the sky region of the image. Finally merge the two parts into one output image.
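Step 6 then reduces to inverting formula (1) with a per-region transmission map; a sketch under the stated assumptions (t_1 equal to b times the mean non-sky transmission, names ours):

```python
import numpy as np

def recover(I, sky_mask, A, t_nonsky, b=0.4):
    """Fuse the two restored regions (step 6). I: hazy image (H, W, 3);
    sky_mask: boolean sky segmentation; A: atmospheric light (3,);
    t_nonsky: per-pixel non-sky transmission (H, W); b: assumed sky/non-sky
    transmission ratio from formulas (17)/(18)."""
    t1 = b * t_nonsky[~sky_mask].mean()           # formula (18)
    t = np.where(sky_mask, t1, t_nonsky)          # one transmission map
    J = (I.astype(float) - A) / t[..., None] + A  # invert formula (1)
    return np.clip(J, 0, 255)
```

Because the sky shares one transmission value, no per-grid estimate is needed there, and the clip guards against the residual under/overflow discussed in (4-2).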
The innovative points of the present invention include:
(1) The sky region carries prior information such as the haze density and weather conditions at image-acquisition time, and the depths of field of the two regions differ greatly. The present invention divides the haze image into a sky region and a non-sky region by the Gaussian mixture model and expectation maximisation, and restores the regions after segmentation with different strategies, giving better specificity and adaptability.
(2) The biggest problem of a haze image is its low contrast, and changing the contrast causes information loss. The present invention therefore balances contrast against information loss, which both improves the overall quality of the image's contrast and preserves local details of the image such as texture and edges.
(3) The sky and non-sky regions both differ and are related: they differ in that the different grids of the sky region share a close depth of field, and they are related in that both are acquired under the same external conditions. The present invention multiplies the average transmission of the non-sky region by a haze-dependent scale parameter, so the different grids of the sky region need not be estimated separately, while adaptation to different images is still achieved through the average transmission of the non-sky region.
The invention provides an image defogging method with optimal contrast and minimal information loss. There are many methods and ways to implement this technical scheme; the above is only a preferred embodiment of the present invention. It should be pointed out that those skilled in the art can make several improvements and modifications without departing from the principle of the invention, and these improvements and modifications should also be regarded as within the protection scope of the present invention. Each component not specified in this embodiment can be realised with the prior art.

Claims (4)

1. An image defogging method with optimal contrast and minimal information loss, characterised in that it comprises the steps:
Step 1, establish a haze image model based on the McCartney atmospheric scattering model;
Step 2, use a Gaussian mixture model and the expectation-maximisation algorithm to segment the image into two classes, a sky region and a non-sky region;
Step 3, estimate the atmospheric light intensity of the sky region of the image by quadtree iteration;
Step 4, partition the non-sky region of the image into grid cells, and estimate the transmission of each grid cell by the optimal-contrast and minimal-information-loss criterion;
Step 5, estimate the transmission of the sky region as a constant proportionality coefficient times the average transmission of the non-sky region of the image;
Step 6, according to the atmospheric scattering model and the estimated model parameters, restore and merge the sky and non-sky regions into the output image.
2. The image defogging method with optimal contrast and minimal information loss according to claim 1, characterised in that the image formation model under haze is:
I_c(p) = t(p)·J_c(p) + (1 − t(p))·A_c
where J_c(p) and I_c(p) denote pixel p of the original and observed images respectively, and c ∈ {r, g, b} indexes the red, green and blue colour channels; A_c is the atmospheric light intensity at infinity along the observer's line of sight; t(p) ∈ [0, 1] is the transmission along the ray, which decays exponentially with scene depth and is computed by:
t(p) = e^(−ρ·d(p))
where ρ is the total scattering coefficient and d(p) is the depth of field from the camera to pixel p.
3. The image defogging method with optimal contrast and minimal information loss according to claim 1, characterised in that, in said step 2,
the N pixels {p_1, p_2, …, p_N} of the haze image I are assumed independent and identically distributed, where pixel p_i has red, green and blue values I_r(p_i), I_g(p_i) and I_b(p_i), 1 ≤ i ≤ N, i.e. p_i = (I_r(p_i), I_g(p_i), I_b(p_i))^T; the grey-level information of the haze image is modelled as a mixture of a darker and a brighter Gaussian density, and the mixture probability density P of pixel p_i is expressed as:
P(p_i) = Σ_{r=1}^{2} α_r · f_r(p_i, θ_r),
where α_r is the weight of the r-th density component in the mixture, satisfying α_r ≥ 0; θ_r = {μ_r, σ_r} are the parameters of the r-th component, with μ_r and σ_r its mean and variance; and f_r(p_i, θ_r) is the r-th Gaussian density component;
and in that it comprises the steps:
Step 21: use fuzzy c-means clustering to partition the haze image I into two initial clusters, and initialise the weights, means and variances;
Step 22: compute the posterior probability that each pixel belongs to each of the two components, and update the weight, mean and covariance of each component from the posterior probabilities;
Step 23: update the posterior probabilities with the updated weights, means and covariances, re-assign each pixel to the class with the largest posterior probability, and compute the log-likelihood of the image;
Step 24: if the log-likelihood has converged, stop iterating and go to step 25; otherwise return to step 22;
Step 25: use the Bayesian criterion to assign each pixel to the component with the largest posterior probability; of the two classes after segmentation, the component with the larger mean is the sky image and the other part is the non-sky image.
4. The image defogging method with optimal contrast and minimal information loss according to claim 1, characterised in that, in step 3, iterative quadtree localisation is applied to the segmented sky image to estimate the atmospheric light intensity; the sky image is evenly divided into four blocks at the midpoints of its height and width; the block with the largest mean brightness is taken as the block to divide next; the iteration continues until the block is smaller than a specified threshold; and the mean colour of the final block is taken as the atmospheric light intensity.
CN201410504518.9A 2014-09-26 2014-09-26 Image defogging method with optimal contrast ratio and minimal information loss Active CN104200445B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410504518.9A CN104200445B (en) 2014-09-26 2014-09-26 Image defogging method with optimal contrast ratio and minimal information loss

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410504518.9A CN104200445B (en) 2014-09-26 2014-09-26 Image defogging method with optimal contrast ratio and minimal information loss

Publications (2)

Publication Number Publication Date
CN104200445A true CN104200445A (en) 2014-12-10
CN104200445B CN104200445B (en) 2017-04-26

Family

ID=52085731

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410504518.9A Active CN104200445B (en) 2014-09-26 2014-09-26 Image defogging method with optimal contrast ratio and minimal information loss

Country Status (1)

Country Link
CN (1) CN104200445B (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104978719A (en) * 2015-06-16 2015-10-14 浙江工业大学 Self-adaptive traffic video real-time defogging method based on temporal-spatial coherence
CN105389784A (en) * 2015-12-07 2016-03-09 魅族科技(中国)有限公司 Image processing method and terminal
CN105469372A (en) * 2015-12-30 2016-04-06 广西师范大学 Mean filtering-based fog-degraded image sharp processing method
CN105513024A (en) * 2015-12-07 2016-04-20 魅族科技(中国)有限公司 Method and terminal for processing image
CN106204494A (en) * 2016-07-15 2016-12-07 潍坊学院 Image defogging method and system for images containing a large sky area
CN106954022A (en) * 2017-03-08 2017-07-14 广东欧珀移动通信有限公司 Image processing method, device and terminal
CN107845078A (en) * 2017-11-07 2018-03-27 北京航空航天大学 Unmanned aerial vehicle image multithreading sharpening method assisted by metadata
CN108369651A (en) * 2015-12-01 2018-08-03 天青公司 Information extraction using image data
CN109685735A (en) * 2018-12-21 2019-04-26 温州大学 Single image defogging method based on haze layer smoothing prior
CN109726686A (en) * 2018-12-29 2019-05-07 西安天和防务技术股份有限公司 Scene recognition method, device, computer equipment and storage medium
CN110717556A (en) * 2019-09-25 2020-01-21 南京旷云科技有限公司 Posterior probability adjusting method and device for target recognition
CN111047540A (en) * 2019-12-27 2020-04-21 嘉应学院 Image defogging method based on sky segmentation and application system thereof
CN111553405A (en) * 2020-04-24 2020-08-18 青岛杰瑞工控技术有限公司 Group fog recognition algorithm based on pixel density K-means clustering
CN112634171A (en) * 2020-12-31 2021-04-09 上海海事大学 Image defogging method based on Bayes convolutional neural network and storage medium
CN116337088A (en) * 2023-05-30 2023-06-27 中国人民解放军国防科技大学 Foggy scene relative motion estimation method and device based on bionic polarization vision

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108596857A (en) * 2018-05-09 2018-09-28 西安邮电大学 Single image defogging method for intelligent driving

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100067823A1 (en) * 2008-09-16 2010-03-18 Microsoft Corporation Dehazing an Image Using a Three-Dimensional Reference Model
US20110043603A1 (en) * 2006-01-18 2011-02-24 Technion Research & Development Foundation Ltd. System And Method For Dehazing
CN102768760A (en) * 2012-07-04 2012-11-07 电子科技大学 Quick image dehazing method on basis of image textures

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110043603A1 (en) * 2006-01-18 2011-02-24 Technion Research & Development Foundation Ltd. System And Method For Dehazing
US20100067823A1 (en) * 2008-09-16 2010-03-18 Microsoft Corporation Dehazing an Image Using a Three-Dimensional Reference Model
CN102768760A (en) * 2012-07-04 2012-11-07 电子科技大学 Quick image dehazing method on basis of image textures

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
FATTAL R: "Single image dehazing", 《ACM TRANSACTIONS ON GRAPHICS》 *
JIN-HWAN KIM et al.: "Optimized contrast enhancement for real-time image and video dehazing", 《JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION》 *
YANG JING-YU et al.: "Using dark channel prior to quickly remove haze from a single image", 《GEOMATICS AND INFORMATION SCIENCE OF WUHAN UNIVERSITY》 *
嵇晓强 et al.: "Research on dark channel prior image dehazing algorithms", 《Journal of Optoelectronics · Laser》 *
王学军 et al.: "Sea-area segmentation of optical images based on an adaptive EM algorithm", 《Signal and Information Processing》 *
甘佳佳 et al.: "Fast image dehazing combined with accurate atmospheric scattering map computation", 《Journal of Image and Graphics》 *

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104978719A (en) * 2015-06-16 2015-10-14 浙江工业大学 Self-adaptive traffic video real-time defogging method based on temporal-spatial coherence
CN108369651A (en) * 2015-12-01 2018-08-03 天青公司 Information extraction using image data
CN108369651B (en) * 2015-12-01 2022-08-09 天津瞰天科技有限责任公司 Method, system and non-transitory computer-readable storage medium for extracting sky area
CN105389784A (en) * 2015-12-07 2016-03-09 魅族科技(中国)有限公司 Image processing method and terminal
CN105513024A (en) * 2015-12-07 2016-04-20 魅族科技(中国)有限公司 Method and terminal for processing image
CN105469372A (en) * 2015-12-30 2016-04-06 广西师范大学 Mean filtering-based fog-degraded image sharp processing method
CN106204494A (en) * 2016-07-15 2016-12-07 潍坊学院 Image defogging method and system for images containing a large sky area
CN106204494B (en) * 2016-07-15 2019-11-22 潍坊学院 Image defogging method and system for images containing a large sky area
CN106954022B (en) * 2017-03-08 2019-10-25 Oppo广东移动通信有限公司 Image processing method, device and terminal
CN106954022A (en) * 2017-03-08 2017-07-14 广东欧珀移动通信有限公司 Image processing method, device and terminal
CN107845078A (en) * 2017-11-07 2018-03-27 北京航空航天大学 Unmanned aerial vehicle image multithreading sharpening method assisted by metadata
CN107845078B (en) * 2017-11-07 2020-04-14 北京航空航天大学 Unmanned aerial vehicle image multithreading sharpening method assisted by metadata
CN109685735A (en) * 2018-12-21 2019-04-26 温州大学 Single image defogging method based on haze layer smoothing prior
CN109726686B (en) * 2018-12-29 2021-03-30 西安天和防务技术股份有限公司 Scene recognition method and device, computer equipment and storage medium
CN109726686A (en) * 2018-12-29 2019-05-07 西安天和防务技术股份有限公司 Scene recognition method, device, computer equipment and storage medium
CN110717556A (en) * 2019-09-25 2020-01-21 南京旷云科技有限公司 Posterior probability adjusting method and device for target recognition
CN111047540B (en) * 2019-12-27 2023-07-28 嘉应学院 Image defogging method based on sky segmentation and application system thereof
CN111047540A (en) * 2019-12-27 2020-04-21 嘉应学院 Image defogging method based on sky segmentation and application system thereof
CN111553405A (en) * 2020-04-24 2020-08-18 青岛杰瑞工控技术有限公司 Clustering fog recognition algorithm based on pixel density K-means
CN111553405B (en) * 2020-04-24 2023-08-18 青岛杰瑞工控技术有限公司 Group fog recognition algorithm based on pixel density K-means clustering
CN112634171A (en) * 2020-12-31 2021-04-09 上海海事大学 Image defogging method based on Bayes convolutional neural network and storage medium
CN112634171B (en) * 2020-12-31 2023-09-29 上海海事大学 Image defogging method and storage medium based on Bayesian convolutional neural network
CN116337088A (en) * 2023-05-30 2023-06-27 中国人民解放军国防科技大学 Foggy scene relative motion estimation method and device based on bionic polarization vision
CN116337088B (en) * 2023-05-30 2023-08-11 中国人民解放军国防科技大学 Foggy scene relative motion estimation method and device based on bionic polarization vision

Also Published As

Publication number Publication date
CN104200445B (en) 2017-04-26

Similar Documents

Publication Publication Date Title
CN104200445A (en) Image defogging method with optimal contrast ratio and minimal information loss
US10607089B2 (en) Re-identifying an object in a test image
Huang et al. An efficient visibility enhancement algorithm for road scenes captured by intelligent transportation systems
CN102252623B (en) Measurement method for lead/ground wire icing thickness of transmission line based on video variation analysis
CN107103591B (en) Single image defogging method based on image haze concentration estimation
CN110378849B (en) Image defogging and rain removing method based on depth residual error network
CN104881879B (en) A kind of remote sensing images haze emulation mode based on dark channel prior
CN102930514A (en) Rapid image defogging method based on atmospheric physical scattering model
CN103985091A (en) Single image defogging method based on luminance dark priori method and bilateral filtering
CN103578083A (en) Single image defogging method based on joint mean shift
CN109389569B (en) Monitoring video real-time defogging method based on improved DehazeNet
CN105096272A (en) De-hazing method based on dual-tree complex wavelet
CN102646267B (en) Degraded image restoration method and system
CN105447825A (en) Image defogging method and system
CN104346783A (en) Processing method and processing device for defogging image
CN103793885A (en) Regionalization image restoration method under uneven lighting in strong scattering optical imaging environment
CN106530240A (en) Image defogging method based on multi-scale fusion and total variational optimization
Raikwar et al. An improved linear depth model for single image fog removal
CN104318528A (en) Foggy weather image restoration method based on multi-scale WLS filtering
CN106780390B (en) Single image to the fog method based on marginal classification Weighted Fusion
Raikwar et al. Tight lower bound on transmission for single image dehazing
CN105913391B (en) A kind of defogging method can be changed Morphological Reconstruction based on shape
Fu et al. An anisotropic Gaussian filtering model for image de-hazing
CN104966273A (en) DCM-HTM haze-removing method suitably used for optical remote sensing images
CN104168402A (en) Method and device for video frame image defogging

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220317

Address after: 215500 5th floor, building 4, 68 Lianfeng Road, Changfu street, Changshu City, Suzhou City, Jiangsu Province

Patentee after: Changshu intellectual property operation center Co.,Ltd.

Address before: 215500 Changshu City South Three Ring Road No. 99, Suzhou, Jiangsu

Patentee before: CHANGSHU INSTITUTE OF TECHNOLOGY