CN106600560A - Image defogging method for automobile data recorder - Google Patents

Image defogging method for automobile data recorder

Info

Publication number
CN106600560A
CN106600560A
Authority
CN
China
Prior art keywords
image
input
layer
output
transmittance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201611196660.7A
Other languages
Chinese (zh)
Other versions
CN106600560B (en)
Inventor
王秀
余春艳
林晖翔
徐小丹
叶鑫焱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fuzhou University
Original Assignee
Fuzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuzhou University filed Critical Fuzhou University
Priority to CN201611196660.7A priority Critical patent/CN106600560B/en
Publication of CN106600560A publication Critical patent/CN106600560A/en
Application granted granted Critical
Publication of CN106600560B publication Critical patent/CN106600560B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/73 Deblurring; Sharpening
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C5/00 Registering or indicating the working of vehicles
    • G07C5/08 Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to an image defogging method for an automobile data recorder. An atmospheric light value is computed with a quadtree method; a coarse transmittance map is obtained from a trained convolutional neural network and refined with a guided filtering method; the restored image is then obtained by inverting an atmospheric scattering model. The method effectively processes gray, hazy images and improves the brightness of the processed image, preserving edge details and restoring image color. It can be applied to defogging in an automobile data recorder, meets video-processing requirements, and is highly practical.

Description

An image defogging method suitable for a driving recorder
Technical field
The present invention relates to the field of image processing, and in particular to an image defogging method suitable for a driving recorder.
Background technology
A driving recorder is essentially a monitoring instrument and a key supplier of accident evidence. Accidents are more likely at night and in rainy or foggy weather, and drivers require the evidence to be as clear and usable as possible in any extreme environment. Whether image details can be clearly restored under low illumination, at night, and in rain or fog, so that high-quality monitoring output is guaranteed in all weather, is therefore a research focus for driving recorders. Embedding intelligent analysis, defogging, low-illumination enhancement, and similar algorithms in driving recorders will become a trend with strong user value.
In the real world, fog is a common natural phenomenon. Under its influence, images captured by outdoor vision systems suffer reduced contrast, color distortion, and loss of detail, directly hindering the acquisition of useful information. Optimizing low-illumination images such as foggy-weather images to obtain clear images therefore has high research value for driving-recorder applications.
At present, defogging methods mainly comprise image restoration methods based on physical models and image enhancement methods based on non-physical models. Enhancement methods based on non-physical models address characteristics of hazy images such as color distortion and reduced contrast: from the viewpoint of improving contrast and correcting color, they apply image-processing algorithms such as histogram equalization, homomorphic filtering, contrast stretching, and Retinex to raise contrast, emphasize image details, and improve image quality. Their drawback is that they do not consider the physical cause of image degradation by fog, so information is easily lost while details are emphasized. Restoration methods based on physical models analyze the physical cause of the degraded image acquired in foggy weather: light from the target interacts with aerosol particles of relatively large radius suspended in the air on its way to the capture device, so that the light is redistributed according to certain rules; a hazy-image degradation model is built on this basis, and a clear image is recovered by inverting the degradation process. These methods have achieved significant defogging results, but most rely on statistics or prior assumptions that lack generality, so they perform poorly on certain special hazy images.
In recent years, convolutional neural networks have achieved breakthrough progress in classification and recognition. Their advantage is that they adaptively extract features for a specific problem, and the extracted features are more discriminative. Using a convolutional neural network to extract features related to the hazy image and then predict the transmittance is therefore a new and feasible approach to defogging.
Therefore, based on the above, this application selects the atmospheric light value with a quadtree method, extracts transmittance features with a convolutional neural network, and optimizes the transmittance map with a guided filtering method to remove blocking artifacts between image patches. The method effectively processes gray, hazy weather images, improves the brightness of the original image, and restores its colors while defogging. It is highly practical on a driving recorder.
Summary of the invention
The object of the invention is to provide an image defogging method suitable for a driving recorder that overcomes the defects of the prior art.
To achieve the above object, the technical scheme of the invention is an image defogging method suitable for a driving recorder, comprising the following steps:
Step S1: Calculate the atmospheric light value of the input hazy image using a quadtree method;
Step S2: Build and train a transmittance adaptive-estimation model, oriented to transmittance prediction and based on a convolutional neural network; estimate the transmittance of the input hazy image by calling the transmittance adaptive-estimation model and obtain the corresponding transmittance map;
Step S3: Optimize the transmittance map produced in step S2 using a guided filtering method, removing the blocking artifacts between image blocks and obtaining an optimized transmittance map;
Step S4: Substitute the atmospheric light value obtained in step S1 and the optimized transmittance map obtained in step S3 into an atmospheric scattering model and solve it inversely to obtain the defogged image.
Further, step S1 comprises the following steps:
Step S11: Divide the input hazy image into four regions of equal size;
Step S12: Compute the mean, the variance, and the mean minus the variance for each of the four regions;
Step S13: Select the region with the largest mean-minus-variance score and judge whether the region size exceeds a predetermined threshold; if so, return to step S11; otherwise, go to step S14;
Step S14: Within the selected region, choose the pixel whose value is closest to (255, 255, 255) and take its three RGB channel values as the atmospheric light value A.
Further, in step S2, the transmittance adaptive-estimation model comprises input and output layers, convolutional layers, a MAXOUT layer, a MaxPool layer, and a BReLU layer, and is trained with the Caffe deep-learning framework in the following steps:
Step S21: Forward propagation: input a group of 16 × 16 images with three RGB color channels, compute layer by layer, learning haze-relevant features, multi-scale features, and local-extremum features respectively, and obtain the transmittance values of the input images under the current network parameters;
Step S22: Back-propagation: compute the error between the actually output transmittance values and the true values using the Euclidean distance, and propagate the error information backwards to update the weights and biases of every layer.
Further, step S21 comprises the following steps:
Step S211: Initialize the network parameters: set the initial momentum, the learning rate, the factor by which the learning rate decays during training, the maximum number of training iterations, and the final training and test errors;
Step S212: Input a group of 16 × 16 images with three RGB color channels and learn haze-relevant features through convolutional layer C1, computed as:
F1^t = W1^t * I + B1^t,
where W1^t and B1^t denote the filters and biases of the convolutional layer, the biases being initialized randomly; * denotes convolution; I is the input image block; t is the index of the output feature map, t = 1, 2, … n1; and n1 is the number of output feature maps, with value 16;
Step S213: Take the output of convolutional layer C1 as the input of MAXOUT layer MO2 and form feature maps as:
F2^t(x) = max{ F1^((t-1)k+j)(x) : j = 1, …, k },
where k is the group size, with value 4; t is the index of the output feature map, t = 1, 2, … n2; and n2 is the number of feature maps, with value 4;
Step S214: Take the output of MAXOUT layer MO2 as the input of convolutional layer C3 and learn multi-scale features as:
F3^t = W3^(i,j) * F2 + B3^(i,j),
where W3^(i,j) and B3^(i,j) denote the filters and biases of convolutional layer C3; n3 is the number of feature maps output by C3, with value 48; t is the index of the output feature map, t = 1, 2, … n3, mapped to i and j by a remainder operation; and i, j denote the row and column of the filter bank;
Step S215: Take the output of convolutional layer C3 as the input of MaxPool layer MP4 and learn local-extremum features as:
F4^t(x) = max{ F3^t(y) : y ∈ Ω(x) },
where Ω(x) is the window centered at x, of the same size as the filter; n4 is the output dimensionality of MaxPool layer MP4, equal to n3; and t is the index of the output feature map, t = 1, 2, … n4;
Step S216: Convert the feature maps obtained in MaxPool layer MP4 into a 1 × 1 feature value through convolutional layer C5:
F5 = W5 * F4 + B5,
where W5 and B5 denote the filter and bias of convolutional layer C5, and t = 1 is the index of the single output map; the feature value is then constrained to the range [0, 1] by a BReLU activation function.
Further, in step S216, the BReLU activation function is: F6 = min(1, max(0, F5)).
Further, step S22 comprises the following steps:
The difference between the output of convolutional layer C5 and the true value is taken as the Euclidean loss function; the gradient of the loss with respect to the parameters of the previous layer is then computed and the gradient information propagated backwards, layer by layer, until the input layer is reached. The loss is computed as:
L = (1/N) Σ_{n=1..N} ||ŷ_n − y_n||²,
where ŷ_n is the output for the n-th training sample, y_n is the true value of the n-th training sample, and N is the number of samples.
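The Euclidean loss and the gradient that seeds back-propagation can be sketched as follows. This is a minimal numpy illustration, not the Caffe layer; the 1/N scaling follows the formula above, and the gradient shown is the one passed back to the previous layer.

```python
import numpy as np

def euclidean_loss(pred, truth):
    """Euclidean (L2) loss of step S22: L = (1/N) * sum_n ||pred_n - truth_n||^2,
    together with its gradient with respect to the network output."""
    n = pred.shape[0]
    diff = pred - truth
    loss = np.sum(diff ** 2) / n
    grad = 2.0 * diff / n   # dL/dpred, propagated back through the layers
    return loss, grad
```

In a full training loop this gradient would be handed to convolutional layer C5 and chained backwards to the input layer.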
Further, in step S2, the input hazy image is divided evenly into 16 × 16 image blocks, each block is input to the transmittance adaptive-estimation model, the transmittance value of each block is obtained by a forward pass of the network, and the values are assembled into a coarse transmittance map of the same size as the input image, giving the transmittance t(x) of the input hazy image.
Further, in step S3, the grayscale image g of the input hazy image I is chosen as the guidance image, and the transmittance map produced from I in step S2 is taken as the input image, denoted p. Using the mean and variance of the guidance image itself and the mean and covariance between the guidance image and the input image, the content of the restored map is retained while the smoothed edge details of the input hazy image are transferred to the output image, eliminating blocking artifacts and yielding the optimized transmittance map q.
Further, step S3 comprises the following steps:
Step S31: Pass the edge information smoothed in the grayscale image g to the image q; within a window ω_k of radius r, a local linear relation is assumed between g and q:
q_i = a_k·g_i + b_k, for all i ∈ ω_k,
where i and k are pixel indices, and a_k and b_k are the coefficients of the linear function when the window is centered at k;
Step S32: Maximize the similarity between the transmittance maps p and q before and after guided filtering, i.e. minimize the cost
E(a_k, b_k) = Σ_{i∈ω_k} ((a_k·g_i + b_k − p_i)² + ε·a_k²).
Minimizing this cost gives the optimal a_k and b_k:
a_k = ((1/|ω_k|) Σ_{i∈ω_k} g_i·p_i − μ_k·p̄_k) / (σ_k² + ε),
b_k = p̄_k − a_k·μ_k,
where μ_k and σ_k² are the mean and variance of the guidance image g within window ω_k; |ω_k| is the number of pixels in ω_k; and p̄_k is the mean of the input image p within window ω_k;
Step S33: Obtain the value of the whole output image q by averaging:
q_i = ā_i·g_i + b̄_i, with ā_i = (1/|ω_k|) Σ_{k∈ω_i} a_k and b̄_i = (1/|ω_k|) Σ_{k∈ω_i} b_k,
where ā_i and b̄_i are the mean coefficients of all windows covering pixel i, the windows ω_k being indexed by their centers k.
Further, in step S4, the defogged image is obtained from:
I(x) = J(x)·t(x) + A·(1 − t(x)),
where I(x) and J(x) are the values of pixel x in the hazy image and in the clear image respectively; A is the atmospheric light value; and t(x) is the transmittance at pixel x.
Compared with the prior art, the invention has the following beneficial effects. The invention provides an image defogging method suitable for a driving recorder that selects the atmospheric light value with a quadtree method, extracts transmittance features with a convolutional neural network, and optimizes the transmittance map with a guided filtering method, removing blocking artifacts between image patches. It effectively processes gray, hazy weather images, improves the brightness of the original image, and restores its colors while defogging. Image details under low illumination, at night, and in rain or fog can be clearly restored, guaranteeing that the driving recorder can output high-quality monitoring footage in all weather.
Description of the drawings
Fig. 1 is the flowchart of the image defogging method for a driving recorder of the present invention.
Fig. 2(a) is the input hazy image used when the quadtree method selects the atmospheric light value in an embodiment of the invention.
Fig. 2(b) is a schematic diagram of the selected region when the quadtree method selects the atmospheric light value in an embodiment of the invention.
Fig. 2(c) is a schematic diagram of the selected position when the quadtree method selects the atmospheric light value in an embodiment of the invention.
Fig. 3 is the architecture of the CNN-based transmittance estimation model in an embodiment of the invention.
Fig. 4 shows the BReLU activation function in an embodiment of the invention.
Fig. 5(a) is a schematic diagram of the input hazy image when smoothing the blocking artifacts of the restored image in an embodiment of the invention.
Fig. 5(b) is a schematic diagram of the guidance image when smoothing the blocking artifacts of the restored image in an embodiment of the invention.
Fig. 5(c) is a schematic diagram of the optimized transmittance map when smoothing the blocking artifacts of the restored image in an embodiment of the invention.
Specific embodiments
The technical scheme of the invention is described in detail below with reference to the accompanying drawings.
The present invention provides an image defogging method suitable for a driving recorder, as shown in Fig. 1, specifically comprising the following steps:
Step S1: Calculate the atmospheric light value of the input hazy image using a quadtree method;
Step S2: Build and train a convolutional neural network (CNN) model for transmittance prediction; calling this model yields an estimate of the transmittance of the hazy image;
Step S3: Optimize the transmittance map produced in step S2 using a guided filtering method, removing the blocking artifacts between image blocks and obtaining an optimized transmittance map;
Step S4: Substitute the atmospheric light value obtained in step S1 and the transmittance map obtained in step S3 into an atmospheric scattering model and solve it inversely to obtain the defogged image.
In this embodiment, in step S1, the atmospheric light value A is estimated with a quadtree algorithm: starting from a global search, the region standing out most is selected using mean and variance statistics, and a suitable atmospheric light value A is then chosen within it. As shown in Fig. 2, the procedure comprises the following steps:
Step S11: Divide the input hazy image of Fig. 2(a) into four regions of equal size, as shown in Fig. 2(b);
Step S12: Compute the mean, the variance, and the mean minus the variance for each of the four regions;
Step S13: Select the region with the largest mean-minus-variance score and judge whether the region size exceeds the predetermined threshold; if so, return to step S11; otherwise, go to step S14. In this embodiment, the predetermined threshold is set to 49.
Step S14: Within the selected region, choose the pixel whose value is closest to (255, 255, 255) and take its three channel values as the atmospheric light value A, as shown in Fig. 2(c).
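Steps S11–S14 can be sketched in numpy as follows. The mean-minus-variance score and the white-distance pixel selection follow the description; treating the threshold of 49 as a minimum side length in pixels is an assumption, since the embodiment does not state the unit of the region size.

```python
import numpy as np

def estimate_airlight(img, min_size=49):
    """Quadtree estimate of the atmospheric light A (steps S11-S14).

    img: H x W x 3 float array with values in [0, 255].
    """
    region = img
    # Recursively keep the quadrant with the largest (mean - std) score
    # until the region is no larger than the threshold.
    while min(region.shape[0], region.shape[1]) > min_size:
        h2, w2 = region.shape[0] // 2, region.shape[1] // 2
        quads = [region[:h2, :w2], region[:h2, w2:],
                 region[h2:, :w2], region[h2:, w2:]]
        scores = [q.mean() - q.std() for q in quads]
        region = quads[int(np.argmax(scores))]
    # Inside the winning region, pick the pixel closest to pure white.
    flat = region.reshape(-1, 3)
    dist = np.linalg.norm(flat - np.array([255.0, 255.0, 255.0]), axis=1)
    return flat[int(np.argmin(dist))]  # RGB triple used as A
```

Scoring by mean minus standard deviation favors regions that are bright yet uniform, such as sky, which is why the pixel nearest white within the winning region is a plausible airlight sample.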
In this embodiment, in step S2, the transmittance adaptive-estimation model based on a convolutional neural network (CNN), shown in Fig. 3, comprises input and output layers, convolutional layers, a MAXOUT layer, a MaxPool layer, and a BReLU layer.
In this embodiment, to train the CNN-based transmittance adaptive-estimation model, 100 fog-free images of resolution 640 × 480 are divided into 16 × 16 image blocks. For each block, ten transmittance values between 0 and 1 are generated with a random-number function, finally forming 250,000 synthetic hazy image blocks with known transmittance values as the training set. With this training set, a CNN-based transmittance adaptive-estimation model is built and trained: taking a hazy image block as input, it extracts transmittance-related features layer by layer through the convolutional layers and finally predicts the transmittance t(x) of the block through the BReLU layer. Using the Caffe deep-learning framework, training is divided into two steps:
Step S21: Forward propagation. Input a group of 16 × 16 images with three RGB color channels, compute layer by layer, learning haze-relevant features, multi-scale features, local-extremum features, and so on, and obtain the transmittance values of the input images under the current network parameters.
Step S22: Back-propagation. Compute the error between the actually output transmittance values and the true values using the Euclidean distance, and propagate the error information backwards to update the weights and biases of every layer.
In this embodiment, step S21 specifically comprises the following steps:
Step S211: Initialize the network parameters: the initial momentum is set to 0.9 and the learning rate to 0.005; the learning rate is multiplied by 0.5 every 100,000 iterations; the maximum number of iterations is 500,000; and the final training and test errors are 0.0088 and 0.0086.
Step S212: Input a group of 16 × 16 images with three RGB color channels. The first layer (convolutional layer C1) learns haze-relevant features and consists of 16 feature maps; the convolution kernels are of size 5 × 5, 16 in number, initialized from a Gaussian distribution with mean 0 and standard deviation 0.001. During convolution the stride is 1 and the input is not padded, so the resulting feature maps are of size 12 × 12. The computation is:
F1^t = W1^t * I + B1^t,
where W1^t and B1^t denote the filters and biases of the convolutional layer, the biases being initialized randomly; * denotes convolution; I is the input image block; t is the index of the output feature map, t = 1, 2, … n1; and n1, the number of output feature maps, is preferably 16;
Step S213: The output of the first layer is the input of the second layer (MAXOUT layer MO2). MO2 consists of 4 feature maps of size 12 × 12: the 16 input maps are first divided into 4 groups, each group of 4 maps undergoes a pixel-wise maximum to form one feature map of maxima, and in this way 4 feature maps are produced:
F2^t(x) = max{ F1^((t-1)k+j)(x) : j = 1, …, k },
where k, the group size, is preferably 4; t is the index of the output feature map, t = 1, 2, … n2; and n2, the number of feature maps, is preferably 4;
Step S214: The output of the second layer (MO2) is the input of the third layer (convolutional layer C3). C3 learns multi-scale features and consists of 48 feature maps of size 12 × 12; each neuron is convolved with kernels of size 3 × 3, 5 × 5, and 7 × 7 respectively. There are 16 kernels of each size, initialized from a Gaussian distribution with mean 0 and standard deviation 0.001. During convolution the stride is 1, and for kernel sizes 3 × 3, 5 × 5, and 7 × 7 the input borders are padded by 1, 2, and 3 pixels respectively, so the feature maps produced by all three kernel sizes have the same size, 12 × 12:
F3^t = W3^(i,j) * F2 + B3^(i,j),
where W3^(i,j) and B3^(i,j) denote the filters and biases of convolutional layer C3; n3, the number of feature maps output by C3, is preferably 48; t is the index of the output feature map, t = 1, 2, … n3, mapped to i and j by a remainder operation; and i, j denote the row and column of the filter bank;
Step S215: The output of the third layer (C3) is the input of the fourth layer (MaxPool layer MP4). The MaxPool layer learns local-extremum features and consists of 48 feature maps of size 6 × 6. Each neuron performs a pooling operation with a 7 × 7 filter, choosing the maximum of the 7 × 7 region as the value of the whole region. The pooling operation is analogous to convolution and finally yields feature maps of size 6 × 6:
F4^t(x) = max{ F3^t(y) : y ∈ Ω(x) },
where Ω(x) is the window centered at x, of the same size as the filter; n4 is the output dimensionality of MaxPool layer MP4, equal to n3; and t is the index of the output feature map, t = 1, 2, … n4;
Step S216: The feature maps of the fourth layer (MaxPool) are converted by the fifth layer (convolutional layer C5) into a 1 × 1 feature value; there is one convolution kernel, initialized from a Gaussian distribution with mean 0 and standard deviation 0.001. During convolution the stride is 1 and the input is not padded, producing a single feature value, which is constrained to the range [0, 1] by a BReLU activation function:
F5 = W5 * F4 + B5,
where W5 and B5 denote the filter and bias of convolutional layer C5, and t = 1 is the index of the output map. The BReLU activation function, shown in Fig. 4, is:
F6 = min(1, max(0, F5))
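Two of the layer operations above, the BReLU activation of step S216 and the grouped pixel-wise maximum of the MAXOUT layer in step S213, can be sketched in numpy as follows; these are illustrative stand-ins for the corresponding Caffe layers, not the trained network itself.

```python
import numpy as np

def brelu(x):
    # Bilateral ReLU: clamps activations to [0, 1], i.e. F6 = min(1, max(0, F5)).
    return np.minimum(1.0, np.maximum(0.0, x))

def maxout(feats, k=4):
    # MAXOUT over groups of k feature maps (step S213): with 16 input maps
    # and k = 4, each group of 4 maps is reduced to one map of pixel maxima.
    n, h, w = feats.shape
    return feats.reshape(n // k, k, h, w).max(axis=1)
```

The clamp to [0, 1] is what makes the final scalar output directly interpretable as a transmittance value.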
In this embodiment, in step S22, the difference between the output of the fifth layer and the true value is taken as the Euclidean loss function; the gradient of the loss with respect to the parameters of the previous layer, including weights and biases, is computed, and the gradient information is propagated back layer by layer until the input layer is reached. The loss is computed as:
L = (1/N) Σ_{n=1..N} ||ŷ_n − y_n||²,
where ŷ_n is the output for the n-th training sample, y_n is the true value of the n-th training sample, and N is the number of samples.
In this embodiment, in step S2, the transmittance t(x) of the hazy image is estimated by calling the model. The concrete steps are: divide the input image evenly into 16 × 16 image blocks, input each block to the convolutional neural network, obtain the transmittance value of each block by a forward pass of the network, and assemble the values into a coarse transmittance map of the same size as the input image.
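The block-wise assembly of the coarse transmittance map can be sketched as follows. The per-block predictor is injected as a callable standing in for the trained Caffe model; the image dimensions are assumed to be multiples of the 16-pixel block size, as in the 640 × 480 training images.

```python
import numpy as np

def transmission_map(img, predict_patch, patch=16):
    """Split the input into 16 x 16 blocks, predict one transmittance value
    per block, and tile the values into a coarse map the size of the input."""
    h, w = img.shape[:2]
    t = np.empty((h, w), dtype=np.float64)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            block = img[y:y + patch, x:x + patch]
            # One scalar transmittance per block, replicated over the block.
            t[y:y + patch, x:x + patch] = predict_patch(block)
    return t
```

The constant value per block is exactly what produces the blocking artifacts that the guided filtering of step S3 is designed to remove.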
In this embodiment, in step S3, the grayscale image g of the input hazy image I is chosen as the guidance image, and the transmittance map produced from I in step S2 is taken as the input image, denoted p. Using the mean and variance of the guidance image itself and the mean and covariance between the guidance image and the input image, the content of the restored map is retained while the smoothed edge details of the hazy image are transferred to the output image, eliminating blocking artifacts and yielding the optimized transmittance map q. As shown in Figs. 5(a)–5(c), the procedure comprises the following steps:
Step S31: Pass the edge information smoothed in the grayscale image g to the image q. Within a window ω_k of radius r, a local linear relation is assumed between g and q. The radius r is chosen as 0.04 times the larger of the width and height of the transmittance map, rounded to an integer; the regularization parameter eps is 10^-6. The relation is:
q_i = a_k·g_i + b_k, for all i ∈ ω_k,
where i and k are pixel indices, and a_k and b_k are the coefficients of the linear function when the window is centered at k.
Step S32: Maximize the similarity between the transmittance maps p and q before and after guided filtering, i.e. minimize the cost
E(a_k, b_k) = Σ_{i∈ω_k} ((a_k·g_i + b_k − p_i)² + ε·a_k²).
Minimizing this cost gives the optimal a_k and b_k:
a_k = ((1/|ω_k|) Σ_{i∈ω_k} g_i·p_i − μ_k·p̄_k) / (σ_k² + ε),
b_k = p̄_k − a_k·μ_k,
where μ_k and σ_k² are the mean and variance of the guidance image g within window ω_k; |ω_k| is the number of pixels in ω_k; and p̄_k is the mean of the input image p within window ω_k.
Step S33: Obtain the value of the whole output image q by averaging:
q_i = ā_i·g_i + b̄_i, with ā_i = (1/|ω_k|) Σ_{k∈ω_i} a_k and b̄_i = (1/|ω_k|) Σ_{k∈ω_i} b_k,
where ā_i and b̄_i are the mean coefficients of all windows covering pixel i, the windows ω_k being indexed by their centers k.
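Steps S31–S33 amount to the standard guided filter, which can be sketched in numpy as follows. Box means are computed with a sliding-window view with replicated edges; the edge handling is an assumption, since the embodiment does not specify the border treatment.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def box_mean(x, r):
    # Mean over a (2r+1) x (2r+1) window at every pixel, edges replicated.
    s = 2 * r + 1
    xp = np.pad(x, r, mode='edge')
    return sliding_window_view(xp, (s, s)).mean(axis=(-1, -2))

def guided_filter(g, p, r, eps=1e-6):
    """Guided-filter refinement of the coarse transmittance map (steps S31-S33).
    g: guidance (grayscale of the hazy input), p: coarse map, both floats in [0, 1]."""
    mu_g, mu_p = box_mean(g, r), box_mean(p, r)
    var_g = box_mean(g * g, r) - mu_g * mu_g        # sigma_k^2
    cov_gp = box_mean(g * p, r) - mu_g * mu_p       # (1/|w|) sum g*p - mu*pbar
    a = box_mean(np.zeros_like(g), 0) + cov_gp / (var_g + eps)  # a_k
    b = mu_p - a * mu_g                             # b_k = pbar_k - a_k * mu_k
    # Step S33: average the coefficients of all windows covering each pixel.
    return box_mean(a, r) * g + box_mean(b, r)
```

Because the output is locally linear in the guidance image, the refined map inherits g's edges while keeping p's values, which is what suppresses the block boundaries between 16 × 16 patches.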
In this embodiment, in step S4, the atmospheric light value A obtained in step S1 and the transmittance map obtained in step S3 are substituted into the atmospheric scattering model, which is solved inversely for the defogged image. The model is:
I(x) = J(x)·t(x) + A·(1 − t(x)),
where I(x) and J(x) are the values of pixel x in the hazy image and in the clear image respectively; A is the atmospheric light value, a global quantity; and t(x) is the transmittance at pixel x. Solving for the clear image gives J(x) = (I(x) − A)/t(x) + A.
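The inversion of step S4 can be sketched as follows. The lower bound t0 on the transmittance is a common safeguard against division blow-up in near-opaque regions; it is an assumption of this sketch, not part of the described method.

```python
import numpy as np

def dehaze(I, t, A, t0=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t) for the
    scene radiance J. I: H x W x 3 hazy image, t: H x W transmittance map,
    A: scalar (or RGB) atmospheric light."""
    t = np.maximum(t, t0)[..., None]   # clamp, then broadcast over channels
    return (I - A) / t + A             # J(x) = (I(x) - A) / t(x) + A
```

For example, a scene value of 0.4 hazed with t = 0.8 and A = 1.0 produces I = 0.52, and the function recovers 0.4 exactly.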
The above are preferred embodiments of the present invention. All changes made according to the technical scheme of the present invention, so long as the functions and effects produced do not depart from the scope of the technical scheme, belong to the protection scope of the present invention.

Claims (10)

1. An image defogging method suitable for a driving recorder, characterized by comprising the following steps:
Step S1: calculating the atmospheric light value of an input hazy image using a quadtree method;
Step S2: building and training a transmittance adaptive-estimation model, oriented to transmittance prediction and based on a convolutional neural network; estimating the transmittance of the input hazy image by calling the transmittance adaptive-estimation model and obtaining the corresponding transmittance map;
Step S3: optimizing the transmittance map produced in step S2 using a guided filtering method, removing the blocking artifacts between image blocks and obtaining an optimized transmittance map;
Step S4: substituting the atmospheric light value obtained in step S1 and the optimized transmittance map obtained in step S3 into an atmospheric scattering model and solving it inversely to obtain the defogged image.
2. The image defogging method suitable for a driving recorder according to claim 1, characterized in that step S1 further comprises the following steps:
Step S11: dividing the input hazy image into four regions of equal size;
Step S12: computing the mean, the variance, and the mean minus the variance for each of the four regions;
Step S13: selecting the region with the largest mean-minus-variance score and judging whether the region size exceeds a predetermined threshold; if so, returning to step S11; otherwise, going to step S14;
Step S14: within the selected region, choosing the pixel whose value is closest to (255, 255, 255) and taking its three RGB channel values as the atmospheric light value A.
3. The image defogging method suitable for a driving recorder according to claim 1, characterized in that in step S2 the transmittance adaptive-estimation model comprises input and output layers, convolutional layers, a MAXOUT layer, a MaxPool layer, and a BReLU layer, and is trained with the Caffe deep-learning framework in the following steps:
Step S21: forward propagation: inputting a group of 16 × 16 images with three RGB color channels, computing layer by layer, learning haze-relevant features, multi-scale features, and local-extremum features respectively, and obtaining the transmittance values of the input images under the current network parameters;
Step S22: back-propagation: computing the error between the actually output transmittance values and the true values using the Euclidean distance, and propagating the error information backwards to update the weights and biases of every layer.
4. The image defogging method suitable for an automobile data recorder according to claim 3, characterized in that step S21 further comprises the following steps:
Step S211: initializing the network parameters: setting the initial momentum, the learning rate, the factor by which the learning rate decays over network iterations, the maximum number of training epochs, and the target final training error and test error;
Step S212: inputting a group of images of size 16 × 16 with three RGB color channels, and learning fog features through convolutional layer C1, calculated as follows:
F1^t = W1^t * I + B1^t,
wherein W1^t and B1^t respectively denote the filters and biases in the convolutional layer, the biases being initialized randomly; * denotes the convolution operation; I is the input image block; t denotes the index of the output feature map, t = 1, 2, …, n1; and n1 denotes the number of output feature maps;
Step S213: taking the output of convolutional layer C1 as the input of Maxout layer MO2 to form feature maps, calculated as follows:
F2^t(x) = max{ F1^j(x) : j = (t − 1)k + 1, …, tk },
wherein k is the group count; t denotes the index of the output feature map, t = 1, 2, …, n2; and n2 is the number of feature maps;
Step S214: taking the output of Maxout layer MO2 as the input of convolutional layer C3, and learning multi-scale features through convolutional layer C3, calculated as follows:
F3^t = W3^(i,j) * F2 + B3^(i,j), with i = ⌈t/3⌉ and j = t∖3,
wherein W3^(i,j) and B3^(i,j) respectively denote the filters and biases in convolutional layer C3; n3 is the number of feature maps output by convolutional layer C3; t denotes the index of the output feature map, t = 1, 2, …, n3; ∖ denotes the remainder operation; and i and j denote the row and column indices of the filter bank;
Step S215: taking the output of convolutional layer C3 as the input of MaxPool layer MP4, and learning local extremum features through MaxPool layer MP4, calculated as follows:
F4^t(x) = max{ F3^t(y) : y ∈ Ω(x) },
wherein Ω(x) is the window centered at x, whose size equals the filter size; n4 is the dimension of the information output by MaxPool layer MP4, equal to n3; and t denotes the index of the output feature map, t = 1, 2, …, n4;
Step S216: converting the feature maps obtained at MaxPool layer MP4 into 1 × 1 feature values through convolutional layer C5, calculated as F5^t = W5^t * F4 + B5^t, and constraining the feature values to the range [0, 1] through a BReLU activation function, wherein W5^t and B5^t respectively denote the filters and biases in convolutional layer C5, and t denotes the index of the output feature map.
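The three non-convolutional operations in steps S213–S216 (Maxout grouping, windowed max pooling, and the BReLU clamp) can be sketched in NumPy as follows; the stride-equals-window pooling is a simplification, since the claim does not fix a stride:

```python
import numpy as np

def maxout(F, k):
    """MO2 (step S213): element-wise max over groups of k feature maps."""
    n, h, w = F.shape
    return F.reshape(n // k, k, h, w).max(axis=1)

def maxpool(F, size=2):
    """MP4 (step S215): max over non-overlapping size x size windows.
    The claim's window slides densely; the stride here is a simplification."""
    n, h, w = F.shape
    F = F[:, :h - h % size, :w - w % size]
    return F.reshape(n, F.shape[1] // size, size,
                     F.shape[2] // size, size).max(axis=(2, 4))

def brelu(F):
    """Step S216: BReLU clamps feature values into [0, 1]."""
    return np.clip(F, 0.0, 1.0)
```

Each helper maps a stack of feature maps (channels first) to the next stack, mirroring the forward pass of claim 4 one layer at a time.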
5. The image defogging method suitable for an automobile data recorder according to claim 4, characterized in that in step S216, the BReLU activation function is: F6 = min(1, max(0, F5)).
6. The image defogging method suitable for an automobile data recorder according to claim 3, characterized in that step S22 further comprises the following steps:
calculating the difference between the output value of convolutional layer C5 and the true value as the loss function using the Euclidean distance, then calculating the gradient of the loss function with respect to the parameters of the previous layer and passing the gradient information back to that layer, and so on, until reaching the input layer; the specific calculation is as follows:
L = (1/N) Σ_{n=1}^{N} ‖ t̂^n − y^n ‖²,
wherein t̂^n denotes the output value for the n-th training sample, y^n denotes the true value of the n-th training sample, and N denotes the number of samples.
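A sketch of the Euclidean loss of claim 6 and the gradient it sends back from layer C5; the helper name and the one-scalar-per-patch shape are assumptions for illustration, not Caffe's actual EuclideanLoss layer code:

```python
import numpy as np

def euclidean_loss_and_grad(pred, truth):
    """Loss L = (1/N) * sum_n (t_hat_n - y_n)^2 over N scalar transmittance
    outputs, and its gradient w.r.t. the predictions -- the quantity
    back-propagated from layer C5 toward the input layer."""
    pred = np.asarray(pred, dtype=float)
    truth = np.asarray(truth, dtype=float)
    diff = pred - truth
    loss = np.mean(diff ** 2)          # average squared Euclidean error
    grad = 2.0 * diff / diff.size      # dL/dpred, passed to the previous layer
    return loss, grad
```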
7. The image defogging method suitable for an automobile data recorder according to claim 1, characterized in that in step S2, the input foggy image is evenly divided into a number of 16 × 16 image blocks, each image block is input into the transmittance adaptive estimation model, the transmittance value of each image block is obtained through forward propagation of the network, and the values are combined into a rough transmittance map of the same size as the input image, obtaining the transmittance t(x) of the input foggy image.
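A sketch of claim 7's tiling: the image is cut into 16 × 16 blocks, a trained model (here a stand-in callable, since the real network lives in Caffe) returns one transmittance per block, and the values are painted back into a map of the input's size:

```python
import numpy as np

def coarse_transmission(img, predict_block, block=16):
    """Assemble a rough transmittance map from per-block predictions.
    Edge remainders smaller than one block are left at zero in this sketch."""
    h = img.shape[0] - img.shape[0] % block
    w = img.shape[1] - img.shape[1] % block
    t = np.zeros((img.shape[0], img.shape[1]))
    for y in range(0, h, block):
        for x in range(0, w, block):
            # each 16x16 block yields a single scalar transmittance value
            t[y:y + block, x:x + block] = predict_block(img[y:y + block, x:x + block])
    return t
```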
8. The image defogging method suitable for an automobile data recorder according to claim 1, characterized in that in step S3, the grayscale image g of the input foggy image I is chosen as the guidance image; the transmittance map produced from I in step S2 is taken as the input image, and this transmittance map is denoted p; the mean and variance operations on the guidance image itself, and between the guidance image and the input image, preserve the content of the restored image and transfer the smooth edge details of the input foggy image to the output image, thereby eliminating the blocking artifacts of the image and obtaining the optimized transmittance image q.
9. The image defogging method suitable for an automobile data recorder according to claim 8, characterized by comprising the following steps:
Step S31: transferring the edge information smoothed in grayscale image g to image q; within a window ω_k of radius r, a local linear relationship between g and q is established, calculated as follows:
q_i = a_k g_i + b_k, for all i ∈ ω_k,
wherein i and k are pixel indices, and a_k and b_k are the coefficients of the linear function when the window center is located at k;
Step S32: requiring the transmittance maps p and q before and after guided filtering to have maximum similarity, i.e. minimizing:
E(a_k, b_k) = Σ_{i∈ω_k} ((a_k g_i + b_k − p_i)² + ε a_k²);
by minimizing this cost function, the optimal solutions of a_k and b_k are obtained, calculated as follows:
a_k = ((1/|ω_k|) Σ_{i∈ω_k} g_i p_i − μ_k p̄_k) / (σ_k² + ε),
b_k = p̄_k − a_k μ_k,
wherein μ_k and σ_k² are the mean and variance of the guidance image g within window ω_k; |ω_k| denotes the number of pixels in ω_k; p̄_k denotes the mean of the input image p within window ω_k; and ε is a regularization parameter;
Step S33: obtaining the values of the entire output image q by averaging, specifically calculated as follows:
q_i = ā_i g_i + b̄_i, with ā_i = (1/|ω_i|) Σ_{k∈ω_i} a_k and b̄_i = (1/|ω_i|) Σ_{k∈ω_i} b_k,
wherein ā_i and b̄_i denote the coefficients averaged over all windows covering pixel i, ω_i denotes the set of all windows containing pixel i, and k indexes their centers.
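Steps S31–S33 describe the standard guided filter; a NumPy sketch using an integral-image box mean, where the radius `r` and regularizer `eps` defaults are illustrative, not values fixed by the claim:

```python
import numpy as np

def box_mean(m, r):
    """Mean over a (2r+1) x (2r+1) window, edge-padded, via an integral image."""
    p = np.pad(m.astype(float), r, mode='edge')
    c = np.pad(np.cumsum(np.cumsum(p, axis=0), axis=1), ((1, 0), (1, 0)))
    s = (c[2*r+1:, 2*r+1:] - c[:-2*r-1, 2*r+1:]
         - c[2*r+1:, :-2*r-1] + c[:-2*r-1, :-2*r-1])
    return s / (2*r + 1) ** 2

def guided_filter(g, p, r=8, eps=1e-3):
    """q_i = mean(a)*g_i + mean(b) with (a_k, b_k) fitted per window (S31-S33)."""
    mu_g, mu_p = box_mean(g, r), box_mean(p, r)
    var_g = box_mean(g * g, r) - mu_g * mu_g       # sigma_k^2
    cov_gp = box_mean(g * p, r) - mu_g * mu_p      # (1/|w|) sum g*p - mu*pbar
    a = cov_gp / (var_g + eps)                     # S32: optimal a_k
    b = mu_p - a * mu_g                            # S32: optimal b_k
    return box_mean(a, r) * g + box_mean(b, r)     # S33: averaged output q
```

With the grayscale image as `g` and the rough transmittance map as `p`, the output keeps `g`'s edges while smoothing away the 16 × 16 block seams.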
10. The image defogging method suitable for an automobile data recorder according to claim 1, characterized in that in step S4, the defogged image is obtained from the following model:
I(x) = J(x)t(x) + A(1 − t(x)),
wherein I(x) and J(x) respectively denote the pixel values at pixel x in the foggy image and in the clear image; A is the atmospheric light value; and t(x) denotes the transmittance value at pixel x.
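Inverting the model of claim 10 gives J(x) = (I(x) − A)/t(x) + A; a sketch, with a lower bound `t0` on t(x) added as a common numerical safeguard that the claim itself does not specify:

```python
import numpy as np

def recover_scene(I, t, A, t0=0.1):
    """Solve I = J*t + A*(1-t) for the defogged image J (step S4)."""
    t = np.clip(t, t0, 1.0)[..., None]   # guard near-zero t, broadcast over RGB
    return (np.asarray(I, dtype=float) - A) / t + A
```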
CN201611196660.7A 2016-12-22 2016-12-22 A kind of image defogging method suitable for automobile data recorder Expired - Fee Related CN106600560B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611196660.7A CN106600560B (en) 2016-12-22 2016-12-22 A kind of image defogging method suitable for automobile data recorder

Publications (2)

Publication Number Publication Date
CN106600560A true CN106600560A (en) 2017-04-26
CN106600560B CN106600560B (en) 2019-07-12

Family

ID=58602516

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611196660.7A Expired - Fee Related CN106600560B (en) 2016-12-22 2016-12-22 A kind of image defogging method suitable for automobile data recorder

Country Status (1)

Country Link
CN (1) CN106600560B (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107194924A (en) * 2017-05-23 2017-09-22 重庆大学 Expressway foggy-dog visibility detecting method based on dark channel prior and deep learning
CN107301624A (en) * 2017-06-05 2017-10-27 天津大学 The convolutional neural networks defogging algorithm pre-processed based on region division and thick fog
CN107423760A (en) * 2017-07-21 2017-12-01 西安电子科技大学 Based on pre-segmentation and the deep learning object detection method returned
CN107451967A (en) * 2017-07-25 2017-12-08 北京大学深圳研究生院 A kind of single image to the fog method based on deep learning
CN107451966A (en) * 2017-07-25 2017-12-08 四川大学 A kind of real-time video defogging method realized using gray-scale map guiding filtering
CN108805839A (en) * 2018-06-08 2018-11-13 西安电子科技大学 Combined estimator image defogging method based on convolutional neural networks
CN108898562A (en) * 2018-06-22 2018-11-27 大连海事大学 A kind of mobile device image defogging method based on deep learning
CN108986046A (en) * 2018-07-09 2018-12-11 西安理工大学 A kind of traveling monitoring image defogging method
CN109118451A (en) * 2018-08-21 2019-01-01 李青山 A kind of aviation orthography defogging algorithm returned based on convolution
CN109493300A (en) * 2018-11-15 2019-03-19 湖南鲲鹏智汇无人机技术有限公司 The real-time defogging method of Aerial Images and unmanned plane based on FPGA convolutional neural networks
CN109523474A (en) * 2018-10-19 2019-03-26 福州大学 A kind of enhancement method of low-illumination image based on greasy weather degradation model
CN109544470A (en) * 2018-11-08 2019-03-29 西安邮电大学 A kind of convolutional neural networks single image to the fog method of boundary constraint
CN109637187A (en) * 2019-01-07 2019-04-16 合肥工业大学 City Roadside Parking position unmanned charge monitoring and managing method and system
CN111046828A (en) * 2019-12-20 2020-04-21 西安交通大学 Dust removal and noise reduction neural network method for mine underground monitoring image
CN115456913A (en) * 2022-11-07 2022-12-09 四川大学 Method and device for defogging night fog map

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104050636A (en) * 2014-06-05 2014-09-17 华侨大学 Method for enhancing granularity-controllable low-illuminance image
CN104281999A (en) * 2013-07-12 2015-01-14 东北师范大学 Single image defogging method based on structural information
US20150287170A1 (en) * 2013-05-28 2015-10-08 Industry Foundation Of Chonnam National University Apparatus for improving fogged image using user-controllable root operator
CN105574827A (en) * 2015-12-17 2016-05-11 中国科学院深圳先进技术研究院 Image defogging method and device
US20160129365A1 (en) * 2014-11-12 2016-05-12 Ventana 3D, Llc Background imagery for enhanced pepper's ghost illusion
CN105976338A (en) * 2016-05-12 2016-09-28 山东大学 Dark channel prior defogging method based on sky recognition and segmentation



Also Published As

Publication number Publication date
CN106600560B (en) 2019-07-12

Similar Documents

Publication Publication Date Title
CN106600560A (en) Image defogging method for automobile data recorder
CN106599773B (en) Deep learning image identification method and system for intelligent driving and terminal equipment
CN108549892B (en) License plate image sharpening method based on convolutional neural network
CN111598030A (en) Method and system for detecting and segmenting vehicle in aerial image
CN109377459B (en) Super-resolution deblurring method of generative confrontation network
CN110443761B (en) Single image rain removing method based on multi-scale aggregation characteristics
CN109584188B (en) Image defogging method based on convolutional neural network
Tang et al. Single image dehazing via lightweight multi-scale networks
CN104217404A (en) Video image sharpness processing method in fog and haze day and device thereof
CN108564549A (en) A kind of image defogging method based on multiple dimensioned dense connection network
CN110136075B (en) Remote sensing image defogging method for generating countermeasure network based on edge sharpening cycle
CN110503613A (en) Based on the empty convolutional neural networks of cascade towards removing rain based on single image method
CN111861925A (en) Image rain removing method based on attention mechanism and gate control circulation unit
CN105719247A (en) Characteristic learning-based single image defogging method
CN110310241A (en) A kind of more air light value traffic image defogging methods of fusion depth areas segmentation
CN109858487A (en) Weakly supervised semantic segmentation method based on watershed algorithm and image category label
CN111401207B (en) Human body action recognition method based on MARS depth feature extraction and enhancement
CN106339984A (en) Distributed image super-resolution method based on K-means driven convolutional neural network
CN113657528B (en) Image feature point extraction method and device, computer terminal and storage medium
Jiang et al. Dfnet: Semantic segmentation on panoramic images with dynamic loss weights and residual fusion block
CN112419163B (en) Single image weak supervision defogging method based on priori knowledge and deep learning
CN112164010A (en) Multi-scale fusion convolution neural network image defogging method
CN111598793A (en) Method and system for defogging image of power transmission line and storage medium
CN107301625B (en) Image defogging method based on brightness fusion network
CN117495718A (en) Multi-scale self-adaptive remote sensing image defogging method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190712

Termination date: 20211222
