CN106127702A - An image dehazing algorithm based on deep learning - Google Patents
An image dehazing algorithm based on deep learning
- Publication number
- CN106127702A CN106127702A CN201610437482.6A CN201610437482A CN106127702A CN 106127702 A CN106127702 A CN 106127702A CN 201610437482 A CN201610437482 A CN 201610437482A CN 106127702 A CN106127702 A CN 106127702A
- Authority
- CN
- China
- Prior art keywords
- deep learning
- image
- layer
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G06T5/73—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Abstract
The invention discloses an image dehazing algorithm based on deep learning, used to remove the fog interference present in hazy images and to reduce the impact of fog on image quality. The algorithm includes: step 1, obtaining a training sample set and a test sample set; step 2, performing an HSL color-space transformation on the hazy images in the training set, extracting the local low-brightness features of the hazy images, and scaling and normalizing all feature components of the samples; step 3, computing the discrimination transmission rate, so that the deep discriminator network can be trained adversarially; step 4, training a deep generative adversarial network on the above feature components, learning the mapping network between hazy images and transmission rates; step 5, using the deep generator network to run dehazing tests on the test sample set. The dehazing algorithm of the invention establishes the mapping from hazy images to transmission rates through a deep-learning algorithm, effectively solving the problem of insufficient prior information in conventional dehazing algorithms, and achieves a good defogging effect.
Description
Technical field
The present invention relates to the fields of image processing, pattern recognition and artificial intelligence, and specifically to an image dehazing algorithm based on deep learning.
Background technology
In recent years, with the development of artificial intelligence, more and more image acquisition devices have acquired the ability to automatically recognize targets in different scenes. However, this automatic recognition ability is often limited by environmental factors in the scene; in particular, the presence of fog or haze reduces the saturation and contrast of targets in the image and has become an important factor affecting the automatic recognition performance of such devices. Proposing an effective and feasible dehazing algorithm therefore has important theoretical and practical significance.
Traditional dehazing algorithms fall broadly into two classes: image-restoration algorithms based on the atmospheric scattering model, and algorithms based on image-enhancement theory. Most mainstream dehazing algorithms at present are built on the atmospheric scattering model, among which the most widely applied image dehazing method is the dark channel prior algorithm. Existing dehazing methods, lacking effective prior information about the hazy image, cannot obtain the optimal transmission rate, and color-shift phenomena appear during image restoration. How to obtain the optimal transmission rate has therefore become the key problem in image dehazing.
Summary of the invention
The present invention proposes an image dehazing algorithm based on deep learning. Its purpose is to train a deep generative adversarial network on the data of the training sample set, obtain the mapping network between hazy-image features and the transmission rate, find the adaptively optimal transmission rate, then perform restoration according to the atmospheric scattering model, and finally realize image dehazing.
The object of the invention is achieved by the following technical solution.
An image dehazing algorithm based on deep learning comprises the following steps:
Step 1: establish a sample image set, including a training sample set and a test sample set. For the training sample set, collect haze-free images of scenes where fog may occur, artificially haze the haze-free images to obtain hazy images, and finally form the training sample set from the hazy and haze-free images. For the test sample set, use real foggy-scene images.
Step 2: perform a color-space transformation on the hazy images in the training set, from RGB (red, green, blue) space to HSL (hue, saturation, lightness) space, obtaining the hue, saturation and lightness feature components of each image; afterwards, in the RGB space of the original image, obtain the local low-brightness values and the atmospheric light value, and scale and normalize all data.
Step 3: compute the optimal transmission rate from the training sample set so that it serves, in deep learning, as the discrimination input for training the deep generative adversarial network, completing the adversarial training of the network together with the other input quantity, the constructed transmission rate.
Step 4: based on the RGB-space and HSL-space feature components of the hazy images in the training set, together with the local low-brightness feature component and the discrimination transmission rate, train the deep generative adversarial network of the deep-learning algorithm to obtain the transmission rate of each hazy image, learning the mapping network between hazy images and transmission rates. The deep generative adversarial network consists of a deep discriminator network and a deep generator network.
Step 5: use the above deep generator network to dehaze real hazy images.
Preferably, step 1 further includes: choosing haze-free images of scenes where fog may occur as the initial training sample set, using image-editing software to obtain artificially hazed versions of the initial training samples, taking the hazy and haze-free images of the same scenes as the training sample set, and afterwards collecting hazy images in real foggy scenes as the test sample set.
Preferably, step 2 further includes performing feature transformation on the original samples, comprising the steps:
1) In the training image set, perform the HSL color-space conversion on the artificially hazed images, finally obtaining feature components in both the RGB and HSL spaces;
2) The local low-brightness value of an image block is computed as:
J_low(x) = min_{c∈{r,g,b}} min_{y∈Ω(x0)} I_c(y) (1)
where r, g, b are the three channels of RGB space, Ω(x0) denotes the set of pixels in the local region centered at x0, J_low(x) is the low-brightness value of the local region, and I_r(x), I_g(x), I_b(x) denote the three-channel pixel values of the hazy image in RGB space;
3) Scale the above RGB and HSL space feature components and the local low-brightness feature component so that their dimension is 256*256, and normalize each component so that each pixel value of each component lies in [0, 1];
4) The atmospheric light value is:
A_c = min(max(I_c(x)), 0.8) (2)
where c ∈ {r, g, b} is the channel index in RGB space and I_c(x) denotes the pixel value of the hazy image in the color channel corresponding to c.
Preferably, step 3 further includes: the discrimination transmission rate in said training is:
t_dist(x) = (I_c(x) − A_c) / (J_c(x) − A_c) (3)
where J_c(x) is the pixel value of the haze-free image in the color channel corresponding to c.
Preferably, step 4 further includes training the deep generative adversarial network, comprising the steps:
1) Construct the deep discriminator network, including 1 input layer, 5 convolutional layers, 4 batch normalization layers, 1 fully connected layer and 1 output layer, connected as: input layer → convolutional layer 1 → convolutional layer 2 → batch normalization layer 1 → convolutional layer 3 → batch normalization layer 2 → convolutional layer 4 → batch normalization layer 3 → convolutional layer 5 → batch normalization layer 4 → fully connected layer → output layer, where in the deep discriminator network the inter-layer structure is not changed in the conversion from a convolutional layer to a batch normalization layer;
The input layer feeds the constructed transmission rate t~(x) and the discrimination transmission rate t_dist(x) separately into the deep discriminator network; after multi-layer convolution, high-order features characterizing the attributes of the input are obtained and, after the fully connected layer, sent to the output layer, where classification is performed by the Sigmoid activation function:
Sigmoid(x) = 1 / (1 + e^(−x)) (4)
2) Construct the deep generator network, including 1 input layer, 2 convolutional layers, 2 deconvolution layers, 4 batch normalization layers and 1 output layer, connected as: input layer → convolutional layer 1 → batch normalization layer 1 → convolutional layer 2 → batch normalization layer 2 → deconvolution layer 1 → batch normalization layer 3 → deconvolution layer 2 → batch normalization layer 4 → output layer, where the output layer is a pooling layer, namely the result of pooling the output of batch normalization layer 4. The input layer feeds the 7 feature channels of RGB, HSL and the local low-brightness feature together into the deep generator network; after hidden-layer feature extraction and mapping restoration, the output layer finally outputs the constructed transmission rate t~(x);
3) When training the deep generative adversarial network, to guarantee the correlation between components, a prior-constraint model is added, and the target cost function is set as:
min_G max_D V(D, G) = (1/m) Σ_{i=1..m} [log D(t_dist^(i)) + log(1 − D(G(z^(i))))] (5)
where m denotes the number of samples in a mini-batch during training, D(·) is the output of the deep discriminator network, t_dist^(i) denotes the discrimination transmission rate of the i-th sample, G(z^(i)) is the output of the deep generator network, and z^(i) denotes the i-th group of input sample features of the deep generator network;
Training the deep generative adversarial network includes two processes, forward propagation and back propagation. In forward propagation, a layer-wise greedy algorithm is used for training, abstracting the training samples layer by layer to complete the feature-extraction process; in back propagation, the SGD (stochastic gradient descent) parameter-training algorithm is used for supervised learning with the label information, adjusting the parameters of the deep generative adversarial network from top to bottom.
Preferably, step 4.3 further includes: in training, discrimination labels {0, 1} are defined, i.e. the label information represents the predetermined discrimination result with only 0 or 1, where 1 indicates that the input data is a discrimination transmission rate and 0 indicates that the input data is a constructed transmission rate. For the deep generator network, after initialization, the above 7 feature channels of RGB, HSL and the local low-brightness feature are fed into the deep generator network, and the output constructed transmission rate serves as the input quantity of the deep discriminator network. In training the deep generator network, the label information of the constructed transmission rate is set to 1 and, combined with the discrimination result, the SGD algorithm is used for parameter adjustment, where the gradient for updating the parameters of the deep generator network is
∇ (1/m) Σ_{i=1..m} log(1 − D(G(z^(i)))).
For the deep discriminator network, after initialization, the discrimination transmission rate and the constructed transmission rate are respectively input into the deep discriminator network, the label information of the discrimination transmission rate is set to 1 and the label information of the constructed transmission rate is set to 0, and, combined with the discrimination result, the SGD algorithm is used for parameter adjustment, where the gradient for updating the parameters of the deep discriminator network is
∇ (1/m) Σ_{i=1..m} [log D(t_dist^(i)) + log(1 − D(G(z^(i))))].
Because the deep discriminator network and the deep generator network label the constructed transmission rate differently (0 and 1), the deep generator network can obtain the optimal transmission rate while the deep discriminator network classifies transmission rates accurately, realizing the adversarial training of the deep generative adversarial network.
Preferably, step 5 further includes: obtain the HSL-space feature components from the test image and extract the local low-brightness feature of the test image; then feed the RGB, HSL and local low-brightness feature components into the trained deep generator network to obtain the constructed transmission rate t~(x), and perform the dehazing conversion according to:
J_c(x) = (I_c(x) − A_c) / t~_c(x) + A_c (6)
where J_c(x) is the dehazed image for the channel corresponding to c and t~_c(x) is the transmission rate output by the deep generator network for channel c.
Combining the local low-brightness feature, the present invention proposes an image dehazing algorithm based on deep learning. Compared with traditional algorithms, the extraction method of the invention does not require a large amount of prior information about the hazy image; it learns the transmission rate of hazy images from a large number of training samples and finally completes the conversion of hazy images into the corresponding haze-free images.
Brief description of the drawings
Fig. 1 is a schematic flow chart of the present invention;
Fig. 2 is a schematic diagram of the training process of the deep generative adversarial network of the present invention;
Fig. 3 is a structural diagram of the deep discriminator network of the present invention;
Fig. 4 is a structural diagram of the deep generator network of the present invention.
Detailed description of the invention
The present invention is further described below with reference to the accompanying drawings and specific embodiments; however, the embodiments described by the drawings are exemplary, are only used to explain the present invention, and do not limit the protection scope of the claims of the present invention.
The image dehazing algorithm of the present invention based on deep learning is shown in Fig. 1, where solid lines denote the forward-propagation process and dotted lines denote the back-propagation process. Its main steps are described below:
1. Obtaining the training sample set and test sample set
Haze-free indoor and outdoor scene images of environments where fog may occur are chosen as the initial sample set, and the Photoshop software is used to artificially haze the initial samples, obtaining hazy images under the corresponding scenes and illumination; together with the initial sample set these form the training sample set. Afterwards, hazy images are collected in real foggy environments as the test sample set.
2. Extraction of the input feature components of the deep generator network
Feature transformation is performed on the initial samples to extract each input feature component for training the deep generator network, comprising the steps:
1) In the training image set, perform the HSL color-space conversion on the artificially hazed images, finally obtaining feature components in both the RGB and HSL spaces;
2) Combine the local low-brightness characteristic by adding the local low-brightness feature to the training network; the local low-brightness feature component is computed as:
J_low(x) = min_{c∈{r,g,b}} min_{y∈Ω(x0)} I_c(y) (1)
where r, g, b are the three channels of RGB space, Ω(x0) denotes the set of pixels in the local region centered at x0, J_low(x) is the low-brightness value of the local region, and I_r(x), I_g(x), I_b(x) denote the three-channel pixel values of the hazy image in RGB space;
3) Scale the above RGB and HSL space feature components and the local low-brightness feature component so that their dimension is 256*256, and apply linear normalization to each component so that each pixel value of each component lies in [0, 1];
4) Selection of the atmospheric light value
In the training samples, the maximum of each channel of the hazy image is computed as the atmospheric light value A of that channel. To prevent the atmospheric light value from being affected by local highlight regions of the image, its upper threshold is set to 0.8. The concrete formula for selecting the single-channel atmospheric light value is:
A_c = min(max(I_c(x)), 0.8) (2)
where c ∈ {r, g, b} is the channel index in RGB space and I_c(x) denotes the pixel value of the hazy image in the color channel corresponding to c.
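As a concrete illustration, the local low-brightness value of formula (1) and the capped atmospheric light of formula (2) can be sketched in plain Python. The image layout (a list of rows of (r, g, b) tuples) and the region radius are assumptions made for this example, not details fixed by the patent:

```python
def local_low_bright(img, x0, y0, radius=1):
    """Formula (1): minimum intensity over the r, g, b channels of every
    pixel in the local region Omega centered at (x0, y0)."""
    h, w = len(img), len(img[0])
    region = [img[y][x]
              for y in range(max(0, y0 - radius), min(h, y0 + radius + 1))
              for x in range(max(0, x0 - radius), min(w, x0 + radius + 1))]
    return min(min(px) for px in region)   # min over pixels and channels

def atmospheric_light(img, c):
    """Formula (2): per-channel maximum of the hazy image, with the
    upper threshold 0.8 guarding against local highlight regions."""
    return min(max(px[c] for row in img for px in row), 0.8)

hazy = [[(0.9, 0.5, 0.4), (0.2, 0.3, 0.1)],
        [(0.6, 0.7, 0.8), (0.5, 0.5, 0.5)]]
print(local_low_bright(hazy, 0, 0))   # darkest channel in the region: 0.1
print(atmospheric_light(hazy, 0))     # red maximum 0.9 is capped to 0.8
```

The inner minimum over channels followed by the minimum over the region reproduces the dark-channel-style statistic the patent calls the local low-brightness value.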
3. Extraction of the input feature component of the deep discriminator network
The optimal transmission rate is computed from the training sample set so that, in deep learning, it serves as the discrimination input for training the deep generative adversarial network, completing the adversarial training of the network together with the other input quantity, the constructed transmission rate. The transmission rate of the conversion between the hazy images and the haze-free images in the training sample set is taken as the discrimination transmission rate, which in said training is:
t_dist(x) = (I_c(x) − A_c) / (J_c(x) − A_c) (3)
where c ∈ {r, g, b} is the channel index in RGB space, I_c(x) is the pixel value of color channel c of the hazy image, and J_c(x) is the pixel value of color channel c of the corresponding haze-free image.
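A minimal numeric check of formula (3): a hazy pixel is synthesized from the atmospheric scattering model I = J·t + A·(1 − t), and the transmission is recovered from the hazy/haze-free pair. The scalar per-pixel form is an assumption made for illustration:

```python
def discrimination_transmission(I, J, A):
    """Formula (3): t_dist = (I_c - A_c) / (J_c - A_c), the transmission
    relating the haze-free pixel J to the hazy pixel I."""
    return (I - A) / (J - A)

# Synthesize a hazy pixel from the scattering model, then recover t.
J, t, A = 0.9, 0.6, 0.8
I = J * t + A * (1 - t)               # atmospheric scattering model
print(round(discrimination_transmission(I, J, A), 6))   # -> 0.6
```

Recovering exactly the transmission used in the synthesis confirms that formula (3) is the inversion of the scattering model with respect to t.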
4. Training the deep generative adversarial network
Based on the RGB-space and HSL-space feature components of the hazy images in the training sample set, together with the local low-brightness feature component and the discrimination transmission rate, the deep generative adversarial network of the deep-learning algorithm is trained to obtain the transmission rate of each hazy image, learning the mapping network between hazy images and transmission rates. The deep generative adversarial network consists of a deep discriminator network and a deep generator network. Its training process is shown in Fig. 2, where F_dist denotes the extraction of the discrimination transmission rate, realized according to formula (3); F_dehaze denotes the conversion from a hazy image to a haze-free image completed via the transmission rate, realized according to formula (6); and G and D denote the deep generator network and the deep discriminator network respectively. The concrete implementation steps are:
1) Construct the deep discriminator network, with the inter-layer connections shown in Fig. 3: input layer → convolutional layer 1 → convolutional layer 2 → batch normalization layer 1 → convolutional layer 3 → batch normalization layer 2 → convolutional layer 4 → batch normalization layer 3 → convolutional layer 5 → batch normalization layer 4 → fully connected layer → output layer. It is worth noting that, since the inter-layer structure is not changed in the operation from a convolutional layer to a batch normalization layer, convolutional layer 2 and batch normalization layer 1, convolutional layer 3 and batch normalization layer 2, convolutional layer 4 and batch normalization layer 3, and convolutional layer 5 and batch normalization layer 4 are each placed in the same structural layer in Fig. 3.
The input-layer data are three-channel (RGB) feature components: the constructed transmission rate t~(x) and the discrimination transmission rate t_dist(x) are respectively fed into the deep discriminator network as inputs to the convolution operations. The first-level convolution uses an 8*8*48 convolution kernel; the second-level convolution uses a 6*6*48 convolution kernel, and its output is processed by a batch normalization layer and then by a ReLU activation operation, which every subsequent convolution output also undergoes; the third-level convolution uses a 4*4*64 convolution kernel; the fourth-level convolution uses a 5*5*80 convolution kernel; the fifth-level convolution uses a 7*7*80 convolution kernel. After the 5 convolution operations, the output is fully connected and then sent to the output layer, where classification is performed by the Sigmoid activation function:
Sigmoid(x) = 1 / (1 + e^(−x)) (4)
2) Construct the deep generator network, with the inter-layer connections shown in Fig. 4: input layer → convolutional layer 1 → batch normalization layer 1 → convolutional layer 2 → batch normalization layer 2 → deconvolution layer 1 → batch normalization layer 3 → deconvolution layer 2 → batch normalization layer 4 → output layer. It is worth noting that, since the inter-layer structure is not changed in the operation from a convolutional (or deconvolution) layer to a batch normalization layer, convolutional layer 1 and batch normalization layer 1, convolutional layer 2 and batch normalization layer 2, deconvolution layer 1 and batch normalization layer 3, and deconvolution layer 2 and batch normalization layer 4 are each placed in the same structural layer in Fig. 4. The input layer takes the 7 feature channels of the input sample data: RGB, HSL and the local low-brightness feature. The first-level convolution uses a 9*9*64 convolution kernel; the second-level convolution uses a 4*4*128 convolution kernel, after which the deconvolution operations are performed. The first-level deconvolution uses a 9*9*224 convolution kernel with stride 3; the second-level deconvolution uses an 11*11*3 convolution kernel with stride 2. All convolution and deconvolution outputs are processed by a batch normalization layer and then by a ReLU (rectified linear unit) activation operation. After the feature extraction of the convolutions and the mapping restoration of the deconvolutions, average pooling is applied to the output data, using an 18*18 template with stride 1. In the present invention the pooling operation mainly solves the grid artifacts that appear in images generated by the network, and the pooled constructed transmission rate t~(x) is taken as the output layer;
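The grid-suppressing average pooling at the generator output can be sketched as follows; a 3*3 template replaces the 18*18 template of the patent for brevity, and the valid-region handling (no padding) is an assumption of this sketch:

```python
def average_pool(m, size=3, stride=1):
    """Slide a size*size averaging template over the 2-D map m with the
    given stride, smoothing grid artifacts in the generated output."""
    h, w = len(m), len(m[0])
    return [[sum(m[y + dy][x + dx] for dy in range(size) for dx in range(size))
             / (size * size)
             for x in range(0, w - size + 1, stride)]
            for y in range(0, h - size + 1, stride)]

grid = [[0, 1, 0, 1],          # checkerboard standing in for a grid artifact
        [1, 0, 1, 0],
        [0, 1, 0, 1],
        [1, 0, 1, 0]]
pooled = average_pool(grid)
print(pooled)                   # every entry near 0.5: 4/9 or 5/9
```

The extreme 0/1 checkerboard is flattened toward a uniform map, which is exactly the effect the patent wants on the network's grid-like generation artifacts.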
3) To guarantee the correlation between components, a prior-constraint model is added when training the deep generative adversarial network, and the target cost function is set as:
min_G max_D V(D, G) = (1/m) Σ_{i=1..m} [log D(t_dist^(i)) + log(1 − D(G(z^(i))))] (5)
where m denotes the number of samples in a mini-batch during training, D(·) is the output of the deep discriminator network, t_dist^(i) denotes the discrimination transmission rate of the i-th sample, G(z^(i)) is the output of the deep generator network, and z^(i) denotes the i-th group of input sample features of the deep generator network;
4) Training the deep generative adversarial network includes the training of the deep generator network and of the deep discriminator network. First, the parameters of the entire deep generative adversarial network are initialized, with the rule: random parameters in the convolutional layers follow a normal distribution with mean 0 and variance 0.02, and random parameters in the batch normalization layers follow a normal distribution with mean 1 and variance 0.02. Then the discrimination labels {0, 1} are set, where 1 indicates that the input data is a discrimination transmission rate and 0 indicates that the input data is a constructed transmission rate. In training the deep generator network, the above 7 feature channels are fed into the deep generator network, and the output constructed transmission rate serves as the input quantity of the deep discriminator network; in the deep generator network the label information of the constructed transmission rate is set to 1 and, combined with the discrimination result, the SGD algorithm is used for parameter adjustment, where the gradient for updating the parameters is
∇ (1/m) Σ_{i=1..m} log(1 − D(G(z^(i)))).
In the deep discriminator network, the discrimination transmission rate and the constructed transmission rate are respectively input into the deep discriminator network, the label information of the discrimination transmission rate is set to 1 and the label information of the constructed transmission rate is set to 0, and, combined with the discrimination result, the SGD algorithm is used for parameter adjustment, where the gradient for updating the parameters is
∇ (1/m) Σ_{i=1..m} [log D(t_dist^(i)) + log(1 − D(G(z^(i))))].
Because the deep discriminator network and the deep generator network label the constructed transmission rate differently (0 and 1), the deep generator network can obtain the optimal transmission rate while the deep discriminator network classifies transmission rates accurately, realizing the adversarial training of the deep generative adversarial network.
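Under the standard GAN reading of formula (5), the two mini-batch objectives and the Sigmoid output of formula (4) can be sketched as below. The function names are illustrative, and the networks themselves are replaced here by their scalar Sigmoid outputs:

```python
import math

def sigmoid(x):
    """Formula (4), the discriminator's output activation."""
    return 1.0 / (1.0 + math.exp(-x))

def discriminator_objective(d_real, d_fake):
    """(1/m) * sum(log D(t_dist) + log(1 - D(G(z)))): the discriminator
    ascends this, labelling real transmissions 1 and constructed ones 0."""
    m = len(d_real)
    return sum(math.log(r) + math.log(1.0 - f)
               for r, f in zip(d_real, d_fake)) / m

def generator_objective(d_fake):
    """(1/m) * sum(log(1 - D(G(z)))): the generator drives this down,
    which corresponds to labelling its constructed transmissions as 1."""
    return sum(math.log(1.0 - f) for f in d_fake) / len(d_fake)

d_real = [sigmoid(2.0), sigmoid(1.5)]    # confident accepts of t_dist
d_fake = [sigmoid(-2.0), sigmoid(-1.5)]  # confident rejects of t~
print(discriminator_objective(d_real, d_fake) > math.log(0.25))  # True
```

A well-trained discriminator pushes its objective above the value attained at the indifferent output 0.5, as the final comparison shows.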
5. Image dehazing test
The HSL conversion is performed on the hazy images of the test sample set to obtain the feature components in HSL space, and the local low-brightness feature of each test image is extracted; the RGB, HSL and local low-brightness feature components are then fed into the trained deep generator network to obtain the constructed transmission rate t~(x). Dehazing is subsequently performed according to the atmospheric light model, with the dehazing conversion:
J_c(x) = (I_c(x) − A_c) / t~_c(x) + A_c (6)
where J_c(x) is the dehazed image for the channel corresponding to c and t~_c(x) is the transmission rate output by the deep generator network for channel c.
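The dehazing conversion of formula (6) inverts the scattering model per pixel. The lower bound on the transmission in this sketch is a common practical safeguard added as an assumption; the patent itself does not specify one:

```python
def dehaze(I, t, A, t_floor=0.1):
    """Formula (6): J_c = (I_c - A_c) / t~_c + A_c. Flooring t avoids
    division blow-ups in near-opaque regions (an added safeguard)."""
    return (I - A) / max(t, t_floor) + A

# Round trip: haze a pixel with the scattering model, then dehaze it.
J, t, A = 0.9, 0.6, 0.8
I = J * t + A * (1 - t)
print(round(dehaze(I, t, A), 6))   # -> 0.9, the haze-free value
```

The round trip recovering the original haze-free value shows that formula (6) is exactly the inverse of the hazing process once the transmission and atmospheric light are known.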
The above embodiment is a specific implementation of the present invention. It should be clear that the implementations of the present invention are not limited to the above embodiment; subsequent changes made within the embodiment and similar methods all remain within the scope defined by the appended claims.
Claims (7)
1. An image dehazing algorithm based on deep learning, characterized in that it comprises the following steps:
Step 1, sets up sample graph image set, including training sample set and test sample collection;
Wherein, for training sample set, gather fog and there may be the image without mist under scene, and carry out manually without mist image
Atomization, is manually had mist image, final with without mist image construction training sample set;
For test sample collection, use true fog scene image;
Step 2, concentrates training sample and has mist image to carry out color notation conversion space, transform to HSL space from rgb space, respectively
Obtain the colourity of image, saturation and monochrome information characteristic component, under the rgb space of original image, obtain the low bright values in local afterwards
With air light value, and all data are done scaling and normalized;
Step 3, calculates optimum perspective rate by training sample set, and in making it learn as the degree of depth, the degree of depth generates antagonism neutral net
The differentiation component of training input, completes the training of network confrontation type jointly with another input quantity structure perspective rate;
Step 4, concentrates based on training sample obtain to have mist image rgb space and HSL space characteristics component, and local is low bright
Characteristic component and differentiation perspective rate, use the degree of depth in degree of deep learning algorithm to generate antagonism neutral net and be trained, had
Perspective rate in mist image, study is set up the mapping network between mist image and perspective rate;Wherein, the degree of depth generates antagonism nerve net
Network is generated neutral net by depth discrimination neutral net and the degree of depth and forms;
Step 5, uses the above-mentioned degree of depth to generate neutral net to truly there being mist image to carry out mist elimination process.
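A minimal sketch of the RGB-to-HSL conversion called for in step 2, using only Python's standard-library `colorsys` (which orders the result as H, L, S); a real implementation would use a vectorized image library, and the per-pixel loop here is for clarity only.

```python
import colorsys
import numpy as np

def rgb_to_hsl_components(rgb):
    """Split a normalized (H, W, 3) RGB image into hue, saturation and
    lightness feature maps, as step 2 prescribes."""
    h = np.empty(rgb.shape[:2])
    s = np.empty(rgb.shape[:2])
    l = np.empty(rgb.shape[:2])
    for i in range(rgb.shape[0]):
        for j in range(rgb.shape[1]):
            hh, ll, ss = colorsys.rgb_to_hls(*rgb[i, j])  # note H, L, S order
            h[i, j], s[i, j], l[i, j] = hh, ss, ll
    return h, s, l

img = np.random.rand(8, 8, 3)
img[0, 0] = [1.0, 0.0, 0.0]  # pure red as a sanity check: H=0, S=1, L=0.5
hue, sat, light = rgb_to_hsl_components(img)
```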
2. The method according to claim 1, characterized in that in step 1 fog-free images of scenes where fog may occur are chosen as the initial training sample set; image-editing software is used to artificially fog the initial training samples, and the hazy and fog-free images of the same scenes together form the training sample set; hazy images subsequently captured in real fog scenes serve as the test sample set.
3. The method according to claim 1, characterized in that step 2 further comprises the sub-steps:
2.1 In the training image set, apply the HSL color-space conversion to the artificially fogged images, finally obtaining the feature components in both the RGB and HSL spaces;
2.2 Compute the local low-brightness value of each image block as:

J_low(x) = min_{x ∈ Ω(x_0)} min( I_r(x), I_g(x), I_b(x) )   (1)

where r, g, b denote the three channels of the RGB space, Ω(x_0) denotes the set of pixels in the local region centered at x_0, J_low(x) is the low-brightness value of the local region, and I_r(x), I_g(x), I_b(x) denote the three-channel pixel values of the hazy image in RGB space;
2.3 Scale the above RGB-space and HSL-space feature components and the local low-brightness feature component so that their dimension is 256*256, and normalize each component so that every pixel value lies in [0, 1];
2.4 The atmospheric light value is:

A_c = min(max(I_c(x)), 0.8)   (2)

where c ∈ {r, g, b} is the channel index in RGB space and I_c(x) denotes the pixel value of the hazy image in the corresponding color channel c.
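Formulas (1) and (2) can be sketched directly with NumPy; the patch radius is an assumption added here, since the claim leaves the size of Ω(x_0) open.

```python
import numpy as np

def local_low_brightness(img, radius=2):
    """Formula (1): J_low(x0) is the minimum, over the patch Omega(x0),
    of the per-pixel minimum across the r, g, b channels.
    The patch radius is an assumed parameter."""
    per_pixel_min = img.min(axis=2)
    padded = np.pad(per_pixel_min, radius, mode='edge')
    h, w = per_pixel_min.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + 2 * radius + 1,
                               j:j + 2 * radius + 1].min()
    return out

def atmospheric_light(img):
    """Formula (2): A_c = min(max_x I_c(x), 0.8), i.e. the brightest
    value in each channel, capped at 0.8."""
    return np.minimum(img.reshape(-1, 3).max(axis=0), 0.8)

img = np.random.rand(16, 16, 3)
j_low = local_low_brightness(img)
A = atmospheric_light(img)
```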
4. The method according to claim 1, characterized in that in step 3 the discrimination transmission rate is obtained by the following formula:

t_dist(x) = (I_c(x) - A_c) / (J_c(x) - A_c)

where J_c(x) is the pixel value of the fog-free image in the corresponding color channel c.
5. The method according to claim 1, characterized in that step 4 further comprises the sub-steps:
4.1 Construct a deep discriminative neural network comprising 1 input layer, 5 convolutional layers, 4 batch normalization layers, 1 fully connected layer and 1 output layer, connected as: input layer → convolutional layer 1 → convolutional layer 2 → batch normalization layer 1 → convolutional layer 3 → batch normalization layer 2 → convolutional layer 4 → batch normalization layer 3 → convolutional layer 5 → batch normalization layer 4 → fully connected layer → output layer;
the input layer feeds the constructed transmission rate t(x) and the discrimination transmission rate t_dist(x) respectively into the deep discriminative neural network; after multi-layer convolution, high-order features characterizing the attributes of the input information are obtained, processed by the fully connected layer and delivered to the output layer, where classification is performed by the sigmoid activation function:

Sigmoid(x) = 1 / (1 + e^(-x))

4.2 Construct a deep generative neural network comprising 1 input layer, 2 convolutional layers, 2 deconvolution layers, 4 batch normalization layers and 1 output layer, connected as: input layer → convolutional layer 1 → batch normalization layer 1 → convolutional layer 2 → batch normalization layer 2 → deconvolution layer 1 → batch normalization layer 3 → deconvolution layer 2 → batch normalization layer 4 → output layer, where the output layer is a pooling layer that pools the output of batch normalization layer 4;
the input layer feeds the RGB, HSL and local low-brightness feature components, 7 layers in total, into the deep generative neural network; the input is restored through hidden-layer feature extraction and mapping, and the output layer finally outputs the constructed transmission rate t_c(x);
4.3 When training the deep generative adversarial neural network, to guarantee the correlation between components, a prior-constraint model is added and the target cost function is set as:

min_G max_D (1/m) * Σ_{i=1..m} [ log D(t_dist^(i)) + log(1 - D(G(z^(i)))) ]

where m is the number of samples in a minimum batch during training, D(·) is the output result of the deep discriminative neural network, t_dist^(i) is the discrimination transmission rate of the i-th sample, G(z^(i)) is the output result of the deep generative neural network, and z^(i) denotes the i-th group of input sample features of the deep generative neural network;
training of the deep generative adversarial neural network comprises two processes, forward propagation and back propagation: forward propagation uses a layer-wise greedy algorithm, abstracting the training samples layer by layer to complete feature extraction; back propagation uses the stochastic gradient descent (SGD) parameter-training algorithm and performs supervised learning with the label information, adjusting the parameters of the deep generative adversarial neural network from top to bottom.
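The minibatch objective of step 4.3 corresponds to the standard adversarial loss; a toy NumPy version (with made-up discriminator scores, no actual networks) is:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discriminator_loss(d_real, d_fake):
    """-(1/m) * sum( log D(t_dist) + log(1 - D(G(z))) ):
    d_real are the discriminator's sigmoid outputs on discrimination
    transmission rates, d_fake those on constructed transmission rates."""
    m = d_real.shape[0]
    return -(np.log(d_real).sum() + np.log(1.0 - d_fake).sum()) / m

def generator_loss(d_fake):
    """Generator side of the adversarial game: -(1/m) * sum(log D(G(z)))."""
    return -np.log(d_fake).mean()

d_real = sigmoid(np.array([2.0, 1.5, 3.0]))    # scores on real inputs (label 1)
d_fake = sigmoid(np.array([-1.0, -2.0, 0.5]))  # scores on generated inputs (label 0)
loss_d = discriminator_loss(d_real, d_fake)
loss_g = generator_loss(d_fake)
```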
6. The method according to claim 5, characterized in that in step 4.3 a discrimination label set {0, 1} is defined during training, i.e. the label information represents the intended discrimination result only as 0 or 1, where 1 indicates that the input data is a discrimination transmission rate and 0 indicates that the input data is a constructed transmission rate;
for the deep generative neural network, after initialization the RGB, HSL and local low-brightness feature components, 7 layers in total, are fed into the deep generative neural network, and the constructed transmission rate it outputs serves as the input quantity of the deep discriminative neural network;
when training the deep generative neural network, the label information of the constructed transmission rate is set to 1 and, combined with the discrimination result, the parameters are adjusted with the SGD algorithm; the gradient for updating the parameters of the deep generative neural network is:

∇ (1/m) * Σ_{i=1..m} log(1 - D(G(z^(i))))

for the deep discriminative neural network, after initialization the discrimination transmission rate and the constructed transmission rate are respectively input into the deep discriminative neural network, the label information of the discrimination transmission rate being set to 1 and that of the constructed transmission rate to 0; combined with the discrimination result, the parameters are adjusted with the SGD algorithm; the gradient for updating the parameters of the deep discriminative neural network is:

∇ (1/m) * Σ_{i=1..m} [ log D(t_dist^(i)) + log(1 - D(G(z^(i)))) ]

by labeling the constructed transmission rate differently, 0 in the deep discriminative neural network and 1 in the deep generative neural network, the deep generative neural network can obtain the optimal transmission rate while the deep discriminative neural network classifies transmission rates accurately, realizing the adversarial training of the deep generative adversarial neural network.
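The 0/1 labeling and SGD updates of claim 6 can be illustrated with a one-parameter logistic discriminator in NumPy; the single-feature model, learning rate and toy transmission values are stand-ins for the convolutional networks of the patent, chosen only to make the alternating label scheme concrete.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgd_step(w, b, t, y, lr=0.1):
    """One SGD update of the logistic discriminator D(t) = sigmoid(w*t + b)
    under binary cross-entropy; (d - y) is the gradient w.r.t. the logit."""
    d = sigmoid(w * t + b)
    grad = d - y
    w -= lr * np.mean(grad * t)
    b -= lr * np.mean(grad)
    return w, b

# Discrimination transmission values labeled 1, constructed ones labeled 0.
t = np.array([0.9, 0.8, 0.2, 0.1])
y = np.array([1.0, 1.0, 0.0, 0.0])
w, b = 0.0, 0.0
for _ in range(200):
    w, b = sgd_step(w, b, t, y)
# After training, D separates the two groups of transmission values.
```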
7. The method according to claim 1, characterized in that in step 5 the HSL-space feature components are obtained from the test images and their local low-brightness features are extracted; the RGB, HSL and local low-brightness feature components are then fed into the trained deep generative neural network to obtain the constructed transmission rate t_c(x), and the defogging conversion is performed according to the following formula:

J_c(x) = (I_c(x) - A_c) / t_c(x) + A_c

where J_c(x) is the defogged image for the corresponding channel c and t_c(x) is the transmission rate output by the deep generative neural network for channel c.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610437482.6A CN106127702B (en) | 2016-06-17 | 2016-06-17 | A kind of image defogging method based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106127702A true CN106127702A (en) | 2016-11-16 |
CN106127702B CN106127702B (en) | 2018-08-14 |
Family
ID=57470779
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610437482.6A Active CN106127702B (en) | 2016-06-17 | 2016-06-17 | A kind of image defogging method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106127702B (en) |
Cited By (62)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106910175A (en) * | 2017-02-28 | 2017-06-30 | 武汉大学 | A kind of single image defogging algorithm based on deep learning |
CN106920206A (en) * | 2017-03-16 | 2017-07-04 | 广州大学 | A kind of steganalysis method based on confrontation neutral net |
CN107016406A (en) * | 2017-02-24 | 2017-08-04 | 中国科学院合肥物质科学研究院 | The pest and disease damage image generating method of network is resisted based on production |
CN107122826A (en) * | 2017-05-08 | 2017-09-01 | 京东方科技集团股份有限公司 | Processing method and system and storage medium for convolutional neural networks |
CN107220600A (en) * | 2017-05-17 | 2017-09-29 | 清华大学深圳研究生院 | A kind of Picture Generation Method and generation confrontation network based on deep learning |
CN107255647A (en) * | 2017-05-10 | 2017-10-17 | 中国科学院合肥物质科学研究院 | Microelement contents of soil analyzing and predicting method based on X-ray fluorescence spectra and depth confrontation study |
CN107273936A (en) * | 2017-07-07 | 2017-10-20 | 广东工业大学 | A kind of GAN image processing methods and system |
CN107301625A (en) * | 2017-06-05 | 2017-10-27 | 天津大学 | Image defogging algorithm based on brightness UNE |
CN107358626A (en) * | 2017-07-17 | 2017-11-17 | 清华大学深圳研究生院 | A kind of method that confrontation network calculations parallax is generated using condition |
CN107451967A (en) * | 2017-07-25 | 2017-12-08 | 北京大学深圳研究生院 | A kind of single image to the fog method based on deep learning |
CN107705262A (en) * | 2017-10-10 | 2018-02-16 | 中山大学 | A kind of defogging method based on mixing priori learning model |
CN107749052A (en) * | 2017-10-24 | 2018-03-02 | 中国科学院长春光学精密机械与物理研究所 | Image defogging method and system based on deep learning neutral net |
CN107766815A (en) * | 2017-10-18 | 2018-03-06 | 福州大学 | A kind of vision assistant service operation system and method for running |
CN107798669A (en) * | 2017-12-08 | 2018-03-13 | 北京小米移动软件有限公司 | Image defogging method, device and computer-readable recording medium |
CN107945140A (en) * | 2017-12-20 | 2018-04-20 | 中国科学院深圳先进技术研究院 | A kind of image repair method, device and equipment |
CN107967671A (en) * | 2017-10-30 | 2018-04-27 | 大连理工大学 | With reference to data study and the image defogging method of physics priori |
CN107968962A (en) * | 2017-12-12 | 2018-04-27 | 华中科技大学 | A kind of video generation method of the non-conterminous image of two frames based on deep learning |
CN108022222A (en) * | 2017-12-15 | 2018-05-11 | 西北工业大学 | A kind of thin cloud in remote sensing image minimizing technology based on convolution-deconvolution network |
CN108038832A (en) * | 2017-12-25 | 2018-05-15 | 中国科学院深圳先进技术研究院 | A kind of underwater picture Enhancement Method and system |
CN108229508A (en) * | 2016-12-15 | 2018-06-29 | 富士通株式会社 | For the training device and training method of training image processing unit |
CN108229526A (en) * | 2017-06-16 | 2018-06-29 | 北京市商汤科技开发有限公司 | Network training, image processing method, device, storage medium and electronic equipment |
CN108460739A (en) * | 2018-03-02 | 2018-08-28 | 北京航空航天大学 | A kind of thin cloud in remote sensing image minimizing technology based on generation confrontation network |
WO2018157804A1 (en) * | 2017-02-28 | 2018-09-07 | 华为技术有限公司 | Method and device for question response |
CN108564550A (en) * | 2018-04-25 | 2018-09-21 | Oppo广东移动通信有限公司 | Image processing method, device and terminal device |
CN108615226A (en) * | 2018-04-18 | 2018-10-02 | 南京信息工程大学 | A kind of image defogging method fighting network based on production |
CN108665432A (en) * | 2018-05-18 | 2018-10-16 | 百年金海科技有限公司 | A kind of single image to the fog method based on generation confrontation network |
CN108734670A (en) * | 2017-04-20 | 2018-11-02 | 天津工业大学 | The restoration algorithm of single width night weak illumination haze image |
CN108830827A (en) * | 2017-05-02 | 2018-11-16 | 通用电气公司 | Neural metwork training image generation system |
CN108921220A (en) * | 2018-06-29 | 2018-11-30 | 国信优易数据有限公司 | Image restoration model training method, device and image recovery method and device |
CN108986044A (en) * | 2018-06-28 | 2018-12-11 | 广东工业大学 | A kind of image removes misty rain method, apparatus, equipment and storage medium |
CN109064430A (en) * | 2018-08-07 | 2018-12-21 | 中国人民解放军陆军炮兵防空兵学院 | It is a kind of to remove cloud method and system containing cloud atlas for region is taken photo by plane |
CN109272455A (en) * | 2018-05-17 | 2019-01-25 | 西安电子科技大学 | Based on the Weakly supervised image defogging method for generating confrontation network |
CN109359675A (en) * | 2018-09-28 | 2019-02-19 | 腾讯科技(武汉)有限公司 | Image processing method and equipment |
CN109410135A (en) * | 2018-10-02 | 2019-03-01 | 复旦大学 | It is a kind of to fight learning-oriented image defogging plus mist method |
CN109410131A (en) * | 2018-09-28 | 2019-03-01 | 杭州格像科技有限公司 | The face U.S. face method and system of confrontation neural network are generated based on condition |
CN109493303A (en) * | 2018-05-30 | 2019-03-19 | 湘潭大学 | A kind of image defogging method based on generation confrontation network |
CN109509156A (en) * | 2018-10-31 | 2019-03-22 | 聚时科技(上海)有限公司 | A kind of image defogging processing method based on generation confrontation model |
CN109544652A (en) * | 2018-10-18 | 2019-03-29 | 江苏大学 | Add to weigh imaging method based on the nuclear magnetic resonance that depth generates confrontation neural network |
CN109636754A (en) * | 2018-12-11 | 2019-04-16 | 山西大学 | Based on the pole enhancement method of low-illumination image for generating confrontation network |
CN109712083A (en) * | 2018-12-06 | 2019-05-03 | 南京邮电大学 | A kind of single image to the fog method based on convolutional neural networks |
CN109886210A (en) * | 2019-02-25 | 2019-06-14 | 百度在线网络技术(北京)有限公司 | A kind of traffic image recognition methods, device, computer equipment and medium |
CN109993804A (en) * | 2019-03-22 | 2019-07-09 | 上海工程技术大学 | A kind of road scene defogging method generating confrontation network based on condition |
WO2019136772A1 (en) * | 2018-01-11 | 2019-07-18 | 深圳大学 | Blurred image restoration method, apparatus and device, and storage medium |
CN110135574A (en) * | 2018-02-09 | 2019-08-16 | 北京世纪好未来教育科技有限公司 | Neural network training method, image generating method and computer storage medium |
CN110148088A (en) * | 2018-03-14 | 2019-08-20 | 北京邮电大学 | Image processing method, image rain removing method, device, terminal and medium |
CN110291540A (en) * | 2017-02-10 | 2019-09-27 | 谷歌有限责任公司 | Criticize renormalization layer |
CN110378848A (en) * | 2019-07-08 | 2019-10-25 | 中南大学 | A kind of image defogging method based on derivative figure convergence strategy |
CN110520875A (en) * | 2017-04-27 | 2019-11-29 | 日本电信电话株式会社 | Learning type signal separation method and learning type signal separator |
WO2020019738A1 (en) * | 2018-07-24 | 2020-01-30 | 深圳先进技术研究院 | Plaque processing method and device capable of performing magnetic resonance vessel wall imaging, and computing device |
WO2020029033A1 (en) * | 2018-08-06 | 2020-02-13 | 深圳大学 | Haze image clearing method and system, and storable medium |
CN110852972A (en) * | 2019-11-11 | 2020-02-28 | 苏州科技大学 | Single image rain removing method based on convolutional neural network |
CN111369472A (en) * | 2020-03-12 | 2020-07-03 | 北京字节跳动网络技术有限公司 | Image defogging method and device, electronic equipment and medium |
CN109410144B (en) * | 2018-10-31 | 2020-11-27 | 聚时科技(上海)有限公司 | End-to-end image defogging processing method based on deep learning |
WO2021000400A1 (en) * | 2019-07-02 | 2021-01-07 | 平安科技(深圳)有限公司 | Hospital guide similar problem pair generation method and system, and computer device |
CN112419163A (en) * | 2019-08-21 | 2021-02-26 | 中国人民解放军火箭军工程大学 | Single image weak supervision defogging method based on priori knowledge and deep learning |
CN112435194A (en) * | 2020-11-30 | 2021-03-02 | 辽宁石油化工大学 | Domain-adaptive defogging method based on generation countermeasure network |
WO2021046752A1 (en) * | 2019-09-11 | 2021-03-18 | Covidien Lp | Systems and methods for neural-network based color restoration |
CN112580611A (en) * | 2021-02-21 | 2021-03-30 | 江苏铨铨信息科技有限公司 | Air pollution assessment method based on IGAN-CNN model |
CN112925932A (en) * | 2021-01-08 | 2021-06-08 | 浙江大学 | High-definition underwater laser image processing system |
CN113256541A (en) * | 2021-07-16 | 2021-08-13 | 四川泓宝润业工程技术有限公司 | Method for removing water mist from drilling platform monitoring picture by machine learning |
WO2022047625A1 (en) * | 2020-09-01 | 2022-03-10 | 深圳先进技术研究院 | Image processing method and system, and computer storage medium |
TWI758223B (en) * | 2019-01-11 | 2022-03-11 | 美商谷歌有限責任公司 | Computing method with dynamic minibatch sizes and computing system and computer-readable storage media for performing the same |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103971337A (en) * | 2014-04-29 | 2014-08-06 | 杭州电子科技大学 | Infrared image haze removal method based on atmospheric transmission characteristics |
CN105574827A (en) * | 2015-12-17 | 2016-05-11 | 中国科学院深圳先进技术研究院 | Image defogging method and device |
- 2016-06-17 CN CN201610437482.6A patent/CN106127702B/en active Active
Non-Patent Citations (3)
Title |
---|
IAN J. GOODFELLOW: "Generative Adversarial Nets", Proceedings of Advances in Neural Information Processing Systems * |
LIU Nan et al.: "Image dehazing method based on weighted dark channel", Acta Photonica Sinica * |
PANG Chunying et al.: "An improved new method for fast image dehazing", Acta Photonica Sinica * |
Cited By (92)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108229508A (en) * | 2016-12-15 | 2018-06-29 | 富士通株式会社 | For the training device and training method of training image processing unit |
CN108229508B (en) * | 2016-12-15 | 2022-01-04 | 富士通株式会社 | Training apparatus and training method for training image processing apparatus |
CN110291540A (en) * | 2017-02-10 | 2019-09-27 | 谷歌有限责任公司 | Criticize renormalization layer |
US11887004B2 (en) | 2017-02-10 | 2024-01-30 | Google Llc | Batch renormalization layers |
CN107016406A (en) * | 2017-02-24 | 2017-08-04 | 中国科学院合肥物质科学研究院 | The pest and disease damage image generating method of network is resisted based on production |
US11734319B2 (en) | 2017-02-28 | 2023-08-22 | Huawei Technologies Co., Ltd. | Question answering method and apparatus |
WO2018157804A1 (en) * | 2017-02-28 | 2018-09-07 | 华为技术有限公司 | Method and device for question response |
CN106910175B (en) * | 2017-02-28 | 2020-01-24 | 武汉大学 | Single image defogging algorithm based on deep learning |
CN106910175A (en) * | 2017-02-28 | 2017-06-30 | 武汉大学 | A kind of single image defogging algorithm based on deep learning |
CN106920206B (en) * | 2017-03-16 | 2020-04-14 | 广州大学 | Steganalysis method based on antagonistic neural network |
CN106920206A (en) * | 2017-03-16 | 2017-07-04 | 广州大学 | A kind of steganalysis method based on confrontation neutral net |
CN108734670A (en) * | 2017-04-20 | 2018-11-02 | 天津工业大学 | The restoration algorithm of single width night weak illumination haze image |
CN108734670B (en) * | 2017-04-20 | 2021-05-18 | 天津工业大学 | Method for restoring single night weak-illumination haze image |
CN110520875A (en) * | 2017-04-27 | 2019-11-29 | 日本电信电话株式会社 | Learning type signal separation method and learning type signal separator |
CN110520875B (en) * | 2017-04-27 | 2023-07-11 | 日本电信电话株式会社 | Learning type signal separation method and learning type signal separation device |
CN108830827A (en) * | 2017-05-02 | 2018-11-16 | 通用电气公司 | Neural metwork training image generation system |
US11537873B2 (en) | 2017-05-08 | 2022-12-27 | Boe Technology Group Co., Ltd. | Processing method and system for convolutional neural network, and storage medium |
WO2018205676A1 (en) * | 2017-05-08 | 2018-11-15 | 京东方科技集团股份有限公司 | Processing method and system for convolutional neural network, and storage medium |
CN107122826A (en) * | 2017-05-08 | 2017-09-01 | 京东方科技集团股份有限公司 | Processing method and system and storage medium for convolutional neural networks |
CN107255647A (en) * | 2017-05-10 | 2017-10-17 | 中国科学院合肥物质科学研究院 | Microelement contents of soil analyzing and predicting method based on X-ray fluorescence spectra and depth confrontation study |
CN107220600B (en) * | 2017-05-17 | 2019-09-10 | 清华大学深圳研究生院 | A kind of Picture Generation Method and generation confrontation network based on deep learning |
CN107220600A (en) * | 2017-05-17 | 2017-09-29 | 清华大学深圳研究生院 | A kind of Picture Generation Method and generation confrontation network based on deep learning |
CN107301625A (en) * | 2017-06-05 | 2017-10-27 | 天津大学 | Image defogging algorithm based on brightness UNE |
CN107301625B (en) * | 2017-06-05 | 2021-06-01 | 天津大学 | Image defogging method based on brightness fusion network |
CN108229526B (en) * | 2017-06-16 | 2020-09-29 | 北京市商汤科技开发有限公司 | Network training method, network training device, image processing method, image processing device, storage medium and electronic equipment |
CN108229526A (en) * | 2017-06-16 | 2018-06-29 | 北京市商汤科技开发有限公司 | Network training, image processing method, device, storage medium and electronic equipment |
CN107273936A (en) * | 2017-07-07 | 2017-10-20 | 广东工业大学 | A kind of GAN image processing methods and system |
CN107358626A (en) * | 2017-07-17 | 2017-11-17 | 清华大学深圳研究生院 | A kind of method that confrontation network calculations parallax is generated using condition |
CN107358626B (en) * | 2017-07-17 | 2020-05-15 | 清华大学深圳研究生院 | Method for generating confrontation network calculation parallax by using conditions |
CN107451967B (en) * | 2017-07-25 | 2020-06-26 | 北京大学深圳研究生院 | Single image defogging method based on deep learning |
CN107451967A (en) * | 2017-07-25 | 2017-12-08 | 北京大学深圳研究生院 | A kind of single image to the fog method based on deep learning |
CN107705262A (en) * | 2017-10-10 | 2018-02-16 | 中山大学 | A kind of defogging method based on mixing priori learning model |
CN107766815A (en) * | 2017-10-18 | 2018-03-06 | 福州大学 | A kind of vision assistant service operation system and method for running |
CN107766815B (en) * | 2017-10-18 | 2021-05-18 | 福州大学 | Visual auxiliary service operation method |
CN107749052A (en) * | 2017-10-24 | 2018-03-02 | 中国科学院长春光学精密机械与物理研究所 | Image defogging method and system based on deep learning neutral net |
CN107967671B (en) * | 2017-10-30 | 2021-05-18 | 大连理工大学 | Image defogging method combining data learning and physical prior |
CN107967671A (en) * | 2017-10-30 | 2018-04-27 | 大连理工大学 | With reference to data study and the image defogging method of physics priori |
CN107798669A (en) * | 2017-12-08 | 2018-03-13 | 北京小米移动软件有限公司 | Image defogging method, device and computer-readable recording medium |
CN107798669B (en) * | 2017-12-08 | 2021-12-21 | 北京小米移动软件有限公司 | Image defogging method and device and computer readable storage medium |
CN107968962A (en) * | 2017-12-12 | 2018-04-27 | 华中科技大学 | A kind of video generation method of the non-conterminous image of two frames based on deep learning |
CN108022222A (en) * | 2017-12-15 | 2018-05-11 | 西北工业大学 | A kind of thin cloud in remote sensing image minimizing technology based on convolution-deconvolution network |
CN107945140A (en) * | 2017-12-20 | 2018-04-20 | 中国科学院深圳先进技术研究院 | A kind of image repair method, device and equipment |
CN108038832A (en) * | 2017-12-25 | 2018-05-15 | 中国科学院深圳先进技术研究院 | A kind of underwater picture Enhancement Method and system |
WO2019136772A1 (en) * | 2018-01-11 | 2019-07-18 | 深圳大学 | Blurred image restoration method, apparatus and device, and storage medium |
CN110135574A (en) * | 2018-02-09 | 2019-08-16 | 北京世纪好未来教育科技有限公司 | Neural network training method, image generating method and computer storage medium |
CN108460739A (en) * | 2018-03-02 | 2018-08-28 | 北京航空航天大学 | A kind of thin cloud in remote sensing image minimizing technology based on generation confrontation network |
CN110148088A (en) * | 2018-03-14 | 2019-08-20 | 北京邮电大学 | Image processing method, image rain removing method, device, terminal and medium |
CN110148088B (en) * | 2018-03-14 | 2023-09-19 | 北京邮电大学 | Image processing method, image rain removing method, device, terminal and medium |
CN108615226A (en) * | 2018-04-18 | 2018-10-02 | 南京信息工程大学 | A kind of image defogging method fighting network based on production |
CN108615226B (en) * | 2018-04-18 | 2022-02-11 | 南京信息工程大学 | Image defogging method based on generation type countermeasure network |
CN108564550A (en) * | 2018-04-25 | 2018-09-21 | Oppo广东移动通信有限公司 | Image processing method, device and terminal device |
CN108564550B (en) * | 2018-04-25 | 2020-10-02 | Oppo广东移动通信有限公司 | Image processing method and device and terminal equipment |
CN109272455A (en) * | 2018-05-17 | 2019-01-25 | 西安电子科技大学 | Based on the Weakly supervised image defogging method for generating confrontation network |
CN109272455B (en) * | 2018-05-17 | 2021-05-04 | 西安电子科技大学 | Image defogging method based on weak supervision generation countermeasure network |
CN108665432A (en) * | 2018-05-18 | 2018-10-16 | 百年金海科技有限公司 | A kind of single image to the fog method based on generation confrontation network |
CN109493303B (en) * | 2018-05-30 | 2021-08-17 | 湘潭大学 | Image defogging method based on generation countermeasure network |
CN109493303A (en) * | 2018-05-30 | 2019-03-19 | 湘潭大学 | A kind of image defogging method based on generation confrontation network |
CN108986044A (en) * | 2018-06-28 | 2018-12-11 | 广东工业大学 | A kind of image removes misty rain method, apparatus, equipment and storage medium |
CN108921220A (en) * | 2018-06-29 | 2018-11-30 | 国信优易数据有限公司 | Image restoration model training method, device and image recovery method and device |
WO2020019738A1 (en) * | 2018-07-24 | 2020-01-30 | 深圳先进技术研究院 | Plaque processing method and device capable of performing magnetic resonance vessel wall imaging, and computing device |
WO2020029033A1 (en) * | 2018-08-06 | 2020-02-13 | 深圳大学 | Haze image clearing method and system, and storable medium |
CN109064430A (en) * | 2018-08-07 | 2018-12-21 | 中国人民解放军陆军炮兵防空兵学院 | It is a kind of to remove cloud method and system containing cloud atlas for region is taken photo by plane |
CN109064430B (en) * | 2018-08-07 | 2020-10-09 | 中国人民解放军陆军炮兵防空兵学院 | Cloud removing method and system for aerial region cloud-containing image |
CN109410131B (en) * | 2018-09-28 | 2020-08-04 | 杭州格像科技有限公司 | Face beautifying method and system based on condition generation antagonistic neural network |
CN109359675A (en) * | 2018-09-28 | 2019-02-19 | 腾讯科技(武汉)有限公司 | Image processing method and equipment |
CN109410131A (en) * | 2018-09-28 | 2019-03-01 | 杭州格像科技有限公司 | The face U.S. face method and system of confrontation neural network are generated based on condition |
CN109410135A (en) * | 2018-10-02 | 2019-03-01 | 复旦大学 | It is a kind of to fight learning-oriented image defogging plus mist method |
CN109410135B (en) * | 2018-10-02 | 2022-03-18 | 复旦大学 | Anti-learning image defogging and fogging method |
CN109544652A (en) * | 2018-10-18 | 2019-03-29 | 江苏大学 | Add to weigh imaging method based on the nuclear magnetic resonance that depth generates confrontation neural network |
CN109509156B (en) * | 2018-10-31 | 2021-02-05 | 聚时科技(上海)有限公司 | Image defogging processing method based on generation countermeasure model |
CN109410144B (en) * | 2018-10-31 | 2020-11-27 | 聚时科技(上海)有限公司 | End-to-end image defogging processing method based on deep learning |
CN109509156A (en) * | 2018-10-31 | 2019-03-22 | 聚时科技(上海)有限公司 | A kind of image defogging processing method based on generation confrontation model |
CN109712083A (en) * | 2018-12-06 | 2019-05-03 | 南京邮电大学 | A kind of single image to the fog method based on convolutional neural networks |
CN109636754B (en) * | 2018-12-11 | 2022-05-31 | 山西大学 | Extremely-low-illumination image enhancement method based on generation countermeasure network |
CN109636754A (en) * | 2018-12-11 | 2019-04-16 | 山西大学 | Based on the pole enhancement method of low-illumination image for generating confrontation network |
TWI758223B (en) * | 2019-01-11 | 2022-03-11 | 美商谷歌有限責任公司 | Computing method with dynamic minibatch sizes and computing system and computer-readable storage media for performing the same |
CN109886210A (en) * | 2019-02-25 | 2019-06-14 | 百度在线网络技术(北京)有限公司 | A kind of traffic image recognition methods, device, computer equipment and medium |
CN109993804A (en) * | 2019-03-22 | 2019-07-09 | 上海工程技术大学 | A kind of road scene defogging method generating confrontation network based on condition |
WO2021000400A1 (en) * | 2019-07-02 | 2021-01-07 | 平安科技(深圳)有限公司 | Hospital guide similar problem pair generation method and system, and computer device |
CN110378848B (en) * | 2019-07-08 | 2021-04-20 | 中南大学 | Image defogging method based on derivative map fusion strategy |
CN110378848A (en) * | 2019-07-08 | 2019-10-25 | 中南大学 | A kind of image defogging method based on derivative figure convergence strategy |
CN112419163A (en) * | 2019-08-21 | 2021-02-26 | 中国人民解放军火箭军工程大学 | Single image weak supervision defogging method based on priori knowledge and deep learning |
WO2021046752A1 (en) * | 2019-09-11 | 2021-03-18 | Covidien Lp | Systems and methods for neural-network based color restoration |
CN110852972A (en) * | 2019-11-11 | 2020-02-28 | Suzhou University of Science and Technology | Single image rain removal method based on convolutional neural network |
CN111369472B (en) * | 2020-03-12 | 2021-04-23 | Beijing ByteDance Network Technology Co., Ltd. | Image defogging method and device, electronic equipment and medium |
CN111369472A (en) * | 2020-03-12 | 2020-07-03 | Beijing ByteDance Network Technology Co., Ltd. | Image defogging method and device, electronic equipment and medium |
WO2022047625A1 (en) * | 2020-09-01 | 2022-03-10 | Shenzhen Institute of Advanced Technology | Image processing method and system, and computer storage medium |
CN112435194A (en) * | 2020-11-30 | 2021-03-02 | Liaoning Petrochemical University | Domain-adaptive defogging method based on generative adversarial network |
CN112925932A (en) * | 2021-01-08 | 2021-06-08 | Zhejiang University | High-definition underwater laser image processing system |
CN112580611A (en) * | 2021-02-21 | 2021-03-30 | Jiangsu Quanquan Information Technology Co., Ltd. | Air pollution assessment method based on IGAN-CNN model |
CN113256541B (en) * | 2021-07-16 | 2021-09-17 | Sichuan Hongbao Runye Engineering Technology Co., Ltd. | Method for removing water mist from drilling platform monitoring images by machine learning |
CN113256541A (en) * | 2021-07-16 | 2021-08-13 | Sichuan Hongbao Runye Engineering Technology Co., Ltd. | Method for removing water mist from drilling platform monitoring images by machine learning |
Also Published As
Publication number | Publication date |
---|---|
CN106127702B (en) | 2018-08-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106127702A (en) | Image defogging algorithm based on deep learning | |
CN103971342B (en) | Image noise detection method based on convolutional neural networks | |
CN112614077B (en) | Unsupervised low-illumination image enhancement method based on generative adversarial network | |
CN105574827B (en) | Image defogging method and apparatus | |
CN105069779B (en) | Architectural ceramic surface detail pattern quality detection method | |
CN103810504B (en) | Image processing method and device | |
CN110348376A (en) | Real-time pedestrian detection method based on neural networks | |
CN109118467A (en) | Infrared and visible light image fusion method based on generative adversarial network | |
CN109118445B (en) | Underwater image enhancement method based on multi-branch generative adversarial network | |
CN106991666B (en) | Disease image recognition method suitable for multi-size image information | |
CN109559310A (en) | Power transmission and transformation inspection image quality evaluation method and system based on saliency detection | |
CN106447646A (en) | Blind quality evaluation method for unmanned aerial vehicle images | |
CN105741328A (en) | Captured image quality evaluation method based on visual perception | |
CN107016415A (en) | Color semantic classification method for color images based on fully convolutional networks | |
CN104966085A (en) | Remote sensing image region-of-interest detection method based on multi-saliency-feature fusion | |
CN111652822B (en) | Single image shadow removal method and system based on generative adversarial network | |
CN107330455A (en) | Image evaluation method | |
CN109829868A (en) | Lightweight deep learning model image defogging method, electronic device and medium | |
CN108960404A (en) | Image-based people counting method and device | |
CN106934455A (en) | Remote sensing image optical adaptation structure selection method and system based on CNN | |
CN110245695A (en) | TBM rock slag size grade recognition method based on convolutional neural networks | |
CN105303546B (en) | Affinity propagation clustering image segmentation method based on fuzzy connectedness | |
CN103955942A (en) | SVM-based depth map extraction method for 2D images | |
CN109886146A (en) | Flood information remote-sensing intelligent acquisition method and equipment based on machine vision detection | |
CN107766828A (en) | UAV landing geomorphological classification method based on wavelet convolutional neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||