CN108898562A - Mobile device image defogging method based on deep learning - Google Patents

Mobile device image defogging method based on deep learning

Info

Publication number
CN108898562A
CN108898562A (application CN201810652664.4A)
Authority
CN
China
Prior art keywords
indicate
image
pixel
rate matrix
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810652664.4A
Other languages
Chinese (zh)
Other versions
CN108898562B (en)
Inventor
杨溪 (Yang Xi)
范玉龙 (Fan Yulong)
余涛 (Yu Tao)
陈荣 (Chen Rong)
张天伦 (Zhang Tianlun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Maritime University
Original Assignee
Dalian Maritime University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Maritime University
Priority to CN201810652664.4A (patent CN108898562B)
Publication of CN108898562A
Application granted
Publication of CN108898562B
Active legal status
Anticipated expiration of legal status

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]

Abstract

The invention discloses a deep-learning-based image defogging method for mobile devices, comprising the following steps: acquire a foggy image in real time; feed the foggy image into a region detection network, which extracts features from the foggy image region by region and outputs a feature map of the foggy image; pass the feature map to a nonlinear regression network layer to obtain the medium transmissivity of each small region of the foggy image, yielding a transmission-rate matrix; pass the transmission-rate matrix through a guided-filter module, which outputs a refined transmission-rate matrix; compute the atmospheric light from the transmission-rate matrix and the grayscale map of the foggy image; and restore the defogged image from the collected foggy image using the transmission-rate matrix. Because the present invention uses a deep neural network with a region detection function as the surrogate model of the defogging method, training the network model does not require cropping the image into fixed-size blocks; this enlarges the receptive field of the network nodes in each layer and fully accounts for the relationships between regions of the image.

Description

Mobile device image defogging method based on deep learning
Technical field
The present invention relates to a deep-learning-based image defogging method for mobile devices, and belongs to the field of image defogging.
Background technique
In existing deep network designs, features are first extracted from the input image with convolution kernels of multiple scales and the extracted features are then merged, i.e., multi-scale feature fusion is applied. Maxout is used as the activation function so that the learned convolution kernels extract dark-channel information, and the final output is predicted by nonlinear regression. However, because training such a network model requires cropping the image into fixed-size blocks, the receptive field of the model is limited and the model cannot fully account for the relationships between regions of the image.
Summary of the invention
In view of the above problems, the present invention proposes a deep-learning-based image defogging method for mobile devices, comprising the following steps. S1: acquire a foggy image in real time. S2: feed the foggy image acquired in step S1 into a region detection network, which extracts fog features from the foggy image region by region and outputs a feature map of the foggy image. S3: pass the feature map output by step S2 to a nonlinear regression network to obtain the medium transmissivity of each small region of the foggy image, yielding a transmission-rate matrix. S4: refine the transmission-rate matrix to obtain a refined transmission-rate matrix. S5: compute the atmospheric light from the refined transmission-rate matrix and the grayscale map of the foggy image. S6: restore the defogged image from the acquired foggy image using the refined transmission-rate matrix and output the image.
Further, the region detection network operates by a sliding window: it extracts fog features from the foggy image region by region and extracts the relationships between adjacent regions by convolution.
Further, the matrix refinement refines the transmission-rate matrix by guided filtering.
The guided filter models the refined transmissivity as a local linear function of the grayscale image:
ti = ak Ii + bk for every i ∈ ωk, with ti = t̃i − ni;
the linear coefficients of each window minimize the cost function
E(ak,bk) = Σi∈ωk [(ak Ii + bk − t̃i)² + ε ak²];
and the output averages the coefficients of all windows covering pixel i:
ti = āi Ii + b̄i.
Here ωk denotes the window centered on pixel k; ti denotes the value of the refined transmission-rate matrix at the i-th position; Ii denotes the pixel value of the grayscale map at the i-th position; ak and bk denote the linear coefficients; t̃i denotes the value of the coarse medium transmission-rate matrix at the i-th position; ni denotes the residual; i and k index pixels; āi and b̄i denote the averages of ak and bk over the windows containing pixel i; ε denotes the penalty coefficient; and E(ak,bk) denotes the cost function over ωk.
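The refinement step above is the standard guided filter, which admits a closed-form solution per window. A minimal numpy/scipy sketch (box-filter means stand in for the window averages; the function and parameter names are illustrative, not taken from the patent):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=7, eps=1e-3):
    """Refine a coarse transmission map `src` using the grayscale `guide`.

    Implements the per-window closed form of the cost in the text,
    E(ak, bk) = sum_i ((ak*Ii + bk - t~i)^2 + eps*ak^2), followed by
    the averaged output ti = mean(ak)*Ii + mean(bk)."""
    mean = lambda x: uniform_filter(x, size=2 * radius + 1)
    mean_I, mean_t = mean(guide), mean(src)
    cov_It = mean(guide * src) - mean_I * mean_t   # window covariance of I and t~
    var_I = mean(guide * guide) - mean_I * mean_I  # window variance of the guide
    a = cov_It / (var_I + eps)                     # closed-form minimizer for ak
    b = mean_t - a * mean_I                        # ...and for bk
    return mean(a) * guide + mean(b)               # averaged local linear model
```

With a constant input the filter is the identity; with a noisy transmission map and a clean guide it suppresses the noise while following the guide's edges, which is the behavior wanted when transferring scene structure into the transmission map.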
Further, step S3 computes the medium transmissivity with a neural network structure; the neural network structure comprises a first unit Ai(x) and a second unit Bi(x).
The first unit Ai(x) is expressed in terms of the convolution
gi(x) = Wi × x, Wi ∈ R3×3×c;
the second unit Bi(x) is expressed as
Bi(x) = Fi(x) + x;
where x denotes the input of the unit; i is the module index; fia denotes a 1 pixel × 1 pixel convolution; a indexes the a-th 1 × 1 convolution kernel; rBN denotes the activation function; gi denotes a 3 pixel × 3 pixel convolution; Fi(x) denotes the stacked bottleneck-structure module; W denotes the weights of the neural network structure; R1×1×c denotes a three-dimensional tensor whose first and second dimensions are 1 pixel and whose third dimension is c, with c the number of channels of x; and R3×3×c denotes a three-dimensional tensor whose first and second dimensions are 3 pixels and whose third dimension is c.
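The two units can be illustrated with plain numpy. The exact formula of the first unit Ai(x) is not reproduced in the source, so the maxout-style reading below (elementwise maximum over several 1×1 convolutions fia) is an assumption; the residual form Bi(x) = Fi(x) + x is stated in the text:

```python
import numpy as np

def conv1x1(x, w):
    """1x1 convolution as a channel mix: x is (H, W, c_in), w is (c_in, c_out)."""
    return x @ w

def maxout_unit(x, kernels):
    """Hypothetical reading of the first unit: elementwise maximum over
    several 1x1 convolutions f_i^a (the patent's exact Ai(x) formula is
    not reproduced in the source)."""
    return np.maximum.reduce([conv1x1(x, w) for w in kernels])

def residual_unit(x, f):
    """Second unit exactly as stated in the text: Bi(x) = Fi(x) + x."""
    return f(x) + x
```

For example, maxout over the identity kernel and its negation computes the channelwise absolute value, which illustrates how maxout layers can learn min/max-style extractors such as dark-channel statistics.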
Further, the medium transmissivity Xreg is given by:
Xsliding(x)=r (Wsliding×x),Wsliding∈R3×3×c
Xreg(x)=rb(Wreg×x),Wreg∈R1×1×c
where Xsliding denotes the output of the sliding window; r denotes the ReLU activation function; Xreg denotes the output of the regression layer; rb: x → min(max(x, 0), 1); W denotes the weights of the neural network structure; Wsliding denotes the weights of the sliding-window network; and Wreg denotes the weights of the regression-layer network.
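A small sketch of the regression layer: the bounded activation rb(x) = min(max(x, 0), 1) keeps the predicted transmissivity in its valid range [0, 1]. The 1×1 "convolution" is written as a channel dot product; the shapes and names are assumptions, not from the patent:

```python
import numpy as np

def rb(x):
    # Bounded regression activation from the text: rb(x) = min(max(x, 0), 1).
    return np.clip(x, 0.0, 1.0)

def regression_layer(features, w_reg):
    """Xreg = rb(Wreg * x): a 1x1 convolution (channel dot product) whose
    output is clipped to [0, 1], the valid range of a transmissivity.
    features: (H, W, c) map from the sliding-window stage; w_reg: (c,)."""
    return rb(features @ w_reg)
```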
Further, the pixel values of the transmission-rate matrix output by step S3 are sorted in ascending order; starting from the smallest pixel value, the positions of the smallest 1% of all pixel values in the matrix form the set p0.01. The pixel maximum is found among the pixels of the foggy image's grayscale map corresponding to p0.01, giving the pixel location p of the maximum; the pixel values at positions p are then looked up in the foggy image, and the per-channel mean of the pixel values at these positions gives the atmospheric light.
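The atmospheric-light rule above can be sketched directly in numpy (the array layouts and the helper name are assumptions):

```python
import numpy as np

def estimate_atmosphere(hazy, gray, transmission, frac=0.01):
    """Estimate the atmospheric light A as described above: take the darkest
    `frac` of the transmission map (p0.01), find the brightest grayscale
    pixel(s) among those positions, and average the hazy image's channel
    values there. Assumed layouts: hazy (H, W, 3), gray (H, W),
    transmission (H, W)."""
    flat_t = transmission.ravel()
    n = max(1, int(flat_t.size * frac))             # at least one pixel
    darkest = np.argsort(flat_t)[:n]                # positions p0.01
    gray_vals = gray.ravel()[darkest]
    brightest = darkest[gray_vals == gray_vals.max()]
    ys, xs = np.unravel_index(brightest, gray.shape)
    return hazy[ys, xs].mean(axis=0)                # per-channel mean -> A
```

Restricting the brightest-pixel search to low-transmission (dense-haze) positions is what shields the estimate from white objects in the scene, as the advantages section notes.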
Further, the recovery module restores the fog-free image J(p) by the following formula:
J(p) = (I(p) − A) / t(p) + A
where I is the input foggy image, t is the medium transmissivity refined in step 4, and A is the value of the global atmospheric light estimated in step 5.
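Given the refined transmissivity t and atmospheric light A, the restoration is a pointwise inversion of the atmospheric scattering model. A sketch, with a conventional lower bound on t added as a safeguard (the floor value t_min is an assumption, not from the patent):

```python
import numpy as np

def recover(hazy, transmission, A, t_min=0.1):
    """Pointwise inversion of the scattering model: J = (I - A) / t + A.
    `t_min` floors the transmissivity so dense-haze regions do not blow up;
    this floor is a common safeguard rather than part of the patent text.
    hazy: (H, W, 3); transmission: (H, W); A: (3,)."""
    t = np.clip(transmission, t_min, 1.0)[..., None]  # broadcast over channels
    return (hazy - A) / t + A
```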
The advantages of the invention are as follows. First, by using a deep neural network with a region detection function as the surrogate model of the defogging method, the invention does not require cropping the image into fixed-size blocks when training the network model; this enlarges the receptive field of the network nodes in each layer and fully accounts for the relationships between regions of the image.
Second, cross-channel cascaded pooling and residual structures are used in the network design, so that the defogging model has better generalization ability and computational efficiency, greatly reducing the amount of computation while remaining able to process images in real time. Computing the atmospheric light from the grayscale map combined with the transmissivity avoids interference from white objects.
Finally, the model is made lightweight by network parameter reduction; the lightweight network can be deployed on a mobile phone and performs defogging on images within an acceptable computation time.
Detailed description of the invention
To illustrate the technical solutions of the embodiments of the present invention or of the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is overall structure diagram of the invention.
Specific embodiment
To make the purposes, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings:
As shown in Fig. 1, the deep-learning-based mobile device image defogging method of the present invention comprises the following steps:
S1: acquire a foggy image in real time;
S2: feed the foggy image acquired in step S1 into a region detection network, which extracts fog features from the foggy image region by region and outputs a feature map of the foggy image;
S3: pass the feature map output by step S2 to a nonlinear regression network to obtain the medium transmissivity of each small region of the foggy image, yielding a transmission-rate matrix;
S4: refine the transmission-rate matrix to obtain a refined transmission-rate matrix;
S5: compute the real-time atmospheric light from the refined transmission-rate matrix and the grayscale map of the foggy image;
S6: restore the defogged image from the collected foggy image using the refined transmission-rate matrix and output the image.
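Steps S1 through S6 compose into a short pipeline. The sketch below wires the stages together, taking the network, the refinement filter, and the atmosphere estimator as injected callables, since only their interfaces are fixed by the text (all names are illustrative):

```python
import numpy as np

def dehaze(hazy, predict_transmission, refine, estimate_A, t_min=0.1):
    """Wire steps S1-S6 together. The region detection network (S2-S3),
    the guided-filter refinement (S4), and the atmosphere estimator (S5)
    are passed in as callables; only their interfaces are fixed here."""
    gray = hazy.mean(axis=2)                  # grayscale of the hazy input
    t_coarse = predict_transmission(hazy)     # S2-S3: coarse transmission
    t = refine(gray, t_coarse)                # S4: refined transmission
    A = estimate_A(hazy, gray, t)             # S5: atmospheric light
    t = np.clip(t, t_min, 1.0)[..., None]     # floor t, broadcast channels
    return (hazy - A) / t + A                 # S6: restore the scene
```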
In this embodiment, the region detection network operates by a sliding window: it extracts fog features from the foggy image region by region and extracts the relationships between adjacent regions by convolution.
As a preferred embodiment, for defogging a real-time picture, the defogging is first implemented in an Activity named ShowImage. A click event is registered on the button btn_choose; when the button is clicked, a new Intent object is created containing the information required to obtain a photo from the album. The object is passed to the startActivityForResult method, and its result is returned in the onActivityResult() method. In that method, the path of the picture selected by the user is obtained from the returned information, the concrete picture data is loaded with the decodeFile() method of the BitmapFactory class, and the picture is displayed on the interface through an ImageView object.
The chosen picture is defogged by clicking the "defog" button on the interface. In the button's click event, the dehazeImg() method is called to complete the defogging of the picture, and the previously disabled save button is enabled. Further, inside dehazeImg(), the acquired bitmap object is first converted into a corresponding RGB three-channel picture for the convenience of subsequent steps. Next, the foggy picture is processed through the Android interface provided by TensorFlow. The model and parameters loaded by TensorFlow here form a special convolutional neural network built on the Region Proposal Network idea, whose parameters were trained through extensive experiments on a foggy/fog-free image dataset. The model also incorporates the design ideas of MobileNet, reducing its resource requirements while guaranteeing the quality of picture processing.
As a preferred embodiment, for defogging a real-time picture, the feed() method of a TensorflowInferenceInterface class object is used to input the foggy picture into the corresponding node of the network structure. The run() method of the object is then called to process the input foggy picture with the provided convolutional neural network model. Finally, the fetch() method of the object retrieves the processed atmospheric transmissivity of the foggy picture from the corresponding output node of the neural network, forming the atmospheric transmissivity map of the foggy picture. The foggy picture is converted into a matrix (Mat) object with the tools provided by the OpenCV library. The matrix object and the atmospheric transmissivity map obtained above are passed to the enhance() method, which defogs the foggy picture using the atmospheric scattering model together with the transmissivity map estimated by the neural network model. In the enhance() method, the smallest 1% of the values in the atmospheric transmissivity map are obtained and their positions located; among those positions, the position of the maximum of the foggy picture's grayscale map is found; finally, the obtained positions are applied to the foggy picture, and the average value at those positions serves as the atmospheric light for restoring the picture.
As a preferred embodiment, the matrix refinement refines the transmission-rate matrix by guided filtering. The guided filter models the refined transmissivity as a local linear function of the grayscale image:
ti = ak Ii + bk for every i ∈ ωk, with ti = t̃i − ni;
the linear coefficients of each window minimize the cost function
E(ak,bk) = Σi∈ωk [(ak Ii + bk − t̃i)² + ε ak²];
and the output averages the coefficients of all windows covering pixel i:
ti = āi Ii + b̄i.
Here ωk denotes the window centered on pixel k; ti denotes the value of the refined transmission-rate matrix at the i-th position; Ii denotes the pixel value of the grayscale map at the i-th position; ak and bk denote the linear coefficients; t̃i denotes the value of the coarse medium transmission-rate matrix at the i-th position; ni denotes the residual; i and k index pixels; āi and b̄i denote the averages of ak and bk over the windows containing pixel i; ε denotes the penalty coefficient; and E(ak,bk) denotes the cost function over ωk.
As a preferred embodiment, the guided filtering applied to the atmospheric transmissivity map makes the map finer. In this embodiment, step S3 computes the medium transmissivity with a neural network structure; the neural network structure comprises a first unit Ai(x) and a second unit Bi(x).
The first unit Ai(x) is expressed in terms of the convolution
gi(x) = Wi × x, Wi ∈ R3×3×c;
the second unit Bi(x) is expressed as
Bi(x) = Fi(x) + x;
where x denotes the input of the unit; i is the module index; fia denotes a 1 pixel × 1 pixel convolution; a indexes the a-th 1 × 1 convolution kernel; rBN denotes the activation function; gi denotes a 3 pixel × 3 pixel convolution; Fi(x) denotes the stacked bottleneck-structure module; W denotes the weights of the neural network structure; R1×1×c denotes a three-dimensional tensor whose first and second dimensions are 1 pixel and whose third dimension is c, with c the number of channels of x; and R3×3×c denotes a three-dimensional tensor whose first and second dimensions are 3 pixels and whose third dimension is c.
As a preferred embodiment, the medium transmissivity Xreg is given by:
Xsliding(x)=r (Wsliding×x),Wsliding∈R3×3×c
Xreg(x)=rb(Wreg×x),Wreg∈R1×1×c
where Xsliding denotes the output of the sliding window; r denotes the ReLU activation function; Xreg denotes the output of the regression layer; rb: x → min(max(x, 0), 1); W denotes the weights of the neural network structure; Wsliding denotes the weights of the sliding-window network; and Wreg denotes the weights of the regression-layer network.
In this embodiment, the pixel values of the transmission-rate matrix output by step S3 are sorted in ascending order; starting from the smallest pixel value, the positions of the smallest 1% of all pixel values in the matrix form the set p0.01. The pixel maximum is found among the pixels of the foggy image's grayscale map corresponding to p0.01, giving the pixel location p of the maximum; the pixel values at positions p are looked up in the foggy image, and the per-channel mean of the pixel values at these positions gives the atmospheric light.
In this embodiment, the recovery module restores the fog-free image J(p) by the following formula:
J(p) = (I(p) − A) / t(p) + A
where I is the input foggy image, t is the medium transmissivity after refinement, and A is the value of the global atmospheric light estimated in step 5. It should be understood that in other embodiments the defogged image may also be restored by other means, as long as the image can be displayed clearly.
The foregoing is only a preferred embodiment of the present invention, but the scope of protection of the present invention is not limited thereto. Any equivalent substitution or change made by a person skilled in the art within the technical scope disclosed by the present invention, according to the technical solution of the present invention and its inventive concept, shall be covered by the scope of protection of the present invention.

Claims (7)

1. A deep-learning-based mobile device image defogging method, characterized by comprising the following steps:
S1: acquire a foggy image in real time;
S2: feed the foggy image acquired in step S1 into a region detection network, which extracts fog features from the foggy image region by region and outputs a feature map of the foggy image;
S3: pass the feature map output by step S2 to a nonlinear regression network to obtain the medium transmissivity of each small region of the foggy image, yielding a transmission-rate matrix;
S4: refine the transmission-rate matrix to obtain a refined transmission-rate matrix;
S5: compute the atmospheric light from the refined transmission-rate matrix and the grayscale map of the foggy image;
S6: restore the defogged image from the collected foggy image using the refined transmission-rate matrix and output the image.
2. The deep-learning-based mobile device image defogging method according to claim 1, further characterized in that:
the region detection network operates by a sliding window, extracting fog features from the foggy image region by region and extracting the relationships between adjacent regions by convolution.
3. The deep-learning-based mobile device image defogging method according to claim 1, further characterized in that:
the matrix refinement refines the transmission-rate matrix by guided filtering;
the guided filter computes the refined transmission-rate matrix as
ti = āi Ii + b̄i,
where ti = ak Ii + bk for every i ∈ ωk, with ti = t̃i − ni, and the linear coefficients of each window minimize
E(ak,bk) = Σi∈ωk [(ak Ii + bk − t̃i)² + ε ak²].
Here ωk denotes the window centered on pixel k; ti denotes the value of the refined transmission-rate matrix at the i-th position; Ii denotes the pixel value of the grayscale map at the i-th position; ak and bk denote the linear coefficients; t̃i denotes the value of the coarse medium transmission-rate matrix at the i-th position; ni denotes the residual; i and k index pixels; āi and b̄i denote the averages of ak and bk over the windows containing pixel i; ε denotes the penalty coefficient; and E(ak,bk) denotes the cost function over ωk.
4. The deep-learning-based mobile device image defogging method according to claim 1, further characterized in that:
step S3 computes the medium transmissivity with a neural network whose structure comprises a first unit Ai(x) and a second unit Bi(x);
the first unit Ai(x) is expressed in terms of the convolution
gi(x) = Wi × x, Wi ∈ R3×3×c;
the second unit Bi(x) is expressed as
Bi(x) = Fi(x) + x;
where x denotes the input of the unit; i is the module index; fia denotes a 1 pixel × 1 pixel convolution; a indexes the a-th 1 × 1 convolution kernel; rBN denotes the activation function; gi denotes a 3 pixel × 3 pixel convolution; Fi(x) denotes the stacked bottleneck-structure module; W denotes the weights of the neural network structure; R1×1×c denotes a three-dimensional tensor whose first and second dimensions are 1 pixel and whose third dimension is c, with c the number of channels of x; and R3×3×c denotes a three-dimensional tensor whose first and second dimensions are 3 pixels and whose third dimension is c.
5. The deep-learning-based mobile device image defogging method according to claim 1, further characterized in that:
the medium transmissivity Xreg is given by:
Xsliding(x) = r(Wsliding × x), Wsliding ∈ R3×3×c;
Xreg(x) = rb(Wreg × x), Wreg ∈ R1×1×c;
where Xsliding denotes the output of the sliding window; r denotes the ReLU activation function; Xreg denotes the output of the regression layer; rb: x → min(max(x, 0), 1); W denotes the weights of the neural network structure; Wsliding denotes the weights of the sliding-window network; and Wreg denotes the weights of the regression-layer network.
6. The deep-learning-based mobile device image defogging method according to claim 1, further characterized in that:
the pixel values of the transmission-rate matrix output by step S3 are sorted in ascending order; starting from the smallest pixel value, the positions of the smallest 1% of all pixel values in the matrix form the set p0.01; the pixel maximum is found among the pixels of the foggy image's grayscale map corresponding to p0.01, giving the pixel location p of the maximum; the pixel values at positions p are looked up in the foggy image, and the per-channel mean of the pixel values at these positions gives the atmospheric light.
7. The deep-learning-based mobile device image defogging method according to claim 1, further characterized in that:
the recovery module restores the fog-free image J(p) by the following formula:
J(p) = (I(p) − A) / t(p) + A
where I is the input foggy image, t is the medium transmissivity after refinement, and A is the value of the global atmospheric light estimated in step 5.
CN201810652664.4A 2018-06-22 2018-06-22 Mobile equipment image defogging method based on deep learning Active CN108898562B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810652664.4A CN108898562B (en) 2018-06-22 2018-06-22 Mobile equipment image defogging method based on deep learning


Publications (2)

Publication Number Publication Date
CN108898562A true CN108898562A (en) 2018-11-27
CN108898562B CN108898562B (en) 2022-04-12

Family

ID=64345802

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810652664.4A Active CN108898562B (en) 2018-06-22 2018-06-22 Mobile equipment image defogging method based on deep learning

Country Status (1)

Country Link
CN (1) CN108898562B (en)


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102750674A (en) * 2012-04-26 2012-10-24 长春理工大学 Video image defogging method based on self-adapting allowance
CN104008527A (en) * 2014-04-16 2014-08-27 南京航空航天大学 Method for defogging single image
CN105931220A (en) * 2016-04-13 2016-09-07 南京邮电大学 Dark channel experience and minimal image entropy based traffic smog visibility detection method
CN106169176A (en) * 2016-06-27 2016-11-30 上海集成电路研发中心有限公司 A kind of image defogging method
CN106251301A (en) * 2016-07-26 2016-12-21 北京工业大学 A kind of single image defogging method based on dark primary priori
CN106600560A (en) * 2016-12-22 2017-04-26 福州大学 Image defogging method for automobile data recorder
CN107301624A (en) * 2017-06-05 2017-10-27 天津大学 The convolutional neural networks defogging algorithm pre-processed based on region division and thick fog
CN107451966A (en) * 2017-07-25 2017-12-08 四川大学 A kind of real-time video defogging method realized using gray-scale map guiding filtering
CN107749052A (en) * 2017-10-24 2018-03-02 中国科学院长春光学精密机械与物理研究所 Image defogging method and system based on deep learning neutral net


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
SHAOQING REN ET AL.: "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks", 《ARXIV:1506.01497V3 [CS.CV]》 *
TAO ZHANG ET AL.: "Robust Image Dehazing Using a Guided Filter", 《2015 12TH INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS AND KNOWLEDGE DISCOVERY (FSKD)》 *
HU Chenhui et al.: "Single-image defogging algorithm optimized by guided filtering", 《传感器与微系统》 (Transducer and Microsystem Technologies) *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11024062B2 (en) 2018-06-11 2021-06-01 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for evaluating image quality
CN109829868A (en) * 2019-02-28 2019-05-31 华南理工大学 A kind of lightweight deep learning model image defogging method, electronic equipment and medium
CN110738623A (en) * 2019-10-18 2020-01-31 电子科技大学 multistage contrast stretching defogging method based on transmission spectrum guidance
CN111626960A (en) * 2020-05-29 2020-09-04 Oppo广东移动通信有限公司 Image defogging method, terminal and computer storage medium
CN112419166A (en) * 2020-09-24 2021-02-26 南京晓庄学院 Image defogging method based on combination of local region segmentation and SCN
CN112419166B (en) * 2020-09-24 2024-01-05 南京晓庄学院 Image defogging method based on combination of local region segmentation and SCN
CN113643199A (en) * 2021-07-27 2021-11-12 上海交通大学 Image defogging method and system under foggy condition based on diffusion information
CN113643199B (en) * 2021-07-27 2023-10-27 上海交通大学 Image defogging method and system under foggy condition based on diffusion information
CN114648467A (en) * 2022-05-18 2022-06-21 中山大学深圳研究院 Image defogging method and device, terminal equipment and computer readable storage medium

Also Published As

Publication number Publication date
CN108898562B (en) 2022-04-12

Similar Documents

Publication Publication Date Title
CN108898562A (en) A kind of mobile device image defogging method based on deep learning
CN109493350B (en) Portrait segmentation method and device
CN111985343A (en) Method for constructing behavior recognition deep network model and behavior recognition method
CN109325954A (en) Image partition method, device and electronic equipment
CN108765278A (en) A kind of image processing method, mobile terminal and computer readable storage medium
CN106650615B (en) A kind of image processing method and terminal
CN113344806A (en) Image defogging method and system based on global feature fusion attention network
CN105608456A (en) Multi-directional text detection method based on full convolution network
CN110188747A (en) A kind of sloped correcting method of text image, device and image processing equipment
CN112597941A (en) Face recognition method and device and electronic equipment
CN110399788A (en) AU detection method, device, electronic equipment and the storage medium of image
CN110263768A (en) A kind of face identification method based on depth residual error network
CN107749048B (en) Image correction system and method, and color blindness image correction system and method
CN111080670A (en) Image extraction method, device, equipment and storage medium
CN112581409A (en) Image defogging method based on end-to-end multiple information distillation network
CN111833360B (en) Image processing method, device, equipment and computer readable storage medium
CN110288715A (en) Virtual necklace try-in method, device, electronic equipment and storage medium
WO2021103474A1 (en) Image processing method and apparatus, storage medium and electronic apparatus
CN110942037A (en) Action recognition method for video analysis
CN112861970A (en) Fine-grained image classification method based on feature fusion
CN113361387A (en) Face image fusion method and device, storage medium and electronic equipment
CN113936309A (en) Facial block-based expression recognition method
CN108537109A (en) Monocular camera sign Language Recognition Method based on OpenPose
Cambuim et al. An efficient static gesture recognizer embedded system based on ELM pattern recognition algorithm
CN113435408A (en) Face living body detection method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant