CN109242788A - Low-light image optimization method based on an encoding-decoding convolutional neural network - Google Patents

Low-light image optimization method based on an encoding-decoding convolutional neural network Download PDF

Info

Publication number
CN109242788A
Authority
CN
China
Prior art keywords
image
low light
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810952196.2A
Other languages
Chinese (zh)
Inventor
钱慧
陈晓旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fuzhou University
Original Assignee
Fuzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuzhou University filed Critical Fuzhou University
Priority to CN201810952196.2A priority Critical patent/CN109242788A/en
Publication of CN109242788A publication Critical patent/CN109242788A/en
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to a low-light image optimization method based on an encoding-decoding convolutional neural network. First, low-light images and the corresponding normal-illumination images are used as the training data set; then, in a data-driven manner, a U-Net-type neural network model is trained with the training data set of step S1 so that it can learn the data features autonomously; finally, the acquired low-light image is reconstructed and optimized by the trained U-Net-type neural network model, realizing image reconstruction. The present invention can autonomously learn the important features in low-light images without manual intervention, even when the reconstruction algorithm cannot be modeled accurately.

Description

Low-light image optimization method based on an encoding-decoding convolutional neural network
Technical field
The present invention relates to the field of image processing, and in particular to a low-light image optimization method based on an encoding-decoding convolutional neural network.
Background art
In real life, because of the limitations of environmental factors and imaging-system conditions, images often contain a great deal of noise, which strongly interferes with the accuracy of human visual analysis and judgment. At present, many noisy images can be restored well by reconstruction algorithms. Under low-light conditions, however, few photons are captured and the signal-to-noise ratio is relatively low, so image quality is worse than under normal conditions. Optimizing image quality in low-light environments has always had broad market application scenarios, yet up to now it remains a major challenge.
Many solutions have been proposed for optimizing image quality in low-light environments. One approach is to obtain clearer images by adjusting the imaging hardware, for example raising the sensitivity to increase brightness, or increasing the signal-to-noise ratio in low light by opening the aperture, prolonging the exposure time, or using a flash. These methods have their respective drawbacks: raising the sensitivity amplifies the noise as well, and prolonging the exposure time can blur the image because of camera shake or target motion, so the image quality is often still poor.
Another approach is to reconstruct the low-light image with an image-processing algorithm to improve its brightness and signal-to-noise ratio. One effective image-processing algorithm is three-dimensional block matching, which first divides the image into blocks of the same size, groups similar blocks into three-dimensional matrices according to their similarity, processes them with a collaborative filtering method, and finally returns the inverse-transformed result to the original image, thereby obtaining an optimized reconstructed image. However, this method is a non-blind reconstruction and requires manual intervention to explicitly specify the noise-level parameter of the current image: if the noise level is set too small, much noise may remain in the image; if it is set too large, the image may be over-smoothed, so the applicable scenarios are very limited. Another algorithm is the burst imaging method proposed by Hasinoff et al., which obtains good results by aligning and merging multiple low-light images, but the dense correspondence estimation it requires increases the complexity of the model to a certain extent, so it is not suitable for processing video images.
Summary of the invention
In view of this, the purpose of the present invention is to provide a low-light image optimization method based on an encoding-decoding convolutional neural network that can autonomously learn the important features in low-light images without manual intervention, even when the reconstruction algorithm cannot be modeled accurately.
The present invention is realized with the following scheme: a low-light image optimization method based on an encoding-decoding convolutional neural network, specifically comprising the following steps:
Step S1: use low-light images and the corresponding normal-illumination images as the training data set;
Step S2: in a data-driven manner, train a U-Net-type neural network model with the training data set of step S1 so that it can learn the data features autonomously;
Step S3: reconstruct and optimize the acquired low-light image with the trained U-Net-type neural network model, realizing image reconstruction; wherein the reconstruction-and-optimization formula of the trained U-Net-type neural network model is as follows:
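The formula appears only as an image in the source; a plausible LaTeX reconstruction, inferred from the variable definitions in the next paragraph (the exact form is an assumption), is:

```latex
R_{\mathrm{learn}} \;=\; \operatorname*{arg\,min}_{\Theta} \; \sum_{n=1}^{N} f\!\left(R_{\theta}(y_{n}),\, x_{n}\right) \;+\; g(\theta)
```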
wherein R_learn is the U-Net-type neural network model after learning, f(·) is the loss function, g(·) is the regularization parameter function (whose purpose is to avoid over-fitting), Θ is the set of all parameters of the deep neural network model, θ is an element of Θ, x_n is the image captured under normal lighting conditions, y_n is the image captured in the low-light environment, R_θ is the U-Net-type neural network model, and N is the number of image pixels. Through multiple training cycles the model reaches its best performance; once training is completed, R_learn can produce an optimized reconstructed image from a low-light image.
Further, the left half of the U-Net-type neural network model is a contracting path used to extract data features, and the right half is an expanding path used to increase the dimension of the feature maps. To locate fine texture details accurately, the feature maps of the contracting path are combined with the feature maps of the same dimension on the symmetric expanding path, and the final output has exactly the same dimensions and number of channels as the input, realizing end-to-end image reconstruction.
Further, the U-Net-type neural network model is a fully convolutional network structure. The input image first passes through 7 convolutional layers, each of which consists of two convolutions with 3 × 3 kernels and stride 1; zero padding is used so that the input and output dimensions of each convolution are identical, and the feature map after each convolution is non-linearly mapped by an activation function. Convolutional layers 1 to 4 are called the contracting path, and between these layers 2 × 2 filters with stride 2 perform max pooling to extract the main features and reduce the dimension of the feature maps. Convolutional layers 4 to 7 are called the expanding path, and between these layers 2 × 2 filters with stride 2 perform deconvolution to increase the dimension of the feature maps; after being merged with the feature map of the convolutional layer of the same dimension in the contracting path, the subsequent convolution operation is performed. After the 7 convolutional layers, a feature map with the same dimension as the input is obtained, and one layer of 1 × 1 convolution with stride 1 reconstructs the output image.
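To make this architecture concrete, the following is a minimal PyTorch sketch of a 7-block encoder-decoder of this kind; the channel widths (32/64/128/256) and the class name UNetLowLight are illustrative assumptions, since the description does not fix them:

```python
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    # Two 3x3, stride-1 convolutions with zero padding and ReLU activations,
    # so the spatial size of the feature map is unchanged.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, stride=1, padding=1),
        nn.ReLU(inplace=True),
    )

class UNetLowLight(nn.Module):
    """7-block U-Net-style encoder-decoder; channel widths are illustrative."""
    def __init__(self, in_channels=3, out_channels=3):
        super().__init__()
        # Contracting path (blocks 1-4), 2x2 max pooling with stride 2 between blocks.
        self.enc1 = double_conv(in_channels, 32)
        self.enc2 = double_conv(32, 64)
        self.enc3 = double_conv(64, 128)
        self.enc4 = double_conv(128, 256)
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        # Expanding path (blocks 5-7), 2x2 transposed convolution with stride 2,
        # followed by concatenation with the same-resolution contracting feature map.
        self.up3 = nn.ConvTranspose2d(256, 128, kernel_size=2, stride=2)
        self.dec3 = double_conv(256, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec2 = double_conv(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = double_conv(64, 32)
        # Final 1x1, stride-1 convolution reconstructs the output image.
        self.out = nn.Conv2d(32, out_channels, kernel_size=1, stride=1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        e4 = self.enc4(self.pool(e3))
        d3 = self.dec3(torch.cat([self.up3(e4), e3], dim=1))
        d2 = self.dec2(torch.cat([self.up2(d3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.out(d1)
```

For example, UNetLowLight()(torch.randn(1, 3, 256, 256)) returns a tensor of the same shape (1, 3, 256, 256), matching the claim that the output has the same dimensions and channel count as the input.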
Further, step S2 is specifically:
Step S21: take a low-light image as input and obtain an output image through the forward propagation of the U-Net-type neural network model;
Step S22: compute the loss value between the output image and the corresponding normal-illumination image with an L1 loss function, and update the model parameter values with the Adam optimizer;
Step S23: iterate the whole model for 3000 epochs; when the iterations end, the model training is complete.
Further, step S21 also includes: first cropping the low-light image samples and the normal-illumination image samples in the same region, the cropped image size being 256 × 256, and performing operations including flipping on the cropped images to enlarge the data set and prevent over-fitting.
Further, in step S23, of the 3000 epochs, the learning rate of the first 1500 epochs is 0.0001 and the learning rate of the last 1500 epochs is set to 0.00001.
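A minimal training-loop sketch of steps S21 to S23 (aligned 256 × 256 crops, random flips, L1 loss, Adam, 3000 epochs with the learning rate dropped from 0.0001 to 0.00001 after epoch 1500) could look as follows; the helper paired_crop_and_flip and the train_pairs list are hypothetical placeholders for the actual data pipeline, and UNetLowLight refers to the sketch above:

```python
import random
import torch
import torch.nn as nn
import torchvision.transforms.functional as TF

def paired_crop_and_flip(low, normal, size=256):
    # Hypothetical augmentation helper: crop both images in the SAME region,
    # then apply the same random flip, so the pair stays aligned.
    _, h, w = low.shape
    top, left = random.randint(0, h - size), random.randint(0, w - size)
    low = TF.crop(low, top, left, size, size)
    normal = TF.crop(normal, top, left, size, size)
    if random.random() < 0.5:
        low, normal = TF.hflip(low), TF.hflip(normal)
    return low, normal

model = UNetLowLight()                          # model from the sketch above
criterion = nn.L1Loss()                         # L1 loss against the normal-illumination image
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(3000):
    if epoch == 1500:                           # lower the learning rate for the last 1500 epochs
        for group in optimizer.param_groups:
            group["lr"] = 1e-5
    for low_img, normal_img in train_pairs:     # train_pairs: hypothetical list of aligned (C, H, W) tensors
        low_crop, normal_crop = paired_crop_and_flip(low_img, normal_img)
        output = model(low_crop.unsqueeze(0))                # forward propagation (step S21)
        loss = criterion(output, normal_crop.unsqueeze(0))   # L1 loss (step S22)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()                                     # Adam parameter update
```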
In a data-driven manner, the present invention trains an end-to-end encoding-decoding convolutional neural network to obtain the model parameters. Compared with traditional image reconstruction algorithms, it can autonomously learn the important features in low-light images without manual intervention, even when the reconstruction algorithm cannot be modeled accurately. Once training of the convolutional neural network model is completed, low-light image reconstruction can be realized quickly. At the same time, in terms of the quality of the optimized low-light images, it has an obvious advantage over previous algorithms.
Compared with the prior art, the present invention has the following beneficial effects:
1. The present invention processes images with a deep neural network algorithm and is applicable to any imaging system currently on the market; compared with physical hardware optimization methods, it can reduce the cost of the imaging system.
2. Compared with traditional algorithms for optimizing image quality in low-light environments, the present invention uses a deep neural network model for image processing, which can autonomously learn the main features of the data without manual intervention; for scenes such as low-light imaging, where an accurate reconstruction model cannot be built, it has a clear advantage.
3. The present invention has a certain generalization ability: when images obtained with an imaging system different from that of the data set are used as input, good results can still be obtained to a certain extent.
4. The present invention is suitable for imaging in low-light environments and optimizes the image quality of the imaging system under low light; it has good application prospects in target detection and tracking and in object recognition under low-brightness conditions.
Description of the drawings
Fig. 1 is a schematic diagram of the neural network structure of the embodiment of the present invention.
Fig. 2 is a flow diagram of the neural network training of the embodiment of the present invention.
Fig. 3 is a usage flow diagram of the embodiment of the present invention.
Specific embodiment
The present invention will be further described below with reference to the accompanying drawings and embodiments.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the present application. Unless otherwise indicated, all technical and scientific terms used herein have the same meaning as commonly understood by a person of ordinary skill in the technical field to which the present application belongs.
It should be noted that the terms used herein are intended only to describe specific embodiments and are not intended to limit the exemplary embodiments according to the present application. As used herein, unless the context clearly indicates otherwise, the singular form is also intended to include the plural form; in addition, it should be understood that when the terms "comprising" and/or "including" are used in this specification, they indicate the presence of features, steps, operations, devices, components and/or combinations thereof.
This embodiment provides a low-light image optimization method based on an encoding-decoding convolutional neural network, specifically comprising the following steps:
Step S1: use low-light images and the corresponding normal-illumination images as the training data set;
Step S2: in a data-driven manner, train a U-Net-type neural network model with the training data set of step S1 so that it can learn the data features autonomously;
Step S3: reconstruct and optimize the acquired low-light image with the trained U-Net-type neural network model, realizing image reconstruction; the reconstruction-and-optimization formula of the trained U-Net-type neural network model is the one given in the summary above, wherein R_learn is the U-Net-type neural network model after learning, f(·) is the loss function, g(·) is the regularization parameter function (whose purpose is to avoid over-fitting), Θ is the set of all parameters of the deep neural network model, θ is an element of Θ, x_n is the image captured under normal lighting conditions, y_n is the image captured in the low-light environment, R_θ is the U-Net-type neural network model, and N is the number of image pixels. Through multiple training cycles the model reaches its best performance; once training is completed, R_learn can produce an optimized reconstructed image from a low-light image.
As shown in Fig. 1, the model of this embodiment is a variant of the encoding-decoding convolutional neural network; the whole model presents a U shape in its structure and is therefore called the U-Net model. In this embodiment, the left half of the U-Net-type neural network model is a contracting path used to extract data features, and the right half is an expanding path used to increase the dimension of the feature maps; to locate fine texture details accurately, the feature maps of the contracting path are combined with the feature maps of the same dimension on the symmetric expanding path, and the final output has the same dimensions and number of channels as the input, realizing end-to-end image reconstruction.
In this embodiment, the U-Net-type neural network model is a fully convolutional network structure. The input image first passes through 7 convolutional layers, each of which consists of two convolutions with 3 × 3 kernels and stride 1; zero padding is used so that the input and output dimensions of each convolution are identical, and the feature map after each convolution is non-linearly mapped by an activation function. Convolutional layers 1 to 4 are called the contracting path, and between these layers 2 × 2 filters with stride 2 perform max pooling to extract the main features and reduce the dimension of the feature maps. Convolutional layers 4 to 7 are called the expanding path, and between these layers 2 × 2 filters with stride 2 perform deconvolution to increase the dimension of the feature maps; after being merged with the feature map of the convolutional layer of the same dimension in the contracting path, the subsequent convolution operation is performed. After the 7 convolutional layers, a feature map with the same dimension as the input is obtained, and one layer of 1 × 1 convolution with stride 1 reconstructs the output image.
Preferably, the activation function is the rectified linear unit (ReLU).
As shown in Fig. 2, in this embodiment step S2 is specifically:
Step S21: take a low-light image as input and obtain an output image through the forward propagation of the U-Net-type neural network model;
Step S22: compute the loss value between the output image and the corresponding normal-illumination image with an L1 loss function, and update the model parameter values with the Adam optimizer;
Step S23: iterate the whole model for 3000 epochs; when the iterations end, the model training is complete.
In this embodiment, step S21 also includes: first cropping the low-light image samples and the normal-illumination image samples in the same region, the cropped image size being 256 × 256, and performing operations including flipping on the cropped images to enlarge the data set and prevent over-fitting.
In this embodiment, in step S23, of the 3000 epochs, the learning rate of the first 1500 epochs is 0.0001 and the learning rate of the last 1500 epochs is set to 0.00001.
In particular, as shown in Fig. 3, in this embodiment an image is first acquired by the imaging system in a low-brightness scene, and the acquired image data is transferred to the deep-neural-network image-processing module, which contains the trained U-Net model parameters. The input image is optimized after passing through the deep-neural-network image-processing module to yield the reconstructed image, which, compared with the image captured in the low-brightness scene, is significantly improved both in terms of human vision and in image-quality evaluation indices.
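As an illustration of this deployment flow, an inference-time sketch could look as follows; the checkpoint path "trained_unet.pth", the file names, and the assumption of a 3-channel input whose height and width are multiples of 8 (required by the three pooling stages in the model sketch above) are all hypothetical:

```python
import torch
from torchvision.io import read_image
from torchvision.utils import save_image

# Load the trained U-Net parameters into the image-processing module (paths are hypothetical).
model = UNetLowLight()
model.load_state_dict(torch.load("trained_unet.pth", map_location="cpu"))
model.eval()

# Read an image captured by the imaging system in a low-brightness scene,
# reconstruct it with the trained model, and save the optimized result.
low = read_image("low_light_capture.png").float() / 255.0   # (C, H, W) tensor scaled to [0, 1]
with torch.no_grad():
    optimized = model(low.unsqueeze(0)).clamp(0.0, 1.0)
save_image(optimized, "reconstructed.png")
```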
In a data-driven manner, this embodiment trains an end-to-end encoding-decoding convolutional neural network to obtain the model parameters. Compared with traditional image reconstruction algorithms, it can autonomously learn the important features in low-light images without manual intervention, even when the reconstruction algorithm cannot be modeled accurately. Once training of the convolutional neural network model is completed, low-light image reconstruction can be realized quickly. At the same time, in terms of the quality of the optimized low-light images, it has an obvious advantage over previous algorithms.
The above are only preferred embodiments of the present invention; all equivalent changes and modifications made within the scope of the patent claims of the present invention shall be covered by the present invention.

Claims (6)

1. A low-light image optimization method based on an encoding-decoding convolutional neural network, characterized by comprising the following steps:
Step S1: use low-light images and the corresponding normal-illumination images as the training data set;
Step S2: in a data-driven manner, train a U-Net-type neural network model with the training data set of step S1 so that it can learn the data features autonomously;
Step S3: reconstruct and optimize the acquired low-light image with the trained U-Net-type neural network model, realizing image reconstruction; wherein the reconstruction-and-optimization formula of the trained U-Net-type neural network model is as follows:
wherein R_learn is the U-Net-type neural network model after learning, f(·) is the loss function, g(·) is the regularization parameter function, Θ is the set of all parameters of the deep neural network model, θ is an element of Θ, x_n is the image captured under normal lighting conditions, y_n is the image captured in the low-light environment, R_θ is the U-Net-type neural network model, and N is the number of image pixels.
2. The low-light image optimization method based on an encoding-decoding convolutional neural network according to claim 1, characterized in that: the left half of the U-Net-type neural network model is a contracting path used to extract data features, and the right half is an expanding path used to increase the dimension of the feature maps; to locate fine texture details accurately, the feature maps of the contracting path are combined with the feature maps of the same dimension on the symmetric expanding path, and the final output has the same dimensions and number of channels as the input, realizing end-to-end image reconstruction.
3. The low-light image optimization method based on an encoding-decoding convolutional neural network according to claim 2, characterized in that: the U-Net-type neural network model is a fully convolutional network structure; the input image first passes through 7 convolutional layers, each of which consists of two convolutions with 3 × 3 kernels and stride 1; zero padding is used so that the input and output dimensions of each convolution are identical, and the feature map after each convolution is non-linearly mapped by an activation function; convolutional layers 1 to 4 are called the contracting path, and between these layers 2 × 2 filters with stride 2 perform max pooling to extract the main features and reduce the dimension of the feature maps; convolutional layers 4 to 7 are called the expanding path, and between these layers 2 × 2 filters with stride 2 perform deconvolution to increase the dimension of the feature maps; after being merged with the feature map of the convolutional layer of the same dimension in the contracting path, the subsequent convolution operation is performed; after the 7 convolutional layers, a feature map with the same dimension as the input is obtained, and one layer of 1 × 1 convolution with stride 1 reconstructs the output image.
4. The low-light image optimization method based on an encoding-decoding convolutional neural network according to claim 1, characterized in that step S2 is specifically:
Step S21: take a low-light image as input and obtain an output image through the forward propagation of the U-Net-type neural network model;
Step S22: compute the loss value between the output image and the corresponding normal-illumination image with an L1 loss function, and update the model parameter values with the Adam optimizer;
Step S23: iterate the whole model for 3000 epochs; when the iterations end, the model training is complete.
5. The low-light image optimization method based on an encoding-decoding convolutional neural network according to claim 4, characterized in that step S21 further includes: first cropping the low-light image samples and the normal-illumination image samples in the same region, the cropped image size being 256 × 256, and performing operations including flipping on the cropped images to enlarge the data set and prevent over-fitting.
6. The low-light image optimization method based on an encoding-decoding convolutional neural network according to claim 4, characterized in that: in step S23, of the 3000 epochs, the learning rate of the first 1500 epochs is 0.0001 and the learning rate of the last 1500 epochs is set to 0.00001.
CN201810952196.2A 2018-08-21 2018-08-21 Low-light image optimization method based on an encoding-decoding convolutional neural network Pending CN109242788A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810952196.2A CN109242788A (en) 2018-08-21 2018-08-21 Low-light image optimization method based on an encoding-decoding convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810952196.2A CN109242788A (en) 2018-08-21 2018-08-21 Low-light image optimization method based on an encoding-decoding convolutional neural network

Publications (1)

Publication Number Publication Date
CN109242788A true CN109242788A (en) 2019-01-18

Family

ID=65071688

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810952196.2A Pending CN109242788A (en) 2018-08-21 2018-08-21 Low-light image optimization method based on an encoding-decoding convolutional neural network

Country Status (1)

Country Link
CN (1) CN109242788A (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109978962A (en) * 2019-04-09 2019-07-05 广州市交通高级技工学校(广州市交通技师学院) A kind of low contrast indicating value digital image recognition method towards the calibrating of darkroom illumination photometer
CN110033419A (en) * 2019-04-17 2019-07-19 山东超越数控电子股份有限公司 A kind of processing method being adapted to warship basic image defogging
CN110060251A (en) * 2019-04-26 2019-07-26 福州大学 A kind of building surface crack detecting method based on U-Net
CN110070498A (en) * 2019-03-12 2019-07-30 浙江工业大学 A kind of image enchancing method based on convolution self-encoding encoder
CN110097106A (en) * 2019-04-22 2019-08-06 苏州千视通视觉科技股份有限公司 The low-light-level imaging algorithm and device of U-net network based on deep learning
CN110097515A (en) * 2019-04-22 2019-08-06 苏州千视通视觉科技股份有限公司 Low-light (level) image processing algorithm and device based on deep learning and spatio-temporal filtering
CN110163815A (en) * 2019-04-22 2019-08-23 桂林电子科技大学 Low-light (level) restoring method based on multistage variation self-encoding encoder
CN110378845A (en) * 2019-06-17 2019-10-25 杭州电子科技大学 A kind of image repair method under extreme condition based on convolutional neural networks
CN110706173A (en) * 2019-09-27 2020-01-17 中国计量大学 Atomic force microscope image blind restoration method based on convolutional neural network
CN110728643A (en) * 2019-10-18 2020-01-24 上海海事大学 Low-illumination band noise image optimization method based on convolutional neural network
CN111047532A (en) * 2019-12-06 2020-04-21 广东启迪图卫科技股份有限公司 Low-illumination video enhancement method based on 3D convolutional neural network
CN111931857A (en) * 2020-08-14 2020-11-13 桂林电子科技大学 MSCFF-based low-illumination target detection method
WO2020238123A1 (en) * 2019-05-31 2020-12-03 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method, system, and computer-readable medium for improving color quality of images
US11057634B2 (en) 2019-05-15 2021-07-06 Disney Enterprises, Inc. Content adaptive optimization for neural data compression
WO2021232195A1 (en) * 2020-05-18 2021-11-25 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for image optimization

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105519109A (en) * 2013-08-06 2016-04-20 微软技术许可有限责任公司 Encoding video captured in low light
CN105844238A (en) * 2016-03-23 2016-08-10 乐视云计算有限公司 Method and system for discriminating videos
CN105894046A (en) * 2016-06-16 2016-08-24 北京市商汤科技开发有限公司 Convolutional neural network training and image processing method and system and computer equipment

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105519109A (en) * 2013-08-06 2016-04-20 微软技术许可有限责任公司 Encoding video captured in low light
CN105844238A (en) * 2016-03-23 2016-08-10 乐视云计算有限公司 Method and system for discriminating videos
CN105894046A (en) * 2016-06-16 2016-08-24 北京市商汤科技开发有限公司 Convolutional neural network training and image processing method and system and computer equipment

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
KIN GWN LORE ET AL.: "LLNet: A Deep Autoencoder Approach to Natural Low-light Image Enhancement", Pattern Recognition *
OLAF RONNEBERGER ET AL.: "U-Net: Convolutional Networks for Biomedical Image Segmentation", arXiv *
ZHANG, C. ET AL.: "CT artifact reduction via U-net CNN", SPIE 10574, Medical Imaging 2018: Image Processing *
LIU CHAO ET AL.: "Restoration of low-light-level images under ultra-low illumination with a deep convolutional auto-encoding network", Optics and Precision Engineering *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110070498A (en) * 2019-03-12 2019-07-30 浙江工业大学 A kind of image enchancing method based on convolution self-encoding encoder
CN109978962A (en) * 2019-04-09 2019-07-05 广州市交通高级技工学校(广州市交通技师学院) A kind of low contrast indicating value digital image recognition method towards the calibrating of darkroom illumination photometer
CN109978962B (en) * 2019-04-09 2022-05-17 广州市交通高级技工学校(广州市交通技师学院) Low-contrast indicating value image intelligent identification method for darkroom illuminometer calibration
CN110033419A (en) * 2019-04-17 2019-07-19 山东超越数控电子股份有限公司 A kind of processing method being adapted to warship basic image defogging
CN110097106A (en) * 2019-04-22 2019-08-06 苏州千视通视觉科技股份有限公司 The low-light-level imaging algorithm and device of U-net network based on deep learning
CN110097515A (en) * 2019-04-22 2019-08-06 苏州千视通视觉科技股份有限公司 Low-light (level) image processing algorithm and device based on deep learning and spatio-temporal filtering
CN110163815A (en) * 2019-04-22 2019-08-23 桂林电子科技大学 Low-light (level) restoring method based on multistage variation self-encoding encoder
CN110163815B (en) * 2019-04-22 2022-06-24 桂林电子科技大学 Low-illumination reduction method based on multi-stage variational self-encoder
CN110060251A (en) * 2019-04-26 2019-07-26 福州大学 A kind of building surface crack detecting method based on U-Net
US11057634B2 (en) 2019-05-15 2021-07-06 Disney Enterprises, Inc. Content adaptive optimization for neural data compression
WO2020238123A1 (en) * 2019-05-31 2020-12-03 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method, system, and computer-readable medium for improving color quality of images
US11991487B2 (en) 2019-05-31 2024-05-21 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method, system, and computer-readable medium for improving color quality of images
CN110378845B (en) * 2019-06-17 2021-05-25 杭州电子科技大学 Image restoration method based on convolutional neural network under extreme conditions
CN110378845A (en) * 2019-06-17 2019-10-25 杭州电子科技大学 A kind of image repair method under extreme condition based on convolutional neural networks
CN110706173A (en) * 2019-09-27 2020-01-17 中国计量大学 Atomic force microscope image blind restoration method based on convolutional neural network
CN110728643A (en) * 2019-10-18 2020-01-24 上海海事大学 Low-illumination band noise image optimization method based on convolutional neural network
CN111047532A (en) * 2019-12-06 2020-04-21 广东启迪图卫科技股份有限公司 Low-illumination video enhancement method based on 3D convolutional neural network
WO2021232195A1 (en) * 2020-05-18 2021-11-25 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for image optimization
CN111931857B (en) * 2020-08-14 2022-09-02 桂林电子科技大学 MSCFF-based low-illumination target detection method
CN111931857A (en) * 2020-08-14 2020-11-13 桂林电子科技大学 MSCFF-based low-illumination target detection method

Similar Documents

Publication Publication Date Title
CN109242788A (en) Low-light image optimization method based on an encoding-decoding convolutional neural network
US20200151858A1 (en) Image contrast enhancement method and device, and storage medium
CN108550115B (en) Image super-resolution reconstruction method
Zhou et al. Lednet: Joint low-light enhancement and deblurring in the dark
CN110378845B (en) Image restoration method based on convolutional neural network under extreme conditions
CN109712083A (en) A kind of single image to the fog method based on convolutional neural networks
CN109255758B (en) Image enhancement method based on all 1 x 1 convolution neural network
CN111915525B (en) Low-illumination image enhancement method capable of generating countermeasure network based on improved depth separation
CN107203985B (en) A kind of more exposure image fusion methods under end-to-end deep learning frame
CN112001863B (en) Underexposure image recovery method based on deep learning
CN108280814A (en) Light field image angle super-resolution rate method for reconstructing based on perception loss
CN113168670A (en) Bright spot removal using neural networks
CN111402145B (en) Self-supervision low-illumination image enhancement method based on deep learning
CN112308803B (en) Self-supervision low-illumination image enhancement and denoising method based on deep learning
Pang et al. Fan: Frequency aggregation network for real image super-resolution
CN114897752B (en) Single-lens large-depth-of-field computing imaging system and method based on deep learning
CN109584188A (en) A kind of image defogging method based on convolutional neural networks
CN106296618A (en) A kind of color image defogging method based on Gaussian function weighted histogram regulation
CN114219722A (en) Low-illumination image enhancement method by utilizing time-frequency domain hierarchical processing
CN112465726A (en) Low-illumination adjustable brightness enhancement method based on reference brightness index guidance
CN111553856B (en) Image defogging method based on depth estimation assistance
Panetta et al. Deep perceptual image enhancement network for exposure restoration
Zhang et al. INFWIDE: Image and feature space Wiener deconvolution network for non-blind image deblurring in low-light conditions
Chen et al. Single-image hdr reconstruction with task-specific network based on channel adaptive RDN
CN113112439A (en) Image fusion method, training method, device and equipment of image fusion model

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20190118