CN117351216A - Image self-adaptive denoising method based on supervised deep learning - Google Patents
Image self-adaptive denoising method based on supervised deep learning
- Publication number
- CN117351216A CN117351216A CN202311651465.9A CN202311651465A CN117351216A CN 117351216 A CN117351216 A CN 117351216A CN 202311651465 A CN202311651465 A CN 202311651465A CN 117351216 A CN117351216 A CN 117351216A
- Authority
- CN
- China
- Prior art keywords
- image
- noise
- denoising
- original image
- network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/30—Noise filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/09—Supervised learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention provides an image self-adaptive denoising method based on supervised deep learning, comprising the following steps: constructing a plurality of training data sets; inputting the noisy original images and the corresponding noise-free original images in the first training data set into a machine learning model for training, to obtain the input-layer parameters, intermediate-layer parameters, and initial output-layer parameters of a denoising network; constructing a fully connected neural network, and taking the output of the fully connected neural network as the output-layer parameters of the denoising network in place of the initial output-layer parameters; inputting the noisy original images and the corresponding noise-free original images, analog gains, and digital gains in the second training data set into the initial joint denoising network formed by the denoising network and the fully connected neural network for training, to obtain a joint denoising network; and realizing self-adaptive denoising of the original image through the joint denoising network. By utilizing the analog gain and the digital gain of the image, the image-denoising generalization capability of the AI network model is improved.
Description
Technical Field
The invention relates to the technical field of image denoising, in particular to an image self-adaptive denoising method based on supervised deep learning.
Background
During the acquisition and storage of digital images, the images captured by the sensor are raw-type images that, owing to hardware characteristics, are generally subject to various kinds of noise, including photon (shot) noise, dark-current noise, read noise, and quantization noise. ISP (image signal processing) technology, i.e., the techniques and methods for processing and enhancing digital images, can suppress these disturbances and restore the original details and colors of the image. The basic ISP pipeline comprises image acquisition, preprocessing, main processing, and post-processing, where image acquisition refers to the conversion of an optical signal into a digital signal by a camera or sensor. Within this pipeline, the denoising module is one of the parts with the greatest influence on image quality. Compared with traditional denoising algorithms, an AI network model (for example, an MLP, i.e., a multi-layer perceptron) can surpass the most advanced traditional algorithms given a sufficiently large training set, sufficient training time, and reasonable network parameter settings. The AI network model therefore offers great potential for image-quality improvement and can handle fused models of multiple noise types, making AI denoising the natural choice for scenes with high-precision requirements, extremely large noise, and extremely weak signals, such as extremely dim light, night scenes, underwater shooting, and medical imaging. However, current AI denoising algorithms have poor generalization capability and do not exploit the two pieces of information constituted by the analog gain and the digital gain of the image when denoising.
In view of the above, the invention provides an image adaptive denoising method based on supervised deep learning, which improves the image-denoising generalization capability of an AI network model by utilizing the analog gain and the digital gain of the image.
Disclosure of Invention
The invention aims to provide an image self-adaptive denoising method based on supervised deep learning, comprising the following steps: constructing a plurality of training data sets, where each training data set comprises noisy original images, noise-free original images, analog gains, and digital gains; inputting the noisy original images and the corresponding noise-free original images in the first training data set into a machine learning model for training, to obtain the input-layer parameters, intermediate-layer parameters, and initial output-layer parameters of a denoising network, where the denoising network processes an input noisy original image to obtain a noise-free original image; constructing a fully connected neural network, and taking the output of the fully connected neural network as the output-layer parameters of the denoising network in place of the initial output-layer parameters, where the fully connected neural network processes the analog gain and digital gain corresponding to the input noisy original image to obtain the output-layer parameters of the denoising network; inputting the noisy original images and the corresponding noise-free original images, analog gains, and digital gains in the second training data set into the initial joint denoising network formed by the denoising network and the fully connected neural network for training, and updating the parameters of the initial joint denoising network based on a joint loss function to obtain a joint denoising network, where the joint loss function comprises an L1 loss function, a perceptual loss function, a color loss function, and a gradient loss function; and inputting the original image and its corresponding analog gain and digital gain into the joint denoising network to realize self-adaptive denoising of the original image.
Further, the training data set comprises a training data set acquired in a real scene and a synthesized training data set; wherein the first training data set is the synthesized training data set; the second training data set is the training data set acquired in the real scene.
Further, for a training data set acquired in a real scene: taking image data shot by an image acquisition device as the original image with noise; and denoising the original image with noise acquired by the image acquisition equipment by using a multi-frame combination mode based on the characteristic of the noise zero mean value to obtain the original image without noise.
Further, the analog gain and the digital gain are obtained based on parameters of the sensor in the image acquisition device when the noisy original image is captured.
Further, for the synthetic training data set: using the sense500 data set, and carrying out long-exposure shooting through an image sensor to obtain the noise-free original image; adding Gaussian-Poisson noise to the sense500 data set to obtain a noise data set; based on the noise data set, performing noise calibration on multi-frame gray-scale images to obtain noise parameters of the Gaussian-Poisson model under different analog gains; and processing the noise-free original image shot by the image sensor based on the noise parameters of the Gaussian-Poisson model to obtain the noisy original image.
Further, the expression of the L1 loss function is:
$L_{\mathrm{L1}} = \left\lVert \mathrm{DDN}(x_i) - y_i \right\rVert_1$;
wherein $L_{\mathrm{L1}}$ represents the L1 loss function; $y_i$ represents the i-th noise-free raw image in the second training data set; $x_i$ represents the i-th noisy original image in the second training data set; $\lVert \cdot \rVert_1$ represents the L1 norm; and $\mathrm{DDN}(\cdot)$ represents the output of the initial joint denoising network.
Further, the expression of the perceptual loss function is:
$L_{\mathrm{per}} = \left\lVert \phi(\mathrm{DDN}(x_i)) - \phi(y_i) \right\rVert_1$;
wherein $L_{\mathrm{per}}$ represents the perceptual loss function; $y_i$ represents the i-th noise-free raw image in the second training data set; $x_i$ represents the i-th noisy original image in the second training data set; $\phi(\cdot)$ represents a feature map extracted using the denoising network; $\mathrm{DDN}(\cdot)$ represents the output of the initial joint denoising network; and $\lVert \cdot \rVert_1$ represents the L1 norm.
Further, the expression of the color loss function is:
$L_{\mathrm{col}} = \left\lVert C(\mathrm{DDN}(x_i)) - C(y_i) \right\rVert_1$;
wherein $L_{\mathrm{col}}$ represents the color loss function; $y_i$ represents the i-th noise-free raw image in the second training data set; $x_i$ represents the i-th noisy original image in the second training data set; $C(\cdot)$ represents a feature map generated by convolution with a Gaussian blur kernel; $\mathrm{DDN}(\cdot)$ represents the output of the initial joint denoising network; and $\lVert \cdot \rVert_1$ represents the L1 norm.
Further, the gradient loss function has the expression:
$L_{\mathrm{grad}} = \left\lVert G(\mathrm{DDN}(x_i)) - G(y_i) \right\rVert_1$;
wherein $L_{\mathrm{grad}}$ represents the gradient loss function; $G(\cdot)$ represents a feature map extracted by transverse and longitudinal gradient extraction operators; $y_i$ represents the i-th noise-free raw image in the second training data set; $x_i$ represents the i-th noisy original image in the second training data set; $\mathrm{DDN}(\cdot)$ represents the output of the initial joint denoising network; and $\lVert \cdot \rVert_1$ represents the L1 norm.
Further, the machine learning model is a UNet network or an SGN network.
The technical scheme of the embodiment of the invention has at least the following advantages and beneficial effects:
the image denoising method provided by the invention can generate the denoised raw image by using the analog gain and the digital gain of the noisy raw image.
The network model comprises a DDN (deep denoising network) and an MLP (fully connected neural network), wherein the DDN is a denoising network, and the MLP dynamically adjusts parameters of a DDN output layer according to analog gain and digital gain, so that the DDN can adjust denoising effect according to the two gains, and further realize self-adaptive denoising.
Drawings
FIG. 1 is an exemplary flow chart of an image adaptive denoising method based on supervised deep learning provided by the present invention;
fig. 2 is an exemplary schematic diagram of a joint denoising network provided by the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Fig. 1 is an exemplary flowchart of an image adaptive denoising method based on supervised deep learning.
As shown in fig. 1, the image adaptive denoising method based on supervised deep learning includes:
step 110, constructing a plurality of training data sets; the training data set includes a noisy raw image, a non-noisy raw image, an analog gain, and a digital gain. Wherein, the Analog Gain (Analog Gain) is the Gain of the Analog brightness signal outputted by the camera sensor as the amplification; the Digital Gain (Digital Gain) is a Gain calculated as an amplification after converting an analog signal into a Digital signal. Noise is also introduced while the signal is amplified, and the magnitude of the introduced noise has a certain relationship with the two parameters.
The training data set comprises a training data set acquired in a real scene and a synthesized training data set; wherein the first training data set is the synthesized training data set; the second training data set is the training data set acquired in the real scene.
For training data sets acquired in real scenes: image data shot by an image acquisition device serves as the noisy original image, and the noisy original image is denoised by multi-frame merging, based on the zero-mean property of the noise, to obtain the noise-free original image. The analog gain and the digital gain are read from the parameters of the sensor in the image acquisition device at the moment the noisy original image is captured. For example, based on a data set in a real scene, a sensor captures real raw images, and data at different analog gains and digital gains are acquired by controlling the capture parameters. The captured images are affected by various noise sources, including shot noise, so the acquired noisy original images must themselves be denoised; exploiting the zero-mean property of the noise, multi-frame merging is used to obtain the noise-free original images.
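The multi-frame merging step above can be sketched numerically: averaging K captures of the same scene shrinks zero-mean noise by roughly a factor of sqrt(K). A minimal NumPy sketch with synthetic data (the frame count and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
clean = rng.uniform(50.0, 200.0, size=(64, 64))       # stand-in for a clean raw frame
# 16 captures of the same scene, each with zero-mean Gaussian noise (std = 10)
frames = clean + rng.normal(0.0, 10.0, size=(16,) + clean.shape)

merged = frames.mean(axis=0)                          # multi-frame merge
noise_single = np.std(frames[0] - clean)              # about 10
noise_merged = np.std(merged - clean)                 # about 10 / sqrt(16) = 2.5
```

This is why the merged image can serve as the ground-truth "noise-free" target for supervised training.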
For the synthetic training data set: the sense500 data set is used. This data set is an open-source raw image data set containing 1015 raw images of dark scenes at a size of 3000x4000 pixels; the noise-free original images are obtained by long-exposure shooting of 1 to 2 seconds with an image sensor. Because denoising is performed in a supervised manner, a noisy image corresponding to each noise-free original image must also be obtained, as follows: the noisy image is obtained by adding Gaussian-Poisson noise to the noise-free original image, and this synthetic noise is very close to real sensor noise. The parameters of the Gaussian-Poisson noise model are obtained by noise calibration on multi-frame gray-scale images shot by the sensor: according to Gaussian-Poisson noise model theory, computing the variance and mean over the multi-frame gray-scale images yields the parameters of the noise model, i.e., the corresponding Gaussian-Poisson noise.
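The synthesis and calibration steps can be sketched as follows. The sketch uses the common heteroscedastic Gaussian approximation of the Poisson-Gaussian model, var(z) = a * x + b, where a tracks signal-dependent shot noise and b the signal-independent read noise; the constants, frame counts, and image sizes are illustrative, not taken from the patent:

```python
import numpy as np

def add_poisson_gaussian(clean, a, b, rng):
    """Heteroscedastic Gaussian approximation of the Poisson-Gaussian
    model: noise variance is a * clean + b at each pixel."""
    std = np.sqrt(np.clip(a * clean + b, 0.0, None))
    return clean + rng.normal(0.0, 1.0, clean.shape) * std

def calibrate(frames):
    """Noise calibration: per-pixel mean/variance over repeated frames,
    then a least-squares line fit recovers (a, b)."""
    mean, var = frames.mean(axis=0), frames.var(axis=0)
    a, b = np.polyfit(mean.ravel(), var.ravel(), 1)
    return a, b

rng = np.random.default_rng(0)
gray = rng.uniform(20.0, 200.0, size=(32, 32))        # synthetic gray-level target
# 200 noisy captures of the same target, with assumed parameters a=0.5, b=4.0
stack = np.stack([add_poisson_gaussian(gray, 0.5, 4.0, rng) for _ in range(200)])
a_est, b_est = calibrate(stack)                       # recovered noise parameters
```

Calibrating (a, b) at each analog gain setting then lets realistic noise be injected into the clean sense500 images.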
Step 120, inputting the noisy original image and the corresponding non-noisy original image in the first training data set into a machine learning model for training to obtain input layer parameters, middle layer parameters and initial output layer parameters of the denoising network; the denoising network is used for processing the input original image with noise to obtain an original image without noise. Wherein the machine learning model is a UNet network or an SGN network.
Step 130, constructing a fully-connected neural network, and taking the output of the fully-connected neural network as the output layer parameter of the denoising network to replace the initial output layer parameter; the fully-connected neural network is used for processing the analog gain and the digital gain corresponding to the input noisy original image to obtain the output layer parameters of the denoising network.
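Step 130 in effect makes the fully connected network act as a small hypernetwork: it maps the two gains to the weights of the denoising network's output layer. A minimal NumPy sketch with hypothetical layer sizes (a 16-unit hidden layer producing a single 3x3 output kernel; the real network's dimensions are not given at this granularity):

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp_predict_weights(analog_gain, digital_gain, W1, W2):
    """Tiny fully connected net: maps the two gains to a flattened set
    of output-layer convolution weights for the denoising network."""
    h = np.maximum(0.0, W1 @ np.array([analog_gain, digital_gain]))  # ReLU hidden layer
    return (W2 @ h).reshape(3, 3)                                    # one 3x3 kernel here

W1 = rng.normal(0.0, 0.1, (16, 2))   # hypothetical hidden-layer weights
W2 = rng.normal(0.0, 0.1, (9, 16))   # maps hidden features to 9 kernel weights

k_low = mlp_predict_weights(1.0, 1.0, W1, W2)     # low-gain (low-noise) kernel
k_high = mlp_predict_weights(16.0, 4.0, W1, W2)   # high-gain (high-noise) kernel
```

Different gain pairs yield different output-layer kernels, which is what lets the joint network adapt its denoising strength to the noise level.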
Step 140, inputting the noisy original images and the corresponding noise-free original images, analog gains, and digital gains in the second training data set into the initial joint denoising network formed by the denoising network and the fully connected neural network for training, and updating the parameters of the initial joint denoising network based on a joint loss function to obtain the joint denoising network; the joint loss function includes an L1 loss function, a perceptual loss function, a color loss function, and a gradient loss function. The joint denoising network generates a raw image with a good denoising effect, performing adaptive denoising according to the input noisy raw image together with its analog gain and digital gain.
Wherein, the expression of the L1 loss function is:
$L_{\mathrm{L1}} = \left\lVert \mathrm{DDN}(x_i) - y_i \right\rVert_1$;
wherein $L_{\mathrm{L1}}$ represents the L1 loss function; $y_i$ represents the i-th noise-free raw image in the second training data set; $x_i$ represents the i-th noisy original image in the second training data set; $\lVert \cdot \rVert_1$ represents the L1 norm; and $\mathrm{DDN}(\cdot)$ represents the output of the initial joint denoising network.
The expression of the perceptual loss function is:
$L_{\mathrm{per}} = \left\lVert \phi(\mathrm{DDN}(x_i)) - \phi(y_i) \right\rVert_1$;
wherein $L_{\mathrm{per}}$ represents the perceptual loss function; $y_i$ represents the i-th noise-free raw image in the second training data set; $x_i$ represents the i-th noisy original image in the second training data set; $\phi(\cdot)$ represents a feature map extracted using the denoising network; $\mathrm{DDN}(\cdot)$ represents the output of the initial joint denoising network; and $\lVert \cdot \rVert_1$ represents the L1 norm.
The expression of the color loss function is:
$L_{\mathrm{col}} = \left\lVert C(\mathrm{DDN}(x_i)) - C(y_i) \right\rVert_1$;
wherein $L_{\mathrm{col}}$ represents the color loss function; $y_i$ represents the i-th noise-free raw image in the second training data set; $x_i$ represents the i-th noisy original image in the second training data set; $C(\cdot)$ represents a feature map generated by convolution with a Gaussian blur kernel; $\mathrm{DDN}(\cdot)$ represents the output of the initial joint denoising network; and $\lVert \cdot \rVert_1$ represents the L1 norm.
The gradient loss function is expressed as:
$L_{\mathrm{grad}} = \left\lVert G(\mathrm{DDN}(x_i)) - G(y_i) \right\rVert_1$;
wherein $L_{\mathrm{grad}}$ represents the gradient loss function; $G(\cdot)$ represents a feature map extracted by transverse and longitudinal gradient extraction operators; $y_i$ represents the i-th noise-free raw image in the second training data set; $x_i$ represents the i-th noisy original image in the second training data set; $\mathrm{DDN}(\cdot)$ represents the output of the initial joint denoising network; and $\lVert \cdot \rVert_1$ represents the L1 norm.
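The joint loss above can be sketched in NumPy. The sketch substitutes a box blur for the Gaussian blur kernel of the color loss and finite differences for the gradient extraction operators, and it omits the perceptual term, which requires the trained network's feature maps; the loss weights are hypothetical:

```python
import numpy as np

def l1(a, b):
    return np.abs(a - b).mean()

def blur(img, k=5):
    """Box blur standing in for the Gaussian blur kernel of the color loss."""
    pad = np.pad(img, k // 2, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def grad(img):
    """Transverse and longitudinal finite differences for the gradient loss."""
    gx = np.diff(img, axis=1, prepend=img[:, :1])
    gy = np.diff(img, axis=0, prepend=img[:1, :])
    return gx, gy

def joint_loss(pred, target, w=(1.0, 1.0, 1.0)):
    """L1 + color + gradient terms with hypothetical weights w."""
    color = l1(blur(pred), blur(target))
    gx_p, gy_p = grad(pred)
    gx_t, gy_t = grad(target)
    gradient = l1(gx_p, gx_t) + l1(gy_p, gy_t)
    return w[0] * l1(pred, target) + w[1] * color + w[2] * gradient

rng = np.random.default_rng(3)
target = rng.uniform(0.0, 1.0, (16, 16))   # stand-in for a noise-free raw patch
```

The color term compares low-frequency content (overall color cast) while the gradient term penalizes lost edges, complementing the plain L1 term.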
Step 150, inputting the original image and the corresponding analog gain and digital gain into the joint denoising network to realize the self-adaptive denoising of the original image.
By pre-training the denoising network DDN, the DDN acquires initial training parameters and a certain denoising capability. The data set used in this pre-training is based on synthetic noise; because synthetic-noise data sets are numerous and cheap to acquire, the pre-trained DDN model generalizes well.
By training the joint denoising model on the data set collected in real scenes, the data distribution is more realistic than that of a synthetic-noise data set, and the two parameters of analog gain and digital gain are easier to acquire; the pre-trained DDN denoising network reduces the training difficulty and makes the denoising effect of the trained model generalize better.
Fig. 2 is an exemplary schematic diagram of a joint denoising network provided by the present invention.
As shown in fig. 2, the joint denoising network includes the pre-trained denoising network and a prediction network (i.e., the fully connected neural network, MLP). The analog gain and digital gain are input into the fully connected neural network, which outputs the output-layer parameters of the denoising network. The noisy raw image $x_i$ is input into the denoising network and processed by its input layer, intermediate layer, and output layer to obtain the denoised raw image $\mathrm{DDN}(x_i)$ output by the denoising network. The loss functions $L_{\mathrm{L1}}$, $L_{\mathrm{per}}$, $L_{\mathrm{col}}$, and $L_{\mathrm{grad}}$ are constructed from the model output $\mathrm{DDN}(x_i)$ and the input noise-free raw image $y_i$, and the parameters of the denoising network and the fully connected neural network within the joint denoising network are iteratively adjusted to obtain the final joint denoising network.
The above is only a preferred embodiment of the present invention, and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (10)
1. An image self-adaptive denoising method based on supervised deep learning is characterized by comprising the following steps:
constructing a plurality of training data sets; the training data set comprises a noisy original image, a non-noisy original image, an analog gain and a digital gain;
inputting the noisy original image and the corresponding non-noisy original image in the first training data set into a machine learning model for training to obtain input layer parameters, middle layer parameters and initial output layer parameters of a denoising network; the denoising network is used for processing the input original image with noise to obtain an original image without noise;
constructing a fully-connected neural network, and taking the output of the fully-connected neural network as the output layer parameter of the denoising network to replace the initial output layer parameter; the fully-connected neural network is used for processing the analog gain and the digital gain corresponding to the input noisy original image to obtain the output layer parameters of the denoising network;
inputting a noisy original image and a corresponding noise-free original image, analog gain, and digital gain in a second training data set into an initial joint denoising network formed by the denoising network and the fully connected neural network for training, and updating parameters of the initial joint denoising network based on a joint loss function to obtain a joint denoising network; the joint loss function comprises an L1 loss function, a perceptual loss function, a color loss function, and a gradient loss function;
and inputting the original image and the corresponding analog gain and digital gain into the joint denoising network to realize the self-adaptive denoising of the original image.
2. The supervised deep learning based image adaptive denoising method according to claim 1, wherein the training data set comprises a training data set acquired in a real scene and a synthesized training data set; wherein the first training data set is the synthesized training data set; the second training data set is the training data set acquired in the real scene.
3. The supervised deep learning based image adaptive denoising method of claim 2, wherein for a training data set acquired in a real scene:
taking image data shot by an image acquisition device as the original image with noise;
and denoising the original image with noise acquired by the image acquisition equipment by using a multi-frame combination mode based on the characteristic of the noise zero mean value to obtain the original image without noise.
4. The supervised deep learning based image adaptive denoising method of claim 3, wherein the analog gain and the digital gain are obtained based on parameters of the sensor in the image acquisition device when the noisy original image is captured.
5. The supervised deep learning based image adaptive denoising method of claim 2, wherein for the synthesized training dataset:
using a sense500 data set, and carrying out long exposure shooting through an image sensor to obtain the original image without noise;
adding Gaussian poisson noise to the sense500 data set to obtain a noise data set;
based on the noise data set, performing noise calibration on multi-frame gray-scale images from the image sensor to obtain noise parameters of the Gaussian-Poisson model under different analog gains;
and processing the noise-free original image shot by the image sensor based on the noise parameters of the Gaussian-Poisson model to obtain the noisy original image.
6. The supervised deep learning based image adaptive denoising method of claim 1, wherein the expression of the L1 penalty function is:
$L_{\mathrm{L1}} = \left\lVert \mathrm{DDN}(x_i) - y_i \right\rVert_1$;
wherein $L_{\mathrm{L1}}$ represents the L1 loss function; $y_i$ represents the i-th noise-free raw image in the second training data set; $x_i$ represents the i-th noisy original image in the second training data set; $\lVert \cdot \rVert_1$ represents the L1 norm; and $\mathrm{DDN}(\cdot)$ represents the output of the initial joint denoising network.
7. The supervised deep learning based image adaptive denoising method of claim 1, wherein the expression of the perceptual loss function is:
$L_{\mathrm{per}} = \left\lVert \phi(\mathrm{DDN}(x_i)) - \phi(y_i) \right\rVert_1$;
wherein $L_{\mathrm{per}}$ represents the perceptual loss function; $y_i$ represents the i-th noise-free raw image in the second training data set; $x_i$ represents the i-th noisy original image in the second training data set; $\phi(\cdot)$ represents a feature map extracted using the denoising network; $\mathrm{DDN}(\cdot)$ represents the output of the initial joint denoising network; and $\lVert \cdot \rVert_1$ represents the L1 norm.
8. The supervised deep learning based image adaptive denoising method of claim 1, wherein the expression of the color loss function is:
$L_{\mathrm{col}} = \left\lVert C(\mathrm{DDN}(x_i)) - C(y_i) \right\rVert_1$;
wherein $L_{\mathrm{col}}$ represents the color loss function; $y_i$ represents the i-th noise-free raw image in the second training data set; $x_i$ represents the i-th noisy original image in the second training data set; $C(\cdot)$ represents a feature map generated by convolution with a Gaussian blur kernel; $\mathrm{DDN}(\cdot)$ represents the output of the initial joint denoising network; and $\lVert \cdot \rVert_1$ represents the L1 norm.
9. The supervised deep learning based image adaptive denoising method of claim 1, wherein the expression of the gradient loss function is:
$L_{\mathrm{grad}} = \left\lVert G(\mathrm{DDN}(x_i)) - G(y_i) \right\rVert_1$;
wherein $L_{\mathrm{grad}}$ represents the gradient loss function; $G(\cdot)$ represents a feature map extracted by transverse and longitudinal gradient extraction operators; $y_i$ represents the i-th noise-free raw image in the second training data set; $x_i$ represents the i-th noisy original image in the second training data set; $\mathrm{DDN}(\cdot)$ represents the output of the initial joint denoising network; and $\lVert \cdot \rVert_1$ represents the L1 norm.
10. The supervised deep learning based image adaptive denoising method of claim 1, wherein the machine learning model is a UNet network or an SGN network.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311651465.9A CN117351216B (en) | 2023-12-05 | 2023-12-05 | Image self-adaptive denoising method based on supervised deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117351216A true CN117351216A (en) | 2024-01-05 |
CN117351216B CN117351216B (en) | 2024-02-02 |
Family
ID=89361754
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311651465.9A Active CN117351216B (en) | 2023-12-05 | 2023-12-05 | Image self-adaptive denoising method based on supervised deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117351216B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117593188A (en) * | 2024-01-19 | 2024-02-23 | 成都宜图智享信息科技有限公司 | Super-resolution method based on unsupervised deep learning and corresponding equipment |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070047838A1 (en) * | 2005-08-30 | 2007-03-01 | Peyman Milanfar | Kernel regression for image processing and reconstruction |
US20100045820A1 (en) * | 2008-08-20 | 2010-02-25 | Freescale Semiconductor, Inc. | Gain controlled threshold in denoising filter for image signal processing |
KR20210116923A (en) * | 2020-03-18 | 2021-09-28 | SK Telecom Co., Ltd. | Method for Training a Denoising Network, Method and Device for Operating Image Processor |
CN113643189A (en) * | 2020-04-27 | 2021-11-12 | Shenzhen ZTE Microelectronics Technology Co., Ltd. | Image denoising method, device and storage medium |
CN113852759A (en) * | 2021-09-24 | 2021-12-28 | OmniVision Technologies (Wuhan) Co., Ltd. | Image enhancement method and shooting device |
US20220367039A1 (en) * | 2021-05-07 | 2022-11-17 | Canon Medical Systems Corporation | Adaptive ultrasound deep convolution neural network denoising using noise characteristic information |
Non-Patent Citations (5)
Title |
---|
Arati Paul et al.: "Wavelet enabled convolutional autoencoder based deep neural network for hyperspectral image denoising", Multimedia Tools and Applications, vol. 81, p. 2529, XP037679210, DOI: 10.1007/s11042-021-11689-z * |
Ching-Ta Lu et al.: "Image enhancement using deep-learning fully connected neural network mean filter", The Journal of Supercomputing, vol. 77, p. 3144, XP037366182, DOI: 10.1007/s11227-020-03389-6 * |
Rini Smita Thakur et al.: "State-of-art analysis of image denoising methods using convolutional neural networks", Institution of Engineering and Technology (IET), vol. 13, pp. 2367-2380, XP006084378, DOI: 10.1049/iet-ipr.2019.0157 * |
Tim Brooks et al.: "Unprocessing Images for Learned Raw Denoising", Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 11036-11045 * |
Guo Shi (果实): "Research on real camera image denoising based on convolutional neural networks", China Master's Theses Full-text Database (Information Science and Technology), no. 2, pp. 138-1737 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117593188A (en) * | 2024-01-19 | 2024-02-23 | 成都宜图智享信息科技有限公司 | Super-resolution method based on unsupervised deep learning and corresponding equipment |
CN117593188B (en) * | 2024-01-19 | 2024-04-12 | 成都宜图智享信息科技有限公司 | Super-resolution method based on unsupervised deep learning and corresponding equipment |
Also Published As
Publication number | Publication date |
---|---|
CN117351216B (en) | 2024-02-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111028163B (en) | Combined image denoising and dim light enhancement method based on convolutional neural network | |
CN115442515B (en) | Image processing method and apparatus | |
CN111028177B (en) | Edge-based deep learning image motion blur removing method | |
CN111968044A (en) | Low-illumination image enhancement method based on Retinex and deep learning | |
CN117351216B (en) | Image self-adaptive denoising method based on supervised deep learning | |
CN106373105B (en) | Multi-exposure image artifact removing fusion method based on low-rank matrix recovery | |
CN110619593A (en) | Double-exposure video imaging system based on dynamic scene | |
Wang et al. | Joint iterative color correction and dehazing for underwater image enhancement | |
CN112348747A (en) | Image enhancement method, device and storage medium | |
WO2022133194A1 (en) | Deep perceptual image enhancement | |
EP4187484A1 (en) | Cbd-net-based medical endoscopic image denoising method | |
CN113658057A (en) | Swin transform low-light-level image enhancement method | |
CN111242860A (en) | Super night scene image generation method and device, electronic equipment and storage medium | |
CN114862698A (en) | Method and device for correcting real overexposure image based on channel guidance | |
CN115393227A (en) | Self-adaptive enhancing method and system for low-light-level full-color video image based on deep learning | |
Saleh et al. | Adaptive uncertainty distribution in deep learning for unsupervised underwater image enhancement | |
Wan et al. | Purifying low-light images via near-infrared enlightened image | |
Saleem et al. | A non-reference evaluation of underwater image enhancement methods using a new underwater image dataset | |
CN112614063B (en) | Image enhancement and noise self-adaptive removal method for low-illumination environment in building | |
CN116229081A (en) | Unmanned aerial vehicle panoramic image denoising method based on attention mechanism | |
CN115829868B (en) | Underwater dim light image enhancement method based on illumination and noise residual image | |
CN116664451A (en) | Measurement robot measurement optimization method based on multi-image processing | |
Xu et al. | Deep residual convolutional network for natural image denoising and brightness enhancement | |
Chang et al. | RGNET: a two-stage low-light image enhancement network without paired supervision | |
Kinoshita et al. | Deep inverse tone mapping using LDR based learning for estimating HDR images with absolute luminance |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||