CN116188325A - Image denoising method based on deep learning and image color space characteristics - Google Patents

Image denoising method based on deep learning and image color space characteristics Download PDF

Info

Publication number
CN116188325A
Authority
CN
China
Prior art keywords
image
channels
noise
channel
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310341880.8A
Other languages
Chinese (zh)
Inventor
庞愫
张伟
朱志良
于海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeastern University China
Original Assignee
Northeastern University China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeastern University China filed Critical Northeastern University China
Priority to CN202310341880.8A priority Critical patent/CN116188325A/en
Publication of CN116188325A publication Critical patent/CN116188325A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an image denoising method based on deep learning and image color space characteristics, and relates to the technical field of image processing. Considering the imaging principle of the image, the invention uses the different influence of noise on different color channels to perform preliminary processing on the image, so as to provide more image information and help the network train. The final algorithm can denoise both images containing real noise and images containing artificially synthesized noise while preserving the detail information of the image. The network designed by the invention can therefore be trained into a denoising model with universality, improves the simplicity and accuracy of image denoising, and has practical application value.

Description

Image denoising method based on deep learning and image color space characteristics
Technical Field
The invention relates to the technical field of image processing, in particular to an image denoising method based on deep learning and image color space characteristics.
Background
Image denoising refers to processing a low-quality, noisy image so as to remove the noise, recover image details, and improve image quality. It belongs to the field of digital image processing and mainly addresses the low image quality and degraded visual perception caused by the limitations of imaging equipment and by network transmission loss. Before deep learning techniques became popular, many conventional image denoising algorithms, such as mean filtering, median filtering, Gaussian filtering, the Fourier transform, and the wavelet transform, filtered a noisy picture in the spatial domain or a transform domain so that its pixel values were modified to generate a denoised picture.
As deep learning has been intensively developed and applied in computer vision research, applying it to image denoising has become a new way to address the problem. Compared with traditional image algorithms, such methods automatically learn, during training, the function that maps a noisy image to a clean image, and obtain results exceeding the traditional methods. Since the convolutional neural network became widely used in the image field, it has achieved great success in high-level image tasks owing to its efficient feature-extraction capability. Models applying convolutional neural networks to image denoising have obtained excellent performance, demonstrating that convolutional neural networks perform better at detail recovery. It is therefore worth studying how to solve the image denoising problem using deep learning techniques.
Most traditional denoising algorithms rely on Gaussian filtering or wavelet transforms; they have low universality and low speed, cannot flexibly process images with different noise levels, and this limits their value in real-life applications. Meanwhile, most existing deep-learning-based algorithms denoise images corrupted by artificially synthesized additive white Gaussian noise, so research on denoising real noisy images is scarce, even though the noise in real images is widespread in real life and is the noise that most needs to be handled at present.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides an image denoising method based on deep learning and image color space characteristics. Considering the imaging principle of the image, the different influence of noise on different color channels is used to perform preliminary processing on the image, so as to provide more image information and help the network train. The final algorithm can denoise both images containing real noise and images containing artificially synthesized noise while preserving the detail information of the image. The network designed by the invention can therefore be trained into a denoising model with universality, improves the simplicity and accuracy of image denoising, and has practical application value.
An image denoising method based on deep learning and image color space characteristics, comprising the following steps:
step 1: reading a data set, and carrying out data enhancement on the noise image;
the method comprises the steps of reading noise images in public data sets of real noise and artificial synthetic noise, randomly cutting the noise images into image blocks with the same size, cutting the image blocks at the same positions, and then rotating the images to select operations of rotating by 90 degrees, rotating by 180 degrees, rotating by 270 degrees, rotating by 90 degrees and overturning, rotating by 180 degrees and overturning, rotating by 270 degrees and overturning, horizontally overturning and vertically overturning with 50% probability;
step 2: establishing a network model;
the network model includes: the CPBlock image preprocessing operation module is used for providing more characteristic information for the processed image; conv is a convolution layer, the convolution kernel is 3*3, and the Conv is used for changing the channel number of an image and generating a feature map for network training; the Basicblock is used as a basic module for extracting deep features;
inputting the noise image block processed in step 1 into the network model; first performing the preprocessing operation through CPBlock, concatenating the preprocessed image with the noise image block processed in step 1 along the channel dimension, changing 3 channels into 6 channels, which the first convolution layer then changes into 64 channels; performing deep feature extraction through four Basicblock modules, the output remaining 64 channels; and, after the last convolution layer changes the 64 channels into 3 channels, adding the result to the noise image block processed in step 1 to finally generate the denoised image prediction;
step 3: preprocessing an input noise image;
firstly, converting the noise image block processed in step 1 into HSV mode using the OpenCV library; applying Gaussian blur to the image, and dividing the image into a plurality of regions according to the value range of each color in H, S, V; summing the values of the R, G, B channels within each divided color region and calculating their averages; ranking the averages of the R, G, B channels; for the channel whose average ranks third, the gradients of the channels ranked first and second are used as reference, with the following formula:
newValue_low = mean_low + ((value_high - mean_high) + (value_middle - mean_middle)) / 2
for the channel with the second mean rank, the gradient of the channel with the first mean rank is provided for reference, and the formula is as follows:
newValue_middle = mean_middle + (value_high - mean_high)
wherein the subscript low denotes the channel with the lowest mean, middle the channel whose mean ranks in the middle, and high the channel with the highest mean; newValue_low and newValue_middle are the updated pixel values of the corresponding channels; mean_low, mean_middle, mean_high are the means of the pixel values in the corresponding channels; and value_low, value_middle, value_high are the original pixel values of the corresponding channels;
for the channel whose mean ranks first, its value is not modified; for each divided color region, the above operation is performed, finally generating a tensor with different values from the noisy image but the same size, which is concatenated with the noise image block processed in step 1 into a 6-channel feature map to obtain the preprocessed result;
step 4: information exchange between channels is carried out through the convolution layer;
the preprocessed result is input into the first convolution layer, whose kernel size is 3×3 and which changes 6 channels into 64 channels, performing further information exchange between channels and extracting shallow features.
Step 5: deep features are extracted through the basic module, and network effects are improved;
along the network transmission direction, the basic module has two branches: the first branch extracts global features and consists of a convolution layer, an average pooling layer, a convolution layer, a ReLU activation layer, and a convolution layer, where the second convolution layer is a 1×1 convolution in order to improve training speed and reduce computation; the second branch extracts local features and consists of a convolution layer, a ReLU activation layer, and a convolution layer; the outputs of the two branches are multiplied, and the module output is the sum of the input feature map and the product feature map;
step 6: setting the times of repeatedly passing through the basic module, and learning the deeper features;
step 7: after the convolution layer converts the 64 channels into a 3-channel image, adding it to the noise image block processed in step 1 to obtain the denoised image, completing the training;
step 8: calculating a loss function between the denoised image generated in step 7 and the corresponding noise-free clean image in the training set, the loss function being

Loss = δ_1(prediction, Clean) = ||prediction - Clean||_1

wherein δ_1 is the L1 loss, Clean is the noise-free clean image, and prediction is the denoised image generated in step 7; Adam is selected as the optimization function with default parameters and an initial learning rate of 3 × 10^-5.
The beneficial effects of adopting the above technical scheme are as follows:
The invention provides an image denoising method based on deep learning and image color space characteristics that can effectively remove both real noise and artificially synthesized noise, reduces damage to image details during denoising, and retains a certain amount of detail information; compared with existing methods, the model is simpler and more convenient, consumes fewer computational resources in training, can denoise real noisy images, and gives more robust and better-performing results.
Drawings
FIG. 1 is a flowchart of an overall image denoising method according to an embodiment of the present invention;
FIG. 2 is a flow chart of image preprocessing in an embodiment of the invention;
FIG. 3 is a diagram of a network architecture in an embodiment of the present invention;
FIG. 4 is a graph of the denoising result of artificial synthetic noise in an embodiment of the present invention;
FIG. 5 is a diagram of a real-noise denoising result in an embodiment of the present invention.
Detailed Description
The following describes in further detail the embodiments of the present invention with reference to the drawings and examples. The following examples are illustrative of the invention and are not intended to limit the scope of the invention.
Two data sets are used in this embodiment: the first is the publicly available real-image data set SIDD (Smartphone Image Denoising Dataset, a denoising data set of smartphone camera images); the second is the artificially synthesized noise data set BSD500 (Berkeley Segmentation Dataset), to which additive white Gaussian noise is artificially added at three noise levels: 5, 10, and 25.
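The artificially noisy BSD500 images can be synthesized by adding zero-mean Gaussian noise at the stated levels; a minimal sketch (the function name and the clamping convention are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def add_gaussian_noise(img, sigma, rng=None):
    """Add zero-mean additive white Gaussian noise with standard
    deviation `sigma` (on the 0-255 scale) to a uint8 RGB image."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = img.astype(np.float64) + rng.normal(0.0, sigma, img.shape)
    # clamp back into the valid pixel range before converting to uint8
    return np.clip(noisy, 0, 255).astype(np.uint8)

clean = np.full((48, 48, 3), 128, dtype=np.uint8)   # flat gray test patch
noisy = add_gaussian_noise(clean, sigma=25, rng=np.random.default_rng(0))
```

Noise levels 5 and 10 are produced the same way by changing `sigma`.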
An image denoising method based on deep learning and image color space characteristics, as shown in FIG. 1, is a main flow chart of the invention, comprising the following steps:
step 1: reading a data set, and carrying out data enhancement on the noise image;
reading 920 noisy images from the public data sets of real noise and artificially synthesized noise, randomly cropping them into 48×48 image blocks of the same size, cropping the clean images at the same positions, and then rotating the images for data enhancement, applying, each with 50% probability, one of: rotation by 90 degrees, rotation by 180 degrees, rotation by 270 degrees, rotation by 90 degrees with flipping, rotation by 180 degrees with flipping, rotation by 270 degrees with flipping, horizontal flipping, or vertical flipping; before each training round, image blocks are randomly cropped and rotated 20 times over the 920 read images, increasing the number of noisy images seen during training and performing the data enhancement operation;
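The crop-and-transform augmentation above can be sketched as follows (a minimal NumPy version; the shared crop position for the noisy/clean pair follows the text, while the helper name and exact sampling are illustrative assumptions):

```python
import numpy as np

def augment_pair(noisy, clean, rng, patch=48):
    """Crop a patch at the same random position from the noisy and clean
    images, then apply one of the 8 rotation/flip combinations."""
    h, w = noisy.shape[:2]
    top = int(rng.integers(0, h - patch + 1))
    left = int(rng.integers(0, w - patch + 1))
    n = noisy[top:top + patch, left:left + patch]
    c = clean[top:top + patch, left:left + patch]
    k = int(rng.integers(0, 4))      # rotate by 90*k degrees
    n, c = np.rot90(n, k), np.rot90(c, k)
    if rng.integers(0, 2):           # flip with 50% probability
        n, c = np.fliplr(n), np.fliplr(c)
    return n, c
```

Applying the identical geometric transform to both images keeps the noisy/clean training pair aligned.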
step 2: establishing a network model;
a network model is built and the overall network structure is shown in fig. 3. Wherein x is the noise image block processed in the step 1; the CPBlock image preprocessing operation module is used for providing more characteristic information for the processed image, and the specific operation is shown in the step 3; conv is a convolution layer, the convolution kernel is 3*3, and the Conv is used for changing the channel number of an image and generating a feature map for network training; the basic block is used as a basic module for extracting deep features, and the details of the basic block are described in detail in the step 5; the prediction is the generated denoised image.
The noise image block x processed in step 1 is input into the network model; the preprocessing operation is first performed through CPBlock, and the preprocessed image is concatenated with the noise image block x along the channel dimension, changing 3 channels into 6 channels, which the first convolution layer then changes into 64 channels; deep feature extraction is performed through four Basicblock modules, the output remaining 64 channels; after the last convolution layer changes the 64 channels into 3 channels, the result is added to the noise image block x processed in step 1, finally generating the denoised image prediction;
step 3: preprocessing an input noise image;
as shown in fig. 2, the noisy image read by the network is in RGB mode, while the HSV format can readily divide the image into ten colors: black, white, gray, red, orange, yellow, green, cyan, blue, and purple. To facilitate color-region segmentation, the noise image block processed in step 1 is first converted into HSV mode using the OpenCV library; meanwhile, to reduce the influence of noise on the segmented regions, a 5×5 Gaussian blur is applied to the image, which is then divided into a plurality of regions according to the value range of each color in H, S, V; the values of the R, G, B channels within each divided color region are summed and their averages calculated; color channels with higher averages are considered less affected by noise. The averages of the R, G, B channels are ranked; for the channel whose average ranks third, the gradients of the channels ranked first and second are used as reference to add more information, with the following formula:
newValue_low = mean_low + ((value_high - mean_high) + (value_middle - mean_middle)) / 2
for the channel with the second mean rank, the gradient of the channel with the first mean rank is provided for reference, and the formula is as follows:
newValue_middle = mean_middle + (value_high - mean_high)
wherein the subscript low denotes the channel with the lowest mean, middle the channel whose mean ranks in the middle, and high the channel with the highest mean; newValue_low and newValue_middle are the updated pixel values of the corresponding channels; mean_low, mean_middle, mean_high are the means of the pixel values in the corresponding channels; and value_low, value_middle, value_high are the original pixel values of the corresponding channels;
for the channel whose mean ranks first, no variation of another channel is available for reference, so its value is not modified; for each divided color region, the above operation is performed, finally generating a tensor with different values from the noisy image but the same size, which is concatenated with the noise image block processed in step 1 into a 6-channel feature map to obtain the preprocessed result;
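One plausible reading of the per-region update is sketched below: a lower-ranked channel keeps its own mean but borrows the per-pixel deviation ("gradient") of the more reliable, higher-mean channels. The exact formula here is my reconstruction from the variable names in the text, not a verbatim copy of the patent's equations:

```python
import numpy as np

def update_low_channels(region):
    """Update one color region (H x W x 3, RGB): rank channels by mean,
    leave the highest-mean channel unchanged, and re-estimate the other
    two from the deviations of the higher-ranked channels."""
    region = region.astype(np.float64)
    means = region.mean(axis=(0, 1))
    low, middle, high = np.argsort(means)   # channel indices, ascending by mean
    dev = region - means                    # per-pixel deviation from channel mean
    out = region.copy()
    # lowest-mean channel: reference the averaged gradients of the top two channels
    out[..., low] = means[low] + 0.5 * (dev[..., high] + dev[..., middle])
    # middle channel: reference the gradient of the top channel only
    out[..., middle] = means[middle] + dev[..., high]
    return out
```

Running this over every segmented color region yields the tensor that is concatenated with the noisy block into the 6-channel input.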
step 4: information exchange between channels is carried out through the convolution layer;
the preprocessed result is input into the first convolution layer, whose kernel size is 3×3 and which changes 6 channels into 64 channels, performing further information exchange between channels and extracting shallow features.
Step 5: deep features are extracted through the basic module, and network effects are improved;
as shown by Basicblock in fig. 1, the basic module has two branches along the network transmission direction: the first branch extracts global features and consists of a convolution layer, an average pooling layer, a convolution layer, a ReLU activation layer, and a convolution layer, where the second convolution layer is a 1×1 convolution in order to improve training speed and reduce computation; the second branch extracts local features and consists of a convolution layer, a ReLU activation layer, and a convolution layer; the outputs of the two branches are multiplied, and the module output is the sum of the input feature map and the product feature map;
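The data flow of the basic module can be sketched shape-wise as follows; `conv3` and `conv1` stand in for the trained 3×3 and 1×1 convolutions (here any channel-preserving callables), so this shows only the branch arithmetic, not the learned layers:

```python
import numpy as np

def basic_block(x, conv3, conv1):
    """Two-branch basic module on a (C, H, W) feature map.
    Global branch: conv -> global average pool -> 1x1 conv -> ReLU -> conv.
    Local branch:  conv -> ReLU -> conv.
    Output: input + (global * local) residual."""
    relu = lambda t: np.maximum(t, 0.0)
    g = conv3(x).mean(axis=(1, 2), keepdims=True)   # pool spatially to (C, 1, 1)
    g = conv3(relu(conv1(g)))
    l = conv3(relu(conv3(x)))
    return x + g * l                                # broadcast product, add residual
```

Multiplying the pooled global descriptor with the local branch acts like a channel-attention gate before the residual addition.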
step 6: setting the number of times the basic module is repeated; in this embodiment the extraction is repeated multiple times to learn deeper features;
step 7: after the convolution layer converts the 64 channels into a 3-channel image, adding it to the noise image block processed in step 1 to obtain the denoised image, completing the training;
step 8: a loss function is calculated between the denoised image generated in step 7 and the corresponding noise-free clean image in the training set, to help the network train better. The loss function is

Loss = δ_1(prediction, Clean) = ||prediction - Clean||_1

wherein δ_1 is the L1 loss, Clean is the noise-free clean image, and prediction is the denoised image generated in step 7; Adam is selected as the optimization function with default parameters and an initial learning rate of 3 × 10^-5.
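The training objective above reduces to a mean absolute error between the prediction and the clean target; a minimal sketch (the function name is mine):

```python
import numpy as np

def l1_loss(prediction, clean):
    """L1 (mean absolute error) loss between the denoised prediction
    and the noise-free clean image."""
    return float(np.mean(np.abs(prediction - clean)))

# Adam with default parameters and this initial learning rate is used
learning_rate = 3e-5
```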
At the beginning of the design, the invention considers the complexity of real image noise and the image formation principle: the distribution of real image noise differs across the R, G, B channels, and the noise distribution also differs within each color region; therefore the input data set pictures are initially processed before network training, further increasing the information that the noisy image can provide.
The feature-extraction modules in the whole network have strong learning capability and adopt a relatively simple residual network structure, which avoids the overfitting caused by too many feature-extraction layers and an over-deep network while still ensuring the network is sufficiently trained, so that a good denoising effect is achieved on various noisy images without additionally training other models for specific noise.
In this embodiment, a large number of noisy images are input during training; preliminary image processing is first performed using the regularity of the noise distribution in each color channel, adding more image information, while a deep learning network model is adopted for feature extraction so that the network can acquire noise characteristics at different levels and handle them flexibly. The synthetic-noise denoising results are shown in fig. 4, the real-noise denoising results in fig. 5, and the denoising results for each data set in table 1.
Table 1 Denoising results for each data set
The foregoing description covers only the preferred embodiments of the present disclosure and the principles of the technology employed. Those skilled in the art will appreciate that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combination of the above technical features, but also encompasses other technical solutions formed by any combination of the above technical features or their equivalents without departing from the spirit of the invention, for example solutions in which the above features are substituted with (but not limited to) features having similar functions disclosed in the embodiments of the present disclosure.

Claims (7)

1. An image denoising method based on deep learning and image color space characteristics, comprising the following steps:
step 1: reading a data set, and carrying out data enhancement on the noise image;
step 2: establishing a network model;
step 3: preprocessing an input noise image;
step 4: information exchange between channels is carried out through the convolution layer;
step 5: deep features are extracted through the basic module, and network effects are improved;
step 6: setting the times of repeatedly passing through the basic module, and learning the deeper features;
step 7: after the convolution layer converts the 64 channels into a 3-channel image, adding it to the noise image block processed in step 1 to obtain the denoised image, completing the training;
step 8: and (3) carrying out loss function calculation on the denoised image generated in the step (7) and the corresponding noiseless clean image in the training set, and finishing image denoising.
2. The image denoising method based on deep learning and image color space characteristics according to claim 1, wherein step 1 specifically comprises: reading noisy images from public data sets of real noise and artificially synthesized noise, randomly cropping the noisy images into image blocks of the same size, cropping the clean images at the same positions, and then applying, each with 50% probability, one of the following operations: rotation by 90 degrees, rotation by 180 degrees, rotation by 270 degrees, rotation by 90 degrees with flipping, rotation by 180 degrees with flipping, rotation by 270 degrees with flipping, horizontal flipping, or vertical flipping.
3. The image denoising method based on deep learning and image color space characteristics according to claim 1, wherein the network model in step 2 comprises: a CPBlock image preprocessing module, used to provide more feature information for the processed image; Conv, a convolution layer with a 3×3 kernel, used to change the number of channels of an image and generate feature maps for network training; and Basicblock, a basic module used to extract deep features;
inputting the noise image block processed in step 1 into the network model; first performing the preprocessing operation through CPBlock, concatenating the preprocessed image with the noise image block processed in step 1 along the channel dimension, changing 3 channels into 6 channels, which the first convolution layer then changes into 64 channels; performing deep feature extraction through four Basicblock modules, the output remaining 64 channels; and, after the last convolution layer changes the 64 channels into 3 channels, adding the result to the noise image block processed in step 1 to finally generate the denoised image prediction.
4. The image denoising method based on deep learning and image color space characteristics according to claim 1, wherein step 3 specifically comprises: firstly, converting the noise image block processed in step 1 into HSV mode using the OpenCV library; applying Gaussian blur to the image, and dividing the image into a plurality of regions according to the value range of each color in H, S, V; summing the values of the R, G, B channels within each divided color region and calculating their averages; ranking the averages of the R, G, B channels; for the channel whose average ranks third, the gradients of the channels ranked first and second are used as reference, with the following formula:
newValue_low = mean_low + ((value_high - mean_high) + (value_middle - mean_middle)) / 2
for the channel with the second mean rank, the gradient of the channel with the first mean rank is provided for reference, and the formula is as follows:
newValue_middle = mean_middle + (value_high - mean_high)
wherein the subscript low denotes the channel with the lowest mean, middle the channel whose mean ranks in the middle, and high the channel with the highest mean; newValue_low and newValue_middle are the updated pixel values of the corresponding channels; mean_low, mean_middle, mean_high are the means of the pixel values in the corresponding channels; and value_low, value_middle, value_high are the original pixel values of the corresponding channels;
for the channel whose mean ranks first, its value is not modified; for each divided color region, the above operation is performed, finally generating a tensor with different values from the noisy image but the same size, which is concatenated with the noise image block processed in step 1 into a 6-channel feature map to obtain the preprocessed result.
5. The image denoising method based on deep learning and image color space characteristics according to claim 1, wherein step 4 specifically comprises: the preprocessed result is input into the first convolution layer, whose kernel size is 3×3 and which changes 6 channels into 64 channels, performing further information exchange between channels and extracting shallow features.
6. The image denoising method based on deep learning and image color space characteristics according to claim 1, wherein in step 5 the basic module has two branches along the network transmission direction: the first branch extracts global features and consists of a convolution layer, an average pooling layer, a convolution layer, a ReLU activation layer, and a convolution layer, where the second convolution layer is a 1×1 convolution in order to improve training speed and reduce computation; the second branch extracts local features and consists of a convolution layer, a ReLU activation layer, and a convolution layer; the outputs of the two branches are multiplied, and the module output is the sum of the input feature map and the product feature map.
7. The image denoising method based on deep learning and image color space characteristics according to claim 1, wherein the loss function in step 8 is

Loss = δ_1(prediction, Clean) = ||prediction - Clean||_1

wherein δ_1 is the L1 loss, Clean is the noise-free clean image, and prediction is the denoised image generated in step 7; Adam is selected as the optimization function with default parameters and an initial learning rate of 3 × 10^-5.
CN202310341880.8A 2023-03-31 2023-03-31 Image denoising method based on deep learning and image color space characteristics Pending CN116188325A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310341880.8A CN116188325A (en) 2023-03-31 2023-03-31 Image denoising method based on deep learning and image color space characteristics

Publications (1)

Publication Number Publication Date
CN116188325A true CN116188325A (en) 2023-05-30

Family

ID=86444520

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310341880.8A Pending CN116188325A (en) 2023-03-31 2023-03-31 Image denoising method based on deep learning and image color space characteristics

Country Status (1)

Country Link
CN (1) CN116188325A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116433674A (en) * 2023-06-15 2023-07-14 锋睿领创(珠海)科技有限公司 Semiconductor silicon wafer detection method, device, computer equipment and medium
CN116433674B (en) * 2023-06-15 2023-08-18 锋睿领创(珠海)科技有限公司 Semiconductor silicon wafer detection method, device, computer equipment and medium
CN116843582A (en) * 2023-08-31 2023-10-03 南京诺源医疗器械有限公司 Denoising enhancement system and method of 2CMOS camera based on deep learning
CN116843582B (en) * 2023-08-31 2023-11-03 南京诺源医疗器械有限公司 Denoising enhancement system and method of 2CMOS camera based on deep learning
CN117094909A (en) * 2023-08-31 2023-11-21 青岛天仁微纳科技有限责任公司 Nanometer stamping wafer image acquisition processing method
CN117094909B (en) * 2023-08-31 2024-04-02 青岛天仁微纳科技有限责任公司 Nanometer stamping wafer image acquisition processing method
CN116912305A (en) * 2023-09-13 2023-10-20 四川大学华西医院 Brain CT image three-dimensional reconstruction method and device based on deep learning
CN116912305B (en) * 2023-09-13 2023-11-24 四川大学华西医院 Brain CT image three-dimensional reconstruction method and device based on deep learning

Similar Documents

Publication Publication Date Title
Tian et al. Deep learning on image denoising: An overview
CN110599409B (en) Convolutional neural network image denoising method based on multi-scale convolutional groups and parallel
CN116188325A (en) Image denoising method based on deep learning and image color space characteristics
CN111754438B (en) Underwater image restoration model based on multi-branch gating fusion and restoration method thereof
CN111127336B (en) Image signal processing method based on self-adaptive selection module
CN111028163A (en) Convolution neural network-based combined image denoising and weak light enhancement method
CN110533614B (en) Underwater image enhancement method combining frequency domain and airspace
CN112991493B (en) Gray image coloring method based on VAE-GAN and mixed density network
CN116051428B (en) Deep learning-based combined denoising and superdivision low-illumination image enhancement method
CN113724164B (en) Visible light image noise removing method based on fusion reconstruction guidance filtering
CN113284061B (en) Underwater image enhancement method based on gradient network
CN112598602A (en) Mask-based method for removing Moire of deep learning video
US9230161B2 (en) Multiple layer block matching method and system for image denoising
CN116797488A (en) Low-illumination image enhancement method based on feature fusion and attention embedding
Sun et al. Underwater image enhancement with encoding-decoding deep CNN networks
CN115272072A (en) Underwater image super-resolution method based on multi-feature image fusion
CN113436101B (en) Method for removing rain by Dragon lattice tower module based on efficient channel attention mechanism
Cai et al. Underwater image processing system for image enhancement and restoration
CN109003247B (en) Method for removing color image mixed noise
CN117351340A (en) Underwater image enhancement algorithm based on double-color space
CN116862794A (en) Underwater image processing method based on double compensation and contrast adjustment
CN116363001A (en) Underwater image enhancement method combining RGB and HSV color spaces
CN114529713A (en) Underwater image enhancement method based on deep learning
CN113379641A (en) Single image rain removing method and system based on self-coding convolutional neural network
Shi et al. Underwater image enhancement based on adaptive color correction and multi-scale fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination