CN114529463A - Image denoising method and system - Google Patents

Image denoising method and system

Info

Publication number
CN114529463A
Authority
CN
China
Prior art keywords
image
denoising
noise
feature extraction
ghost
Prior art date
Legal status
Pending
Application number
CN202210031691.6A
Other languages
Chinese (zh)
Inventor
李天平
冯凯丽
李萌
韩宇
Current Assignee
Shandong Normal University
Original Assignee
Shandong Normal University
Priority date
Filing date
Publication date
Application filed by Shandong Normal University
Priority to CN202210031691.6A
Publication of CN114529463A
Legal status: Pending

Classifications

    • G06T 5/70 — Denoising; Smoothing (under G06T 5/00, Image enhancement or restoration)
    • G06F 18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/253 — Fusion techniques of extracted features
    • G06N 3/045 — Combinations of networks
    • G06N 3/08 — Learning methods
    • G06T 2207/20081 — Training; Learning
    • G06T 2207/20084 — Artificial neural networks [ANN]


Abstract

The invention provides an image denoising method and system comprising the following steps: acquiring a noise image; and inputting the noise image into a denoising model to obtain a denoised image. The denoising model passes the noise image sequentially through a multi-scale feature extraction block and a Ghost feature extraction block to extract noise features, then subtracts the noise features from the noise image to obtain the denoised image. The method improves the image denoising effect and efficiency, reduces the number of training parameters, and shortens the network training time.

Description

Image denoising method and system
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to an image denoising method and system.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
Image denoising is an essential step in many image processing problems and is widely applied in fields such as biology, medicine, and the military, so improving the denoising effect is of great significance. Traditional image denoising methods rely on complex models containing many parameters that must be tuned manually. Deep learning, with its strong learning ability, has been applied to image denoising to overcome these disadvantages of traditional methods.
In particular, convolutional neural network (CNN)-based methods show strong performance in image denoising. However, deep CNNs still consume a large amount of memory, model training times are long, and high image denoising efficiency is difficult to achieve.
Disclosure of Invention
In order to solve the technical problems in the background art, the invention provides an image denoising method and system that improve both the image denoising effect and the denoising efficiency, reduce the number of training parameters, and shorten the network training time.
In order to achieve the purpose, the invention adopts the following technical scheme:
a first aspect of the present invention provides an image denoising method, including:
acquiring a noise image;
inputting the noise image into a denoising model to obtain a denoising image;
the denoising model sequentially passes through the multi-scale feature extraction block and the Ghost feature extraction block, the noise features are obtained after the noise image is subjected to feature extraction, and the noise features are subtracted by the noise image to obtain the denoising image.
Further, the multi-scale feature extraction block uses a plurality of 1 × 1 convolutional layers and 3 × 3 dilated convolutional layers to extract features from the noise image, obtaining features of different scales.
Further, the multi-scale feature extraction block feeds the features of different scales sequentially into a feature combination layer and a feature fusion layer to obtain a multi-scale feature map of the noise image.
Further, the Ghost feature extraction block comprises a plurality of Ghost blocks connected in sequence.
Further, each Ghost block performs feature extraction on its input feature map with a 32-channel 1 × 1 convolutional layer, then processes the extracted features with a depthwise separable convolutional layer to obtain an output feature map.
Further, the output feature map of the depthwise separable convolutional layer in the last Ghost block of the Ghost feature extraction block is the noise feature.
Furthermore, the denoising model uses mean squared error as its loss function and is optimized with the Adam optimizer.
A second aspect of the present invention provides an image denoising system, comprising:
an image acquisition module configured to: acquiring a noise image;
a denoising module configured to: inputting the noise image into a denoising model to obtain a denoising image;
the denoising model sequentially passes through the multi-scale feature extraction block and the Ghost feature extraction block, the noise features are obtained after the noise image is subjected to feature extraction, and the noise features are subtracted by the noise image to obtain the denoising image.
A third aspect of the present invention provides a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of a method for denoising an image as described above.
A fourth aspect of the present invention provides a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of a method for image denoising as described above when executing the program.
Compared with the prior art, the invention has the beneficial effects that:
the invention provides an image denoising method, which uses a Ghost feature extraction block to replace a common convolutional layer for extracting image features, reduces training parameters in a training process, improves the image denoising efficiency, uses the Ghost feature extraction block to replace the convolutional layer as an integral framework of a network, can greatly reduce parameters in network training, improves the image denoising efficiency, and simultaneously adopts a multi-scale feature extraction method to extract more image noise features, thereby improving the image denoising effect.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the invention and, together with the description, serve to explain the invention without limiting it.
FIG. 1 is a network structure diagram of a denoising model according to a first embodiment of the present invention;
FIG. 2 is a block diagram of a multi-scale feature extraction block according to a first embodiment of the present invention;
FIG. 3 is a flowchart of an image denoising method according to a first embodiment of the present invention;
FIG. 4(a) is a graph of the denoising result of an image in a BSD68 test set according to the method of the first embodiment of the present invention;
FIG. 4(b) is a graph of the denoising results of the DnCNN method on one image in the BSD68 test set;
FIG. 4(c) is a graph of the denoising results of an image in the BSD68 test set by the FFDnet method;
FIG. 4(d) is a graph of the result of one image in the BSD68 test set with white Gaussian noise added;
FIG. 5(a) is a diagram illustrating a denoising result of an image in a McMaster data set according to a first embodiment of the present invention;
FIG. 5(b) is a graph of the denoising results of the ADNet method on one image in the McMaster data set;
FIG. 5(c) is a graph of the result of one image in the McMaster data set with Gaussian white noise added;
FIG. 5(d) is the original of one image in the McMaster dataset.
Detailed Description
The invention is further described with reference to the following figures and examples.
It is to be understood that the following detailed description is exemplary and is intended to provide further explanation of the invention as claimed. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for describing particular embodiments only and is not intended to limit exemplary embodiments according to the invention. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, and the terms "comprises" and/or "comprising" specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
Example one
The embodiment provides an image denoising method, as shown in fig. 3, including the following steps:
Step 1: obtain an image data set and add noise to build an image training set.
Specifically, additive white Gaussian noise is selected as the noise type for training; noise is added to each clean picture in the image data set, and the original images together with their noisy counterparts form the training set.
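A minimal sketch of how such training pairs might be synthesized; the function name, noise level, and toy image below are illustrative, not taken from the patent:

```python
import numpy as np

def make_training_pair(clean, sigma=25.0, seed=0):
    """Add zero-mean additive white Gaussian noise (AWGN) to a clean
    image, yielding the (clean, noisy) pair used for training.
    Pixel values are assumed to be on a [0, 255] scale."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=clean.shape)
    noisy = clean.astype(np.float64) + noise
    return clean, noisy

clean = np.full((64, 64), 128.0)  # toy "clean" image
_, noisy = make_training_pair(clean, sigma=25.0)
# The empirical noise standard deviation should be close to sigma.
print(round(float(np.std(noisy - clean)), 1))
```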
Step 2: train the denoising model on the training set, using the pairs of original and noisy images as the input to the network. As shown in fig. 1, the network passes the noise image sequentially through a multi-scale feature extraction block (multi-scale feature extraction module) and a Ghost feature extraction block (Ghost feature extraction module) to obtain the extracted noise features; the loss function value is computed in each iteration, the parameters are adjusted automatically according to the result, and the optimal parameters of the denoising model are saved when training completes.
Step 3: denoise the test set with the trained model. Specifically, a noise image is taken from the test set and input into the denoising model to obtain the denoised image: the network extracts the image noise features, and then, following the residual learning idea, the noise features are subtracted from the noise image to yield the denoised image.
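The residual idea in step 3 can be sketched as follows; `NoiseEstimator` is a hypothetical stand-in for the patent's multi-scale plus Ghost network, shown here only to make the subtraction explicit:

```python
import torch
import torch.nn as nn

class NoiseEstimator(nn.Module):
    """Placeholder network that predicts the noise f(y) from a noisy input y."""
    def __init__(self):
        super().__init__()
        self.body = nn.Conv2d(1, 1, kernel_size=3, padding=1)

    def forward(self, y):
        return self.body(y)  # predicted noise f(y)

def denoise(model, y):
    # Residual formulation: x_hat = y - f(y)
    with torch.no_grad():
        return y - model(y)

model = NoiseEstimator()
y = torch.randn(1, 1, 32, 32)  # a toy noisy image
x_hat = denoise(model, y)
print(tuple(x_hat.shape))
```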
The multi-scale feature extraction block uses an improved spatial pyramid module with dilated convolution. Enlarging the receptive field is an effective way to capture contextual information. A dilated convolution inserts gaps between the elements of a basic convolution kernel, enlarging the kernel, and thus the receptive field, without reducing the feature-map size. Dilated convolution achieves the same effect as ordinary convolution while obtaining a larger receptive field and therefore denser data features, so applying it to an image denoising method improves the denoising effect.
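The receptive-field claim can be checked numerically. The effective kernel size of a k × k convolution with dilation rate d is k + (k − 1)(d − 1), and with padding equal to d (for k = 3) the feature-map size is preserved. The dilation rates below mirror those used in the block; the channel count is an arbitrary example:

```python
import torch
import torch.nn as nn

def effective_kernel(k, d):
    # Effective kernel size of a dilated convolution.
    return k + (k - 1) * (d - 1)

x = torch.randn(1, 8, 40, 40)
for d in (1, 6, 12, 18):
    # padding = d keeps the 40x40 spatial size for a 3x3 kernel.
    conv = nn.Conv2d(8, 8, kernel_size=3, dilation=d, padding=d)
    print(d, effective_kernel(3, d), tuple(conv(x).shape))
```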
As shown in fig. 2, the multi-scale feature extraction block extracts features from the noise image using a 1 × 1 convolutional layer and three 3 × 3 dilated convolutional layers. Specifically, the input image is passed in parallel through one 1 × 1 convolution and three 3 × 3 dilated convolutions whose dilation rates are 6, 12, and 18, respectively. The four resulting features of different scales are then input into a feature combination layer (concat), which splices them together (data splicing can be performed with a concat function in Python), and finally into a feature fusion layer (a 1 × 1 convolutional layer) that fuses them and outputs the multi-scale information extracted from the noise image. Image denoising must operate on the entire image, so the network needs a receptive field large enough to cover a large image area. The multi-scale feature extraction block enlarges the receptive field without changing the feature-map size, which helps extract multi-scale information; the combination of dilated convolution with the multi-scale design allows more comprehensive image features to be extracted at a small computational cost.
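A minimal PyTorch sketch of the block as just described: one 1 × 1 convolution, three 3 × 3 dilated convolutions with rates 6, 12, and 18, concatenation, and a 1 × 1 fusion layer. The channel counts are assumptions, since the text does not fix them:

```python
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Sketch of the multi-scale feature extraction block."""
    def __init__(self, in_ch=1, branch_ch=16):
        super().__init__()
        self.b0 = nn.Conv2d(in_ch, branch_ch, kernel_size=1)
        self.b1 = nn.Conv2d(in_ch, branch_ch, 3, padding=6, dilation=6)
        self.b2 = nn.Conv2d(in_ch, branch_ch, 3, padding=12, dilation=12)
        self.b3 = nn.Conv2d(in_ch, branch_ch, 3, padding=18, dilation=18)
        # Feature fusion layer: 1x1 convolution over the concatenated branches.
        self.fuse = nn.Conv2d(4 * branch_ch, branch_ch, kernel_size=1)

    def forward(self, x):
        # Feature combination layer (concat) followed by fusion.
        feats = torch.cat([self.b0(x), self.b1(x), self.b2(x), self.b3(x)], dim=1)
        return self.fuse(feats)

block = MultiScaleBlock()
out = block(torch.randn(1, 1, 48, 48))
print(tuple(out.shape))  # spatial size is preserved
```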
The Ghost feature extraction block comprises a plurality of Ghost blocks connected in sequence; specifically, it is formed by ten Ghost blocks connected one after another. Each Ghost block applies a 32-channel 1 × 1 convolutional layer to extract features from its input feature map, then processes the extracted features with a depthwise separable convolutional layer to obtain an output feature map; the output feature map of the depthwise separable layer in the last Ghost block is the noise feature. In essence, each Ghost block performs a two-part operation. The first part uses a 1 × 1 convolution to concentrate the necessary features of the input: an ordinary 1 × 1 convolutional layer with a small channel count (32 channels here, rather than the 64 commonly used) acts like feature integration, compressing the input feature layer to extract the essential features of the image. The second part applies a depthwise separable convolution to generate similar feature maps from the compressed features, transforming the redundant features out of the generated maps. In a depthwise separable convolution, each convolution kernel is responsible for a single channel and each channel is convolved by only one kernel, so the computation is smaller than for an ordinary convolution; for the same number of output feature maps, the parameter count of a depthwise separable convolution is roughly one third that of an ordinary convolution, and the noise feature is generated from the concentrated features obtained in the first step.
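One Ghost block as described above might be sketched as follows. This is a loose interpretation: the 1 × 1 convolution concentrates features into 32 channels, and a depthwise separable convolution (one 3 × 3 kernel per channel, i.e. groups equal to channels, followed by a 1 × 1 pointwise convolution) produces the output; the activation choices are assumptions:

```python
import torch
import torch.nn as nn

class GhostBlock(nn.Module):
    """Sketch of one Ghost block: 1x1 feature concentration followed by
    a depthwise separable convolution that generates the output features."""
    def __init__(self, in_ch=32, hidden=32):
        super().__init__()
        # Part 1: small 1x1 convolution (32 channels) to concentrate features.
        self.primary = nn.Conv2d(in_ch, hidden, kernel_size=1)
        # Part 2: depthwise separable convolution.
        # Depthwise: one 3x3 kernel per channel (groups == channels).
        self.depthwise = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)
        self.pointwise = nn.Conv2d(hidden, hidden, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.act(self.primary(x))
        return self.act(self.pointwise(self.depthwise(x)))

blk = GhostBlock()
y = blk(torch.randn(1, 32, 24, 24))
print(tuple(y.shape))
```

Ten such blocks chained in sequence would form the Ghost feature extraction block.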
This feature extraction module increases the network depth while reducing the number of parameters during network training, greatly lowering memory consumption in the training process and shortening the training time.
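The parameter saving can be made concrete with a quick count (bias terms ignored). The exact ratio depends on the channel count and kernel size; for the 32-channel 3 × 3 case below it comes out even smaller than the one-third figure quoted above:

```python
def conv_params(c_in, c_out, k):
    # Standard kxk convolution: one kxk kernel per (input, output) channel pair.
    return c_in * c_out * k * k

def dw_separable_params(c_in, c_out, k):
    # Depthwise kxk (one kernel per input channel) + pointwise 1x1.
    return c_in * k * k + c_in * c_out

c_in, c_out, k = 32, 32, 3
normal = conv_params(c_in, c_out, k)             # 32*32*9  = 9216
separable = dw_separable_params(c_in, c_out, k)  # 288+1024 = 1312
print(normal, separable, round(separable / normal, 3))
```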
The denoising model is built on the degradation model y = x + z, where z is additive white Gaussian noise, y is the noisy image, and x is the clean image. The model predicts the noise f(y), and the clean image is then recovered as x = y − f(y). Therefore, following existing CNN denoising methods, the denoising model is trained with a mean-squared-error loss, which can be expressed as

$$\ell(\theta) = \frac{1}{2N} \sum_{i=1}^{N} \left\| f(y_i; \theta) - (y_i - x_i) \right\|^2$$

where θ represents the parameters of the training model and N is the number of noisy/clean image pairs; the loss function recovers the latent clean images through the Adam optimizer.
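A single optimization step under this loss might look like the following; the two-layer model and all hyper-parameters are toy placeholders for the full network:

```python
import torch
import torch.nn as nn

# Placeholder network standing in for the multi-scale + Ghost architecture.
model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(8, 1, 3, padding=1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.rand(4, 1, 16, 16)       # clean batch
y = x + 0.1 * torch.randn_like(x)  # noisy batch (AWGN)

for _ in range(3):
    opt.zero_grad()
    # Residual target: the model is fit to predict the noise y - x.
    loss = loss_fn(model(y), y - x)
    loss.backward()
    opt.step()
print(float(loss))
```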
After the model training is finished, the training result is tested to evaluate the training effect of the network structure. This example is tested on the common data sets Set12 and BSD68, with the peak signal-to-noise ratio (PSNR) used to measure denoising effectiveness. The denoising effect is compared on grayscale and color images separately.
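PSNR for 8-bit images, as used in the comparisons below, is 10 log₁₀(255² / MSE); a minimal implementation:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((8, 8))
test = ref + 5.0                  # every pixel off by 5  ->  MSE = 25
print(round(psnr(ref, test), 2))  # 10*log10(65025/25) = 34.15
```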
For grayscale images, Tables 1 and 2 show the per-image and average PSNR values on the Set12 data set for the present invention and 11 state-of-the-art methods (BM3D, WNNM, EPLL, MLP, CSF, TNRD, DnCNN, IRCNN, FFDNet, BRDNet, and DudeNet) at noise levels (δ) of 25 and 50, respectively. As seen in Tables 1 and 2, the PSNR values improve over the state-of-the-art methods on most images. FIGS. 4(a), 4(b), 4(c), and 4(d) visualize one image of the BSD68 test set at noise level 35 for the present method, the DnCNN and FFDNet methods, and the noisy input.
TABLE 1 comparison of noise level 25 with advanced method on Set12 dataset
Method C.man House Peppers Starfish Monarch Airplane Parrot Lena Barbara Boat Man Couple Mean
BM3D 29.45 32.85 30.16 28.56 29.25 28.42 28.93 32.07 30.71 29.90 29.61 29.71 29.97
WNNM 29.64 33.22 30.42 29.03 29.80 28.69 29.15 32.24 31.24 30.03 29.76 29.82 30.26
EPLL 29.26 32.17 30.17 28.51 29.39 28.61 28.95 31.73 28.61 29.74 29.66 29.53 29.69
MLP 29.61 32.56 30.30 28.82 29.61 28.82 29.25 32.25 29.54 29.97 29.88 29.73 30.03
CSF 29.48 32.39 30.32 28.80 29.62 28.72 28.90 31.79 29.03 29.76 29.71 29.53 29.84
TNRD 29.72 32.53 30.57 29.02 29.85 28.88 29.18 32.00 29.41 29.91 29.87 29.71 30.06
DnCNN 30.18 33.06 30.87 29.41 30.28 29.13 29.43 32.44 30.00 30.21 30.10 30.12 30.43
IRCNN 30.08 33.06 30.88 29.27 30.09 29.12 29.47 32.43 29.92 30.17 30.04 30.08 30.38
FFDNet 30.10 33.28 30.93 29.32 30.08 29.04 29.44 32.57 30.01 30.25 30.11 30.20 30.44
BRDNet 31.39 33.41 31.04 29.46 30.50 29.20 29.55 32.65 30.34 30.33 30.14 30.28 30.61
DudeNet 30.23 33.24 30.98 29.53 30.44 29.14 29.48 32.52 30.15 30.24 30.08 30.15 30.52
The invention 30.40 33.45 31.06 29.47 30.53 29.22 29.58 32.67 30.37 30.34 30.20 30.31 30.80
TABLE 2 comparison of noise level 50 with the advanced method on Set12 dataset
Method C.man House Peppers Starfish Monarch Airplane Parrot Lena Barbara Boat Man Couple Mean
BM3D 26.1 29.6 26.6 25.0 25.82 25.10 25.9 29.0 27.2 26.7 26.8 26.4 26.7
WNNM 26.4 30.3 26.9 25.4 26.32 25.42 26.1 29.2 27.7 26.9 26.9 26.6 27.0
EPLL 26.1 29.1 26.8 25.1 25.94 25.31 25.9 28.6 24.8 26.7 26.7 26.3 26.4
MLP 26.3 29.6 26.6 25.4 26.26 25.56 26.1 29.3 25.2 27.0 27.0 26.6 26.7
TNRD 26.6 29.4 27.1 25.4 26.31 25.59 26.1 28.9 25.7 26.9 26.9 26.5 26.8
DnCNN 27.0 30.0 27.3 25.7 26.78 25.87 26.4 29.3 26.2 27.2 27.2 26.9 27.1
IRCNN 26.8 29.9 27.3 25.5 26.61 25.89 26.5 29.4 26.2 27.1 27.1 26.8 27.1
FFDNet 27.0 30.3 27.5 25.7 26.81 25.89 26.5 29.6 26.4 27.3 27.2 27.0 27.3
BRDNet 27.4 30.5 27.6 25.7 26.97 25.93 26.6 29.7 26.8 27.3 27.2 27.1 27.4
DudeNet 27.2 30.2 27.5 25.8 26.93 25.88 26.5 29.4 26.4 27.2 27.1 26.9 27.3
The invention 27.4 30.5 27.5 25.8 26.98 25.96 26.2 29.7 26.8 27.4 27.2 27.2 27.4
For color images, this example is tested on three data sets (DBSD68, Kodak24, and McMaster) at noise levels of 15, 25, 35, 50, and 70, and compared with the six most advanced methods. Table 3 shows the average PSNR values for each data set at the different noise levels. As seen from Table 3, the PSNR values of the present invention improve at all noise levels on all data sets. FIGS. 5(a), 5(b), 5(c), and 5(d) show, at noise level 35, the visual results of the present method and the ADNet method on one image from the McMaster test set, together with the noisy and original images.
TABLE 3 comparison with other advanced methods on different datasets
In addition to the image denoising effect, training time is also compared. In the same experimental environment, training for 50 iterations on the Waterloo Exploration data set takes 25.4 hours less with the present method than with the BRDNet method.
Example two
The embodiment provides an image denoising system, which specifically includes the following modules:
an image acquisition module configured to: acquiring a noise image;
a denoising module configured to: inputting the noise image into a denoising model to obtain a denoising image;
the denoising model sequentially passes through the multi-scale feature extraction block and the Ghost feature extraction block, the noise features are obtained after the noise image is subjected to feature extraction, and the noise features are subtracted by the noise image to obtain the denoising image.
It should be noted that, each module in the present embodiment corresponds to each step in the first embodiment one to one, and the specific implementation process is the same, which is not described herein again.
EXAMPLE III
The present embodiment provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps in an image denoising method as described in the first embodiment.
Example four
The present embodiment provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor executes the computer program to implement the steps in an image denoising method as described in the first embodiment.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An image denoising method, comprising:
acquiring a noise image;
inputting the noise image into a denoising model to obtain a denoising image;
the denoising model sequentially passes through the multi-scale feature extraction block and the Ghost feature extraction block, the noise features are obtained after the noise image is subjected to feature extraction, and the noise features are subtracted by the noise image to obtain the denoising image.
2. The image denoising method of claim 1, wherein the multi-scale feature extraction block performs feature extraction on the noise image by using a plurality of 1 × 1 convolutional layers and 3 × 3 dilated convolutional layers to obtain features of different scales.
3. The image denoising method of claim 2, wherein the multi-scale feature extraction block sequentially inputs the features of different scales into the feature combination layer and the feature fusion layer to obtain a multi-scale feature map of a noise image.
4. The method as claimed in claim 1, wherein the Ghost feature extraction block comprises a plurality of Ghost blocks connected in sequence.
5. The image denoising method of claim 4, wherein the Ghost block performs feature extraction on the input feature map by using a 32-channel 1 × 1 convolutional layer, and processes the features extracted by the 32-channel 1 × 1 convolutional layer with a depthwise separable convolutional layer to obtain an output feature map.
6. The method of claim 5, wherein the output feature map obtained from the depthwise separable convolutional layer in the last Ghost block of the Ghost feature extraction blocks is the noise feature.
7. An image denoising method according to claim 1, wherein the denoising model adopts a mean square error as a loss function and is optimized by an Adam optimizer.
8. An image denoising system, comprising:
an image acquisition module configured to: acquiring a noise image;
a denoising module configured to: inputting the noise image into a denoising model to obtain a denoising image;
the denoising model sequentially passes through the multi-scale feature extraction block and the Ghost feature extraction block, the noise features are obtained after the noise image is subjected to feature extraction, and the noise features are subtracted by the noise image to obtain the denoising image.
9. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of a method for denoising an image according to any one of claims 1-7.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of a method for image denoising as claimed in any one of claims 1-7 when executing the program.
CN202210031691.6A 2022-01-12 2022-01-12 Image denoising method and system Pending CN114529463A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210031691.6A CN114529463A (en) 2022-01-12 2022-01-12 Image denoising method and system


Publications (1)

Publication Number Publication Date
CN114529463A true CN114529463A (en) 2022-05-24

Family

ID=81620563

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210031691.6A Pending CN114529463A (en) 2022-01-12 2022-01-12 Image denoising method and system

Country Status (1)

Country Link
CN (1) CN114529463A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11540798B2 (en) 2019-08-30 2023-01-03 The Research Foundation For The State University Of New York Dilated convolutional neural network system and method for positron emission tomography (PET) image denoising


Similar Documents

Publication Publication Date Title
CN109003240B (en) Image denoising method based on multi-scale parallel CNN
CN114140353B (en) Swin-Transformer image denoising method and system based on channel attention
CN111369487B (en) Hyperspectral and multispectral image fusion method, system and medium
CN110443768B (en) Single-frame image super-resolution reconstruction method based on multiple consistency constraints
CN111062880A (en) Underwater image real-time enhancement method based on condition generation countermeasure network
Zuo et al. Convolutional neural networks for image denoising and restoration
CN112419184A (en) Spatial attention map image denoising method integrating local information and global information
CN113689517B (en) Image texture synthesis method and system for multi-scale channel attention network
CN111210395B (en) Retinex underwater image enhancement method based on gray value mapping
CN113837959B (en) Image denoising model training method, image denoising method and system
CN114240797B (en) OCT image denoising method, device, equipment and medium
CN115205147A (en) Multi-scale optimization low-illumination image enhancement method based on Transformer
CN111127354A (en) Single-image rain removing method based on multi-scale dictionary learning
CN113450290A (en) Low-illumination image enhancement method and system based on image inpainting technology
CN113191983A (en) Image denoising method and device based on deep learning attention mechanism
CN114723630A (en) Image deblurring method and system based on cavity double-residual multi-scale depth network
CN114529463A (en) Image denoising method and system
CN113962878B (en) Low-visibility image defogging model method
Chen et al. Image quality assessment guided deep neural networks training
CN112734649A (en) Image degradation method and system based on lightweight neural network
CN115272131B (en) Image mole pattern removing system and method based on self-adaptive multispectral coding
CN117058079A (en) Thyroid imaging image automatic diagnosis method based on improved ResNet model
Piriyatharawet et al. Image denoising with deep convolutional and multi-directional LSTM networks under Poisson noise environments
CN113344935B (en) Image segmentation method and system based on multi-scale difficulty perception
CN112435174B (en) Underwater image processing method based on double-attention mechanism

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination