CN115578287A - Image noise reduction method, device, equipment and storage medium - Google Patents


Info

Publication number
CN115578287A
CN115578287A (application CN202211344162.8A)
Authority
CN
China
Prior art keywords
image
image block
edge layer
layers
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211344162.8A
Other languages
Chinese (zh)
Inventor
俞克强
王松
刘硕
董振昊
邵晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202211344162.8A priority Critical patent/CN115578287A/en
Publication of CN115578287A publication Critical patent/CN115578287A/en
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/40Analysis of texture
    • G06T7/41Analysis of texture based on statistical description of texture
    • G06T7/44Analysis of texture based on statistical description of texture using image operators, e.g. filters, edge density metrics or local histograms
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The application provides an image noise reduction method, apparatus, device, and storage medium, in which the texture layers and the edge layer of an image are filtered separately, so that information about boundaries with sharp pixel-value changes and finely varying textures in the image is better preserved, achieving a better denoising effect. The method comprises the following steps: decomposing the image to be denoised into M texture layers and an edge layer, filtering the M texture layers and the edge layer respectively, and performing image reconstruction from the filtered M texture layers and the filtered edge layer to obtain the denoised image.

Description

Image noise reduction method, device, equipment and storage medium
Technical Field
The present invention relates to the field of image processing, and in particular, to an image denoising method, apparatus, device, and storage medium.
Background
Images are widely used, in fields including biomedicine, the military, traffic security, and machine vision. Image quality directly affects their usefulness in these fields; however, images are inevitably corrupted by noise during acquisition, processing, and transmission, so filtering noise out of an image is of great importance. The mainstream image noise reduction techniques mainly include: spatial-domain denoising, frequency-domain denoising, mixed spatial- and frequency-domain denoising, learning-based denoising, and the like. Spatial-domain denoising usually adopts algorithms such as mean filtering, Gaussian filtering, and median filtering. These algorithms have a poor denoising effect on boundary regions and flat regions: for example, they simply average the gray values of all pixels in a selected image block to obtain the gray value of the central pixel, so abrupt or regular changes between pixels are handled poorly and much useful image information can be lost. Frequency-domain denoising mainly includes wavelet-based algorithms and Discrete Cosine Transform (DCT) filtering, which transform the image into the frequency domain using transforms such as the Fourier or cosine transform and suppress the irregular, abrupt frequency components there. A disadvantage of frequency-domain denoising is that its suppression of strong noise is generally not as good as spatial-domain denoising.
Mixed spatial- and frequency-domain denoising mainly includes algorithms such as the three-dimensional block matching algorithm (Block Matching 3D, BM3D) and weighted nuclear norm minimization (WNNM); these combine spatial- and frequency-domain denoising and greatly improve denoising capability. Learning-based denoising mainly comprises deep learning and dictionary learning algorithms, which train the denoiser on pre-collected samples; they can achieve good results, but their computational cost is huge, making them difficult to apply in actual products.
Algorithms with good denoising performance, such as non-local means filtering (NLM), BM3D, and WNNM, all rely on finding image blocks similar to a target image block and denoising the pixel values in the target block according to the pixel values in several similar blocks. The biggest problem of these algorithms is that the matching between candidate blocks and the target block may be inaccurate. Fig. 1 is a schematic diagram of matching similar candidate blocks to a current target block: the top is the target block, and the middle and bottom are candidate blocks A and B, respectively. A conventional block comparison algorithm (for example, adding 1 to the similarity score for each pixel point at which the candidate block matches the target block, and taking the candidate block with the highest final score as the most similar) concludes that candidate block A is more similar to the target block; but from the viewpoint of human vision, which pays more attention to image edges, candidate block B is more similar to the target block.
Disclosure of Invention
The application provides an image denoising method, device, equipment and storage medium, and solves the problem of poor image denoising effect in the prior art.
In a first aspect, the present application provides an image denoising method, including:
decomposing an image to be denoised into M texture layers and an edge layer, wherein M is a positive integer;
performing filtering processing on the M texture layers and performing filtering processing on the edge layer;
and performing image reconstruction according to the M texture layers after filtering and the edge layer after filtering to obtain an image after noise reduction.
Further, the decomposing the image to be denoised into M texture layers and an edge layer includes:
performing edge feature extraction on the image to be subjected to noise reduction to obtain M candidate edge layers, wherein the 1 st candidate edge layer in the M candidate edge layers is obtained by performing edge feature extraction on the image to be subjected to noise reduction, the Nth candidate edge layer is obtained by performing edge feature extraction on the N-1 th candidate edge layer, and N is greater than or equal to 2 and less than or equal to M;
and obtaining the M texture layers according to the difference between the image to be denoised and the 1 st candidate edge layer and the difference between any two adjacent candidate edge layers in the M candidate edge layers, and taking the Mth candidate edge layer in the M candidate edge layers as the edge layer.
Further, the filtering the M texture layers includes:
determining K second image blocks with highest similarity to a first image block in the M texture layers according to the first image block taking the pixel point as the center aiming at each pixel point in the M texture layers, wherein K is a positive integer;
and reassigning the values of the pixel points according to the values of the central pixel points of the K second image blocks.
Further, the filtering the edge layer includes:
determining K fourth image blocks with the highest similarity to a third image block in the edge layer according to the third image block taking the pixel point as the center aiming at each pixel point in the edge layer, wherein K is a positive integer;
and reassigning the values of the pixel points according to the values of the central pixel points of the K fourth image blocks.
Further, before determining, in the edge layer, K fourth image blocks having a highest similarity to the third image block according to the third image block centered on the pixel point, the method includes:
determining the average brightness of the third image block and the average brightness of each of O fourth image blocks of which the similarity with the third image block is greater than a set threshold, wherein O is greater than or equal to K;
and aiming at each fourth image block in the O fourth image blocks, performing brightness correction on pixel values of pixel points in the fourth image block according to the average brightness of the fourth image block and the average brightness of the third image block.
Further, the method for performing luminance correction on the pixel values of the pixel points in the fourth image block according to the average luminance of the fourth image block and the average luminance of the third image block includes:
according to
Figure BDA0003916577470000031
And performing brightness correction on pixel values of pixel points in the fourth image block, wherein adA represents the pixel value of a pixel point in the fourth image block after the brightness correction, A represents the pixel value of the pixel point in the fourth image block before the brightness correction, meanA represents the average brightness of the fourth image block, and meanTarget represents the average brightness of the third image block.
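As a non-authoritative illustration, the brightness correction above can be sketched as follows; since the formula itself is rendered only as an unreadable image in the source text, the sketch assumes the additive form adA = A + (meanTarget - meanA), which makes the candidate block's average brightness equal to the target block's:

```python
import numpy as np

def brightness_correct(candidate, mean_target):
    """Additive brightness correction (assumed form of the patent's formula):
    adA = A + (meanTarget - meanA), so the corrected candidate block's
    average brightness equals the target block's average brightness."""
    mean_a = candidate.mean()
    return candidate + (mean_target - mean_a)
```

After this correction, the candidate block is compared with the target block on structure alone, independent of any overall brightness offset between the two blocks.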
Further, the image to be denoised is a single-channel image, or a multi-channel image, or a fusion image of visible light and infrared light.
In a second aspect, the present application provides an image noise reduction apparatus, comprising:
the decomposition module is used for decomposing an image to be denoised into M texture layers and an edge layer, wherein M is a positive integer;
the filtering module is used for carrying out filtering processing on the M texture layers and carrying out filtering processing on the edge layer;
and the reconstruction module is used for carrying out image reconstruction according to the M texture layers after the filtering processing and the edge layer after the filtering processing to obtain the image after the noise reduction.
Further, the decomposition module is further configured to, when decomposing an image to be subjected to noise reduction into M texture layers and an edge layer, specifically perform edge feature extraction on the image to be subjected to noise reduction to obtain M candidate edge layers, where a 1 st candidate edge layer in the M candidate edge layers is obtained by performing edge feature extraction based on the image to be subjected to noise reduction, an nth candidate edge layer is obtained by performing edge feature extraction based on an N-1 th candidate edge layer, and N is greater than or equal to 2 and less than or equal to M; and obtaining the M texture layers according to the difference between the image to be denoised and the 1 st candidate edge layer and the difference between any two adjacent candidate edge layers in the M candidate edge layers, and taking the Mth candidate edge layer in the M candidate edge layers as the edge layer.
Further, when the filtering module performs filtering processing on the M texture layers, the filtering module is further configured to determine, for each pixel point in the M texture layers, K second image blocks with the highest similarity to a first image block in the M texture layers according to the first image block centered on the pixel point, where K is a positive integer, and reassign values of the pixel points according to values of central pixel points of the K second image blocks;
and when the filtering module carries out filtering processing on the edge layer, the filtering module is also used for determining K fourth image blocks with the highest similarity with a third image block in the edge layer according to the third image block taking the pixel point as the center, wherein K is a positive integer, and the value of the pixel point is re-assigned according to the value of the central pixel point of the K fourth image blocks.
The filtering module is used for determining the average brightness of the third image block and the average brightness of each of O fourth image blocks with the similarity with the third image block larger than a set threshold value before determining K fourth image blocks with the highest similarity with the third image block in the edge layer according to the third image block with the pixel point as the center, wherein O is larger than or equal to K; and aiming at each fourth image block in the O fourth image blocks, performing brightness correction on pixel values of pixel points in the fourth image blocks according to the average brightness of the fourth image blocks and the average brightness of the third image blocks.
Further, when performing brightness correction on the pixel values of the pixel points in the fourth image block according to the average brightness of the fourth image block and the average brightness of the third image block, the filtering module is further configured to perform the correction according to the formula
adA = A + (meanTarget - meanA)
And performing brightness correction on pixel values of pixel points in the fourth image block, wherein adA represents the pixel values of the pixel points in the fourth image block after the brightness correction, A represents the pixel values of the pixel points in the fourth image block before the brightness correction, meanA represents the average brightness of the fourth image block, and meanTarget represents the average brightness of the third image block.
Further, the image to be denoised is a single-channel image, or a multi-channel image, or a fusion image of visible light and infrared light.
In a third aspect, the present application provides an electronic device, which comprises at least a processor and a memory, wherein the processor is configured to implement the method of the first aspect when executing a computer program or instructions stored in the memory.
In a fourth aspect, the present application provides a computer readable storage medium storing a computer program or instructions which, when executed by a processor, implement the method of the first aspect as described above.
The image to be denoised is decomposed into M texture layers and an edge layer, the M texture layers and the edge layer are filtered respectively, and image reconstruction is performed from the filtered M texture layers and the filtered edge layer to obtain the denoised image. Because the texture layers and the edge layer of the image are filtered separately, information about edges with sharp pixel-value changes and finely varying textures in the image is better preserved, achieving a better denoising effect.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic diagram of an existing target block matching similar candidate block according to an embodiment of the present disclosure;
fig. 2 is a flowchart of an image denoising method according to an embodiment of the present application;
fig. 3 is a schematic diagram illustrating a texture layer acquisition according to an embodiment of the present application;
fig. 4 is a schematic diagram of matching a similar candidate block after luminance correction for a target block according to an embodiment of the present application;
FIG. 5 is a diagram illustrating a complete implementation process of an image denoising method in an embodiment of the present application;
fig. 6 is a schematic structural diagram of an image noise reduction device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device provided in the present application.
Detailed Description
To make the purpose and embodiments of the present application clearer, the following will clearly and completely describe the exemplary embodiments of the present application with reference to the attached drawings in the exemplary embodiments of the present application, and it is obvious that the described exemplary embodiments are only a part of the embodiments of the present application, and not all of the embodiments.
It should be noted that the brief descriptions of the terms in the present application are only for the convenience of understanding the embodiments described below, and are not intended to limit the embodiments of the present application. These terms should be understood in their ordinary and customary meaning unless otherwise indicated.
The terms "first," "second," "third," and the like in the description and claims of this application and in the foregoing drawings are used for distinguishing between similar or analogous objects or entities and are not necessarily intended to limit the order or sequence in which they are presented unless otherwise indicated. It is to be understood that the terms so used are interchangeable under appropriate circumstances.
The terms "comprises" and "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to all elements expressly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
The term "module" refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that is capable of performing the functionality associated with that element.
In the prior art, a key step of conventional denoising algorithms that currently perform well, such as NLM, BM3D, and WNNM, is to find several candidate blocks similar to a target block in the image to be denoised. However, because boundaries with drastic pixel-value changes and subtly varying texture information may be mixed together in the original image, the matching between candidate blocks and the target block in conventional denoising algorithms is not accurate enough, and part of the edge and texture information of the image may be lost, resulting in a poor denoising effect for the subsequent image.
In order to improve the accuracy of image block matching and improve the noise reduction effect of the noise-reduced image, embodiments of the present application provide an image noise reduction method, apparatus, device, and storage medium.
Example 1:
fig. 2 is a flowchart of an image denoising method provided in an embodiment of the present application, where the method may be applied to electronic devices such as a mobile terminal, a computer terminal, a monitoring camera, and a server, and the method includes:
s201: the electronic device decomposes an image to be denoised into M texture layers and an edge layer, wherein M is a positive integer.
In one possible implementation, for an image to be denoised, the electronic device may perform edge feature extraction on the image to be denoised by using an edge preserving filter or the like, decompose the image to be denoised into M candidate edge layers, and use the M-th candidate edge layer as an edge layer to be denoised;
and subtracting the 1st candidate edge layer from the image to be denoised to obtain the 1st texture layer, and so on, until the Mth candidate edge layer is subtracted from the (M-1)th candidate edge layer to obtain the Mth texture layer, finally yielding the M texture layers. Fig. 3 is a schematic diagram illustrating texture layer acquisition according to an embodiment of the present disclosure. As shown in fig. 3, a texture layer is obtained by subtracting an image B from an image A, where image A is either the image to be denoised or the previous candidate edge layer, and image B is the current candidate edge layer.
The texture information of the image to be denoised can be left in the texture layer more by decomposing the image into a plurality of texture layers, so that the condition that the texture layer information left in the edge layer reduces the accuracy of edge layer block matching is avoided.
In a possible embodiment, in order to make the image block similarity matching more accurate and obtain a better noise reduction effect, in this embodiment, decomposing the image to be noise reduced into M texture layers and an edge layer includes:
performing edge feature extraction on the image to be subjected to noise reduction to obtain M candidate edge layers, wherein the 1 st candidate edge layer in the M candidate edge layers is obtained by performing edge feature extraction on the image to be subjected to noise reduction, the Nth candidate edge layer is obtained by performing edge feature extraction on the N-1 th candidate edge layer, and N is greater than or equal to 2 and less than or equal to M;
and obtaining the M texture layers according to the difference between the image to be denoised and the 1 st candidate edge layer and the difference between any two adjacent candidate edge layers in the M candidate edge layers, and taking the Mth candidate edge layer in the M candidate edge layers as the edge layer.
In the implementation of the present application, an edge-preserving filter may be used for the edge feature extraction. Specifically, edge feature extraction is performed on the image to be denoised to obtain the 1st candidate edge layer; the 2nd candidate edge layer is then obtained by performing edge feature extraction on the 1st candidate edge layer, and each subsequent candidate edge layer is obtained by performing edge feature extraction on the previous candidate edge layer; the Mth candidate edge layer serves as the edge layer.
After the M candidate edge layers are obtained, the M texture layers are obtained by subtraction: the 1st texture layer is obtained by subtracting the 1st candidate edge layer from the image to be denoised, the 2nd texture layer is obtained by subtracting the 2nd candidate edge layer from the 1st candidate edge layer, and each subsequent texture layer is obtained by subtracting the current candidate edge layer from the previous candidate edge layer.
The edge-preserving filter may be, for example, a weighted least squares (WLS) filter, a bilateral filter, a local edge-preserving (LEP) filter, or NLM.
For example, first, 4 candidate edge layers are extracted from an image to be denoised by using an edge-preserving filter, in the following manner:
B1 = ψ_α(I);
B2 = ψ_α(B1);
B3 = ψ_α(B2);
B4 = ψ_α(B3);
and then 4 texture layers are obtained by calculation according to the following formula:
T1 = I - B1;
T2 = B1 - B2;
T3 = B2 - B3;
T4 = B3 - B4;
B = B4;
where ψ is the edge-preserving filter algorithm, α is the parameter of the edge-preserving filter, B1, B2, B3, and B4 are respectively the 1st, 2nd, 3rd, and 4th candidate edge layers, T1, T2, T3, and T4 are respectively the 1st, 2nd, 3rd, and 4th texture layers, and B4 is selected as the edge layer B.
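As an illustrative sketch (not the claimed implementation), the four-level decomposition above can be written in Python, with a simple box filter standing in for the edge-preserving filter ψ_α; any of the WLS, bilateral, or LEP filters mentioned above could be substituted:

```python
import numpy as np

def smooth(img, k=3):
    """Box-filter stand-in for the edge-preserving filter psi_alpha."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    h, w = img.shape
    acc = np.zeros((h, w), dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            acc += p[dy:dy + h, dx:dx + w]
    return acc / (k * k)

def decompose(I, M=4):
    """Decompose image I into M texture layers and one edge layer:
    B_n = psi(B_{n-1}), T_1 = I - B_1, T_n = B_{n-1} - B_n, B = B_M."""
    B, prev = [], I
    for _ in range(M):
        b = smooth(prev)
        B.append(b)
        prev = b
    T, prev = [], I
    for b in B:
        T.append(prev - b)
        prev = b
    return T, B[-1]
```

Because the texture layers are telescoping differences, T1 + T2 + ... + TM + B reproduces the input exactly, which is what makes the reconstruction in step S203 lossless apart from the per-layer filtering.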
S202: carrying out filtering processing on the M texture layers and carrying out filtering processing on the edge layer;
the electronic equipment determines a target image block taking the pixel point as a central pixel point for each pixel point of each texture layer based on M texture layers obtained through decomposition, determines a plurality of candidate image blocks similar to the target image block in the M texture layers, and calculates the value of the pixel point in the final target image block, namely the noise reduction result, according to the pixel value of the central pixel point of the candidate image blocks and the weights of the candidate image blocks.
And aiming at each pixel point in the edge layer, determining a target image block taking the pixel point as a central pixel point, determining a plurality of candidate image blocks similar to the target image block in the edge layer, and calculating to obtain the value of the pixel point in the final target image block, namely a noise reduction result, according to the pixel values of the central pixel points of the candidate image blocks and the weights of the candidate image blocks.
The weights of the candidate image blocks may be preset; for example, if there are K candidate image blocks, the weight of each may be 1/K, or the weight of each candidate image block may be calculated from its Euclidean distance to the target image block.
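For illustration, the distance-based weighting option can be sketched as below; the exponential-of-distance form and the value of sigma are assumptions for the sketch, not fixed by the text:

```python
import numpy as np

def denoise_center(target, candidates, sigma=10.0):
    """Re-estimate the center pixel of `target` from K similar blocks.

    Weights follow the distance-based option described above, here assumed
    to be exp(-d2 / sigma^2) of the squared Euclidean distance between blocks,
    normalized to sum to 1."""
    d2 = np.array([np.sum((target - c) ** 2) for c in candidates])
    w = np.exp(-d2 / (sigma ** 2))
    w = w / w.sum()
    r = target.shape[0] // 2
    centers = np.array([c[r, r] for c in candidates])
    return float(np.dot(w, centers))
```

With K identical candidates the weights reduce to 1/K each, recovering the preset-weight option.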
S203: and performing image reconstruction according to the M texture layers after the filtering processing and the edge layers after the filtering processing to obtain the image after noise reduction.
In one possible implementation, the electronic device may perform image reconstruction on the filtered M texture layers and the filtered edge layer by using the following formula:
deI = max(0, min(deT1 + deT2 + … + deTM + deB, 255))
where deI is the pixel value of a pixel point of the denoised image, deT1, deT2, …, deTM are respectively the pixel values of the corresponding pixel points of the filtered 1st, 2nd, …, Mth texture layers, and deB is the pixel value of the corresponding pixel point of the filtered edge layer.
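A minimal sketch of this reconstruction formula, assuming 8-bit pixel values:

```python
import numpy as np

def reconstruct(texture_layers, edge_layer):
    """deI = max(0, min(deT1 + ... + deTM + deB, 255)), applied per pixel."""
    out = edge_layer.astype(np.float64).copy()
    for t in texture_layers:
        out = out + t
    return np.clip(out, 0.0, 255.0)
```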
It should be understood that, in the embodiment of the present application, the image to be denoised may be a single-channel image, a multi-channel image, a fused image of visible light and infrared, or the like.
Optionally, for a multi-channel image or a fused image containing visible light and infrared, the image may be decomposed into single-channel images, each single-channel image processed as above, and the processed single-channel images finally recombined into the multi-channel or fused image. Alternatively, the multi-channel or fused image can be converted into another color space, such as YUV, and then processed.
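As an illustration of the color-space option, the sketch below converts between RGB and YUV with the BT.601 coefficients; the text does not fix a particular coding, so these matrices are one common assumption:

```python
import numpy as np

# BT.601 RGB -> YUV matrix (one common choice; the text does not fix one).
RGB2YUV = np.array([[ 0.299,    0.587,    0.114  ],
                    [-0.14713, -0.28886,  0.436  ],
                    [ 0.615,   -0.51499, -0.10001]])
YUV2RGB = np.linalg.inv(RGB2YUV)

def rgb_to_yuv(rgb):
    """Per-pixel linear transform; rgb has shape (..., 3)."""
    return rgb @ RGB2YUV.T

def yuv_to_rgb(yuv):
    """Inverse transform back to RGB."""
    return yuv @ YUV2RGB.T
```

The Y (luminance) channel would then be denoised as a single-channel image and the result converted back.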
In summary, the image to be denoised is decomposed into M texture layers and an edge layer, the M texture layers and the edge layer are filtered respectively, and image reconstruction is performed from the filtered M texture layers and the filtered edge layer to obtain the denoised image. Filtering the texture layers and the edge layer separately better preserves the boundaries with sharp pixel-value changes and the finely varying textures in the image, achieving a better denoising effect.
Example 2:
on the basis of the foregoing embodiments, in an embodiment of the present application, the performing filtering processing on the M texture layers includes:
determining K second image blocks with highest similarity to a first image block in the M texture layers according to the first image block taking the pixel point as the center aiming at each pixel point in the M texture layers, wherein K is a positive integer;
and reassigning the values of the pixel points according to the values of the central pixel points of the K second image blocks.
For each pixel point in each of the M texture layers, a first image block is determined with the pixel point as its center. The size of the first image block may be 3 × 3 or 5 × 5 and may be set according to the actual situation, but generally does not exceed 9 × 9, so that an overly large block does not degrade the noise reduction effect of the image.
the candidate second image blocks having the same size as the first image block are traversed in M texture layers, and K most similar candidate second image blocks are selected as the second image blocks, where the K second image blocks may include the first image block, and the image block similarity algorithm may use, for example, a cosine similarity algorithm, a hash algorithm, or a Structural Similarity Index (SSIM), a deep learning algorithm, or the like.
After the K second image blocks are determined, the electronic device can reassign the central pixel points of the first image blocks according to the values of the central pixel points of the K second image blocks and the weights corresponding to the second image blocks.
Specifically, the weight of each second image block and the noise reduction result of the center pixel point of the first image block may be determined with reference to the following formulas.

The weight of each second image block may be determined as:

weight_k = exp(-D_k / σ)

or

weight_k = exp(-D_k² / σ²)

where weight_k is the weight of the k-th second image block, k = 1, 2, 3, …, K, and

D_k = (1/n) · Σ_{i=1}^{n} |A_i - A_i^k|

is the mean of the absolute differences between the pixel values of the pixel points of the first image block and those of the k-th second image block in the texture layer, n is the number of pixel points contained in the first image block and the second image block, and σ is a manually set parameter.

The noise reduction result of the center pixel point of the first image block may be determined as:

targetA = Σ_{k=1}^{K} (weight_k · cenA_k) / Σ_{k=1}^{K} weight_k

where targetA is the noise reduction result of the center pixel point of the first image block in the M texture layers, and cenA_k is the pixel value of the center pixel point of the k-th second image block corresponding to the first image block in the texture layer.
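The block matching and weighted reassignment described above can be sketched as follows, using the weight_k = exp(-D_k / σ) variant; the block values, K, and σ are illustrative assumptions, and a real implementation would traverse all candidate positions in the M texture layers.

```python
import math

def sad_mean(p, q):
    # D_k: mean absolute difference between the first image block and a candidate block.
    return sum(abs(a - b) for a, b in zip(p, q)) / len(p)

def denoise_center(first_block, candidates, centers, k_best=2, sigma=10.0):
    # Keep the k_best most similar candidate blocks, then replace the center
    # pixel with the weight_k = exp(-D_k / sigma) weighted average of their centers.
    ranked = sorted(zip(candidates, centers),
                    key=lambda bc: sad_mean(first_block, bc[0]))
    top = ranked[:k_best]
    weights = [math.exp(-sad_mean(first_block, b) / sigma) for b, _ in top]
    return sum(w * c for w, (_, c) in zip(weights, top)) / sum(weights)

# 3x3 blocks flattened row by row; the first block's own noisy center is 130.
first = [100, 101, 99, 102, 130, 98, 100, 101, 99]
candidates = [first,                                         # K blocks may include the first block
              [101, 100, 100, 101, 102, 99, 100, 100, 101],  # similar, clean center
              [10, 200, 30, 250, 5, 220, 15, 240, 20]]       # dissimilar
centers = [block[4] for block in candidates]

result = denoise_center(first, candidates, centers)          # pulled from 130 toward 102
```

The noisy center value 130 is averaged toward the clean matching block's center, while the dissimilar block is excluded by the ranking step.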
Example 3:
on the basis of the foregoing embodiments, in an embodiment of the present application, the performing filtering processing on the edge layer includes:
for each pixel point in the edge layer, determining, in the edge layer, K fourth image blocks with the highest similarity to a third image block centered on the pixel point, wherein K is a positive integer;
and reassigning the values of the pixel points according to the values of the central pixel points of the K fourth image blocks.
For each pixel point in the edge layer, a third image block is determined with the pixel point as its center. The size of the third image block may be 3 × 3 or 5 × 5 and may be set manually according to the actual situation, but generally does not exceed 9 × 9, so that an overly large block does not degrade the noise reduction effect of the image.
Traversing candidate fourth image blocks with the same size as the third image block in the edge layer, and selecting K most similar candidate fourth image blocks as the fourth image blocks, wherein the K fourth image blocks may include the third image block, and the image block similarity algorithm may use, for example, a cosine similarity algorithm, a hash algorithm, or SSIM, a deep learning algorithm, and the like, which are not described herein again;
and after the K fourth image blocks are determined, re-assigning the central pixel points of the third image blocks according to the values of the central pixel points of the K fourth image blocks and the weights corresponding to the fourth image blocks.
Specifically, a formula for calculating the weight of each fourth image block and a formula for obtaining the noise reduction result of the center pixel point of the third image block are provided herein. It should be noted that the weight formula and the noise reduction formula are not limited to the examples given in the embodiments of the present application.

The weight of each fourth image block may be determined as:

W_k = exp(-D_k / σ)

or

W_k = exp(-D_k² / σ²)

where W_k is the weight corresponding to the k-th fourth image block, k = 1, 2, 3, …, K, and

D_k = (1/n) · Σ_{i=1}^{n} |A_i - A_i^k|

is the mean of the absolute differences between the pixel values of the pixel points of the third image block and those of the k-th fourth image block in the edge layer, n is the number of pixel points contained in the third image block and the fourth image block, and σ is a manually set parameter.

The noise reduction result of the center pixel point of the third image block may be determined as:

deTargetA = Σ_{k=1}^{K} (W_k · cenadA_k) / Σ_{k=1}^{K} W_k

where deTargetA is the noise reduction result of the center pixel point of the third image block in the edge layer, and cenadA_k is the pixel value of the center pixel point of the k-th fourth image block corresponding to the third image block in the edge layer.
Example 4:
In order to find fourth image blocks with higher similarity to the third image block in the edge layer, in this embodiment of the present application, on the basis of the foregoing embodiments, before determining, in the edge layer, the K fourth image blocks with the highest similarity to the third image block centered on the pixel point, the method includes:
determining the average brightness of the third image block and the average brightness of each of O fourth image blocks of which the similarity with the third image block is greater than a set threshold, wherein O is greater than or equal to K;
and aiming at each fourth image block in the O fourth image blocks, performing brightness correction on pixel values of pixel points in the fourth image block according to the average brightness of the fourth image block and the average brightness of the third image block.
As shown in fig. 1, the upper block is the target block, and the middle and lower blocks are candidate blocks A and B, respectively. A conventional image block similarity algorithm may judge the target block to be more similar to candidate block A, but fusing the target block with candidate block A would blur the edge. Therefore, in order to select fourth image blocks in the edge layer that are truly more similar to the third image block, luminance correction needs to be performed on the O fourth image blocks first.
Before luminance correction, the average brightness of the third image block and the average brightness of each of the O fourth image blocks whose similarity to the third image block is greater than a set threshold need to be determined. The average brightness may be the sum of the gray values of all pixel points in an image block divided by the number of pixel points contained in the image block.
Further, in a possible implementation manner, in order to select a fourth image block that is more similar to a third image block in an edge layer, the method for performing luminance correction on pixel values of pixel points in the fourth image block according to an average luminance of the fourth image block and an average luminance of the third image block includes:
according to

adA = A - meanA + meanTarget,

luminance correction is performed on the pixel values of the pixel points in the fourth image block, wherein adA represents the pixel value of a pixel point in the fourth image block after luminance correction, A represents the pixel value of the pixel point in the fourth image block before luminance correction, meanA represents the average brightness of the fourth image block, and meanTarget represents the average brightness of the third image block.
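A minimal sketch of the luminance correction, reading the description above as a mean-shift correction (adA = A - meanA + meanTarget); the block values are illustrative assumptions.

```python
def mean_luma(block):
    # Average brightness: sum of gray values divided by the number of pixels.
    return sum(block) / len(block)

def luma_correct(fourth_block, third_block):
    # Shift the candidate (fourth) block so that its average brightness matches
    # the target (third) block: adA = A - meanA + meanTarget.
    mean_a = mean_luma(fourth_block)
    mean_target = mean_luma(third_block)
    return [a - mean_a + mean_target for a in fourth_block]

third  = [50, 60, 50, 60, 55, 60, 50, 60, 50]            # target block
fourth = [150, 160, 150, 160, 155, 160, 150, 160, 150]   # same texture, brighter

corrected = luma_correct(fourth, third)
```

After correction the candidate's mean brightness equals the target's, so a subsequent similarity comparison reflects structure rather than a global brightness offset.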
Fig. 4 is a schematic diagram of matching similar candidate blocks for a target block after luminance correction, provided in an embodiment of the present application. The image blocks of fig. 4 have the structure of the image blocks of fig. 1 after luminance correction: the upper block is the target block, and the middle and lower blocks are candidate blocks A and B after luminance correction, respectively. An existing image block similarity algorithm will now judge the target block to be more similar to the corrected candidate block B, which better conforms to human vision and better preserves the edge information of the image block. Therefore, searching for similar fourth image blocks for the third image block in the edge layer after luminance correction yields a better noise reduction effect for the edge layer.
To better explain the image denoising process in the embodiment of the present application, on the basis of the above embodiments, fig. 5 shows a complete implementation flow of the image denoising method. The specific process includes the following steps:
s501: and (5) decomposing the image.
First, 4 candidate edge layers are extracted from the image to be denoised by using an edge-preserving filter, specifically:

B1 = Ψα(I);
B2 = Ψα(B1);
B3 = Ψα(B2);
B4 = Ψα(B3);

Then 4 texture layers are calculated according to the following formulas:

T1 = I - B1;
T2 = B1 - B2;
T3 = B2 - B3;
T4 = B3 - B4;
B = B4;

where Ψ is the edge-preserving filter algorithm, α is the parameter of the edge-preserving filter, B1, B2, B3, and B4 are the 1st, 2nd, 3rd, and 4th candidate edge layers, respectively, T1, T2, T3, and T4 are the 1st, 2nd, 3rd, and 4th texture layers, respectively, and B4 is selected as the edge layer B.
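The decomposition in S501 can be sketched on a single image row. The edge-preserving filter Ψα is not pinned down here, so a 3-tap moving average stands in for it (a real implementation would use a bilateral or guided filter); what the sketch shows is the layer structure and that the decomposition telescopes, so T1 + T2 + T3 + T4 + B recovers the input exactly.

```python
def smooth(signal):
    # Stand-in for the edge-preserving filter Ψα: a 3-tap moving average.
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - 1), min(len(signal), i + 2)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def decompose(image, levels=4):
    # B1 = Ψα(I), B_n = Ψα(B_{n-1}); T1 = I - B1, T_n = B_{n-1} - B_n; B = B4.
    bases, prev = [], image
    for _ in range(levels):
        prev = smooth(prev)
        bases.append(prev)
    textures = [[a - b for a, b in zip(p, q)]
                for p, q in zip([image] + bases[:-1], bases)]
    return textures, bases[-1]

row = [10, 10, 200, 200, 10, 10, 10, 120]   # one image row (illustrative)
textures, edge_layer = decompose(row)

# The decomposition telescopes, so summing all layers recovers the input.
recon = [sum(vals) for vals in zip(*textures, edge_layer)]
```

The exact-reconstruction property holds regardless of which filter Ψα is used, which is why the choice of edge-preserving filter only affects how content is split between layers, not whether the image can be rebuilt.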
S502: and (5) matching blocks.
In the texture layers, for each pixel point of T1, T2, T3, and T4, according to the first image block centered on the pixel point, image blocks of the same size as the first image block are traversed in T1, T2, T3, and T4, and the 6 second image blocks with the highest similarity are selected.
In the edge layer, for each pixel point of B, according to the third image block centered on the pixel point, image blocks of the same size as the third image block are traversed in B and the 12 fourth image blocks with the highest similarity are selected. The average brightness of the third image block and of the 12 fourth image blocks is then calculated, and according to

adA = A - meanA + meanTarget,

luminance correction is performed on the pixel values of the pixel points in the 12 fourth image blocks; the 6 fourth image blocks with the highest similarity are then selected from the 12 luminance-corrected fourth image blocks.
Here adA represents the pixel value of a pixel point in a fourth image block after luminance correction, A represents the pixel value of the pixel point before luminance correction, meanA represents the average brightness of the fourth image block, and meanTarget represents the average brightness of the third image block. The image block similarity may be judged by, for example, an average hash algorithm.
S503: and (4) calculating the weight.
The weight of each second image block and each fourth image block is calculated according to:

weight_p = exp(-D_p / σ)

or

weight_p = exp(-D_p² / σ²)

where weight_p is the weight corresponding to the p-th second image block or the p-th fourth image block, p = 1, 2, …, 6, σ is an externally configured parameter, and

D_p = (1/n) · Σ_{i=1}^{n} |A_i - A_i^p|

is the mean of the absolute differences between the pixel values of the pixel points of the first image block and those of the p-th second image block in the texture layer, or between the pixel values of the pixel points of the third image block and those of the p-th fourth image block in the edge layer, where n is the number of pixel points contained in the first image block or the third image block.
S504: and (5) noise reduction and filtering.
The noise reduction result of the center pixel point of each first image block and each third image block is calculated according to:

deTargetB = Σ_{p=1}^{6} (weight_p · cenB_p) / Σ_{p=1}^{6} weight_p

where deTargetB is the noise reduction result of the center pixel point of the first image block in the texture layer or of the third image block in the edge layer, and cenB_p is the pixel value of the center pixel point of the p-th second image block corresponding to the first image block in the texture layer or of the p-th fourth image block corresponding to the third image block in the edge layer.
S505: and (5) image reconstruction.
The 4 filtered texture layers and the filtered edge layer are reconstructed into an image using the following formula:

deI = max(0, min(deT1 + deT2 + deT3 + deT4 + deB, 255))

where deI is the pixel value of a pixel point of the denoised image, deT1, deT2, deT3, and deT4 are the pixel values of the corresponding pixel points of the 1st, 2nd, 3rd, and 4th filtered texture layers, respectively, and deB is the pixel value of the corresponding pixel point of the filtered edge layer.
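The per-pixel reconstruction in S505 can be sketched directly from the formula; the layer values are illustrative.

```python
def reconstruct_pixel(texture_values, edge_value):
    # deI = max(0, min(deT1 + deT2 + deT3 + deT4 + deB, 255))
    return max(0, min(sum(texture_values) + edge_value, 255))

print(reconstruct_pixel([3, -2, 5, 1], 120))        # 127: within range, kept as-is
print(reconstruct_pixel([40, 30, 20, 10], 200))     # 255: clipped from 300
print(reconstruct_pixel([-50, -60, -10, -5], 100))  # 0: clipped from -25
```

The clamp to [0, 255] matters because the filtered texture layers carry signed differences, so their sum with the edge layer can momentarily leave the valid 8-bit pixel range.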
Example 5:
based on the above image denoising method, an embodiment of the present application further provides an image denoising device, and fig. 6 is a schematic structural diagram of an image noise reduction device provided in the embodiment of the present application, where the device includes:
a decomposition module 601, configured to decompose an image to be denoised into M texture layers and an edge layer, where M is a positive integer;
a filtering module 602, configured to perform filtering processing on the M texture layers and perform filtering processing on the edge layer;
a reconstructing module 603, configured to perform image reconstruction according to the M texture layers after the filtering processing and the edge layer after the filtering processing, so as to obtain an image after noise reduction.
The decomposition module 601 is further configured to, when the image to be subjected to noise reduction is decomposed into M texture layers and an edge layer, specifically perform edge feature extraction on the image to be subjected to noise reduction to obtain M candidate edge layers, where a 1 st candidate edge layer in the M candidate edge layers is obtained by performing edge feature extraction on the image to be subjected to noise reduction, an nth candidate edge layer is obtained by performing edge feature extraction on the N-1 th candidate edge layer, and N is greater than or equal to 2 and less than or equal to M; and obtaining the M texture layers according to the difference between the image to be denoised and the 1 st candidate edge layer and the difference between any two adjacent candidate edge layers in the M candidate edge layers, and taking the Mth candidate edge layer in the M candidate edge layers as the edge layer.
When the filtering module 602 performs filtering processing on the M texture layers, the filtering module is further configured to determine, for each pixel point in the M texture layers, K second image blocks with the highest similarity to a first image block in the M texture layers according to the first image block centered on the pixel point, where K is a positive integer, and reassign the value of the pixel point according to the value of a central pixel point of the K second image blocks;
when the filtering module 602 performs filtering processing on the edge layer, the filtering module is further configured to determine, for each pixel point in the edge layer, K fourth image blocks with the highest similarity to a third image block in the edge layer according to the third image block centered on the pixel point, where K is a positive integer, and reassign the value of the pixel point according to the value of a central pixel point of the K fourth image blocks.
The filtering module 602 is further configured to, before determining, in the edge layer, the K fourth image blocks with the highest similarity to the third image block centered on the pixel point, determine the average brightness of the third image block and the average brightness of each of O fourth image blocks whose similarity to the third image block is greater than a set threshold, where O is greater than or equal to K; and for each of the O fourth image blocks, perform luminance correction on the pixel values of the pixel points in the fourth image block according to the average brightness of the fourth image block and the average brightness of the third image block.
The filtering module 602 performs luminance correction on the pixel values of the pixel points in the fourth image block according to the average brightness of the fourth image block and the average brightness of the third image block, specifically according to

adA = A - meanA + meanTarget,

where adA represents the pixel value of a pixel point in the fourth image block after luminance correction, A represents the pixel value of the pixel point in the fourth image block before luminance correction, meanA represents the average brightness of the fourth image block, and meanTarget represents the average brightness of the third image block.
The image to be denoised is a single-channel image, or a multi-channel image, or a fusion image of visible light and infrared.
The device can be deployed in any electronic equipment such as mobile equipment, computer terminals, monitoring cameras and servers which can perform image noise reduction.
Example 6:
fig. 7 is a schematic structural diagram of an electronic device provided in the present application. As shown in fig. 7, the electronic device includes: a processor 701, a communication interface 702, a memory 703 and a communication bus 704, wherein the processor 701, the communication interface 702 and the memory 703 communicate with each other through the communication bus 704.
The memory 703 has stored therein a computer program which, when executed by the processor 701, causes the processor 701 to perform any of the steps of the image noise reduction method described above.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this is not intended to represent only one bus or type of bus.
The communication interface 702 is used for communication between the electronic apparatus and other apparatuses.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Alternatively, the memory may be at least one memory device located remotely from the processor.
The processor may be a general-purpose processor, including a central processing unit, a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, discrete gate or transistor logic, or discrete hardware components.
Example 7:
on the basis of the foregoing embodiments, the present application further provides a computer-readable storage medium, where a computer program executable by an electronic device is stored in the computer-readable storage medium, and when the program is run on the electronic device, the electronic device is caused to perform any one of the steps of the image denoising method described above.
The computer readable storage medium may be any available medium or data storage device that can be accessed by a processor in an electronic device, including but not limited to magnetic memory such as floppy disks, hard disks, magnetic tape, magneto-optical disks (MO), etc., optical memory such as CDs, DVDs, BDs, HVDs, etc., and semiconductor memory such as ROMs, EPROMs, EEPROMs, non-volatile memory (NAND FLASH), solid State Disks (SSDs), etc.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.
The foregoing description, for purposes of explanation, has been presented in conjunction with specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed above. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles and the practical application, to thereby enable others skilled in the art to best utilize the embodiments and various embodiments with various modifications as are suited to the particular use contemplated.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (16)

1. A method for image noise reduction, the method comprising:
decomposing an image to be denoised into M texture layers and an edge layer, wherein M is a positive integer;
performing filtering processing on the M texture layers and performing filtering processing on the edge layer;
and performing image reconstruction according to the M texture layers after filtering and the edge layer after filtering to obtain an image after noise reduction.
2. The method of claim 1, wherein decomposing the image to be denoised into M texture layers and an edge layer comprises:
performing edge feature extraction on the image to be subjected to noise reduction to obtain M candidate edge layers, wherein the 1 st candidate edge layer in the M candidate edge layers is obtained by performing edge feature extraction on the image to be subjected to noise reduction, the Nth candidate edge layer is obtained by performing edge feature extraction on the N-1 th candidate edge layer, and N is greater than or equal to 2 and less than or equal to M;
and obtaining the M texture layers according to the difference between the image to be denoised and the 1 st candidate edge layer and the difference between any two adjacent candidate edge layers in the M candidate edge layers, and taking the Mth candidate edge layer in the M candidate edge layers as the edge layer.
3. The method of claim 1, wherein said filtering said M texture layers comprises:
for each pixel point in the M texture layers, according to a first image block taking the pixel point as a center, K second image blocks with the highest similarity with the first image block are determined in the M texture layers, wherein K is a positive integer;
and according to the values of the central pixel points of the K second image blocks, the values of the pixel points are re-assigned.
4. The method of claim 1, wherein the filtering the edge layer comprises:
for each pixel point in the edge layer, according to a third image block taking the pixel point as a center, K fourth image blocks with the highest similarity with the third image block are determined in the edge layer, wherein K is a positive integer;
and reassigning the values of the pixel points according to the values of the central pixel points of the K fourth image blocks.
5. The method according to claim 4, wherein before determining, in the edge layer, K fourth image blocks with highest similarity to the third image block according to a third image block centered on the pixel point, the method comprises:
determining the average brightness of the third image block and the average brightness of each of O fourth image blocks with the similarity with the third image block larger than a set threshold, wherein O is larger than or equal to K;
and aiming at each fourth image block in the O fourth image blocks, performing brightness correction on pixel values of pixel points in the fourth image block according to the average brightness of the fourth image block and the average brightness of the third image block.
6. The method according to claim 5, wherein performing luminance correction on pixel values of pixel points in the fourth image block according to the average luminance of the fourth image block and the average luminance of the third image block comprises:
according to

adA = A - meanA + meanTarget,

performing luminance correction on the pixel values of the pixel points in the fourth image block, wherein adA represents the pixel value of a pixel point in the fourth image block after luminance correction, A represents the pixel value of the pixel point in the fourth image block before luminance correction, meanA represents the average luminance of the fourth image block, and meanTarget represents the average luminance of the third image block.
7. The method according to claim 1, wherein the image to be denoised is a single-channel image, or a multi-channel image, or a fused image of visible light and infrared.
8. An image noise reduction apparatus, characterized by comprising:
the decomposition module is used for decomposing an image to be denoised into M texture layers and an edge layer, wherein M is a positive integer;
the filtering module is used for carrying out filtering processing on the M texture layers and carrying out filtering processing on the edge layer;
and the reconstruction module is used for carrying out image reconstruction according to the M texture layers after the filtering processing and the edge layer after the filtering processing to obtain the image after the noise reduction.
9. The apparatus according to claim 8, wherein the decomposition module is specifically configured to, when decomposing an image to be noise-reduced into M texture layers and an edge layer, specifically perform edge feature extraction on the image to be noise-reduced to obtain M candidate edge layers, where a 1 st candidate edge layer of the M candidate edge layers is obtained by performing edge feature extraction based on the image to be noise-reduced, an nth candidate edge layer is obtained by performing edge feature extraction based on an N-1 th candidate edge layer, N is greater than or equal to 2 and less than or equal to M; and obtaining the M texture layers according to the difference between the image to be denoised and the 1 st candidate edge layer and the difference between any two adjacent candidate edge layers in the M candidate edge layers, and taking the Mth candidate edge layer in the M candidate edge layers as the edge layer.
10. The apparatus according to claim 8, wherein the filtering module, when performing filtering processing on the M texture layers, is specifically configured to determine, for each pixel point in the M texture layers, K second image blocks with a highest similarity to a first image block in the M texture layers according to the first image block centered on the pixel point, where K is a positive integer; and reassigning the values of the pixel points according to the values of the central pixel points of the K second image blocks.
11. The apparatus according to claim 8, wherein the filtering module, when performing filtering processing on the edge layer, is specifically configured to determine, for each pixel point in the edge layer, K fourth image blocks that have a highest similarity to a third image block according to the third image block that takes the pixel point as a center in the edge layer, where K is a positive integer; and reassigning the values of the pixel points according to the values of the central pixel points of the K fourth image blocks.
12. The apparatus of claim 11, wherein the filtering module is further configured to determine, before determining, in the edge layer, K fourth image blocks with a highest similarity to the third image block for a third image block centered on the pixel point, an average luminance of the third image block and an average luminance of each of O fourth image blocks with a similarity to the third image block greater than a set threshold, where O is greater than or equal to K; and aiming at each fourth image block in the O fourth image blocks, performing brightness correction on pixel values of pixel points in the fourth image block according to the average brightness of the fourth image block and the average brightness of the third image block.
13. The apparatus according to claim 12, wherein the filtering module is specifically configured to perform luminance correction on the pixel values of the pixel points in the fourth image block according to the formula

Figure FDA0003916577460000041

where adA denotes the pixel value of a pixel point in the fourth image block after luminance correction, a denotes the pixel value of that pixel point before luminance correction, meanA denotes the average luminance of the fourth image block, and meanTarget denotes the average luminance of the third image block.
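The formula itself is given only as an image in the published text. A plausible sketch of the correction in claims 12-13, assuming the common additive mean-matching form adA = a - meanA + meanTarget (an assumption, since the published formula is not reproduced here):

```python
import numpy as np

def luminance_correct(fourth_block, mean_target):
    """Shift a candidate (fourth) block so its average luminance matches
    that of the reference (third) block. The additive form used here is
    an assumption; the patent's exact formula is given as an image."""
    mean_a = fourth_block.mean()                 # meanA
    return fourth_block + (mean_target - mean_a)  # adA = a - meanA + meanTarget
```

Correcting candidate blocks to a common mean before block matching makes the similarity search sensitive to structure rather than to local brightness differences, which matters on an edge layer.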
14. The apparatus of claim 8, wherein the image to be noise-reduced is a single-channel image, a multi-channel image, or a fused visible-light and infrared image.
15. An electronic device, comprising a processor and a memory, wherein the processor, when executing computer programs or instructions stored in the memory, implements the method of any of claims 1-7.
16. A computer-readable storage medium storing a computer program or instructions which, when executed by a processor, implement the method of any one of claims 1-7.
CN202211344162.8A 2022-10-31 2022-10-31 Image noise reduction method, device, equipment and storage medium Pending CN115578287A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211344162.8A CN115578287A (en) 2022-10-31 2022-10-31 Image noise reduction method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211344162.8A CN115578287A (en) 2022-10-31 2022-10-31 Image noise reduction method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115578287A true CN115578287A (en) 2023-01-06

Family

ID=84588312

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211344162.8A Pending CN115578287A (en) 2022-10-31 2022-10-31 Image noise reduction method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115578287A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116993626A (en) * 2023-09-26 2023-11-03 成都市晶林科技有限公司 Infrared image noise reduction method and system based on time-space domain

Similar Documents

Publication Publication Date Title
Mohan et al. A deep neural network learning‐based speckle noise removal technique for enhancing the quality of synthetic‐aperture radar images
US9443286B2 (en) Gray image processing method and apparatus based on wavelet transformation
CN110796615B (en) Image denoising method, device and storage medium
CN107172322B (en) Video noise reduction method and device
Xu et al. A switching weighted vector median filter based on edge detection
WO2016074725A1 (en) Non local image denoising
CN111179201B (en) Video denoising method and electronic equipment
CN115619683B (en) Image processing method, apparatus, device, storage medium, and computer program product
US10911785B2 (en) Intelligent compression of grainy video content
CN110880001A (en) Training method, device and storage medium for semantic segmentation neural network
CN115578287A (en) Image noise reduction method, device, equipment and storage medium
WO2020049567A1 (en) Model-free physics-based reconstruction of images acquired in scattering media
CN115984570A (en) Video denoising method and device, storage medium and electronic device
Singh et al. Triple Threshold Statistical Detection filter for removing high density random-valued impulse noise in images
CN111882565A (en) Image binarization method, device, equipment and storage medium
CN115131229A (en) Image noise reduction and filtering data processing method and device and computer equipment
CN110880183A (en) Image segmentation method, device and computer-readable storage medium
CN113011433B (en) Filtering parameter adjusting method and device
CN117078574A (en) Image rain removing method and device
Li et al. A cascaded algorithm for image quality assessment and image denoising based on CNN for image security and authorization
CN109978928B (en) Binocular vision stereo matching method and system based on weighted voting
Palacios-Enriquez et al. Sparse technique for images corrupted by mixed Gaussian-impulsive noise
Park et al. Image enhancement for extremely low light conditions
Zhang et al. A new image filtering method: Nonlocal image guided averaging
CN113469889B (en) Image noise reduction method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination