CN112529854A - Noise estimation method, device, storage medium and equipment

Noise estimation method, device, storage medium and equipment

Info

Publication number: CN112529854A (granted as CN112529854B)
Application number: CN202011375395.5A
Authority: CN (China)
Prior art keywords: frequency component, noise estimation, estimation value, low
Legal status: Granted; Active
Other languages: Chinese (zh)
Inventors: 李琤, 宋风龙, 黄亦斌
Current and original assignee: Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd; priority to CN202011375395.5A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10004: Still image; Photographic image
    • G06T 2207/10016: Video; Image sequence
    • G06T 2207/10024: Color image
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20048: Transform domain processing
    • G06T 2207/20052: Discrete cosine transform [DCT]
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30168: Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to the technical field of artificial intelligence, and discloses a noise estimation method, apparatus, storage medium and device, including: firstly, dividing N frames of images to be estimated into a plurality of image blocks of fixed size; performing a discrete cosine transform (DCT) on each image block to obtain a low-frequency component, an intermediate-frequency component and a high-frequency component; then calculating a pixel estimation value of the low-frequency component, calculating an optimal division threshold corresponding to the intermediate-frequency component, and correcting the power spectral density of the high-frequency component and the image signal processor (ISP) pipeline for the high-frequency component to obtain an initial noise estimation value; and determining a final noise estimation result of the N frames of images to be estimated according to the pixel estimation value of the low-frequency component of the N frames of images and the initial noise estimation value. In this way, because the low-frequency, intermediate-frequency and high-frequency components are processed in parallel, with power spectral density correction and ISP pipeline correction applied to the high-frequency component, the accuracy of the noise estimation result for real mobile phone images and videos can be improved.

Description

Noise estimation method, device, storage medium and equipment
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to a noise estimation method, apparatus, storage medium, and device.
Background
With the rapid development of the mobile internet, the internet of things and artificial intelligence (AI) technologies, the photographing and video quality of terminal devices such as smartphones has progressed rapidly. However, limited by the hardware performance of the optical sensor of the terminal device and by the hardware area and power consumption constraints of the image signal processor (ISP), image and video quality is still not high enough. Image and video denoising therefore remains an urgent need, both to greatly improve imaging quality and to improve the accuracy of later computer vision processing. To achieve effective image and video denoising, accurate noise estimation is crucial and greatly influences the actual denoising effect.
Currently, two noise estimation methods are in general use. One is an estimation method based on a Poisson-Gaussian noise (PG noise) model. It assumes that the raw-data noise consists of signal-dependent Poisson noise and signal-independent Gaussian noise, performs a discrete wavelet transform (DWT) on the image in order to separate the image signal from the noise, segments the image at the DWT low frequency, obtains signal intensity estimates from the low-frequency coefficients of each segmented region and noise intensity estimates from the high-frequency coefficients, and finally fits the relationship of all signal and noise intensities to obtain the final noise estimation model. However, the accuracy of this estimation method is seriously affected by the image content, so its robustness is insufficient; for dark regions and overexposed positions of the raw data, the assumptions of the P-G model are not satisfied and additional correction is needed; moreover, the model does not consider camera shooting parameters and therefore cannot estimate noise over the whole scene. The other commonly used method is an estimation method based on a Noise Flow model, which converts a white Gaussian noise (WGN) distribution into the true noise distribution through data training, using the minimum negative log-likelihood (NLL) loss as the training target. Some shooting parameters (such as ISO and camera model) are introduced as prior information to guide the generalization of the network across different scenes. The disadvantage of this method is that its calculation involves a large number of operators such as Jacobian determinants and matrix inversions that cannot obtain hardware support and acceleration (for example from a graphics processing unit (GPU) or a network processing unit (NPU)), and the noise estimation module in this method also requires about 160 GMAC (giga multiply-accumulate operations) of computation, far beyond the computing capability of current mobile phones. Therefore, it is necessary to design a full-scene, high-quality and efficient noise estimation AI model for the noise of real images and videos, so as to improve the accuracy of the noise estimation result for real mobile phone images and videos.
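For orientation only, the signal-dependent noise model underlying the Poisson-Gaussian approach discussed above is commonly written in the following standard textbook form; it is given here as background and is not quoted from the patent:

z = y + η(y), with η(y) ~ N(0, σ²(y)) and σ²(y) = a·y + b,

where y is the true pixel intensity, z is the observed pixel, a scales the signal-dependent (Poisson, shot-noise) part, and b is the signal-independent (Gaussian) part. The dark-area and overexposure failures mentioned above correspond to regions where this affine variance relation no longer holds.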
Disclosure of Invention
The embodiments of the present application provide a noise estimation method, apparatus, storage medium and device, which help overcome the defects of existing noise estimation methods, reduce estimation errors and improve the accuracy of the noise estimation result for real mobile phone images and videos.
In a first aspect, the present application provides a noise estimation method, including: when performing noise estimation, first dividing N frames of images to be estimated into a plurality of image blocks of fixed size, and performing a discrete cosine transform (DCT) on each image block to obtain a low-frequency component, an intermediate-frequency component and a high-frequency component; then calculating a pixel estimation value of the low-frequency component, calculating an optimal division threshold corresponding to the intermediate-frequency component, and correcting the power spectral density of the high-frequency component and the ISP pipeline for the high-frequency component to obtain an initial noise estimation value; and finally determining a final noise estimation result of the N frames of images to be estimated according to the pixel estimation value of the low-frequency component of the N frames of images and the initial noise estimation value.
Compared with the prior art, when performing noise estimation on videos and images, the embodiments of the present application adaptively separate signal and noise through the DCT coefficients, use multi-frame information to increase the number of sample blocks used for estimation, and correct for color noise and the ISP pipeline. The noise estimation error can therefore be reduced, especially in dark and overexposed regions, the calculation is simple and efficient, and the accuracy of the noise estimation result for real mobile phone images and videos is improved.
In one possible implementation, after determining the final noise estimation result of the N frames of images to be estimated according to the pixel estimation value of the low-frequency component of the N frames of images and the initial noise estimation value, the method further includes: performing model fitting using the ISO sensitivity value of the camera, the pixel estimation value and the initial noise estimation value to obtain the relational expression of a fitting model; and calculating the parameters of the fitting model by the least squares method to obtain, as the final fitting result, the full-scene noise intensity surface corresponding to the pixel estimation value and the initial noise estimation value.
In one possible implementation, calculating the pixel estimation value of the low-frequency component includes: dividing the pixel value range into K intervals according to the low-frequency component, where K is a positive integer greater than 0; and calculating the average value of all DCT low-frequency components in each interval to obtain the pixel estimation value of that interval.
In one possible implementation, correcting the power spectral density of the high-frequency component includes: correcting the power spectral density of the high-frequency component through collaborative filtering.
In one possible implementation, correcting the image signal processor (ISP) pipeline for the high-frequency component to obtain the initial noise estimation value includes: correcting the high-frequency component using the ISP pipeline parameters, and calculating the median of the corrected high-frequency component as the initial noise estimation value.
In a second aspect, the present application further provides a noise estimation apparatus, including: a dividing unit, configured to divide N frames of images to be estimated into a plurality of image blocks of fixed size and perform a discrete cosine transform on each image block to obtain a low-frequency component, an intermediate-frequency component and a high-frequency component, where N is a positive integer greater than 1; a first calculation unit, configured to calculate a pixel estimation value of the low-frequency component; a second calculation unit, configured to calculate an optimal division threshold corresponding to the intermediate-frequency component; a correction unit, configured to correct the power spectral density of the high-frequency component and correct the image signal processor (ISP) pipeline for the high-frequency component to obtain an initial noise estimation value; and a determining unit, configured to determine a final noise estimation result of the N frames of images to be estimated according to the pixel estimation value of the low-frequency component of the N frames of images and the initial noise estimation value.
In a possible implementation, the apparatus further includes: a fitting unit, configured to perform model fitting using the ISO sensitivity value of the camera, the pixel estimation value and the initial noise estimation value to obtain the relational expression of a fitting model; and an obtaining unit, configured to calculate the parameters of the fitting model by the least squares method to obtain, as the final fitting result, the full-scene noise intensity surface corresponding to the pixel estimation value and the initial noise estimation value.
In one possible implementation, the first calculation unit includes: a dividing subunit, configured to divide the pixel value range into K intervals according to the low-frequency component, where K is a positive integer greater than 0; and an obtaining subunit, configured to calculate the average value of all DCT low-frequency components in each interval to obtain the pixel estimation value of that interval.
In a possible implementation, the correction unit is specifically configured to: correct the power spectral density of the high-frequency component through collaborative filtering.
In a possible implementation, the correction unit is further specifically configured to: correct the high-frequency component using the ISP pipeline parameters, and calculate the median of the corrected high-frequency component as the initial noise estimation value.
In a third aspect, the present application further provides a noise estimation device, including: a memory, a processor;
a memory to store instructions; a processor configured to execute instructions in a memory to perform the method of the first aspect and any one of its possible implementations.
In a fourth aspect, the present application also provides a computer-readable storage medium, which includes instructions that, when executed on a computer, cause the computer to perform the method of the first aspect and any one of its possible implementations.
According to the technical scheme, the embodiment of the application has the following advantages:
when performing noise estimation, first dividing N frames of images to be estimated into a plurality of image blocks of fixed size, and performing a discrete cosine transform (DCT) on each image block to obtain a low-frequency component, an intermediate-frequency component and a high-frequency component; then calculating a pixel estimation value of the low-frequency component, calculating an optimal division threshold corresponding to the intermediate-frequency component, and correcting the power spectral density of the high-frequency component and the ISP pipeline for the high-frequency component to obtain an initial noise estimation value; and finally determining a final noise estimation result of the N frames of images to be estimated according to the pixel estimation value of the low-frequency component of the N frames of images and the initial noise estimation value. In this way, when performing noise estimation on videos and images, the embodiments of the present application adaptively separate signal and noise through the DCT coefficients, use multi-frame information to increase the number of sample blocks used for estimation, and correct for color noise and the ISP pipeline, so that the noise estimation error can be reduced, especially in dark and overexposed regions, the calculation is simple and efficient, and the accuracy of the noise estimation result for real mobile phone images and videos is improved.
Drawings
FIG. 1 is a schematic structural diagram of an artificial intelligence body framework provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of an application scenario according to an embodiment of the present application;
FIG. 3 is a second schematic diagram of an application scenario according to an embodiment of the present application;
FIG. 4 is a third schematic diagram of an application scenario according to an embodiment of the present application;
fig. 5 is a flowchart of a noise estimation method according to an embodiment of the present application;
FIG. 6 is a schematic diagram illustrating a comparison between the noise estimation method provided in the embodiments of the present application and related noise estimation methods;
FIG. 7 is a graph illustrating comparison of actual calibration noise provided by an embodiment of the present application;
FIG. 8 is a schematic diagram illustrating a comparison of denoising network visual effects provided in an embodiment of the present application;
FIG. 9 is a second comparison diagram of the de-noising network visual effect provided by the embodiment of the present application;
FIG. 10 is a third schematic diagram illustrating comparison of the visual effect of the de-noised network provided in the embodiment of the present application;
FIG. 11 is a schematic diagram of noise data synthesis effects provided by embodiments of the present application;
fig. 12 is a block diagram of a noise estimation apparatus according to an embodiment of the present disclosure;
fig. 13 is a schematic structural diagram of a noise estimation device according to an embodiment of the present application.
Detailed Description
The embodiments of the present application provide a noise estimation method, apparatus, storage medium and device, which can reduce noise estimation errors and improve the accuracy of noise estimation results for real mobile phone images and videos.
Embodiments of the present application are described below with reference to the accompanying drawings. As can be known to those skilled in the art, with the development of technology and the emergence of new scenarios, the technical solution provided in the embodiments of the present application is also applicable to similar technical problems.
The general workflow of an artificial intelligence system is described first. Referring to fig. 1, which shows a schematic structural diagram of an artificial intelligence main framework, the framework is explained below from the two dimensions of the "intelligent information chain" (horizontal axis) and the "IT value chain" (vertical axis). The "intelligent information chain" reflects the series of processes from data acquisition onwards, for example the general processes of intelligent information perception, intelligent information representation and formation, intelligent reasoning, intelligent decision-making, and intelligent execution and output; in this process, the data undergoes a "data - information - knowledge - wisdom" refinement. The "IT value chain" reflects the value that artificial intelligence brings to the information technology industry, from the underlying infrastructure of artificial intelligence and information (technologies for providing and processing information) to the industrial ecology of the system.
(1) Infrastructure
The infrastructure provides computing power support for the artificial intelligent system, realizes communication with the outside world, and realizes support through a foundation platform. Communicating with the outside through a sensor; the computing power is provided by intelligent chips (hardware acceleration chips such as CPU, NPU, GPU, ASIC, FPGA and the like); the basic platform comprises distributed computing framework, network and other related platform guarantees and supports, and can comprise cloud storage and computing, interconnection and intercommunication networks and the like. For example, sensors and external communications acquire data that is provided to intelligent chips in a distributed computing system provided by the base platform for computation.
(2) Data
Data at the upper level of the infrastructure is used to represent the data source for the field of artificial intelligence. The data relates to graphs, images, voice and texts, and also relates to the data of the Internet of things of traditional equipment, including service data of the existing system and sensing data such as force, displacement, liquid level, temperature, humidity and the like.
(3) Data processing
Data processing typically includes data training, machine learning, deep learning, searching, reasoning, decision making, and the like.
The machine learning and the deep learning can perform symbolized and formalized intelligent information modeling, extraction, preprocessing, training and the like on data.
Inference means a process of simulating an intelligent human inference mode in a computer or an intelligent system, using formalized information to think about and solve a problem by a machine according to an inference control strategy, and a typical function is searching and matching.
The decision-making refers to a process of making a decision after reasoning intelligent information, and generally provides functions of classification, sequencing, prediction and the like.
(4) General capabilities
After the above-mentioned data processing, further based on the result of the data processing, some general capabilities may be formed, such as algorithms or a general system, e.g. translation, analysis of text, computer vision processing, speech recognition, recognition of images, etc.
(5) Intelligent product and industrial application
Intelligent products and industry applications refer to the products and applications of artificial intelligence systems in various fields. They encapsulate the overall artificial intelligence solution, productize intelligent information decision-making and realize practical applications. The main application fields include: intelligent terminals, intelligent transportation, intelligent healthcare, autonomous driving, safe cities, and so on.
The method and the device can be applied to the field of image processing in the field of artificial intelligence, and an application scene of falling to a product is introduced below.
The noise estimation process applied to the computing equipment such as the terminal equipment and the cloud product is as follows:
the noise estimation method provided by the embodiment of the application can be applied to the image processing process in computing equipment such as terminal equipment and cloud products, and particularly can be applied to cameras on the terminal equipment. Referring to fig. 2, fig. 3, and fig. 4, which are schematic views of application scenarios of the embodiment of the present application, respectively, as shown in fig. 2, a terminal device is provided with an AI system that implements an image processing function, such as an american camera installed in a mobile phone. The nonparametric estimation model flow chart based on the power spectral density can be used for firstly cutting a group of multiframe images to be estimated, carrying out DCT (discrete cosine transformation) on each image block, dividing DCT coefficients into high, medium and low frequency components and high frequency components, and then processing the DCT high, medium and low frequency coefficients in parallel. The separated high-frequency component is subjected to power spectrum correction and an ISP (internet service provider) access correction module to obtain standard deviation statistic of noise, and the low-frequency component is subjected to a pixel value interval division module to finally obtain the position and the pixel value of a DCT (discrete cosine transformation) block of each interval. Meanwhile, the intermediate frequency component judges whether the DCT block is used as an estimated sampling point through the self-adaptive threshold frequency division module, and on the basis of reserving the sampling quantity as much as possible, the influence of texture and structural information is eliminated, and the deviation of an estimation result is reduced. And finally, drawing a one-dimensional curve of the pixel value and the noise standard deviation. Next, as shown in fig. 3, after noise estimation is performed on different shooting parameters, the relationship between the estimated noise curve and the shooting parameters is further fitted by using ISO values corresponding to each group of images, and a final noise estimation result of the whole scene is obtained as a final fitting result. Furthermore, as shown in fig. 4, on one hand, for the denoising network, the estimated noise model can be directly used as network input, and the network is introduced to obtain prior information of noise, so that the generalization of the model is improved; on the other hand, based on the fitted noise model, a noisy image can be generated by using the clean image, training data is supplemented, and the network effect is improved. In fig. 4, N noisy videos of ISO scenes are input, after noise estimation of a multi-frame image is performed, N noise curves are output, and then a curved surface of the full-scene noise intensity can be obtained by fitting using minimum squared-error (MSE).
The terminal device can be a mobile phone, a tablet, a notebook computer, an intelligent wearable device and the like, and the terminal device can perform noise estimation processing on each group of acquired multi-frame images. It should be understood that the embodiments of the present application may also be applied to other scenarios requiring noise estimation, and no mention is made here of other application scenarios.
Based on the application scenarios, the embodiment of the application provides a noise estimation method, and the method can be applied to weak computing equipment such as terminal equipment or high-performance computing equipment such as cloud products. As shown in fig. 5, the method includes:
s501: dividing an N frame image to be estimated into a plurality of image blocks with fixed sizes; discrete cosine transform is carried out on each image block to obtain a low-frequency component, a medium-frequency component and a high-frequency component; wherein N is a positive integer greater than 1.
In this embodiment, the N-frame image to be estimated may be image data (such as a landscape image captured by a user) captured by the terminal device through an image capture device (such as a camera), or may be previously stored image data obtained from the inside of the terminal device. The specific acquisition mode and the specific source of the N frames of images to be estimated are not limited, and can be selected according to actual conditions.
Further, after acquiring the N frames of images to be estimated, the terminal device may divide the N frames of images into a plurality of image blocks of fixed size, and perform a discrete cosine transform (DCT) on each image block to obtain a low-frequency component, an intermediate-frequency component and a high-frequency component. In an optional implementation, a spatial crop operation may be performed on the image sequence to be estimated (denoted I here), with patch_size set to [n, n] and image blocks extracted with stride n_step; for example, an H × W × F sequence I is divided into a plurality of n × n × 1 image blocks, where the pixels of the first image block correspond to the I coordinates [0 : n-1, 0 : n-1, 0]. If the image sequence to be estimated is an original Raw-domain image sequence, a shuffle operation needs to be performed on the image first, and each color component is processed separately. The divided image blocks then undergo the discrete cosine transform; the position of a transformed DCT coefficient is denoted (i, j), and the low-frequency, intermediate-frequency and high-frequency components are divided according to the value of i + j for performing the subsequent steps S502-S505.
Specifically, the image sequence to be estimated may first be defined as I_f, f ∈ [0, 1, ..., F], with F frames each of resolution [H, W]. A block-dicing operation is then performed on the image, with block size [n_patch, n_patch] and dicing stride n_slide, so that the (i, j)-th image block contains the pixels of I_f whose coordinates lie in [i·(n_slide − 1) : i·(n_slide − 1) + n_patch − 1, j·(n_slide − 1) : j·(n_slide − 1) + n_patch − 1]. DCT is then performed on each image block to obtain the low-frequency component D_L, the intermediate-frequency component D_M and the high-frequency component D_H, which are divided by the DCT coefficient position (i_dct, j_dct): for example, when n_patch is 8, coefficients with i_dct + j_dct ≤ 1 are taken as the low-frequency component D_L, coefficients with 1 < i_dct + j_dct ≤ 8 as the intermediate-frequency component D_M, and coefficients with 8 < i_dct + j_dct ≤ 16 as the high-frequency component D_H, for performing the following steps S502-S505.
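As an illustration only, the following Python sketch shows one way the block division and DCT frequency split of step S501 could be implemented; the function name, the use of scipy.fft.dctn and all variable names are assumptions for exposition, not the patented implementation.

import numpy as np
from scipy.fft import dctn

def split_dct_components(frames, n_patch=8, n_slide=8):
    """frames: array [F, H, W]; returns per-block low/intermediate/high DCT coefficients."""
    F, H, W = frames.shape
    i_idx, j_idx = np.meshgrid(np.arange(n_patch), np.arange(n_patch), indexing="ij")
    freq = i_idx + j_idx                                     # i_dct + j_dct for each coefficient
    low_mask = freq <= 1
    mid_mask = (freq > 1) & (freq <= 8)
    high_mask = (freq > 8) & (freq <= 16)

    low, mid, high, positions = [], [], [], []
    for f in range(F):
        for top in range(0, H - n_patch + 1, n_slide):
            for left in range(0, W - n_patch + 1, n_slide):
                block = frames[f, top:top + n_patch, left:left + n_patch]
                coeffs = dctn(block, norm="ortho")           # 2-D DCT of the block
                low.append(coeffs[low_mask])                  # D_L
                mid.append(coeffs[mid_mask])                  # D_M
                high.append(coeffs[high_mask])                # D_H
                positions.append((f, top, left))
    return np.array(low), np.array(mid), np.array(high), positions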
S502: pixel estimation values for low frequency components are calculated.
In this embodiment, after the low-frequency component is obtained in step S501, the pixel value range may further be divided into K intervals according to the low-frequency component, where K is a positive integer greater than 0; the average value of all DCT low-frequency components in each interval is then calculated to obtain the pixel estimation value of that interval, for performing the subsequent steps.
Specifically, the pixel value range may be divided into K intervals according to the low-frequency component D_L, i.e., D_L is divided into K segments; with pixel values in the range [0, 1], the k_i-th interval corresponds to the pixel value sub-range [(k_i − 1)/K, k_i/K]. By calculating the average value of all DCT low-frequency components of each interval, the pixel estimation value of that interval is obtained (denoted here Ŷ_{k_i}), and its position information (the coordinates of the corresponding DCT blocks) is recorded for processing the intermediate- and high-frequency components.
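A minimal sketch of this per-interval pixel estimation follows, assuming pixel values normalized to [0, 1] and the per-block low-frequency output of the previous sketch; the names Y_hat and members are illustrative.

import numpy as np

def pixel_estimates(low, K=500):
    """low: [num_blocks, num_low_coeffs] DCT low-frequency coefficients; returns Y_hat per interval."""
    dc = low.mean(axis=1)                           # representative low-frequency value per block
    bins = np.clip((dc * K).astype(int), 0, K - 1)  # interval index k_i of each block
    Y_hat = np.full(K, np.nan)
    members = {k: [] for k in range(K)}
    for idx, k in enumerate(bins):
        members[k].append(idx)
    for k, idxs in members.items():
        if idxs:
            Y_hat[k] = dc[idxs].mean()              # average of all low-frequency components in interval k
    return Y_hat, members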
S503: and calculating the optimal sub-threshold corresponding to the intermediate frequency component.
In the present embodiment, after the intermediate-frequency component is obtained in step S501, for the k_i-th interval (containing a certain number of DCT blocks in total), the average value of the intermediate-frequency components of each DCT block is calculated and the DCT blocks are sorted in ascending order according to this average value. The value of the m-th block in the sorted order is taken as the division threshold, and the optimal division threshold p is finally obtained by iterating a preset algorithm with step Δp; the DCT blocks M_p that satisfy the resulting requirement are retained, and subsequent noise value estimation is performed on these DCT blocks. The preset algorithm is specifically as follows:
Adaptive-Threshold Frequency Component Division
Input: D = set of N×N DCT blocks
Output: chosen D_M blocks for noise estimation
1: for each pixel value bin:
2:     initialize p = 0.8 and Δp = 0.05
3:     compute (a statistic of the current candidate block set; formula given as an image in the original)
4:     compute (a second statistic of the candidate block set; formula given as an image in the original)
5:     while (the stopping conditions, given as formula images in the original, still hold):
6:         set p = p − Δp
7:         update (the candidate block set and its statistics)
8:     end while
9:     output the chosen blocks M_p for this bin
10: end for
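Because the statistics and stopping conditions in lines 3-5 survive only as formula images, the following Python sketch substitutes an assumed stopping rule (shrink the retained fraction p while the retained blocks' intermediate-frequency spread still looks texture-contaminated). It illustrates the shape of the procedure only and is not the patented criterion.

import numpy as np

def choose_blocks(mid, block_idx, p=0.8, dp=0.05, p_min=0.1, tol=1.5):
    """mid: [num_blocks, num_mid_coeffs] intermediate-frequency DCT coefficients;
    block_idx: indices of the blocks falling in one pixel-value bin."""
    if len(block_idx) == 0:
        return []
    energy = np.abs(mid[block_idx]).mean(axis=1)                # per-block mid-frequency average
    order = np.asarray(block_idx)[np.argsort(energy)]           # ascending, flattest blocks first
    baseline = np.abs(mid[order[: max(1, len(order) // 10)]]).std()  # flattest 10% as reference
    while p > p_min:
        chosen = order[: max(1, int(p * len(order)))]
        if np.abs(mid[chosen]).std() <= tol * baseline:         # assumed "texture-free" test
            return list(chosen)
        p -= dp                                                 # otherwise shrink the retained fraction
    return list(order[: max(1, int(p_min * len(order)))])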
s504: and correcting the power spectral density of the high-frequency component and correcting an ISP (internet service provider) access of the high-frequency component to obtain an initial noise estimation value.
It should be noted that the actual noise is not white noise, but has spatial correlation, and the frequency domain shows power spectral density variation, so that the estimation result is biased. Therefore, an optional implementation manner is that after the high-frequency component is obtained in step S501, the power spectral density of the high-frequency component may be corrected through collaborative filtering according to the power spectral density stability inconsistency of the color noise in the spatial domain and the time domain. Wherein, the collaborative filtering may adopt a wiener filter.
Specifically, for the k_i-th interval, the coordinates of the chosen DCT blocks M_p are taken and all of the high-frequency components of M_p are obtained; the ratio of their spatial-domain standard deviation to their temporal-domain standard deviation is calculated and used as the Wiener filter coefficient for collaborative filtering, thereby correcting the power spectral density of the DCT blocks. The specific formula for calculating the Wiener filter coefficient is given as a formula image in the original disclosure; in it, one quantity denotes the number of DCT blocks of the k_i-th interval used for noise estimation, and F denotes the number of image frames. The high-frequency component is then corrected using this Wiener coefficient, and finally the high-frequency standard deviation of all frames is calculated to obtain the power-spectrum-corrected noise standard deviation.
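A rough sketch of this correction follows, assuming (since the patented formula survives only as an image) that the Wiener-style coefficient is simply the stated ratio of spatial to temporal standard deviation and that it is applied multiplicatively to the high-frequency band; both assumptions are for illustration only.

import numpy as np

def psd_corrected_sigma(high, chosen, positions):
    """high: [num_blocks, num_high_coeffs]; chosen: block indices of one interval;
    positions: (frame, row, col) per block. Returns the power-spectrum-corrected noise std."""
    h = high[chosen]                                         # high-frequency coefficients of chosen blocks
    frames = np.array([positions[i][0] for i in chosen])
    sigma_spatial = h.std()                                  # spread over blocks and coefficients
    per_frame = np.array([h[frames == f].std() for f in np.unique(frames)])
    sigma_temporal = per_frame.mean() + 1e-12                # an assumed proxy for the temporal spread
    w = sigma_spatial / sigma_temporal                       # ratio used as the Wiener-style coefficient
    h_corrected = h / w                                      # assumed multiplicative application
    return h_corrected.std()                                 # std over all frames after correction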
Furthermore, considering the influence of the other processing modules in the ISP on the noise intensity and characteristics, after the power-spectrum-corrected noise standard deviation is obtained, it may be corrected using the ISP pipeline parameters, and the median of the corrected high-frequency component is calculated as the initial noise estimation value, for performing the following step S505.
Specifically, the lens shading correction (LSC) metric and auto white balance (AWB) gain can be extracted from the ISP pipeline and reshaped to the input dimensions; their product is formed, and its reciprocal is point-multiplied with the power-spectrum-corrected noise standard deviation to obtain the result corrected by the ISP pipeline. The median of the corrected high-frequency components of each interval k_i can then be calculated as the noise estimation value of that interval, i.e., the initial noise estimation value.
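An illustrative sketch of the ISP-pipeline correction, under the assumption that the LSC metric and AWB gain act multiplicatively on the signal so that dividing by their product (point-multiplying by the reciprocal) undoes their effect on the noise level; names and signatures are assumptions.

import numpy as np

def isp_corrected_estimate(sigma_blocks, lsc_metric, awb_gain, block_rc, color_of_block):
    """sigma_blocks: per-block power-spectrum-corrected noise std; lsc_metric: 2-D map resampled
    to the block grid; awb_gain: per-color-channel gain; block_rc: (row, col) grid position per block."""
    gains = np.array([lsc_metric[r, c] * awb_gain[color_of_block[i]]
                      for i, (r, c) in enumerate(block_rc)])
    corrected = sigma_blocks / gains          # point-multiplication by the reciprocal of LSC x AWB
    return float(np.median(corrected))        # per-interval median -> initial noise estimation value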
S505: and determining a final noise estimation result of the N frames of images to be estimated according to the pixel estimation value and the initial noise estimation value of the low-frequency component of the N frames of images.
In the present embodiment, after the pixel estimation values of the low-frequency components and the initial noise estimation values of the N frames of images are obtained through the above steps, the per-frame values belonging to the same interval are respectively averaged to obtain the final noise estimation result, and the relation curve between the two is drawn (shown in the rightmost graph of fig. 2) as the final output of the estimation model. However, since the estimation result consists of discrete values, the pixel-noise pairs not produced by the model are further obtained by linear interpolation, so that the final noise estimation result of the N frames of images to be estimated can be determined.
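A short sketch of this averaging and interpolation step; the array layout ([frames, intervals] with NaN marking empty intervals) is an assumption made for clarity.

import numpy as np

def final_noise_curve(Y_hat_frames, sigma_hat_frames, query_pixels):
    """Y_hat_frames, sigma_hat_frames: [F, K] per-frame, per-interval estimates (NaN where empty)."""
    Y = np.nanmean(Y_hat_frames, axis=0)        # average pixel estimate per interval over the frames
    S = np.nanmean(sigma_hat_frames, axis=0)    # average initial noise estimate per interval
    valid = ~np.isnan(Y) & ~np.isnan(S)
    order = np.argsort(Y[valid])
    # linear interpolation between the discrete (pixel value, noise std) points
    return np.interp(query_pixels, Y[valid][order], S[valid][order])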
Further, in an optional implementation, the relationship between the nonparametric noise estimation result and the shooting parameters may be modeled to achieve full-scene noise fitting, so as to support noise estimation for real-time 4K video on a mobile phone. Specifically, after the final noise estimation result of the N frames of images is obtained, model fitting may be performed using the ISO sensitivity value of the camera, the pixel estimation value and the initial noise estimation value to obtain the relational expression of the fitting model. The overall expression is given as a formula image in the original disclosure; its coefficients depend on ISO as:
λ_a = a1 × ISO² + a2 × ISO + a3
λ_b = b1 × ISO² + b2 × ISO + b3
The parameters of the fitting model can then be calculated by the least squares method to obtain the full-scene noise intensity surface corresponding to the pixel estimation value and the initial noise estimation value (shown in the rightmost graph of fig. 3), defined here as σ(Y, ISO | β), as the final fitting result, where β denotes the model parameters, specifically β = [a_i, b_i].
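Since the overall relational expression survives only as a formula image, the following sketch assumes, purely for illustration, a Poisson-Gaussian-like form σ²(Y, ISO) = λ_a(ISO)·Y + λ_b(ISO) with the quadratic ISO dependence written above, and fits β = [a1, a2, a3, b1, b2, b3] by least squares.

import numpy as np
from scipy.optimize import least_squares

def fit_full_scene(Y, sigma, iso):
    """Y, sigma, iso: 1-D arrays of pixel estimates, initial noise estimates and ISO values per sample."""
    def residual(beta):
        a1, a2, a3, b1, b2, b3 = beta
        lam_a = a1 * iso**2 + a2 * iso + a3           # lambda_a(ISO)
        lam_b = b1 * iso**2 + b2 * iso + b3           # lambda_b(ISO)
        variance = np.clip(lam_a * Y + lam_b, 1e-12, None)
        return np.sqrt(variance) - sigma              # compare predicted noise std with the estimate
    return least_squares(residual, x0=np.full(6, 1e-6)).x  # beta = [a1, a2, a3, b1, b2, b3]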
Therefore, in the design of the overall noise estimation model, a nonparametric estimation model is adopted, which avoids the failure of the Poisson-Gaussian model in dark and overexposed regions, directly estimates the relationship between pixel value and noise intensity, and significantly reduces the estimation error. Moreover, the introduction of multi-frame images increases the number of estimation sampling points and improves estimation accuracy, and the power spectrum of the image blocks is corrected using the difference in spatial and temporal correlation of the color noise, which greatly improves the estimation of color noise. In addition, the adaptive-threshold frequency division module effectively eliminates interference from structures, textures and the like in the image by using its intermediate-frequency information, reducing the estimation error. In the ISP correction, correction matrices such as LSC and AWB are introduced to correct for the influence of each stage in the mobile phone pipeline on the noise characteristics, which is simple and effective. Finally, modeling with the estimation result and the camera shooting parameters, and fitting the model parameters, supports accurate noise estimation under all shooting parameters and scenes. Compared with other related noise estimation methods, the solution of the present application achieves better technical effects, specifically as follows:
(1) compared with other related noise estimation methods, the method has better performance and more accurate estimation result.
As shown in fig. 6, for Raw-domain data acquired directly from the sensor (the Sensor column in fig. 6) and for data after long-medium-short exposure fusion (the HDR column in fig. 6), the estimation method provided by the present application surpasses related noise estimation methods such as White Noise, PG Noise, Noise Flow and DCT in terms of both the quantile-quantile average error (Q-Q error) and the relative entropy (KL divergence, the KL-entropy entries in fig. 6). That is, the smaller the values in the Sensor and HDR columns of fig. 6, the smaller the estimation error and the higher the quality of the estimation result. Meanwhile, the estimation method provided by the present application obtains excellent estimation results for noise with different characteristics and distributions at different ISP positions and under different ISO scenes, which is not described in detail again here.
(2) Compared with the real calibration noise, the difference is very small.
As shown in fig. 7, when the noise estimation results are compared at all Raw-domain positions, including the Sensor Raw of different exposures and the HDR, ATR, BLC, LSC, AWB and GCD output Raw, the noise estimation results of the present application are represented by the discrete points in the graph and the dashed lines represent the statistical calibration results. It can be seen that the difference between the estimated values and the true values is very small, indicating that the present application can effectively reflect the true noise level.
(3) The denoising network visual effect is better.
For a large model network, a noise variance map is calculated from the specific image pixel values and the noise model, and is fed into the denoising network together with the input image; the comparison of blind-denoising performance and computation is shown in fig. 8. For 4K images of size 2160 × 4096, the actual runtime is only 0.8 ms under GPU V100 hardware, so the real-time performance of the algorithm is hardly affected. Meanwhile, texture and structure information in the result is clearly improved, and color noise and artifacts are effectively suppressed.
For a lightweight network, in order to meet the computing-power constraints of weak terminal devices, the ultimate goal is a small network with a good denoising effect. The effectiveness of the noise estimation method of the present application can be verified on a 15 GMAC video JDD (joint denoising and demosaicing) network. The noise model is introduced into the JDD network in the same manner as for the large model; the comparison of performance and computation is shown in fig. 9, from which it can be seen that the estimation method provided by the present application improves PSNR by about 0.55 dB compared with other related estimation methods while adding only 0.68 GMAC of extra computation. In addition, as shown in fig. 10 (in the two panels of fig. 10, the left of each panel shows blind denoising, the middle shows the approach of introducing ISO information, and the right shows the method of the present application), under 5 lux illumination the estimation method provided by the present application enables the JDD network to recover more detailed textures, less color noise, more stable high-frequency details and stronger weak-contrast textures than blind denoising or the ISO-information approach.
(4) The noise data synthesis effect is good.
Based on the solution of the present application, noise data of a Mate20 Pro mobile phone can be synthesized by adding random noise to clean images. As shown in fig. 11, the left image in fig. 11 is real noise and the right image is the noise estimated by the present application; the difference between the two is almost zero and their consistency is strong, showing that the noise data synthesis effect of the present application is good. Therefore, the solution of the present application can be used to effectively expand the training data set and synthesize noise data, greatly reducing the cost of data acquisition, effectively making up for the shortage of real training data, and improving the accuracy of subsequent image processing operations.
In summary, in the noise estimation method provided in this embodiment, when performing noise estimation, N frames of images to be estimated are first divided into a plurality of image blocks of fixed size, and a discrete cosine transform (DCT) is performed on each image block to obtain a low-frequency component, an intermediate-frequency component and a high-frequency component; a pixel estimation value of the low-frequency component and an optimal division threshold for the intermediate-frequency component are then calculated, and the power spectral density and ISP pipeline of the high-frequency component are corrected to obtain an initial noise estimation value; finally, a final noise estimation result of the N frames of images to be estimated is determined according to the pixel estimation value of the low-frequency component of the N frames of images and the initial noise estimation value. In this way, when performing noise estimation on videos and images, the embodiment of the present application adaptively separates signal and noise through the DCT coefficients, uses multi-frame information to increase the number of sample blocks for estimation, and corrects for color noise and the ISP pipeline, so that the noise estimation error can be reduced, especially in dark and overexposed regions, the calculation is simple and efficient, and the accuracy of the noise estimation result for real mobile phone images and videos is improved.
For better understanding of the above noise estimation method provided in the embodiment of the present application, the noise estimation method is described by taking RAW domain image data with a resolution size [1080,1920] and a total frame number F equal to 7 as an example.
Specifically, a shuffle operation may first be performed on the image data to obtain image blocks of size [540, 960, 28], with n_patch = n_slide = 8. Coefficients with i_dct + j_dct ≤ 1 are then taken as the low-frequency component D_L, coefficients with 1 < i_dct + j_dct ≤ 8 as the intermediate-frequency component D_M, and coefficients with 8 < i_dct + j_dct ≤ 16 as the high-frequency component D_H. D_L is divided into 500 intervals; meanwhile, the division step Δp is set to 0.005, and the optimal division threshold of each interval is obtained through the preset algorithm. Then, for the k_i-th interval, the coordinates of the chosen blocks M_p are obtained, all of the high-frequency components of M_p are taken, and the ratio of their spatial-domain standard deviation to their temporal-domain standard deviation is calculated to correct the power spectral density of the DCT blocks; the high-frequency component is corrected accordingly, and the high-frequency standard deviation of all frames is calculated to obtain the power-spectrum-corrected noise standard deviation. Further, the LSC metric and AWB gain are extracted and reshaped to [540/n_patch, 960/n_patch]; the LSC metric is point-multiplied with each corresponding spatial position and the AWB gain with the corresponding color channel, and after this correction the median of the high-frequency components of each interval k_i is calculated as the noise estimation value. Finally, the per-interval pixel estimation values and noise estimation values of the 7 frames are respectively averaged to obtain the final estimation result. On this basis, model fitting can be performed using the ISO sensitivity value of the camera, the pixel estimation value and the initial noise estimation value to obtain the relational expression of the fitting model; the parameters in the fitting model are solved from actual data by the least squares method, and σ(Y, ISO | β) is finally obtained (as shown in the rightmost graph of fig. 3), where β denotes the model parameters, specifically β = [a_i, b_i].
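As a usage illustration tying the earlier sketches to this example's concrete parameters (n_patch = n_slide = 8, K = 500, Δp = 0.005, F = 7 RAW frames of 1080 × 1920), the snippet below reuses the hypothetical split_dct_components and pixel_estimates functions sketched above; the 2 × 2 shuffle and the random placeholder input are assumptions.

import numpy as np

frames = np.random.rand(7, 1080, 1920).astype(np.float32)   # placeholder for 7 normalized RAW frames
# 2x2 shuffle of the Bayer pattern: 7 frames x 4 color planes -> 28 planes of size 540 x 960
packed = frames.reshape(7, 540, 2, 960, 2).transpose(0, 2, 4, 1, 3).reshape(28, 540, 960)

low, mid, high, positions = split_dct_components(packed, n_patch=8, n_slide=8)
Y_hat, members = pixel_estimates(low, K=500)
# The per-interval adaptive-threshold selection (with delta_p = 0.005 here), the power-spectrum and
# ISP corrections, averaging over the 7 frames, and the least-squares full-scene fit would follow,
# as sketched in the preceding code blocks; in practice each color plane is processed separately.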
To facilitate better implementation of the above solutions of the embodiments of the present application, related apparatuses for implementing the above solutions are also provided below. Referring to fig. 12, an embodiment of the present application provides a noise estimation apparatus 1200. The apparatus 1200 may include: a dividing unit 1201, a first calculation unit 1202, a second calculation unit 1203, a correction unit 1204 and a determining unit 1205. The dividing unit 1201 is configured to enable the apparatus 1200 to execute S501 in the embodiment shown in fig. 5; the first calculation unit 1202 to execute S502; the second calculation unit 1203 to execute S503; the correction unit 1204 to execute S504; and the determining unit 1205 to execute S505. Specifically:
a dividing unit 1201, configured to divide N frames of images to be estimated into a plurality of image blocks of fixed size and perform a discrete cosine transform on each image block to obtain a low-frequency component, an intermediate-frequency component and a high-frequency component, where N is a positive integer greater than 1;
a first calculation unit 1202, configured to calculate a pixel estimation value of the low-frequency component;
a second calculation unit 1203, configured to calculate an optimal division threshold corresponding to the intermediate-frequency component;
a correction unit 1204, configured to correct the power spectral density of the high-frequency component and correct the image signal processor (ISP) pipeline for the high-frequency component to obtain an initial noise estimation value;
a determining unit 1205 is configured to determine a final noise estimation result of the N frame image to be estimated according to the pixel estimation value and the initial noise estimation value of the low frequency component of the N frame image.
In an implementation manner of this embodiment, the apparatus further includes:
a fitting unit, configured to perform model fitting using the ISO sensitivity value of the camera, the pixel estimation value and the initial noise estimation value to obtain the relational expression of a fitting model;
an obtaining unit, configured to calculate the parameters of the fitting model by the least squares method to obtain, as the final fitting result, the full-scene noise intensity surface corresponding to the pixel estimation value and the initial noise estimation value.
In one implementation of this embodiment, the first calculation unit 1202 includes:
a dividing subunit, configured to divide the pixel value range into K intervals according to the low-frequency component, where K is a positive integer greater than 0;
an obtaining subunit, configured to calculate the average value of all DCT low-frequency components in each interval to obtain the pixel estimation value of that interval.
In one implementation of this embodiment, the correction unit 1204 is specifically configured to:
correct the power spectral density of the high-frequency component through collaborative filtering.
In one implementation of this embodiment, the correction unit 1204 is further specifically configured to:
correct the high-frequency component using the ISP pipeline parameters, and calculate the median of the corrected high-frequency component as the initial noise estimation value.
In summary, in the noise estimation apparatus provided in this embodiment, when performing noise estimation, N frames of images to be estimated are first divided into a plurality of image blocks of fixed size, and a discrete cosine transform (DCT) is performed on each image block to obtain a low-frequency component, an intermediate-frequency component and a high-frequency component; a pixel estimation value of the low-frequency component and an optimal division threshold for the intermediate-frequency component are then calculated, and the power spectral density and ISP pipeline of the high-frequency component are corrected to obtain an initial noise estimation value; finally, a final noise estimation result of the N frames of images to be estimated is determined according to the pixel estimation value of the low-frequency component of the N frames of images and the initial noise estimation value. In this way, when performing noise estimation on videos and images, the embodiment of the present application adaptively separates signal and noise through the DCT coefficients, uses multi-frame information to increase the number of sample blocks for estimation, and corrects for color noise and the ISP pipeline, so that the noise estimation error can be reduced, especially in dark and overexposed regions, the calculation is simple and efficient, and the accuracy of the noise estimation result for real mobile phone images and videos is improved.
Referring to fig. 13, an embodiment of the present application provides a noise estimation apparatus 1300, which includes a memory 1301, a processor 1302, and a communication interface 1303,
a memory 1301 for storing instructions;
a processor 1302 for executing instructions in the memory 1301 for performing the noise estimation method described above as applied to the embodiment shown in fig. 5;
and a communication interface 1303 for performing communication.
The memory 1301, the processor 1302, and the communication interface 1303 are connected to each other by a bus 1304; the bus 1304 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 13, but this is not intended to represent only one bus or type of bus.
In a specific embodiment, the processor 1302 is configured to, when performing noise estimation, first divide N frames of images to be estimated into a plurality of image blocks of fixed size and perform a discrete cosine transform (DCT) on each image block to obtain a low-frequency component, an intermediate-frequency component and a high-frequency component, and then determine a final noise estimation result of the N frames of images to be estimated according to the pixel estimation value of the low-frequency component of the N frames of images and the initial noise estimation value. For the detailed processing procedure of the processor 1302, please refer to the detailed description of S501, S502, S503, S504 and S505 in the embodiment shown in fig. 5, which is not repeated here.
The memory 1301 may be a random-access memory (RAM), a flash memory (flash), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a register (register), a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium known to those skilled in the art.
The processor 1302 may be, for example, a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various illustrative logical blocks, modules and circuits described in connection with the disclosure of the embodiments of the present application. The processor may also be a combination implementing computing functions, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
The communication interface 1303 may be an interface card, and may be an ethernet (ethernet) interface or an Asynchronous Transfer Mode (ATM) interface.
Embodiments of the present application also provide a computer-readable storage medium, which includes instructions that, when executed on a computer, cause the computer to perform the above-mentioned noise estimation method.
The terms "first," "second," and the like in the description and in the claims of the present application and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances and are merely descriptive of the various embodiments of the application and how objects of the same nature can be distinguished. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of elements is not necessarily limited to those elements, but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of this application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to perform all or part of the steps of the methods described in the embodiments of this application. The foregoing storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disc.
The foregoing embodiments are merely intended to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some technical features thereof, and such modifications or replacements do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of this application.

Claims (12)

1. A method of noise estimation, the method comprising:
dividing N frames of images to be estimated into a plurality of image blocks of a fixed size, and performing discrete cosine transform on each image block to obtain a low-frequency component, an intermediate-frequency component and a high-frequency component; N is a positive integer greater than 1;
calculating a pixel estimation value of the low-frequency component;
calculating an optimal sub-threshold corresponding to the intermediate-frequency component;
correcting the power spectral density of the high-frequency component, and performing image signal processor (ISP) path correction on the high-frequency component to obtain an initial noise estimation value;
and determining a final noise estimation result of the N frames of images to be estimated according to the pixel estimation value of the low-frequency component of the N frames of images and the initial noise estimation value.
2. The method according to claim 1, wherein after the determining a final noise estimation result of the N frames of images to be estimated according to the pixel estimation value of the low-frequency component of the N frames of images and the initial noise estimation value, the method further comprises:
performing model fitting by using the light sensitivity ISO value of the camera, the pixel estimation value and the initial noise estimation value to obtain a relational expression of a fitting model;
and calculating parameters of the fitting model by using a least square method to obtain a full-scene noise intensity surface corresponding to the pixel estimation value and the initial noise estimation value as a final fitting result.
3. The method of claim 1, wherein the calculating the pixel estimate for the low frequency component comprises:
dividing a pixel value range into K intervals according to the low-frequency component; k is a positive integer greater than 0;
and calculating the average value of all DCT low-frequency components in each interval to obtain the pixel estimation value of the interval.
4. The method of claim 1, wherein the correcting the power spectral density of the high-frequency component comprises:
and correcting the power spectral density of the high-frequency component through collaborative filtering.
5. The method of claim 1, wherein the performing image signal processor (ISP) path correction on the high-frequency component to obtain an initial noise estimation value comprises:
and correcting the high-frequency component by using ISP path parameters, and calculating a median of the corrected high-frequency component as the initial noise estimation value.
6. A noise estimation apparatus, characterized in that the apparatus comprises:
a dividing unit, configured to divide N frames of images to be estimated into a plurality of image blocks of a fixed size, and perform discrete cosine transform on each image block to obtain a low-frequency component, an intermediate-frequency component and a high-frequency component; N is a positive integer greater than 1;
a first calculating unit, configured to calculate a pixel estimation value of the low-frequency component;
a second calculating unit, configured to calculate an optimal sub-threshold corresponding to the intermediate-frequency component;
a correcting unit, configured to correct the power spectral density of the high-frequency component and perform image signal processor (ISP) path correction on the high-frequency component to obtain an initial noise estimation value;
and a determining unit, configured to determine a final noise estimation result of the N frames of images to be estimated according to the pixel estimation value of the low-frequency component of the N frames of images and the initial noise estimation value.
7. The apparatus of claim 6, further comprising:
a fitting unit, configured to perform model fitting by using the light sensitivity ISO value of the camera, the pixel estimation value and the initial noise estimation value to obtain a relational expression of a fitting model;
and an obtaining unit, configured to calculate parameters of the fitting model by using a least square method to obtain a full-scene noise intensity surface corresponding to the pixel estimation value and the initial noise estimation value as a final fitting result.
8. The apparatus of claim 6, wherein the first calculating unit comprises:
a dividing subunit, configured to divide a pixel value range into K intervals according to the low-frequency component; K is a positive integer greater than 0;
and an obtaining subunit, configured to calculate the average value of all DCT low-frequency components in each interval to obtain the pixel estimation value of the interval.
9. The apparatus according to claim 6, wherein the correcting unit is specifically configured to:
correct the power spectral density of the high-frequency component through collaborative filtering.
10. The apparatus according to claim 6, wherein the correcting unit is further configured to:
correct the high-frequency component by using the ISP path parameters, and calculate the median of the corrected high-frequency component as the initial noise estimation value.
11. A noise estimation device, characterized in that the device comprises a memory and a processor;
the memory is configured to store instructions;
the processor is configured to execute the instructions in the memory to perform the method of any one of claims 1-5.
12. A computer-readable storage medium comprising instructions that, when executed on a computer, cause the computer to perform the method of any of claims 1-5 above.
CN202011375395.5A 2020-11-30 2020-11-30 Noise estimation method, device, storage medium and equipment Active CN112529854B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011375395.5A CN112529854B (en) 2020-11-30 2020-11-30 Noise estimation method, device, storage medium and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011375395.5A CN112529854B (en) 2020-11-30 2020-11-30 Noise estimation method, device, storage medium and equipment

Publications (2)

Publication Number Publication Date
CN112529854A true CN112529854A (en) 2021-03-19
CN112529854B CN112529854B (en) 2024-04-09

Family

ID=74995278

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011375395.5A Active CN112529854B (en) 2020-11-30 2020-11-30 Noise estimation method, device, storage medium and equipment

Country Status (1)

Country Link
CN (1) CN112529854B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111340713A (en) * 2018-12-18 2020-06-26 展讯通信(上海)有限公司 Noise estimation and denoising method and device for image data, storage medium and terminal
CN110531420A (en) * 2019-08-09 2019-12-03 西安交通大学 The lossless separation method of industry disturbance noise in a kind of seismic data
CN110992288A (en) * 2019-12-06 2020-04-10 武汉科技大学 Video image blind denoising method used in mine shaft environment
CN111815535A (en) * 2020-07-14 2020-10-23 北京字节跳动网络技术有限公司 Image processing method, image processing device, electronic equipment and computer readable medium

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113591563A (en) * 2021-06-24 2021-11-02 金陵科技学院 Image fixed value impulse noise denoising method and model training method thereof
CN113591563B (en) * 2021-06-24 2023-06-06 金陵科技学院 Image fixed value impulse noise denoising method and model training method thereof
CN113628203A (en) * 2021-08-23 2021-11-09 苏州中科先进技术研究院有限公司 Image quality detection method and detection system
CN113628203B (en) * 2021-08-23 2024-05-17 苏州中科先进技术研究院有限公司 Image quality detection method and detection system
CN113643210A (en) * 2021-08-26 2021-11-12 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN114697468A (en) * 2022-02-16 2022-07-01 瑞芯微电子股份有限公司 Image signal processing method and device and electronic equipment
CN114697468B (en) * 2022-02-16 2024-04-16 瑞芯微电子股份有限公司 Image signal processing method and device and electronic equipment
CN115359085A (en) * 2022-08-10 2022-11-18 哈尔滨工业大学 Dense clutter suppression method based on detection point space-time density discrimination

Also Published As

Publication number Publication date
CN112529854B (en) 2024-04-09

Similar Documents

Publication Publication Date Title
CN112529854B (en) Noise estimation method, device, storage medium and equipment
CN108694705B (en) Multi-frame image registration and fusion denoising method
Claus et al. Videnn: Deep blind video denoising
CN110324664B (en) Video frame supplementing method based on neural network and training method of model thereof
CN115442515A (en) Image processing method and apparatus
CN107133923B (en) Fuzzy image non-blind deblurring method based on adaptive gradient sparse model
CN111192226B (en) Image fusion denoising method, device and system
CN109978774B (en) Denoising fusion method and device for multi-frame continuous equal exposure images
CN110136055B (en) Super resolution method and device for image, storage medium and electronic device
CN112541877B (en) Defuzzification method, system, equipment and medium for generating countermeasure network based on condition
US11303793B2 (en) System and method for high-resolution, high-speed, and noise-robust imaging
WO2014070273A1 (en) Recursive conditional means image denoising
CN111242860A (en) Super night scene image generation method and device, electronic equipment and storage medium
Lei et al. An investigation of retinex algorithms for image enhancement
CN107945119B (en) Method for estimating correlated noise in image based on Bayer pattern
CN113011433B (en) Filtering parameter adjusting method and device
EP3913572A1 (en) Loss function for image reconstruction
CN115311149A (en) Image denoising method, model, computer-readable storage medium and terminal device
CN113438386A (en) Dynamic and static judgment method and device applied to video processing
CN116109489A (en) Denoising method and related equipment
CN115063301A (en) Video denoising method, video processing method and device
EP4302258A1 (en) Noise reconstruction for image denoising
CN113610964B (en) Three-dimensional reconstruction method based on binocular vision
CN114926348B (en) Device and method for removing low-illumination video noise
Yang et al. The spatial correlation problem of noise in imaging deblurring and its solution

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant