CN111652814A - Video image denoising method and device, electronic equipment and storage medium - Google Patents
Video image denoising method and device, electronic equipment and storage medium
- Publication number
- CN111652814A (application CN202010454733.8A)
- Authority
- CN
- China
- Prior art keywords
- processed
- block
- image
- motion
- reference frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration; G06T5/70—Denoising; Smoothing
- G06T7/00—Image analysis; G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/20—Analysis of motion; G06T7/207—Analysis of motion for motion estimation over a hierarchy of resolutions
- G06T2207/00—Indexing scheme for image analysis or image enhancement; G06T2207/10—Image acquisition modality; G06T2207/10016—Video; Image sequence
Abstract
The application discloses a video image denoising method and device, an electronic device and a storage medium, which are used for improving the denoising accuracy and the denoising effect of a video image. The method comprises the following steps: obtaining a reference frame image and an image to be processed; determining the motion grade of a block to be processed in the image to be processed according to the reference frame image, wherein the motion grade is used for representing the degree of motion displacement of the block to be processed; sequentially performing time domain filtering processing and space domain filtering processing on the block to be processed according to a preset time domain filtering coefficient and a preset space domain filtering coefficient corresponding to the motion grade to obtain a first filtering processing result; and, when the first filtering processing result does not meet a preset condition, calculating one by one the second filtering processing results corresponding to the block to be processed under each motion grade according to the time domain filtering coefficient and the space domain filtering coefficient corresponding to each motion grade in the motion grade correspondence, and outputting the second filtering processing result with the highest information entropy as the final filtering processing result.
Description
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method and an apparatus for denoising a video image, an electronic device, and a storage medium.
Background
With the widespread use of cameras, video has become important data in everyday life, so the clarity of video is a growing concern. Denoising is a key step in video processing, and the quality of the denoising result directly affects the clarity of the picture.
The most common video denoising method at present is 3D denoising. Although existing adaptive 3D denoising can adaptively change the filtering weight in the time domain or the space domain based on certain conditions, and to some extent alleviates the image blurring and motion trailing of the original algorithm, how to further improve the denoising accuracy of video images remains a significant problem as users demand ever higher video clarity.
Disclosure of Invention
The embodiment of the application provides a video image denoising method and device, an electronic device and a storage medium, which are used for improving the denoising accuracy of a video image and improving the denoising effect of the video image. In a first aspect, a video image denoising method is provided, comprising:
Obtaining a reference frame image and an image to be processed;
determining the motion grade of a block to be processed in the image to be processed according to the reference frame image, wherein the motion grade is used for representing the degree of motion displacement of the block to be processed;
sequentially performing time domain filtering processing and space domain filtering processing on the block to be processed according to a preset time domain filtering coefficient and a preset space domain filtering coefficient corresponding to the motion grade to obtain a first filtering processing result;
and when the first filtering processing result does not meet the preset condition, calculating second filtering processing results corresponding to the blocks to be processed under each motion grade one by one according to a time domain filtering coefficient and a space domain filtering coefficient corresponding to each motion grade in the motion grade corresponding relation, and outputting the second filtering processing results with the highest information entropy as final filtering processing results, wherein the information entropy is used for representing the information content in the blocks to be processed.
In a possible design, the reference frame image and the image to be processed are two consecutive frames of images in the same video, and the reference frame image is an image that is previous to the image to be processed.
In one possible design, the determining, from the reference frame image, a motion level of a block to be processed in the image to be processed includes:
according to the reference frame image, carrying out motion estimation on a block to be processed in the image to be processed, wherein the motion estimation is used for determining the motion displacement of the block to be processed relative to a corresponding block in the reference frame image;
and determining the motion grade of the block to be processed according to the motion estimation.
In one possible design, the performing motion estimation on a block to be processed in the image to be processed according to the reference frame image includes:
dividing the reference frame image and the image to be processed into a plurality of blocks according to the same rule; the block to be processed is any one or more blocks in a plurality of blocks of the image to be processed;
determining a block which is most matched with the relative position of the block to be processed in the image to be processed from a plurality of blocks of the reference frame image according to a preset matching principle;
and estimating the motion displacement of the block to be processed according to the most matched block.
In one possible design, the determining the motion level of the block to be processed according to the motion estimation includes:
determining a motion score of the block to be processed according to the motion estimation;
determining the motion grade according to the difference value between the motion score and a reference error value;
wherein the reference error value is the variance value of the block with the smallest variance among the plurality of blocks included in the image to be processed, or the reference error value is the mean of the smallest block variance values in each of at least two regions arbitrarily selected from the image to be processed, each of the at least two regions including at least one block; the variance represents the degree of dispersion of the data points, and the smaller the variance, the smaller the information content of the corresponding block.
In one possible design, whether the first filtering processing result satisfies a preset condition is determined according to the following formula:
(H_ref - H_cur_after) < (H_ref - H_cur_before) * coefficient;
wherein H_ref is the information entropy of the block in the reference frame image that matches the block to be processed, H_cur_after is the information entropy of the block to be processed after processing, H_cur_before is the information entropy of the block to be processed before processing, and coefficient is a preset parameter for controlling the error tolerance of the difference.
In a second aspect, there is provided a denoising apparatus for a video image, including:
the acquisition module is used for acquiring a reference frame image and an image to be processed;
the determining module is used for determining the motion grade of a block to be processed in the image to be processed according to the reference frame image, wherein the motion grade is used for representing the degree of motion displacement of the block to be processed;
the first processing module is used for sequentially carrying out time domain filtering processing and space domain filtering processing on the block to be processed according to a preset time domain filtering coefficient and a preset space domain filtering coefficient corresponding to the motion grade to obtain a first filtering processing result;
and the second processing module is used for calculating second filtering processing results of the blocks to be processed under each motion grade one by one according to the time domain filtering coefficient and the space domain filtering coefficient corresponding to each motion grade in the motion grade corresponding relation when the first filtering processing result does not meet the preset condition, and outputting the second filtering processing result with the highest information entropy as a final filtering processing result, wherein the information entropy is used for representing the information content in the blocks to be processed.
In a possible design, the reference frame image and the image to be processed are two consecutive frames of images in the same video, and the reference frame image is an image that is previous to the image to be processed.
In one possible design, the determining module is specifically configured to:
according to the reference frame image, carrying out motion estimation on a block to be processed in the image to be processed, wherein the motion estimation is used for determining the motion displacement of the block to be processed relative to a corresponding block in the reference frame image;
and determining the motion grade of the block to be processed according to the motion estimation.
In one possible design, the first processing module is specifically configured to:
dividing the reference frame image and the image to be processed into a plurality of blocks according to the same rule; the block to be processed is any one or more blocks in a plurality of blocks of the image to be processed;
determining a block which is most matched with the relative position of the block to be processed in the image to be processed from a plurality of blocks of the reference frame image according to a preset matching principle;
and estimating the motion displacement of the block to be processed according to the most matched block.
In one possible design, the first processing module is further configured to:
determining a motion score of the block to be processed according to the motion estimation;
determining the motion grade according to the difference value between the motion score and a reference error value;
wherein the reference error value is the variance value of the block with the smallest variance among the plurality of blocks included in the image to be processed, or the reference error value is the mean of the smallest block variance values in each of at least two regions arbitrarily selected from the image to be processed, each of the at least two regions including at least one block; the variance represents the degree of dispersion of the data points, and the smaller the variance, the smaller the information content of the corresponding block.
In one possible design, the determining module is configured to determine whether the first filtering result satisfies a preset condition according to the following formula:
(H_ref - H_cur_after) < (H_ref - H_cur_before) * coefficient;
wherein H_ref is the information entropy of the block in the reference frame image that matches the block to be processed, H_cur_after is the information entropy of the block to be processed after processing, H_cur_before is the information entropy of the block to be processed before processing, and coefficient is a preset parameter for controlling the error tolerance of the difference.
In a third aspect, an electronic device is provided, which includes a memory, a processor and a computer program stored on the memory and executable on the processor, and the processor executes the computer program to implement the steps included in the method for denoising a video image according to the first aspect.
In a fourth aspect, a computer storage medium is provided, which stores computer-executable instructions for causing a computer to perform the steps included in the method for denoising a video image according to the first aspect.
The embodiment of the application has at least one or more of the following technical effects:
The motion grade of a block to be processed in an image to be processed can be determined according to a reference frame image, and the block to be processed can then be subjected to time domain filtering and space domain filtering according to the time domain filtering coefficient and the space domain filtering coefficient corresponding to that motion grade. Whether the resulting first filtering processing result meets a preset condition can then be judged, that is, it is fed back whether the current filtering effect is within an acceptable range. If the first filtering processing result does not meet the preset condition, the second filtering processing results corresponding to the block to be processed under each motion grade can be calculated one by one according to the time domain filtering coefficient and the space domain filtering coefficient corresponding to each motion grade in the motion grade correspondence, and the second filtering processing result with the highest information entropy is output as the final filtering processing result. This ensures that the output is the optimal or an expected filtering result, thereby improving the accuracy of filtering and denoising of the image to be processed and improving the denoising effect of the video image.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments will be briefly introduced, and it is obvious that the drawings in the following description are only some embodiments of the present invention.
Fig. 1 is a flowchart of a video image denoising method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram illustrating a reference error value determination method according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a video image denoising apparatus according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments, but not all embodiments, of the technical solutions of the present invention. All other embodiments obtained by a person skilled in the art without any inventive work based on the embodiments described in the present application are within the scope of the protection of the technical solution of the present invention.
It should be noted that the terms "first", "second", and the like in the description and in the claims, and in the drawings described above in the embodiments disclosed in the present application, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments disclosed herein are capable of operation in sequences other than those illustrated or otherwise described herein. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
In addition, the term "and/or" herein is only one kind of association relationship describing an associated object, and means that there may be three kinds of relationships, for example, a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" in this document generally indicates that the preceding and following related objects are in an "or" relationship unless otherwise specified.
Some concepts related to the present application are described below.
SAD (Sum of Absolute Differences) is a common, elementary block-matching criterion, also used in stereo matching of images; its basic idea is to compute the sum of the absolute differences between corresponding pixel values in two pixel blocks.
Image noise (hereinafter simply referred to as noise) refers to unwanted or redundant interference information present in image data.
The background and design concepts of the present application are presented below.
In the image field, three important indicators for the subjective evaluation of video quality are noise, sharpness, and smear. Generally, if the denoising effect of 3D denoising is strong, a large smear is produced, and if the smear is kept small, excessive noise remains; this trade-off is inherent to temporal filtering. To better address this problem, the prior art performs temporal filtering according to the motion level of the current frame and then performs spatial filtering after updating the image noise level, so as to adaptively adjust the spatial denoising strength. This approach adaptively changes the filtering weight in the time domain or the space domain based on certain conditions and can, to some extent, alleviate the image blurring and motion trailing of the original algorithm. However, because the calculation is one-shot and has no feedback, the denoising device usually outputs the result directly after filtering without knowing how well the denoising actually worked, so the final denoising accuracy is relatively low.
In view of this, the applicant of the present application provides a video image denoising scheme in which the result of each filtering and denoising pass is fed back: if the result does not reach the required standard, an optimal filtering coefficient is searched for and 3D filtering is performed again, and if the result does reach the standard, the filtered and denoised image is output directly.
In this scheme, after the image to be processed and the reference frame image are obtained, the motion level of a block to be processed in the image to be processed can be determined according to the reference frame image, that is, the degree of motion displacement of the block to be processed relative to its corresponding block in the reference frame image is determined. 3D filtering (i.e., temporal filtering plus spatial filtering) can then be performed block by block on the image to be processed, and the filtering result is evaluated to feed back whether the current filtering effect is within an acceptable range. If it is not, a specific temporal-spatial filtering model is searched (or the user may design one according to their own experience) to obtain an optimal filtering coefficient, ensuring that the final result is the optimal or an expected filtering result, thereby improving the accuracy of filtering and denoising of the image to be processed and improving the denoising effect of the video image.
Having introduced the background and design ideas of the embodiments of the present application, the video image denoising scheme provided by the embodiments is described in detail below with reference to the accompanying drawings. Although the embodiments of the present application present the method operation steps as shown in the following embodiments or figures, more or fewer operation steps may be included in the method on the basis of routine or non-inventive work. For steps that have no necessary logical causal relationship, the order of execution is not limited to that provided by the embodiments of the present application. In an actual processing procedure or device (for example, a parallel processor or a multi-threaded application environment), the method may be executed sequentially or in parallel according to the order shown in the embodiments or figures.
Referring to fig. 1, a flowchart of a method for denoising a video image according to an embodiment of the present disclosure is shown, where a specific flowchart of the method for denoising a video image in fig. 1 is described as follows:
step 101: and obtaining a reference frame image and an image to be processed.
In the embodiment of the present application, the reference frame image may be an image input by the user (whether or not it has already been denoised), or it may be the earlier of two consecutive frames of the video to be processed, with the later frame used as the image to be processed. Preferably, the reference frame image and the image to be processed are two consecutive frames of the same video, and the reference frame image is the frame immediately preceding the image to be processed. In that case, when the video is denoised, the motion displacement of objects in the image to be processed determined from the reference frame image is more consistent and more accurate.
Step 102: and determining the motion grade of the block to be processed in the image to be processed according to the reference frame image, wherein the motion grade is used for representing the degree of motion displacement of the block to be processed.
In the embodiment of the present application, when determining the motion level of the block to be processed in the image to be processed according to the reference frame image, motion estimation may be performed on the block to be processed in the image to be processed according to the reference frame image, that is, motion displacement of the block to be processed relative to a corresponding block in the reference frame image is determined, and then motion estimation may be performed to determine the motion level of the block to be processed.
In the embodiment of the present application, the captured video may contain both moving objects and static objects, so that part of the image to be processed corresponds to moving objects while another part corresponds to static objects. Motion estimation can therefore be performed on the objects that are in motion in the image to be processed, so that 3D denoising can be applied to the image to be processed according to the resulting motion level.
Specifically, when the block to be processed in the image to be processed is subjected to motion estimation, the image to be processed and the reference frame image may be divided into a plurality of blocks according to the same division rule, and each block may correspond to a part of the reference frame image or the image to be processed. The block to be processed may be any of a plurality of blocks in the image to be processed, and the corresponding processing degrees may be different due to different motion states of objects included in the plurality of blocks of the image to be processed. Furthermore, a block that is most matched with the relative position of the block to be processed in the image to be processed can be determined from the plurality of blocks of the reference frame image according to a preset matching principle, and the motion displacement of the block to be processed can be estimated according to the most matched block.
For example, suppose the reference frame image and the image to be processed are divided into 16 × 16 image blocks. The current position of the current block in the reference image and its 8 surrounding positions are then searched according to a three-step method, with 3 rounds of searching performed in sequence, and the block that best matches the block to be filtered among the 27 candidate positions is found according to the SAD principle of formula (1). Specifically, in the embodiment of the present application, the traditional three-step method may be adopted with step sizes 4, 2 and 1; in addition, the current position of the block is included as a comparison position, so each search round compares 9 positions.
Formula (1) is the SAD criterion, SAD = Σ|cur_ij - ref_ij| (summed over all pixel positions [i, j]), wherein ref_ij represents the pixel value at position [i, j] of the current block in the reference frame image (i.e., the block corresponding to the block to be processed in the image to be processed), and cur_ij represents the pixel value at position [i, j] of the block to be processed in the image to be processed.
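As a minimal sketch of the block matching just described (assuming 16 × 16 blocks, the SAD criterion of formula (1), and a three-step search with step sizes 4, 2, 1 that compares 9 positions per round; the function names and the way out-of-bounds candidates are skipped are illustrative, not taken from the patent):

```python
import numpy as np

def sad(ref_block, cur_block):
    # Formula (1): sum of absolute differences of co-located pixel values.
    return int(np.sum(np.abs(ref_block.astype(np.int32) - cur_block.astype(np.int32))))

def three_step_match(ref_frame, cur_block, top, left, block=16, steps=(4, 2, 1)):
    """Three-step search around (top, left): each round compares the current centre
    and its 8 neighbours (9 positions) at the given step size, then re-centres on
    the best SAD match. Returns the estimated motion displacement and the best SAD."""
    h, w = ref_frame.shape[:2]
    best_y, best_x, best_cost = top, left, None
    center_y, center_x = top, left
    for step in steps:
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                y, x = center_y + dy, center_x + dx
                if y < 0 or x < 0 or y + block > h or x + block > w:
                    continue  # candidate block falls outside the reference frame
                cost = sad(ref_frame[y:y + block, x:x + block], cur_block)
                if best_cost is None or cost < best_cost:
                    best_y, best_x, best_cost = y, x, cost
        center_y, center_x = best_y, best_x  # re-centre for the next, finer round
    return (best_y - top, best_x - left), best_cost
```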
In the embodiment of the present application, the motion levels may be divided into 255 levels of 1 to 255 in advance, each motion level corresponds to a range of motion level values, for example, the motion level value is 0 to 10, which is the first motion level.
In the embodiment of the present application, because the reference frame image and the image to be processed are two consecutive frames of the same video, noise in the reference frame image may affect the motion score determined for the block to be processed; that is, the motion score determined from the block in the reference frame image corresponding to the block to be processed may contain a certain noise component. To make the determined motion level more accurate, after the motion score of the block to be processed is determined from the corresponding block in the reference frame image, a reference error value (i.e., the error or noise component that may be present in the motion score) can be subtracted from the motion score, and the motion level of the block to be processed is then determined according to the difference between the motion score and the reference error value. That is to say, in the embodiment of the present application, a motion score of the block to be processed may be determined according to the motion estimation, i.e., the degree of motion of the current region is scored; the motion level value of the block to be processed is then obtained from the difference between the motion score and the reference error value, and the motion level is determined from the motion level value.
In this embodiment of the application, the reference error value may be a variance value with a minimum variance among a plurality of blocks included in the image to be processed, or the reference error value may also be a variance average value of a block with a minimum variance among at least two set regions arbitrarily selected from the image to be processed, where each of the at least two regions includes at least one block, the variance represents a discrete degree of a data point, and the smaller the variance, the smaller the information amount of the corresponding block is.
The reasons for using the smallest block variance as the reference error value are as follows: 1. The variance represents the degree of dispersion of the data points, and the smallest variance means the smallest amount of information. 2. If a picture is free of noise, there is always a block with a minimal amount of information (a picture always has a flat region where pixel values barely change). 3. If the picture contains noise, noise is added to every block; because the noise is randomly distributed with zero mean, the noise added to each block can be regarded as the same, so when the minimum variance is computed the block with the least information in point 2 is still the same block. The negligible information content of that block can therefore be ignored, and its variance can be taken directly as the reference error value.
Further, if the block with the least information content were still taken as the noise model in a wide-dynamic-range scene, that block might lie in a bright area, and 3D denoising of dark areas would then be weak. To handle this special case, the second way of determining the reference error value mentioned above may be adopted, i.e., the mean of the smallest block variances in at least two regions arbitrarily selected from the image to be processed is used as the reference error value. For example, FIG. 2 shows a video image; as shown in FIG. 2, five regions A, B, C, D and E may be selected from the image, the variance value of the block with the smallest variance S in each region is obtained, and the mean of the smallest variances S over the five regions is calculated and used as the reference error value of the image.
Therefore, different reference error values can be adopted according to different conditions, so that the determined motion grade value is more accurate, the determined time domain filter coefficient and the determined space domain filter coefficient are more suitable for the current filtering and denoising scene, the final denoising precision of the video image is higher, and the denoising effect is better.
Specifically, the motion score Motion_Score of the block to be processed may be calculated according to formula (2), wherein rBlock represents the matched block in the reference frame, cBlock represents the current block to be processed in the frame to be processed, rows represents the number of rows of the matched block, and cols represents the number of columns of the matched block.
Further, when the reference error value is the variance value of the block with the smallest variance among the plurality of blocks included in the image to be processed, the reference error value may be calculated according to formula (3); when the reference error value is the mean of the smallest block variance values in each of at least two regions arbitrarily selected from the image to be processed, the reference error value may be calculated according to formula (4).
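Since formulas (3) and (4) are not reproduced in this text, the following sketch assumes they correspond directly to the two definitions above: the smallest block variance over the whole image, and the mean of the smallest block variance in each selected region (such as the five regions of FIG. 2). The block size and function names are illustrative.

```python
import numpy as np

def block_variances(image, block=16):
    # Variance of every non-overlapping block x block tile (the 16-pixel block size is an assumption).
    h, w = image.shape[:2]
    variances = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            variances.append(float(np.var(image[y:y + block, x:x + block])))
    return variances

def reference_error_global(image, block=16):
    # Assumed reading of formula (3): smallest block variance in the whole image.
    return min(block_variances(image, block))

def reference_error_regions(image, regions, block=16):
    # Assumed reading of formula (4): mean of the smallest block variance in each region,
    # where `regions` is a list of (y0, y1, x0, x1) windows such as A..E in FIG. 2.
    minima = [min(block_variances(image[y0:y1, x0:x1], block)) for (y0, y1, x0, x1) in regions]
    return float(np.mean(minima))
```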
further, the motion level value abs may be determined according to the following equation (5):
abs=Motion_Score-Noise*Noise_coefficient (5);
wherein abs is the motion level value and Noise_coefficient is the influence factor of the noise; if the overall noise in the image is large, Noise_coefficient can be set larger, otherwise Noise_coefficient can be set to 1.5, which is an empirical value.
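Formula (2) itself is not reproduced here, so the following sketch assumes the motion score is the mean absolute difference between the matched reference block and the current block (consistent with the rBlock/cBlock, rows and cols description above); the mapping from the level value of formula (5) onto 255 discrete levels in bins of 10 is likewise an assumption based on the "0 to 10 is the first motion level" example.

```python
import numpy as np

def motion_score(r_block, c_block):
    # Assumed form of formula (2): mean absolute difference between the matched
    # reference block (rBlock) and the current block (cBlock), averaged over rows * cols.
    diff = np.abs(r_block.astype(np.float32) - c_block.astype(np.float32))
    return float(diff.sum() / (r_block.shape[0] * r_block.shape[1]))

def motion_level_value(score, noise, noise_coefficient=1.5):
    # Formula (5): abs = Motion_Score - Noise * Noise_coefficient (1.5 is the stated empirical default).
    return score - noise * noise_coefficient

def motion_level(level_value, levels=255, bin_width=10):
    # Map the level value onto motion levels 1..255 in bins of 10 (assumed mapping).
    return int(np.clip(level_value // bin_width + 1, 1, levels))
```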
Step 103: and sequentially carrying out time domain filtering processing and space domain filtering processing on the block to be processed according to a preset time domain filtering coefficient and a preset space domain filtering coefficient corresponding to the motion grade to obtain a first filtering processing result.
In the embodiment of the application, each motion level may correspond to a specific time domain filter coefficient and a specific spatial domain filter coefficient, and then the time domain filter coefficient and the spatial domain filter coefficient corresponding to the block to be processed may be determined according to the motion level of the block to be processed. The filtering coefficients can be set in a grading mode according to filtering processing experience in advance, namely, the time domain filtering coefficients and the space domain filtering coefficients can be individually customized by a user according to the experience of the user and are set in advance, so that the obtained filtering effect on videos can better accord with the watching habit of the user.
Further, the motion levels in the embodiment of the present application may have a one-to-one correspondence with the temporal filter coefficients; for example, as shown in Table one, motion level 1 corresponds to temporal filter coefficient 1, motion level 2 corresponds to temporal filter coefficient 2, and so on, up to motion level 255 corresponding to temporal filter coefficient 255. Each temporal filter coefficient (also called a temporal filter weight, representing the strength of temporal filtering) corresponds to a group of spatial filter coefficients: d, sigmaColor and sigmaSpace are the coefficient values of the spatial (bilateral) filter, where d is the diameter of the pixel neighbourhood used during filtering, sigmaColor is the sigma value of the filter in colour space, and sigmaSpace is the sigma value in coordinate space. It should be noted that Table one only illustrates the temporal filter coefficients and spatial filter coefficients corresponding to part of the motion levels.
Table one:
temporal filtering weights | d | sigmacolor | sigmaspace |
10 | 15 | 20 | 10 |
9 | 15 | 25 | 10 |
8 | 15 | 30 | 15 |
7 | 15 | 35 | 15 |
6 | 15 | 40 | 20 |
5 | 15 | 45 | 20 |
4 | 15 | 50 | 25 |
3 | 15 | 55 | 25 |
2 | 15 | 60 | 30 |
1 | 15 | 65 | 30 |
Specifically, the time-domain filtering process may be performed on the block to be processed according to the following formula (6):
dst=(255-weight)*ref+weight*cur (6);
wherein, weight is a temporal filtering weight, ref is a reference frame image, cur is an image to be processed, dst is a temporal filtering result image, and the weight range is [1, 255 ].
Further, after the block to be processed is subjected to the time-domain filtering, the image obtained by the time-domain filtering may be subjected to the spatial-domain filtering, so as to obtain a first filtering result of the block to be processed. The spatial filtering may adopt bilateral filtering, and the filtering coefficient may be selected according to the motion level of the block to be processed.
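A sketch of one such 3D filtering pass on a single block, combining formula (6) with OpenCV's bilateral filter and the coefficients of Table one; normalising the temporal result by 255, looking coefficients up by temporal weight, and the fallback coefficients for weights outside the table are assumptions:

```python
import cv2
import numpy as np

# Spatial coefficients per temporal filtering weight, taken from Table one:
# temporal weight -> (d, sigmaColor, sigmaSpace). The table only covers part of the levels.
FILTER_TABLE = {
    10: (15, 20, 10), 9: (15, 25, 10), 8: (15, 30, 15), 7: (15, 35, 15), 6: (15, 40, 20),
    5: (15, 45, 20), 4: (15, 50, 25), 3: (15, 55, 25), 2: (15, 60, 30), 1: (15, 65, 30),
}

def filter_block(ref_block, cur_block, weight):
    """Temporal filtering per formula (6), then bilateral spatial filtering.

    dst = (255 - weight) * ref + weight * cur; dividing by 255 to stay in the 8-bit
    range is an assumption, since the patent only gives the weight range [1, 255]."""
    ref = ref_block.astype(np.float32)
    cur = cur_block.astype(np.float32)
    dst = ((255.0 - weight) * ref + weight * cur) / 255.0
    d, sigma_color, sigma_space = FILTER_TABLE.get(weight, (15, 40, 20))  # fallback is illustrative
    return cv2.bilateralFilter(dst, d, sigma_color, sigma_space)
```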
Step 104: and judging whether the first filtering processing result meets a preset condition, if not, executing the step 105, and if so, executing the step 106.
In the embodiment of the application, after the first filtering processing result of the block to be processed is obtained, it can be judged whether the first filtering processing result meets the preset condition. If it does, the filtering and denoising effect on the block to be processed is good enough, and the image corresponding to the first filtering processing result can be output directly; if it does not, the block to be processed needs to be filtered and denoised again. Specifically, whether the first filtering processing result satisfies the preset condition may be determined according to the following formula (7):
(H_ref - H_cur_after) < (H_ref - H_cur_before) * coefficient (7);
wherein H_ref is the information entropy of the block in the reference frame image that matches the block to be processed, H_cur_after is the information entropy of the block to be processed after processing, H_cur_before is the information entropy of the block to be processed before processing, and coefficient is a preset parameter for controlling the error tolerance of the difference.
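A sketch of this feedback check, assuming the information entropy is the Shannon entropy of the block's 8-bit intensity histogram (the patent only states that the entropy represents the information content) and using a placeholder value for the preset coefficient:

```python
import numpy as np

def block_entropy(block):
    # Shannon entropy of the 8-bit intensity histogram (assumed definition of the information entropy).
    hist, _ = np.histogram(block, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def first_result_acceptable(ref_block, cur_before, cur_after, coefficient=0.9):
    # Formula (7): (H_ref - H_cur_after) < (H_ref - H_cur_before) * coefficient.
    # The value 0.9 is only a placeholder; the patent leaves coefficient as a preset parameter.
    h_ref = block_entropy(ref_block)
    return (h_ref - block_entropy(cur_after)) < (h_ref - block_entropy(cur_before)) * coefficient
```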
Step 105: and according to the time domain filter coefficient and the space domain filter coefficient corresponding to each motion grade in the motion grade corresponding relation, calculating second filter processing results corresponding to the blocks to be processed under each motion grade one by one, and outputting the second filter processing results with the highest information entropy as final filter processing results, wherein the information entropy is used for representing the information content in the blocks to be processed.
In this embodiment of the application, if the first filtering processing result does not satisfy the preset condition, an exhaustive search may be performed over the preset motion level correspondence. The block to be processed is filtered once for each motion level, using the temporal filter coefficient and spatial filter coefficient corresponding to that motion level, and the information entropy of the filtered block is calculated for each motion level. The results are then ranked, the motion level giving the best information entropy is taken as the optimal motion level, and the filtering result obtained at the optimal motion level (i.e., the second filtering processing result) is output as the final filtering processing result of the block to be processed, thereby improving the accuracy of filtering and denoising of the image to be processed and improving the denoising effect.
Therefore, the filtering and denoising processing of all the blocks to be processed in the image to be processed is sequentially completed according to the method, and the filtering and denoising processing result of the whole image to be processed can be obtained. In addition, as the block-by-block filtering and denoising processing is carried out on the image to be processed in a blocking mode, the filtering and denoising processing result of each block is more accurate, the denoising effect is better, and the denoising effect of the whole image to be processed is further improved.
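A sketch of the exhaustive fallback described above, reusing the filter_block and block_entropy helpers from the earlier sketches; level_weights, which maps each motion level in the correspondence to its temporal weight, is a hypothetical structure:

```python
def best_second_result(ref_block, cur_block, level_weights):
    # Filter the block once per motion level in the correspondence and keep the
    # result with the highest information entropy as the final filtering result.
    best_entropy, best_result = -1.0, None
    for level, weight in level_weights.items():
        candidate = filter_block(ref_block, cur_block, weight)
        entropy = block_entropy(candidate)
        if entropy > best_entropy:
            best_entropy, best_result = entropy, candidate
    return best_result
```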
Step 106: and outputting a first filtering processing result.
In the embodiment of the application, when the first filtering processing result meets the preset condition, the first filtering processing result can be directly output, and then the image subjected to filtering and denoising processing is obtained.
For ease of understanding, the specific process of time-space domain noise reduction of a video image is described in detail below, taking as an example a single block to be processed in the image to be processed (hereinafter referred to as the first processing block), the block corresponding to the first processing block in the reference frame image (the first reference block), and a motion level correspondence containing 3 entries.
In the first step, a reference frame image and an image to be processed are obtained.
And secondly, performing motion estimation on a first processing block in the image to be processed according to the reference frame image, namely determining the first processing block in the image to be processed, searching a plurality of reference blocks which are possibly matched with the first processing block from the reference frame image, calculating matching results of the plurality of reference blocks, determining a reference block with the highest matching result, namely the first reference block, from the plurality of reference blocks, and determining the motion displacement of the first processing block relative to the first reference block in the reference frame image.
Thirdly, determining the motion grade of the first processing block according to the motion estimation of the first processing block;
and fourthly, determining a preset time domain filter coefficient and a preset space domain filter coefficient corresponding to the motion grade of the first processing block.
And fifthly, respectively carrying out time domain filtering processing and spatial filtering processing on the first processing block according to a preset time domain filtering coefficient and a preset spatial filtering coefficient so as to obtain a first filtering processing result of the first processing block.
Sixthly, judging whether the first filtering result meets a preset condition, and if so, outputting the first filtering result as the filtering result of the first processing block; if the preset condition is not met, the subsequent steps are continuously executed.
And seventhly, searching 3 groups of motion grade corresponding relations one by adopting an exhaustion method, and respectively filtering the first processing block by utilizing a time domain filter coefficient and a space domain filter coefficient corresponding to the motion grade searched each time, so that 3 filtering results of the first processing block can be obtained, and the 3 filtering results can be compared to determine a filtering result with the highest information entropy, namely a second filtering result.
And step eight, outputting the second filtering processing result as a final filtering processing result.
Therefore, with the above method, the motion level of a block to be processed in the image to be processed can be determined according to the reference frame image, and temporal filtering and spatial filtering can be performed on the block according to the temporal filter coefficient and spatial filter coefficient corresponding to that motion level. It can then be judged whether the resulting first filtering result meets the preset condition, i.e., whether the current filtering effect is fed back as being within an acceptable range. If the first filtering result does not meet the preset condition, the second filtering results corresponding to the block at each motion level can be calculated one by one according to the temporal and spatial filter coefficients corresponding to each motion level in the motion level correspondence, and the second filtering result with the highest information entropy is output as the final filtering result, ensuring that the output is the optimal or an expected filtering result. This improves the accuracy of filtering and denoising of the image to be processed and improves the denoising effect of the video image.
Based on the same inventive concept, the embodiment of the application provides a denoising device for a video image, and the denoising of the video image can be a hardware structure, a software module or a hardware structure and a software module. The denoising of the video image can be realized by a chip system, and the chip system can be composed of a chip and can also comprise the chip and other discrete devices. Referring to fig. 3, the denoising of a video image in the embodiment of the present application includes an obtaining module 301, a determining module 302, a first processing module 303, and a second processing module 304, where:
an obtaining module 301, configured to obtain a reference frame image and an image to be processed;
a determining module 302, configured to determine, according to the reference frame image, a motion level of a block to be processed in the image to be processed, where the motion level is used to represent a degree of motion displacement of the block to be processed;
a first processing module 303, configured to perform time-domain filtering processing and spatial-domain filtering processing on a block to be processed in sequence according to a preset time-domain filtering coefficient and a preset spatial-domain filtering coefficient corresponding to the motion level, so as to obtain a first filtering processing result;
and the second processing module 304 is configured to, when the first filtering result does not meet a preset condition, calculate, one by one, second filtering results corresponding to the block to be processed at each motion level according to a time domain filtering coefficient and a space domain filtering coefficient corresponding to each motion level in the motion level correspondence, and output, as a final filtering result, the second filtering result with the highest information entropy, where the information entropy is used to represent information content in the block to be processed.
In a possible implementation, the reference frame image and the image to be processed are two consecutive frames of images in the same video, and the reference frame image is an image of a frame preceding the image to be processed.
In a possible implementation, the determining module 302 is specifically configured to:
according to the reference frame image, carrying out motion estimation on a block to be processed in the image to be processed, wherein the motion estimation is used for determining the motion displacement of the block to be processed relative to a corresponding block in the reference frame image; and determining the motion level of the block to be processed according to the motion estimation.
In a possible implementation manner, the first processing module 303 is specifically configured to:
dividing the reference frame image and the image to be processed into a plurality of blocks according to the same rule; the block to be processed is any one or more blocks in a plurality of blocks of the image to be processed; determining a block which is most matched with the relative position of the block to be processed in the image to be processed from a plurality of blocks of the reference frame image according to a preset matching principle; and estimating the motion displacement of the block to be processed according to the most matched block.
In a possible implementation, the first processing module 303 is further configured to: determine a motion score of the block to be processed according to the motion estimation; and determine the motion level according to the difference value between the motion score and the reference error value; wherein the reference error value is the variance value of the block with the smallest variance among the plurality of blocks included in the image to be processed, or the reference error value is the mean of the smallest block variance values in each of at least two regions arbitrarily selected from the image to be processed, each of the at least two regions including at least one block; the variance represents the degree of dispersion of the data points, and the smaller the variance, the smaller the information content of the corresponding block.
In one possible design, the determining module 302 is further configured to determine whether the first filtering result satisfies a preset condition according to the following formula:
(H_ref - H_cur_after) < (H_ref - H_cur_before) * coefficient;
wherein H_ref is the information entropy of the block in the reference frame image that matches the block to be processed, H_cur_after is the information entropy of the block to be processed after processing, H_cur_before is the information entropy of the block to be processed before processing, and coefficient is a preset parameter for controlling the error tolerance of the difference.
With regard to the denoising apparatus for video images in the above embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
The division of the modules in the embodiments of the present disclosure is illustrative, and is only a logical function division, and there may be another division manner in actual implementation, and in addition, each functional module in each embodiment of the present disclosure may be integrated in one processor, may also exist alone physically, or may also be integrated in one module by two or more modules. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
Based on the same inventive concept, the embodiment of the present application provides an electronic device, which may be a hardware structure, a software module, or a hardware structure plus a software module. The electronic device may be implemented by a system-on-chip, which may consist of a chip or may include a chip and other discrete devices. Referring to fig. 4, the electronic device in this embodiment of the present application includes at least one processor 401 and a memory 402 connected to the at least one processor. The specific connection medium between the processor 401 and the memory 402 is not limited in this embodiment; in fig. 4 they are connected by a bus 400, shown as a thick line, as an example, and the connection manner between other components is only schematically illustrated and is not limiting. The bus 400 may be divided into an address bus, a data bus, a control bus, etc.; it is shown with only one thick line in fig. 4 for ease of illustration, but this does not mean there is only one bus or one type of bus.
In the embodiment of the present application, the memory 402 stores instructions executable by the at least one processor 401, and the at least one processor 401 may execute the steps included in the foregoing denoising method for a video image by executing the instructions stored in the memory 402.
The hardware structure of the processor 401 may be a CPU, a DSP, an ASIC, etc., and the hardware structure of the memory 402 may be a flash memory, a hard disk, a multimedia card, a card memory, a RAM, an SRAM, etc., which will not be described again.
The processor 401 is a control center of the electronic device, and may connect various portions of the whole electronic device by using various interfaces and lines, and perform various functions and process data of the electronic device by operating or executing instructions stored in the memory 402 and calling data stored in the memory 402, thereby performing overall monitoring on the electronic device.
Optionally, the processor 401 may include one or more processing units, and the processor 401 may integrate an application processor and a modem processor, wherein the application processor mainly handles an operating system, a user interface, an application program, and the like, and the modem processor mainly handles wireless communication. It will be appreciated that the modem processor described above may not be integrated into the processor 401. In some embodiments, processor 401 and memory 402 may be implemented on the same chip, or in some embodiments, they may be implemented separately on separate chips.
Based on the same inventive concept, the present application further provides a computer storage medium storing computer instructions, which when executed on a computer, cause the computer to perform the steps of the denoising method for video images as described above.
In some possible implementations, the storage medium may be a non-transitory computer readable storage medium, which may be, for example, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and so forth.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
Claims (10)
1. A method for denoising a video image, comprising:
obtaining a reference frame image and an image to be processed;
determining, according to the reference frame image, a motion level of a block to be processed in the image to be processed, wherein the motion level characterizes the degree of motion displacement of the block to be processed;
sequentially performing time-domain filtering and space-domain filtering on the block to be processed according to a preset time-domain filtering coefficient and a preset space-domain filtering coefficient corresponding to the motion level, so as to obtain a first filtering processing result;
and when the first filtering processing result does not meet a preset condition, calculating, one by one, second filtering processing results of the block to be processed under each motion level according to the time-domain filtering coefficient and the space-domain filtering coefficient corresponding to that motion level in a motion level correspondence, and outputting the second filtering processing result with the highest information entropy as the final filtering processing result, wherein the information entropy characterizes the amount of information in the block to be processed.
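For illustration only, the following is a minimal Python sketch of the workflow recited in claim 1, including the entropy-based fallback that claim 6 later formalizes. The per-level coefficient table LEVEL_COEFFS, the exponential blend used for the time-domain step, the Gaussian kernel used for the space-domain step, and the default value of coefficient are all assumptions; the claim itself only requires preset coefficients per motion level.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hypothetical (alpha, sigma) per motion level: alpha weights the reference block in
# the time-domain blend, sigma controls the space-domain Gaussian smoothing.
LEVEL_COEFFS = {0: (0.8, 0.5), 1: (0.5, 1.0), 2: (0.2, 1.5)}

def entropy(block, bins=256):
    """Histogram-based information entropy of an 8-bit block (its information content)."""
    hist, _ = np.histogram(block, bins=bins, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))

def filter_block(cur_block, ref_block, level):
    """Time-domain filtering followed by space-domain filtering, using the preset
    coefficients of the given motion level."""
    alpha, sigma = LEVEL_COEFFS[level]
    blended = alpha * ref_block + (1.0 - alpha) * cur_block   # time-domain step
    return gaussian_filter(blended, sigma=sigma)              # space-domain step

def denoise_block(cur_block, ref_block, level, coefficient=0.9):
    """Filter with the estimated level first; if the entropy criterion (claim 6)
    is not met, compute a result for every level and keep the highest-entropy one."""
    first = filter_block(cur_block, ref_block, level)
    h_ref = entropy(ref_block)
    if (h_ref - entropy(first)) < (h_ref - entropy(cur_block)) * coefficient:
        return first
    candidates = [filter_block(cur_block, ref_block, lv) for lv in LEVEL_COEFFS]
    return max(candidates, key=entropy)
```

Choosing the highest-entropy candidate in the fallback favours the motion level whose coefficients preserve the most detail in the block, which is the design intent stated in the claim.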
2. The method of claim 1, wherein the reference frame image and the image to be processed are two consecutive frames of the same video, and the reference frame image is the frame immediately preceding the image to be processed.
3. The method of claim 1, wherein the determining the motion level of a block to be processed in the image to be processed according to the reference frame image comprises:
according to the reference frame image, carrying out motion estimation on a block to be processed in the image to be processed, wherein the motion estimation is used for determining the motion displacement of the block to be processed relative to a corresponding block in the reference frame image;
and determining the motion level of the block to be processed according to the motion estimation.
4. The method of claim 3, wherein the performing motion estimation on the block to be processed in the image to be processed according to the reference frame image comprises:
dividing the reference frame image and the image to be processed into a plurality of blocks according to the same rule, wherein the block to be processed is any one or more of the plurality of blocks of the image to be processed;
determining, from the plurality of blocks of the reference frame image and according to a preset matching principle, the block that best matches the block to be processed at its relative position in the image to be processed;
and estimating the motion displacement of the block to be processed according to the best-matching block.
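As a concrete reading of claims 3 and 4, the sketch below estimates the displacement of one block by exhaustive search over a small window around the co-located position in the reference frame, using the sum of absolute differences (SAD) as the "preset matching principle". Block size, search radius, and the SAD criterion are assumptions; the claims do not fix them.

```python
import numpy as np

def estimate_displacement(ref_frame, cur_frame, top, left, block=16, search=8):
    """Return the (dy, dx) motion displacement of the block whose top-left corner is
    (top, left) in cur_frame, found as the minimum-SAD block within a +/-search
    window of the co-located position in ref_frame, together with its SAD."""
    cur = cur_frame[top:top + block, left:left + block].astype(np.int32)
    h, w = ref_frame.shape[:2]
    best_sad, best_dyx = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > h or x + block > w:
                continue  # candidate block falls outside the reference frame
            cand = ref_frame[y:y + block, x:x + block].astype(np.int32)
            sad = np.abs(cur - cand).sum()  # sum of absolute differences
            if best_sad is None or sad < best_sad:
                best_sad, best_dyx = sad, (dy, dx)
    return best_dyx, best_sad
```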
5. The method of claim 4, wherein the determining the motion level of the block to be processed according to the motion estimation comprises:
determining a motion score of the block to be processed according to the motion estimation;
determining the motion level according to the difference between the motion score and a reference noise;
wherein the reference noise is the minimum block variance among the plurality of blocks included in the image to be processed, or the reference noise is the mean of the minimum block variances of at least two regions arbitrarily selected from the image to be processed, each of the at least two regions including at least one block; the variance represents the degree of dispersion of the data points, and the smaller the variance, the less information the corresponding block contains.
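The sketch below illustrates one possible implementation of claim 5: the reference noise is taken as the minimum block variance in the frame (the first alternative in the claim), the motion score is assumed to be the variance of the motion-compensated block difference, and hypothetical thresholds map the score-to-noise difference onto three levels. The score definition and the thresholds are assumptions, not part of the claim.

```python
import numpy as np

def reference_noise(frame, block=16):
    """Reference noise per claim 5 (first alternative): the smallest block variance in
    the frame; a nearly flat block is assumed to contain mostly noise."""
    h, w = frame.shape[:2]
    variances = [frame[y:y + block, x:x + block].var()
                 for y in range(0, h - block + 1, block)
                 for x in range(0, w - block + 1, block)]
    return min(variances)

def motion_level(cur_block, matched_ref_block, ref_noise, thresholds=(2.0, 8.0)):
    """Map the difference between a motion score and the reference noise onto a
    discrete level; the score definition and thresholds here are assumptions."""
    diff_block = cur_block.astype(np.float32) - matched_ref_block.astype(np.float32)
    score = float(np.var(diff_block))      # assumed motion score
    diff = score - ref_noise
    if diff <= thresholds[0]:
        return 0   # static or near-static block
    if diff <= thresholds[1]:
        return 1   # moderate motion
    return 2       # strong motion
```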
6. The method of claim 1, wherein whether the first filtering processing result satisfies the preset condition is determined according to the following formula:
(H_ref − H_cur_after) < (H_ref − H_cur_before) × coefficient;
wherein H_ref is the information entropy of the block in the reference frame image that matches the block to be processed, H_cur_after is the information entropy of the block to be processed after processing, H_cur_before is the information entropy of the block to be processed before processing, and coefficient is a preset parameter controlling the tolerated error of the difference.
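As a worked example with hypothetical values, let H_ref = 7.5, H_cur_before = 7.2, H_cur_after = 7.4 and coefficient = 0.5: the left-hand side is 7.5 − 7.4 = 0.1 and the right-hand side is (7.5 − 7.2) × 0.5 = 0.15, so the inequality holds, the preset condition is satisfied, and the first filtering processing result is kept. A larger entropy loss in the processed block would fail the check and trigger the per-level fallback of claim 1.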
7. A device for denoising a video image, comprising:
the acquisition module is used for acquiring a reference frame image and an image to be processed;
the determining module is used for determining, according to the reference frame image, a motion level of a block to be processed in the image to be processed, wherein the motion level characterizes the degree of motion displacement of the block to be processed;
the first processing module is used for sequentially performing time-domain filtering and space-domain filtering on the block to be processed according to a preset time-domain filtering coefficient and a preset space-domain filtering coefficient corresponding to the motion level, so as to obtain a first filtering processing result;
and the second processing module is used for, when the first filtering processing result does not meet a preset condition, calculating, one by one, second filtering processing results of the block to be processed under each motion level according to the time-domain filtering coefficient and the space-domain filtering coefficient corresponding to that motion level in a motion level correspondence, and outputting the second filtering processing result with the highest information entropy as the final filtering processing result, wherein the information entropy characterizes the amount of information in the block to be processed.
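Structurally, the device of claim 7 can be sketched as a thin composition of its four modules. The sketch below wires them together as callables, with the acquisition, level-determination, and filtering logic supplied by the earlier sketches or any other implementation; the callable signatures are assumptions for illustration only.

```python
from typing import Callable, Tuple
import numpy as np

# Assumed callable shapes, one per responsibility of claim 7.
Acquire = Callable[[], Tuple[np.ndarray, np.ndarray]]           # -> (reference, to_process)
DetermineLevel = Callable[[np.ndarray, np.ndarray], int]        # (to_process, reference) -> level
FirstFilter = Callable[[np.ndarray, np.ndarray, int], np.ndarray]
SecondFilter = Callable[[np.ndarray, np.ndarray], np.ndarray]   # per-level fallback, highest entropy
Condition = Callable[[np.ndarray, np.ndarray, np.ndarray], bool]

class VideoImageDenoisingDevice:
    """One attribute per module of claim 7: acquisition, determining, first
    processing, and second processing."""

    def __init__(self, acquire: Acquire, determine_level: DetermineLevel,
                 first_filter: FirstFilter, second_filter: SecondFilter,
                 condition: Condition):
        self.acquire = acquire                # acquisition module
        self.determine_level = determine_level  # determining module
        self.first_filter = first_filter      # first processing module
        self.second_filter = second_filter    # second processing module
        self.condition = condition            # preset condition of claim 6

    def process(self) -> np.ndarray:
        reference, to_process = self.acquire()
        level = self.determine_level(to_process, reference)
        first = self.first_filter(to_process, reference, level)
        if self.condition(reference, to_process, first):
            return first
        return self.second_filter(to_process, reference)
```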
8. The device of claim 7, wherein the determining module is specifically configured for:
according to the reference frame image, carrying out motion estimation on a block to be processed in the image to be processed, wherein the motion estimation is used for determining the motion displacement of the block to be processed relative to a corresponding block in the reference frame image;
and determining the motion level of the block to be processed according to the motion estimation.
9. An electronic device, comprising at least one processor and at least one memory, wherein the memory stores a computer program that, when executed by the processor, causes the processor to perform the steps of the method of any of claims 1-6.
10. A computer storage medium storing computer instructions which, when executed on a computer, cause the computer to perform the steps of the method of any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010454733.8A CN111652814B (en) | 2020-05-26 | 2020-05-26 | Denoising method and device for video image, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111652814A (en) | 2020-09-11
CN111652814B CN111652814B (en) | 2023-05-12 |
Family
ID=72343037
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010454733.8A Active CN111652814B (en) | 2020-05-26 | 2020-05-26 | Denoising method and device for video image, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111652814B (en) |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7405770B1 (en) * | 2004-10-13 | 2008-07-29 | Cirrus Logic, Inc. | Adaptive equalization method and system for reducing noise and distortion in a sampled video signal |
CN102769722A (en) * | 2012-07-20 | 2012-11-07 | 上海富瀚微电子有限公司 | Time-space domain hybrid video noise reduction device and method |
CN103024248A (en) * | 2013-01-05 | 2013-04-03 | 上海富瀚微电子有限公司 | Motion-adaptive video image denoising method and device |
CN103632352A (en) * | 2013-11-01 | 2014-03-12 | 华为技术有限公司 | Method for time domain noise reduction of noise image and related device |
CN104952041A (en) * | 2014-03-26 | 2015-09-30 | 安凯(广州)微电子技术有限公司 | Image filtering method and image filtering device |
US20150279006A1 (en) * | 2014-04-01 | 2015-10-01 | Samsung Techwin Co., Ltd. | Method and apparatus for reducing noise of image |
CN105208376A (en) * | 2015-08-28 | 2015-12-30 | 青岛中星微电子有限公司 | Digital noise reduction method and device |
CN107437238A (en) * | 2016-05-25 | 2017-12-05 | 上海联影医疗科技有限公司 | A kind of adaptive recursive noise reduction method and device of image block |
Non-Patent Citations (2)
Title |
---|
董晶; 陈黎; 周小舟: "Blind detection algorithm for perceived noise in surveillance video based on region optimization", 计算机工程与设计 (Computer Engineering and Design) *
谭洪涛; 田逢春; 张莎; 张静; 邱宇: "Sphere bilateral filtering video noise reduction algorithm combined with motion compensation", 系统工程与电子技术 (Systems Engineering and Electronics) *
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114626997A (en) * | 2020-12-11 | 2022-06-14 | 三星电子株式会社 | Method and system for image denoising |
CN114626997B (en) * | 2020-12-11 | 2024-09-17 | 三星电子株式会社 | Method and system for image denoising |
CN112837242A (en) * | 2021-02-19 | 2021-05-25 | 成都国科微电子有限公司 | Image noise reduction processing method, device, equipment and medium |
CN112837242B (en) * | 2021-02-19 | 2023-07-14 | 成都国科微电子有限公司 | Image noise reduction processing method, device, equipment and medium |
CN113012061A (en) * | 2021-02-20 | 2021-06-22 | 百果园技术(新加坡)有限公司 | Noise reduction processing method and device and electronic equipment |
CN113327228A (en) * | 2021-05-26 | 2021-08-31 | Oppo广东移动通信有限公司 | Image processing method and device, terminal and readable storage medium |
CN113327228B (en) * | 2021-05-26 | 2024-04-16 | Oppo广东移动通信有限公司 | Image processing method and device, terminal and readable storage medium |
CN113674316A (en) * | 2021-08-04 | 2021-11-19 | 浙江大华技术股份有限公司 | Video noise reduction method, device and equipment |
CN113662562B (en) * | 2021-08-19 | 2024-02-09 | 西交利物浦大学 | Electroencephalogram signal feature extraction method and device for motor imagery task and storage medium |
CN113662562A (en) * | 2021-08-19 | 2021-11-19 | 西交利物浦大学 | EEG signal feature extraction method, device and storage medium for motor imagery task |
CN113643210A (en) * | 2021-08-26 | 2021-11-12 | Oppo广东移动通信有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN114240778A (en) * | 2021-12-15 | 2022-03-25 | 北京紫光展锐通信技术有限公司 | Video denoising method and device and terminal |
WO2023226584A1 (en) * | 2022-05-27 | 2023-11-30 | 腾讯科技(深圳)有限公司 | Image noise reduction method and apparatus, filtering data processing method and apparatus, and computer device |
CN115187491B (en) * | 2022-09-08 | 2023-02-17 | 阿里巴巴(中国)有限公司 | Image denoising processing method, image filtering processing method and device |
CN115187491A (en) * | 2022-09-08 | 2022-10-14 | 阿里巴巴(中国)有限公司 | Image noise reduction processing method, image filtering processing method and device |
Also Published As
Publication number | Publication date |
---|---|
CN111652814B (en) | 2023-05-12 |
Similar Documents
Publication | Title | Publication Date |
---|---|---|
CN111652814B (en) | Denoising method and device for video image, electronic equipment and storage medium | |
CN109859126B (en) | Video noise reduction method and device, electronic equipment and storage medium | |
Yang et al. | An efficient TVL1 algorithm for deblurring multichannel images corrupted by impulsive noise | |
EP3488388B1 (en) | Video processing method and apparatus | |
CN109584204B (en) | Image noise intensity estimation method, storage medium, processing and recognition device | |
KR102605747B1 (en) | Video noise removal methods, devices, and computer-readable storage media | |
CN108305222B (en) | Image noise reduction method and device, electronic equipment and storage medium | |
CN107333027B (en) | A kind of method and apparatus of video image enhancement | |
CN109410124B (en) | Method and device for reducing noise of video image | |
CN110796615A (en) | Image denoising method and device and storage medium | |
CN107437238B (en) | Image block self-adaptive recursive noise reduction method and device | |
EP2816526B1 (en) | Filtering method and apparatus for recovering an anti-aliasing edge | |
CN106210448B (en) | Video image jitter elimination processing method | |
CN109063691A (en) | A kind of recognition of face bottom library optimization method and system | |
CN113012061A (en) | Noise reduction processing method and device and electronic equipment | |
US20160343113A1 (en) | System for enhanced images | |
CN112801890B (en) | Video processing method, device and equipment | |
KR100739753B1 (en) | Method and apparatus of bidirectional temporal noise reduction | |
CN115546047A (en) | Video image noise reduction method, device and medium based on improved local filtering algorithm | |
CN112435182B (en) | Image noise reduction method and device | |
CN111260590B (en) | Image noise reduction method and related product | |
CN104809705A (en) | Image denoising method and system based on threshold value block matching | |
US8189874B2 (en) | Method for motion detection of horizontal line | |
CN111652821A (en) | Low-light-level video image noise reduction processing method, device and equipment based on gradient information | |
CN107204011A (en) | A kind of depth drawing generating method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||