CN114049271A - Video processing method and device, terminal and computer readable storage medium - Google Patents
- Publication number: CN114049271A (application number CN202111328624.2A)
- Authority: CN (China)
- Prior art keywords: image, noise reduction, parameter, current frame, filtering
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T 5/70 (G—Physics; G06—Computing; G06T—Image data processing or generation, in general; G06T 5/00—Image enhancement or restoration): Denoising; smoothing
- G06T 5/50 (G06T 5/00—Image enhancement or restoration): Using two or more images, e.g. averaging or subtraction
- H04N 5/21 (H—Electricity; H04N—Pictorial communication, e.g. television; H04N 5/14—Picture signal circuitry for the video frequency region): Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
- G06T 2207/10016 (indexing scheme for image analysis or image enhancement; image acquisition modality): Video; image sequence
- G06T 2207/20024 (indexing scheme; special algorithmic details): Filtering details
Abstract
The application provides a video processing method, a video processing device, a terminal, and a computer-readable storage medium. The video processing method includes: performing spatial filtering processing on the current frame image and the previous frame image respectively to obtain a plurality of corresponding frames of first noise-reduced images; performing temporal filtering processing on the plurality of frames of first noise-reduced images to obtain one frame of second noise-reduced image; and performing contrast enhancement processing on the current frame image according to the second noise-reduced image and a preset first guide parameter to obtain a target image, where the first guide parameter is a filtering parameter of the spatial filtering processing and controls its degree of smoothing. With this video processing method, device, terminal, and computer-readable storage medium, small-particle noise is prevented from aggregating into large noise in the target image, the denoising and edge-preserving effects are maximized, and a target image with good denoising and enhanced contrast is obtained.
Description
Technical Field
The present application relates to the field of video processing technologies, and in particular, to a video processing method, a video processing apparatus, a terminal, and a non-volatile computer-readable storage medium.
Background
With the rapid development of multimedia technology, video images have attracted wide attention and found wide application. Advances in video capture and display technology have raised expectations for image quality, yet quality degrades to some extent during the transmission and conversion of images (e.g., imaging and display). To improve the quality of video images and enhance their visual effect, video image enhancement processing is required. At present, denoising during video image enhancement mostly relies on spatial-domain filtering methods such as Non-Local Means (NLM), which suffer from the side effect that large-particle noise is difficult to remove, or that small-particle noise aggregates into large-particle noise.
Disclosure of Invention
The embodiment of the application provides a video processing method, a video processing device, a terminal and a non-volatile computer readable storage medium.
In the video processing method according to the embodiments of the present application, the video includes a current frame image and at least one previous frame image, and the video processing method includes: performing spatial filtering processing on the current frame image and the previous frame image respectively to obtain a plurality of corresponding frames of first noise-reduced images; performing temporal filtering processing on the plurality of frames of first noise-reduced images to obtain one frame of second noise-reduced image; and performing contrast enhancement processing on the current frame image according to the second noise-reduced image and a preset first guide parameter to obtain a target image, where the first guide parameter is a filtering parameter of the spatial filtering processing and controls its degree of smoothing.
The video processing device of the embodiments of the present application includes a first filtering module, a second filtering module, and a contrast enhancement module. The first filtering module is configured to perform spatial filtering processing on the current frame image and the previous frame image respectively to obtain a plurality of corresponding frames of first noise-reduced images. The second filtering module is configured to perform temporal filtering processing on the plurality of frames of first noise-reduced images to obtain one frame of second noise-reduced image. The contrast enhancement module is configured to perform contrast enhancement processing on the current frame image according to the second noise-reduced image and a preset first guide parameter to obtain a target image, where the first guide parameter is a filtering parameter of the spatial filtering processing and controls its degree of smoothing.
The terminal of the embodiments of the present application includes one or more processors, a memory, and one or more programs, where the one or more programs are stored in the memory and executed by the one or more processors, and the programs include instructions for performing the video processing method of the embodiments of the present application, namely: performing spatial filtering processing on the current frame image and the previous frame image respectively to obtain a plurality of corresponding frames of first noise-reduced images; performing temporal filtering processing on the plurality of frames of first noise-reduced images to obtain one frame of second noise-reduced image; and performing contrast enhancement processing on the current frame image according to the second noise-reduced image and a preset first guide parameter to obtain a target image, where the first guide parameter is a filtering parameter of the spatial filtering processing and controls its degree of smoothing.
A non-transitory computer-readable storage medium of an embodiment of the present application includes a computer program that, when executed by one or more processors, causes the one or more processors to perform a video processing method of: performing spatial filtering processing on the current frame image and the previous frame image respectively to obtain a plurality of corresponding frames of first noise-reduced images; performing temporal filtering processing on the plurality of frames of first noise-reduced images to obtain one frame of second noise-reduced image; and performing contrast enhancement processing on the current frame image according to the second noise-reduced image and a preset first guide parameter to obtain a target image, where the first guide parameter is a filtering parameter of the spatial filtering processing and controls its degree of smoothing.
In the video processing method, the video processing device, the terminal, and the non-volatile computer-readable storage medium, the current frame image and the previous frame image are each subjected to spatial filtering processing, which removes large-particle noise while retaining edge and detail information, yielding one frame of first noise-reduced image corresponding to the current frame image and one corresponding to each previous frame image. Temporal filtering processing is then applied to the multiple frames of first noise-reduced images: multi-frame fusion removes inter-frame random noise to produce one frame of second noise-reduced image, while avoiding large-particle noise remaining in the second noise-reduced image or small-particle noise aggregating into large-particle noise within it. Finally, contrast enhancement processing is performed on the current frame image according to the second noise-reduced image and the preset first guide parameter to achieve the contrast enhancement effect.
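The three stages just summarized (per-frame spatial filtering, multi-frame temporal fusion, then guided contrast enhancement) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the adaptive spatial filter is stood in for by a plain box filter, the contrast-enhancement stage is omitted, and all helper names are hypothetical.

```python
def box_filter(img, radius=1):
    """Spatial noise-reduction stand-in: mean over a (2r+1) x (2r+1) window,
    with out-of-image positions counted as 0 and the divisor kept at (2r+1)^2
    (the zero-padding border rule described later in the text)."""
    h, w = len(img), len(img[0])
    n = (2 * radius + 1) ** 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        s += img[yy][xx]
            out[y][x] = s / n
    return out


def temporal_filter(frames):
    """Temporal (time-domain) filtering: per-pixel average over the spatially
    denoised frames, fusing them into one second noise-reduced image."""
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[y][x] for f in frames) / len(frames) for x in range(w)]
            for y in range(h)]


def process(current, previous_frames):
    # Step 01: spatial filtering of the current frame and each previous frame.
    denoised = [box_filter(f) for f in [current] + previous_frames]
    # Step 03: temporal filtering of the first noise-reduced images.
    fused = temporal_filter(denoised)
    # Step 05 (contrast enhancement of the current frame guided by `fused`
    # and the first guide parameter) is omitted in this sketch.
    return fused
```

For a flat 5 × 5 test frame the interior keeps its original value while the zero padding darkens the corners, which is why practical implementations often normalize by the number of in-range pixels instead.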
Additional aspects and advantages of embodiments of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of embodiments of the present application.
Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flow diagram of a video processing method according to some embodiments of the present application;
FIG. 2 is a schematic block diagram of a video processing apparatus according to some embodiments of the present application;
FIG. 3 is a schematic block diagram of a terminal according to some embodiments of the present application;
FIG. 4 is a flow diagram of a video processing method according to some embodiments of the present application;
FIG. 5 is a diagram illustrating a mean filtering process performed on a current frame image and a reference image in a video processing method according to some embodiments of the present disclosure;
FIGS. 6-7 are flow diagrams of video processing methods according to some embodiments of the present application;
FIG. 8 is a schematic diagram of a first filtered image and a second filtered image in a video processing method according to some embodiments of the present application;
FIG. 9 is a diagram illustrating a first parameter matrix and a second parameter matrix in a video processing method according to some embodiments of the present application;
FIG. 10 is a flow diagram of a video processing method of some embodiments of the present application;
FIG. 11 is a schematic diagram illustrating a first denoising matrix obtained in a video processing method according to some embodiments of the present application;
FIG. 12 is a schematic diagram illustrating a method for obtaining a second noise reduction matrix according to some embodiments of the present disclosure;
FIGS. 13-15 are flow diagrams of video processing methods according to some embodiments of the present application;
FIG. 16 is a schematic diagram illustrating a third denoising matrix obtained by a video processing method according to some embodiments of the present application;
FIG. 17 is a schematic diagram illustrating a fourth denoising matrix obtained by a video processing method according to some embodiments of the present application;
FIGS. 18-19 are flow diagrams of video processing methods of some embodiments of the present application;
FIG. 20 is a schematic diagram of a connection between a non-volatile computer readable storage medium and a processor according to some embodiments of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below by referring to the drawings are exemplary only for the purpose of explaining the embodiments of the present application, and are not to be construed as limiting the embodiments of the present application.
Referring to fig. 1, the present application provides a video processing method, where a video includes a current frame image and at least one previous frame image, the video processing method includes:
01: respectively performing spatial filtering processing on the current frame image and the previous frame image to obtain a plurality of corresponding first noise reduction images;
03: performing time domain filtering processing on a plurality of frames of first noise reduction images to obtain a frame of second noise reduction images; and
05: and performing contrast enhancement processing on the current frame image according to the second noise reduction image and a preset first guide parameter to obtain a target image, wherein the first guide parameter is a filtering parameter of spatial filtering processing and is used for controlling the smoothness of the spatial filtering processing.
Referring to fig. 2, the present application further provides a video processing apparatus 10, where the video processing apparatus 10 includes a first filtering module 11, a second filtering module 13, and a contrast enhancing module 15. The first filtering module 11 is used to perform the method in 01, the second filtering module 13 is used to perform the method in 03, and the contrast enhancement module 15 is used to perform the method in 05. That is, the first filtering module 11 is configured to perform spatial filtering processing on the current frame image and the previous frame image respectively to obtain multiple corresponding frames of first noise-reduced images. The second filtering module 13 is configured to perform a temporal filtering process on multiple frames of the first noise-reduced image to obtain a frame of the second noise-reduced image. The contrast enhancement module 15 is configured to perform contrast enhancement processing on the current frame image according to the second noise reduction image and a preset first guide parameter to obtain a target image, where the first guide parameter is a filtering parameter of spatial filtering processing and is used to control a smoothness of the spatial filtering processing.
Referring to fig. 3, the present application also provides a terminal 100, where the terminal 100 includes one or more processors 30, a memory 50, and one or more programs. Wherein one or more programs are stored in the memory 50 and executed by the one or more processors 30, the programs including instructions for performing the video processing methods of embodiments of the present application. That is, when one or more processors 30 execute a program, the processors 30 may implement the methods in 01, 03, and 05. That is, the one or more processors 30 are operable to perform: respectively performing spatial filtering processing on the current frame image and the previous frame image to obtain a plurality of corresponding first noise reduction images; performing time domain filtering processing on a plurality of frames of first noise reduction images to obtain a frame of second noise reduction images; and performing contrast enhancement processing on the current frame image according to the second noise reduction image and a preset first guide parameter to obtain a target image, wherein the first guide parameter is a filtering parameter of spatial filtering processing and is used for controlling the smoothness of the spatial filtering processing.
Specifically, the terminal 100 may include, but is not limited to, a mobile phone, a notebook computer, a smart television, a tablet computer, a smart watch, or a computer. The video processing apparatus 10 may be a set of functional modules integrated in the terminal 100. This application takes the terminal 100 being a mobile phone as an example; the cases where the terminal 100 is another type of device are similar and will not be described in detail.
In the video processing method, the video processing apparatus 10, and the terminal 100 of the present application, spatial filtering processing is first applied to the current frame image and the previous frame image separately, which removes large-particle noise while retaining edge and detail information, yielding one frame of first noise-reduced image corresponding to the current frame image and one corresponding to each previous frame image. Temporal filtering processing is then applied to the multiple frames of first noise-reduced images: multi-frame fusion removes inter-frame random noise to produce one frame of second noise-reduced image, and avoids large-particle noise remaining in the second noise-reduced image or small-particle noise aggregating into large-particle noise within it. Finally, contrast enhancement processing is performed on the current frame image according to the second noise-reduced image and the preset first guide parameter to achieve the contrast enhancement effect.
For example, when there are two previous frame images, the first filtering module 11 or the processor 30 performs spatial filtering processing on the current frame image and each of the two previous frame images to obtain three first noise-reduced images in one-to-one correspondence with them. Since the spatial filtering removes the large-particle noise from each frame, the second filtering module can then combine the three first noise-reduced images to remove small-particle noise, maximally removing the noise in the current frame image (both large-particle noise and large-particle noise formed by the aggregation of small-particle noise). Alternatively, when there is one previous frame image, the first filtering module 11 or the processor 30 performs spatial filtering processing on the current frame image and the previous frame image to obtain two first noise-reduced images (in one-to-one correspondence with the two frames), so that the second filtering module 13 or the processor 30 can fuse the two first noise-reduced images to eliminate the noise in the current frame image while effectively reducing the amount of computation. It is understood that there may be more previous frame images, e.g., 3 or 4 frames; the number of first noise-reduced images grows accordingly, one per input frame.
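One plausible realization of the multi-frame fusion just described — an assumption for illustration, since the text does not give the fusion formula — is a per-pixel weighted average that weights the current frame's denoised image most heavily:

```python
def fuse_frames(denoised, weights):
    """Per-pixel weighted average of the first noise-reduced images.
    denoised[0] is the current frame's denoised image; weights are
    normalized internally, so only their ratios matter."""
    total = sum(weights)
    h, w = len(denoised[0]), len(denoised[0][0])
    return [[sum(wt * f[y][x] for wt, f in zip(weights, denoised)) / total
             for x in range(w)]
            for y in range(h)]


# Two previous frames, with the current frame weighted twice as heavily so
# that moving content is not over-smoothed by stale frames:
frames = [[[8.0, 8.0]], [[4.0, 4.0]], [[4.0, 4.0]]]
fused = fuse_frames(frames, [2.0, 1.0, 1.0])
# fused[0][0] = (2*8 + 1*4 + 1*4) / 4 = 6.0
```

Equal weights reduce this to a plain average; motion-compensated schemes would additionally align the frames before averaging, which this sketch does not attempt.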
Referring to fig. 4, in some embodiments, the spatial filtering processing of 01, applied to the current frame image to obtain the first noise-reduced image corresponding to the current frame image, includes:
011: acquiring a reference image according to the current frame image, wherein the resolution of the reference image is the same as that of the current frame image;
012: respectively carrying out mean value filtering processing on the current frame image and the reference image to obtain a first filtering image corresponding to the current frame image and a second filtering image corresponding to the reference image;
013: acquiring an intermediate result corresponding to the current frame image according to the first filtering image and the second filtering image, wherein the intermediate result is data generated in the process of performing spatial filtering processing on the current frame image;
014: acquiring a noise reduction parameter of the current frame image according to the intermediate result, the first filtering image and the second filtering image; and
015: performing spatial filtering processing on the current frame image according to the noise reduction parameters to obtain the first noise-reduced image.
Referring to fig. 2 and 3, the first filtering module 11 and the processor 30 can be further configured to perform the methods of 011, 012, 013, 014, and 015. That is, both the first filtering module 11 and the processor 30 can also be used to: acquiring a reference image according to the current frame image, wherein the resolution of the reference image is the same as that of the current frame image; respectively carrying out mean value filtering processing on the current frame image and the reference image to obtain a first filtering image corresponding to the current frame image and a second filtering image corresponding to the reference image; acquiring an intermediate result corresponding to the current frame image according to the first filtering image and the second filtering image, wherein the intermediate result is data generated in the process of performing spatial filtering processing on the current frame image; acquiring a noise reduction parameter of the current frame image according to the intermediate result, the first filtering image and the second filtering image; and performing spatial filtering processing on the current frame image according to the noise reduction parameters to obtain a first noise reduction image.
It should be noted that the first filtering module 11 or the processor 30 performs spatial filtering processing on the current frame image by executing the methods of 011, 012, 013, 014 and 015 to obtain a first noise-reduced image corresponding to the current frame image. Similarly, the first filtering module 11 or the processor 30 performs spatial filtering processing on the previous frame image to obtain a first noise-reduced image corresponding to the previous frame image, and the first noise-reduced image is also obtained by substantially the same method as that of 011, 012, 013, 014, and 015. That is, the spatial filtering processing is performed on the previous frame image to obtain a first noise reduction image corresponding to the previous frame image, including:
acquiring a reference image according to the previous frame image, wherein the resolution of the reference image is the same as that of the previous frame image;
respectively carrying out mean value filtering processing on the previous frame image and the reference image to obtain a first filtering image corresponding to the previous frame image and a second filtering image corresponding to the reference image;
acquiring an intermediate result corresponding to the previous frame image according to the first filtering image and the second filtering image, wherein the intermediate result is data generated in the process of performing spatial filtering processing on the previous frame image;
acquiring a noise reduction parameter of a previous frame image according to the intermediate result, the first filtering image and the second filtering image; and
performing spatial filtering processing on the previous frame image according to the noise reduction parameters to obtain the first noise-reduced image.
In the present application, only the specific process of performing spatial filtering processing on the current frame image is described in detail, and the process of performing spatial filtering processing on the previous frame image is the same as the process of performing spatial filtering processing on the current frame image, and a discussion thereof is omitted.
In the method 011, when the first filtering module 11 or the processor 30 obtains the reference image from the current frame image, the current frame image itself can be used directly as the reference image, which reduces the amount of calculation in the spatial filtering process and yields clear edges when filtering is performed according to the reference image and the current frame image. Alternatively, an image of a type corresponding to the image type of the current frame image may be acquired as the reference image, so that noise in the current frame image can be removed more accurately; for example, when the current frame image is of a face type, the reference image may be an image containing face information. The edge information of the current frame image, or a binarized image obtained from it, may also serve as the reference image. The specific acquisition method can be chosen according to the application scenario of the video, provided only that the reference image has the same resolution as the current frame image. The following describes the spatial filtering processing of the current frame image taking the reference image to be the current frame image itself as an example.
In the method 012, the first filtering module 11 or the processor 30 performs mean filtering on the current frame image and the reference image respectively to obtain a first filtered image corresponding to the current frame image and a second filtered image corresponding to the reference image. Specifically, the size of the filtering window used in the mean filtering process may be 5 × 5, but may also be other sizes, such as 3 × 3 and 7 × 7, and is not limited herein.
Referring to fig. 5, in one embodiment, the current frame image is denoted by a and the reference image by B, and a 5 × 5 filtering window (shown by the dotted line in fig. 5) is applied to both to perform mean filtering, obtaining a first filtered image a1 corresponding to the current frame image a and a second filtered image B1 corresponding to the reference image B. (If there are multiple frames of reference image B, there are correspondingly multiple frames of B1; here one frame of each is taken as an example.) For instance, when mean filtering the first pixel in the current frame image a, the pixel values of all pixels within the 5 × 5 window around it are summed and the total is averaged to give the smoothed value of that pixel. If the value of the first pixel of a is a00, then the value of the first pixel of a1 is [(a00 + a01 + a02 + a03 + a04) + (a10 + a11 + a12 + a13 + a14) + (a20 + a21 + a22 + a23 + a24) + (a30 + a31 + a32 + a33 + a34) + (a40 + a41 + a42 + a43 + a44)]/25. When mean filtering the second pixel point a12 of the current frame image a, the value of the corresponding pixel in a1 is the average of the 25 pixel values within the window a01, … a05, a11, … a15, a21, … a25, a31, … a35, a41, … a45. By analogy, all pixels of the current frame image a are traversed to obtain the pixel values at the corresponding positions of a1. When fewer than 25 pixels of the window fall inside the image near the edges, the missing pixels can be handled as follows.
Referring to fig. 5, for example, when performing mean filtering on the pixel with value a77, the window positions falling outside the image are taken as 0, and the 25 window values (a77, a78, a79, 0, a87, a88, a89, 0, a97, a98, a99, 0, and so on) are averaged; the obtained average is used as the pixel value at the corresponding position in the first filtered image a1.
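The border rule just described (missing window positions counted as 0 while the divisor stays 25) can be reproduced with a short sketch; the function name and test image here are illustrative only.

```python
def mean_filter(img, radius=2):
    """5 x 5 mean filter (radius 2) with zero padding: out-of-image window
    positions contribute 0 and the divisor stays (2r+1)^2 = 25, as in the
    a77 example above."""
    h, w = len(img), len(img[0])
    n = (2 * radius + 1) ** 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        s += img[yy][xx]
            out[y][x] = s / n
    return out


# An interior pixel averages 25 real values; a corner pixel sees only the
# 3 x 3 in-image block, so the zero padding pulls its output toward 0:
img = [[25.0] * 10 for _ in range(10)]
filtered = mean_filter(img)
# filtered[5][5] == 25.0, filtered[0][0] == 9 * 25 / 25 == 9.0
```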
In other embodiments, when the corresponding first filtered image and second filtered image are obtained from the current frame image and the reference image, median filtering may be performed on the current frame image and the reference image instead. Specifically, when median filtering is performed on the current pixel, all pixel values in the filtering window (5 × 5) are sorted, and the median of these values is used as the filtered pixel value of the current pixel.
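A corresponding sketch of the median filtering variant; the window size and zero-padded borders are assumptions carried over from the mean filtering example:

```python
import numpy as np

def median_filter(img, k=5):
    """Median filtering variant: sort the k*k pixel values in the window
    and take the median as the filtered value. Zero padding at the
    borders is an assumption."""
    img = np.asarray(img, dtype=float)
    r = k // 2
    padded = np.pad(img, r, mode="constant")
    h, w = img.shape
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(padded[y:y + k, x:x + k])
    return out
```

Unlike the mean, the median discards isolated outliers entirely: a single bright noise pixel in a flat region is replaced by the surrounding value rather than being spread over the window.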
In the method 013, the first filtering module 11 or the processor 30 obtains an intermediate result of the current frame image according to the pixel values in the first filtered image and the pixel values in the second filtered image. Specifically, the intermediate result accounts for each pixel point in the current frame image, so the first filtering module 11 or the processor 30 can perform adaptive spatial filtering on each pixel point in the current frame image according to the intermediate result; the pixel information in the current frame image can thus be analyzed more accurately, maximizing the noise reduction and edge preserving effects.
In the methods 014 and 015, the intermediate result is combined with the first filtered image and the second filtered image to obtain the noise reduction parameter of the current frame image. Since the intermediate result accounts for each pixel point in the current frame image, the noise reduction parameter obtained from it also corresponds to each pixel point in the current frame image, so that the first filtering module 11 or the processor 30 can more accurately distinguish detail information from flat regions of the current frame image. This maximizes the noise reduction and edge preserving effects, widens the adjustable range of the spatial filtering processing, and allows targeted filtering adapted to each pixel point.
Referring to fig. 6, in some embodiments, 012: the method for performing mean filtering processing on a current frame image and a reference image respectively to obtain a first filtered image corresponding to the current frame image and a second filtered image corresponding to the reference image comprises the following steps:
0121: performing downsampling processing on the current frame image and the reference image to obtain a first sampling image corresponding to the current frame image and a second sampling image corresponding to the reference image;
0122: and respectively carrying out mean value filtering processing on the first sampling image and the second sampling image so as to obtain a first filtering image corresponding to the current frame image and a second filtering image corresponding to the reference image.
Referring to fig. 2 and fig. 3, the first filtering module 11 and the processor 30 may also be used to perform the methods of 0121 and 0122. That is, both the first filtering module 11 and the processor 30 can also be used to: performing downsampling processing on the current frame image and the reference image to obtain a first sampling image corresponding to the current frame image and a second sampling image corresponding to the reference image; and respectively carrying out mean value filtering processing on the first sampling image and the second sampling image so as to obtain a first filtering image corresponding to the current frame image and a second filtering image corresponding to the reference image.
Specifically, in one embodiment, the first filtering module 11 or the processor 30 may directly obtain the corresponding first filtered image and the second filtered image according to the current frame image and the reference image. In this embodiment, in the process of acquiring the corresponding first filtered image and the second filtered image according to the current frame image and the reference image, the first filtering module 11 or the processor 30 may perform downsampling on the current frame image and the reference image, and then perform mean filtering on the first sampled image and the second sampled image respectively to obtain the first filtered image and the second filtered image, so as to reduce the amount of calculation of data in the methods 013 and 014. In one example, when the current frame image and the reference image are downsampled, the downsampling parameter may be 2 × 2, that is, when the current frame image is downsampled, the resolution of the first sample image obtained is 1/2 × 1/2 of the current frame image, and when the reference image is downsampled, the resolution of the second sample image obtained is 1/2 × 1/2 of the reference image. The first sampling image and the second sampling image can be obtained by nearest neighbor interpolation or bilinear interpolation in the sampling process. For example, when the nearest neighbor interpolation is used for sampling in the downsampling process, assuming that the resolution of the current frame image a is 10 × 10 (as shown in fig. 5), and the downsampling parameter may be 2 × 2, the resolution of the first sample image is 5 × 5, the pixel value at the pixel point (cx, cy) in the first sample image is C (cx, cy), and the value ranges of cx and cy are [0, 4 ]. Where C (cx, cy) denotes a pixel value at the cx +1 th row and the cy +1 th column in the first sample image. 
For example, when cx is 0 and cy is 0, C(0, 0) denotes the pixel value at the 1st row and the 1st column in the first sample image. According to the nearest neighbor interpolation method, the pixel value at (cx, cy) in the first sample image is the pixel value at (cx × (height of current frame image A/height of first sample image), cy × (width of current frame image A/width of first sample image)) in the current frame image A. The height and width of the current frame image A are both 10 and those of the first sample image are both 5, so the position (cx × (10/5), cy × (10/5)) in the current frame image A is obtained by rounding. For example, to calculate the pixel value at (0, 2) in the first sample image (i.e., C(0, 2)), the pixel value at (0 × 10/5, 2 × 10/5) = (0, 4) in the current frame image A is taken, i.e., C(0, 2) = A(0, 4), and A(0, 4) is a04 in fig. 5. The down-sampling process of the reference image is the same as that of the current frame image, and a description thereof is not repeated.
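The nearest-neighbour 2 × 2 down-sampling step can be sketched as below (the function name is illustrative):

```python
import numpy as np

def downsample_nearest(img, factor=2):
    """Nearest-neighbour down-sampling: the output pixel (cx, cy) takes
    the input value at (cx * factor, cy * factor), matching the example
    C(0, 2) = A(0, 4) above."""
    return np.asarray(img)[::factor, ::factor].copy()
```

With factor 2 each dimension is halved, so a 10 × 10 frame becomes the 5 × 5 first sample image described in the text.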
In the method 0122, the process of performing mean filtering on the first sample image and the second sample image is the same as the process of performing mean filtering on the current frame image and the reference image in the method 012, and is not repeated here. The size of the filtering window can be adjusted according to the resolutions of the first sample image and the second sample image. For example, when the resolutions of the first and second sample images are m × m, where m > 1 and m is a positive integer, the information of a plurality of surrounding pixel points should be integrated when filtering the current pixel so that the detail information in the first and second filtered images can be analyzed more accurately; the filtering window may then be adjusted to 1/n of the resolution of the first sample image, where 1 < n < m.
Referring to fig. 7, in some embodiments, 013: obtaining an intermediate result corresponding to the current frame image according to the first filtering image and the second filtering image, including:
0131 (step one): covering a first area containing current pixels in the first filtering image by adopting a preset window, and covering a second area containing current pixels in the second filtering image by adopting the preset window, wherein the first area corresponds to the second area in position, and the current pixels in the first area correspond to the current pixels in the second area in position;
0132 (step two): acquiring the variance of the pixel values of all the pixel points in the first area to serve as a first parameter of the current pixel in the first area;
0133 (step three): calculating the covariance between the first region and the second region according to the pixel values of all the pixel points in the first region and the pixel values of all the pixel points in the second region, and taking the covariance as a second parameter of the current pixel in the first region;
0134 (step four): and moving the predetermined window to change the positions of the first area and the second area, taking other pixel points in the first filtering image as current pixel points in the updated first area and taking other pixel points in the second filtering image as current pixel points in the updated second area, repeatedly executing the second step to the fourth step, obtaining a first parameter and a second parameter of each pixel point in the first filtering image, combining a plurality of first parameters to form a first parameter matrix, and combining a plurality of second parameters to form a second parameter matrix.
Referring to fig. 2 and 3, the first filtering module 11 and the processor 30 can also be used to perform the methods of 0131, 0132, 0133 and 0134. That is, both the first filtering module 11 and the processor 30 can also be used to perform: covering a first area containing current pixels in the first filtering image by adopting a preset window, and covering a second area containing current pixels in the second filtering image by adopting the preset window, wherein the first area corresponds to the second area in position, and the current pixels in the first area correspond to the current pixels in the second area in position; acquiring the variance of the pixel values of all the pixel points in the first area to serve as a first parameter of the current pixel in the first area; calculating the covariance between the first region and the second region according to the pixel values of all the pixel points in the first region and the pixel values of all the pixel points in the second region, and taking the covariance as a second parameter of the current pixel in the first region; and moving the predetermined window to change the positions of the first area and the second area, and repeatedly executing the second step to the fourth step by taking other pixel points in the first filtered image as current pixel points in the updated first area and taking other pixel points in the second filtered image as current pixel points in the updated second area, so as to obtain a first parameter and a second parameter of each pixel point in the first filtered image, combine a plurality of first parameters to form a first parameter matrix, and combine a plurality of second parameters to form a second parameter matrix.
Referring to fig. 8, the intermediate result includes a first parameter and a second parameter for each pixel and is represented by a first parameter matrix and a second parameter matrix. The predetermined window size can be set according to the resolution of the first filtered image and the resolution of the second filtered image; a predetermined window size of 3 × 3 is used as an example. Assume that the resolutions of the first filtered image and the second filtered image obtained after performing the method 012 are both 5 × 5, the first filtered image is denoted A2 and the second filtered image B2. The first filtered image A2 includes 5 rows and 5 columns of pixel values, denoted A2(0, 0), A2(0, 1), …, A2(0, 4), …, A2(4, 0), A2(4, 1), …, A2(4, 4), where A2(0, 0) represents the pixel value of the 1st row and 1st column in the first filtered image A2, A2(0, 1) represents the pixel value of the 1st row and 2nd column, and A2(4, 4) represents the pixel value of the 5th row and 5th column. The second filtered image B2 includes 5 rows and 5 columns of pixel values, denoted B2(0, 0), B2(0, 1), …, B2(0, 4), …, B2(4, 0), B2(4, 1), …, B2(4, 4). If the current pixel is the pixel point at (0, 0), the first region covered by the predetermined window may be the region composed of A2(0, 0), A2(0, 1), A2(0, 2), A2(1, 0), A2(1, 1), A2(1, 2), A2(2, 0), A2(2, 1), and A2(2, 2), and the second region covered by the predetermined window may be the region composed of B2(0, 0), B2(0, 1), B2(0, 2), B2(1, 0), B2(1, 1), B2(1, 2), B2(2, 0), B2(2, 1), and B2(2, 2). As shown in fig. 8, the first position in the first region covers the current pixel. In other embodiments, the current pixel may be covered at any position in the first region.
Referring to fig. 9, in the method 0132, when the current pixel is A2(0, 0), the first filtering module 11 or the processor 30 obtains the mean of the pixel values of all pixel points in the first region, that is, E(A00) = 1/9 × [A2(0, 0) + A2(0, 1) + A2(0, 2) + A2(1, 0) + A2(1, 1) + A2(1, 2) + A2(2, 0) + A2(2, 1) + A2(2, 2)], where A00 in E(A00) indicates the pixel point at (0, 0) (i.e., the first row and first column) in the first filtered image. The variance of the pixel values of all pixel points in the first region is var(A00) = 1/9 × [(A2(0, 0) - E(A00))² + (A2(0, 1) - E(A00))² + (A2(0, 2) - E(A00))² + (A2(1, 0) - E(A00))² + (A2(1, 1) - E(A00))² + (A2(1, 2) - E(A00))² + (A2(2, 0) - E(A00))² + (A2(2, 1) - E(A00))² + (A2(2, 2) - E(A00))²], and the first parameter of the current pixel A2(0, 0) is var(A00).
In the method 0133, the first filtering module 11 or the processor 30 obtains the covariance between the first region and the second region. Similarly, the mean E(B00) of the pixel values of all pixel points in the second region is obtained first, by the same calculation as the mean of the pixel values of all pixel points in the first region. The covariance between the first region and the second region is cov(A00) = [(A2(0, 0) - E(A00)) × (B2(0, 0) - E(B00)) + (A2(0, 1) - E(A00)) × (B2(0, 1) - E(B00)) + … + (A2(2, 2) - E(A00)) × (B2(2, 2) - E(B00))] × 1/(9 - 1), and the second parameter of the current pixel A2(0, 0) in the first region is cov(A00).
In the method 0134, after the first parameter and the second parameter of the current pixel in the first filtered image A2 are calculated, the first filtering module 11 or the processor 30 moves the predetermined window so that the first position in the moved first region covers the updated current pixel and the first position in the moved second region covers the updated current pixel. For example, after the first filtering module 11 or the processor 30 calculates the first parameter and the second parameter of the pixel point at A2(0, 0) in the first filtered image A2, the first parameter and the second parameter of the pixel point at A2(0, 1) are calculated next: the moved first region is the region composed of A2(0, 1), A2(0, 2), A2(0, 3), A2(1, 1), A2(1, 2), A2(1, 3), A2(2, 1), A2(2, 2), A2(2, 3), and the moved second region is the region composed of B2(0, 1), B2(0, 2), B2(0, 3), B2(1, 1), B2(1, 2), B2(1, 3), B2(2, 1), B2(2, 2), B2(2, 3); the first parameter var(A01) and the second parameter cov(A01) of the pixel point at A2(0, 1) are then obtained by the same method as for A2(0, 0). This is repeated until the first parameters and the second parameters of all pixel points in the first filtered image A2 are calculated. The first parameters of all pixel points in the first filtered image A2 form the first parameter matrix (as shown in fig. 9), and the second parameters of all pixel points in the first filtered image form the second parameter matrix (as shown in fig. 9).
Note that, when the current pixel is a2(0, 3), the pixel value of the third column in the first region is missing, and a column of pixel values on the left side of the current pixel may be mapped to the third column in the first region, that is, all the pixel values in the first region are a2(0, 3), a2(0, 4), a2(0, 2), a2(1, 3), a2(1, 4), a2(1, 2), a2(2, 3), a2(2, 4), and a2(2, 2), respectively. Alternatively, the pixel values of the third column in the first region are all replaced with 0, that is, all the pixel values in the first region are a2(0, 3), a2(0, 4), 0, a2(1, 3), a2(1, 4), 0, a2(2, 3), a2(2, 4), 0, respectively.
When the current pixel is A2(3, 3), the pixel values of the third column in the first region are missing, and the column of pixel values to the left of the current pixel may be mapped to the third column; the pixel values of the third row are also missing, and the row of pixel values above the current pixel may be mapped to the third row. Pixel values still missing in the first region after mapping may be replaced with 0, i.e., all the pixel values in the first region are A2(3, 3), A2(3, 4), A2(3, 2), A2(4, 3), A2(4, 4), A2(4, 2), A2(2, 3), A2(2, 4), and 0, respectively.
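Steps 0131 to 0134 can be sketched as follows. Zero padding at the borders is used here as a simplification of the column/row mapping just described; anchoring the window so that its first position covers the current pixel follows fig. 8.

```python
import numpy as np

def window_stats(A, B, k=3):
    """Slide a k x k window over the filtered images A and B; for each
    pixel record the window variance of A (first parameter) and the A-B
    window covariance (second parameter). Variance uses the 1/9 divisor
    and covariance the 1/(9-1) divisor from the worked example."""
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    h, w = A.shape
    pa = np.zeros((h + k - 1, w + k - 1))
    pa[:h, :w] = A          # pad bottom/right: window anchored at current pixel
    pb = np.zeros((h + k - 1, w + k - 1))
    pb[:h, :w] = B
    var = np.empty((h, w))
    cov = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            wa = pa[y:y + k, x:x + k].ravel()
            wb = pb[y:y + k, x:x + k].ravel()
            var[y, x] = wa.var()  # population variance, divisor k*k
            cov[y, x] = ((wa - wa.mean()) * (wb - wb.mean())).sum() / (k * k - 1)
    return var, cov
```

When A and B are identical, the covariance reduces to the sample variance of the window (divisor 8) while the first parameter keeps the population divisor 9, mirroring the two different divisors in the formulas above.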
Referring to fig. 10, in some embodiments, the noise reduction parameters are represented by a first noise reduction matrix and a second noise reduction matrix, the first noise reduction matrix includes first noise reduction parameters corresponding to all pixel points in the current frame image, the second noise reduction matrix includes second noise reduction parameters corresponding to all pixel points in the current frame image, and the intermediate result is represented by the first parameter matrix and the second parameter matrix. 014: obtaining a noise reduction parameter of the current frame image according to a preset first guide parameter, the intermediate result, the first filtered image and the second filtered image, comprising:
0141: acquiring a first noise reduction matrix according to the first parameter matrix, the second parameter matrix and the first guide parameter; and
0142: and acquiring a second noise reduction matrix according to the first noise reduction matrix, the first filtering image and the second filtering image.
Referring to fig. 2 and 3, the first filtering module 11 and the processor 30 can be also used to perform the methods in 0141 and 0142. That is, both the first filtering module 11 and the processor 30 can also be used to: acquiring a first noise reduction matrix according to the first parameter matrix, the second parameter matrix and the first guide parameter; and acquiring a second noise reduction matrix according to the first noise reduction matrix, the first filtering image and the second filtering image.
In the method 0141, for the whole current frame image, the first filtering module 11 or the processor 30 sequentially obtains the first noise reduction parameters of each pixel point according to the first parameter in the first parameter matrix, the second parameter in the second parameter matrix, and the first guide parameter, and the plurality of first noise reduction parameters form a first noise reduction matrix. The first guiding parameter is a filtering parameter of spatial filtering processing and is used for controlling the smoothness degree of the spatial filtering processing.
Referring to fig. 11, specifically, the first filtering module 11 or the processor 30 adds the first guide parameter to each first parameter in the first parameter matrix to obtain a first intermediate matrix; if the first guide parameter is denoted eps, the values from the first position to the last position in the first intermediate matrix are var(A00) + eps, var(A01) + eps, var(A02) + eps, …, var(A43) + eps, and var(A44) + eps, respectively. The first filtering module 11 or the processor 30 then divides each second parameter in the second parameter matrix by the value at the corresponding position in the first intermediate matrix to obtain the first noise reduction matrix; that is, the first noise reduction parameters from the first position to the last position in the first noise reduction matrix are cov(A00)/(var(A00) + eps), cov(A01)/(var(A01) + eps), …, cov(A43)/(var(A43) + eps), and cov(A44)/(var(A44) + eps).
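Method 0141 amounts to an elementwise computation over the two parameter matrices; a minimal sketch:

```python
import numpy as np

def first_noise_reduction_matrix(cov, var, eps):
    """Add the first guide parameter eps to every first parameter
    (variance), then divide each second parameter (covariance) by the
    corresponding value: a = cov / (var + eps), per pixel."""
    return np.asarray(cov, dtype=float) / (np.asarray(var, dtype=float) + eps)
```

Where the window variance is large (around edges), cov is close to var and the parameter approaches 1, preserving detail; in flat regions the parameter is small, giving stronger smoothing, which is the edge preserving behaviour described above.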
In the method 0142, for the whole current frame image, the first filtering module 11 or the processor 30 sequentially obtains the second noise reduction parameters of the pixel points according to the first noise reduction parameters in the first noise reduction matrix, the pixel values in the first filtered image and the pixel values in the second filtered image, and the plurality of second noise reduction parameters form a second noise reduction matrix.
Specifically, the first filtering module 11 or the processor 30 multiplies the pixel values of the pixel points at the corresponding positions in the first filtered image by the first noise reduction parameters in the first noise reduction matrix to obtain the second intermediate matrix, and subtracts the values in the second intermediate matrix from the pixel values of the corresponding pixel points in the second filtered image to obtain the second noise reduction matrix.
Referring to fig. 12, in an embodiment, when the first filtering module 11 or the processor 30 obtains the second noise reduction parameter of the first pixel point in the current frame image, it multiplies the pixel value at the first position in the first filtered image A2 by the first noise reduction parameter in the first noise reduction matrix to obtain the value at the first position in the second intermediate matrix, and then subtracts this first value in the second intermediate matrix from the pixel value of the first pixel point in the second filtered image B2 to obtain the second noise reduction parameter of the first pixel point. Assuming that the pixels in the first filtered image A2 (resolution 5 × 5) are denoted A2(0, 0), …, A2(0, 4), …, A2(4, 0), …, A2(4, 4), and the pixels in the second filtered image B2 (resolution 5 × 5) are denoted B2(0, 0), …, B2(0, 4), …, B2(4, 0), …, B2(4, 4), the second noise reduction parameter corresponding to the first pixel point in the second noise reduction matrix is B2(0, 0) - (cov(A00)/(var(A00) + eps)) × A2(0, 0), where (cov(A00)/(var(A00) + eps)) × A2(0, 0) is the value at the first position (0, 0) in the second intermediate matrix.
In one embodiment, the first filtering module 11 or the processor 30 performs noise reduction processing on the current frame image according to the first noise reduction parameters in the first noise reduction matrix and the second noise reduction parameters in the second noise reduction matrix. Specifically, when the first filtered image is obtained from the down-sampled first sample image, the first filtered image may be spatially filtered using the first noise reduction parameters and the second noise reduction parameters; the pixel values of the pixel points in the spatially filtered first filtered image are then (cov(A00)/(var(A00) + eps)) × A2(0, 0) + (B2(0, 0) - (cov(A00)/(var(A00) + eps)) × A2(0, 0)), (cov(A01)/(var(A01) + eps)) × A2(0, 1) + (B2(0, 1) - (cov(A01)/(var(A01) + eps)) × A2(0, 1)), …, (cov(A44)/(var(A44) + eps)) × A2(4, 4) + (B2(4, 4) - (cov(A44)/(var(A44) + eps)) × A2(4, 4)). Finally, interpolation processing is performed on the spatially filtered first filtered image to obtain a first noise reduction image with the same resolution as the current frame image. When the first filtered image is obtained by directly performing mean filtering on the current frame image, the first noise reduction parameters in the first noise reduction matrix correspond one-to-one to the pixel points in the current frame image, as do the second noise reduction parameters in the second noise reduction matrix; the pixel value of each pixel point in the current frame image can then be directly multiplied by the first noise reduction parameter at the corresponding position and added to the second noise reduction parameter at the corresponding position to obtain the first noise reduction image.
In summary, compared with the NLM denoising process, the first denoising parameter in the first denoising matrix and the second denoising parameter in the second denoising matrix respectively perform adaptive denoising corresponding to each pixel point in the current frame image, so that the adjustable range of the first filtering module 11 or the processor 30 for performing spatial filtering is larger, and the image blur caused by smoothing of detail information in the current frame image can be avoided while the noise in the whole image is removed.
Referring to fig. 13, in some embodiments, 015: the spatial filtering processing is carried out on the current frame image according to the noise reduction parameters to obtain a first noise reduction image, and the method comprises the following steps:
0151: carrying out mean value filtering processing on the first noise reduction matrix;
0152: carrying out mean value filtering processing on the second noise reduction matrix; and
0153: and acquiring a first noise reduction image according to the processed first noise reduction matrix, the current frame image and the processed second noise reduction matrix.
Referring to fig. 2 and 3, the first filtering module 11 and the processor 30 can also be used to perform the methods of 0151, 0152 and 0153. That is, both the first filtering module 11 and the processor 30 can also be used to perform: carrying out mean value filtering processing on the first noise reduction matrix; carrying out mean value filtering processing on the second noise reduction matrix; and acquiring a first noise reduction image according to the processed first noise reduction matrix, the current frame image and the processed second noise reduction matrix.
Further, before performing spatial filtering on each pixel point in the current frame image by combining the first noise reduction parameter and the second noise reduction parameter, the first filtering module 11 or the processor 30 may further perform mean filtering on the first noise reduction matrix and the second noise reduction matrix, respectively, and the specific mean filtering principle is the same as that in the method 012, and is not described herein again. Under the condition that the first filtering image is obtained by directly carrying out mean value filtering on the current frame image, the pixel value of each pixel point in the current frame image can be directly multiplied by the processed first noise reduction parameter at the corresponding position respectively, and then the processed second noise reduction parameter at the corresponding position is added to obtain the first noise reduction image. The first filtering module 11 or the processor 30 performs spatial filtering processing on the current pixel by combining the first noise reduction parameter and the second noise reduction parameter corresponding to a plurality of surrounding pixel points (that is, other first noise reduction parameters and second noise reduction parameters within a predetermined window size are considered in the average filtering processing process), so as to perform denoising processing on the current pixel more comprehensively, so that the first noise reduction image can retain the detail information in the current frame image and can remove the small-particle noise in the current frame image, and the phenomenon that the small-particle noise is aggregated into large noise in the first noise reduction image is avoided.
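Methods 0141, 0142 and 0151 to 0153 taken together can be sketched as below. Note that without the mean filtering of methods 0151 and 0152, the combination a × A2 + b would reproduce B2 exactly; averaging the two noise reduction matrices over a window is what blends the noise reduction parameters of neighbouring pixel points, as described above. Averaging only in-image values at the borders is an assumption here.

```python
import numpy as np

def box_mean(m, k=3):
    """k x k mean filter using only in-image values at the borders."""
    h, w = m.shape
    r = k // 2
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            out[y, x] = m[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1].mean()
    return out

def first_noise_reduction_image(A2, B2, var, cov, eps, k=3):
    """Compute a = cov/(var+eps) and b = B2 - a*A2, mean-filter both
    noise reduction matrices, then combine them with the image to obtain
    the first noise reduction image."""
    a = cov / (var + eps)
    b = B2 - a * A2
    return box_mean(a, k) * A2 + box_mean(b, k)
```

On constant inputs the result equals the input, since the averaged parameters are constant and a × A2 + b collapses back to B2.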
Referring to fig. 14, in some embodiments, 014: obtaining a noise reduction parameter of the current frame image according to the intermediate result, the first filtering image and the second filtering image, comprising:
0143: acquiring a correction guide parameter according to a preset first guide parameter and an intermediate result; and
0144: and acquiring the noise reduction parameter of the current frame image according to the correction guide parameter, the intermediate result, the first filtering image and the second filtering image.
Referring to fig. 2 and 3, the first filtering module 11 and the processor 30 can be also used to perform the methods in 0143 and 0144. That is, both the first filtering module 11 and the processor 30 can also be used to: acquiring a correction guide parameter according to a preset first guide parameter and an intermediate result; and acquiring the noise reduction parameter of the current frame image according to the correction guide parameter, the intermediate result, the first filtering image and the second filtering image.
In some embodiments, the first filtering module 11 or the processor 30 may obtain the noise reduction parameters of the current frame image according to methods 0143 and 0144 in addition to obtaining the noise reduction parameters of the current frame image according to methods 0141 and 0142.
Specifically, the intermediate result is represented by the first parameter matrix and the second parameter matrix obtained by the methods 0131, 0132, 0133, and 0134. The first filtering module 11 or the processor 30 adjusts the first guide parameter according to the intermediate result of each pixel point in the current frame image, so that each pixel point in the current frame image has its own corrected guide parameter instead of a uniform first guide parameter, thereby further maximizing the noise reduction and edge preserving effects and ensuring the enhancement effect of the image.
Referring to fig. 15, in some embodiments, 0143: obtaining a correction guide parameter according to a preset first guide parameter and an intermediate result, comprising:
01431: carrying out transformation processing on the intermediate result to obtain an index value of the intermediate result, wherein the index value is an integer and is within a preset range;
01432: finding out data values in a corresponding sequence in a preset descending array according to the index values, wherein the number of the data values is the same as the number of integers in a preset range; and
01433: and acquiring a correction guide parameter according to the searched data value and the first guide parameter.
Referring to fig. 2 and 3, the first filtering module 11 and the processor 30 may also be used to perform the methods of 01431, 01432, and 01433. That is, both the first filtering module 11 and the processor 30 can also be used to: carrying out transformation processing on the intermediate result to obtain an index value of the intermediate result, wherein the index value is an integer and is within a preset range; finding out data values in a corresponding sequence in a preset descending array according to the index values, wherein the number of the data values is the same as the number of integers in a preset range; and acquiring a correction guide parameter according to the searched data value and the first guide parameter.
In one embodiment, the first filtering module 11 or the processor 30 transforms the first parameter in the intermediate result to obtain the index value. In another embodiment, the first filtering module 11 or the processor 30 transforms the second parameter in the intermediate result to obtain the index value. The index value is an integer within a preset range, and the upper limit of the preset range is related to the number of data values in the preset descending array. Each data value in the descending array is a parameter obtained from actual video scenes, and the data values are sorted in descending order. Because the data values are sorted in descending order, the larger the intermediate result is, the smaller the corresponding correction guide parameter is.
Specifically, the following describes the details of obtaining the correction guidance parameter by taking the example of the conversion process of the first parameter.
In some embodiments, in the method 01431, the quotient of the first parameter and the number of the data values may be rounded; when the rounded value is greater than the upper limit of the preset range, the upper limit of the preset range is used as the index value, and when the rounded value is less than or equal to the upper limit of the preset range, the rounded value is used as the index value. The rounding of the quotient of the first parameter and the number of the data values may be rounding to the nearest integer; or rounding down (taking the largest integer not greater than the quotient); or rounding up (taking the smallest integer not less than the quotient). In the method 01432, since the index value is within the preset range and the number of data values is the same as the number of integers in the preset range, the first filtering module 11 or the processor 30 can find the corresponding data value in the preset descending array according to the index value. Finally, the corrected guide parameter is obtained according to the found data value and the first guide parameter.
In one example, the descending array is ratio[10] = {10.0, 2.0, 1.5, 1.0, 0.8, 0.5, 0.3, 0.2, 0.125, 0.1}, its subscript starts at 0, and the preset range is [0, 9]. When the first filtering module 11 or the processor 30 obtains the correction guide parameter of the first pixel in the first filtered image, it rounds var(a00)/10; if var(a00)/10 equals 2.5, rounding 2.5 gives a rounded value of 3. Since the rounded value 3 is smaller than the upper limit 9 of the preset range, the index value is 3, the first filtering module 11 or the processor 30 looks up the data value with subscript 3 in the descending array according to the index value 3, that is, ratio[3] = 1.0, and the correction guide parameter is obtained by multiplying ratio[3] by the first guide parameter. As another example, when the first filtering module 11 or the processor 30 obtains the correction guide parameter of the third pixel in the first filtered image, it rounds var(a02)/10; if var(a02)/10 equals 11.6, the rounded value is 12. Since the rounded value 12 is greater than the upper limit 9 of the preset range, the index value is 9, the first filtering module 11 or the processor 30 looks up the data value with subscript 9 in the descending array according to the index value 9, that is, ratio[9] = 0.1, and the correction guide parameter is obtained by multiplying ratio[9] by the first guide parameter.
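The index computation and array lookup in the example above can be sketched in a few lines. This is an illustrative sketch only, not the patent's implementation: it assumes the example descending array, a round-half-up convention (matching the 2.5 → 3 example; Python's built-in `round` uses banker's rounding, so half-up is written out explicitly), and a hypothetical function name.

```python
# Illustrative sketch of methods 01431-01433 using the example descending array.
ratio = [10.0, 2.0, 1.5, 1.0, 0.8, 0.5, 0.3, 0.2, 0.125, 0.1]  # preset descending array

def correction_guide_parameter(var_a: float, eps1: float) -> float:
    # 01431: round the quotient of the first parameter and the number of data
    # values, half up so that e.g. 2.5 rounds to 3 as in the example
    rounded = int(var_a / len(ratio) + 0.5)
    # clamp to the upper limit of the preset range [0, 9]
    index = min(rounded, len(ratio) - 1)
    # 01432 + 01433: look up the data value and scale the first guide parameter
    return ratio[index] * eps1
```

With var(a00) = 25 (quotient 2.5) and a first guide parameter of 0.04, the index is 3 and the result is ratio[3] × 0.04 = 0.04; with var(a02) = 116 (quotient 11.6) the rounded value 12 is clamped to 9, giving ratio[9] × 0.04 = 0.004.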
Referring to fig. 15, in some embodiments, the noise reduction parameters are represented by a third noise reduction matrix and a fourth noise reduction matrix, the third noise reduction matrix includes the third noise reduction parameters corresponding to all the pixel points in the current frame image, and the fourth noise reduction matrix includes the fourth noise reduction parameters corresponding to all the pixel points in the current frame image. 0144: obtaining the noise reduction parameter of the current frame image according to the correction guide parameter, the intermediate result, the first filtering image and the second filtering image, and the method comprises the following steps:
01441: acquiring a third noise reduction matrix according to the first parameter matrix, the second parameter matrix and the correction guide parameters; and
01442: and acquiring a fourth noise reduction matrix according to the third noise reduction matrix, the first filtering image and the second filtering image.
Referring to fig. 2 and 3, the first filtering module 11 and the processor 30 can also be used to perform the methods in 01441 and 01442. That is, both the first filtering module 11 and the processor 30 are further configured to: acquire a third noise reduction matrix according to the first parameter matrix, the second parameter matrix and the correction guide parameters; and acquire a fourth noise reduction matrix according to the third noise reduction matrix, the first filtered image and the second filtered image.
Specifically, the first filtering module 11 or the processor 30 adds the corresponding correction guide parameter to each first parameter in the first parameter matrix to obtain a third intermediate matrix; if the plurality of correction guide parameters are respectively denoted as eps00, eps01, … …, and eps44 (as shown in fig. 16), the values in the third intermediate matrix are var(a00) + eps00, var(a01) + eps01, var(a02) + eps02, … …, var(a43) + eps43, and var(a44) + eps44, respectively. The first filtering module 11 or the processor 30 then divides each second parameter in the second parameter matrix by the value at the corresponding position in the third intermediate matrix to obtain the third noise reduction matrix; that is, as shown in fig. 16, the third noise reduction parameters in the third noise reduction matrix are cov(a00)/(var(a00) + eps00), cov(a01)/(var(a01) + eps01), … …, cov(a43)/(var(a43) + eps43), and cov(a44)/(var(a44) + eps44), respectively.
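Element-wise, the third intermediate matrix and the third noise reduction matrix can be computed as in the following minimal NumPy sketch. The array names are illustrative, not from the patent.

```python
import numpy as np

# Sketch of method 01441: the third noise reduction parameter at each position
# is the quotient cov(a) / (var(a) + eps), element-wise.
def third_noise_reduction_matrix(var_mat, cov_mat, eps_mat):
    var_mat = np.asarray(var_mat, dtype=float)   # first parameter matrix, var(a)
    cov_mat = np.asarray(cov_mat, dtype=float)   # second parameter matrix, cov(a)
    eps_mat = np.asarray(eps_mat, dtype=float)   # correction guide parameters, eps
    third_intermediate = var_mat + eps_mat       # var(a) + eps at each position
    return cov_mat / third_intermediate          # cov(a) / (var(a) + eps)
```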
In method 01442, the first filtering module 11 or the processor 30 multiplies the pixel values of the pixels at the corresponding positions in the first filtered image by the third noise reduction parameters in the third noise reduction matrix to obtain data in the fourth intermediate matrix, and subtracts the values in the fourth intermediate matrix from the pixel values of the pixels in the second filtered image to obtain the fourth noise reduction matrix.
Referring to fig. 17, in an embodiment, when the first filtering module 11 or the processor 30 obtains the fourth noise reduction parameter of the first pixel point in the current frame image, it multiplies the pixel value at the first position in the first filtered image A2 by the first third noise reduction parameter in the third noise reduction matrix to obtain the value at the first position in the fourth intermediate matrix, and subtracts the first value in the fourth intermediate matrix from the pixel value of the first pixel point in the second filtered image B2 to obtain the fourth noise reduction parameter of the first pixel point. Assuming that each pixel in the first filtered image (resolution of 5 × 5) is respectively denoted as A2(0, 0), … …, A2(0, 4), … …, A2(4, 0), … …, A2(4, 4), and each pixel in the second filtered image (resolution of 5 × 5) is respectively denoted as B2(0, 0), … …, B2(0, 4), … …, B2(4, 0), … …, B2(4, 4), the fourth noise reduction parameter corresponding to the first pixel in the fourth noise reduction matrix is B2(0, 0) - (cov(a00)/(var(a00) + eps00)) × A2(0, 0), where (cov(a00)/(var(a00) + eps00)) × A2(0, 0) is the value at the first position in the fourth intermediate matrix.
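A matching sketch of method 01442, again with illustrative names: the fourth intermediate matrix is the element-wise product of the third noise reduction matrix with the first filtered image, and the fourth noise reduction matrix subtracts it from the second filtered image.

```python
import numpy as np

# Sketch of method 01442: b = B2 - a * A2, computed element-wise.
def fourth_noise_reduction_matrix(a_mat, A2, B2):
    a_mat = np.asarray(a_mat, dtype=float)   # third noise reduction matrix
    A2 = np.asarray(A2, dtype=float)         # first filtered image
    B2 = np.asarray(B2, dtype=float)         # second filtered image
    fourth_intermediate = a_mat * A2         # a(i, j) * A2(i, j)
    return B2 - fourth_intermediate          # B2(i, j) - a(i, j) * A2(i, j)
```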
Referring to fig. 18, in some embodiments, 015: the spatial filtering processing is carried out on the current frame image according to the noise reduction parameters to obtain a first noise reduction image, and the method comprises the following steps:
0154: carrying out mean value filtering processing on the third noise reduction matrix;
0155: carrying out mean value filtering processing on the fourth noise reduction matrix; and
0156: and acquiring a first noise reduction image according to the processed third noise reduction matrix, the current frame image and the processed fourth noise reduction matrix.
Referring to fig. 2 and 3, the first filtering module 11 and the processor 30 can also be used to perform the methods of 0154, 0155 and 0156. That is, the first filtering module 11 is further configured to: carrying out mean value filtering processing on the third noise reduction matrix; carrying out mean value filtering processing on the fourth noise reduction matrix; and acquiring a first noise reduction image according to the processed third noise reduction matrix, the current frame image and the processed fourth noise reduction matrix.
Specifically, the implementation processes in method 0154 and method 0151 are the same, the implementation processes in method 0155 and method 0152 are the same, and the implementation processes in method 0156 and method 0153 are the same, and are not described again here.
In summary, the first filtering module 11 or the processor 30 may multiplex the intermediate result obtained in the method 013 according to the process of obtaining the noise reduction parameters in the methods 0143 and 0144, so as to reduce the amount of calculation in the spatial filtering process, and maximize the denoising effect and the edge preserving effect through the adaptive point-by-point filtering process.
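Putting methods 0154-0156 together: the two noise reduction matrices are mean-filtered and then combined with the current frame. The text does not write out the combination formula at this point; by analogy with guided filtering (and with methods 0151-0153, stated to have identical implementations), a plausible sketch is out = a · I + b, where a and b are the smoothed third and fourth noise reduction matrices and I is the current frame. The `mean_filter` helper, the combination formula, and all names below are assumptions.

```python
import numpy as np

def mean_filter(img, k=3):
    """Box mean filter with edge replication (illustrative, not optimized)."""
    img = np.asarray(img, dtype=float)
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    h, w = img.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def first_noise_reduction_image(a_mat, b_mat, frame, k=3):
    a_s = mean_filter(a_mat, k)   # 0154: mean-filter the third noise reduction matrix
    b_s = mean_filter(b_mat, k)   # 0155: mean-filter the fourth noise reduction matrix
    # 0156 (assumed form): combine the smoothed coefficients with the current frame
    return a_s * np.asarray(frame, dtype=float) + b_s
```

On constant inputs the mean filter is an identity, so a = 0.5, b = 1 and a frame of 2 everywhere yield an output of 2 everywhere, which is a quick sanity check of the a · I + b form.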
Referring to fig. 19, in some embodiments, 05: according to the second noise reduction image and the preset first guide parameter, contrast enhancement processing is carried out on the current frame image to obtain a target image, and the method comprises the following steps:
051: adjusting a preset first guide parameter to obtain a second guide parameter so that the second guide parameter is larger than the first guide parameter;
052: acquiring high-frequency information in the current frame image according to the second guide parameter, the second noise reduction image and the current frame image; and
053: and fusing the high-frequency information and the second noise reduction image to obtain a target image.
Please refer to fig. 2 and 3, the contrast enhancement module 15 and the processor 30 can also be used to perform the methods of 051, 052 and 053. That is, both contrast enhancement module 15 and processor 30 may also be used to: adjusting a preset first guide parameter to obtain a second guide parameter so that the second guide parameter is larger than the first guide parameter; acquiring high-frequency information in the current frame image according to the second guide parameter, the second noise reduction image and the current frame image; and fusing the high-frequency information and the second noise reduction image to obtain a target image.
Specifically, the contrast enhancement module 15 or the processor 30 increases the value of the preset first guide parameter to obtain a second guide parameter that is greater than the first guide parameter. The larger guide parameter allows the contrast enhancement module 15 or the processor 30 to screen out more accurate high-frequency information. Finally, the high-frequency information and the second noise-reduced image are fused to obtain the target image, so that noise in the target image is removed to the maximum degree while the contrast in the target image is enhanced.
In one embodiment, the contrast enhancement module 15 or the processor 30 processes the pixel values in the current frame image with the second guide parameter to obtain a smoothed matrix, and then takes the region of the smoothed matrix whose pixel values are greater than a preset threshold pixel value as the high-frequency information.
In another embodiment, the contrast enhancement module 15 or the processor 30 processes the pixel values in the current frame image with the second guide parameter to obtain a smoothed matrix, and subtracts the pixel values in the smoothed matrix from the pixel values at the corresponding positions in the current frame image to obtain the pixel values corresponding to the high-frequency information.
In the method 053, the contrast enhancement module 15 or the processor 30 replaces the region corresponding to the high frequency information in the second noise-reduced image with the high frequency information, so as to obtain a target image with enhanced contrast and good noise reduction effect.
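Under the embodiments described above, methods 052 and 053 can be sketched as follows: the high-frequency information is the difference between the current frame and a version smoothed under the second guide parameter, and fusion replaces the corresponding region of the second noise-reduced image. The threshold, the mask-based replacement, and all names are illustrative assumptions, not the patent's exact formulation.

```python
import numpy as np

def contrast_enhanced_target(frame, denoised2, smoothed, threshold=0.0):
    frame = np.asarray(frame, dtype=float)        # current frame image
    smoothed = np.asarray(smoothed, dtype=float)  # frame smoothed with the second guide parameter
    high_freq = frame - smoothed                  # 052: high-frequency information
    mask = np.abs(high_freq) > threshold          # region carrying high frequencies
    target = np.asarray(denoised2, dtype=float).copy()
    target[mask] = frame[mask]                    # 053: replace that region in the denoised image
    return target
```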
Referring to fig. 20, the present application also provides a non-volatile computer readable storage medium 200 containing a computer program 201. The computer program 201, when executed by one or more processors 30, causes the processors 30 to implement the methods in 01, 03, 05, 011, 012, 013, 014, 015, 0121, 0122, 0131, 0132, 0133, 0134, 0141, 0142, 0151, 0152, 0153, 0143, 0144, 01431, 01432, 01433, 01441, 01442, 0154, 0155, 0156, 051, 052, and 053.
In the description herein, references to the description of the terms "certain embodiments," "one example," "exemplary," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
Although embodiments of the present application have been shown and described above, it is to be understood that the above embodiments are exemplary and not to be construed as limiting the present application, and that changes, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.
Claims (14)
1. A video processing method, wherein the video comprises a current frame image and at least one previous frame image, the video processing method comprising:
respectively performing spatial filtering processing on the current frame image and the previous frame image to obtain a plurality of corresponding first noise reduction images;
performing time domain filtering processing on a plurality of frames of the first noise reduction image to obtain a frame of a second noise reduction image; and
and performing contrast enhancement processing on the current frame image according to the second noise reduction image and a preset first guide parameter to obtain a target image, wherein the first guide parameter is a filtering parameter of spatial filtering processing and is used for controlling the smoothness of the spatial filtering processing.
2. The video processing method according to claim 1, wherein performing spatial filtering processing on the current frame image to obtain a first noise-reduced image corresponding to the current frame image comprises:
acquiring a reference image according to the current frame image, wherein the resolution of the reference image is the same as that of the current frame image;
respectively carrying out mean value filtering processing on the current frame image and the reference image to obtain a first filtering image corresponding to the current frame image and a second filtering image corresponding to the reference image;
acquiring an intermediate result corresponding to the current frame image according to the first filtering image and the second filtering image, wherein the intermediate result is data generated in the process of performing spatial filtering processing on the current frame image;
obtaining noise reduction parameters of the current frame image according to the intermediate result, the first filtering image and the second filtering image; and
and performing spatial filtering processing on the current frame image according to the noise reduction parameters to obtain the first noise reduction image.
3. The video processing method of claim 2, wherein the performing the mean filtering on the current frame image and the reference image to obtain a first filtered image corresponding to the current frame image and a second filtered image corresponding to the reference image respectively comprises:
performing downsampling processing on the current frame image and the reference image to obtain a first sampling image corresponding to the current frame image and a second sampling image corresponding to the reference image;
and respectively carrying out mean value filtering processing on the first sampling image and the second sampling image so as to obtain the first filtering image corresponding to the current frame image and the second filtering image corresponding to the reference image.
4. The method of claim 2, wherein the intermediate result is represented by a first parameter matrix and a second parameter matrix, and the obtaining the intermediate result corresponding to the current frame image according to the first filtered image and the second filtered image comprises:
the method comprises the following steps: covering a first area containing current pixels in the first filtering image by adopting a preset window, and covering a second area containing current pixels in the second filtering image by adopting the preset window, wherein the first area corresponds to the second area in position, and the current pixels in the first area correspond to the current pixels in the second area in position;
step two: acquiring the variance of the pixel values of all the pixel points in the first area to serve as a first parameter of the current pixel in the first area;
step three: calculating the covariance between the first region and the second region according to the pixel values of all the pixel points in the first region and the pixel values of all the pixel points in the second region, and taking the covariance as a second parameter of the current pixel in the first region;
step four: and moving the preset window to change the positions of the first area and the second area, repeatedly executing the second step to the fourth step by taking other pixel points in the first filtering image as current pixel points in the updated first area and taking other pixel points in the second filtering image as current pixel points in the updated second area, obtaining the first parameter and the second parameter of each pixel point in the first filtering image, combining a plurality of first parameters to form a first parameter matrix, and combining a plurality of second parameters to form a second parameter matrix.
5. The method of claim 4, wherein the denoising parameters are presented in a first denoising matrix and a second denoising matrix, the first denoising matrix comprises first denoising parameters corresponding to all pixel points in the current frame image, the second denoising matrix comprises second denoising parameters corresponding to all pixel points in the current frame image, and the intermediate result is presented in the first parameter matrix and the second parameter matrix; the obtaining of the noise reduction parameter of the current frame image according to the intermediate result, the first filtered image and the second filtered image includes:
acquiring the first noise reduction matrix according to the first parameter matrix, the second parameter matrix and the first guide parameter; and
and acquiring a second noise reduction matrix according to the first noise reduction matrix, the first filtering image and the second filtering image.
6. The video processing method according to claim 5, wherein the performing spatial filtering processing on the current frame image according to the noise reduction parameter to obtain a first noise-reduced image comprises:
carrying out mean value filtering processing on the first noise reduction matrix;
carrying out mean value filtering processing on the second noise reduction matrix; and
and acquiring the first noise reduction image according to the processed first noise reduction matrix, the current frame image and the processed second noise reduction matrix.
7. The method of claim 4, wherein the obtaining the noise reduction parameters of the current frame image according to the intermediate result, the first filtered image and the second filtered image comprises:
acquiring a correction guide parameter according to a preset first guide parameter and the intermediate result; and
and obtaining the noise reduction parameter of the current frame image according to the corrected guide parameter, the intermediate result, the first filtering image and the second filtering image.
8. The video processing method of claim 7, wherein the obtaining modified guide parameters according to the preset first guide parameters and the intermediate result comprises:
carrying out transformation processing on the intermediate result to obtain an index value of the intermediate result, wherein the index value is an integer and is within a preset range;
finding out data values in a corresponding sequence in a preset descending array according to the index value, wherein the number of the data values is the same as that of integers in the preset range; and
and acquiring the corrected guide parameter according to the searched data value and the first guide parameter.
9. The video processing method of claim 7, wherein the noise reduction parameters are represented by a third noise reduction matrix and a fourth noise reduction matrix, the third noise reduction matrix comprises third noise reduction parameters corresponding to all pixels in the current frame image, the fourth noise reduction matrix comprises fourth noise reduction parameters corresponding to all pixels in the current frame image, and the intermediate result is represented by the first parameter matrix and the second parameter matrix; the obtaining of the noise reduction parameter of the current frame image according to the corrected guide parameter, the intermediate result, the first filtered image and the second filtered image includes:
acquiring a third noise reduction matrix according to the first parameter matrix, the second parameter matrix and the correction guide parameters; and
and acquiring a fourth noise reduction matrix according to the third noise reduction matrix, the first filtering image and the second filtering image.
10. The video processing method according to claim 9, wherein the performing spatial filtering processing on the current frame image according to the noise reduction parameter to obtain a first noise reduction image comprises:
carrying out mean value filtering processing on the third noise reduction matrix;
carrying out mean value filtering processing on the fourth noise reduction matrix; and
and acquiring the first noise reduction image according to the processed third noise reduction matrix, the current frame image and the processed fourth noise reduction matrix.
11. The video processing method of claim 1, wherein performing contrast enhancement processing on the current frame image according to the second noise-reduced image and a preset first guiding parameter to obtain a target image comprises:
adjusting a preset first guide parameter to obtain a second guide parameter, so that the second guide parameter is larger than the first guide parameter;
acquiring high-frequency information in the current frame image according to the second guide parameter, the second noise reduction image and the current frame image; and
and fusing the high-frequency information and the second noise reduction image to obtain a target image.
12. A video processing apparatus, comprising:
the first filtering module is used for respectively performing spatial filtering processing on the current frame image and the previous frame image to obtain a plurality of corresponding frames of first noise reduction images;
the second filtering module is used for performing time domain filtering processing on a plurality of frames of the first noise reduction image to obtain a frame of second noise reduction image; and
and the contrast enhancement module is used for carrying out contrast enhancement processing on the current frame image according to the second noise reduction image and a preset first guide parameter so as to obtain a target image, wherein the first guide parameter is a filtering parameter for spatial filtering processing and is used for controlling the smoothness degree of the spatial filtering processing.
13. A terminal, characterized in that the terminal comprises:
one or more processors, memory; and
one or more programs, wherein one or more of the programs are stored in the memory and executed by one or more of the processors, the programs comprising instructions for performing the video processing method of any of claims 1 to 11.
14. A non-transitory computer-readable storage medium storing a computer program which, when executed by one or more processors, implements the video processing method of any one of claims 1 to 11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111328624.2A CN114049271A (en) | 2021-11-10 | 2021-11-10 | Video processing method and device, terminal and computer readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111328624.2A CN114049271A (en) | 2021-11-10 | 2021-11-10 | Video processing method and device, terminal and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114049271A true CN114049271A (en) | 2022-02-15 |
Family
ID=80208231
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111328624.2A Pending CN114049271A (en) | 2021-11-10 | 2021-11-10 | Video processing method and device, terminal and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114049271A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116205810A (en) * | 2023-02-13 | 2023-06-02 | 爱芯元智半导体(上海)有限公司 | Video noise reduction method and device and electronic equipment |
CN117237238A (en) * | 2023-11-13 | 2023-12-15 | 孔像汽车科技(武汉)有限公司 | Image denoising method, system, equipment and storage medium based on guided filtering |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020057248A1 (en) * | 2018-09-21 | 2020-03-26 | 中兴通讯股份有限公司 | Image denoising method and apparatus, and device and storage medium |
CN111402165A (en) * | 2020-03-18 | 2020-07-10 | 展讯通信(上海)有限公司 | Image processing method, device, equipment and storage medium |
WO2020177723A1 (en) * | 2019-03-06 | 2020-09-10 | 深圳市道通智能航空技术有限公司 | Image processing method, night photographing method, image processing chip and aerial camera |
CN113327228A (en) * | 2021-05-26 | 2021-08-31 | Oppo广东移动通信有限公司 | Image processing method and device, terminal and readable storage medium |
CN113610724A (en) * | 2021-08-03 | 2021-11-05 | Oppo广东移动通信有限公司 | Image optimization method and device, storage medium and electronic equipment |
- 2021-11-10 CN CN202111328624.2A patent/CN114049271A/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020057248A1 (en) * | 2018-09-21 | 2020-03-26 | 中兴通讯股份有限公司 | Image denoising method and apparatus, and device and storage medium |
WO2020177723A1 (en) * | 2019-03-06 | 2020-09-10 | 深圳市道通智能航空技术有限公司 | Image processing method, night photographing method, image processing chip and aerial camera |
CN111402165A (en) * | 2020-03-18 | 2020-07-10 | 展讯通信(上海)有限公司 | Image processing method, device, equipment and storage medium |
CN113327228A (en) * | 2021-05-26 | 2021-08-31 | Oppo广东移动通信有限公司 | Image processing method and device, terminal and readable storage medium |
CN113610724A (en) * | 2021-08-03 | 2021-11-05 | Oppo广东移动通信有限公司 | Image optimization method and device, storage medium and electronic equipment |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116205810A (en) * | 2023-02-13 | 2023-06-02 | 爱芯元智半导体(上海)有限公司 | Video noise reduction method and device and electronic equipment |
CN116205810B (en) * | 2023-02-13 | 2024-03-19 | 爱芯元智半导体(上海)有限公司 | Video noise reduction method and device and electronic equipment |
CN117237238A (en) * | 2023-11-13 | 2023-12-15 | 孔像汽车科技(武汉)有限公司 | Image denoising method, system, equipment and storage medium based on guided filtering |
CN117237238B (en) * | 2023-11-13 | 2024-03-29 | 孔像汽车科技(上海)有限公司 | Image denoising method, system, equipment and storage medium based on guided filtering |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2987135B1 (en) | Reference image selection for motion ghost filtering | |
US8971666B2 (en) | Fisheye correction with perspective distortion reduction method and related image processor | |
CN114049271A (en) | Video processing method and device, terminal and computer readable storage medium | |
AU2006244201B2 (en) | Continuous extended range image processing | |
US11770613B2 (en) | Anti-shake image processing method, apparatus, electronic device and storage medium | |
EP2164040B1 (en) | System and method for high quality image and video upscaling | |
US11151702B1 (en) | Deep learning-based image fusion for noise reduction and high dynamic range | |
CN110675404A (en) | Image processing method, image processing apparatus, storage medium, and terminal device | |
US10489897B2 (en) | Apparatus and methods for artifact detection and removal using frame interpolation techniques | |
JP4454657B2 (en) | Blur correction apparatus and method, and imaging apparatus | |
CN110944176B (en) | Image frame noise reduction method and computer storage medium | |
EP2819092B1 (en) | Image correction apparatus and imaging apparatus | |
WO2011046755A1 (en) | Image deblurring using a spatial image prior | |
JP6293374B2 (en) | Image processing apparatus, image processing method, program, recording medium recording the same, video photographing apparatus, and video recording / reproducing apparatus | |
EP2887306B1 (en) | Image processing method and apparatus | |
Jeong et al. | Multi-frame example-based super-resolution using locally directional self-similarity | |
EP3438923B1 (en) | Image processing apparatus and image processing method | |
EP2061227A1 (en) | Image processing device, image processing method, and program | |
Lee et al. | Video deblurring algorithm using accurate blur kernel estimation and residual deconvolution based on a blurred-unblurred frame pair | |
Tofighi et al. | Blind image deblurring using row–column sparse representations | |
CN111353955A (en) | Image processing method, device, equipment and storage medium | |
Kim et al. | Lens distortion correction and enhancement based on local self-similarity for high-quality consumer imaging systems | |
CN113344821A (en) | Image noise reduction method, device, terminal and storage medium | |
DE112021006769T5 (en) | CIRCUIT FOR COMBINED DOWNCLOCKING AND CORRECTION OF IMAGE DATA | |
US8687912B2 (en) | Adaptive overshoot control for image sharpening |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||