CN113055669A - Image filtering method and device before coding - Google Patents
- Publication number: CN113055669A
- Application number: CN202110057841.6A
- Authority
- CN
- China
- Prior art keywords
- pixel
- search window
- spatial
- pixels
- acquiring
- Prior art date
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/117—Filters, e.g. for pre-processing or post-processing
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/172—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/182—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/85—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
- H04N19/86—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
Abstract
The invention discloses an image filtering method applied before encoding, comprising the following steps: for each pixel in the current frame, determining the spatial-domain weights between the pixel and the other pixels in a spatial search window, and acquiring the temporal-domain weights between the pixel and the pixels in a temporal search window of the previous frame; the spatial search window is a window in the current frame centered on the pixel, and the temporal search window is a window in the previous frame centered on the position of the pixel. The pixel is filtered according to the pixels in the temporal search window, the temporal-domain weights between the pixel and those pixels, the pixels in the spatial search window, and the spatial-domain weights between the pixel and those pixels. The method considers the spatial-domain similarity of the pixel to be filtered within the current frame while recursively introducing the temporal-domain similarity of the previously filtered frame, and filters the pixel using both items of information, so that blocking artifacts are reduced even when the bit rate is insufficient.
Description
Technical Field
The invention relates to the technical field of image processing, and in particular to a method and a device for filtering an image before encoding.
Background
In the field of video coding, encoded video frames often exhibit varying degrees of inter-block discontinuity distortion, i.e., blocking artifacts. For this reason, blocking artifacts are currently reduced by an in-loop filtering algorithm inside the encoding tool.

However, when bandwidth is insufficient or the video content is very complex, the available bit rate becomes insufficient, and in this case the video frame may not reach a satisfactory result after passing through the encoding tool. Therefore, even with a post-processing method such as in-loop filtering inside the encoding tool, blocking artifacts still occur at low bit rates, and relying solely on post-processing inside the encoding tool is not enough to solve the blocking-artifact problem.
Disclosure of Invention
In view of the above deficiencies of the prior art, the present invention provides a method and an apparatus for filtering an image before encoding; the object is achieved by the following technical solutions.
The first aspect of the present invention provides a method for filtering an image before encoding, where the method includes:
for each pixel in the current frame, determining the spatial-domain weight between the pixel and each other pixel in a spatial search window, and acquiring the temporal-domain weight between the pixel and each pixel other than the pixel position in a temporal search window of the previous frame; the spatial search window is a window in the current frame centered on the pixel, the temporal search window is a window in the previous frame centered on the position of the pixel, and the previous frame is an encoded video frame;
and filtering the pixel according to the pixels other than the pixel position in the temporal search window, the temporal-domain weights between the pixel and those pixels, the other pixels in the spatial search window, and the spatial-domain weights between the pixel and those pixels.
A second aspect of the present invention provides an image filtering apparatus before encoding, the apparatus comprising:
a weight calculation module, configured to determine, for each pixel in the current frame, the spatial-domain weight between the pixel and each other pixel in a spatial search window, and to acquire the temporal-domain weight between the pixel and each pixel other than the pixel position in a temporal search window of the previous frame; the spatial search window is a window in the current frame centered on the pixel, the temporal search window is a window in the previous frame centered on the position of the pixel, and the previous frame is an encoded video frame;
and a filtering module, configured to filter the pixel according to the pixels other than the pixel position in the temporal search window, the temporal-domain weights between the pixel and those pixels, the other pixels in the spatial search window, and the spatial-domain weights between the pixel and those pixels.
Based on the image filtering method and device before encoding in the first aspect and the second aspect, the invention has the following beneficial effects:
the method comprises the steps of introducing surrounding pixel information (namely time domain similarity) of a previous filtered frame through recursion while considering surrounding pixel information (namely space domain similarity) of a pixel to be filtered in a current frame, and filtering the pixel to be filtered in the current frame by utilizing the surrounding pixel information (namely time domain similarity) of the previous filtered frame, so that the effect of reducing blocking effect can be achieved under the condition of insufficient code rate, and the perception quality of a video can be improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and, together with the description, serve to explain the invention; they do not limit the invention. In the drawings:
FIG. 1 is a flow chart illustrating an embodiment of a method for image filtering before encoding according to an exemplary embodiment of the present invention;
FIG. 2 is a diagram illustrating filtering of a pixel i in a current frame according to an exemplary embodiment of the present invention;
FIG. 3 is a diagram comparing encoded images obtained without and with the filtering, according to an exemplary embodiment of the present invention;
FIG. 4 is a diagram illustrating a hardware configuration of an electronic device in accordance with an exemplary embodiment of the present invention;
fig. 5 is a schematic structural diagram of an image filtering apparatus before encoding according to an exemplary embodiment of the present invention.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention; rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as recited in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, the information should not be limited by these terms; they are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information may be referred to as first information without departing from the scope of the present invention. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
Fig. 1 is a flowchart illustrating an embodiment of a method for filtering an image before encoding according to an exemplary embodiment of the present invention. The method can be applied to any electronic device (e.g., a camera, a PC, a server, etc.). In this embodiment, the method serves as a filtering preprocessing step before encoding: after preprocessing, the filtered image is input to the encoding tool for encoding, producing a better encoding result. As shown in fig. 1, the method includes the following steps:
step 101: and aiming at each pixel in the current frame, determining the spatial domain weight between the pixel and other pixels in the spatial search window, and acquiring the temporal domain weight between the pixel and other pixels except the pixel position in the temporal search window of the previous frame.
In this embodiment, when filtering the pixels of the current frame, spatial-domain similarity alone is not sufficient to describe perceptual similarity. Therefore, in addition to the spatial-domain similarity information, the temporally similar information of the previous frame (i.e., the already filtered and encoded frame) is used recursively to filter the pixels of the current frame, yielding a better encoding result.
The spatial search window is a window in the current frame centered on the pixel currently to be filtered; the temporal search window is a window in the previous frame centered on the position of that pixel; and the previous frame is a video frame that has already been filtered and encoded.
It should be noted that the size of the spatial search window located in the current frame is equal to the size of the temporal search window located in the previous frame.
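The two windows can be set up as in the following sketch, assuming grayscale frames stored as NumPy arrays; the function name and the clamping at frame borders are illustrative choices, not specified by the patent:

```python
import numpy as np

def extract_windows(current, previous, y, x, half):
    """Return the spatial search window (centered on pixel (y, x) of the
    current frame) and the temporal search window (the same positions in
    the previous filtered frame). Both windows have identical size, as the
    method requires; borders are clamped to the frame (an assumption)."""
    h, w = current.shape
    y0, y1 = max(0, y - half), min(h, y + half + 1)
    x0, x1 = max(0, x - half), min(w, x + half + 1)
    spatial = current[y0:y1, x0:x1]    # window around the pixel to filter
    temporal = previous[y0:y1, x0:x1]  # co-located window in previous frame
    return spatial, temporal
```

Away from the borders, both windows are square with side 2·half+1.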
In an embodiment, the calculation process for the spatial domain weight may include the following steps:
step 11: according to the pixel contained in the space search window, a first space distance measurement between the pixel and other pixels in the space search window is obtained.
The first spatial distance metric is the accumulated sum of squared block-level pixel differences between the pixel to be filtered and another pixel in the spatial search window; the smaller the first spatial distance metric, the more similar the two blocks.

Referring to fig. 2, the pixel i to be filtered lies at the center of the spatial search window of the current frame. For any other pixel j in the window, the block centered on pixel i and the block centered on pixel j both have size d×d, i.e., the filtering kernel has size d×d. The first spatial distance metric $\|DV_k(i,j)\|^2$ is calculated as:

$$\|DV_k(i,j)\|^2 = \sum_{z} \|I_k(i+z) - I_k(j+z)\|^2 \qquad \text{(Equation 1)}$$

where $\|I_k(i+z)-I_k(j+z)\|^2$ is the squared difference between the pixel values at corresponding positions z of the block centered on pixel i and the block centered on pixel j, accumulated over the block.
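A minimal sketch of this block-level distance, assuming grayscale frames as NumPy arrays and ignoring border handling (the function name is illustrative):

```python
import numpy as np

def patch_ssd(frame, i, j, d):
    """Accumulated sum of squared pixel differences between the d x d block
    centered on pixel i and the d x d block centered on pixel j, i.e. the
    first spatial distance metric ||DV_k(i, j)||^2. i and j are (row, col)
    tuples; a smaller value means the two blocks are more similar."""
    r = d // 2  # assumes odd d so the blocks are centered
    block_i = frame[i[0]-r:i[0]+r+1, i[1]-r:i[1]+r+1].astype(np.float64)
    block_j = frame[j[0]-r:j[0]+r+1, j[1]-r:j[1]+r+1].astype(np.float64)
    return float(np.sum((block_i - block_j) ** 2))
```

The second spatial distance metric of step 21 has the same form, with the block around pixel j taken from the previous frame instead.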
Step 12: and acquiring the spatial filtering strength according to the size of the spatial search window.
The spatial filtering strength is an adjustment parameter for the first spatial distance metric; it controls how quickly the weight decays as the first spatial distance metric grows.
In the present embodiment, the spatial filtering strength $\sigma_d^2$ is related to the filter-kernel size d. Assuming $2\sigma_d = d + 1$, $\sigma_d$ is calculated as:

$$\sigma_d = \frac{d+1}{2} \qquad \text{(Equation 2)}$$
step 13: a just noticeable distortion value for each pixel in the spatial search window is obtained.
The just noticeable distortion value is the JND (Just Noticeable Distortion) value: distortion that stays below the JND value cannot be perceived by the human eye.
In one example, when calculating the JND value of a pixel, the average luminance of a preset region containing the pixel is obtained and used to compute the pixel's luminance adaptation factor; the luminance contrast of the pixel within the preset region and the number of distinct gradient directions of the pixel in the current frame are obtained and used to compute the pixel's visual masking factor; the JND value of the pixel is then obtained from the luminance adaptation factor and the visual masking factor.
Continuing with fig. 2 and taking the JND calculation of pixel i as an example, the luminance adaptation factor $L_A(i)$ of pixel i is calculated by equation 3, where B(i) is the average luminance of the preset region containing pixel i.
The visual masking factor $V_M(i)$ of pixel i is calculated by equation 4, where $L_c$ is the luminance contrast of pixel i within the preset region and N is the number of distinct gradient directions of pixel i in the current frame.
From these, the JND value JND(i) of pixel i is calculated as:

$$JND(i) = L_A(i) + V_M(i) - 0.3\cdot\min\bigl(L_A(i),\,V_M(i)\bigr) \qquad \text{(Equation 5)}$$
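Equation 5 is a direct combination of the two factors; a short sketch, taking the precomputed $L_A(i)$ and $V_M(i)$ values as inputs:

```python
def jnd_value(la, vm):
    """JND threshold of a pixel per Equation 5: the sum of the luminance
    adaptation factor and the visual masking factor, minus an overlap term
    so that the two effects are not double-counted."""
    return la + vm - 0.3 * min(la, vm)
```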
Step 14: and acquiring a first perception distance measurement between the pixel and other pixels in the space search window according to the JND value of each pixel in the space search window.
In practical applications, a spatial distance metric alone does not fully reflect the perceptual properties of the content. Therefore, in addition to the spatial distance metric, a perceptual distance metric is considered so as to better describe the perceptual features of the content.
The first perceptual distance metric is the accumulated sum of squared block-level JND differences between the pixel to be filtered and another pixel in the spatial search window; the smaller the perceptual distance metric, the more similar the two blocks.

With continued reference to fig. 2, the first perceptual distance metric $\|DJ_k(i,j)\|^2$ is calculated as:

$$\|DJ_k(i,j)\|^2 = \sum_{z} \|JND_k(i+z) - JND_k(j+z)\|^2 \qquad \text{(Equation 6)}$$

where $\|JND_k(i+z)-JND_k(j+z)\|^2$ is the squared difference between the JND values at corresponding positions of the block centered on pixel i and the block centered on pixel j, accumulated over the block.
Step 15: the perceptual filter strength of the pixel is determined from the quantization parameter used to encode the previous frame.
In an example, a JND variance value of a JND value of a pixel included in a preset block centered on the pixel may be obtained, and then a perceptual filtering strength of the pixel may be determined according to the quantization parameter and the JND variance value.
Specifically, the preset block is the filtering kernel of the pixel, and the variance $JND_{var}$ of the JND values over the filtering kernel is calculated as:

$$JND_{var} = \frac{1}{d^2}\sum_{z}\Bigl(JND(i+z) - \overline{JND}(i)\Bigr)^2 \qquad \text{(Equation 7)}$$

where JND(i+z) is the JND value of each pixel in the filter kernel centered on pixel i, and $\overline{JND}(i)$ is the average JND value over that kernel.
The perceptual filtering strength $\sigma_J^2$ is then obtained from the JND variance and the quantization parameter; a form consistent with the description below is

$$\sigma_J^2 = m\cdot JND_{var} + n\cdot\max(0,\;QP - QP_{Th}) \qquad \text{(Equation 8)}$$

where m and n are empirical normalization parameters, QP is the quantization parameter used for the previous frame, and $QP_{Th}$ is a threshold on QP.

As equation 8 shows, when the bit rate is insufficient, the QP used by the previous frame may exceed the threshold $QP_{Th}$, which means a higher perceptual filtering strength is required to reduce frame complexity; when the bit rate is sufficient, the QP used by the previous frame stays below the threshold, $\max(0, QP-QP_{Th})$ evaluates to 0, and the perceptual filtering strength is determined by $JND_{var}$, removing imperceptible high-frequency information from the current frame.
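The description above fixes only the ingredients of the strength update (the kernel's JND variance, the previous frame's QP, the threshold QP_Th, and normalization parameters m and n); the additive combination in the sketch below is an assumption for illustration, and the default values of m and n are placeholders:

```python
def perceptual_strength(jnd_var, qp, qp_th, m=1.0, n=1.0):
    """Adaptive perceptual filtering strength: driven by the JND variance
    of the filter kernel, plus a term that grows once the previous frame's
    QP exceeds the threshold (i.e. when the bit rate is insufficient).
    The additive form and the defaults m = n = 1.0 are assumptions."""
    return m * jnd_var + n * max(0.0, qp - qp_th)
```

With QP below the threshold the rate term vanishes and the strength is set by the JND variance alone, matching the behavior described in the text.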
Therefore, during the filtering of the current frame, the filtering strength is adaptively updated for each pixel according to the perceptual features of the content and the quantization parameter used for the previous frame. When the bit rate is insufficient, adjusting the filtering strength reduces the sequence complexity, thereby reducing post-encoding blocking artifacts and improving the perceptual quality of the video.
Step 16: and obtaining the spatial domain weight between the pixel and other pixels in the spatial search window according to the first spatial distance measurement, the first perception distance measurement, the spatial filtering strength and the perception filtering strength between the pixel and other pixels in the spatial search window.
Based on the above description, the spatial-domain weight $\omega_k(i,j)$ between pixel i and another pixel j in the spatial search window decays with the two distance metrics, each normalized by its filtering strength:

$$\omega_k(i,j) = \exp\!\left(-\frac{\|DV_k(i,j)\|^2}{\sigma_d^2} - \frac{\|DJ_k(i,j)\|^2}{\sigma_J^2}\right) \qquad \text{(Equation 9)}$$

where $\|DV_k(i,j)\|^2$ is the first spatial distance metric, $\sigma_d^2$ is the spatial filtering strength, $\|DJ_k(i,j)\|^2$ is the first perceptual distance metric, and $\sigma_J^2$ is the perceptual filtering strength.
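A sketch of the weight computation. The patent renders the exact formula as an image, so the non-local-means-style exponential decay used here is an assumption consistent with the surrounding description (weights attenuate with both distance metrics, normalized by the corresponding filtering strengths):

```python
import math

def patch_weight(dv2, dj2, sigma_d2, sigma_j2):
    """Weight between pixel i and pixel j: decays as either the spatial
    distance metric ||DV||^2 or the perceptual distance metric ||DJ||^2
    grows, each normalized by its filtering strength."""
    return math.exp(-dv2 / sigma_d2 - dj2 / sigma_j2)
```

The temporal-domain weight of step 26 has the same shape, with the second spatial and perceptual distance metrics in place of the first.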
In one embodiment, mirroring the calculation of the spatial-domain weights, the calculation process for the temporal-domain weights includes the following steps:
step 21: and acquiring a second spatial distance measurement between the pixel and other pixels except the pixel position in the time search window according to the pixel contained in the time search window.
The second spatial distance metric is the accumulated sum of squared block-level pixel differences between the pixel to be filtered in the spatial search window and another pixel in the temporal search window.

Referring to fig. 2, the pixel i to be filtered lies at the center of the spatial search window of the current frame. For any other pixel j in the temporal search window of the previous frame, the block centered on pixel i and the block centered on pixel j both have size d×d, i.e., the filtering kernel has size d×d. The second spatial distance metric $\|DV_{k-1}(i,j)\|^2$ is calculated as:

$$\|DV_{k-1}(i,j)\|^2 = \sum_{z} \|I_k(i+z) - I_{k-1}(j+z)\|^2 \qquad \text{(Equation 10)}$$

where $\|I_k(i+z)-I_{k-1}(j+z)\|^2$ is the squared difference between the pixel values at corresponding positions of the block centered on pixel i in the current frame and the block centered on pixel j in the previous frame.
Step 22: and acquiring the spatial filtering strength according to the size of the time search window.
As noted in step 12 above, since the temporal search window has the same size as the spatial search window, equation 2 also applies in step 22.
Step 23: the JND value of each pixel in the temporal search window is obtained.
Following the JND calculation principle described in step 13, step 23 likewise uses equations 3, 4, and 5.
Step 24: and acquiring a second perception distance measurement between the pixel and other pixels except the pixel position in the time search window according to the JND value of each pixel in the time search window.
The second perceptual distance metric is the accumulated sum of squared block-level JND differences between the pixel to be filtered in the spatial search window and another pixel in the temporal search window.

With continued reference to fig. 2, the second perceptual distance metric $\|DJ_{k-1}(i,j)\|^2$ is calculated as:

$$\|DJ_{k-1}(i,j)\|^2 = \sum_{z} \|JND_k(i+z) - JND_{k-1}(j+z)\|^2 \qquad \text{(Equation 11)}$$

where $\|JND_k(i+z)-JND_{k-1}(j+z)\|^2$ is the squared difference between the JND values at corresponding positions of the block centered on pixel i in the spatial search window and the block centered on pixel j in the temporal search window.
Step 25: the perceptual filter strength of the pixel is determined from the quantization parameter used to encode the previous frame.
The perceptual filtering strength in step 25 is the same as that determined in step 15.
Step 26: and obtaining the time domain weight between the pixel and other pixels except the pixel position in the time search window according to the second spatial distance measurement, the second perception distance measurement, the spatial filtering strength and the perception filtering strength between the pixel and other pixels except the pixel position in the time search window.
Based on the above description, the temporal-domain weight $\omega_{k-1}(i,j)$ between pixel i and another pixel j in the temporal search window decays with the two distance metrics, each normalized by its filtering strength:

$$\omega_{k-1}(i,j) = \exp\!\left(-\frac{\|DV_{k-1}(i,j)\|^2}{\sigma_d^2} - \frac{\|DJ_{k-1}(i,j)\|^2}{\sigma_J^2}\right) \qquad \text{(Equation 12)}$$

where $\|DV_{k-1}(i,j)\|^2$ is the second spatial distance metric, $\sigma_d^2$ is the spatial filtering strength, $\|DJ_{k-1}(i,j)\|^2$ is the second perceptual distance metric, and $\sigma_J^2$ is the perceptual filtering strength.
It should be noted that the first perceptual distance metric described above refers to a JND similarity distance metric between the pixel to be filtered and other pixels in the current frame, and the second perceptual distance metric refers to a JND similarity distance metric between the pixel to be filtered and other pixels in the previous frame.
The first spatial distance measure refers to a pixel similarity distance measure between the pixel to be filtered and other pixels in the current frame, and the second spatial distance measure refers to a pixel similarity distance measure between the pixel to be filtered and other pixels in the previous frame.
Step 102: filter the pixel according to the pixels other than the pixel position in the temporal search window, the temporal-domain weights between the pixel and those pixels, the other pixels in the spatial search window, and the spatial-domain weights between the pixel and those pixels.
Based on the description of step 101 above and referring to fig. 2, the filtered value $f_k(i)$ of pixel i in the current frame is calculated as:

$$f_k(i) = \frac{1}{W_k}\left(\sum_{j\in\varepsilon(k-1)} \omega_{k-1}(i,j)\,f_{k-1}(j) \;+\; \sum_{j\in\Omega(k)} \omega_k(i,j)\,I_k(j)\right) \qquad \text{(Equation 13)}$$

where $\omega_{k-1}(i,j)$ is the temporal-domain weight between pixel i and pixel j in the previous frame, $f_{k-1}(j)$ is the encoded pixel value of pixel j in the previous frame, $\varepsilon(k-1)$ is the temporal search window, $\omega_k(i,j)$ is the spatial-domain weight between pixel i and pixel j in the current frame, $I_k(j)$ is the pixel value of pixel j in the current frame, $\Omega(k)$ is the spatial search window, and $W_k$ is the sum of the temporal-domain and spatial-domain weights:

$$W_k = \sum_{j\in\varepsilon(k-1)} \omega_{k-1}(i,j) + \sum_{j\in\Omega(k)} \omega_k(i,j) \qquad \text{(Equation 14)}$$
it should be noted that, when the current frame is the first frame, the filtering process of the pixels in the current frame only considers the spatial domain weight, and the calculation formula is as follows:
through experimental comparison, as shown in fig. 3, a graph (a) shows a coded image obtained without using the filtering method of the present embodiment, where the local blocking effect indicated by the arrow is relatively severe, and a graph (b) shows a coded image obtained using the filtering method of the present embodiment, where the local blocking effect indicated by the arrow is greatly reduced.
This completes the filtering process shown in fig. 1. While considering the surrounding-pixel information of the pixel to be filtered in the current frame (i.e., spatial-domain similarity), the surrounding-pixel information of the previously filtered frame (i.e., temporal-domain similarity) is introduced through recursion, and both items of information are used to filter the pixel. Even when the bit rate is insufficient, this reduces blocking artifacts and improves the perceptual quality of the video.
Fig. 4 is a hardware block diagram of an electronic device according to an exemplary embodiment of the present invention, the electronic device including: a communication interface 401, a processor 402, a machine-readable storage medium 403, and a bus 404; wherein the communication interface 401, the processor 402 and the machine-readable storage medium 403 communicate with each other via a bus 404. The processor 402 may execute the pre-encoding image filtering method described above by reading and executing machine executable instructions in the machine readable storage medium 403 corresponding to the control logic of the pre-encoding image filtering method, and the specific content of the method is described in the above embodiments, which will not be described herein again.
The machine-readable storage medium 403 referred to in this disclosure may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions, data, and the like. For example, the machine-readable storage medium may be: volatile memory, non-volatile memory, or similar storage media. In particular, the machine-readable storage medium 403 may be a RAM (Random Access Memory), a flash Memory, a storage drive (e.g., a hard disk drive), any type of storage disk (e.g., a compact disk, a DVD, etc.), or similar storage medium, or a combination thereof.
The invention also provides an embodiment of an image filtering device before coding, corresponding to the embodiment of the image filtering method before coding.
Fig. 5 is a schematic structural diagram of an embodiment of an image filtering apparatus before encoding according to an exemplary embodiment of the present invention. The apparatus can be applied to any electronic device and, as shown in fig. 5, includes:
a weight calculating module 510, configured to determine, for each pixel in the current frame, a spatial domain weight between the pixel and another pixel in the spatial search window, and obtain a temporal domain weight between the pixel and another pixel except the pixel position in the temporal search window of the previous frame; the spatial search window is a window which takes the pixel as the center in the current frame, the temporal search window is a window which takes the pixel position as the center in the previous frame, and the previous frame is a coded video frame;
a filtering module 520, configured to filter the pixel according to the other pixels except the pixel position in the temporal search window, the time domain weights between the pixel and the other pixels except the pixel position in the temporal search window, the other pixels in the spatial search window, and the spatial domain weights between the pixel and the other pixels in the spatial search window.
In an optional implementation manner, the weight calculating module 510 is specifically configured to, in a process of determining a spatial domain weight between the pixel and another pixel in the spatial search window, obtain a first spatial distance metric between the pixel and another pixel in the spatial search window according to the pixel included in the spatial search window; acquiring spatial filtering strength according to the size of the spatial search window; acquiring a JND value of each pixel in the space search window; acquiring a first perception distance measurement between each pixel and other pixels in the space search window according to the JND value of each pixel in the space search window; determining the perceptual filtering strength of the pixel according to the quantization parameter used for encoding the previous frame; and obtaining the spatial domain weight between the pixel and other pixels in the spatial search window according to the first spatial distance measurement, the first perception distance measurement, the spatial filtering strength and the perception filtering strength between the pixel and other pixels in the spatial search window.
In an optional implementation, the weight calculation module 510 is specifically configured to, in the process of determining the perceptual filtering strength of the pixel according to the quantization parameter used for encoding the previous frame: obtain the variance of the JND values of the pixels contained in a preset block centered on the pixel; and determine the perceptual filtering strength of the pixel according to the quantization parameter and the JND variance value.
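The mapping from quantization parameter and JND variance to a strength value is not specified in the text; the sketch below shows one hypothetical monotone mapping (the `alpha` scaling constant and the linear form are assumptions):

```python
import numpy as np

def perceptual_strength(qp, jnd_block, alpha=0.1):
    """Hypothetical perceptual filtering strength: grows with the
    quantization parameter used for the previous frame and with the
    variance of the JND values in a preset block centred on the pixel.
    A high QP (coarse quantization) or a high JND variance (perceptually
    busy area) tolerates stronger filtering."""
    jnd_var = float(np.var(jnd_block))  # variance of the block's JND values
    return alpha * qp * (1.0 + jnd_var)
```

For a flat block (zero JND variance) the strength reduces to `alpha * qp`.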
In an optional implementation, the weight calculation module 510 is specifically configured to, in the process of obtaining the JND value of each pixel in the spatial search window: for each pixel in the spatial search window, obtain the average luminance of a preset region containing the pixel, and compute a luminance adaptation factor of the pixel from the average luminance; obtain the luminance contrast of the pixel within the preset region and the number of distinct gradient directions of the pixel in the current frame, and compute a visual masking factor of the pixel from the luminance contrast and that number; and obtain the JND value of the pixel according to the luminance adaptation factor and the visual masking factor.
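A common way to realize such a pixel-domain JND model (Chou & Li style) combines a piecewise luminance-adaptation term with a masking term, compensating for their overlap. The exact factors the patent uses are not given, so the constants (`17`, `3`, `beta`, the 0.3 overlap coefficient) and the linear masking form are assumptions:

```python
import math

def luminance_adaptation(mean_lum):
    """Assumed piecewise luminance-adaptation factor: the eye tolerates
    more distortion in very dark and very bright regions (8-bit luma)."""
    if mean_lum <= 127:
        return 17.0 * (1.0 - math.sqrt(mean_lum / 127.0)) + 3.0
    return 3.0 / 128.0 * (mean_lum - 127.0) + 3.0

def jnd_value(mean_lum, contrast, n_gradient_dirs, beta=0.1):
    """Assumed JND combination: luminance adaptation plus a visual
    masking factor built from the local luminance contrast and the
    number of distinct gradient directions, minus an overlap term."""
    la = luminance_adaptation(mean_lum)
    vm = beta * contrast * n_gradient_dirs  # assumed masking form
    c = 0.3                                 # assumed overlap-compensation constant
    return la + vm - c * min(la, vm)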
In an optional implementation, the weight calculation module 510 is specifically configured to, in the process of obtaining the temporal-domain weights between the pixel and the pixels other than the co-located position in the temporal search window of the previous frame: obtain a second spatial distance metric between the pixel and those pixels according to the pixels contained in the temporal search window; obtain a spatial filtering strength according to the size of the temporal search window; obtain a JND value for each pixel in the temporal search window; obtain a second perceptual distance metric between the pixel and those pixels according to the JND values of the pixels in the temporal search window; determine a perceptual filtering strength of the pixel according to the quantization parameter used for encoding the previous frame; and obtain the temporal-domain weights according to the second spatial distance metric, the second perceptual distance metric, the spatial filtering strength, and the perceptual filtering strength.
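The temporal-domain weight map can be sketched the same way as the spatial one, with the co-located position forced to zero as the text requires. As before, the exponential kernels and all names are assumptions, not the patent's formula:

```python
import math

def temporal_weights(d_spatial, d_perceptual, h_spatial, h_perceptual, center):
    """Per-neighbour temporal-domain weights over the temporal search
    window of the previous frame. d_spatial / d_perceptual are 2-D lists
    of the second spatial and second perceptual distance metrics; the
    co-located position `center` gets weight 0, mirroring the
    'pixels other than the co-located position' wording."""
    h, w = len(d_spatial), len(d_spatial[0])
    weights = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if (i, j) == center:
                continue  # co-located pixel excluded from temporal weighting
            weights[i][j] = (math.exp(-d_spatial[i][j] / h_spatial ** 2) *
                             math.exp(-d_perceptual[i][j] / h_perceptual ** 2))
    return weights
```

The resulting map plugs directly into a weighted sum like the filtering sketch for module 520.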
The implementation of the functions and roles of each unit in the apparatus is detailed in the implementation of the corresponding steps of the method, and is not repeated here.
For the apparatus embodiments, since they substantially correspond to the method embodiments, reference may be made to the relevant description of the method embodiments. The apparatus embodiments described above are merely illustrative: the units described as separate parts may or may not be physically separate, and the parts shown as units may or may not be physical units; they may be located in one place or distributed across a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the invention. Those of ordinary skill in the art can understand and implement the invention without inventive effort.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This invention is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (10)
1. A method of filtering an image prior to encoding, the method comprising:
for each pixel in the current frame, determining the spatial-domain weights between the pixel and the other pixels in a spatial search window, and obtaining the temporal-domain weights between the pixel and the pixels other than the co-located position in a temporal search window of the previous frame; wherein the spatial search window is a window centered on the pixel in the current frame, the temporal search window is a window centered on the same pixel position in the previous frame, and the previous frame is an already-encoded video frame; and
filtering the pixel according to the pixels other than the co-located position in the temporal search window, the temporal-domain weights between the pixel and those pixels, the other pixels in the spatial search window, and the spatial-domain weights between the pixel and those pixels.
2. The method of claim 1, wherein determining the spatial-domain weights between the pixel and the other pixels in the spatial search window comprises:
obtaining, according to the pixels contained in the spatial search window, a first spatial distance metric between the pixel and the other pixels in the spatial search window;
obtaining a spatial filtering strength according to the size of the spatial search window;
obtaining a just noticeable distortion (JND) value for each pixel in the spatial search window;
obtaining, according to the JND value of each pixel in the spatial search window, a first perceptual distance metric between the pixel and the other pixels in the spatial search window;
determining a perceptual filtering strength of the pixel according to the quantization parameter used for encoding the previous frame; and
obtaining the spatial-domain weights between the pixel and the other pixels in the spatial search window according to the first spatial distance metric, the first perceptual distance metric, the spatial filtering strength, and the perceptual filtering strength.
3. The method of claim 2, wherein determining the perceptual filtering strength of the pixel according to the quantization parameter used for encoding the previous frame comprises:
obtaining the variance of the JND values of the pixels contained in a preset block centered on the pixel; and
determining the perceptual filtering strength of the pixel according to the quantization parameter and the JND variance value.
4. The method of claim 2, wherein obtaining the just noticeable distortion (JND) value for each pixel in the spatial search window comprises:
for each pixel in the spatial search window, obtaining the average luminance of a preset region containing the pixel, and computing a luminance adaptation factor of the pixel from the average luminance;
obtaining the luminance contrast of the pixel within the preset region and the number of distinct gradient directions of the pixel in the current frame, and computing a visual masking factor of the pixel from the luminance contrast and that number; and
obtaining the JND value of the pixel according to the luminance adaptation factor and the visual masking factor.
5. The method of claim 1, wherein obtaining the temporal-domain weights between the pixel and the pixels other than the co-located position in the temporal search window of the previous frame comprises:
obtaining, according to the pixels contained in the temporal search window, a second spatial distance metric between the pixel and the pixels other than the co-located position in the temporal search window;
obtaining a spatial filtering strength according to the size of the temporal search window;
obtaining a JND value for each pixel in the temporal search window;
obtaining, according to the JND value of each pixel in the temporal search window, a second perceptual distance metric between the pixel and the pixels other than the co-located position in the temporal search window;
determining a perceptual filtering strength of the pixel according to the quantization parameter used for encoding the previous frame; and
obtaining the temporal-domain weights between the pixel and the pixels other than the co-located position in the temporal search window according to the second spatial distance metric, the second perceptual distance metric, the spatial filtering strength, and the perceptual filtering strength.
6. An apparatus for filtering an image before encoding, the apparatus comprising:
a weight calculation module, configured to determine, for each pixel in the current frame, the spatial-domain weights between the pixel and the other pixels in a spatial search window, and to obtain the temporal-domain weights between the pixel and the pixels other than the co-located position in a temporal search window of the previous frame; wherein the spatial search window is a window centered on the pixel in the current frame, the temporal search window is a window centered on the same pixel position in the previous frame, and the previous frame is an already-encoded video frame; and
a filtering module, configured to filter the pixel according to the pixels other than the co-located position in the temporal search window, the temporal-domain weights between the pixel and those pixels, the other pixels in the spatial search window, and the spatial-domain weights between the pixel and those pixels.
7. The apparatus according to claim 6, wherein the weight calculation module is specifically configured to, in the process of determining the spatial-domain weights between the pixel and the other pixels in the spatial search window: obtain a first spatial distance metric between the pixel and the other pixels in the spatial search window according to the pixels contained in the spatial search window; obtain a spatial filtering strength according to the size of the spatial search window; obtain a just noticeable distortion (JND) value for each pixel in the spatial search window; obtain a first perceptual distance metric between the pixel and the other pixels in the spatial search window according to those JND values; determine a perceptual filtering strength of the pixel according to the quantization parameter used for encoding the previous frame; and obtain the spatial-domain weights between the pixel and the other pixels in the spatial search window according to the first spatial distance metric, the first perceptual distance metric, the spatial filtering strength, and the perceptual filtering strength.
8. The apparatus according to claim 7, wherein the weight calculation module is specifically configured to, in the process of determining the perceptual filtering strength of the pixel according to the quantization parameter used for encoding the previous frame: obtain the variance of the JND values of the pixels contained in a preset block centered on the pixel; and determine the perceptual filtering strength of the pixel according to the quantization parameter and the JND variance value.
9. The apparatus according to claim 7, wherein the weight calculation module is specifically configured to, in the process of obtaining the just noticeable distortion (JND) value of each pixel in the spatial search window: for each pixel in the spatial search window, obtain the average luminance of a preset region containing the pixel, and compute a luminance adaptation factor of the pixel from the average luminance; obtain the luminance contrast of the pixel within the preset region and the number of distinct gradient directions of the pixel in the current frame, and compute a visual masking factor of the pixel from the luminance contrast and that number; and obtain the JND value of the pixel according to the luminance adaptation factor and the visual masking factor.
10. The apparatus according to claim 6, wherein the weight calculation module is specifically configured to, in the process of obtaining the temporal-domain weights between the pixel and the pixels other than the co-located position in the temporal search window of the previous frame: obtain a second spatial distance metric between the pixel and those pixels according to the pixels contained in the temporal search window; obtain a spatial filtering strength according to the size of the temporal search window; obtain a JND value for each pixel in the temporal search window; obtain a second perceptual distance metric between the pixel and those pixels according to the JND values of the pixels in the temporal search window; determine a perceptual filtering strength of the pixel according to the quantization parameter used for encoding the previous frame; and obtain the temporal-domain weights according to the second spatial distance metric, the second perceptual distance metric, the spatial filtering strength, and the perceptual filtering strength.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110057841.6A CN113055669B (en) | 2021-01-15 | 2021-01-15 | Image filtering method and device before coding |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113055669A true CN113055669A (en) | 2021-06-29 |
CN113055669B CN113055669B (en) | 2023-01-17 |
Family
ID=76508484
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110057841.6A Active CN113055669B (en) | 2021-01-15 | 2021-01-15 | Image filtering method and device before coding |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113055669B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6281942B1 (en) * | 1997-08-11 | 2001-08-28 | Microsoft Corporation | Spatial and temporal filtering mechanism for digital motion video signals |
US20030039310A1 (en) * | 2001-08-14 | 2003-02-27 | General Instrument Corporation | Noise reduction pre-processor for digital video using previously generated motion vectors and adaptive spatial filtering |
CN1665298A (en) * | 2003-12-11 | 2005-09-07 | 三星电子株式会社 | Method of removing noise from digital moving picture data |
US20070058716A1 (en) * | 2005-09-09 | 2007-03-15 | Broadcast International, Inc. | Bit-rate reduction for multimedia data streams |
US20170061582A1 (en) * | 2015-08-31 | 2017-03-02 | Apple Inc. | Temporal filtering of independent color channels in image data |
US20180220129A1 (en) * | 2017-01-30 | 2018-08-02 | Intel Corporation | Motion, coding, and application aware temporal and spatial filtering for video pre-processing |
US20200084460A1 (en) * | 2019-09-27 | 2020-03-12 | Intel Corporation | Method and system of content-adaptive denoising for video coding |
Also Published As
Publication number | Publication date |
---|---|
CN113055669B (en) | 2023-01-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7394856B2 (en) | Adaptive video prefilter | |
WO2002096118A2 (en) | Decoding compressed image data | |
JP2002535896A (en) | Method and apparatus for improving sharpness | |
CN111193931B (en) | Video data coding processing method and computer storage medium | |
CN110062230B (en) | Image coding method and device | |
CN104469386A (en) | Stereoscopic video perception and coding method for just-noticeable error model based on DOF | |
CN111988611A (en) | Method for determining quantization offset information, image coding method, image coding device and electronic equipment | |
US10911785B2 (en) | Intelligent compression of grainy video content | |
WO2018095890A1 (en) | Methods and apparatuses for encoding and decoding video based on perceptual metric classification | |
Dai et al. | Film grain noise removal and synthesis in video coding | |
US8121199B2 (en) | Reducing the block effect in video file compression | |
US20220103869A1 (en) | Techniques for limiting the influence of image enhancement operations on perceptual video quality estimations | |
US10129565B2 (en) | Method for processing high dynamic range video in order to improve perceived visual quality of encoded content | |
CN110378860A (en) | Method, apparatus, computer equipment and the storage medium of restored video | |
EP3648460B1 (en) | Method and apparatus for controlling encoding resolution ratio | |
EP1690232A2 (en) | Detection of local visual space-time details in a video signal | |
CN113906762B (en) | Pre-processing for video compression | |
CN113055669B (en) | Image filtering method and device before coding | |
CN111212198B (en) | Video denoising method and device | |
CN112118446B (en) | Image compression method and device | |
US20110097010A1 (en) | Method and system for reducing noise in images in video coding | |
JP2002223445A (en) | Image coder, image decoder, image coding method, image decoding method, and recording medium for recording image coding program, recording medium for recording image decoding program, and image coding program and image decoding program | |
CN115967806B (en) | Data frame coding control method, system and electronic equipment | |
JP2005191865A (en) | Image processing apparatus, image processing program and image processing method | |
WO2022205094A1 (en) | Data processing method, data transmission system, and device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||