CN111654710B - Image filtering method, device, equipment and storage medium - Google Patents


Info

Publication number
CN111654710B
CN111654710B (application CN202010509310.1A)
Authority
CN
China
Prior art keywords
filtering
frame
current frame
target
classification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010509310.1A
Other languages
Chinese (zh)
Other versions
CN111654710A (en)
Inventor
李琳
邢刚
冯亚楠
简云瑞
张嘉琪
王苫社
马思伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University
China Mobile Communications Group Co Ltd
MIGU Culture Technology Co Ltd
Original Assignee
Peking University
China Mobile Communications Group Co Ltd
MIGU Culture Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University, China Mobile Communications Group Co Ltd and MIGU Culture Technology Co Ltd
Priority to CN202010509310.1A
Publication of CN111654710A
Application granted
Publication of CN111654710B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/86 Pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
    • H04N19/117 Filters, e.g. for pre-processing or post-processing
    • H04N19/124 Quantisation
    • H04N19/186 Adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
    • H04N19/82 Details of filtering operations specially adapted for video compression, involving filtering within a prediction loop

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention provides an image filtering method, an image filtering device, image filtering equipment and a readable storage medium, which address the problem of low coding gain of the existing frame image. The method of the invention comprises the following steps: after a current frame is subjected to inverse transformation and dequantization, the current frame is filtered through a first loop filtering technique and a target loop filtering technique, wherein the current frame is a current coding frame or a current decoding frame; the target loop filtering technique comprises at least one of: a deblocking filtering technique; a sample adaptive compensation technique; an adaptive loop filtering technique. The first loop filtering technique filters the current frame based on first filtering information, and the first filtering information is obtained by classifying all pixels of the current frame, so that the coding gain of the frame image can be improved.

Description

Image filtering method, device, equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of image processing, in particular to an image filtering method, device, equipment and storage medium.
Background
In video coding and decoding, there are often regions with drastically changed pixel values in an image, and after the regions are coded, decoded and reconstructed, a moire-like distortion phenomenon generally occurs, which is called a ringing effect.
Sample Adaptive Offset (SAO) is one of the important techniques in video encoding and decoding, and it can reduce the ringing effect well. However, SAO itself operates with the Largest Coding Unit (LCU) as its basic unit, and many of its parameters need to be written into the bitstream, so part of its performance is limited by the high rate cost of SAO itself.
Disclosure of Invention
The embodiment of the invention provides an image filtering method, device, equipment and storage medium, aiming at solving the problem of low coding gain of the existing frame image.
In order to solve the above problems, the present invention is realized as follows:
in a first aspect, an embodiment of the present invention provides an image filtering method, including:
after a current frame is subjected to inverse transformation and dequantization, filtering the current frame through a first loop filtering technology and a target loop filtering technology, wherein the current frame is a current coding frame or a current decoding frame;
the target loop filtering technique comprises at least one of:
a deblocking filtering technique;
a sample adaptive compensation technique;
adaptive loop filtering techniques;
the first loop filtering technique is a technique for performing filtering processing on the current frame based on first filtering information, and the first loop filtering information is obtained by classifying all pixels of the current frame.
In a second aspect, an embodiment of the present invention provides an image filtering apparatus, including:
the first filtering module is used for filtering a current frame through a first loop filtering technology and a target loop filtering technology after the current frame is subjected to inverse transformation and dequantization processing, wherein the current frame is a current coding frame or a current decoding frame;
the target loop filtering technique comprises at least one of:
a deblocking filtering technique;
a sample adaptive compensation technique;
adaptive loop filtering techniques;
the first loop filtering technique is a technique for performing filtering processing on the current frame based on first filtering information, and the first loop filtering information is obtained by classifying all pixels of the current frame.
In a third aspect, an embodiment of the present invention provides an image filtering apparatus, including: a memory, a processor, and a program stored on the memory and executable on the processor; the processor is used for reading the program in the memory to realize the steps in the image filtering method.
In a fourth aspect, the present invention provides a readable storage medium for storing a program which, when executed by a processor, implements the steps in the image filtering method as described above.
According to the embodiment of the invention, after the current frame is subjected to inverse transformation and dequantization, the current frame is filtered through a first loop filtering technique and a target loop filtering technique, wherein the current frame is a current coding frame or a current decoding frame; the target loop filtering technique comprises at least one of: a deblocking filtering technique; a sample adaptive compensation technique; an adaptive loop filtering technique. The first loop filtering technique filters the current frame based on first filtering information, and the first filtering information is obtained by classifying all pixels of the current frame. Thus, filtering the current frame based on the first filtering information, combined with deblocking filtering, sample adaptive compensation and/or adaptive loop filtering, can improve the coding gain of the frame image.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive exercise.
FIG. 1 is a flow chart of an image filtering method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram showing a positional relationship between a pixel and 8 adjacent pixels according to an embodiment of the present invention;
FIG. 3 is a second flowchart illustrating an image filtering method according to an embodiment of the invention;
FIG. 4 is one of the schematic diagrams of the filtering position of an image filtering method in loop filtering according to an embodiment of the present invention;
FIG. 5 is a second schematic diagram of the filtering position of an image filtering method in loop filtering according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of an image filtering apparatus according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an image filtering apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the technical problems, technical solutions and advantages of the present invention more apparent, the following detailed description is given with reference to the accompanying drawings and specific embodiments. In the following description, specific details are provided, such as specific configurations and components, merely to facilitate a thorough understanding of embodiments of the invention. Thus, it will be apparent to those skilled in the art that various changes and modifications may be made to the embodiments described herein without departing from the scope and spirit of the invention. In addition, descriptions of well-known functions and constructions are omitted for clarity and conciseness.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
In various embodiments of the present invention, it should be understood that the sequence numbers of the following processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention. In addition, the terms "system" and "network" are often used interchangeably herein.
As shown in fig. 1, an embodiment of the present invention provides an image filtering method, which specifically includes the following steps:
step 101, after a current frame is subjected to inverse transformation and dequantization, filtering the current frame by a first loop filtering technology and a target loop filtering technology, wherein the current frame is a current coding frame or a current decoding frame;
the target loop filtering technique comprises at least one of:
a deblocking filtering technique;
a sample adaptive compensation technique;
adaptive loop filtering techniques;
the first loop filtering technique is a technique for performing filtering processing on the current frame based on first filtering information, and the first loop filtering information is obtained by classifying all pixels of the current frame.
In the embodiment of the present invention, the first loop filtering technique may be used for filtering first and then the target loop filtering technique is used for filtering, or the target loop filtering technique may be used for filtering first and then the first loop filtering technique is used for filtering.
Specifically, the step may include performing filtering processing on the current frame by using the target loop filtering technique to obtain an intermediate frame, and then performing filtering processing on the intermediate frame by using a first loop filtering technique.
If filtering is performed using the target loop filtering technique and then filtering is performed using the first loop filtering technique, the filtering process using the first loop filtering technique may be performed after any of the above specific target loop filtering techniques.
If the target loop filtering technique comprises a deblocking filtering technique, the intermediate frame is specifically: the current frame processed only by deblocking filtering;
if the target loop filtering technique comprises a sample adaptive compensation technique, the intermediate frame is specifically: the current frame processed only by sample adaptive compensation;
if the target loop filtering technique comprises an adaptive loop filtering technique, the intermediate frame is specifically: the current frame processed only by adaptive loop filtering;
if the target loop filtering technique includes a deblocking filtering technique and a sample adaptive compensation technique, the intermediate frame specifically includes: the current frame is processed by the deblocking filtering but not processed by the sample adaptive compensation; or, the current frame is processed by the deblocking filtering and the sample adaptive compensation;
if the target loop filtering technique includes a deblocking filtering technique and an adaptive loop filtering technique, the intermediate frame specifically includes: the current frame is processed by the deblocking effect filtering but not processed by the adaptive loop filtering; or, the current frame is processed by the deblocking filtering and adaptive loop filtering;
if the target loop filtering technique includes a sample adaptive compensation technique and an adaptive loop filtering technique, the intermediate frame specifically includes: the current frame is subjected to sample adaptive compensation processing but not subjected to adaptive loop filtering processing; or the current frame is processed by sample adaptive compensation and adaptive loop filtering;
if the target loop filtering technique includes the above three techniques, the intermediate frame is specifically any one of the following:
the current frame is processed by the deblocking filtering but is not processed by the sample adaptive compensation processing and the adaptive loop filtering;
the current frame is processed by deblocking effect filtering and sample adaptive compensation, but is not processed by adaptive loop filtering;
and the current frame is subjected to deblocking filtering, sample adaptive compensation processing and adaptive loop filtering processing.
Specifically, the step may include performing filtering processing on the current frame by using a first loop filtering technique to obtain an intermediate frame, and performing filtering processing on the intermediate frame by using the target loop filtering technique.
That is, in the loop filtering including the above-described target loop filtering technique, the first loop filtering technique may be located at any position before or after the above-described target loop filtering technique.
Wherein, using the first loop filtering technique to filter the current frame, and obtaining the intermediate frame may include:
acquiring M classification sets and filtering parameters corresponding to each classification set; the classification set is obtained by classifying all pixels of the current frame according to a preset classification mode;
here, the preset classification manner includes at least one of the following:
a classification based on the size of the pixel itself;
a classification based on the relationship between a pixel and its L adjacent pixels; wherein M and L are both positive integers.
It should be noted that, when the current frame is the current decoding frame, M classification sets and the filtering parameters corresponding to each classification set may be obtained from the received filtering information sent by the encoding end device.
The relationship between the pixel and the L adjacent pixels comprises: the size relationship between a pixel and L pixels adjacent to the pixel.
And filtering each pixel according to the filtering parameters corresponding to the classification set to which the pixel belongs to obtain an intermediate frame.
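For illustration only, this per-class filtering step can be sketched as follows; the data layout and the use of a simple additive offset as the filtering parameter are assumptions, not the claimed implementation:

```python
import numpy as np

def filter_frame(frame, class_map, offsets):
    """Filter each pixel with the parameter of the classification set it
    belongs to: class_map[i, j] is the set index of pixel (i, j), and
    offsets[m] is the (assumed additive) filtering parameter of set m."""
    out = frame.astype(np.int32)  # astype copies, so frame is untouched
    for m, off in offsets.items():
        out[class_map == m] += off
    return np.clip(out, 0, 255).astype(np.uint8)

frame = np.array([[100, 100], [200, 200]], dtype=np.uint8)
class_map = np.array([[0, 0], [1, 1]])
print(filter_frame(frame, class_map, {0: 3, 1: -5}))
# -> [[103 103]
#     [195 195]]
```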
Optionally, the pixel comprises: a luminance component Y and a chrominance component UV. Wherein the chrominance components include a first chrominance component U and a second chrominance component V.
Specifically, the classified set to which the pixel belongs includes at least one of the following:
a classification set to which the pixel belongs on the luminance component;
a classification set to which the pixel belongs on the first chrominance component;
the classification set to which the pixel belongs on the second chrominance component.
And the classification sets to which the pixels belong on different components have corresponding filtering parameters.
In this embodiment, the pixels in the above steps are based on target components of the pixels, where the target components include: at least one of a luma component, a first chroma component, and a second chroma component.
If the target component includes at least two components, the filtering based on the at least two components may be independent of each other, or may be filtering based on a combination of any two components, which is not specifically limited herein.
The filtering coefficient corresponding to each classification set obtained according to the preset classification mode is used for coding and decoding the current frame, so that the coding and decoding quality of the whole frame image can be improved.
Here, as an optional implementation manner, the current frame is a current coding frame; the step of obtaining M classification sets may specifically include:
firstly, traversing each pixel of the current frame based on the relationship between the pixel and L adjacent pixels to obtain a first identification value corresponding to each pixel;
as an optional implementation manner, the following steps are adopted, and based on a relationship between a pixel and L pixels adjacent to the pixel, each pixel of the current frame is traversed to obtain a first identification value corresponding to each pixel:
comparing a first pixel with the i-th pixel adjacent to the first pixel, wherein i ≤ L and i is a positive integer;
if the first pixel is larger than the i-th pixel, adding a first preset value to Class_{i-1} to obtain Class_i;
if the first pixel is smaller than the i-th pixel, subtracting a second preset value from Class_{i-1} to obtain Class_i;
determining Class_L as the first identification value corresponding to the first pixel;
wherein, when i = 1, Class_{i-1} is a preset initial value; when i > 1, Class_{i-1} is obtained by comparing the first pixel with the (i-1)-th pixel adjacent to it; the first pixel is any one of all pixels in the current frame.
It should be noted that a pixel is generally square, and the number of pixels adjacent to it may be 8; as shown in FIG. 2, the pixel is surrounded by 8 pixels. Alternatively, 4 adjacent pixels may be used, distributed either diagonally or in a cross shape.
The following describes an implementation process of the above steps in detail by using an example.
As shown in FIG. 2, the first pixel is compared in size with the surrounding 8 pixels in turn. Suppose Y1(i, j) denotes the luminance component of the first pixel and Y(k1, k2) denotes the luminance component of a surrounding pixel, where |k1 - i| ≤ 1 and |k2 - j| ≤ 1. Example pseudocode for the classification is as follows:
Initial: Class_t1 = 0
For |k1 - i| <= 1, |k2 - j| <= 1:
    If Y(k1, k2) > Y1(i, j):
        Class_t1 += 1
    Else if Y(k1, k2) < Y1(i, j):
        Class_t1 += -1
Here, the initial classification value Class_t1 is set to 0, i.e. the preset initial value is 0, and the surrounding 8 pixels are compared with the first pixel in turn. Under the above classification criterion: if a surrounding pixel is larger than the current pixel, Class_t1 is incremented by 1; if a surrounding pixel is smaller than the current pixel, Class_t1 is incremented by -1. After the traversal, the first identification value corresponding to the first pixel is obtained, which identifies the classification set to which the first pixel belongs. With this method, it can be seen that -8 ≤ Class_t1 ≤ 8. Based on this classification, all pixels in the current frame can be divided into at most 17 classification sets, i.e. 17 classes.
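As a hedged, runnable sketch of the 8-neighbour comparison above (the function name and NumPy layout are assumptions; border handling is omitted):

```python
import numpy as np

def classify_pixel(y, i, j):
    """Compare pixel (i, j) of the luma plane y with its 8 neighbours:
    +1 for every larger neighbour, -1 for every smaller one. The result
    lies in [-8, 8], so at most 17 classification sets arise."""
    cls = 0  # preset initial value Class_t1 = 0
    for k1 in (i - 1, i, i + 1):
        for k2 in (j - 1, j, j + 1):
            if (k1, k2) == (i, j):
                continue  # skip the pixel itself
            if y[k1, k2] > y[i, j]:
                cls += 1
            elif y[k1, k2] < y[i, j]:
                cls -= 1
    return cls

# Centre of a 3x3 patch whose 8 neighbours are all larger:
patch = np.array([[10, 20, 10],
                  [20,  5, 20],
                  [10, 20, 10]])
print(classify_pixel(patch, 1, 1))  # -> 8
```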
Further, as shown in FIG. 2, the first pixel is compared in size with the surrounding 8 pixels in turn. Suppose Y1(i, j) denotes the luminance component of the first pixel and Y(k1, k2) denotes the luminance component of a surrounding pixel, where |k1 - i| ≤ 1 and |k2 - j| ≤ 1. Example pseudocode for the classification is as follows:
Initial: Class_t1 = 0
For |k1 - i| <= 1, |k2 - j| <= 1:
    If Y(k1, k2) > Y1(i, j):
        Class_t1 += 1
Here, the initial classification value Class_t1 is set to 0, i.e. the preset initial value is 0, and the surrounding 8 pixels are compared with the first pixel in turn. Under the above classification criterion: if a surrounding pixel is larger than the current pixel, Class_t1 is incremented by 1; otherwise Class_t1 is left unchanged. After the traversal, the first identification value corresponding to the first pixel is obtained, which identifies the classification set to which the first pixel belongs. With this method, it can be seen that 0 ≤ Class_t1 ≤ 8. Based on this classification, all pixels in the current frame can be divided into at most 9 classification sets, i.e. 9 classes.
It should be noted that the above example is described based on the luminance component of the pixel, and the same applies to the chrominance component based on the pixel, which is not described herein again.
As another optional implementation manner, the following steps are adopted, and based on a relationship between a pixel and L pixels adjacent to the pixel, each pixel of the current frame is traversed to obtain a first identification value corresponding to each pixel:
respectively calculating gradient values of the second pixels in all preset directions according to the size of the second pixels and the size of pixels adjacent to the second pixels in a plurality of preset directions;
here, the plurality of preset directions may include: a horizontal direction (x-direction), a vertical direction (y-direction), a first direction offset by a first angle in the horizontal direction to the-y direction, and a second direction offset by a second angle in the horizontal direction to the + y direction.
Preferably, the first angle and the second angle are both 45 degrees.
For an example, referring to FIG. 2, the first pixel in FIG. 2 may be regarded as the second pixel. Denote the two pixels adjacent to the second pixel in the horizontal direction as pixel a and pixel b. Subtract the size of pixel a from that of the second pixel and take the absolute value; do the same with pixel b; then sum the two absolute values to obtain the gradient value of the second pixel in the horizontal direction. The gradient values in the other directions are computed analogously, finally yielding the gradient values of the second pixel in all preset directions.
And comparing the gradient values of the second pixel in each preset direction, determining a target direction gradient to which the second pixel belongs, and taking an identification value corresponding to the target direction gradient as a first identification value of the second pixel, wherein the second pixel is any one of all pixels in the current frame.
Here, the gradient values of the second pixel in the preset directions are compared in size, an optimal gradient value is determined according to a preset rule (for example, the largest or the smallest gradient value), and finally the directional gradient corresponding to the optimal gradient value is determined as the target directional gradient.
Continuing with the above example: suppose that, after comparison, the optimal gradient value is the gradient value of the second pixel in the horizontal direction; then the directional gradient to which the second pixel belongs is the horizontal gradient.
In this implementation, the gradient distribution of the pixel and the L pixels adjacent to the pixel may be a laplacian distribution, a gaussian distribution, or the like.
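A minimal sketch of the four-direction gradient classification described above, assuming 45-degree offsets and taking the largest gradient as the optimal one (both choices the text leaves open):

```python
import numpy as np

def gradient_class(y, i, j):
    """Return the direction with the largest gradient at pixel (i, j):
    0 horizontal, 1 vertical, 2 diagonal (45 deg), 3 anti-diagonal.
    Each gradient is the sum of absolute differences between the pixel
    and its two neighbours along that direction."""
    c = int(y[i, j])
    grads = [
        abs(c - int(y[i, j - 1])) + abs(c - int(y[i, j + 1])),          # horizontal
        abs(c - int(y[i - 1, j])) + abs(c - int(y[i + 1, j])),          # vertical
        abs(c - int(y[i - 1, j - 1])) + abs(c - int(y[i + 1, j + 1])),  # 45 deg
        abs(c - int(y[i - 1, j + 1])) + abs(c - int(y[i + 1, j - 1])),  # 135 deg
    ]
    return int(np.argmax(grads))

patch = np.array([[50, 100, 50],
                  [ 0, 100,  0],
                  [50, 100, 50]])
print(gradient_class(patch, 1, 1))  # horizontal change dominates -> 0
```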
Then, classifying all pixels in the current frame according to the first identification value to obtain M1 first type classification sets; wherein, the first identification values of the corresponding pixels in each first type classification set are the same;
finally, the M classification sets are obtained according to the M1 first type classification sets, and M1 is a positive integer.
It should be noted that, when only a classification manner based on the relationship between the pixel and the L pixels adjacent to the pixel is adopted to classify all the pixels of the current frame, the step may specifically include:
obtaining M-1 classification sets according to the M1 first classification sets.
In addition, based on this, a classification method based on the size of the pixel itself is further adopted to classify all pixels of the current frame, and then this step may further include:
based on the size of the pixel itself, the target pixels are traversed and classified to obtain M2 second-type classification sets corresponding to each first-type classification set, yielding M1 × M2 classification sets in total; wherein the target pixels are the pixels in each of the M1 first-type classification sets, and M2 is a positive integer.
It should be noted that, when the pixels in each of the M1 first-type classification sets are classified based on the size of the pixel itself, the number of second-type classification sets needs to be set in advance. To obtain as much gain as possible, the optimal classification, i.e. the number of second-type classification sets M2, is obtained through adaptive selection. That is, the value of M2 may be obtained through an adaptive selection process.
Optionally, when M = M1 × M2, the classification set is obtained by classifying all pixels of the current frame according to the preset classification manner based on the luminance component of the pixels.
That is, when all pixels of the current frame are classified based on the luminance components of the pixels, a classification manner based on the relationship between the pixels and L pixels adjacent to the pixels is firstly adopted, then a classification manner based on the sizes of the pixels is adopted, and finally M1 × M2 classification sets are obtained.
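The two-stage luma classification (first by neighbour relationship, then by pixel size within each first-type set) can be combined into one of M1 × M2 indices; the uniform size bands below are an assumption for illustration:

```python
def combined_class(rel_class, pixel_value, m1, m2, max_value=255):
    """Map a first-type class index (0..m1-1) and a pixel value to one of
    m1 * m2 classification sets, using m2 uniform value bands as an assumed
    second-stage rule."""
    assert 0 <= rel_class < m1
    band = min(pixel_value * m2 // (max_value + 1), m2 - 1)
    return rel_class * m2 + band

# 17 first-type sets and 4 size bands give up to 68 sets in total:
print(combined_class(rel_class=3, pixel_value=200, m1=17, m2=4))  # -> 15
```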
As another optional implementation manner, the step of obtaining M classification sets may specifically include:
based on the size of the pixel itself, traversing the target pixels for classification to obtain M3 second-type classification sets, i.e. M3 classification sets; wherein the target pixels are all pixels in the current frame, and M3 is a positive integer.
Optionally, when M is equal to M3, the classification set is obtained by classifying all pixels of the current frame according to a preset classification manner based on chrominance components of the pixels.
That is, when all pixels of the current frame are classified based on the chrominance components of the pixels, M3 second-type classification sets are obtained by only adopting a classification mode based on the sizes of the pixels.
It should be noted that both of the above implementations involve the classification manner based on the size of the pixel itself. Based on this, before traversing the target pixels for classification based on the size of the pixel itself, the method of the embodiment of the present invention may further include:
determining the number of the target classification sets of the second type classification set;
here, as an optional implementation manner, determining the number of target classification sets of the second type classification set may include the following steps:
firstly, determining T candidate classifications according to a preset threshold value T; wherein, different candidate classifications correspond to different classification set numbers, and T is a positive integer;
it should be noted that, the number of classification sets corresponding to each candidate classification in the T candidate classifications is not limited. The candidate classifications may be random or values based on a predetermined rule, for example, the number of classification sets corresponding to the T candidate classifications in sequence is a positive integer from 1 to T.
Here, the preset threshold T specifically refers to the number of the preset second-type classification sets mentioned in the foregoing embodiment.
It should be noted that a candidate classification refers to one way of classifying the current frame for a given number of classification sets. That is, each candidate classification describes one possible manner in which the pixels of the current frame may be partitioned.
For example, if T is 4, the current frame is classified into 4 candidates. That is, the current frame can be divided into four classes, 1,2,3 and 4, according to the number of different classification sets. That is, the number of corresponding classification sets in the 4 candidate classifications is 1,2,3, and 4, respectively.
Then, traversing each candidate classification in the T candidate classifications, and calculating to obtain the optimal rate-distortion cost corresponding to the classification of all pixels in the current frame according to each candidate classification;
here, specifically, the following steps may be adopted to traverse the T candidate classifications, and calculate the optimal rate-distortion cost corresponding to the classification of all pixels in the current frame according to each candidate classification:
firstly, obtaining a filtering parameter of each classification set corresponding to a first candidate classification;
traversing each classification set corresponding to the first candidate classification to obtain a filtering parameter corresponding to each classification set by adopting the following steps:
acquiring the number of pixels in a target classification set and a difference value between each pixel in the target classification set and an original pixel at a corresponding position, wherein the original pixel is a pixel before encoding, and the target classification set is any one of all classification sets in the first candidate classification;
determining an initial filtering parameter of the target classification set according to the number of the pixels and the difference value;
specifically, the number of difference values obtained in the above steps equals the number of pixels in the target classification set. When the number of difference values is not less than 2, the sum of the difference values is calculated and divided by the number of pixels to obtain the initial filtering parameter of the target classification set.
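As a hedged sketch of this step (function and variable names are illustrative, not from the patent; the sign convention "original minus reconstructed" is an assumption), the initial filtering parameter of one classification set is the average difference between original and reconstructed pixel values:

```python
def initial_offset(recon_pixels, orig_pixels):
    """Initial filtering parameter of one classification set: the sum of the
    per-pixel differences (original minus reconstructed, an assumed sign
    convention) divided by the number of pixels in the set."""
    assert len(recon_pixels) == len(orig_pixels) and recon_pixels
    diff_sum = sum(o - r for o, r in zip(orig_pixels, recon_pixels))
    return diff_sum / len(recon_pixels)
```

For a set whose reconstructed pixels are uniformly 3 levels darker than the originals, the initial offset is 3.0.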
Limiting the initial filtering parameters, and determining R filtering parameters corresponding to the target classification set;
in the step, firstly, a value range of a filtering parameter is given; then, based on the value range, limiting the initial filtering parameter to make the initial filtering parameter in the value range; and finally, determining R filtering parameters corresponding to the target classification set from 0 to a first interval corresponding to the initial filtering parameter or from the initial filtering parameter to a second interval corresponding to 0.
Specifically, whether the first interval or the second interval is adopted depends on the positive and negative of the initial filtering parameter.
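The clipping and interval selection just described might be sketched as follows (enumerating the interval in unit steps is an assumption; the patent only states that R parameters are taken from the interval):

```python
def candidate_offsets(initial_offset, a):
    """Clip the initial offset to the value range [-A, A], then enumerate
    candidate filter parameters from the first interval [0, clipped] when
    the clipped offset is non-negative, or from the second interval
    [clipped, 0] when it is negative."""
    clipped = max(-a, min(a, initial_offset))
    if clipped >= 0:
        return list(range(0, clipped + 1))
    return list(range(clipped, 1))
```

So a positive initial offset yields candidates counting up from 0, and a negative one yields candidates counting up to 0, matching the sign-dependent interval choice above.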
Traversing each filtering parameter in the R filtering parameters, and calculating to obtain an optimal rate distortion cost corresponding to the simulation filtering of the target classification set according to each filtering parameter;
and determining the filter parameter with the minimum optimal rate distortion cost in the R filter parameters as the filter parameter corresponding to the target classification set.
That is to say, the finally determined filtering parameter corresponding to the target classification set is the optimal filtering parameter.
The following is a detailed description of an example:
Let Class_tmp ∈ [0, Class_total − 1], where Class_tmp represents the identification value of any one of the classification sets (the target classification set) and Class_total represents the total number of classification sets. The filtering parameter offset is calculated as follows:
First, count the number of pixels numCount in Class_tmp and the differences diffCount between the pixels in Class_tmp and the original pixels at the corresponding positions. Then, calculate the initial offset of Class_tmp: initialOffset = diffCount / numCount. Next, limit the size of initialOffset: initialOffset = COM_CLIP3(−A, A, initialOffset), so that initialOffset lies within the maximum and minimum allowable range of offset, where [−A, A] is the value range of offset. Finally, from the interval [0, initialOffset] or [initialOffset, 0], the optimal bestOffset is obtained through rate-distortion optimization.
Specifically, which interval is selected depends on the sign of initialOffset.
Note that the CLIP function limits a value to upper and lower bounds: in COM_CLIP3(−A, A, initialOffset), −A is the minimum of the allowed range, A is the maximum, and initialOffset is the value to be limited.
The computation of bestOffset is the same for all classification sets. By this method, a total of Class_total bestOffset values can be obtained.
And then, according to the filtering parameters, calculating to obtain an optimal rate distortion cost corresponding to the first candidate classification, wherein the first candidate classification is any one of the T candidate classifications.
It should be noted that, the obtained filtering parameter of each classification set corresponding to the first candidate classification is used to perform analog filtering on the current frame, and then, the filtered pixel is compared with the original pixel at the corresponding position to obtain the filtering loss; then, calculating the bit number consumed by the filter parameter to be coded into the code stream; and finally, calculating the sum of the filtering loss and the consumed bit number to obtain the optimal rate distortion cost.
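A minimal sketch of this cost computation, assuming squared-error distortion and a fixed bit cost per coded offset (both are assumptions; the patent only says the cost is the sum of the filtering loss and the consumed bits):

```python
def frame_rd_cost(recon, orig, pixel_class, offsets, bits_per_offset=8):
    """Simulate filtering the frame with per-class offsets, then return
    distortion (squared error vs. the original pixels) + coded bits."""
    distortion = 0
    for r, o, c in zip(recon, orig, pixel_class):
        filtered = r + offsets[c]
        distortion += (o - filtered) ** 2
    rate = bits_per_offset * len(offsets)
    return distortion + rate
```

Evaluating this cost once per candidate classification (and comparing against the cost with all offsets at zero) gives exactly the comparison used to pick the number of target classification sets.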
And finally, determining the number of the classification sets corresponding to the candidate classification with the minimum optimal rate distortion cost in the T candidate classifications as the number of the target classification sets of the second type classification set.
Here, the conditions that need to be satisfied for this step execution are: the obtained minimum optimal rate distortion cost is smaller than the optimal rate distortion cost obtained by calculation when filtering is not performed on the basis of the filtering parameters corresponding to each obtained classification set; if the condition is not met, the filtering technology of traversing the target pixel for classification based on the size of the pixel is not executed, namely, the filtering technology of filtering based on the classification mode of the size of the pixel is not adopted.
At this time, the number of the target classification sets is the optimal number of the classification sets determined based on the preset threshold value. The number of the target classification sets is M2 under the condition that a classification mode based on the relation between the pixels and L adjacent pixels is adopted, and then a classification mode based on the sizes of the pixels is adopted; the number of target classification sets corresponds to a value of M3 when only a classification method based on the size of the pixel itself is employed.
Then, the following steps are adopted, and the target pixel is traversed to be classified based on the size of the pixel:
determining a second type classification set to which a third pixel belongs according to the pixel size of the third pixel, the number of the target classification sets and the image bit depth; wherein the third pixel is one of the target pixels.
Here, the step may specifically include:
multiplying the pixel size of the third pixel by the number of the target classification sets;
performing right shift operation on the result obtained after multiplication according to the image bit depth to obtain a second identification value of the third pixel;
determining a second type classification set to which the third pixel belongs according to a second identification value of the third pixel; and the second identifiers of the pixels corresponding to each second type classification set are the same.
In an example, the example can be continued with the example shown in fig. 2, that is, after the classification of all the pixels of the current frame is completed by adopting a classification manner based on the relationship between the pixels and the L pixels adjacent to the pixels, the pixels are classified based on the size of the pixels themselves.
Specifically, for a video with image bit depth bitdepth, the pixel value range is [0, (1 << bitdepth) − 1]. Pixels are classified based on the size of the pixel itself; assume they need to be classified into Class_t2 sets, where Class_t2 can be obtained through the adaptive selection process and can be understood as the number of target classification sets, and Y2(i, j) represents the luminance component of the third pixel. The classification formula is:

Pixel_class = (Y2(i, j) * Class_t2) >> bitdepth    (1)

Pixel_class represents the second identification value of the third pixel, which identifies the classification set to which the third pixel belongs.
It should be noted that 1 << bitdepth indicates that 1 is shifted left by bitdepth bits; for example, if bitdepth is 8, shifting 1 left by 8 bits gives 256, that is, an image with 8-bit depth has a pixel value range of [0, 255]. Likewise, the symbol ">>" indicates a right shift.
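Formula (1) above maps a pixel value into one of Class_t2 equal-width magnitude bins; a direct transcription:

```python
def second_classification(y, num_classes, bitdepth):
    """Formula (1): Pixel_class = (Y2(i, j) * Class_t2) >> bitdepth.
    Splits the range [0, (1 << bitdepth) - 1] into num_classes bins."""
    return (y * num_classes) >> bitdepth
```

With bitdepth = 8 and two classes, pixel values 0 to 127 map to class 0 and 128 to 255 map to class 1, matching the example of 2 second-type classification sets below.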
For example, 2 second-type classification sets are obtained by the above classification method based on the size of the pixel itself, and then all pixels of the current frame are classified into 2 × 17, that is, 34 classification sets by using the classification method based on the relationship between the pixel and L pixels adjacent to the pixel and the classification method based on the size of the pixel itself.
Here, as another optional implementation manner, determining the number of target classification sets of the second type classification set may include the following steps:
determining a candidate classification set, wherein the candidate classification set comprises at least one candidate classification, and each candidate classification corresponds to one classification set number;
it should be noted that, among the candidate classifications in the candidate classification set, different candidate classifications correspond to different numbers of classification sets.
The candidate classification set in this step may be understood as an initial candidate classification set. Wherein, the candidate classification set can be determined through preset settings. For example, the initial candidate classification set includes 5 candidate classifications, and the number of the corresponding classification sets in the 5 candidate classifications is 1,2,3,4, and 5, respectively, that is, the initial candidate classification set may be specifically represented as {1,2,3,4,5 }.
Traversing each candidate classification in the candidate classification set, and calculating to obtain the optimal rate-distortion cost corresponding to the classification of all pixels in the current frame according to each candidate classification;
the specific implementation principle of this step may refer to the previous implementation manner, that is, the T candidate classifications are traversed, and the implementation process of obtaining the optimal rate-distortion cost corresponding to the classification of all the pixels in the current frame according to each candidate classification is calculated, which is not described herein again.
And under the condition that the optimal rate distortion cost is smaller than a preset threshold value, circularly executing:
firstly, according to a first preset rule, obtaining an updated candidate classification set;
here, the preset threshold is less than or equal to the optimal rate distortion cost calculated when filtering is not performed based on the filtering parameters corresponding to each obtained classification set.
It should be noted that, the current candidate classification set can obtain an updated candidate classification set through a first preset rule.
For example, the current candidate classification set is {1,2,3,4,5}, the updated candidate classification set is {6,7,8,9,10} obtained through the first preset rule, and if the current candidate classification set is {6,7,8,9,10}, the updated candidate classification set is {11,12,13,14,15} obtained through the first preset rule, and so on.
That is, the first preset rule is that the number of the candidate classifications in the updated candidate classification set is the same, and the value of the number of the classification sets corresponding to the candidate classifications is sequentially increased in ascending order based on the last value in the candidate classification set before updating.
Of course, the first preset rule is not limited to this, and may be other rules, which are not specifically limited herein.
Then, traversing each candidate classification in the updated candidate classification set, calculating to obtain optimal rate-distortion costs corresponding to classification of all pixels in the current frame according to each candidate classification until the optimal rate-distortion costs corresponding to all candidate classifications in the updated candidate classification set are all larger than the preset threshold, obtaining the minimum value of all the obtained optimal rate-distortion costs, and determining the number of classification sets corresponding to the candidate classification corresponding to the minimum value as the number of target classification sets.
Here, if the optimal rate-distortion costs corresponding to all candidate classifications in the updated candidate classification set are all greater than the preset threshold, none of those candidate classifications achieves a better filtering effect with the classification manner based on the size of the pixel itself than without it. In order to improve operation efficiency and reduce the amount of computation, the minimum value of all the obtained optimal rate-distortion costs is taken directly, and the number of classification sets corresponding to the candidate classification corresponding to that minimum value is determined as the number of target classification sets.
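The batched search described above might be sketched as follows; the batch size of 5, the ascending update rule, and the cost-function interface are assumptions taken from the running example:

```python
def pick_num_classes(cost_of, threshold, batch=5, max_rounds=4):
    """Evaluate candidate class counts in batches ({1..5}, then {6..10}, ...).
    Stop when every cost in a batch fails to beat `threshold` (the cost of
    not filtering) or when max_rounds is reached; return the class count
    with the overall minimum cost, or None if nothing beats the threshold."""
    all_costs = {}
    start = 1
    for _ in range(max_rounds):
        batch_costs = {n: cost_of(n) for n in range(start, start + batch)}
        all_costs.update(batch_costs)
        if min(batch_costs.values()) >= threshold:
            break  # whole batch is no better than not filtering
        start += batch
    best = min(all_costs, key=all_costs.get)
    return best if all_costs[best] < threshold else None
```

Returning None corresponds to the fallback below of not applying the pixel-size classification at all when no candidate beats the no-filtering cost.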
On this basis, as an optional implementation manner, before determining, as the number of the target classification sets, the number of the classification sets corresponding to the candidate classification corresponding to the minimum value among all the obtained optimal rate-distortion costs, the method of the embodiment of the present invention may further include:
and (3) circularly executing:
obtaining the updated candidate classification set according to a second preset rule;
traversing each candidate classification in the updated candidate classification set, and calculating to obtain the optimal rate-distortion cost corresponding to the classification of all pixels in the current frame according to each candidate classification until a first cycle upper limit f is reached, wherein f is a positive integer.
It should be noted that the value of f may be preset or determined according to actual conditions, and is not specifically limited herein.
After the first cycle upper limit is reached, the minimum value of all the obtained optimal rate-distortion costs is taken, and the number of classification sets corresponding to the candidate classification corresponding to that minimum value is determined as the number of target classification sets.
the purpose of the present implementation is to further ensure that the obtained optimal rate-distortion cost is minimal, and to reduce the impact on computational efficiency on that basis.
As an optional implementation manner, after traversing each candidate classification in the candidate classification set and calculating to obtain an optimal rate-distortion cost corresponding to the classification of all pixels in the current frame according to each candidate classification, the method according to the embodiment of the present invention may further include:
under the condition that the optimal rate-distortion cost is not smaller than the preset threshold, the step of traversing the target pixels for classification based on the size of the pixels is not executed;
here, when the above condition is not satisfied, it is described that none of the candidate classifications in the first candidate classification set of the trial enables the frame image to be filtered with a classification method based on the size of the pixel itself more effectively than without the classification method. At this time, a filtering technique of classifying the target pixel by traversing the target pixel based on the size of the pixel itself, that is, filtering based on the classification method based on the size of the pixel itself is not performed.
Alternatively, the first and second electrodes may be,
in order to avoid errors in the calculation process, an update count, that is, the loop upper limit mentioned below, is preset. Specifically, in the case that no optimal rate-distortion cost is less than the preset threshold, the following is executed in a loop:
obtaining the updated candidate classification set according to a third preset rule;
traversing each candidate classification in the updated candidate classification set, calculating to obtain the optimal rate distortion cost corresponding to the classification of all pixels in the current frame according to each candidate classification,
before the second cycle upper limit g is reached, if the optimal rate-distortion costs corresponding to all candidate classifications in the updated candidate classification set are all larger than the preset threshold, obtaining the minimum value of all the obtained optimal rate-distortion costs, and determining the number of classification sets corresponding to the candidate classification corresponding to the minimum value as the number of target classification sets, wherein g is a positive integer;
alternatively, the first and second electrodes may be,
until the second upper cycle limit is reached, then
Obtaining the minimum value of all the obtained optimal rate-distortion costs smaller than the preset threshold value, and determining the number of the classification sets corresponding to the candidate classification corresponding to the minimum value as the number of the target classification sets;
it should be noted that, in this case, the second cycle upper limit is reached, but the situation that the optimal rate-distortion costs corresponding to all the candidate classifications in the updated candidate classification set are greater than the preset threshold is not found, which indicates that the optimal rate-distortion costs corresponding to the candidate classifications in the previous candidate classification set are less than the preset threshold.
Or, under the condition that all the obtained optimal rate-distortion costs are greater than the preset threshold, the step of traversing the target pixels for classification based on the sizes of the pixels is not executed.
Here, such a case corresponds to the case that, after the trial of the allowed maximum number of updates (i.e., the second cycle upper limit), none of the candidate classifications in all the trial candidate classification sets is found, so that the frame image can be filtered with a classification method based on the size of the pixel itself more effectively than without the classification method. Therefore, a filtering technique of classifying by traversing the target pixel based on the size of the pixel itself, that is, filtering by a classification based on the size of the pixel itself is not performed.
As an optional implementation manner, the step of obtaining the filtering parameter of each classification set may specifically include:
traversing each classification set in the M classification sets to obtain a filtering parameter corresponding to each classification set by adopting the following steps:
acquiring the number of pixels of a target classification set and a difference value between each pixel in the target classification set and an original pixel at a corresponding position, wherein the original pixel is a pixel before encoding, and the target classification set is any one of all classification sets in the M classification sets;
determining an initial filtering parameter of the target classification set according to the number of the pixels and the difference value;
specifically, the number of difference values obtained in the above steps equals the number of pixels in the target classification set. When the number of difference values is not less than 2, the sum of the difference values is calculated and divided by the number of pixels to obtain the initial filtering parameter of the target classification set.
Limiting the initial filtering parameters, and determining J filtering parameters corresponding to the target classification set;
in the step, firstly, a value range of a filtering parameter is given; then, based on the value range, limiting the initial filtering parameter to make the initial filtering parameter in the value range; and finally, determining J filter parameters corresponding to the target classification set from 0 to a first interval corresponding to the initial filter parameter or from the initial filter parameter to a second interval corresponding to 0.
Specifically, whether the first interval or the second interval is adopted depends on the positive and negative of the initial filtering parameter.
Traversing each filtering parameter in the J filtering parameters, and calculating to obtain the optimal rate distortion cost corresponding to the simulation filtering of the target classification set according to each filtering parameter;
and determining the filter parameter with the minimum optimal rate distortion cost in the J filter parameters as the filter parameter corresponding to the target classification set.
That is to say, the finally determined filtering parameter corresponding to the target classification set is the optimal filtering parameter.
In order to further ensure the encoding and decoding quality of the whole frame of image, before filtering each pixel according to the filtering parameter corresponding to the classification set to which the pixel belongs, the method may further include:
performing analog filtering on the current frame based on the acquired filtering parameters corresponding to each classification set;
and if the optimal rate-distortion cost obtained by the calculation after the analog filtering is smaller than a first rate-distortion cost, filtering each pixel according to the filtering parameter corresponding to the corresponding classification set, wherein the first rate-distortion cost is the optimal rate-distortion cost obtained by the calculation when the current frame is not filtered based on the filtering parameter corresponding to each obtained classification set.
Here, whether filtering needs to be performed may be determined by setting a frame-level filtering switch frame _ control _ flag (which may be represented by 1 bit).
If the optimal rate-distortion cost calculated after the analog filtering is less than the first rate-distortion cost, frame_control_flag is equal to 1 and filtering is required; otherwise, frame_control_flag is equal to 0 and no filtering is performed.
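The 1-bit frame-level switch decision reduces to a single comparison (a trivial sketch; the function name is illustrative):

```python
def frame_control_flag(filtered_cost, unfiltered_cost):
    """Frame-level filtering switch: 1 if simulated filtering lowers the
    optimal rate-distortion cost below the no-filtering cost, else 0."""
    return 1 if filtered_cost < unfiltered_cost else 0
```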
Wherein, filtering each pixel according to the filtering parameter corresponding to the classification set to which the pixel belongs, and obtaining the intermediate frame may include:
and traversing each pixel of all pixels of the current frame for filtering by adopting the following steps:
determining a target identification value of a classification set to which a first target pixel belongs;
in this step, the target identification value of the classification set to which the first target pixel belongs may be determined through the aforementioned step of obtaining M classification sets and the filter parameter corresponding to each classification set.
According to the target identification value, obtaining a filtering parameter corresponding to the classification set to which the first target pixel belongs;
filtering the first target pixel according to the filtering parameter to obtain an intermediate frame; wherein the first target pixel is any one of all pixels in the current frame.
Here, the first target pixel is a fourth pixel, and the fourth pixel is any one of all pixels in the current frame.
Here, when all pixels of the current frame are classified based on the luminance component of the pixel, the filtering process is described by taking a classification method (first classification method) based on the relationship between the pixel and L pixels adjacent to the pixel, and then a classification method (second classification method) based on the size of the pixel itself as an example.
First, for the luminance component Y3(i, j) of the fourth pixel, the first classification result C1 and the second classification result C2 are obtained according to the first classification method and the second classification method (see formula (1) above). From these, the identification value of the classification set to which Y3(i, j) belongs is obtained: Cy = C2 × m + C1, where m is the total number of classification sets produced by the first classification method.

Then, the corresponding filtering parameter offset is obtained according to Cy.

Finally, the pixel is filtered as Y3(i, j) = COM_CLIP3(0, (1 << bitdepth) − 1, Y3(i, j) + offset). This is repeated in turn until all pixels have been traversed.
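Putting the three steps together for one luma pixel (a sketch; a mapping from combined class id to offset is an assumed data layout):

```python
def filter_luma_pixel(y, c1, c2, m, offsets, bitdepth):
    """Combined class id Cy = C2 * m + C1 (m = number of classes from the
    first classification method), offset lookup by Cy, then clipping the
    filtered value to the legal pixel range [0, (1 << bitdepth) - 1]."""
    cy = c2 * m + c1
    filtered = y + offsets[cy]
    return max(0, min((1 << bitdepth) - 1, filtered))
```

With m = 17 first-method classes (a pixel compared against 8 neighbors), c1 = 1 and c2 = 1 give combined class 18, and the clip prevents the offset from pushing an 8-bit pixel past 255.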
As an optional implementation manner, filtering each pixel according to the filtering parameter corresponding to the classification set to which the pixel belongs may specifically include:
for each pixel, determining whether to perform filtering according to a rate distortion cost corresponding to a classification set to which a target component belongs, wherein the target component comprises at least one of a luminance component, a first chrominance component and a second chrominance component;
it should be noted that, in the case where the target component includes a luminance component, a first chrominance component, and a second chrominance component, filtering for the current frame may be performed based on filtering of the luminance component, the first chrominance component, and the second chrominance component of the pixel, which are independent of each other.
I.e. whether or not filtering is performed, is determined according to the condition of each component itself.
In this regard, it should be noted that, for the filtering of the current frame, the filtering may be performed based on a combination of any two of the luminance component, the first chrominance component, and the second chrominance component of the pixel, and in the case of the combination, the filtering may be performed based on a combination of the luminance component, the first chrominance component, and the second chrominance component, or may be performed based on a combination of two of the luminance component, the first chrominance component, and the second chrominance component.
For example, assuming that the luminance component and the first chrominance component of a pixel are jointly filtered, for each pixel the rate-distortion cost Cost_y corresponding to the classification set to which the luminance component belongs and the rate-distortion cost Cost_u corresponding to the classification set to which the first chrominance component belongs are summed to obtain a target rate-distortion cost Cost_y + Cost_u.

If Cost_y + Cost_u is smaller than the optimal rate-distortion cost calculated when the current frame is not filtered based on the filtering parameters corresponding to each obtained classification set, the current frame is filtered based on the luminance component and the first chrominance component of the pixel respectively; otherwise, neither the luminance component nor the chrominance component is filtered.
Here, the combination may also be a combination of three component filtering orders, for example, firstly, according to the method of this embodiment, each pixel is obtained, whether filtering is performed is determined based on the rate distortion cost according to the rate distortion cost corresponding to the classification set to which the luminance component belongs, and if the result is that filtering is not performed, filtering is not performed on the chrominance component of the current frame based on the pixel according to the method of this embodiment.
And if filtering is carried out, filtering each pixel according to the filtering parameters corresponding to the classification set to which the target component belongs.
Then, classifying all pixels of the current frame by adopting a classification mode based on the relation between the pixels and L adjacent pixels based on the brightness components of the pixels and then adopting a classification mode based on the sizes of the pixels; based on the chrominance components of the pixels, all the pixels of the current frame are classified only by adopting a classification mode based on the sizes of the pixels, and the brief description of the image filtering method is carried out.
As shown in fig. 3, it is first determined whether the current pixel value is a luminance component. If yes, the pixel is compared with its 8 surrounding pixels to perform the first classification, and then the second classification is performed according to the size of the pixel itself; if not, that is, the current pixel value is a chrominance component, only the second classification according to the size of the pixel is performed. Finally, the filtering parameter offset is calculated through Rate-Distortion Optimization (RDO).
For the luminance component of the current pixel, two classification manners are needed to classify all pixels of the coded frame. With the first classification method, the current pixel is compared in turn with the surrounding 8 pixels, and the pixels are classified into Class_t1 classification sets. The second classification method classifies pixels into Class_t2 classification sets according to pixel size, from small to large. Class_t2 can be obtained through the adaptive selection process; for details, refer to the description of the foregoing parts, which is not repeated here.
Through the two classifications, the total number of the obtained classifications is: classtotal=Classt1*Classt2
And classifying the chrominance components of the current pixel by only adopting a second classification method, so that the total number of classifications obtained by chrominance classification is as follows: classtotal=Classt2
Then, each classification set Class _ tmp is calculated (Class _ tmp E (0, Class)total-1)) of the filter parameter offset.
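The two-stage classification above can be sketched as follows. The exact neighbour-comparison rule is not fixed by the text, so the sketch assumes an illustrative rule (the count of strictly smaller neighbours, giving 9 first-stage classes) and an 8-bit band rule for the second stage; all names are hypothetical.

```python
def classify_pixels(frame, num_bands, max_val=256):
    """Two-stage classification sketch for the luminance component.

    Stage 1 compares each pixel with its 8 neighbours (here: the count of
    strictly smaller neighbours, an assumed rule giving Class_t1 = 9 sets).
    Stage 2 bins the pixel by its own magnitude into Class_t2 = num_bands
    sets. The joint index covers Class_total = Class_t1 * Class_t2 sets.
    """
    h, w = len(frame), len(frame[0])
    classes = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Stage 1: neighbour comparison (edge pixels use clamped coords).
            smaller = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if (dy, dx) == (0, 0):
                        continue
                    ny = min(max(y + dy, 0), h - 1)
                    nx = min(max(x + dx, 0), w - 1)
                    if frame[ny][nx] < frame[y][x]:
                        smaller += 1
            # Stage 2: band index from the pixel's own magnitude.
            band = min(frame[y][x] * num_bands // max_val, num_bands - 1)
            classes[y][x] = smaller * num_bands + band
    return classes
```

For a chrominance component only stage 2 would apply, so the class index reduces to the band value alone.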
Finally, in general, the method is applied to the loop filtering process. For the whole codec, the added code stream overhead is as follows: first, the YUV components each correspond to a 1-bit frame-level filtering switch frame_filter_control; if it equals 0, no filtering is performed, and if it equals 1, filtering is required. Then, since the total number of luminance classification sets is Class_total = Class_t1 × Class_t2, while the chrominance components use only the second classification so that Class_total = Class_t2, the three YUV components each correspond to Class_total offsets to be coded and decoded.
The three YUV components are coded independently of one another; the only difference lies in the calculation of the number of offsets to be coded, the rest being the same.
Here, M classification sets and the filtering parameter corresponding to each classification set are obtained, the classification sets being obtained by classifying all pixels of a current frame according to a preset classification mode, where the current frame is a current coding frame or a current decoding frame; each pixel is then filtered according to the filtering parameter corresponding to the classification set to which it belongs, to obtain an intermediate frame. The preset classification mode comprises at least one of the following: classification based on the magnitude of the pixel itself; classification based on the relation between a pixel and the L pixels adjacent to it, where M and L are positive integers. Thus, the current frame is coded and decoded using the filtering coefficient corresponding to each classification set obtained by the preset classification mode, and the coding and decoding quality of the whole frame image can be improved.
Wherein, filtering each pixel according to the filtering parameter corresponding to its belonging classification set to obtain an intermediate frame, which may include:
acquiring a filtering state corresponding to each block set in P block sets to which all pixels in the current frame belong; p is a positive integer;
under the condition that the filtering state corresponding to the block set to which the second target pixel belongs is filtering, filtering the second target pixel according to the filtering parameter corresponding to the classification set to which the second target pixel belongs;
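The block-set-gated filtering just described can be sketched as follows; the data layout (2-D lists, a dict of per-block-set flags) and all names are illustrative assumptions, not the patent's actual structures.

```python
def filter_with_block_gating(frame, class_map, offsets, block_flags, block_size):
    """Apply per-class offsets only in block sets whose filtering state is on.

    frame and class_map are 2-D lists; offsets maps a class index to its
    filter parameter; block_flags[(by, bx)] is True when that block set's
    filtering state is "filtering".
    """
    h, w = len(frame), len(frame[0])
    out = [row[:] for row in frame]
    for y in range(h):
        for x in range(w):
            block = (y // block_size, x // block_size)
            if block_flags.get(block, False):
                # The second target pixel case: its block set is filtered,
                # so the offset of its classification set is added.
                out[y][x] = frame[y][x] + offsets[class_map[y][x]]
    return out
```

Pixels in block sets whose state is "no filtering" are copied through unchanged.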
optionally, the current frame may be a current encoded frame or a current decoded frame. For example: under the condition that the current frame is the current coding frame, the filtering process is the filtering process of the coding stage; in the case that the current frame is the current decoding frame, the filtering process is the filtering process of the decoding stage.
Optionally, the target filtering information includes: all pixels in the current frame respectively correspond to the filtering parameters; or, M classification sets to which all pixels in the current frame belong and a filtering parameter corresponding to each classification set; alternatively, the target filtering information may be filtering information corresponding to a target frame chronologically preceding the current frame (the filtering information corresponding to the target frame may be M classification sets to which all pixels belong and a filtering parameter corresponding to each classification set).
Optionally, the size of a block of the set of blocks may be: m x n, wherein m and n are positive integers. The size of the block can be determined by rate distortion optimization, or can be preset at an encoding and decoding end; or, the number of blocks may be determined by rate distortion optimization, or may be preset at the encoding and decoding end; or the number of rows/columns when dividing the current frame into P block sets may be determined by rate distortion optimization, or may be preset at the codec end.
In the scheme, the filtering state corresponding to each block set in P block sets to which all pixels in a current frame belong is obtained by obtaining target filtering information corresponding to the current frame; and for each pixel, under the condition that the filtering state corresponding to the block set to which the pixel belongs is filtering, filtering according to the corresponding filtering parameter of the pixel in the target filtering information, which is favorable for improving the coding performance and solves the problem of poor coding performance in the current frame-level filtering mode.
Optionally, in a case that the current frame is a current coding frame, the step of obtaining a filtering state corresponding to each block set in P block sets to which all pixels in the current frame belong may specifically include:
for each block set, respectively calculating a first rate distortion cost1 under the condition of filtering and a second rate distortion cost2 under the condition of no filtering of pixels corresponding to the block set;
if the cost1 and cost2 corresponding to the target block set satisfy cost2 ≤ cost1, determining the filtering state of the target block set as no filtering;
if the cost1 and cost2 corresponding to the target block set satisfy cost2 > cost1, determining the filtering state of the target block set as filtering; wherein the target block set is any one of the P block sets.
In this embodiment, after the classification set of each pixel and the filtering parameter (offset) corresponding to each classification set are determined in the pixel classification process, the offset corresponding to each pixel is obtained. Therefore, the rate distortion costs cost1 and cost2 when the pixels are filtered and not filtered can be calculated based on the offsets of all pixels in the block set. If cost2 of the target block set is not greater than cost1, performing no filtering gives better performance, and it is determined that no pixel in the target block set is filtered; if cost2 of the target block set is greater than cost1, filtering gives better performance, and it is determined that all pixels in the target block set are filtered, for example according to the offset corresponding to the classification set to which each pixel belongs. In this way, each block set can decide its corresponding filtering state through a rate distortion optimization process, so that whether each pixel in the block set is filtered is determined on a block-set basis, and the coding performance is improved.
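The per-block-set decision can be sketched as below. The distortion measure (SSD against the original), the lambda weight, and the 1-bit flag cost are illustrative assumptions; the patent only requires comparing a filtered cost1 with an unfiltered cost2.

```python
def decide_block_flags(orig, recon, offsets_map, block_size, lam):
    """Per-block-set RD decision sketch: cost1 (filter) vs cost2 (no filter).

    orig, recon and offsets_map are 2-D lists; offsets_map holds each
    pixel's offset, taken from its classification set.
    """
    h, w = len(orig), len(orig[0])
    flags = {}
    for by in range(0, h, block_size):
        for bx in range(0, w, block_size):
            cost1 = cost2 = lam  # each state pays the 1-bit block_flag (assumed)
            for y in range(by, min(by + block_size, h)):
                for x in range(bx, min(bx + block_size, w)):
                    filtered = recon[y][x] + offsets_map[y][x]
                    cost1 += (orig[y][x] - filtered) ** 2
                    cost2 += (orig[y][x] - recon[y][x]) ** 2
            # The block set is filtered only when filtering strictly wins.
            flags[(by // block_size, bx // block_size)] = cost2 > cost1
    return flags
```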
Optionally, when the current frame is a current encoding frame, before the step of filtering, for each pixel, according to a corresponding filtering parameter of the pixel in the filtering information when a filtering state corresponding to a block set to which the pixel belongs is filtering, the method may further include:
calculating a first target rate distortion cost in a block mode and a second target rate distortion cost in a non-block mode;
if the first target rate distortion cost is less than the second target rate distortion cost, executing the step of filtering each pixel according to the corresponding filtering parameter of the pixel in the filtering information under the condition that the filtering state corresponding to the block set to which the pixel belongs is filtering;
wherein the block mode is: each pixel in the current frame is filtered based on a filtering state corresponding to a block set to which the pixel belongs; the non-block mode is as follows: each pixel in the current frame is filtered based on the filter state of the current frame.
In this embodiment, a rate distortion optimization mode is used to decide in advance whether a current frame has a better performance when filtering in a block mode or a non-block mode, and when the block mode has a better coding performance than the non-block mode, the block mode is used to perform filtering, that is, for each pixel, filtering is performed according to a filtering parameter corresponding to the pixel in the filtering information when a filtering state corresponding to a block set to which the pixel belongs is filtering; in the case that the non-block mode has better encoding performance than the block mode, the non-block mode is used for filtering, for example, the filtering may be performed in a frame-level manner, that is, each pixel is filtered according to the corresponding filtering parameter of the pixel in the filtering information.
Specifically, the decision whether the current frame performs better when filtered in the block mode or the non-block mode may be made by calculating the total rate distortion cost when the current frame is filtered according to the filtering state corresponding to each block set, that is, the first target rate distortion cost, and also obtaining the total rate distortion cost when the current frame is filtered without using the per-block-set filtering states, that is, the second target rate distortion cost (which may be calculated during the pixel classification process). By comparing the two, if the first target rate distortion cost is smaller than the second target rate distortion cost, the current frame achieves better coding performance with the block mode, and filtering is performed in the block mode; otherwise, the block mode is not used for filtering.
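A minimal sketch of this frame-level mode decision follows; the per-set flag bit cost and lambda weighting are assumptions used only to make the comparison concrete.

```python
def first_target_cost(block_costs, lam):
    """Total cost of the block mode (sketch).

    block_costs is a list of (cost_filtered, cost_unfiltered) pairs, one per
    block set; each set contributes the cheaper of its two options plus one
    signalled flag bit (an assumed 1-bit-per-set cost).
    """
    return sum(min(c1, c2) for c1, c2 in block_costs) + lam * len(block_costs)

def use_block_mode(block_costs, frame_cost, lam):
    """True when the first target cost (block mode) beats the second target
    cost (non-block, frame-level filtering)."""
    return first_target_cost(block_costs, lam) < frame_cost
```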
Optionally, after the step of calculating a first target rate-distortion cost in the block mode and a second target rate-distortion cost in the non-block mode, the method may further include:
and if the first target rate distortion cost is less than the second target rate distortion cost, encoding the first identification information of the filtering state corresponding to each block set in the P block sets into the code stream.
Specifically, if the first target rate distortion cost is smaller than the second target rate distortion cost, it indicates that the current frame has better coding performance by using the block mode, i.e., the current frame is filtered by using the block mode. Under the condition of filtering by adopting a block mode, the identification information of the filtering state corresponding to each block set is required to be coded into a code stream in the coding process; if the block mode is not adopted for filtering, the identification information of the filtering state corresponding to the block set does not need to be coded into the code stream.
Optionally, each block set may correspond to its own identification information of a filtering state. For example, whether the current block set is filtered is indicated by a 1-bit block identifier (block_flag) per block set, e.g. block_flag = 0 means no filtering is performed and block_flag = 1 means filtering is performed. Alternatively, a plurality of block sets may share identification information of a filtering state; for example, block sets with the same filtering state may share one piece of identification information, and the identification information of filtered block sets and of unfiltered block sets may be coded into the code stream separately. Alternatively, the block sets in one or more rows may share one or more pieces of identification information; for example, the filtering states of the block sets in one row are identified by one piece of identification information, so that the filtering states of the P block sets can be encoded by row identification information, saving code rate. Likewise, the block sets in one or more columns may share one piece of identification information; for example, the filtering states of the block sets in one column are identified by one piece of identification information, so that the filtering states of the P block sets can be encoded by several columns of identification information, saving code rate.
Specifically, whether the pixels in the current block set are filtered is controlled according to the identification information of the filtering state; and if the identification information of the filtering state corresponding to the current block set is false (0), determining that all pixels in the current block set do not filter, and if the identification information of the filtering state corresponding to the current block set is true (1), determining that all pixels in the current block set need to filter.
The following is described in connection with the encoding and decoding processes:
Whether the current frame needs filtering is indicated by a coded 1-bit frame control flag (frame_control_flag); if frame_control_flag is 0, no filtering is performed, and no other information, such as the classification sets of pixels and the filtering parameters, needs to be transferred.
If frame_control_flag is 1, filtering is needed. If block_component is 1, block-mode filtering is required, and numBlock block identifiers (block_flag) are encoded, where numBlock indicates the number of block sets corresponding to the current frame. Then, the number of pixel classification sets (classNum) is coded, and the filtering parameter (offset) corresponding to each classification set obtained under classNum is coded at the same time.
At the decoding end, the 1-bit frame_control_flag is decoded first; if it is 0, no filtering is performed.
If it is 1 and block_component is 1, the block mode is used for filtering, and numBlock block_flag values are decoded, where numBlock indicates the number of block sets corresponding to the current frame. After the block_flag decoding is completed, the number of pixel classification sets classNum and the offset corresponding to each classification set are decoded.
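The decoder-side parsing order described above can be sketched as follows; `read_bit` and `read_uint` are hypothetical bitstream readers, and the returned dict is an illustrative container rather than a real decoder state.

```python
def parse_frame_filter_header(read_bit, read_uint):
    """Decoder-side parsing sketch following the order described above.

    Returns None when frame_control_flag is 0 (no filtering, so nothing
    else is coded for this frame).
    """
    if read_bit() == 0:          # frame_control_flag
        return None
    block_component = read_bit()
    block_flags = []
    if block_component == 1:
        num_block = read_uint()  # number of block sets for this frame
        block_flags = [read_bit() for _ in range(num_block)]
    class_num = read_uint()      # number of classification sets
    offsets = [read_uint() for _ in range(class_num)]
    return {"block_component": block_component,
            "block_flags": block_flags,
            "class_num": class_num,
            "offsets": offsets}
```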
It should be noted that the luminance component (Y), the first chrominance component (U) and the second chrominance component (V) of the pixel may each be encoded in the above manner, that is, Y, U and V may be encoded independently.
Here, the step of obtaining the target filtering information corresponding to the current frame corresponds to the method step 101 in the embodiment of the present invention, and the detailed process is described in the above description and is not repeated here.
In combination with the above pixel classification manner, the step of calculating the first target rate-distortion cost in the block mode and the second target rate-distortion cost in the non-block mode may be specifically implemented by the following manner:
the first method is as follows:
in the process of traversing each candidate classification in at least one candidate classification, respectively calculating a third rate-distortion cost3 of each candidate classification in the non-block mode and a fourth rate-distortion cost4 of each candidate classification in the block mode, wherein the number of corresponding classification sets of different candidate classifications is different;
and taking the minimum cost3 in the cost3 corresponding to each candidate classification as the second target rate-distortion cost, and taking the cost4 corresponding to the candidate classification corresponding to the minimum cost3 as the first target rate-distortion cost.
If the current component is the Y component, the first-type classification set is obtained first, and then the second-type classification set is obtained; if the current component is a U or V component, only the second-type classification set is obtained.
Further, in the process of obtaining the second-type classification set, for the rate distortion optimization process corresponding to each candidate classification, it must be determined whether filtering needs to be performed in the block mode. For example, in the process of traversing T candidate classifications, assume that the four multiples 1, 2, 3 and 4 are selected for adaptation, so the current frame is divided under the four candidate classifications into m, 2m, 3m and 4m classification sets respectively (when the current component is Y, m is the number of first-type classification sets; when the current component is U or V, m is 1). For each candidate classification, two rate distortion costs are calculated in parallel: one is the optimal rate distortion cost with the block mode, and the other is the optimal rate distortion cost without the block mode.
It should be noted that, the above is exemplified by calculating the rate-distortion cost of each candidate class in the block mode and the non-block mode in the process of traversing T candidate classes, of course, the process of traversing at least one candidate class may also be a process in other embodiments in the above-mentioned determining the number of target class sets of the second-type class set, and the application is not limited thereto.
Then, the minimum rate distortion cost can be decided from the optimal non-block-mode rate distortion costs corresponding to the candidate classifications and taken as the rate distortion cost of the optimal candidate classification, that is, the second target rate distortion cost; the rate distortion cost with the block mode under the optimal candidate classification is then compared with the rate distortion cost without the block mode. If the rate distortion cost without the block mode is smaller, block_component of the current component is determined to be 0 and this identifier is written into the code stream; if the rate distortion cost with the block mode is smaller, block_component of the current component is determined to be 1, the identifier is written into the code stream, and the block_flag corresponding to each block set is also written into the code stream.
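Mode one can be sketched as below; `cost_non_block` and `cost_block` are hypothetical callables standing in for the two RDO passes (cost3 and cost4) evaluated for every candidate classification.

```python
def mode_one_decision(candidates, cost_non_block, cost_block):
    """Mode-one sketch: evaluate every candidate classification in both modes.

    Picks the candidate with the smallest non-block cost (cost3), takes that
    cost as the second target RD cost, and takes the same candidate's
    block-mode cost (cost4) as the first target RD cost.
    """
    best = min(candidates, key=cost_non_block)
    second_target = cost_non_block(best)   # smallest cost3
    first_target = cost_block(best)        # cost4 of the same candidate
    block_component = 1 if first_target < second_target else 0
    return best, first_target, second_target, block_component
```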
The second method comprises the following steps:
respectively calculating to obtain a fifth rate-distortion cost5 of each candidate classification in a non-block mode in the process of traversing each candidate classification in at least one candidate classification;
calculating a sixth rate distortion cost6 of the candidate classification corresponding to the smallest cost5 in the block mode in the cost5 corresponding to each candidate classification;
the minimum cost5 is taken as the second target rate-distortion cost, and the cost6 is taken as the first target rate-distortion cost.
If the current component is the Y component, the first-type classification set is obtained first, and then the second-type classification set is obtained; if the current component is a U or V component, only the second-type classification set is obtained.
The process of obtaining the second type classification set is a process of obtaining a classification set corresponding to an optimal candidate classification from a plurality of candidate classifications, and a rate distortion cost corresponding to the optimal candidate classification set, namely a second target rate distortion cost, is obtained at this time; further calculating the rate distortion cost of the optimal candidate classification in the block mode, namely the rate distortion cost is a first target rate distortion cost; and then comparing the first target rate distortion cost with the second target rate distortion cost to judge whether a block mode needs to be adopted. The specific determination process is similar to the above-mentioned manner, and is not described herein again.
The second mode adds only one additional rate distortion optimization pass for the block mode; compared with the first mode, the amount of block-mode rate distortion optimization computation is thus greatly reduced.
The first or second scheme used by the encoder may be preset, or may be selected between the two schemes by a rate-distortion optimization scheme. The specific filtering process is illustrated below for mode one:
the filtering processes of the YUV components are independent of each other, and the difference is only that the luminance and the two chromaticities are slightly different in the pixel classification method before filtering, and the above embodiment has been described. For example, if the frame filter control flag of the luminance component is frame _ filtered _ control equal to 1 and block _ component equal to 1, it indicates that the luminance component needs to be filtered and the block mode is used for filtering.
Firstly, judging a block _ flag value of a block set where a current pixel is located, if the block _ flag value is 0, indicating that the pixel in the current block set does not need filtering, skipping a filtering process of the pixel in the current block set, and judging a next block set. If block _ flag is 1, it indicates that the pixels in the current block set need filtering.
The foregoing part has already been explained for the filtering process performed on the current pixel, and is not described here again.
The image filtering method of the invention can be applied to the loop filtering process, and for the whole code, the increased code stream cost is as follows:
the YUV components respectively correspond to 1bit frame _ filt _ control; if frame _ filet _ control is 0, then no filtering is performed; if the block _ component is 1, it indicates that the current component adopts block mode filtering, so it needs to continue coding numlock and block _ flag of each block set, where the size of numlock is equal to the number of block sets of the current component, and it also needs to code filtering parameters, since the total number of classification sets corresponding to the luminance component is M1 × M2, and the total number of classification sets corresponding to the chrominance is M2, that is, the three components corresponding to YUV correspond to M offset needing to be coded and decoded respectively.
Optionally, a joint filtering manner may be adopted between the YUV components, specifically: determining whether to filter each pixel in the current block set according to a rate distortion cost corresponding to a classification set to which a target component belongs, wherein the target component comprises at least one of a luminance component, a first chrominance component and a second chrominance component;
it should be noted that, in the case where the target component includes a luminance component, a first chrominance component, and a second chrominance component, filtering for the current frame may be performed based on filtering of the luminance component, the first chrominance component, and the second chrominance component of the pixel, which are independent of each other.
I.e. whether or not filtering is performed, is determined according to the condition of each component itself.
In this regard, for filtering of the block set, filtering may be performed based on a combination of any two of the luminance component, the first chrominance component and the second chrominance component of the pixel, or based on a combination of all three components.
For example, assuming that the luminance component and the first chrominance component of a pixel are jointly filtered, the rate distortion cost Cost_y corresponding to the classification set to which the luminance component of each pixel belongs and the rate distortion cost Cost_u corresponding to the classification set to which the first chrominance component belongs are summed to obtain the target rate distortion cost Cost_y + Cost_u.
If Cost_y + Cost_u is smaller than the optimal rate distortion cost calculated, based on the filtering parameters corresponding to the obtained classification sets, when the current block set is not filtered, the current block set is filtered based on the luminance component and the first chrominance component of the pixel respectively; otherwise, neither the luminance component nor the chrominance component is filtered.
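The joint decision for a block set reduces to a single comparison; the sketch below assumes the three cost values have already been computed as described.

```python
def joint_luma_chroma_decision(cost_y, cost_u, cost_unfiltered):
    """Joint-filtering sketch for a block set (luma + first chroma).

    The summed per-class RD costs Cost_y + Cost_u are compared with the
    optimal cost obtained without filtering the block set; both components
    are filtered together, or neither is.
    """
    return (cost_y + cost_u) < cost_unfiltered
```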
Here, the combination may also impose an order on the filtering of the three components. For example, for each pixel in the current block set, whether filtering is performed is first determined, according to the method of this embodiment, based on the rate distortion cost corresponding to the classification set to which the luminance component belongs; if the decision is not to filter, the chrominance components of the current frame are likewise not filtered for that pixel according to the method of this embodiment.
If the decision is to filter, each pixel in the current block set is filtered according to the filtering parameter corresponding to the classification set to which the target component belongs.
Optionally, in a case that the current frame is a current coding frame, the step of obtaining target filtering information corresponding to the current frame may include:
if the current coding frame is the 1 st coding frame, taking the 1 st first filtering information obtained by classifying all pixels based on the current coding frame as the target filtering information;
if the current coding frame is the jth coding frame, determining the target filtering information according to the jth first filtering information and the time domain information list;
the jth first filtering information is first filtering information obtained by classifying all pixels of the jth coded frame; the time domain information list comprises K pieces of second filtering information, the second filtering information is obtained by classifying all pixels of a target coding frame, j and K are positive integers, and j is larger than 1, wherein the time sequence of the target coding frame is before the current coding frame.
Wherein if the current frame is a current coding frame, the method further comprises:
determining target filtering information corresponding to a current coding frame;
if the target filtering information is first filtering information obtained by classifying all pixels of the current coding frame, coding the target filtering information into a code stream;
if the target filtering information is second filtering information in a time domain information list, encoding first identification information corresponding to the target filtering information in the time domain information list into the code stream;
the time domain information list comprises K pieces of second filtering information, the second filtering information is filtering information obtained by classifying all pixels of a target coding frame, the time sequence of the target coding frame is before the current coding frame, and K is a positive integer.
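The signalling choice just listed can be sketched as follows; the `(kind, payload)` return value is an illustrative stand-in for real bitstream writing, and equality comparison of filtering information is an assumption for the sketch.

```python
def encode_filter_info(target_info, temporal_list):
    """Sketch of the encoder-side signalling choice.

    If the chosen target filtering information matches an entry of the time
    domain information list, only its index (the first identification
    information) is coded; otherwise the full first filtering information
    (classification sets and offsets) is coded.
    """
    for idx, second_info in enumerate(temporal_list):
        if second_info == target_info:
            return ("index", idx)      # reuse temporal filtering information
    return ("full", target_info)       # code classification sets + offsets
```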
Optionally, the target filtering information includes at least one of:
m sets of classifications of pixels, M being a positive integer;
filtering parameters corresponding to each of the classification sets.
For example: when the encoding end selects a target candidate classification from multiple candidate classifications and classifies all pixels in the current coding frame to obtain M classification sets, the target filtering information should include, in addition to the filtering parameter of each classification set, either the M classification sets of the target candidate classification or the target candidate classification itself. The decoding end can then decode directly according to the M classification sets, or determine, according to the target candidate classification, the M classification sets to which all pixels of the decoded frame corresponding to the current coding frame belong, and then decode according to the filtering parameters corresponding to the classification sets.
For another example: when the encoding end does not adopt a method of selecting a target candidate classification from a plurality of candidate classifications, that is, the encoding end and the decoding end can pre-agree the classification method of pixels, the target filtering information may only include the filtering parameters of each classification set.
Optionally, the number K of pieces of second filtering information that can be stored in the time domain information list may be fixed to m (where m is a positive integer), or may not be fixed; when the number of pieces of time domain information that can be stored in the list is not fixed, the number can be determined according to the optimal classification value obtained by training.
Optionally, when the current frame is encoded, if the target filtering information of the current frame adopts one piece of second filtering information in the time domain information list, but some filtering parameters in that second filtering information need to be adjusted, or the classification to which some pixels belong needs to be adjusted, the identification information corresponding to the second filtering information may be coded into the code stream together with the adjusted filtering parameters or the adjusted classification of those pixels, so that the coding performance can be ensured while the code stream consumption is reduced.
In the scheme, target filtering information corresponding to a current coding frame is determined; and when the target filtering information is first filtering information obtained by classifying all pixels of the current coding frame, coding the target filtering information into a code stream, and when the target filtering information is second filtering information in a time domain information list, coding first identification information corresponding to the target filtering information in the time domain information list into the code stream. Therefore, the current coding frame can be coded based on the similarity between the adjacent frames, the number of parameters in the code stream can be reduced, the consumption of the code stream is reduced, and the coding efficiency is improved.
Optionally, as an implementation: after the target filtering information corresponding to the current coding frame is determined, the method further includes:
and if the target filtering information is first filtering information obtained by classifying all pixels of the current coding frame, updating the target filtering information into the time domain information list.
Optionally, the second filtering information in the time domain information list may be updated for a group of I frames and their associated B frames or P frames, that is, for each I frame and its associated B frame or P frame, the second filtering information may individually correspond to a time domain information list (or it may be understood that for each I frame, the current time domain information list is empty), so that the B frame or P frame associated with each I frame may have higher accuracy when determining the target filtering information based on the time domain information list.
Optionally, the second filtering information in the time domain information list may instead be updated over all frames of the video data; that is, the time domain information list corresponding to the first frame (generally an I frame, also referred to as the 1st I frame) is empty, and for every frame after the first frame (whether an I frame, a B frame, or a P frame) the list is non-empty.
Of course, the updating method of the time domain information list in the embodiment of the present invention may also be other methods besides the above embodiments, and the embodiment of the present invention is not limited thereto.
The following describes methods for updating the time domain information list. If the number of pieces of second filtering information that the list can store is not fixed, the filtering information of the current encoded frame and of every encoded frame preceding it in time may all be stored.

If the number of pieces of second filtering information that the list can store is fixed, the number of entries in the list must be kept within the upper limit m of second filtering information that the list can hold.
Optionally, the step of updating the target filtering information to the time domain information list may specifically include:
if K is smaller than a preset threshold, adding the target filtering information into the time domain information list;
and if K is equal to a preset threshold, removing at least one piece of second filtering information in the time domain information list, and adding the target filtering information into the time domain information list.
Such as: when the number of pieces of second filtering information stored in the time domain information list has reached m, the list may be updated in a first-in first-out (FIFO) or last-in first-out (LIFO) manner.
Specifically, removing at least one piece of second filtering information from the time domain information list may be done as follows: remove the one or more pieces of second filtering information stored earliest, and store the first filtering information corresponding to the current encoded frame at the last position in the list (i.e., after the most recently stored second filtering information); or remove the one or more pieces stored most recently, and likewise store the first filtering information corresponding to the current encoded frame at the last position. Of course, the second filtering information in the time domain information list may also be updated in ways other than the above, and the embodiment of the present invention is not limited thereto.
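The bounded-list update just described can be sketched as follows; this is a minimal illustration, and the class name, the FIFO eviction policy, and the default capacity of 8 are assumptions drawn from the example given later in the text:

```python
from collections import deque

class TemporalInfoList:
    """A bounded list of per-frame filtering information (a sketch; the
    capacity m=8 and the FIFO policy mirror one option in the text)."""

    def __init__(self, capacity=8):
        self.capacity = capacity
        self.entries = deque()  # earliest-stored entry sits at the left

    def update(self, filtering_info):
        # If the list is full (K equals the preset threshold m), evict the
        # earliest-stored entry before appending the new one (FIFO).
        if len(self.entries) == self.capacity:
            self.entries.popleft()
        self.entries.append(filtering_info)  # store at the last position

    def __len__(self):
        return len(self.entries)
```

Both the encoder and the decoder would run the same update so the two lists stay synchronized.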
It should be noted that the encoding end and the decoding end should update the time domain information list in step, so that encoding and decoding remain synchronized and their accuracy is ensured.
Optionally, as another implementation: the step of updating the target filtering information to the time domain information list may specifically include:
if K is smaller than a preset threshold, adding the target filtering information to the time domain information list;
and if K is equal to a preset threshold, not updating the second filtering information in the time domain information list.
For example: under the condition that the number of the second filtering information that can be stored in the time domain information list is fixed, if the number of the second filtering information that has been stored in the time domain information list has reached the upper limit value m, the filtering information of the current coding frame and the filtering information after the current coding frame may not be stored, that is, the second filtering information in the time domain information list is not updated.
Optionally, the step of determining the target filtering information corresponding to the current coding frame may specifically include:
if the current coding frame is the 1st coding frame, taking the 1st piece of first filtering information, obtained by classifying all pixels of the current coding frame, as the target filtering information;
if the current coding frame is the jth coding frame, determining the target filtering information according to the jth first filtering information and the time domain information list;
wherein the jth first filtering information is first filtering information obtained by classifying based on all pixels of the jth encoded frame, and j is an integer greater than 1.
Optionally, when a mode that each I frame and its associated B frame or P frame individually correspond to one time domain information list is adopted, the 1 st encoded frame may be an I frame, and the jth encoded frame is a B frame or a P frame associated with the I frame.
Optionally, the step of determining the target filtering information according to the jth first filtering information and the time domain information list may specifically include:
calculating a first rate distortion cost corresponding to the jth coded frame based on the jth first filtering information;
respectively calculating a second rate distortion cost corresponding to the jth coded frame based on each second filtering information in the time domain information list;
and determining the target filtering information according to the first rate distortion cost and the second rate distortion cost.
Wherein, respectively calculating the second rate-distortion cost corresponding to the jth encoded frame based on each second filtering information in the time domain information list may be understood as: and traversing each second filtering information in the time domain information list aiming at the jth coding frame to obtain K second rate distortion costs.
Specifically, the classification set to which the pixel belongs and the filtering parameter corresponding to each classification set may be determined for the jth encoded frame according to each second filtering information, and the second rate-distortion cost corresponding to each second filtering information may be calculated based on the filtering parameter.
Optionally, the step of determining the target filtering information according to the first rate distortion cost and the second rate distortion cost may specifically include:
taking the minimum second rate-distortion cost in the K second rate-distortion costs as the optimal rate-distortion cost;
if the optimal rate distortion cost is smaller than the first rate distortion cost, determining second filtering information corresponding to the optimal rate distortion cost in the time domain information list as the target filtering information;
and if the optimal rate distortion cost is greater than or equal to the first rate distortion cost, determining the jth first filtering information as the target filtering information.
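The decision between the first rate-distortion cost and the K second rate-distortion costs can be sketched as below; the function name is illustrative, and the costs are assumed to have been computed already:

```python
def decide_target_filtering_info(first_info, first_cost, temporal_list, second_costs):
    """Choose between the j-th frame's own (first) filtering information and
    the best entry in the time domain information list, by rate-distortion
    cost. Names are illustrative; temporal_list[k] carries the second
    filtering information whose cost is second_costs[k]."""
    if temporal_list and second_costs:
        best_k = min(range(len(second_costs)), key=lambda k: second_costs[k])
        optimal_cost = second_costs[best_k]
        # A strictly smaller cost is required to pick a temporal-list entry;
        # on a tie the frame's own filtering information wins (per the text).
        if optimal_cost < first_cost:
            return temporal_list[best_k], best_k  # encode only the index
    return first_info, None  # encode the full filtering information
```

When the second return value is an index, only that identification information goes into the code stream; when it is None, the full first filtering information is coded.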
The following describes the above method with reference to a specific example in which the time domain information list can store 8 pieces of filtering information; the update process of the list is as follows:
for example: the 1st encoded frame (also called the start encoded frame) is an I frame (an I frame being a key frame), and the time domain information list corresponding to the start encoded frame is empty. The start encoded frame therefore determines its target filtering information only from filtering information outside the time domain information list: the optimal classification set obtained by classifying all pixels of the I frame, together with the filtering parameter corresponding to each classification set, is taken as the target filtering information, and this information (the optimal classification set of the I frame and the filtering parameter of each classification set) is stored into the time domain information list (also called the history list).
For a subsequent jth encoded frame that is a B or P frame associated with the I frame (a P frame being a forward-predicted frame that codes the difference between the current frame and the preceding I or P frame, and a B frame being a bidirectionally predicted frame that codes the difference between the current frame and the frames before and after it), suppose the current frame is a P frame; the time domain information list is then non-empty. First, the frame is subjected to pixel classification to obtain an optimal classification set and the filtering parameter corresponding to each classification set, denoted the first filtering information, and the rate-distortion cost corresponding to it is calculated. Meanwhile, each piece of filtering information in the current time domain information list is traversed, the rate-distortion cost of the P frame under each piece of second filtering information is calculated, and the smallest of these costs is selected, denoted the optimal rate-distortion cost.
If the optimal rate-distortion cost is less than the rate-distortion cost corresponding to the first filtering information, the current frame adopts that second filtering information from the time domain information list (i.e., the second filtering information corresponding to the optimal rate-distortion cost); the list does not need to be updated, and during encoding only the identification information of that second filtering information in the time domain information list needs to be coded into the code stream, which reduces code stream consumption. Otherwise, the P frame uses filtering information outside the time domain information list, i.e., the optimal classification set obtained by pixel classification of the P frame and the filtering parameter corresponding to each classification set, and during encoding this information is stored into the time domain information list.
If, before the P frame's optimal classification set and per-classification-set filtering parameters are added to the time domain information list, the number of pieces of second filtering information already recorded in the list has reached 8, the first piece of second filtering information in the list is removed in a first-in first-out manner and the P frame's information is then added at the last position of the list; if the number already recorded is less than 8, the P frame's optimal classification set and per-classification-set filtering parameters are simply added at the last position of the list.
It should be noted that the above is only an illustration of one embodiment of the present invention, which is not limited to a single implementation: the list may be updated first-in first-out or last-in first-out, updating may instead stop once the number of pieces of second filtering information recorded in the list reaches 8, and the upper limit of second filtering information the list can store need not be 8. The present invention is not limited in this regard.
The encoding and decoding process is illustrated below. Whether the current frame needs to be filtered is indicated by a 1-bit frame control flag (frame_control_flag); if frame_control_flag is 0, no filtering is performed and no other information, such as the pixel classification sets and the filtering parameters, needs to be transmitted.
If frame_control_flag is 1, filtering is needed. If the current frame is a non-I frame, a 1-bit history_flag is coded to indicate whether the current frame adopts filtering information from the time domain information list. If history_flag is 0, the current frame is in non-history mode, i.e., it does not use filtering information from the list; the number of classes (classNum) used for pixel classification of the current frame is then coded, followed by the filtering parameter (offset) of each classification set obtained under that classNum. If history_flag is 1, the current frame is in history mode, i.e., it uses filtering information from the time domain information list, so the index (history_index) of the entry used by the current frame is coded instead. The offsets therefore do not need to be coded, which reduces code stream consumption.
If frame_control_flag is 1 and the current frame is an I frame, then since the time domain information list is empty the I frame does not need to encode history_flag or history_index; the class number classNum for pixel classification can be coded directly, followed by the offset of each classification set obtained under that classNum.
At the decoding end, the 1-bit frame_control_flag is decoded first; if it is 0, no filtering is performed.
If frame_control_flag is 1 and the current frame is a non-I frame, the 1-bit history_flag is decoded; if history_flag is 1, history_index is decoded. If history_flag is 0, the number of classes classNum for pixel classification and the offset of each classification set obtained under that classNum are decoded.
If frame_control_flag is 1 and the current frame is an I frame, the number of classes classNum for pixel classification and the offset of each classification set obtained under that classNum are decoded.
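The frame-level syntax described above can be summarized in a sketch that emits the syntax elements in coding order; entropy coding is omitted, and the dictionary keys used to describe a frame are illustrative assumptions:

```python
def encode_frame_syntax(frame):
    """Emit the frame-level syntax elements described in the text as a flat
    list of (name, value) pairs. Entropy coding is omitted; the element
    names follow the text (frame_control_flag, history_flag, history_index,
    classNum, offset)."""
    syms = [("frame_control_flag", frame["filtered"])]
    if not frame["filtered"]:
        return syms  # no further information is transmitted
    if not frame["is_I_frame"]:
        use_history = frame.get("history_index") is not None
        syms.append(("history_flag", int(use_history)))
        if use_history:
            syms.append(("history_index", frame["history_index"]))
            return syms  # history mode: offsets need not be coded
    # I frame, or non-I frame in non-history mode: code class count + offsets
    syms.append(("classNum", frame["classNum"]))
    for off in frame["offsets"]:
        syms.append(("offset", off))
    return syms
```

The decoder would read the same elements in the same order, branching on frame_control_flag, the frame type, and history_flag.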
It should be noted that the luminance component (Y), the first chrominance component (U), and the second chrominance component (V) of a pixel may each be coded in the above manner, i.e., the Y, U, and V components may be coded independently. It should further be noted that the above coding method of the embodiment of the present invention can be used under all image coding configurations, for example all-intra (AI), random access (RA), and low delay (LD), or under a single configuration, which is not limited in the embodiments of the present invention.
The test results are shown in Table 1 below.
TABLE 1
The following describes in detail the implementation process of the embodiment of the present invention with reference to the embodiment.
First, the first loop filtering technique in the embodiment of the present invention is referred to as GSAO, and the target loop filtering techniques are, respectively, the deblocking filter (DF), SAO, and the adaptive loop filter (ALF), arranged in the order DF -> SAO -> ALF. The coding performance of the method is therefore strongly affected by where GSAO is placed within the loop filter.
Specifically, GSAO provides coding gain inside the loop filter whether it replaces SAO or coexists with SAO, and testing shows that GSAO performs best in the cases below. In the embodiment of the present invention, the position of GSAO in the loop filter has the following situations:
(1) When both GSAO and SAO exist in the loop filter:

When both GSAO and SAO are present in the loop filter, placing GSAO after SAO and before ALF yields the highest gain. The specific filtering position is shown in fig. 4.
(2) When GSAO replaces SAO:
When GSAO replaces SAO, i.e., only DF, GSAO, and ALF exist in the loop filter, placing GSAO after DF and before ALF yields the highest gain. The specific filtering position is shown in fig. 5.
Alternatively, for example, if GSAO filtering is selected for the chrominance component, Class_total filter parameters are obtained through training. Two encoding methods can be adopted for these Class_total filter parameters:

(1) the Class_total filter parameters are directly entropy coded;

(2) starting from the start position, the first non-zero filter parameter is found and its subscript is recorded as startIndex; starting from the end and traversing forward, the first non-zero filter parameter found gives endIndex. From these, the number of filter parameters to be encoded is obtained as num = endIndex - startIndex + 1. Then startIndex and num are encoded, together with the num filter coefficients starting from array index startIndex in the Class_total-length chroma filter coefficient array.
Optionally, the two encoding modes are decided by a Rate Distortion Optimization (RDO) process at an encoding end.
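Encoding method (2) can be sketched as follows; the function name is illustrative, and the num = endIndex - startIndex + 1 formula is a reconstruction of the garbled formula in the original text:

```python
def encode_nonzero_run(coeffs):
    """Mode (2): locate the first and last non-zero filter parameters and
    code only the coefficients between them. Returns (startIndex, num, run).
    An all-zero input falls back to an empty run; this edge-case handling
    is an assumption for illustration."""
    nonzero = [i for i, c in enumerate(coeffs) if c != 0]
    if not nonzero:
        return 0, 0, []
    start, end = nonzero[0], nonzero[-1]
    num = end - start + 1          # reconstructed count of coded parameters
    return start, num, coeffs[start:start + num]
```

The encoder's RDO process would then compare the bit cost of this run against coding all Class_total parameters directly (method (1)) and keep the cheaper mode.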
As can be seen from the above description, in the embodiment of the present invention, the coding performance can be improved without affecting the coding time, thereby improving the coding gain.
As shown in fig. 6, an embodiment of the present invention further provides an image filtering apparatus 600, including:
a first filtering module 601, configured to perform filtering processing on a current frame through a first loop filtering technique and a target loop filtering technique after the current frame is subjected to inverse transformation and dequantization, where the current frame is a current encoded frame or a current decoded frame;
the target loop filtering technique comprises at least one of:
a deblocking filtering technique;
a sample adaptive compensation technique;
adaptive loop filtering techniques;
the first loop filtering technique is a technique for performing filtering processing on the current frame based on first filtering information, where the first filtering information is obtained by classifying all pixels of the current frame.
Optionally, the first filtering module 601 includes:
the first processing sub-module is used for carrying out filtering processing on the current frame by utilizing the target loop filtering technology to obtain an intermediate frame;
the second processing submodule is used for carrying out filtering processing on the intermediate frame by utilizing a first loop filtering technology;
wherein the intermediate frame is any one of the following:
the current frame is only processed by deblocking filtering;
only the current frame is subjected to sample adaptive compensation processing;
a current frame processed only by an adaptive loop filtering technique;
the current frame is processed by the deblocking filtering but not processed by the sample adaptive compensation;
the current frame is processed by deblocking filtering and sample adaptive compensation;
the current frame is processed by deblocking filtering but not processed by adaptive loop filtering;
the current frame is processed by deblocking filtering and adaptive loop filtering;
the current frame is subjected to sample adaptive compensation processing but not subjected to adaptive loop filtering processing;
the current frame is subjected to sample adaptive compensation processing and adaptive loop filtering processing;
the current frame is processed by the deblocking filtering but is not processed by the sample adaptive compensation processing and the adaptive loop filtering;
the current frame is processed by deblocking filtering and sample adaptive compensation, but is not processed by adaptive loop filtering;
and the current frame is subjected to deblocking filtering, sample adaptive compensation processing and adaptive loop filtering processing.
Optionally, the first filtering module 601 includes:
the third processing sub-module is used for carrying out filtering processing on the current frame by utilizing a first loop filtering technology to obtain an intermediate frame;
and the fourth processing sub-module is used for performing filtering processing on the intermediate frame by using the target loop filtering technology.
Optionally, the third processing sub-module includes:
the first obtaining unit is used for obtaining M classification sets and a filtering parameter corresponding to each classification set; the classification set is obtained by classifying all pixels of the current frame according to a preset classification mode;
the first filtering unit is used for filtering each pixel according to the filtering parameters corresponding to the classification set to which the pixel belongs to obtain an intermediate frame;
the preset classification mode comprises at least one of the following modes:
classification based on the pixel's own value;

classification based on the relationship between a pixel and its L adjacent pixels; wherein M and L are both positive integers.
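The two preset classification modes can be illustrated with a minimal sketch; the band count, the value range, and the choice of L=2 horizontal neighbors are assumptions for illustration, not the patent's actual classification rules:

```python
def classify_by_value(pixel, num_bands=16, max_val=255):
    """Band-style classification: map the pixel's own value to one of
    num_bands equal intervals (the parameters are illustrative)."""
    return min(pixel * num_bands // (max_val + 1), num_bands - 1)

def classify_by_neighbors(pixel, left, right):
    """Edge-style classification: compare a pixel with its L=2 horizontal
    neighbors (the neighbor pattern is an assumption for illustration)."""
    greater = (pixel > left) + (pixel > right)
    smaller = (pixel < left) + (pixel < right)
    if greater == 2:
        return 0  # local peak
    if smaller == 2:
        return 1  # local valley
    return 2      # flat or monotone region
```

Each classification set so produced would then be assigned one filtering parameter, as the obtaining unit above describes.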
Optionally, the apparatus further comprises:
the second filtering module is used for carrying out analog filtering on the current frame based on the acquired filtering parameters corresponding to each classification set;
and the third filtering module is configured to filter each pixel according to the filtering parameter corresponding to the classification set to which the pixel belongs if the optimal rate distortion cost obtained through calculation after the analog filtering is less than the first rate distortion cost, where the first rate distortion cost is the optimal rate distortion cost obtained through calculation when the current frame is not filtered based on the filtering parameter corresponding to each obtained classification set.
Optionally, the first filtering unit includes:
the first determining subunit is used for determining a target identification value of the classification set to which the first target pixel belongs;
the second determining subunit is configured to obtain, according to the target identification value, the filtering parameter corresponding to the classification set to which the first target pixel belongs;
the first filtering subunit is configured to filter the first target pixel according to the filtering parameter to obtain an intermediate frame; wherein the first target pixel is any one of all pixels in the current frame.
Optionally, the first filtering unit includes:
a first obtaining subunit, configured to obtain a filtering state corresponding to each block set in P block sets to which all pixels in the current frame belong; p is a positive integer;
the second filtering subunit is configured to, when a filtering state corresponding to a block set to which a second target pixel belongs is filtering, filter the second target pixel according to a filtering parameter corresponding to a classification set to which the second target pixel belongs; wherein the second target pixel is any one of all pixels in the current frame.
Optionally, the current frame is a current coding frame, and the apparatus further includes:
the first determining module is used for determining target filtering information corresponding to the current coding frame;
the first processing module is used for encoding the target filtering information into a code stream if the target filtering information is first filtering information obtained by classifying all pixels of the current encoding frame;
the second processing module is used for coding the identification information corresponding to the target filtering information in the time domain information list into the code stream if the target filtering information is one piece of second filtering information in the time domain information list;
the time domain information list comprises K pieces of second filtering information, the second filtering information is filtering information obtained by classifying all pixels of a target coding frame, the time sequence of the target coding frame is before the current coding frame, and K is a positive integer.
The implementation principle and technical effect of the device of the embodiment of the invention are similar, and the embodiment is not described herein again.
As shown in fig. 7, an embodiment of the present invention further provides an image filtering apparatus, including: a processor 701, a memory 702, and a computer program stored on the memory 702 and operable on the processor 701, the processor 701 implementing the steps of the image filtering method described above when executing the computer program.
Specifically, the processor 701 is configured to, after a current frame is subjected to inverse transform and dequantization, perform filtering processing on the current frame by using a first loop filtering technique and a target loop filtering technique, where the current frame is a current encoded frame or a current decoded frame;
the target loop filtering technique comprises at least one of:
a deblocking filtering technique;
a sample adaptive compensation technique;
adaptive loop filtering techniques;
the first loop filtering technique is a technique for performing filtering processing on the current frame based on first filtering information, where the first filtering information is obtained by classifying all pixels of the current frame.
Optionally, the processor 701 is further configured to: filter the current frame by using the target loop filtering technique to obtain an intermediate frame;
filtering the intermediate frame by utilizing a first loop filtering technology;
wherein the intermediate frame is any one of the following:
a current frame processed only by deblocking filtering;
only the current frame is subjected to sample adaptive compensation processing;
a current frame processed only by the adaptive loop filtering technique;
the current frame is processed by the deblocking filtering but not processed by the sample adaptive compensation;
the current frame is processed by deblocking filtering and sample adaptive compensation;
the current frame is processed by deblocking filtering but not processed by adaptive loop filtering;
the current frame is processed by deblocking filtering and adaptive loop filtering;
a current frame which is subjected to sample adaptive compensation processing but is not subjected to adaptive loop filtering processing;
the current frame is subjected to sample adaptive compensation processing and adaptive loop filtering processing;
the current frame is processed by the deblocking filtering but is not processed by the sample adaptive compensation processing and the adaptive loop filtering;
the current frame is processed by deblocking filtering and sample adaptive compensation, but is not processed by adaptive loop filtering;
and the current frame is subjected to deblocking filtering, sample adaptive compensation processing and adaptive loop filtering processing.
Optionally, the processor 701 is further configured to: filter the current frame by using a first loop filtering technique to obtain an intermediate frame, and perform filtering processing on the intermediate frame by using the target loop filtering technique.
Optionally, the processor 701 is further configured to:
acquiring M classification sets and filtering parameters corresponding to each classification set; the classification set is obtained by classifying all pixels of the current frame according to a preset classification mode;
filtering each pixel according to the filtering parameters corresponding to the classification set to which the pixel belongs to obtain an intermediate frame;
the preset classification mode comprises at least one of the following modes:
classification based on the pixel's own value;

classification based on the relationship between a pixel and its L adjacent pixels; wherein M and L are both positive integers.
Optionally, the processor 701 is further configured to:
performing analog filtering on the current frame based on the acquired filtering parameters corresponding to each classification set;
and if the optimal rate-distortion cost calculated after the analog filtering is smaller than a first rate-distortion cost, filter each pixel according to the filtering parameter corresponding to the classification set to which the pixel belongs, wherein the first rate-distortion cost is the optimal rate-distortion cost calculated when the current frame is not filtered based on the obtained filtering parameter of each classification set.
Optionally, the processor 701 is further configured to:
and traversing each pixel of all pixels of the current frame for filtering by adopting the following steps:
determining a target identification value of a classification set to which a first target pixel belongs;
according to the target identification value, obtaining a filtering parameter corresponding to the classification set to which the first target pixel belongs;
filtering the first target pixel according to the filtering parameter to obtain an intermediate frame; wherein the first target pixel is any one of all pixels in the current frame.
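The per-pixel traversal described by these steps can be sketched as an additive-offset filter; the additive model and the 8-bit clipping range are assumptions in the style of SAO-type filters, not details stated by the text:

```python
def filter_frame(pixels, class_of, offsets):
    """Traverse every pixel: look up the identification value of its
    classification set, fetch that set's filtering parameter (offset), and
    add it to the pixel value (additive-offset model; the [0, 255] clipping
    range is an 8-bit assumption)."""
    out = []
    for i, p in enumerate(pixels):
        cls = class_of[i]             # target identification value
        filtered = p + offsets[cls]   # apply that set's filter parameter
        out.append(max(0, min(255, filtered)))
    return out
```

Applying this to every pixel of the current frame would produce the intermediate frame referred to above.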
Optionally, the processor 701 is further configured to:
acquiring a filtering state corresponding to each block set in P block sets to which all pixels in the current frame belong; p is a positive integer;
under the condition that the filtering state corresponding to the block set to which the second target pixel belongs is filtering, filtering the second target pixel according to the filtering parameter corresponding to the classification set to which the second target pixel belongs; wherein the second target pixel is any one of all pixels in the current frame.
Optionally, the processor 701 is further configured to:
determining target filtering information corresponding to a current coding frame;
if the target filtering information is first filtering information obtained by classifying all pixels of the current coding frame, coding the target filtering information into a code stream;
if the target filtering information is second filtering information in a time domain information list, coding identification information corresponding to the target filtering information in the time domain information list into a code stream;
the time domain information list comprises K pieces of second filtering information, the second filtering information is filtering information obtained by classifying all pixels of a target coding frame, the time sequence of the target coding frame is before the current coding frame, and K is a positive integer.
The image filtering device provided by the embodiment of the present invention may implement the above method embodiments, and the implementation principle and technical effect are similar, which are not described herein again.
Those skilled in the art will appreciate that all or part of the steps for implementing the above embodiments may be performed by hardware, or by associated hardware instructed by a computer program that includes instructions for performing some or all of the steps of the above methods; the computer program may be stored in a readable storage medium, which may be any form of storage medium.
In addition, an embodiment of the present invention further provides a readable storage medium on which a program is stored; when executed by a processor, the program implements the steps in the image filtering method and achieves the same technical effect, which is not repeated here to avoid redundancy.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be physically included alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute some steps of the methods according to various embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a portable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
While the preferred embodiments of the present invention have been described, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the following claims.

Claims (14)

1. An image filtering method, comprising:
after a current frame is subjected to inverse transformation and dequantization, filtering the current frame by a first loop filtering technology and a target loop filtering technology, wherein the current frame is a current coding frame or a current decoding frame;
the target loop filtering technique comprises at least one of:
a deblocking filtering technique;
a sample adaptive compensation technique;
adaptive loop filtering techniques;
the first loop filtering technology is a technology for filtering the current frame based on first filtering information, and the first filtering information is obtained by classifying all pixels of the current frame;
wherein, the filtering the current frame by the first loop filtering technique and the target loop filtering technique includes:
filtering the current frame by utilizing a first loop filtering technology to obtain an intermediate frame;
filtering the intermediate frame by using the target loop filtering technology;
wherein, the filtering the current frame by using the first loop filtering technology to obtain an intermediate frame includes:
acquiring M classification sets and a filtering parameter corresponding to each classification set; the classification set is obtained by classifying all pixels of the current frame according to a preset classification mode;
filtering each pixel according to the filtering parameters corresponding to the classification set to which the pixel belongs to obtain an intermediate frame;
the preset classification mode comprises at least one of the following modes:
classification based on the size of the pixels themselves;
classification based on the relationship between a pixel and its L adjacent pixels;
wherein M and L are both positive integers.
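The two preset classification modes of claim 1 can be sketched as follows. This is an illustrative interpretation, not codec code: the function and parameter names (`classify_pixels`, `m_bands`, `use_neighbors`) are assumptions, the intensity-band split and the SAO-style edge comparison with L = 2 horizontal neighbors are just plausible instances of "classification based on the size of the pixels themselves" and "classification based on the relationship between a pixel and its L adjacent pixels".

```python
import numpy as np

def classify_pixels(frame, m_bands=4, use_neighbors=False):
    """Assign each pixel of `frame` to one of M classification sets.

    Illustrative sketch of the two classification modes in claim 1:
      - intensity bands: classify each pixel by its own sample value;
      - neighbor relation: classify by comparison with the L = 2
        horizontal neighbors (an SAO-style edge pattern; borders wrap
        around here purely for brevity).
    """
    frame = np.asarray(frame, dtype=np.int32)
    if not use_neighbors:
        # Split the 8-bit sample range [0, 255] into M equal bands.
        band_width = 256 // m_bands
        return np.clip(frame // band_width, 0, m_bands - 1)
    # Edge classification against left/right neighbors (L = 2).
    left = np.roll(frame, 1, axis=1)
    right = np.roll(frame, -1, axis=1)
    sign = np.sign(frame - left) + np.sign(frame - right)
    return sign + 2  # map {-2, ..., 2} to class indices {0, ..., 4}
```

For example, a local peak (a pixel larger than both neighbors) falls into the highest edge class, which is the kind of set an encoder would give a negative corrective offset.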
2. The image filtering method according to claim 1, wherein the filtering the current frame by the first loop filtering technique and the target loop filtering technique includes:
filtering the current frame by using the target loop filtering technology to obtain an intermediate frame;
filtering the intermediate frame by utilizing a first loop filtering technology;
wherein the intermediate frame is any one of the following:
the current frame processed only by deblocking filtering;
the current frame processed only by sample adaptive compensation;
the current frame processed only by adaptive loop filtering;
the current frame processed by deblocking filtering but not by sample adaptive compensation;
the current frame processed by deblocking filtering and sample adaptive compensation;
the current frame processed by deblocking filtering but not by adaptive loop filtering;
the current frame processed by deblocking filtering and adaptive loop filtering;
the current frame processed by sample adaptive compensation but not by adaptive loop filtering;
the current frame processed by sample adaptive compensation and adaptive loop filtering;
the current frame processed by deblocking filtering but not by sample adaptive compensation or adaptive loop filtering;
the current frame processed by deblocking filtering and sample adaptive compensation but not by adaptive loop filtering;
and the current frame processed by deblocking filtering, sample adaptive compensation, and adaptive loop filtering.
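Claims 1 and 2 together describe two orderings of the same pipeline: the classification-based first loop filter may run either before or after the conventional in-loop filters. A minimal sketch of that ordering, with all filters reduced to plain callables (the function name `filter_pipeline` and the callable-based interface are assumptions for illustration only):

```python
def filter_pipeline(frame, first_loop_filter, target_filters, first_before=True):
    """Apply the classification-based first loop filter either before
    (claim 1) or after (claim 2) the target loop filters, e.g.
    deblocking, sample adaptive compensation, and adaptive loop
    filtering. Filters are modeled as plain callables for brevity."""
    if first_before:
        frame = first_loop_filter(frame)   # claim 1: first filter, then targets
        for f in target_filters:
            frame = f(frame)
    else:
        for f in target_filters:           # claim 2: targets first...
            frame = f(frame)
        frame = first_loop_filter(frame)   # ...then the classification filter
    return frame
```

Because the filters do not commute in general, the two orderings can produce different intermediate frames, which is why the claims enumerate them separately.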
3. The image filtering method according to claim 1, wherein before filtering each pixel according to the filtering parameter corresponding to the classification set to which the pixel belongs, the method further comprises:
performing analog filtering on the current frame based on the acquired filtering parameters corresponding to each classification set;
and if the optimal rate-distortion cost calculated after the analog filtering is smaller than a first rate-distortion cost, filtering each pixel according to the filtering parameter corresponding to the classification set to which the pixel belongs, wherein the first rate-distortion cost is the optimal rate-distortion cost calculated when the current frame is not filtered based on the acquired filtering parameter corresponding to each classification set.
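The rate-distortion comparison in claim 3 can be sketched with the standard Lagrangian cost J = D + λR. The patent only speaks of an "optimal rate-distortion cost", so the additive D + λR form, and the names `rd_cost` and `should_filter`, are assumptions chosen because this is the usual cost used in encoder mode decisions:

```python
def rd_cost(distortion, rate_bits, lam):
    """Lagrangian rate-distortion cost J = D + lambda * R (assumed form)."""
    return distortion + lam * rate_bits

def should_filter(dist_filt, rate_filt, dist_raw, rate_raw, lam):
    """Claim 3 sketch: enable the classification-based filter only when
    simulated (analog) filtering yields a strictly smaller cost than
    leaving the current frame unfiltered."""
    return rd_cost(dist_filt, rate_filt, lam) < rd_cost(dist_raw, rate_raw, lam)
```

Filtering lowers distortion but costs extra bits for the filter parameters; the decision fires only when the distortion saving outweighs the rate overhead at the operating point λ.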
4. The image filtering method according to claim 1, wherein the filtering each pixel according to the filtering parameter corresponding to the classification set to which the pixel belongs to obtain the intermediate frame comprises:
and traversing each pixel in all pixels of the current frame for filtering by adopting the following steps:
determining a target identification value of a classification set to which the first target pixel belongs;
according to the target identification value, obtaining a filtering parameter corresponding to the classification set to which the first target pixel belongs;
filtering the first target pixel according to the filtering parameter to obtain an intermediate frame; wherein the first target pixel is any one of all pixels in the current frame.
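The per-pixel lookup of claim 4 amounts to: read the identification value of the pixel's classification set, fetch that set's filtering parameter, apply it. In the sketch below the filtering parameter is modeled as a simple additive offset per class (an assumption; the patent does not fix the parameter's form), and `filter_by_class` is an illustrative name:

```python
import numpy as np

def filter_by_class(frame, class_map, offsets):
    """Claim 4 sketch: for every pixel, use the identification value in
    `class_map` to fetch the filtering parameter of its classification
    set (`offsets[k]` for set k, modeled as an additive offset) and
    apply it, yielding the intermediate frame."""
    frame = np.asarray(frame, dtype=np.int32)
    class_map = np.asarray(class_map, dtype=np.intp)
    offsets = np.asarray(offsets, dtype=np.int32)
    return frame + offsets[class_map]   # vectorised per-pixel lookup
```

The fancy-indexing expression `offsets[class_map]` performs the "target identification value → filtering parameter" lookup for all pixels at once instead of traversing them one by one.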
5. The image filtering method according to claim 1, wherein the filtering each pixel according to the filtering parameter corresponding to the classification set to which the pixel belongs to obtain the intermediate frame comprises:
acquiring a filtering state corresponding to each block set in P block sets to which all pixels in the current frame belong; p is a positive integer;
under the condition that the filtering state corresponding to the block set to which the second target pixel belongs is filtering, filtering the second target pixel according to the filtering parameter corresponding to the classification set to which the second target pixel belongs; wherein the second target pixel is any one of all pixels in the current frame.
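Claim 5 adds a block-level on/off gate on top of the per-class parameters: pixels belong to P block sets, each carrying a filtering state, and only pixels in blocks whose state is "filter" are touched. A sketch under the same additive-offset assumption as above (all names are illustrative):

```python
import numpy as np

def filter_with_block_flags(frame, class_map, offsets, block_on, block_size):
    """Claim 5 sketch: partition the frame into square blocks of
    `block_size`; a pixel is filtered with its classification set's
    parameter only when its block's filtering state (`block_on`) is
    enabled. Offsets as parameters are an illustrative assumption."""
    frame = np.asarray(frame, dtype=np.int32)
    class_map = np.asarray(class_map, dtype=np.intp)
    offsets = np.asarray(offsets, dtype=np.int32)
    out = frame.copy()
    h, w = frame.shape
    for by in range(0, h, block_size):
        for bx in range(0, w, block_size):
            if block_on[by // block_size][bx // block_size]:
                sl = (slice(by, by + block_size), slice(bx, bx + block_size))
                out[sl] = frame[sl] + offsets[class_map[sl]]
    return out
```

This mirrors how codecs signal filter on/off per coding tree unit while keeping one shared parameter table per frame.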
6. The image filtering method according to claim 1, wherein the current frame is a current encoded frame, the method further comprising:
determining target filtering information corresponding to a current coding frame;
if the target filtering information is first filtering information obtained by classifying all pixels of the current coding frame, coding the target filtering information into a code stream;
if the target filtering information is second filtering information in a time domain information list, encoding first identification information corresponding to the target filtering information in the time domain information list into the code stream;
the time domain information list comprises K pieces of second filtering information, the second filtering information is filtering information obtained by classifying all pixels of a target coding frame, the time sequence of the target coding frame is before the current coding frame, and K is a positive integer.
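The encoder-side decision of claim 6 is a reuse cache: if the frame's filtering information already appears in the time domain information list built from earlier frames, only its identification (an index) is coded; otherwise the full information is coded and added to the list. A minimal sketch, with the function name and the tuple-based return format as assumptions:

```python
def encode_filter_info(target_info, temporal_list):
    """Claim 6 sketch: code an index into the time domain information
    list when the target filtering information is already there;
    otherwise code the full information and append it so later frames
    can reference it. Returns an illustrative (kind, payload) tuple
    standing in for the actual code-stream syntax."""
    if target_info in temporal_list:
        return ("index", temporal_list.index(target_info))
    temporal_list.append(target_info)   # make it reusable for later frames
    return ("full", target_info)
```

Coding a short index instead of a full parameter set is what saves rate when consecutive frames share similar statistics.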
7. An image filtering apparatus, comprising:
the first filtering module is used for filtering a current frame through a first loop filtering technology and a target loop filtering technology after the current frame is subjected to inverse transformation and dequantization processing, wherein the current frame is a current coding frame or a current decoding frame;
the target loop filtering technique comprises at least one of:
a deblocking filtering technique;
a sample adaptive compensation technique;
adaptive loop filtering techniques;
the first loop filtering technology is a technology for filtering the current frame based on first filtering information, and the first filtering information is obtained by classifying all pixels of the current frame;
wherein the first filtering module comprises:
the third processing sub-module is used for carrying out filtering processing on the current frame by utilizing a first loop filtering technology to obtain an intermediate frame;
the fourth processing submodule is used for carrying out filtering processing on the intermediate frame by utilizing the target loop filtering technology;
wherein the third processing submodule comprises:
the first obtaining unit is used for obtaining M classification sets and a filtering parameter corresponding to each classification set; the classification set is obtained by classifying all pixels of the current frame according to a preset classification mode;
the first filtering unit is used for filtering each pixel according to the filtering parameters corresponding to the classification set to which the pixel belongs to obtain an intermediate frame;
the preset classification mode comprises at least one of the following modes:
classification based on the size of the pixels themselves;
classification based on the relationship between a pixel and its L adjacent pixels;
wherein M and L are both positive integers.
8. The apparatus of claim 7, wherein the first filtering module comprises:
the first processing sub-module is used for carrying out filtering processing on the current frame by utilizing the target loop filtering technology to obtain an intermediate frame;
the second processing sub-module is used for carrying out filtering processing on the intermediate frame by utilizing a first loop filtering technology;
wherein the intermediate frame is any one of the following:
the current frame processed only by deblocking filtering;
the current frame processed only by sample adaptive compensation;
the current frame processed only by adaptive loop filtering;
the current frame processed by deblocking filtering but not by sample adaptive compensation;
the current frame processed by deblocking filtering and sample adaptive compensation;
the current frame processed by deblocking filtering but not by adaptive loop filtering;
the current frame processed by deblocking filtering and adaptive loop filtering;
the current frame processed by sample adaptive compensation but not by adaptive loop filtering;
the current frame processed by sample adaptive compensation and adaptive loop filtering;
the current frame processed by deblocking filtering but not by sample adaptive compensation or adaptive loop filtering;
the current frame processed by deblocking filtering and sample adaptive compensation but not by adaptive loop filtering;
and the current frame processed by deblocking filtering, sample adaptive compensation, and adaptive loop filtering.
9. The apparatus of claim 7, further comprising:
the second filtering module is used for carrying out analog filtering on the current frame based on the acquired filtering parameters corresponding to each classification set;
and the third filtering module is configured to filter each pixel according to the filtering parameter corresponding to the classification set to which the pixel belongs if the optimal rate distortion cost obtained through calculation after the analog filtering is less than the first rate distortion cost, where the first rate distortion cost is the optimal rate distortion cost obtained through calculation when the current frame is not filtered based on the filtering parameter corresponding to each obtained classification set.
10. The apparatus of claim 7, wherein the first filtering unit comprises:
the first determining subunit is used for determining a target identification value of the classification set to which the first target pixel belongs;
the second determining subunit is configured to obtain, according to the target identification value, a filtering parameter corresponding to the classification set to which the first target pixel belongs;
the first filtering subunit is configured to filter the first target pixel according to the filtering parameter to obtain an intermediate frame; wherein the first target pixel is any one of all pixels in the current frame.
11. The apparatus of claim 7, wherein the first filtering unit comprises:
a first obtaining subunit, configured to obtain a filtering state corresponding to each block set in P block sets to which all pixels in the current frame belong; p is a positive integer;
the second filtering subunit is configured to, when a filtering state corresponding to a block set to which a second target pixel belongs is filtering, filter the second target pixel according to a filtering parameter corresponding to a classification set to which the second target pixel belongs; wherein the second target pixel is any one of all pixels in the current frame.
12. The apparatus of claim 7, wherein the current frame is a current encoded frame, the apparatus further comprising:
the first determining module is used for determining target filtering information corresponding to the current coding frame;
the first processing module is used for encoding the target filtering information into a code stream if the target filtering information is first filtering information obtained by classifying all pixels of the current encoding frame;
the second processing module is used for coding the identification information corresponding to the target filtering information in the time domain information list into the code stream if the target filtering information is one piece of second filtering information in the time domain information list;
the time domain information list comprises K pieces of second filtering information, the second filtering information is filtering information obtained by classifying all pixels of a target coding frame, the time sequence of the target coding frame is before the current coding frame, and K is a positive integer.
13. An image filtering apparatus comprising: a memory, a processor, and a program stored on the memory and executable on the processor; wherein,
the processor, which is configured to read a program in a memory to implement the steps in the image filtering method according to any one of claims 1 to 6.
14. A readable storage medium storing a program, wherein the program when executed by a processor implements the steps in the image filtering method according to any one of claims 1 to 6.
CN202010509310.1A 2020-06-07 2020-06-07 Image filtering method, device, equipment and storage medium Active CN111654710B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010509310.1A CN111654710B (en) 2020-06-07 2020-06-07 Image filtering method, device, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111654710A CN111654710A (en) 2020-09-11
CN111654710B true CN111654710B (en) 2022-06-03

Family

ID=72344545


Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111866506A (en) * 2020-06-07 2020-10-30 咪咕文化科技有限公司 Image coding method, device, equipment and readable storage medium
CN114615494A (en) * 2020-12-04 2022-06-10 咪咕文化科技有限公司 Image processing method, device and equipment
CN113382257B (en) * 2021-04-19 2022-09-06 浙江大华技术股份有限公司 Encoding method, encoding device, electronic device and computer-readable storage medium
CN114222118B (en) * 2021-12-17 2023-12-12 北京达佳互联信息技术有限公司 Encoding method and device, decoding method and device
CN114387192B (en) * 2021-12-22 2024-05-03 广东中星电子有限公司 Image filtering method, device, electronic equipment and computer readable medium
CN114363613B (en) * 2022-01-10 2023-11-28 北京达佳互联信息技术有限公司 Filtering method and filtering device
CN115063327B (en) * 2022-08-19 2022-11-08 摩尔线程智能科技(北京)有限责任公司 Image processing method and device, and video processing method and device
CN116760983B (en) * 2023-08-09 2023-11-28 中国科学技术大学 Loop filtering method and device for video coding

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
US9807403B2 (en) * 2011-10-21 2017-10-31 Qualcomm Incorporated Adaptive loop filtering for chroma components
CN103379319B (en) * 2012-04-12 2018-03-20 中兴通讯股份有限公司 A kind of filtering method, wave filter and the encoder and decoder comprising the wave filter
CN115134607A (en) * 2015-06-11 2022-09-30 杜比实验室特许公司 Method for encoding and decoding image using adaptive deblocking filtering and apparatus therefor
CN111083474A (en) * 2019-12-03 2020-04-28 咪咕文化科技有限公司 Filtering method for inter-frame prediction, electronic device and computer-readable storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant