CN118101933A - Filtering method, device and equipment - Google Patents

Filtering method, device and equipment

Info

Publication number
CN118101933A
CN118101933A (application CN202410175711.6A)
Authority
CN
China
Prior art keywords
lcu
region
filtering
filter
coefficient
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410175711.6A
Other languages
Chinese (zh)
Inventor
潘冬萍
孙煜程
陈方栋
王莉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202410175711.6A priority Critical patent/CN118101933A/en
Publication of CN118101933A publication Critical patent/CN118101933A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/117Filters, e.g. for pre-processing or post-processing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/44Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The application provides a filtering method, a device and equipment, wherein the filtering method comprises the following steps: performing region division on the luminance component of a current image frame; determining the region category of an LCU based on the region category identifier of the LCU parsed from the code stream; determining the filter coefficient of the LCU based on the region category of the LCU and the filter coefficients parsed from the code stream; and performing ALF filtering on the pixels of the LCU one by one based on the filter coefficient of the LCU. The method can optimize the filtering effect and improve encoding and decoding performance.

Description

Filtering method, device and equipment
Technical Field
The present application relates to video encoding and decoding technologies, and in particular, to a filtering method, apparatus, and device.
Background
Complete video coding generally includes operations such as prediction, transformation, quantization, entropy coding, filtering, and the like.
A quantization operation follows block-based motion compensation, which introduces coding noise and causes video quality distortion; loop post-processing techniques are used to reduce the effects of such distortion. Loop post-processing comprises three techniques: deblocking filtering (Deblocking Filter, DBF for short), sample adaptive offset (Sample Adaptive Offset, SAO for short), and adaptive loop filtering (Adaptive Loop Filter, ALF for short).
The ALF technique used in the coding framework of the Audio Video coding Standard (AVS for short) computes, according to the principle of wiener filtering, the optimal linear filter between the original signal and the distorted signal in the mean-square sense.
However, it has been found in practice that the ALF technique divides regions in a fixed manner that does not take pixel characteristics into account, which affects and ultimately degrades ALF filtering performance.
Disclosure of Invention
In view of this, the present application provides a filtering method, apparatus and device.
Specifically, the application is realized by the following technical scheme:
According to a first aspect of an embodiment of the present application, there is provided a filtering method applied to a decoding end device, the method including:
Performing region division on a luminance component of a current image frame;
Determining the region category of the LCU based on the region category identification of the LCU obtained by analyzing the code stream;
Determining the filter coefficient of the LCU based on the region category of the LCU and the filter coefficient obtained by analyzing the code stream;
ALF filtering is carried out on pixels of the LCU one by one based on the filtering coefficient of the LCU.
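The decoding flow of the first aspect can be sketched in a simplified, hypothetical form. All names are illustrative and not from the patent; a 1-D 3-tap symmetric filter stands in for the 2-D ALF shape, and the coefficients of each set sum to 64 so the shift-by-6 normalization is exact:

```python
def alf_filter_pixel(pixels, i, coeffs, shift=6):
    """Filter one pixel with a symmetric 3-tap kernel: c0*(left+right) + c1*center,
    with rounding and clipping to the 8-bit range. Out-of-range neighbours are
    clamped to the row ends."""
    left = pixels[max(i - 1, 0)]
    right = pixels[min(i + 1, len(pixels) - 1)]
    acc = coeffs[0] * (left + right) + coeffs[1] * pixels[i]
    return min(255, max(0, (acc + (1 << (shift - 1))) >> shift))

# Hypothetical coefficient sets indexed by region category, and region-category
# ids as if parsed from the code stream for two LCUs.
coeff_sets = {0: (1, 62), 1: (4, 56)}
lcu_region_ids = [0, 1]
lcu_coeffs = [coeff_sets[c] for c in lcu_region_ids]

row = [10, 90, 90, 90, 10]
filtered = [alf_filter_pixel(row, i, lcu_coeffs[0]) for i in range(len(row))]
```

A uniform input row passes through unchanged because the coefficients are normalized, while edges are slightly smoothed.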
According to a second aspect of an embodiment of the present application, there is provided a filtering method applied to an encoding/decoding end device, the method including:
in the ALF filtering of any pixel within a current filtering unit, for any reference pixel, when the reference pixel is not within the current filtering unit:
Under the condition that the pixel value of the reference pixel cannot be acquired, the pixel closest to the reference pixel within the current filtering unit and its boundary area is used for filtering instead of the reference pixel; the boundary area comprises the area outside the left boundary or outside the right boundary of the current filtering unit, the area outside the left boundary of the current filtering unit comprises part or all of the filtering unit adjacent to the left side of the current filtering unit, and the area outside the right boundary of the current filtering unit comprises part or all of the filtering unit adjacent to the right side of the current filtering unit;
Otherwise, the reference pixel is used for filtering.
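A minimal sketch of the substitution rule above, assuming the unavailable reference pixel is replaced by clamping its coordinates to the nearest sample within the current filtering unit extended by its left/right border columns (the bounds and names are illustrative, not the patent's):

```python
def substitute_reference(x, y, x_min, x_max, y_min, y_max):
    """Clamp an out-of-range reference position to the nearest available
    sample; the horizontal bounds may include border columns outside the
    left/right boundary of the current filtering unit."""
    return (min(max(x, x_min), x_max), min(max(y, y_min), y_max))

# Filtering unit spans columns 0..63 and rows 0..63;
# 2 border columns are assumed available on each side.
pos = substitute_reference(-4, 10, 0 - 2, 63 + 2, 0, 63)
```

With these bounds, a reference at column -4 is replaced by the sample at column -2 (the nearest available border column) in the same row.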
According to a third aspect of the embodiment of the present application, there is provided a filtering method applied to a decoding end device, the method including:
When determining that the current LCU of the current image frame starts ALF filtering, acquiring a region coefficient identifier of a merging region to which the current LCU belongs;
Acquiring a filtering coefficient of the current LCU based on a region coefficient identifier of a merging region to which the current LCU belongs; the region coefficient identifier is used for identifying the filter coefficient used by the merging region to which the LCU belongs in the preset multiple groups of filter coefficients;
ALF filtering is carried out on pixels of the current LCU one by one based on the filtering coefficient of the current LCU.
According to a fourth aspect of an embodiment of the present application, there is provided a filtering method applied to a decoding end device, the method including:
when determining that ALF filtering is started on a current LCU of a current frame image, acquiring a coefficient selection identifier of the current LCU;
Determining a filtering coefficient of the current LCU based on a merging area to which the current LCU belongs and a coefficient selection identifier of the current LCU; wherein the coefficient selection identification is used for identifying a filter coefficient selected for use by the current LCU in a plurality of sets of candidate filter coefficients;
ALF filtering is carried out on pixels of the current LCU one by one based on the filtering coefficient of the current LCU.
According to a fifth aspect of an embodiment of the present application, there is provided a filtering method applied to a decoding end device, the method including:
When determining that ALF filtering is started on a current LCU of a current frame image, acquiring a filter shape of a merging area of the current LCU based on the merging area of the current LCU;
Based on the filter shape, obtaining a filter coefficient of a merging area to which the current LCU belongs;
ALF filtering is performed on pixels of the current LCU one by one based on the filter shape and the filter coefficients.
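Because the filter shapes in question are centrosymmetric, the filter shape parsed for a merging region also determines how many coefficients the decoder must read. A small sketch: the 29-sample support of the 7×7 cross plus 5×5 square shape follows from FIG. 4A/4B (positions P0..P28); any other shape entry would be an assumption:

```python
def num_unique_coeffs(support_size):
    """A centrosymmetric filter pairs positions around the centre, so a
    support of N samples needs (N + 1) // 2 distinct coefficients."""
    return (support_size + 1) // 2

# Support size taken from FIG. 4B, where positions run P0..P28.
SHAPE_SUPPORT = {"7x7_cross_plus_5x5_square": 29}
```

This reproduces the 15 trained coefficients mentioned for the FIG. 4A shape.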
According to a sixth aspect of the embodiment of the present application, there is provided a filtering method applied to a decoding end device, the method including:
when determining that ALF filtering is started on a current LCU of a current frame image, acquiring a filtering coefficient of a merging region to which the current LCU belongs and a weight coefficient of each reference pixel position based on the region to which the current LCU belongs;
and performing ALF filtering on the pixels of the current LCU one by one based on the filtering coefficient and the weight coefficient of each reference pixel position.
According to a seventh aspect of the embodiment of the present application, there is provided a filtering method applied to an encoding end device, the method including:
Performing region division on a luminance component of a current image frame;
Classifying each LCU in any region, and dividing the region into a plurality of region categories based on the category of each LCU;
Carrying out region combination on each region category, and determining a filter coefficient of each combined region;
And writing the filter coefficient of each merging region and the region category identification of each LCU into the code stream.
According to an eighth aspect of an embodiment of the present application, there is provided a filtering method applied to an encoding end device, the method including:
For any merging region of the current image frame, determining filter coefficients used by the merging region based on RDO decision;
Determining a region coefficient identification of the merging region based on the filter coefficients used by the merging region; wherein, the region coefficient mark is used for marking the filter coefficient used by the combining region in the preset multiple groups of filter coefficients;
and writing the filter coefficient used by each merging area and the area coefficient identification of each merging area into the code stream.
According to a ninth aspect of the embodiment of the present application, there is provided a filtering method applied to an encoding end device, the method including:
for any merging region of the current image frame, determining filter coefficients used by the merging region from a plurality of groups of filter coefficients based on RDO decision;
determining a coefficient selection identifier of each LCU in the merging area based on the filtering coefficient used by the merging area; wherein the coefficient selection identification is used for identifying filter coefficients selected for use by each LCU in a plurality of groups of candidate filter coefficients;
And writing the filter coefficient used by each merging area and the coefficient selection identifier of each LCU into the code stream.
According to a tenth aspect of the embodiment of the present application, there is provided a filtering method applied to an encoding end device, the method including:
for any merging region of the current image frame, determining a filter shape and filter coefficients used by the merging region based on RDO decisions;
The filter shape and filter coefficients used by each merge region are written into the code stream.
According to an eleventh aspect of the embodiment of the present application, there is provided a filtering method applied to an encoding end device, the method including:
For any merging region of the current image frame, determining a filter coefficient used by the merging region and a weight coefficient of each corresponding reference pixel position based on RDO decision;
and writing the filter coefficients used by each merging region and the weight coefficients of the corresponding reference pixel positions into the code stream.
According to a twelfth aspect of an embodiment of the present application, there is provided a filtering apparatus applied to a decoding end device, the apparatus including:
a dividing unit for performing region division on a luminance component of a current image frame;
The first determining unit is used for determining the area category of the LCU based on the area category identification of the LCU obtained by analysis from the code stream;
the second determining unit is used for determining the filter coefficient of the LCU based on the region category of the LCU and the filter coefficient obtained by analyzing the code stream;
and the filtering unit is used for carrying out ALF filtering on pixels of the LCU one by one based on the filtering coefficient of the LCU.
According to a thirteenth aspect of an embodiment of the present application, there is provided a filtering apparatus applied to an encoding/decoding end device, the apparatus including:
A filtering unit, configured to, in performing ALF filtering on any pixel in a current filtering unit, for any reference pixel, when the reference pixel is not in the current filtering unit:
Under the condition that the pixel value of the reference pixel cannot be acquired, the pixel closest to the reference pixel within the current filtering unit and its boundary area is used for filtering instead of the reference pixel; the boundary area comprises the area outside the left boundary or outside the right boundary of the current filtering unit, the area outside the left boundary of the current filtering unit comprises part or all of the filtering unit adjacent to the left side of the current filtering unit, and the area outside the right boundary of the current filtering unit comprises part or all of the filtering unit adjacent to the right side of the current filtering unit;
Otherwise, the reference pixel is used for filtering.
According to a fourteenth aspect of embodiments of the present application, there is provided a decoding end apparatus, including a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor, the processor being configured to execute the machine-executable instructions to implement the filtering method applied to the decoding end apparatus.
According to a fifteenth aspect of embodiments of the present application, there is provided an encoding-end device, including a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor, the processor being configured to execute the machine-executable instructions to implement the filtering method applied to the encoding-end device.
The filtering method of the embodiment of the application performs region division on the luminance component of the current image frame; determines the region category of an LCU based on the region category identifier of the LCU parsed from the code stream; determines the filter coefficient of the LCU based on the region category of the LCU and the filter coefficients parsed from the code stream; and performs ALF filtering on the pixels of the LCU one by one based on the filter coefficient of the LCU. Because the LCUs within the regions obtained by fixed region division are further classified, the region division better matches the pixel characteristics of each LCU, so the ALF filtering effect can be optimized and the encoding and decoding performance improved.
Drawings
FIG. 1 is a schematic flow chart of video encoding and decoding;
FIG. 2 is a schematic illustration of a region division;
FIG. 3 is a schematic illustration of a region merge;
FIG. 4A is a schematic diagram of a 7×7 cross plus 5×5 square centrosymmetric filter shape;
FIG. 4B is a schematic diagram of a reference pixel corresponding to the filter coefficients shown in FIG. 4A;
FIG. 5 is a schematic diagram of an adaptive correction filtering unit according to an exemplary embodiment of the present application;
FIG. 6 is a flow chart of a filtering method according to an exemplary embodiment of the present application;
FIG. 7 is a flow chart of a filtering method according to an exemplary embodiment of the present application;
FIG. 8 is a flow chart of a filtering method according to an exemplary embodiment of the present application;
FIG. 9 is a flow chart of a filtering method according to an exemplary embodiment of the present application;
FIG. 10 is a flow chart of a filtering method according to an exemplary embodiment of the present application;
FIG. 11 is a flow chart of a filtering method according to an exemplary embodiment of the present application;
FIG. 12 is a schematic diagram of a 7×7 cross plus 3×3 square centrosymmetric filter shape;
FIG. 13 is a schematic diagram of a merge area shown in accordance with an exemplary embodiment of the present application;
FIG. 14 is a schematic diagram of a plurality of different filter shapes shown in an exemplary embodiment of the application;
FIG. 15 is a schematic diagram of a 3×3 block of pixels, shown in accordance with an exemplary embodiment of the present application;
FIG. 16 is a schematic diagram of a filter with asymmetric filter coefficients according to an exemplary embodiment of the present application;
FIG. 17A is a schematic diagram of a reference pixel location, according to an exemplary embodiment of the application;
FIG. 17B is a schematic diagram of another reference pixel location shown in accordance with an exemplary embodiment of the present application;
fig. 18A and 18B are schematic diagrams illustrating a plurality of secondary divisions of an area obtained by a fixed area division manner according to an exemplary embodiment of the present application;
FIG. 18C is a schematic view showing the region numbers corresponding to the respective sub-divisions of FIG. 18A according to an exemplary embodiment of the present application;
fig. 19 is a schematic structural view of a filtering apparatus according to an exemplary embodiment of the present application;
Fig. 20 is a schematic structural view of a filtering apparatus according to an exemplary embodiment of the present application;
Fig. 21 is a schematic hardware structure of a decoding end device according to an exemplary embodiment of the present application;
fig. 22 is a schematic diagram of a hardware structure of an encoding end device according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the application. Rather, they are merely examples of apparatus and methods consistent with aspects of the application as detailed in the accompanying claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
In order to enable those skilled in the art to better understand the technical solutions provided by the embodiments of the present application, the technical terms related to the embodiments of the present application, the main flow of the existing video encoding and decoding, and the implementation of the ALF filtering technology are briefly described below.
1. Technical terminology
1. Rate-distortion optimization (Rate-Distortion Optimization, RDO for short): the indexes for evaluating coding efficiency are the code rate and the Peak Signal-to-Noise Ratio (PSNR). The smaller the code rate, the larger the compression rate; the larger the PSNR, the better the reconstructed image quality. In mode selection, the discrimination formula is essentially a combined evaluation of the two.
Cost corresponding to a mode: J(mode) = D + λ × R. Where D represents Distortion, typically measured by the SSE (sum of squared errors) index, i.e., the sum of squared differences between the reconstructed block and the source image block; λ is the Lagrangian multiplier; and R is the actual number of bits required to code the image block in this mode, including the bits required for the mode information, motion information, residual, and so on.
In the mode selection, if the RDO principle is used to make a comparison decision on the coding mode, the best coding performance can be generally ensured.
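The cost formula J(mode) = D + λ × R can be sketched directly. The numbers below are illustrative; in a real encoder the SSE distortion and bit counts come from actually coding each candidate mode:

```python
def rd_cost(distortion_sse, bits, lagrange_multiplier):
    """J(mode) = D + lambda * R."""
    return distortion_sse + lagrange_multiplier * bits

def best_mode(candidates, lagrange_multiplier):
    """candidates: iterable of (mode_name, D, R); return the minimum-cost mode."""
    return min(candidates, key=lambda c: rd_cost(c[1], c[2], lagrange_multiplier))[0]

# Hypothetical candidates: intra costs fewer bits but distorts more here.
modes = [("intra", 1000, 40), ("inter", 800, 120)]
```

Note how the decision flips with λ: a large multiplier penalizes bits and favours the cheaper-to-signal mode.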
2. Coding Tree Unit (CTU for short): conventional video coding is implemented based on macroblocks; for video in the 4:2:0 sampling format, one macroblock contains a 16×16 luminance block and two 8×8 chrominance blocks. Considering the characteristics of high-definition/ultra-high-definition video, the CTU was introduced in Versatile Video Coding (VVC for short); its size is specified by the encoder and is allowed to be larger than the macroblock size. One luminance Coding Tree Block (CTB for short) and the two co-located chrominance CTBs, together with the corresponding syntax elements, form a CTU. For a luminance CTB of L×L size in VVC, L ∈ {8, 16, 32, 64, 128}.
The range of the luminance CTB size is: {8×8, 16×16, 32×32, 64×64, 128×128}.
The range of the chrominance CTB size is: {4×4, 8×8, 16×16, 32×32, 64×64}.
In high resolution video coding, better compression can be achieved using larger CTBs.
3. Deblocking filtering: the image coding process is based on different blocks, each block is coded relatively independently, and since each block uses different parameters, the distribution characteristics in the blocks are independent of each other, so that a discontinuous phenomenon exists at the edges of the blocks, which can be called as a blocking effect. The deblocking filtering mainly smoothes the block boundaries, removing blocking artifacts.
4. Sample adaptive offset: starting from the pixel domain, the reconstructed image is classified according to its characteristics, and compensation is then applied in the pixel domain, mainly in order to reduce ringing effects.
5. Adaptive loop filtering (ALF filtering): applied after DBF and SAO, its main purpose is to further improve objective image quality. The ALF technique constructs a least-squares-based multiple linear regression model from the characteristics of the reference pixels and performs filtering compensation in the pixel domain.
6. Wiener filtering: in essence, it minimizes the mean-square value of the estimation error (defined as the difference between the desired response and the actual output of the filter).
2. Main flow of video coding and decoding
Referring to fig. 1 (a), taking video encoding as an example, the video encoding generally includes processes of prediction, transformation, quantization, entropy encoding, etc., and further, the encoding process may be implemented according to the framework of fig. 1 (b).
The prediction can be divided into intra-frame prediction and inter-frame prediction, wherein the intra-frame prediction is to utilize surrounding coded blocks as references to predict a current uncoded block, so that redundancy in a space domain is effectively removed. Inter prediction is to predict a current picture using neighboring coded pictures, effectively removing redundancy in the temporal domain.
Transformation refers to the conversion of an image from the spatial domain to the transform domain, which is represented by transform coefficients. Most images contain more flat areas and areas with slow change, and proper transformation can convert the images from scattered distribution in a space domain to relatively concentrated distribution in a transformation domain, remove the frequency domain correlation among signals and can effectively compress a code stream by matching with a quantization process.
Entropy coding is a lossless coding method that converts a series of element symbols into a binary code stream for transmission or storage, where the input symbols may include quantized transform coefficients, motion vector information, prediction mode information, transform quantization-related syntax, and the like. Entropy encoding can effectively remove redundancy of video element symbols.
The foregoing description takes encoding as an example; video decoding is the inverse of video encoding, that is, video decoding generally includes entropy decoding, prediction, inverse quantization, inverse transformation, filtering, and so on, where the implementation principle of each process is the same as or similar to that in encoding.
3. Implementation of ALF filtering techniques
The ALF coding flow may include: region division, reference pixel acquisition, region merging and filter coefficient calculation, and a CTU decision that judges whether each LCU enables filtering.
The parameters to be calculated and acquired in the whole process are as follows:
1) The number of filtering parameters;
2) Combining the identification of the areas;
3) Each set of filter coefficients;
4) A flag indicating whether each LCU enables filtering;
5) A flag indicating whether the current component (Y, U, V) enables filtering.
A detailed description of some of the processing and concepts in the ALF filtering process follows.
1. Region division
In the ALF process, the luminance component of the acquired reconstructed video data is divided into regions, while the chrominance component is not divided.
By way of example, a specific implementation of region division may be as follows: the image is divided into 16 regions of substantially equal size, aligned to LCU boundaries. The width of each non-rightmost region is ((pic_width_InLcus + 1) / 4) × lcu_width, where pic_width_InLcus is the number of LCUs across the image width and lcu_width is the width of each LCU. The width of the rightmost region is the image width minus the total width of the three non-rightmost regions.
Similarly, the height of each non-bottommost region is ((pic_height_InLcus + 1) / 4) × lcu_height, where pic_height_InLcus is the number of LCUs over the image height and lcu_height is the height of each LCU. The height of the bottommost region is the image height minus the total height of the three non-bottommost regions.
After obtaining the region division result of the whole graph, an index value is allocated to each region, and the schematic diagram can be shown in fig. 2.
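The width formula above can be checked with a short sketch (integer arithmetic throughout; the 1920-wide picture and 64-wide LCUs are example values, not from the patent):

```python
def region_widths(pic_width, lcu_width):
    """Four region columns: three of width ((pic_width_in_lcus + 1) // 4) * lcu_width,
    plus a rightmost region that takes the remainder of the image width."""
    pic_width_in_lcus = (pic_width + lcu_width - 1) // lcu_width  # LCUs across the width
    w = ((pic_width_in_lcus + 1) // 4) * lcu_width
    return [w, w, w, pic_width - 3 * w]
```

The same formula with heights gives the four region rows; together they yield the 16 LCU-aligned regions.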
2. Region merging
The region merging operation is to sequentially judge whether adjacent regions are merged according to the index value. The purpose of the merging is to reduce the coding coefficients. A merge flag is required to indicate whether the current region merges with the neighboring region.
For example, after the above-mentioned region division is performed, there are 16 regions in total (which may be referred to as 16 classes or 16 groups, with index values 0 to 15). In the first merging pass, merging region 0 with region 1, region 1 with region 2, region 2 with region 3, …, region 13 with region 14, and region 14 with region 15 may be tried in turn, and the merge with the smallest error is performed as the first region merging, so that the 16 regions become 15 regions.
For 15 regions after the first merging (assuming that the region 2 and the region 3 are merged to obtain the region 2+3), sequentially trying to merge the region 0 and the region 1, merge the region 1 and the region 2+3, merge the region 2+3 and the region 4, …, merge the region 13 and the region 14, merge the region 14 and the region 15, and perform the second region merging according to the merging mode with the minimum error, so that the 15 regions are merged to become 14 regions.
For 14 regions after the second merging (assuming that the region 14 and the region 15 are merged to obtain the region 14+15, that is, the merged region includes the region 2+3 and the region 14+15), sequentially trying to merge the region 0 and the region 1, merge the region 1 and the region 2+3, merge the region 2+3 and the region 4, merge the region …, merge the region 12 and the region 13, merge the region 13 and the region 14+15, and perform the third region merging according to the merging mode with the minimum error, thereby forming 13 regions after 14 regions are merged.
And so on until merging into 1 region, the schematic diagram of which can be shown in fig. 3.
After the above region merging operation is completed, the error in wiener filtering of the entire frame image may be calculated and the region merging mode with the smallest error may be determined as the final region merging mode in order of no region merging (total of 16 regions), one region merging (total of 15 regions), …, 14 region merging (total of 2 regions), and 15 region merging (total of 1 region).
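One pass of the merging procedure above can be sketched as a greedy search over adjacent pairs. The per-region sample lists and the SSE-about-the-mean error measure are stand-ins for the actual wiener-filtering error the encoder would compute:

```python
def sse_about_mean(vals):
    """Toy merge-error measure: sum of squared deviations from the group mean."""
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals)

def greedy_merge_once(groups, error):
    """Try merging every pair of adjacent groups; keep the merge with minimum error."""
    i = min(range(len(groups) - 1), key=lambda k: error(groups[k] + groups[k + 1]))
    return groups[:i] + [groups[i] + groups[i + 1]] + groups[i + 2:]
```

Repeating `greedy_merge_once` until one group remains mirrors the 16 → 15 → … → 1 sequence, after which the pass with the smallest whole-frame error would be selected.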
3. Reference pixel, filter coefficient
After the region division is performed in the above manner, the filter coefficients can be calculated according to the wiener filter principle based on the reference pixels of the pixels in each region.
For every pixel participating in filtering, the surrounding pixels within a certain range centered on that pixel are taken as reference pixels; the reference pixels and the current pixel are taken as inputs, the original value of each pixel is taken as the target, and the filter coefficients are calculated using the least squares method.
FIG. 4A is a schematic diagram of a filter shape: a 7×7 cross plus 5×5 square centrosymmetric shape. The reference pixels corresponding to the filter coefficients are shown in FIG. 4B.
As shown in FIG. 4B, because the filter shape is centrosymmetric, positions Pi and P28-i share the same filter coefficient. During encoder-side training, each sum (Pi + P28-i), i = 0, …, 13, is therefore taken as one input feature, and P14 is taken as one further input feature, so that 15 filter coefficients are trained.
Namely, the reference pixels are selected as follows:
E[i] = Pi + P28-i, i = 0, 1, …, 13
E[14] = P14
where Pi is a pixel in the pre-filtering reconstructed picture and E[i] is the value of the reference pixel.
The objective of wiener filtering is to linearly combine the reference pixel values to approximate them to the pixel values of the original image.
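The symmetric feature construction E[i] = Pi + P28-i, E[14] = P14 can be sketched directly; the 29-sample window ordering P0..P28 follows FIG. 4B:

```python
def build_features(window):
    """window: the 29 reconstructed samples P0..P28 of the filter support.
    Returns the 15 least-squares inputs: E[i] = P[i] + P[28 - i] for
    i = 0..13, plus E[14] = P[14] (the centre sample)."""
    assert len(window) == 29
    feats = [window[i] + window[28 - i] for i in range(14)]
    feats.append(window[14])
    return feats
```

Stacking these 15-element feature vectors over all pixels of a region, with the original pixel values as targets, gives the least-squares system whose solution is the region's 15 filter coefficients.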
The ALF technique is based on the largest coding unit (Largest Coding Unit, LCU for short). LCUs belonging to the same merged region are filtered using the same set of filter coefficients.
4. Adaptive correction filter unit
As shown in fig. 5, the adaptive correction filter unit is derived from the current maximum coding unit as follows:
4.1, delete the part of the sample area where the current largest coding unit C is located that exceeds the image boundary, to obtain sample area D;
4.2, if the row of samples at the lower boundary of sample area D does not lie on the lower boundary of the image, shrink the lower boundaries of the luminance component and chrominance component sample areas D upwards by four rows to obtain sample area E1; otherwise, let sample area E1 equal sample area D. The last row of samples of sample area D is the lower boundary of the area;
4.3, if the row of samples at the upper boundary of sample area E1 lies on the upper boundary of the image, or lies on a slice boundary while the value of cross_patch_loopfilter_enable_flag is '0', let sample area E2 equal sample area E1; otherwise, extend the upper boundaries of the luminance component and chrominance component sample areas E1 upwards by four rows to obtain sample area E2. The first row of samples of sample area E1 is the upper boundary of the area;
4.4, taking the sample area E2 as a current adaptive correction filtering unit. The first line of samples of the image is the upper boundary of the image and the last line of samples is the lower boundary of the image.
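Steps 4.1 to 4.4 above can be sketched for the vertical direction (the only non-trivial one) as follows. Row ranges, the `cross_patch_flag` stand-in for cross_patch_loopfilter_enable_flag, and the single `patch_top` parameter are simplifying assumptions for illustration.

```python
def derive_filter_unit(lcu_top, lcu_bottom, pic_top, pic_bottom,
                       patch_top, cross_patch_flag):
    # 4.1: clip the LCU's sample area to the image -> area D
    top = max(lcu_top, pic_top)
    bottom = min(lcu_bottom, pic_bottom)
    # 4.2: if D's lower boundary is not the image's, shrink it up 4 rows -> E1
    if bottom != pic_bottom:
        bottom -= 4
    # 4.3: extend the upper boundary up 4 rows, unless it lies on the image
    # top, or on a slice boundary with cross-slice filtering disabled -> E2
    on_patch_boundary = (top == patch_top)
    if not (top == pic_top or (on_patch_boundary and cross_patch_flag == 0)):
        top -= 4
    # 4.4: E2 is the adaptive correction filtering unit
    return top, bottom
```

For example, an interior LCU keeps its bottom at the image boundary but gains four rows above, while the top LCU of the image keeps its top and loses four rows at the bottom.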
5. Adaptive correction filter operation
When it is determined that the current adaptive correction filtering unit is to be filtered, if a sample used in the adaptive correction filtering process lies within the adaptive correction filtering unit, the sample is used directly for filtering; otherwise, filtering is performed as follows:
5.1, if the sample is outside the image boundary, or is outside the slice boundary while the value of cross_patch_loopfilter_enable_flag is '0' (i.e., filtering across the slice boundary is not allowed), the sample closest to it in the adaptive correction filtering unit is used in its place for filtering;
5.2, if the sample is outside the upper boundary or the lower boundary of the adaptive correction filter unit, using the sample closest to the sample in the adaptive correction filter unit to replace the sample for filtering;
5.3, if the sample is neither outside the upper boundary nor outside the lower boundary of the adaptive correction filtering unit, the sample is used directly for filtering.
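The substitution rule in 5.1 to 5.3 amounts to clamping the reference sample's coordinates to the filtering unit; the sketch below assumes vertical clamping to the unit and horizontal clamping to the image, which is one plausible reading of "the sample closest to the sample in the adaptive correction filtering unit". Names are illustrative.

```python
def clamp(v, lo, hi):
    return max(lo, min(v, hi))

def fetch_sample(rec, row, col, unit_top, unit_bottom, pic_left, pic_right):
    # 5.1/5.2: an out-of-bounds reference sample is replaced by the nearest
    # in-bounds one, i.e. its coordinates are clamped
    r = clamp(row, unit_top, unit_bottom)
    c = clamp(col, pic_left, pic_right)
    # 5.3: a sample already inside is used directly (clamping is a no-op)
    return rec[r][c]
```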
6. Coding Tree Unit (CTU) decision
For the encoding end device, after the filter coefficients of each region are obtained through region merging, a CTU decision needs to be performed. The CTU decision also uses the LCU as the basic unit to determine whether each LCU in the current image uses ALF (i.e., whether Wiener filtering is enabled).
The encoding end device may calculate the rate-distortion costs with ALF turned on and turned off for the current LCU to determine whether the current LCU uses ALF. If the current LCU is marked as using ALF, each pixel within the LCU is Wiener filtered.
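The per-LCU on/off decision can be sketched as a standard rate-distortion comparison. The Lagrangian form and the bit counts below are generic assumptions, not values from the patent: the distortion after filtering plus the signalling cost must beat the cost of leaving the LCU unfiltered.

```python
def lcu_uses_alf(dist_off, dist_on, bits_on, lambda_rd, bits_off=1):
    """Return True when filtering the LCU has the lower RD cost.
    dist_*: SSE against the original; bits_*: signalling bits (assumed)."""
    cost_off = dist_off + lambda_rd * bits_off
    cost_on = dist_on + lambda_rd * bits_on
    return cost_on < cost_off
```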
In the related art, the ALF technique transmits only one set of fixed filter coefficients for each region, and the filter shape is fixed. This may cause problems such as: the fixedly divided regions cannot group pixels with the same characteristics into the same class, or the filter shape used is not appropriate. Moreover, each divided region transmits at most one set of filter coefficients, and one set of filter coefficients is insufficient for a larger region or a region with complex image texture.
In order to optimize the ALF filtering effect and improve the coding and decoding performance, the embodiment of the application provides the following optimization scheme:
Scheme 1, each frame is adaptively divided into a plurality of regions with the LCU as the minimum unit, and each region may include more than one LCU; it is therefore proposed to classify each LCU, dividing the LCUs of the same region into N classes, where N is a positive integer.
For example, if the LCUs in each region are all classified into the same class, this corresponds to the fixed region division scheme of the traditional ALF scheme. In one example, to distinguish from the fixed region division manner of the traditional ALF scheme, N ≥ 2.
Scheme 2, multiple sets of filter coefficients can be transferred in each region, and the shape of each set of filter can be the same or different.
Scheme 3, each LCU adaptively selects a set of filter coefficients, so that LCUs of the same region may select the filter coefficients of adjacent regions.
Scheme 4, each region can only pass one set of filter coefficients, but the filter shape of each region may not be the same.
Scheme 5, the symmetric filter, in which the coefficients at symmetric positions are the same, is modified into an asymmetric filter in which the coefficients at symmetric positions satisfy a certain proportional relation, for example 0.5:1.5 or 0.6:1.4, etc.
Scheme 6, the values of the boundary samples used during filtering are optimized.
In order to make the above objects, features and advantages of the embodiments of the present application more comprehensible, the following describes the technical solution of the embodiments of the present application in detail with reference to the accompanying drawings.
It should be noted that, in the following, the scheme provided by the embodiments of the present application is described taking ALF filtering at the LCU size as an example, but this does not limit the protection scope of the present application. Image blocks of other sizes or representations may be used instead in the embodiments of the present application, for example image blocks of size N×M, where N is a positive integer less than or equal to the width of the image frame and M is a positive integer less than or equal to the height of the image frame.
Example 1
Referring to fig. 6, a flowchart of a filtering method according to an embodiment of the present application is shown, wherein the filtering method may be applied to a decoding end device, and as shown in fig. 6, the filtering method may include the following steps:
Step S600, the luminance component of the current image frame is divided into regions.
For example, the implementation of the region division of the luminance component of the image frame may be referred to in the above description of the "region division" section, and the embodiments of the present application are not described herein.
Step S610, determining the region category of the LCU based on the region category identification of the LCU obtained from the analysis of the code stream.
In the embodiment of the application, in order to optimize the ALF filtering effect and improve the coding and decoding performance, when the luminance component of the image frame is divided into a plurality of regions in a fixed region division manner, the LCUs in each region can be classified based on the pixel characteristics of the pixels in that region, dividing the LCUs of the region into at least one category; that is, one region can be further divided into at least one sub-region (which may be called a region category) by classifying its LCUs.
For example, for any LCU, the category of the region to which the LCU belongs may be determined based on the region to which the LCU belongs and the category of the LCU within the region to which the LCU belongs.
For example, when the encoding end device sends the code stream to the decoding end device, the region class identifier for identifying the region class to which each LCU belongs may be carried in the code stream and sent to the decoding end device.
For example, for any LCU, the decoding end device may parse the region class identifier of the LCU from the code stream, and determine, based on the parsed region class identifier of the LCU, a region class to which the LCU belongs.
Step S620, determining the filter coefficient of the LCU based on the region category to which the LCU belongs and the filter coefficient obtained by analyzing the code stream.
In the embodiment of the present application, when the LCUs in each region are classified in the above manner, the encoding end device may perform region merging on the region categories to obtain at least one merged region (which may be referred to as a merging region), and determine the filter coefficient of each merging region.
The implementation manner of the region merging for each region category is similar to that described in the above section of "region merging", and the embodiments of the present application are not described herein.
For example, for any region class, the encoding end device may assign a coefficient index to the merging region to which it belongs, where the coefficient index corresponds to the filter coefficient of one of the merging regions.
The encoding end device may write the filter coefficients of each merging region and the index of each region class into the code stream, and send the code stream to the decoding end device.
For example, for any LCU, the decoding side device may determine, based on the region class to which the LCU belongs, a coefficient index of the region class to which the LCU belongs, and determine, based on the coefficient index and a filter coefficient obtained by parsing from the code stream, a filter coefficient of the LCU.
Step S630, performing ALF filtering on pixels of the LCU one by one based on the filtering coefficient of the LCU.
In the embodiment of the application, for any LCU, when the filter coefficient of the LCU is determined, ALF filtering can be performed on pixels of the LCU one by one based on the filter coefficient of the LCU.
In this way, the LCUs in the regions obtained by the fixed region division manner are further classified, so that the region division better matches the pixel characteristics of each LCU, which can optimize the ALF filtering effect and improve the coding and decoding performance.
In one embodiment, in step S610, determining, based on the region class identifier of the LCU parsed from the code stream, the region class to which the LCU belongs includes:
And determining the region category of the LCU based on the region of the LCU and the region category identification of the LCU.
For example, for any LCU, the region type to which the LCU belongs may be determined based on the region to which the LCU belongs (the region obtained by the fixed region division manner) and the region type identifier of the LCU.
In one example, the region category identification of the LCU is used to identify a category of the LCU in the region to which the LCU belongs, where the category of the LCU in the region to which the LCU belongs is determined by classifying each LCU in the region to which the LCU belongs;
the determining, based on the region to which the LCU belongs and the region category identifier of the LCU, the region category to which the LCU belongs may include:
And determining the region category of the LCU based on the category number of each region, the region of the LCU and the region category identification of the LCU.
For example, for any LCU, the decoding end device may determine, based on the region class identifier of the LCU obtained by parsing the code stream, a class of the LCU in the region to which the LCU belongs.
For example, assuming that LCUs within an area are divided into at most 2 classes, for LCUs divided into a first class, the area class identification may be 0; for LCUs that are classified into the second class, their regional class identification may be 1.
For any LCU in any region, when the value of the region type identifier of the LCU obtained by analyzing the code stream is 0, determining the type of the LCU in the region as a first type; when the value of the regional category identifier of the LCU obtained by parsing the code stream is 1, the category of the LCU in the region can be determined to be the second category.
For example, for any LCU, the decoding end device may determine the region category to which the LCU belongs based on the number of categories of each region, the region to which the LCU belongs, and the region category identifier of the LCU.
As an example, the determining, based on the number of categories of each region, the region to which the LCU belongs, and the region category identifier of the LCU, the region category to which the LCU belongs may include:
Determining the total number of categories of each region before the region to which the LCU belongs based on the category number of each region before the region to which the LCU belongs;
and determining the region category of the LCU based on the total number of categories of each region before the region of the LCU and the region category identification of the LCU.
For any LCU, the total number of categories of each region before the region to which the LCU belongs may be determined based on the number of categories of each region, and the regional category to which the LCU belongs may be determined based on the total number of categories of each region before the region to which the LCU belongs, and the regional category identifier of the LCU.
For example, assuming that the luminance component of the current image frame is divided into L regions and the LCUs in each region are divided into N categories, then for any LCU in region K, when the value of the region category identifier of the LCU parsed from the code stream is m, it may be determined that the region category to which the LCU belongs is N×K+m; where m ∈ [0, N-1], N ≥ 1, and K ∈ [0, L-1].
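The index computation above can be sketched directly. The first helper is the uniform case (region category = N×K + m); the second is the generalisation described earlier, where each region may have its own class count and the offset is the running total of the class counts of the preceding regions. Both are illustrative, not syntax from the standard.

```python
def region_class_uniform(N, K, m):
    """N LCU classes per region, region index K, in-region class id m."""
    return N * K + m

def region_class_general(class_counts, K, m):
    """class_counts[k] = number of LCU classes of region k; the offset is
    the total number of categories of the regions before region K."""
    return sum(class_counts[:K]) + m
```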
In some embodiments, in step S620, before determining the filter coefficient of the LCU based on the region class to which the LCU belongs and the filter coefficient obtained by parsing the code stream, the method may further include:
determining whether to initiate ALF filtering for the LCU;
When determining to start ALF filtering on the LCU, determining to execute the operation of determining the filter coefficient of the LCU based on the region category of the LCU and the filter coefficient obtained by analyzing the code stream.
For example, for any LCU, the encoding end device may determine whether to initiate ALF filtering for that LCU based on RDO decisions.
Illustratively, the decoding end device may determine whether to initiate ALF filtering for the LCU prior to ALF filtering the LCU.
For example, the decoding end device may determine whether to start ALF filtering for the LCU based on an identification parsed from the code stream to identify whether to start ALF filtering for the LCU.
When the decoding end device determines to start ALF filtering on the LCU, the filtering coefficient of the LCU may be determined according to the manner described in the above embodiment based on the region class to which the LCU belongs and the filtering coefficient obtained by parsing the code stream.
In one example, the determining whether to initiate ALF filtering for the LCU may include:
Analyzing LCU coefficient identification of the LCU from the code stream; wherein, the LCU coefficient identifier is used for identifying a filter coefficient used by the LCU in at least one group of filter coefficients used by the LCU in the merge region to which the LCU belongs;
and when the value of the LCU coefficient identifier of the LCU is not the first value, determining to start ALF filtering on the LCU.
Illustratively, in order to optimize the ALF filtering effect and improve the codec performance, the filter coefficients used by one combining region are not limited to one set of filter coefficients, but one or more sets of filter coefficients may be selected according to the actual situation.
For example, for any merge region, the encoding end device may train multiple sets of filter coefficients and determine that the merge region uses one or more of the sets of filter coefficients based on RDO decisions. For any LCU in the region, the encoding end device may identify, by LCU coefficient identification, a filter coefficient used by the LCU from among one or more sets of filter coefficients used by the merge region.
For any LCU, when the value of the LCU coefficient identifier of the LCU is a first value, it indicates that ALF filtering is not started for the LCU.
For any LCU, when the LCU coefficient identifier of the LCU analyzed by the decoding end equipment from the code stream is a non-first value, the ALF filtering can be determined to be started for the LCU.
For example, assuming that the first value is 0, for any LCU, when the LCU coefficient identifier of the LCU obtained by the decoding end device from the code stream by parsing is 0, it may be determined that ALF filtering is not started for the LCU; when the LCU coefficient identifier of the LCU obtained by the decoding end device through parsing from the code stream is not 0, it may be determined that ALF filtering is started on the LCU, and at this time, the decoding end device may determine a filter coefficient used by the LCU according to the LCU coefficient identifier of the LCU.
It should be noted that, when the value of the LCU coefficient identifier of the LCU is not the first value, if a group of filter coefficients is used in the merge area to which the LCU belongs, the filter coefficient of the LCU is the group of filter coefficients; if the LCU belongs to the combining area, multiple groups of filter coefficients are used, the filter coefficients of the LCU need to be determined according to the specific value of the LCU coefficient identifier of the LCU.
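The LCU coefficient identifier semantics above (with the first value assumed to be 0, as in the earlier example) can be sketched as follows; the decoding logic and return shape are illustrative.

```python
def decode_lcu_coeff_flag(lcu_coeff_id, region_coeff_sets):
    """Return (alf_enabled, chosen_coefficient_set_or_None)."""
    if lcu_coeff_id == 0:                  # first value: ALF off for this LCU
        return False, None
    if len(region_coeff_sets) == 1:        # one set in the merge region: use it
        return True, region_coeff_sets[0]
    # several sets: the non-zero identifier selects one of them (assumed 1-based)
    return True, region_coeff_sets[lcu_coeff_id - 1]
```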
In one example, determining the filter coefficients of the LCU based on the region class to which the LCU belongs and the filter coefficients parsed from the code stream may include:
Determining a filter coefficient of the LCU based on the region category of the LCU, the filter coefficient obtained by analyzing the code stream and the region coefficient identification of the merging region of the LCU obtained by analyzing the code stream; the region coefficient identifier is used for identifying the filter coefficient used by the merging region to which the LCU belongs in the preset multiple groups of filter coefficients.
For example, for any merge region, the encoding end device may train multiple sets of filter coefficients and determine that the merge region uses one or more of the multiple sets of filter coefficients based on RDO decisions and write a region coefficient identification to the code stream that identifies the filter coefficients used by the merge region.
For example, for any LCU of any merging region, the decoding end device may determine a filter coefficient used by the merging region based on a region coefficient identifier of the merging region parsed from the code stream.
For example, assuming that the preset plurality of sets of filter coefficients include two sets of filter coefficients (assuming that the filter coefficients are a and B), for any merging region, when the encoding end device determines that the merging region uses the filter coefficient a, the encoding end device determines that a value of a region coefficient identifier of the merging region is 0; when the encoding end equipment determines that the merging area uses the filter coefficient B, the encoding end equipment determines that the value of the area coefficient identifier of the merging area is 1; when the encoding end device determines that the merging area uses the filter coefficient A and the filter coefficient B, the encoding end device determines that the value of the area coefficient identifier of the merging area is 2.
For any merging region, when the value of the region coefficient identifier of the merging region analyzed by decoding end equipment from the code stream is 0, determining that the region uses a filter coefficient A; when the value of the region coefficient identifier of the merging region analyzed by the decoding end equipment from the code stream is 1, determining that the merging region uses a filter coefficient B; when the value of the region coefficient identifier of the merging region analyzed by the decoding end device from the code stream is 2, determining that the merging region uses the filter coefficient A and the filter coefficient B.
For any merging region, when the decoding end device determines, based on the region coefficient identifier of the merging region parsed from the code stream, that the merging region uses one set of filter coefficients, then for any LCU of the merging region for which ALF filtering is determined to be enabled, if the value of the LCU coefficient identifier of the LCU is a non-first value, it may be determined that the filter coefficient used by the LCU is the filter coefficient used by the merging region. When the decoding end device determines, based on the region coefficient identifier parsed from the code stream, that the merging region uses multiple sets of filter coefficients, then upon determining that ALF filtering is enabled for the LCU, the decoding end device may determine the filter coefficient used by the LCU (one of the multiple sets of filter coefficients used by the merging region) based on the LCU coefficient identifier of the LCU.
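The two-set example above (filter coefficients A and B, region coefficient identifier values 0, 1 and 2) reduces to a small lookup; the function is a sketch of that example only, not general syntax.

```python
def region_coeff_sets(region_coeff_id, set_a, set_b):
    # 0 -> region uses A; 1 -> region uses B; 2 -> region uses both A and B
    return {0: [set_a], 1: [set_b], 2: [set_a, set_b]}[region_coeff_id]
```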
As an example, determining the filter coefficient of the LCU based on the region class to which the LCU belongs, the filter coefficient parsed from the code stream, and the region coefficient identifier of the merge region to which the LCU belongs parsed from the code stream may include:
When it is determined, based on the region coefficient identifier of the merging region to which the LCU belongs, that the merging region uses multiple sets of filter coefficients, the filter coefficient of the LCU is determined based on the region category to which the LCU belongs, the filter coefficients parsed from the code stream, and the LCU coefficient identifier of the LCU.
For example, for any LCU, when the decoding end device determines the region category to which the LCU belongs based on the region category identifier parsed from the code stream, it may further determine the merging region to which the LCU belongs based on that region category, and determine the filter coefficients used by that region category based on the region coefficient identifier of the merging region parsed from the code stream.
For example, assuming that the luminance component of the current image frame is divided into 16 regions in the fixed region division manner, and 32 region categories are obtained in total by classifying the LCUs of each region, then after the region categories are merged, an index table may be obtained based on the region category merging situation. The index table may be a 32-element one-dimensional vector, and each element in the vector is, in turn, the index of the merging region to which each region category belongs.
Assuming that the 32-element one-dimensional vector is {a1, a2, a3, …, a32}, a1 is the index of the merging region to which region category 0 belongs, …, and a32 is the index of the merging region to which region category 31 belongs. Assuming that a1 to a5 are 0 and a6 to a11 are 1, this indicates that region categories 0 to 4 are merged into merging region 0 and region categories 5 to 10 are merged into merging region 1.
The encoding end device may send the index table to the decoding end device through the code stream, so that the decoding end device determines, based on the index table obtained by parsing the code stream, a merging area to which each area category belongs, so that, for any LCU, the area category to which the LCU belongs may be determined based on the area category identifier of the LCU, and the merging area to which the LCU belongs may be determined according to the area category to which the LCU belongs.
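The index-table lookup described above is a one-line array access; the sketch below builds a hypothetical 32-entry table mirroring the example (categories 0 to 4 in merging region 0, categories 5 to 10 in merging region 1, the rest filled in arbitrarily for illustration).

```python
def merge_region_of(index_table, region_category):
    # the k-th entry is the index of the merging region of region category k
    return index_table[region_category]

# illustrative table: 5 categories -> region 0, 6 categories -> region 1,
# remaining 21 categories unmerged (assumed), 32 entries in total
index_table = [0] * 5 + [1] * 6 + list(range(2, 23))
```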
When the decoding end device determines that the combining area to which the LCU belongs uses multiple sets of filter coefficients, the decoding end device may determine the filter coefficients used by the LCU based on the LCU coefficient identifier of the LCU obtained by parsing the code stream.
It should be noted that, for any merging region, the filter shapes of the sets of filter coefficients used in the merging region may be identical or not identical.
For example, assuming that the combining region 1 uses the filter coefficient a, the filter coefficient B, and the filter coefficient C, the filter shapes of the filter coefficient a, the filter coefficient B, and the filter coefficient C may be identical, or all different, or partially identical, such as the filter shapes of the filter coefficient a and the filter coefficient B are identical, but the filter shapes of the filter coefficient a and the filter coefficient C are different.
In some embodiments, in step S620, determining the filter coefficient of the LCU based on the region class to which the LCU belongs and the filter coefficient parsed from the code stream may include:
Determining the filter coefficient of the LCU based on the region category to which the LCU belongs, the filter coefficients parsed from the code stream, and the coefficient selection identifier of the LCU; wherein the coefficient selection identifier is used to identify the filter coefficients selected by the LCU among the plurality of sets of candidate filter coefficients.
For example, in order to optimize the ALF filtering effect and improve the codec performance, the LCU is not limited to selecting the filter coefficients of the merge region to which it belongs, but may adaptively select a set of filter coefficients from multiple sets of filter coefficients to perform ALF filtering.
For example, for any LCU, the candidate filter coefficients of the LCU may include, but are not limited to, filter coefficients of a merge region to which the LCU belongs and filter coefficients of neighboring merge regions of the merge region to which the LCU belongs, so that, in a case that each merge region passes a set of filter coefficients, one LCU may have multiple sets of candidate filter coefficients, so as to improve flexibility of LCU filter coefficient selection, optimize ALF filter effect, and improve codec performance.
For any LCU, the encoding end device may determine, based on the RDO decision, a filter coefficient used by the LCU in the plurality of sets of candidate filter coefficients, and write a coefficient selection identifier corresponding to the filter coefficient into the code stream and send the code stream to the decoding end device.
The decoding end device may determine the filter coefficient of the LCU based on the region class to which the LCU belongs, the filter coefficient obtained by parsing the code stream, and the coefficient selection identifier of the LCU.
In an example, the determining the filter coefficient of the LCU based on the region class to which the LCU belongs, the filter coefficient obtained by parsing the code stream, and the coefficient selection identifier of the LCU may include:
when the value of the coefficient selection identifier of the LCU is a first value, determining the filter coefficient of the previous merging region of the merging region to which the LCU belongs as the filter coefficient of the LCU;
When the value of the coefficient selection identifier of the LCU is a second value, determining the filter coefficient of the merging area to which the LCU belongs as the filter coefficient of the LCU;
and when the value of the coefficient selection identifier of the LCU is a third value, determining the filter coefficient of the later merging region of the merging region to which the LCU belongs as the filter coefficient of the LCU.
For example, for any LCU, its candidate filter coefficients may include the filter coefficients of the merge region to which it belongs, the filter coefficients of the previous merge region to which it belongs, and the filter coefficients of the next merge region to which it belongs.
For example, the previous merge area of the merge area to which the LCU belongs is the merge area corresponding to the previous adjacent index of the merge area to which the LCU belongs.
For example, the next merging region of the merging region to which the LCU belongs is the merging region corresponding to the next adjacent index of the merging region to which the LCU belongs.
For example, assuming that the merge area to which the LCU belongs is merge area 2, and the corresponding index is 2, the previous merge area of the merge area to which the LCU belongs is the merge area (i.e., merge area 1) corresponding to the previous adjacent index (i.e., 1) of index 2, and the next merge area of the merge area to which the LCU belongs is the merge area (i.e., merge area 3) corresponding to the next adjacent index (i.e., 3) of index 2.
For any LCU, the encoding end device may determine the filter coefficient used by the LCU based on the RDO decision. When the filter coefficient used by the LCU is that of the previous merging region of the merging region to which the LCU belongs, the value of the coefficient selection identifier of the LCU may be determined to be a first value, such as 0; when it is the filter coefficient of the merging region to which the LCU belongs, a second value, such as 1; when it is the filter coefficient of the next merging region of the merging region to which the LCU belongs, a third value, such as 2.
For any LCU, when the value of the coefficient selection identifier of the LCU obtained by parsing the code stream by the decoding end device is the first value, the filter coefficient of the previous combining region of the combining region to which the LCU belongs may be determined as the filter coefficient of the LCU; when the value of the coefficient selection identifier of the LCU obtained by parsing the code stream is a second value, determining the filter coefficient of the combining region to which the LCU belongs as the filter coefficient of the LCU; when the value of the coefficient selection identifier of the LCU obtained by parsing the code stream is the third value, the filter coefficient of the later combining region of the combining region to which the LCU belongs may be determined as the filter coefficient of the LCU.
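The neighbour-selection rule above can be sketched as an index offset into the list of per-merging-region coefficient sets. Clamping at the first and last merging region is an assumption; the text does not specify the behaviour at the ends.

```python
def select_coeffs(all_region_coeffs, own_region_idx, selection_id):
    # first/second/third identifier value -> previous/own/next merging region
    offset = {0: -1, 1: 0, 2: +1}[selection_id]
    idx = min(max(own_region_idx + offset, 0), len(all_region_coeffs) - 1)
    return all_region_coeffs[idx]
```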
In some embodiments, parsing the filter coefficients from the code stream may include:
for any merging region, analyzing the filter shape of the merging region from the code stream;
based on the filter shape, filter coefficients of the combined region are parsed from the code stream.
For example, in order to improve the flexibility of the filter coefficient, optimize the ALF filtering effect and improve the coding and decoding performance, each combining region is not limited to use of the same filter shape, but may selectively use different filter shapes, that is, the filter shapes used by different combining regions may be the same or different.
For example, for any merge region, the encoding end device may train multiple sets of filter coefficients for different filter shapes, determine the filter shape and filter coefficients used by the merge region based on RDO decisions, and write the filter shape and filter coefficients into the code stream for transmission to the decoding end device.
For any merge region, when acquiring the filter coefficient of the merge region, the decoding end device may parse the filter shape of the merge region from the code stream, and then parse the filter coefficient of the merge region from the code stream based on the filter shape.
In an example, the determining the filter coefficient of the LCU based on the region class to which the LCU belongs and the filter coefficient obtained by parsing the code stream may include:
Determining the filter shape and the filter coefficient of the LCU based on the region category of the LCU and the filter shape and the filter coefficient obtained by analyzing the code stream;
the performing ALF filtering on the pixels of the LCU one by one based on the filtering coefficient of the LCU may include:
ALF filtering is carried out on pixels of the LCU one by one based on the filter shape and the filter coefficient of the LCU.
For any LCU, the merging area to which the LCU belongs may be determined based on the area class to which the LCU belongs, the filter shape and the filter coefficient of the merging area may be parsed from the code stream, the filter shape and the filter coefficient may be determined as the filter shape and the filter coefficient of the LCU, and the pixel of the LCU may be subjected to ALF filtering one by one based on the filter shape and the filter coefficient.
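The per-LCU lookup above can be sketched as follows. The bitstream parsing itself is not shown; `region_params` stands in for the filter shape and coefficients already parsed for each merge region, and the region-class-to-merge-region mapping is an assumed input.

```python
def lcu_shape_and_coeffs(region_class, region_merge_map, region_params):
    """Determine the filter shape and coefficients of an LCU.

    region_merge_map: maps a region class to the merge region it belongs to.
    region_params: maps a merge region to its (filter_shape, coefficients),
                   as parsed from the code stream.
    """
    merge_region = region_merge_map[region_class]
    shape, coeffs = region_params[merge_region]
    return shape, coeffs
```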
It should be noted that, in the embodiment of the present application, the filter shape may also be selected for the image frame, or for a component (such as a luminance component and/or a chrominance component) of the image frame. For example, if image frame A selects the 7×7 cross plus 5×5 square centrally symmetric filter shape shown in fig. 4A, each LCU in image frame A that starts ALF filtering uses the 7×7 cross plus 5×5 square centrally symmetric filter shape.
In some embodiments, in step S630, performing ALF filtering on pixels of the LCU one by one based on the filter coefficients of the LCU may include:
ALF filtering is carried out on pixels of the LCU one by one based on filtering coefficients of the LCU and weight coefficients of reference pixel positions corresponding to merging areas of the LCU obtained through analysis from the code stream.
For example, in order to optimize the ALF filtering effect and improve coding and decoding performance, the filter used in ALF filtering is not limited to a symmetric filter; an asymmetric filter may also be used, that is, the filter coefficients at symmetric positions may be different and satisfy a certain proportional relationship, for example, 0.5:1.5 or 0.6:1.4.
When performing ALF filtering based on the determined filter coefficients, the filtered pixel value is obtained based on the sum of products of each filter coefficient at a non-center position and the reference pixels at that position and its symmetric position. Therefore, the proportion may be a proportion between the filter coefficients at symmetric positions, or a proportion (which may also be referred to as a weight proportion) between the weights with which the pixel values of the reference pixels corresponding to the filter coefficients at symmetric positions participate in the ALF filtering calculation. That is, an asymmetric filter means that the filter coefficients at symmetric positions are different, or that the pixel values of the reference pixels corresponding to the filter coefficients at symmetric positions participate in the ALF filtering calculation with different weights.
For example, consider the filter coefficient C_i of the 7×7 cross plus 5×5 square centrally symmetric filter shape, where the filter coefficient at the symmetric position is C_28-i. Then C_i : C_28-i = A_i : (2 - A_i), or the ratio of the weights with which P_i and P_28-i participate in the ALF filtering calculation is A_i : (2 - A_i), where P_i is the pixel value of the reference pixel position corresponding to C_i and P_28-i is the pixel value of the reference pixel position corresponding to C_28-i. For any pixel of the LCU, the filtered pixel value may be determined by:

P'_14 = C_14 * P_14 + sum_{i=0}^{13} C_i * (A_i * P_i + (2 - A_i) * P_28-i)

where C_i is the (i+1)-th filter coefficient of the merge region to which the LCU belongs, P_i is the pixel value of the reference pixel position corresponding to the filter coefficient C_i, the reference pixel position corresponding to P_28-i and the reference pixel position corresponding to P_i are centrally symmetric with respect to the position of the current filtered pixel, A_i is the weight coefficient of the pixel value of the reference pixel position corresponding to P_i, P_14 is the pixel value of the current filtered pixel, C_14 is the filter coefficient of the current filtered pixel, and 0 < A_i < 2.
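The asymmetric weighted sum described above can be sketched as follows for the 29-tap centrally symmetric shape (taps C_0 through C_28, center C_14). This is an illustrative sketch under the stated symbol definitions; the fixed-point normalization and rounding a real codec would apply are omitted, and A_i = 1 for all i reduces to the ordinary symmetric filter.

```python
def alf_filter_pixel(coeffs, refs, weights):
    """Asymmetric ALF weighted sum for one pixel.

    coeffs:  C_0..C_14 (15 values; taps 15..28 mirror taps 13..0).
    refs:    P_0..P_28, reference pixel values around the current pixel.
    weights: A_0..A_13 with 0 < A_i < 2 (weight of P_i vs P_{28-i}).
    """
    acc = 0
    for i in range(14):
        # Each non-center tap combines the two centrally symmetric
        # reference pixels with weights A_i and (2 - A_i).
        acc += coeffs[i] * (weights[i] * refs[i] + (2 - weights[i]) * refs[28 - i])
    acc += coeffs[14] * refs[14]  # center tap: the current pixel itself
    return acc
```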
For example, for any merge region, the encoding end device may determine the filter coefficients of the merge region and the filtering performance obtained when each position uses different weight coefficients, select the set of filter coefficients with the best filtering performance together with the corresponding weight coefficient at each position of the filter, write them into the code stream, and send the code stream to the decoding end device.
For example, a set of candidate weight coefficients (such as a value set for A_i) may be pre-constructed, each weight coefficient may be selected from this set to obtain the filter coefficients with the best filtering performance and the corresponding weight coefficient at each position of the filter, and the index of each weight coefficient in the candidate set may be written into the code stream and sent to the decoding end device.
For any LCU, the decoding end device can analyze the code stream to obtain the filter coefficient of the merging area of the LCU and the weight coefficient of each reference pixel position corresponding to the merging area of the LCU, and ALF filtering is carried out on pixels of the LCU one by one.
In some embodiments, in step S630, performing ALF filtering on pixels of the LCU one by one based on the filter coefficients of the LCU may include:
for any pixel of the LCU, updating the pixel value of the pixel based on the pixel values of surrounding pixels of the pixel in the process of performing ALF filtering on the pixel;
And performing ALF filtering on the pixel based on the pixel value updated by the pixel.
For example, for any pixel position, when the pixel value of the pixel position is too large or too small, the filtering performance of the conventional ALF technology at that position is poor. Therefore, in order to optimize the ALF filtering effect, in the process of performing ALF filtering on any pixel, the pixel value of the pixel may be updated based on the pixel values of the surrounding pixels of the pixel, so that the pixel value of the pixel position is smoother relative to the pixel values of the surrounding pixels.
In one example, the updating the pixel value of the pixel based on the pixel values of the surrounding pixels of the pixel may include:
Determining a maximum value and a minimum value of the pixel values of the pixels other than the center position in a target pixel block; wherein the target pixel block is the 3×3 pixel block centered on the pixel;
when the pixel value of the pixel is larger than the maximum value, updating the pixel value of the pixel to the maximum value;
and when the pixel value of the pixel is smaller than the minimum value, updating the pixel value of the pixel to the minimum value.
Illustratively, the surrounding pixels of a pixel may be taken as the 8 neighboring pixels of the pixel, i.e., the pixels other than the center position in the 3×3 pixel block (referred to herein as the target pixel block) centered on the pixel.
For any pixel in any LCU, the pixel values for each pixel in the target pixel block other than the center position may be determined, and the maximum and minimum values in each pixel value may be determined.
When the pixel value of the pixel is larger than the maximum value, updating the pixel value of the pixel to the maximum value; and when the pixel value of the pixel is smaller than the minimum value, updating the pixel value of the pixel to the minimum value.
For example, taking the 3×3 pixel block shown in fig. 15 as an example, assume that the current filtered pixel is pixel 0 and its surrounding pixels are its 8 neighboring pixels, i.e., pixels 1 to 8. When filtering pixel 0, the pixel values of pixels 1 to 8 may be obtained, and the maximum value and the minimum value of the pixel values of these 8 pixels may be determined. Assuming that the pixel value of pixel 1 is the largest and the pixel value of pixel 8 is the smallest, the maximum value is the pixel value of pixel 1 (denoted p1) and the minimum value is the pixel value of pixel 8 (denoted p8). The pixel value of pixel 0 (denoted p0) may then be compared with p1 and p8: if p0 > p1, the pixel value of pixel 0 is updated to p1; if p0 < p8, the pixel value of pixel 0 is updated to p8.
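The update above amounts to clamping the center pixel of a 3×3 block to the range spanned by its 8 neighbors, which can be sketched as follows (the block layout is an assumption for illustration, with the current pixel at the center of a 3×3 list of lists):

```python
def clamp_center(block):
    """Clamp the center pixel of a 3x3 block to the min/max of its 8 neighbors.

    block: 3x3 list of lists of pixel values; block[1][1] is the current pixel.
    """
    neighbors = [block[r][c] for r in range(3) for c in range(3)
                 if not (r == 1 and c == 1)]
    lo, hi = min(neighbors), max(neighbors)
    center = block[1][1]
    # Only too-large or too-small values are changed; in-range values pass through.
    return max(lo, min(center, hi))
```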
Example two
Referring to fig. 7, a flowchart of a filtering method according to an embodiment of the present application is shown, wherein the filtering method may be applied to an encoding/decoding device, and as shown in fig. 7, the filtering method may include the following steps:
Step S700, in the process of performing ALF filtering on any pixel in the current filtering unit, for any reference pixel, when the reference pixel is not in the current filtering unit, go to step S710.
Step S710, determining whether the pixel value of the reference pixel can be acquired; if yes, go to step S730; otherwise, go to step S720.
Step S720, the pixel closest to the reference pixel position within the current filtering unit and the boundary region is used in place of the reference pixel for filtering.
Step S730, filtering is performed using the reference pixel.
The filtering unit may be an LCU or an image block obtained based on the LCU, for example, an image block obtained by clipping or expanding the LCU, for example.
For example, the implementation of the LCU-based filtering unit may be referred to the related description in the section "adaptive correction filtering unit" above, and the embodiments of the present application will not be repeated here.
In the embodiment of the present application, it is considered that, for the boundary pixels of the filtering unit, some of the reference pixels may be outside the filtering unit, i.e., not inside the filtering unit; in this case, the pixel values of these reference pixels may not be obtainable.
In one example, the cases in which the pixel value of the reference pixel cannot be obtained may include, but are not limited to, one of the following:
the reference pixel is outside the image boundary; the reference pixel is outside the slice boundary and filtering across the slice boundary is not allowed; or the reference pixel is outside the upper or lower boundary of the current filtering unit.
For example, for any pixel position, the other pixel position closest to that position is usually also the pixel position whose pixel value is closest to the pixel value of that position. Therefore, in order to optimize the ALF filtering effect, when the pixel value of a reference pixel cannot be obtained, the pixel closest to the reference pixel position within the current filtering unit and the boundary region may be used in place of the reference pixel for filtering.
The distance between pixel locations may be, for example, euclidean distances.
Illustratively, the boundary region includes the region outside the left boundary or outside the right boundary of the current filtering unit, where the region outside the left boundary of the current filtering unit includes part or all of the filtering unit adjacent to the left side of the current filtering unit, and the region outside the right boundary of the current filtering unit includes part or all of the filtering unit adjacent to the right side of the current filtering unit.
For example, taking the filtering unit shown in fig. 5 (i.e., the sample filtering compensation unit in fig. 5) as the current filtering unit, the boundary region of the current filtering unit may include 3 columns of pixels on the left side of the left boundary of the sample filtering compensation unit shown in fig. 5 (i.e., 3 columns of pixels in the filtering unit on the left side of the current filtering unit, which are close to the current filtering unit, may be referred to as outside the left boundary); the boundary region of the current filter unit may include 3 columns of pixels to the right of the right boundary of the sample filter compensation unit shown in fig. 5 (i.e., 3 columns of pixels in the filter unit to the right of the current filter unit, which are close to the current filter unit, may be referred to as outside the right boundary).
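The replacement rule above can be sketched as a nearest-neighbor search over the positions whose pixel values are obtainable (the current filtering unit plus the boundary-region columns). This is an illustrative sketch assuming integer pixel coordinates and, as suggested above, Euclidean distance between pixel positions:

```python
def nearest_available(ref_pos, available_positions):
    """Find the available pixel position closest to an unavailable reference.

    ref_pos: (x, y) of the reference pixel whose value cannot be obtained.
    available_positions: iterable of (x, y) positions whose pixel values can
        be obtained (current filtering unit plus boundary-region columns).
    """
    rx, ry = ref_pos
    # Squared Euclidean distance; the square root is monotone, so it can be
    # skipped when only the argmin is needed.
    return min(available_positions,
               key=lambda p: (p[0] - rx) ** 2 + (p[1] - ry) ** 2)
```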
As can be seen, in the method flow shown in fig. 7, in the process of performing ALF filtering on each pixel in the current filtering unit, when the pixel value of a reference pixel position that is not in the current filtering unit cannot be obtained, the pixel closest to that reference pixel position within the current filtering unit and the boundary region is used in place of the reference pixel for filtering, thereby optimizing the ALF filtering performance and improving coding and decoding performance.
In some embodiments, in the case where the pixel value of the reference pixel cannot be obtained, before using the pixel closest to the reference pixel position within the current filtering unit and the boundary region in place of the reference pixel for filtering, the method may further include:
Determining whether the reference pixel corresponds to a specified location of the filter shape;
if yes, determining to perform the operation of using the pixel closest to the reference pixel position within the current filtering unit and the boundary region in place of the reference pixel for filtering.
Illustratively, it is considered that for a reference pixel at certain specific positions, for example a reference pixel directly to the left of, directly to the right of, directly above, or directly below the current filtered pixel position, when the pixel value of the reference pixel cannot be obtained, the pixel value of the pixel position closest to it within the boundary region cannot be obtained either.
For example, assume that the current filtered pixel position (i.e., the pixel position corresponding to C_14) is located at the left boundary of the current filtering unit. For the reference pixel position corresponding to C_11, since that reference pixel position is to the left of the current filtered pixel position at a distance of 3 pixels, and the width of a filtering unit is generally greater than 3 pixels, if the pixel value of the reference pixel position corresponding to C_11 cannot be obtained, it can be determined that the filtering unit to the left of the current filtering unit is outside the boundary of the current image frame (i.e., the image frame where the current filtering unit is located), or is outside the slice boundary of the current slice (i.e., the slice where the current filtering unit is located) and filtering across the slice boundary is not allowed. In this case, the pixel value of the pixel position in the boundary region closest to the reference pixel position, i.e., the pixel position corresponding to C_12, cannot be obtained either, and the reference pixel (i.e., the reference pixel position corresponding to C_11) needs to be replaced with the closest pixel within the current filtering unit.
Taking the scenario shown in fig. 17A as an example, when the pixel value of the reference pixel position corresponding to C_11 cannot be obtained, the pixel value of the reference pixel position corresponding to C_12 cannot be obtained either.
For a reference pixel at the upper-left, upper-right, lower-left, or lower-right of the current filtered pixel position, when its pixel value cannot be obtained, this may be because it is outside the upper or lower boundary of the current filtering unit (the pixel values of pixel positions outside the upper or lower boundary of the current filtering unit cannot be obtained). In this case, the pixel position closest to the reference pixel position may be outside the left or right boundary of the current filtering unit, and its pixel value may be obtainable.
For example, assume that the current filtered pixel position (i.e., the pixel position corresponding to C_14) is near the upper-left of the current filtering unit. For the reference pixel position corresponding to C_1, since that reference pixel position is above and to the left of the current filtered pixel position, when the current filtered pixel position is relatively close to the top-left vertex of the current filtering unit, the reference pixel position corresponding to C_1 may be outside the upper boundary of the current filtering unit, so its pixel value may not be obtainable; in this case, the pixel position corresponding to C_6 may be outside the left boundary of the current filtering unit, and its pixel value may be obtainable.
Taking the scenario shown in fig. 17B as an example, since the reference pixel position corresponding to C_1 is outside the upper boundary of the current filtering unit, its pixel value cannot be obtained, while the reference pixel position corresponding to C_6 is outside the left boundary of the current filtering unit. When pixel values outside the left boundary of the current filtering unit can be obtained, for example when the left boundary of the current filtering unit is not an image boundary or a slice boundary, the pixel value of the reference pixel position corresponding to C_6 can be obtained. Therefore, in this case, for a reference pixel position whose pixel value cannot be obtained, the pixel closest to it within the current filtering unit or the boundary region may be used in place of the reference pixel for filtering.
Thus, in order to optimize the ALF filtering effect, for a reference pixel at a specified position of the filter shape, if the pixel value of the reference pixel cannot be obtained, the pixel closest to the reference pixel position within the current filtering unit and the boundary region may be used in place of the reference pixel for filtering.
For example, if the reference pixel does not correspond to a specified position of the filter shape, the pixel closest to the reference pixel position within the current filtering unit may be used in place of the reference pixel for filtering, i.e., the pixels in the boundary region are not considered.
In one example, the specified locations may include, but are not limited to, a first location, a second location, a third location, and symmetrical locations of the first location, the second location, and the third location in the first filter;
The first filter is the 7×7 cross plus 5×5 square centrally symmetric filter, the first position is the upper-left corner position of the first filter, the second position is the position adjacent to the right of the first position, the third position is the position adjacent below the first position, and the symmetric positions include axisymmetric positions and centrally symmetric positions.
For example, for the first filter shown in fig. 4A, the first position is the C_1 position and its axisymmetric position is the C_5 position, the second position is the C_2 position and its axisymmetric position is the C_4 position, and the third position is the C_6 position and its axisymmetric position is the C_10 position; that is, the specified positions may include C_1, C_2, C_6, C_4, C_5, and C_10.
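The membership test for the specified positions can be sketched as below for the first filter (taps C_0 through C_28). The axisymmetric counterparts of C_1, C_2, C_6 are C_5, C_4, C_10 as stated above; the centrally symmetric taps C_(28-i) are also included here on the assumption that "symmetric positions include axisymmetric positions and centrally symmetric positions" covers both kinds, which is an interpretation rather than an explicit listing in the text.

```python
# Axisymmetric specified taps listed above: C_1, C_2, C_6 and C_4, C_5, C_10.
AXIS_SPECIFIED = {1, 2, 6, 4, 5, 10}

# Centrally symmetric tap of C_i in the 29-tap shape is C_(28 - i).
SPECIFIED = AXIS_SPECIFIED | {28 - i for i in AXIS_SPECIFIED}

def is_specified_position(tap_index):
    """True if the tap corresponds to a specified position of the first filter."""
    return tap_index in SPECIFIED
```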
In some embodiments, in the case where the pixel value of the reference pixel cannot be obtained, before using the pixel closest to the reference pixel position within the current filtering unit and the boundary region in place of the reference pixel for filtering, the method may further include:
determining whether the current filtering unit allows the use of enhanced adaptive correction filtering;
if yes, determining to perform the operation of using the pixel closest to the reference pixel position within the current filtering unit and the boundary region in place of the reference pixel for filtering.
By way of example, it is considered that the filter used when enhanced adaptive correction filtering is allowed and the filter used when it is not allowed are typically different.
For example, a filter used in the case where the enhancement adaptive correction filter is allowed to be used may be as shown in fig. 4A, and a filter used in the case where the enhancement adaptive correction filter is not allowed to be used may be as shown in fig. 12.
For the filter shown in fig. 12, when the pixel value of a reference pixel cannot be obtained, the pixel value of the pixel position outside the current filtering unit closest to the reference pixel position cannot be obtained either. Therefore, in the case where the use of enhanced adaptive correction filtering is not allowed, replacing a reference pixel whose pixel value cannot be obtained with a pixel in the boundary region need not be considered.
Thus, for any reference pixel, in case the pixel value of that reference pixel cannot be obtained, it can be determined whether the current filtering unit allows the use of enhanced adaptive correction filtering.
By way of example, whether the current filtering unit allows the use of enhanced adaptive correction filtering may be determined based on the value of EalfEnableFlag: when EalfEnableFlag is equal to 1, it indicates that enhanced adaptive correction filtering may be used; when EalfEnableFlag is equal to 0, it indicates that enhanced adaptive correction filtering should not be used.
Illustratively, the value of EalfEnableFlag may be derived at the decoding end; it may be obtained from the code stream or may be a constant value.
Illustratively, the value of EalfEnableFlag may be determined based on a value of an enhanced adaptive correction filter enable flag (ealf _enable_flag) parsed from the bitstream.
It should be noted that the "enhanced adaptive correction filter enable flag" may be a sequence header parameter, that is, its value may be used to indicate whether an image sequence is allowed to use enhanced adaptive correction filtering.
For example, when the decoding end device determines that the current filtering unit allows the enhancement adaptive correction filtering to be used, the current filtering unit and a pixel closest to the reference pixel position in the boundary region may be used for filtering instead of the reference pixel.
For example, if the current filtering unit does not allow the use of enhanced adaptive correction filtering, the pixel closest to the reference pixel location in the current filtering unit may be used instead of the reference pixel for filtering, i.e. without considering pixels in the boundary region.
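The branching described above can be sketched as follows. This is an illustrative sketch, not the specification's decoding process: only when EalfEnableFlag indicates enhanced adaptive correction filtering may boundary-region pixels serve as replacement candidates; otherwise only pixels inside the current filtering unit are considered. The function name and list-based representation are assumptions.

```python
def replacement_candidates(unit_pixels, boundary_pixels, ealf_enable_flag):
    """Candidate pixels for replacing an unavailable reference pixel.

    unit_pixels: positions/values inside the current filtering unit.
    boundary_pixels: positions/values in the boundary region (outside the
        left/right boundary of the current filtering unit).
    ealf_enable_flag: 1 if enhanced adaptive correction filtering may be
        used, 0 otherwise.
    """
    if ealf_enable_flag == 1:
        # Enhanced mode: the boundary region also contributes candidates.
        return unit_pixels + boundary_pixels
    # Otherwise, pixels in the boundary region are not considered.
    return unit_pixels
```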
Example III
Referring to fig. 8, a flowchart of a filtering method according to an embodiment of the present application is shown, wherein the filtering method may be applied to a decoding end device, and as shown in fig. 8, the filtering method may include the following steps:
Step S800, when determining that the current LCU of the current image frame starts ALF filtering, obtaining the region coefficient identification of the merging region to which the current LCU belongs.
Step S810, acquiring a filter coefficient of the current LCU based on a region coefficient identifier of a merging region to which the current LCU belongs; the region coefficient identifier is used for identifying the filter coefficient used by the merging region to which the LCU belongs in the preset multiple groups of filter coefficients.
In the embodiment of the application, in order to optimize the ALF filtering effect and improve the coding and decoding performance, the filtering coefficients used by one merging area are not limited to one group of filtering coefficients, but one group or a plurality of groups of filtering coefficients can be selected to be used according to actual conditions.
For example, for any merge region, the encoding end device may train multiple sets of filter coefficients and determine that the merge region uses one or more of the multiple sets of filter coefficients based on RDO decisions and write a region coefficient identification to the code stream that identifies the filter coefficients used by the merge region.
For example, when the decoding end device determines that the current LCU of the current image frame starts ALF filtering, the decoding end device may obtain, based on information parsed from the code stream, a region coefficient identifier of a merging region to which the current LCU belongs, and determine, based on the region coefficient identifier, a filter coefficient used by the merging region to which the current LCU belongs.
When the filter coefficient used by the merge area to which the current LCU belongs is determined, the filter coefficient of the current LCU may be determined from the filter coefficients used by the merge area.
For example, when a merge region to which a current LCU belongs uses a set of filter coefficients, the filter coefficients used by the merge region may be determined as the filter coefficients of the current LCU.
Step S820, ALF filtering is carried out on pixels of the current LCU one by one based on the filtering coefficient of the current LCU.
In the embodiment of the application, when the filter coefficient of the current LCU is determined, ALF filtering can be carried out on pixels of the current LCU one by one based on the filter coefficient of the current LCU.
It can be seen that, in the method flow shown in fig. 8, multiple sets of filter coefficients are trained for each region, each merge region is determined, based on an RDO decision, to use one or more of the trained sets of filter coefficients, and the decision result is notified to the decoding end device through the region coefficient identifier. A region is thus no longer limited to using one set of filter coefficients, but can use one or more sets according to performance, thereby optimizing the ALF filtering performance and improving coding and decoding performance.
In some embodiments, in step S800, determining that ALF filtering is started for the current LCU of the current image frame may include:
Analyzing LCU coefficient identification of the current LCU from the code stream; wherein, the LCU coefficient identifier is used for identifying the filter coefficient used by the current LCU in at least one group of filter coefficients used by the merging area to which the current LCU belongs;
And when the value of the LCU coefficient identifier of the LCU is not the first value, determining to start ALF filtering on the current LCU.
Illustratively, the encoding end device may inform the decoding end device of the merge region using one or more sets of filter coefficients through region coefficient identification. For any LCU in the region, the encoding end device may identify, by LCU coefficient identification, a filter coefficient used by the LCU from among one or more sets of filter coefficients used by the merge region.
For any LCU, the decoding end device may determine, based on the LCU coefficient identifier of the LCU parsed from the bitstream, whether to start ALF filtering on the LCU, and in case of starting ALF filtering on the LCU, a filtering coefficient of the LCU.
For any LCU, when the value of the LCU coefficient identifier of the LCU is a first value, it indicates that ALF filtering is not started for the LCU.
For any LCU, when the value of the LCU coefficient identifier of the LCU parsed from the code stream by the decoding end device is not the first value, it may be determined that ALF filtering is started for the LCU.
For example, assuming that the first value is 0, for any LCU, when the value of the LCU coefficient identifier of the LCU parsed from the code stream by the decoding end device is 0, it may be determined that ALF filtering is not started for the LCU; when the value is not 0, it may be determined that ALF filtering is started for the LCU, and the decoding end device may determine the filter coefficient used by the LCU according to the LCU coefficient identifier of the LCU.
It should be noted that, when the value of the LCU coefficient identifier of the LCU is not the first value, if a group of filter coefficients is used in the merge area to which the LCU belongs, the filter coefficient of the LCU is the group of filter coefficients; if the LCU belongs to the combining area, multiple groups of filter coefficients are used, the filter coefficients of the LCU need to be determined according to the specific value of the LCU coefficient identifier of the LCU.
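The decoding-side use of the LCU coefficient identifier can be sketched as follows. This is an illustrative sketch under the assumptions stated in the comments: the first value is 0 (ALF off), and a non-zero value k is taken here to select the k-th of the coefficient sets used by the merge region; the exact mapping of non-zero values to sets is not spelled out in the text.

```python
def decode_lcu_coeffs(lcu_coef_id, region_coeff_sets):
    """Resolve an LCU coefficient identifier to the LCU's filter coefficients.

    lcu_coef_id: 0 (first value) means ALF filtering is not started.
    region_coeff_sets: the one or more coefficient sets used by the merge
        region to which the LCU belongs.
    """
    if lcu_coef_id == 0:
        return None                      # ALF filtering not started for this LCU
    if len(region_coeff_sets) == 1:
        # Single set in the region: any non-zero identifier selects it.
        return region_coeff_sets[0]
    # Multiple sets: assumed here that value k selects the k-th set.
    return region_coeff_sets[lcu_coef_id - 1]
```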
In one example, in step S810, obtaining the filter coefficient of the current LCU based on the region coefficient identifier of the merge region to which the current LCU belongs may include:
When the region coefficient identification of the merging region to which the current LCU belongs is used for determining that the merging region to which the LCU belongs uses a plurality of sets of filter coefficients, the filter coefficients of the current LCU are determined from the plurality of sets of filter coefficients used by the merging region to which the LCU belongs based on the LCU coefficient identification of the current LCU.
For example, for any LCU, when the decoding end device determines that the combining area to which the LCU belongs uses multiple sets of filter coefficients, the decoding end device may determine the filter coefficients used by the LCU based on the LCU coefficient identifier of the LCU obtained by parsing the code stream.
It should be noted that, for any merging region, the filter shapes of the sets of filter coefficients used in the merging region may be identical or not identical.
Example IV
Referring to fig. 9, a flowchart of a filtering method according to an embodiment of the present application is shown, wherein the filtering method may be applied to a decoding end device, and as shown in fig. 9, the filtering method may include the following steps:
Step S900, when determining that ALF filtering is started for the current LCU of the current image frame, acquiring a coefficient selection identifier of the current LCU.
Step S910, determining a filtering coefficient of the current LCU based on the merging area to which the current LCU belongs and the coefficient selection identifier of the current LCU; wherein the coefficient selection identifies filter coefficients for identifying a current LCU to select for use among the plurality of sets of candidate filter coefficients.
In the embodiment of the application, in order to optimize the ALF filtering effect and improve coding and decoding performance, an LCU is not limited to using the filter coefficient of the merge region to which it belongs, but can adaptively select one set of filter coefficients from multiple sets of filter coefficients for ALF filtering.
For example, for any LCU, the candidate filter coefficients of the LCU may include, but are not limited to, the filter coefficients of the merging region to which the LCU belongs and the filter coefficients of regions adjacent to that merging region. In this way, even when each region transmits only one set of filter coefficients, an LCU may have multiple sets of candidate filter coefficients, which improves the flexibility of LCU filter coefficient selection, optimizes the ALF filtering effect, and improves the coding and decoding performance.
For any LCU, the encoding end device may determine, based on the RDO decision, a filter coefficient used by the LCU in the plurality of sets of candidate filter coefficients, and write a coefficient selection identifier corresponding to the filter coefficient into the code stream and send the code stream to the decoding end device.
The decoding end device may determine a filter coefficient of the current LCU based on the merge area to which the current LCU belongs and a coefficient selection identifier of the current LCU obtained by parsing the code stream.
Step S920, ALF filtering is performed on pixels of the current LCU one by one based on the filter coefficient of the current LCU.
In the embodiment of the application, when the decoding end equipment determines the filter coefficient of the current LCU, ALF filtering can be carried out on pixels of the current LCU one by one based on the filter coefficient of the current LCU.
It can be seen that, in the method flow shown in fig. 9, by setting multiple sets of candidate filter coefficients for each LCU, determining the filter coefficient used by each LCU based on RDO decision, and notifying the decoding end device of the decision result through the coefficient selection identifier, flexibility of the filter coefficient used by each LCU can be improved, ALF filter performance is optimized, and coding and decoding performance is improved.
In some embodiments, in step S910, determining the filter coefficient of the LCU based on the merge area to which the LCU belongs and the coefficient selection identifier of the LCU may include:
When the value of the coefficient selection identifier of the current LCU is a first value, determining the filter coefficient of the previous merging region of the merging region to which the current LCU belongs as the filter coefficient of the current LCU;
When the value of the coefficient selection identifier of the current LCU is a second value, determining the filter coefficient of the merging area to which the current LCU belongs as the filter coefficient of the current LCU;
And when the value of the coefficient selection identifier of the current LCU is a third value, determining the filter coefficient of the next merging region of the merging region to which the current LCU belongs as the filter coefficient of the current LCU.
For example, for any LCU within any merge region, the candidate filter coefficients for that LCU may include the filter coefficients for that merge region, the filter coefficients for the previous merge region for that merge region, and the filter coefficients for the subsequent merge region for that merge region.
For example, the previous merge area of the merge area to which the LCU belongs is the merge area corresponding to the previous adjacent index of the merge area to which the LCU belongs.
For example, the later merging region of the merging region to which the LCU belongs is the merging region corresponding to the next adjacent index of the merging region to which the LCU belongs.
It should be noted that, for the merged regions obtained by region merging of the 16 regions produced by the fixed region division (denoted merging region 0 to merging region 15 in order), the later merging region of merging region 15 may be merging region 0, and the previous merging region of merging region 0 may be merging region 15.
For any LCU, the encoding end device may determine the filter coefficient used by the LCU based on the RDO decision. When the LCU is determined to use the filter coefficient of the previous merging region of the merging region to which it belongs, the value of the coefficient selection identifier of the LCU may be set to a first value, such as 0; when the LCU uses the filter coefficient of the merging region to which it belongs, the value may be set to a second value, such as 1; when the LCU uses the filter coefficient of the later merging region of the merging region to which it belongs, the value may be set to a third value, such as 3.
For any LCU, when the value of the coefficient selection identifier of the LCU obtained by parsing the code stream by the decoding end device is the first value, the filter coefficient of the previous combining region of the combining region to which the LCU belongs may be determined as the filter coefficient of the LCU; when the value of the coefficient selection identifier of the LCU obtained by parsing the code stream is a second value, determining the filter coefficient of the combining region to which the LCU belongs as the filter coefficient of the LCU; when the value of the coefficient selection identifier of the LCU obtained by parsing the code stream is the third value, the filter coefficient of the later combining region of the combining region to which the LCU belongs may be determined as the filter coefficient of the LCU.
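The three-way mapping above, including the index wraparound between merging region 0 and merging region 15, can be sketched as follows (an illustrative assumption, not normative code):

```python
# Illustrative sketch of the three-way selection above, with index wraparound
# over the 16 merge regions from the fixed division (values are assumptions).
NUM_REGIONS = 16

def coeffs_for_lcu(select_id, region_idx, region_coeffs):
    """select_id: first value -> previous region's coefficients,
    second value -> own region's, any other value -> next region's.
    Region 0's previous region is 15; region 15's next region is 0."""
    if select_id == 0:                       # first value
        src = (region_idx - 1) % NUM_REGIONS
    elif select_id == 1:                     # second value
        src = region_idx
    else:                                    # third value
        src = (region_idx + 1) % NUM_REGIONS
    return region_coeffs[src]

coeffs = {r: [r] for r in range(NUM_REGIONS)}
assert coeffs_for_lcu(0, 0, coeffs) == [15]   # previous of region 0 wraps to 15
assert coeffs_for_lcu(3, 15, coeffs) == [0]   # next of region 15 wraps to 0
```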
In this way, the filter coefficients of the merging region to which the LCU belongs, together with those of the previous and later merging regions, serve as the LCU's candidate filter coefficients, and one set is selected as the LCU's filter coefficients based on the RDO decision. Thus, even though only one set of filter coefficients is trained per merging region, an LCU within a region has multiple sets of candidates. This improves the flexibility of the LCU's filter coefficients without training multiple sets per merging region, thereby optimizing ALF filtering performance and improving coding and decoding performance.
Example five
Referring to fig. 10, a flowchart of a filtering method according to an embodiment of the present application is shown, wherein the filtering method may be applied to a decoding end device, and as shown in fig. 10, the filtering method may include the following steps:
Step S1000, when determining that ALF filtering is started on the current LCU of the current frame image, acquiring the filter shape of the merging region to which the current LCU belongs, based on that merging region.
Step S1010, obtaining a filter coefficient of a merging area to which the current LCU belongs based on the filter shape.
In the embodiment of the application, in order to improve the flexibility of the filter coefficients, optimize the ALF filtering effect, and improve the coding and decoding performance, each region is no longer limited to using the same filter shape; different filter shapes may be used selectively, i.e., the filter shapes used by different merging regions may be the same or different.
For example, for any merge region, the encoding end device may train multiple sets of filter coefficients for different filter shapes, determine the filter shape and filter coefficients used by the merge region based on RDO decisions, and write the filter shape and filter coefficients into the code stream for transmission to the decoding end device.
For example, for any merging region, when the decoding end device obtains the filter coefficient of the merging region, the decoding end device may parse the filter shape of the merging region from the code stream, and parse the filter coefficient of the merging region from the code stream based on the filter shape.
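The decode order described here (shape first, then shape-dependent coefficients) can be sketched as follows. The shape names, the second coefficient count, and the read callback are assumptions for illustration; 15 matches the 7×7-cross-plus-5×5-square shape discussed later (coefficients C_0 to C_14):

```python
# Hypothetical decode-side sketch: the parsed filter shape decides how many
# coefficients follow in the stream. Shape names and the second count are
# assumptions; 15 matches the 7x7-cross-plus-5x5-square shape (C_0..C_14).
SHAPE_NUM_COEFFS = {
    "7x7cross_5x5square": 15,
    "7x7cross_3x3square": 9,   # assumed smaller alternative shape
}

def parse_region_filter(read_symbol, shape_id):
    shape = list(SHAPE_NUM_COEFFS)[shape_id]
    n = SHAPE_NUM_COEFFS[shape]
    coeffs = [read_symbol() for _ in range(n)]  # one parsed value per coefficient
    return shape, coeffs

stream = iter(range(15))
shape, coeffs = parse_region_filter(lambda: next(stream), 0)
assert shape == "7x7cross_5x5square" and len(coeffs) == 15
```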
Step S1020, performing ALF filtering on pixels of the current LCU one by one based on the filter shape and the filter coefficient.
In the embodiment of the application, when the decoding end equipment determines the filter coefficient of the current LCU, ALF filtering can be carried out on pixels of the current LCU one by one based on the filter coefficient of the current LCU.
It can be seen that, in the method flow shown in fig. 10, by training multiple sets of filter coefficients with different filter shapes for each region, determining the filter shape and the filter coefficient used by each merging region based on RDO decision, notifying the decision result to the decoding end device through the code stream, the decoding end device can parse the code stream to obtain the filter shape and the filter coefficient of each region, thereby optimizing the ALF filtering effect and improving the coding and decoding performance.
It should be noted that, in the embodiment of the present application, the filter shape may also be selected for the image frame, or for a component (such as a luminance component and/or a chrominance component) of the image frame. For example, when image frame A selects a centrally symmetric filter shape of a 7×7 cross plus a 5×5 square, each LCU in image frame A that starts ALF filtering uses that centrally symmetric filter shape.
Example six
Referring to fig. 11, a flowchart of a filtering method according to an embodiment of the present application is shown, wherein the filtering method may be applied to a decoding end device, and as shown in fig. 11, the filtering method may include the following steps:
Step S1100, when determining that ALF filtering is started on the current LCU of the current frame image, obtaining a filtering coefficient of the merging region to which the current LCU belongs and a weight coefficient of each reference pixel position based on the merging region to which the current LCU belongs.
Step S1110, performing ALF filtering on the pixels of the current LCU one by one based on the filtering coefficients and the weight coefficients of the reference pixel positions.
In the embodiment of the application, in order to optimize the ALF filtering effect and improve the coding and decoding performance, the filter used in the ALF filtering is not limited to a symmetrical filter, but an asymmetrical filter can be adopted, i.e. the filtering coefficients with symmetrical positions can be different and meet a certain proportional relationship, such as 0.5:1.5 or 0.6:1.4.
When ALF filtering is performed based on the determined filter coefficients, the filtered pixel value is obtained from a sum of products in which, for any non-center position, the filter coefficient is combined with the reference pixel at its symmetric position. The proportion may therefore be either a proportion between the filter coefficients at symmetric positions, or a proportion (which may also be called a weight proportion) between the weights applied to the pixel values of the reference pixels at symmetric positions when they participate in the ALF filtering calculation. That is, an asymmetric filter means either that the filter coefficients at symmetric positions differ, or that the weights applied to the pixel values of the reference pixels at symmetric positions differ.
For example, consider the filter coefficients C_i of the centrally symmetric filter shape of a 7×7 cross plus a 5×5 square, where the coefficient at the position symmetric to C_i is C_{28-i}. Then C_i:C_{28-i} = A_i:(2-A_i); equivalently, the ratio of the weights of P_i and P_{28-i} when participating in the ALF filtering calculation is A_i:(2-A_i), where P_i is the pixel value of the reference pixel position corresponding to C_i and P_{28-i} is the pixel value of the reference pixel position corresponding to C_{28-i}. For any pixel of the LCU, the filtered pixel value can be determined by:

P' = Σ_{i=0}^{13} C_i·(A_i·P_i + (2−A_i)·P_{28−i}) + C_14·P_14

wherein C_i is the (i+1)th filter coefficient of the merging region to which the LCU belongs, P_i is the pixel value of the reference pixel position corresponding to the filter coefficient C_i, the reference pixel positions corresponding to P_{28−i} and P_i are centrally symmetric about the position of the current filtered pixel, A_i is the weight coefficient of the pixel value at the reference pixel position corresponding to P_i, P_14 is the pixel value of the current filtered pixel, C_14 is the filter coefficient of the current filtered pixel, and 0 < A_i < 2.
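The asymmetric weighted sum can be sketched in a few lines (an illustrative floating-point version; real codecs use fixed-point arithmetic and clipping, which are omitted here):

```python
# Illustrative floating-point sketch of the asymmetric weighted sum above;
# fixed-point scaling and clipping used in a real codec are omitted.
def alf_filter_pixel(C, P, A):
    """C: 15 coefficients C_0..C_14 (C_14 is the center coefficient).
    P: 29 reference pixel values; P_i and P_{28-i} are centrally symmetric
    about the current pixel P_14.
    A: 14 weight coefficients with 0 < A_i < 2 (A_i = 1 recovers the
    ordinary symmetric filter)."""
    acc = C[14] * P[14]
    for i in range(14):
        acc += C[i] * (A[i] * P[i] + (2 - A[i]) * P[28 - i])
    return acc

# With all weights 1 and all coefficients 1 on constant pixels, the
# result is simply the 29-pixel total:
assert alf_filter_pixel([1.0] * 15, [1.0] * 29, [1.0] * 14) == 29.0
```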
In the embodiment of the application, for any merging region, the encoding end device can determine the filtering coefficient and the filtering performance of the merging region under the condition that each position corresponds to different weight coefficients. And selecting a group of filter coefficients with the best filtering performance, recording the filter coefficients and the corresponding weight coefficients at each position of the corresponding filter, writing the filter coefficients into a code stream, and sending the code stream to decoding end equipment.
For example, a set of weight coefficients (such as the value set of a i) may be pre-constructed, and each weight coefficient is selected from the set to obtain a filter coefficient with the best filtering performance and a corresponding weight coefficient at each position of the corresponding filter, and an index of the weight coefficient in the set of weight coefficients is written into a code stream and sent to the decoding end device.
For any LCU, the decoding end device can analyze the code stream to obtain the filter coefficient of the merging area of the LCU and the weight coefficient of each reference pixel position corresponding to the merging area of the LCU, and ALF filtering is carried out on pixels of the LCU one by one.
It can be seen that, in the method flow shown in fig. 11, the filter used in each merging region is no longer limited to a symmetric filter: the filter coefficients at symmetric reference pixel positions need not be identical and may instead satisfy a certain proportional relationship. Because the coefficients at symmetric positions satisfy that relationship, the number of filter coefficients to be transmitted does not increase. This improves the flexibility of the filter coefficients, optimizes ALF filtering performance, and improves coding and decoding performance.
Example seven
The embodiment of the application provides a filtering method which can be applied to encoding end equipment and comprises the following steps:
T100, performing region division on the luminance component of the current image frame.
For example, the implementation of the region division of the luminance component of the image frame may be referred to in the above description of the "region division" section, and the embodiments of the present application are not described herein.
And T110, classifying each LCU in any region, and dividing the region into a plurality of region categories based on the category of each LCU.
In the embodiment of the application, in order to optimize the ALF filtering effect and improve the coding and decoding performance, when the luminance component of the image frame is divided into a plurality of regions according to a fixed region division mode, for each region, LCUs in the region can be classified based on the pixel characteristics of each pixel in the region, and the LCUs in the region are classified into at least one category, i.e., one region can be classified into at least one region (which can be called a sub-region or a region category) by the LCU classification mode.
And T120, carrying out region merging on each region category, and determining the filter coefficient of each merged region.
And T130, writing the filter coefficient of each combined region and the region type identifier of each LCU into the code stream.
In the embodiment of the application, when the encoding end device classifies the LCUs in each region according to the above manner, the encoding end device may perform region merging on each region category to obtain at least one merged region, and determine the filter coefficient of each merged region.
The implementation manner of the region merging for each region category is similar to that described in the above section of "region merging", and the embodiments of the present application are not described herein.
For example, for any region category, the encoding end device may assign a coefficient index to the region category based on the merged region to which it belongs, where the coefficient index corresponds to the filter coefficients of one of the merged regions.
In the embodiment of the application, the encoding end device can write the filter coefficient of each combined region, the index of each region category and the region category identification for identifying the region category to which each LCU belongs into the code stream, and send the code stream to the decoding end device.
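A hedged sketch of this signaling structure (all container names are hypothetical): each (region, category) pair carries a coefficient index that points at one merged region's filter coefficients, so resolving an LCU's coefficients is a two-step lookup.

```python
# Hypothetical sketch: each (region, category) pair carries a coefficient
# index pointing at one merged region's filter coefficients.
def lcu_filter_coeffs(region_idx, lcu_category, category_coeff_index, merged_coeffs):
    merged_idx = category_coeff_index[(region_idx, lcu_category)]
    return merged_coeffs[merged_idx]

merged = [[1, 1, 1], [2, 2, 2]]   # one coefficient set per merged region
index = {(0, 0): 0, (0, 1): 1}    # region 0 split into two categories
assert lcu_filter_coeffs(0, 1, index, merged) == [2, 2, 2]
```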
For example, the processing flow of the decoding side device may be referred to the related description in the above embodiments, and the embodiments of the present application are not described herein.
Example eight
The embodiment of the application provides a filtering method which can be applied to encoding end equipment and comprises the following steps:
t200, for any merging region of the current image frame, determining filter coefficients used by the merging region based on RDO decision.
T210, determining a region coefficient identifier of the merging region based on the filter coefficients used by the merging region; wherein the region coefficient identifier is used to identify, among a preset plurality of groups of filter coefficients, the filter coefficients used by the merging region.
And T220, writing the filter coefficient used by each merging area and the area coefficient identification of each merging area into the code stream.
In the embodiment of the application, in order to optimize the ALF filtering effect and improve the coding and decoding performance, the filtering coefficients used by one merging area are not limited to one group of filtering coefficients, but one group or a plurality of groups of filtering coefficients can be selected to be used according to actual conditions.
For example, for any merge region, the encoding end device may train multiple sets of filter coefficients and determine that the merge region uses one or more of the multiple sets of filter coefficients based on RDO decisions and write a region coefficient identification to the code stream that identifies the filter coefficients used by the merge region.
For example, the processing flow of the decoding side device may be referred to the related description in the above embodiments, and the embodiments of the present application are not described herein.
In some embodiments, the filtering method may further include:
For any merging region of the current image frame, when the filtering coefficients used by the merging region comprise a plurality of sets, determining LCU coefficient identifiers of LCUs based on the filtering coefficients used by the LCUs in the merging region;
And writing the LCU coefficient identification of each LCU into the code stream.
Illustratively, the encoding end device may inform the decoding end device of the merge region using one or more sets of filter coefficients through region coefficient identification. For any LCU of the merge region, the encoding end device may identify, by LCU coefficient identification, a filter coefficient used by the LCU from among one or more sets of filter coefficients used by the merge region.
For any merging region, when the encoding end device determines that the merging region uses multiple groups of filter coefficients, for any LCU of the merging region, the encoding end device can inform the decoding end device of the filter coefficients used by the LCU in the multiple groups of filter coefficients through LCU coefficient identification.
For example, for any LCU, when ALF filtering is not started on the LCU, the encoding end device may set the value of the LCU coefficient identifier of the LCU written into the code stream to a first value.
For example, assuming that the first value is 0, for any LCU, when the encoding end device determines that ALF filtering is not started for the LCU, the value of the LCU coefficient identifier of the LCU written into the code stream is 0.
Example nine
The embodiment of the application provides a filtering method which can be applied to encoding end equipment and comprises the following steps:
t300, for any merging region of the current image frame, determining a filter coefficient used by the merging region from a plurality of groups of filter coefficients based on RDO decision;
T310, determining the coefficient selection identifier of each LCU in the merging region based on the filter coefficients used by each LCU; wherein the coefficient selection identifier is used to identify the filter coefficients each LCU selects from among the plurality of sets of candidate filter coefficients.
And T320, writing the filter coefficient used by each merging area and the coefficient selection identification of each LCU into the code stream.
In the embodiment of the application, in order to optimize the ALF filtering effect and improve the coding and decoding performance, the LCU is not limited to select the filtering coefficient of the combining region to which the LCU belongs, but can adaptively select a group of filtering coefficients from a plurality of groups of filtering coefficients to carry out the ALF filtering.
For example, for any LCU, the candidate filter coefficients of the LCU may include, but are not limited to, the filter coefficients of the merging region to which the LCU belongs and the filter coefficients of regions adjacent to that merging region. In this way, even when each region transmits only one set of filter coefficients, an LCU may have multiple sets of candidate filter coefficients, which improves the flexibility of LCU filter coefficient selection, optimizes the ALF filtering effect, and improves the coding and decoding performance.
For any LCU, the encoding end device may determine, based on the RDO decision, a filter coefficient used by the LCU in the plurality of sets of candidate filter coefficients, and write a coefficient selection identifier corresponding to the filter coefficient into the code stream and send the code stream to the decoding end device.
For example, the processing flow of the decoding side device may be referred to the related description in the above embodiments, and the embodiments of the present application are not described herein.
For example, for any LCU within any merge region, the candidate filter coefficients for that LCU may include the filter coefficients for that merge region, the filter coefficients for the previous merge region of that merge region, and the filter coefficients for the subsequent merge region of that merge region.
It should be noted that, for 16 regions obtained in the fixed region division manner, after the region merging, the 16 merged regions (assuming that the merged regions 0 to 15 are sequentially merged), the later merged region of the merged region 15 may be the merged region 0, and the former merged region of the merged region 0 may be the merged region 15.
For any LCU, the encoding end device may determine the filter coefficient used by the LCU based on the RDO decision. When the LCU is determined to use the filter coefficient of the previous merging region of the merging region to which it belongs, the value of the coefficient selection identifier of the LCU may be set to a first value, such as 0; when the LCU uses the filter coefficient of the merging region to which it belongs, the value may be set to a second value, such as 1; when the LCU uses the filter coefficient of the later merging region of the merging region to which it belongs, the value may be set to a third value, such as 3.
Example ten
The embodiment of the application provides a filtering method which can be applied to encoding end equipment and comprises the following steps:
t400, for any merge region of the current image frame, determining the filter shape and filter coefficients used by the merge region based on RDO decisions.
And T410, writing the filter shape and the filter coefficients used by each merging region into the code stream.
In the embodiment of the application, in order to improve the flexibility of the filter coefficient, optimize the ALF filtering effect and improve the coding and decoding performance, each merging region is not limited to use the same filter shape any more, but can selectively use different filter shapes, namely, the filter shapes used by different merging regions can be the same or different.
For example, for any merge region, the encoding end device may train multiple sets of filter coefficients for different filter shapes, determine the filter shape and filter coefficients used by the merge region based on RDO decisions, and write the filter shape and filter coefficients into the code stream for transmission to the decoding end device.
For example, the processing flow of the decoding side device may be referred to the related description in the above embodiments, and the embodiments of the present application are not described herein.
It should be noted that, in the embodiment of the present application, the filter shape may also be selected for the image frame, or for a component (such as a luminance component and/or a chrominance component) of the image frame. For example, when image frame A selects a centrally symmetric filter shape of a 7×7 cross plus a 5×5 square, each LCU in image frame A that starts ALF filtering uses that centrally symmetric filter shape.
Example eleven
The embodiment of the application provides a filtering method which can be applied to encoding end equipment and comprises the following steps:
T500, for any merging region of the current image frame, determining a filter coefficient used by the merging region and a weight coefficient of each corresponding reference pixel position based on RDO decision;
And T510, writing the filter coefficients used by each merging region and the weight coefficients of the corresponding reference pixel positions into the code stream.
In the embodiment of the application, in order to optimize the ALF filtering effect and improve the coding and decoding performance, the filter used in the ALF filtering is not limited to a symmetrical filter, but an asymmetrical filter can be adopted, i.e. the filtering coefficients with symmetrical positions can be different and meet a certain proportional relationship, such as 0.5:1.5 or 0.6:1.4.
For example, taking the filter shown in fig. 16 as an example, in the filter shown in fig. 16, the filter coefficients that are centrosymmetric with respect to the C14 position are not limited to be the same, but may satisfy a certain proportional relationship, for example, C 1:C27=0.5:1.5,C6:C22 =0.6:1.4, and so on.
When ALF filtering is performed based on the determined filter coefficients, the filtered pixel value is obtained from a sum of products in which, for any non-center position, the filter coefficient is combined with the reference pixel at its symmetric position. The proportion may therefore be either a proportion between the filter coefficients at symmetric positions, or a proportion (which may also be called a weight proportion) between the weights applied to the pixel values of the reference pixels at symmetric positions when they participate in the ALF filtering calculation. That is, an asymmetric filter means either that the filter coefficients at symmetric positions differ, or that the weights applied to the pixel values of the reference pixels at symmetric positions differ.
For example, as shown in fig. 16, consider the filter coefficients C_i of the centrally symmetric filter shape of a 7×7 cross plus a 5×5 square, where the coefficient at the position symmetric to C_i is C_{28-i}. Then C_i:C_{28-i} = A_i:(2-A_i); equivalently, the ratio of the weights of P_i and P_{28-i} when participating in the ALF filtering calculation is A_i:(2-A_i), where P_i is the pixel value of the reference pixel position corresponding to C_i and P_{28-i} is the pixel value of the reference pixel position corresponding to C_{28-i}. For any pixel of the LCU, the filtered pixel value can be determined by:

P' = Σ_{i=0}^{13} C_i·(A_i·P_i + (2−A_i)·P_{28−i}) + C_14·P_14

wherein C_i is the (i+1)th filter coefficient of the merging region to which the LCU belongs, P_i is the pixel value of the reference pixel position corresponding to the filter coefficient C_i, the reference pixel positions corresponding to P_{28−i} and P_i are centrally symmetric about the position of the current filtered pixel, A_i is the weight coefficient of the pixel value at the reference pixel position corresponding to P_i, P_14 is the pixel value of the current filtered pixel, C_14 is the filter coefficient of the current filtered pixel, and 0 < A_i < 2.
For the filter shown in fig. 16, assuming C_i:C_{28-i} = A_i:(2-A_i), then even when the filter coefficients at symmetric positions (symmetric about the center C_14) differ, only the 15 coefficients C_0~C_14 need to be trained; the remaining coefficients can be determined from the weight coefficients.
For example, assuming a 1 =0.5, then C 1:C27 =0.5: 1.5; assuming a 0 =0.6, C 0:C28 =0.6:1.4.
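Since only the ratio is constrained, the coefficient at the mirrored position follows directly from the trained one. A minimal sketch (illustrative only, not normative):

```python
# Since C_i : C_{28-i} = A_i : (2 - A_i), the mirrored coefficient follows
# from the trained one, so only C_0..C_14 need transmitting (sketch only).
def mirrored_coeff(c_i, a_i):
    return c_i * (2 - a_i) / a_i  # C_{28-i} = C_i * (2 - A_i) / A_i

assert mirrored_coeff(0.5, 0.5) == 1.5              # ratio 0.5 : 1.5
assert abs(mirrored_coeff(0.6, 0.6) - 1.4) < 1e-12  # ratio 0.6 : 1.4
```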
In the embodiment of the application, for any merging region, the encoding end device can determine the filtering coefficient and the filtering performance of the merging region under the condition that each position corresponds to different weight coefficients. And selecting a group of filter coefficients with the best filtering performance, recording the filter coefficients and the corresponding weight coefficients at each position of the corresponding filter, writing the filter coefficients into a code stream, and sending the code stream to decoding end equipment.
For example, a set of weight coefficients (such as the value set of a i) may be pre-constructed, and each weight coefficient is selected from the set to obtain a filter coefficient with the best filtering performance and a corresponding weight coefficient at each position of the corresponding filter, and an index of the weight coefficient in the set of weight coefficients is written into a code stream and sent to the decoding end device.
For example, the processing flow of the decoding side device may be referred to the related description in the above embodiments, and the embodiments of the present application are not described herein.
In order to enable those skilled in the art to better understand the technical solutions provided by the embodiments of the present application, the technical solutions provided by the embodiments of the present application are described below with reference to specific examples.
Aiming at the defects of the traditional ALF technology, the embodiment of the application provides the following optimization scheme:
Scheme 1, each frame, with the LCU as the minimum unit, is adaptively divided into multiple regions, where each region may include more than one LCU; it is therefore proposed to classify each LCU and divide the LCUs of the same region into N classes, where N is a positive integer.
Scheme 2, multiple sets of filter coefficients can be transferred in each region, and the shape of each set of filter can be the same or different.
Scheme 3, based on each LCU self-adaptive selection of a set of filter coefficients, the LCUs of the same region can select the filter coefficients of adjacent regions.
Scheme 4, each region can only pass one set of filter coefficients, but the filter shape of each region may not be the same.
Scheme 5, modify symmetrical filter into asymmetric filter, filter coefficient the same with symmetrical position, optimize to filter coefficient meet certain proportional relation on the symmetrical position, for example 0.5:1.5 or 0.6:1.4, etc.
And 6, optimizing the value of the sample of the boundary during filtering.
The main improvement points of the present application will be described from the encoding side and decoding side, respectively.
1. Coding method and coding terminal equipment
For the encoding end device, the ALF switch sequence header may be obtained to determine whether the current sequence needs to enable the ALF technique. If the ALF switch sequence header is off, the ALF technique is turned off, the ALF optimization technique (that is, the optimization of the ALF filtering scheme provided by the embodiments of the present application over the conventional ALF technique, which may include any one or more of schemes 1 to 6) is also turned off, and the ALF switch sequence header is transmitted to the decoding end device. If the ALF switch sequence header is on, ALF encoding is entered, and the ALF technique optimization sequence header can be obtained.
If the ALF technique optimization sequence header is off, the original ALF technique is used for filtering; the ALF switch sequence header and the optimization technique sequence header are transmitted, and the parameters required by the ALF technique are transmitted to the decoding end device.
If the ALF technique optimization sequence header is on, the following schemes may be used for optimization; the ALF switch sequence header and the optimization technique sequence header are transmitted, and the parameters required by the optimized ALF technique are transmitted to the decoding end device.
For example, the optimization technique sequence header may not exist; in this case, if the ALF switch sequence header is on, it is determined that the optimized ALF technique scheme is used.
1.1 Adaptive region division with LCU as minimum unit
Fixed region division is performed on the luminance component to obtain a plurality of regions; LCUs belonging to the same region are subdivided (i.e., the LCUs within the same region are classified), and each region is subdivided into at most N1 classes, where N1 is a positive integer.
For example, if N1 = 2, the total number of regions is at most 2 times the original number; if N1 = 1, the original fixed-region division scheme is adopted.
The encoding end device may mark the LCU division result in each region and send it to the decoding end device (i.e., send the region class identifier of each LCU to the decoding end device).
1.2 Multiple sets of filter coefficient schemes can be passed in each region
For any merge region, at most n sets of filter coefficients may be passed, and each LCU in the merge region is identified: 0 indicates off, i.e., ALF filtering is not enabled for the LCU; a value of i indicates that the current LCU uses a certain set of filter coefficients in the region, where i ranges over [1, n].
Illustratively, each set of filter coefficients is obtained as follows:
In the first training of the filter coefficients, all LCUs enable ALF filtering by default. After the CTU decision, the closed LCUs (i.e., LCUs that do not enable ALF filtering) do not participate in the second training of the filter coefficients, while LCUs with the same label jointly train the same set of filter coefficients.
Similarly, the third training of the filter coefficients is performed based on the result of the second CTU decision.
Finally, it is determined at the frame level or region level that the image frame or merge region uses at most n sets of filter coefficients, and the filter coefficients corresponding to each merge region are written into the code stream.
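The training rounds interleaved with CTU decisions can be sketched with a toy model in which each LCU is reduced to a single (reconstructed, original) sample pair and "training" fits one gain by least squares; the scalar model and all names are illustrative assumptions, not the actual Wiener-filter training:

```python
def train_rounds(lcus, rounds=3):
    """Toy sketch of the iterative loop: round 1 trains on all LCUs
    (ALF on by default); each later round re-runs the CTU on/off
    decision and retrains only on the LCUs left enabled.
    lcus: list of (reconstructed, original) value pairs, one per LCU."""
    enabled = [True] * len(lcus)
    gain = 1.0
    for _ in range(rounds):
        # "train": least-squares fit of a single gain on the enabled LCUs
        num = sum(r * o for (r, o), e in zip(lcus, enabled) if e)
        den = sum(r * r for (r, o), e in zip(lcus, enabled) if e)
        gain = num / den if den else 1.0
        # CTU decision: keep ALF on only where filtering reduces the error
        enabled = [abs(r * gain - o) < abs(r - o) for (r, o) in lcus]
    return gain, enabled
```

In this toy, an LCU that filtering cannot help (here the third pair) is switched off after the first decision and excluded from the later rounds, mirroring the flow above.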
1.3, Adaptive selection of a set of Filter coefficients for each LCU
For the luminance component, for any LCU, either the filter coefficients of the region where the current LCU is located or the filter coefficients of other regions can be selected in the CTU decision. The maximum number of filters selectable by each LCU is N2 (i.e., N2 sets of filter coefficients, N2 ≥ 2). An RDO decision is performed under the N2 sets of filter coefficients, the set of filter coefficients with the best performance is selected, and the selection result of the current LCU is sent to the decoding end device (i.e., the coefficient selection identifier is sent to the decoding end device).
Illustratively, N2 is less than or equal to the number of merge regions.
1.4 Selecting different shaped filters for each region
Each region conveys one set of filter coefficients, so that, for any LCU, the encoding end device may signal to the decoding end device via a flag whether ALF filtering is enabled or disabled for that LCU.
When calculating the filter coefficients of each region, for any merge region, the filter coefficients under N3 (N3 ≥ 2) different filter shapes can be calculated, the filtering performance of the region under the N3 filter shapes can be evaluated, the filter shape with the best performance is selected, and the best filter shape and filter coefficients of each region are notified to the decoding end device through the code stream.
It should be noted that the encoding end device may select filters of different shapes at the frame level, or select filters of different shapes for the Y, U and V components.
Taking frame-level selection of filter shapes as an example, for any image frame, the filter coefficients of each region under N4 (N4 ≥ 2) different filter shapes can be calculated, the filtering performance of the image frame under the N4 filter shapes can be evaluated, the filter shape with the best performance is selected, and the best filter shape of the image frame together with the filter coefficients of each region is notified to the decoding end device through the code stream.
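The per-shape selection amounts to minimizing the rate-distortion cost J = D + λ·R over the candidate shapes; a minimal sketch with hypothetical shape names and cost values:

```python
def rd_cost(distortion, rate, lam):
    """Rate-distortion cost J = D + lambda * R."""
    return distortion + lam * rate

def choose_filter_shape(candidates, lam):
    """Pick the filter shape with the lowest RD cost.
    candidates: {shape_name: (distortion, rate_in_bits)}."""
    costs = {s: rd_cost(d, r, lam) for s, (d, r) in candidates.items()}
    return min(costs, key=costs.get)
```

Note that the winning shape can flip with λ: a shape that costs more bits but lowers distortion wins at small λ and loses at large λ.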
1.5 Modifying the symmetric Filter to an asymmetric Filter
For a symmetric filter with N5 (N5 ≥ 2) coefficients, only ⌊N5/2⌋+1 filter coefficients need to be trained.
Illustratively, ⌊x⌋ denotes rounding x down, i.e., when x is not an integer, the fractional part of x is discarded; e.g., ⌊29/2⌋ = 14.
When the symmetric filter is modified into the asymmetric filter and the filter coefficients at the symmetric positions satisfy different proportional relations, only ⌊N5/2⌋+1 filter coefficients still need to be trained.
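The coefficient count ⌊N5/2⌋+1 can be checked against the two shapes used in this document: the 29-tap shape of fig. 4A gives 15 trained coefficients, and a 17-tap shape gives 9. A one-line sketch (the function name is an assumption):

```python
def trainable_coeff_count(n5):
    """Coefficients to train for an N5-tap filter whose symmetric
    positions are tied (equal or proportional): floor(N5/2) + 1."""
    return n5 // 2 + 1
```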
Illustratively, taking the filter shape shown in fig. 4A as an example, Ci and C28-i are symmetric positions. The proportional relation of the filter coefficients at each pair of symmetric positions can be selected through the RDO decision, and the filter coefficient at each position together with the ratio at the symmetric positions is sent to the decoding end device through the code stream.
Illustratively, when the ratio of the filter coefficients at all symmetric positions is 1:1, the trained filter is still a symmetric filter.
2. Decoding method and decoding end equipment
For the decoding end device, the ALF switch sequence header may be read from the code stream to determine whether the current sequence needs to enable the ALF technique. If the ALF switch sequence header is off, the ALF technique is turned off. If the ALF switch sequence header is on, the ALF technique optimization sequence header can be read next.
If the ALF technology optimization sequence head is closed, acquiring the filtering parameters required by the original ALF technology;
And if the ALF technology optimization sequence header is opened, reading the filtering parameters required by the optimized ALF technology.
For example, the ALF optimization technique sequence header may not exist, and if the ALF switching sequence header is on, the filtering parameters required by the optimized ALF technique are read.
2.1 Adaptive region division with LCU as minimum unit
Fixed region division is performed on the luminance component to obtain a plurality of regions. The filter coefficients of all regions are read from the code stream, and the region class identifiers of all LCUs that enable ALF filtering are also read from the code stream. The region class of each LCU is determined according to the fixed region division result and the region class identifier of the LCU, the corresponding filter coefficients are obtained according to the region class of the LCU, and ALF filtering is performed on the pixels of the LCU one by one.
2.2 Multiple sets of Filter coefficient schemes can be passed in each region
The frame-level or region-level coefficient identifiers are read from the code stream, the multiple sets of filter coefficients in each merge region are obtained from the code stream according to these identifiers, and the number of selectable filters (i.e., the number of sets of filter coefficients) is determined by the frame-level or region-level coefficient identifier.
The LCU coefficient identifier of each LCU is obtained from the code stream, and a set of filter coefficients is selected according to the LCU coefficient identifier of each LCU. When filtering the current LCU, ALF filtering is performed on its pixels one by one using that set of filter coefficients.
2.3 Adaptive selection of a set of Filter coefficients for each LCU
The filter coefficients in each region are read from the code stream; for each LCU whose ALF filtering flag bit indicates that ALF filtering is enabled (i.e., the flag indicating whether ALF filtering is or is not enabled for the LCU), the coefficient selection identifier of the LCU is read from the code stream.
Illustratively, for any LCU, the number of selectable filters is at most N2 (i.e., at most N2 sets of candidate filter coefficients).
2.4 Selecting different shaped filters for each region
For any merge region, the filter shape of the merge region can be read from the code stream, and the filter coefficients of the merge region can be read according to the filter shape. ALF filtering is performed on the pixels of each LCU in the merge region one by one according to the filter shape and the filter coefficients.
2.5 Modifying the symmetric Filter to an asymmetric Filter
The filter coefficients of each merge region are read from the code stream, together with the scale coefficients of the filter coefficients at the symmetric positions.
According to the filter coefficient at each position and the scale coefficients at the symmetric positions, the filter coefficients at all positions of the merge region are derived, and ALF filtering is performed on the pixels of each LCU in the merge region one by one.
The sample value optimization scheme of the filtering boundary is described below.
Embodiment twelve: sample value optimization scheme 1 of the filtering boundary
Assume the filter shape is as shown in fig. 4A. If a sample used in the adaptive correction filtering process (i.e., a reference pixel used when filtering any pixel in the adaptive correction filtering unit) lies within the adaptive correction filtering unit, that sample is used for filtering; if it does not lie within the adaptive correction filtering unit, filtering is performed as follows:
12.1, if the sample is outside the image boundary, or outside the slice boundary and filtering across the slice boundary is not allowed:
12.1.1, if the sample corresponds to position C1, C2, C6, C4, C5 or C10 of the filter shape, the sample closest to it in the boundary region of the adaptive correction filtering unit is used in its place for filtering;
12.1.2, otherwise, i.e., the sample does not correspond to position C1, C2, C6, C4, C5 or C10 of the filter shape, the sample closest to it in the adaptive correction filtering unit is used in its place for filtering;
12.2, otherwise, if the sample is outside the upper or lower boundary of the adaptive correction filtering unit:
12.2.1, if the sample corresponds to position C1, C2, C6, C4, C5 or C10 of the filter shape, the sample closest to it in the boundary region of the adaptive correction filtering unit is used in its place for filtering;
12.2.2, otherwise, i.e., the sample does not correspond to position C1, C2, C6, C4, C5 or C10 of the filter shape, the sample closest to it in the adaptive correction filtering unit is used in its place for filtering;
12.3, otherwise, i.e., the sample meets neither the condition in 12.1 nor the condition in 12.2, the sample itself is used for filtering.
Embodiment thirteen: sample value optimization scheme 2 of the filtering boundary
Assume the filter shape is as shown in fig. 4A. If a sample used in the adaptive correction filtering process (i.e., a reference pixel used when filtering any pixel in the adaptive correction filtering unit) lies within the adaptive correction filtering unit, that sample is used for filtering; if it does not lie within the adaptive correction filtering unit, filtering is performed as follows:
13.1, if the sample is outside the image boundary, or outside the slice boundary and filtering across the slice boundary is not allowed, the sample closest to it in the boundary region of the adaptive correction filtering unit is used in its place for filtering;
13.2, if the sample is outside the upper or lower boundary of the adaptive correction filtering unit, the sample closest to it in the boundary region of the adaptive correction filtering unit is used in its place for filtering;
13.3, otherwise, i.e., the sample meets neither the condition in 13.1 nor the condition in 13.2, the sample itself is used for filtering.
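Embodiment thirteen reduces to clamping any out-of-unit reference position to the nearest position inside the adaptive correction filtering unit; a minimal sketch, where the inclusive rectangle convention and the function name are assumptions:

```python
def clamp_reference(x, y, unit):
    """Replace an out-of-unit reference sample position (x, y) by the
    nearest position inside the adaptive correction filtering unit.
    unit = (x0, y0, x1, y1), inclusive bounds."""
    x0, y0, x1, y1 = unit
    return min(max(x, x0), x1), min(max(y, y0), y1)
```

A position already inside the unit is returned unchanged, matching case 13.3.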
Embodiment fourteen: sample value optimization scheme 3 of the filtering boundary
Assume the filter shape is as shown in fig. 4A. If a sample used in the adaptive correction filtering process (i.e., a reference pixel used when filtering any pixel in the adaptive correction filtering unit) lies within the adaptive correction filtering unit, that sample is used for filtering; if it does not lie within the adaptive correction filtering unit, filtering is performed as follows:
14.1, if the sample is outside the image boundary, or outside the slice boundary and filtering across the slice boundary is not allowed:
14.1.1, if EalfEnableFlag is equal to 1, the sample closest to it in the boundary region of the adaptive correction filtering unit is used in its place for filtering;
14.1.2, otherwise, i.e., EalfEnableFlag is equal to 0, the sample closest to it in the adaptive correction filtering unit is used in its place for filtering;
14.2, otherwise, if the sample is outside the upper or lower boundary of the adaptive correction filtering unit:
14.2.1, if EalfEnableFlag is equal to 1, the sample closest to it in the boundary region of the adaptive correction filtering unit is used in its place for filtering;
14.2.2, otherwise, i.e., EalfEnableFlag is equal to 0, the sample closest to it in the adaptive correction filtering unit is used in its place for filtering;
14.3, otherwise, i.e., the sample meets neither the condition in 14.1 nor the condition in 14.2, the sample itself is used for filtering.
It should be noted that EalfEnableFlag is a flag whose value may be '1' or '0'. When EalfEnableFlag is equal to 1, it indicates that enhanced adaptive correction filtering may be used; when EalfEnableFlag is equal to 0, it indicates that enhanced adaptive correction filtering should not be used.
Illustratively, the value of EalfEnableFlag may be derived by the decoding end; it may be obtained from the code stream or may be a constant value.
Illustratively, the value of EalfEnableFlag may be equal to the value of ealf_enable_flag (i.e., the enhanced adaptive correction filtering enable flag): when EalfEnableFlag is equal to 1, enhanced adaptive correction filtering may be used; when EalfEnableFlag is equal to 0, enhanced adaptive correction filtering should not be used.
The ALF filter optimization scheme is described in detail below.
Embodiment fifteen: adaptive region division with the LCU as the minimum unit
Encoding end device
Taking the fixed region division manner shown in fig. 2 as an example, for the luminance component of the image frame, fixed region division may be performed in the manner described in the "region division" section above. The luminance component is divided into 16 regions, each labeled K, K ∈ [0, 15], and inside each region there are one or more LCUs.
LCUs belonging to the same region are subdivided.
For example, the LCUs within a region may be divided using an LCU merging manner: the cost of merging every two LCUs is calculated, the two LCUs with the minimum merging cost are merged, and so on. During merging, only the costs of class counts in [1, N6] are calculated, and the division with the minimum cost is selected. The final selection of each LCU is marked.
Taking at most two classes as an example, i.e., N6 = 2, each LCU is marked 1 or 0; that is, the region class identifier of the LCU takes the value 1 or 0.
For example, for an LCU marked 0 (i.e., the value of the region class identifier is 0), the region class is 2K; for an LCU marked 1 (i.e., the value of the region class identifier is 1), the region class is 2K+1.
Illustratively, the luminance component is thus divided into at most 32 regions. After the region division, region merging is performed on the 32 regions, and the filter coefficients of each merged region are calculated. The LCU region division result (i.e., the region class identifier of each LCU) and the filter coefficients obtained after the region division are transmitted to the decoding end device through the code stream.
Decoding end device
For the luminance component of the image frame, the fixed region division may be performed in the manner described in the "region division" section above. The luminance component is divided into 16 regions, and a schematic diagram of the division result thereof can be shown in fig. 2.
For any LCU of region K, the region class identifier of the LCU can be obtained: an LCU whose region class identifier has the value 0 belongs to region class 2K; an LCU whose region class identifier has the value 1 belongs to region class 2K+1.
The filter coefficients of each LCU are determined based on the region class of the LCU and the filter coefficients parsed from the code stream, and ALF filtering is performed on the pixels of each LCU one by one based on its filter coefficients.
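With N1 = 2, the mapping from the fixed region index K and the decoded region class identifier to the final region class can be sketched as follows (the function name is an assumption):

```python
def lcu_region_class(k, class_id):
    """Final region class of an LCU in fixed region K (K in [0, 15])
    with region class identifier 0 or 1: 2K for 0, 2K+1 for 1."""
    return 2 * k + class_id
```

The 16 fixed regions thus expand to at most 32 region classes, matching the limit stated above.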
Embodiment sixteen: optimization scheme in which 2 sets of filter coefficients can be transmitted in each region
Encoding end device
Each LCU is identified: 0 indicates that ALF filtering is off, 1 indicates that the first set of filter coefficients is used, and 2 indicates that the second set is used; that is, the LCU coefficient identifier of the LCU takes a value of 0, 1 or 2, where the first value is 0 and the non-first values are 1 or 2.
Each region is given a coefficient identifier: 0 indicates that only the first set of filter coefficients is used, 1 indicates that only the second set is used, and 2 indicates that both sets are used.
Illustratively, the filter shape of the first set of filter coefficients may be as shown in fig. 4A, and the filter shape of the second set may be as shown in fig. 12 (a 7×7 cross plus 3×3 square centrally symmetric filter shape).
In the filter coefficient training process, when the filter coefficients are calculated for the first time, all pixels of the LCUs in the current region are used. After the CTU decision, when training the first set of coefficients, only the LCUs with value 1 participate in the training; when training the second set of coefficients, only the LCUs with value 2 participate. Finally, whether the current region performs better using only one set of filters or both sets is determined through the RDO decision.
If using only the first set of filter coefficients performs best, the value of the region coefficient identifier is determined to be 0 and written into the code stream, the first set of filter coefficients is written into the code stream, and the LCU coefficient identifiers of all LCUs are written into the code stream, where the value of each LCU coefficient identifier is 0 or 1. If using only the second set performs best, the value of the region coefficient identifier is determined to be 1 and written into the code stream, the second set of filter coefficients is written into the code stream, and the LCU coefficient identifiers of all LCUs are written into the code stream, where the value of each LCU coefficient identifier is 0 or 1. If using both sets performs best, the value of the region coefficient identifier is determined to be 2 and written into the code stream, both sets of filter coefficients are written into the code stream, and the LCU coefficient identifiers of all LCUs are written into the code stream, where the value of each LCU coefficient identifier is 0, 1 or 2.
Decoding end device
For any merge region, the region coefficient identifier of the merge region is read from the code stream. If its value is 0, the merge region obtains 15 filter coefficients (i.e., the first set); if its value is 1, the merge region obtains 9 filter coefficients (i.e., the second set); if its value is 2, the merge region obtains both the 9 filter coefficients and the 15 filter coefficients.
For example, if the value of the region coefficient identifier is 0, the LCU coefficient identifiers of all LCUs in the merge region are obtained. An LCU coefficient identifier of 0 indicates that ALF filtering is off for the LCU, i.e., ALF filtering is not enabled for it; a value of 1 indicates that ALF filtering is on and the LCU uses the first set of filter coefficients.
If the value of the region coefficient identifier is 1, the LCU coefficient identifiers of all LCUs in the merge region are obtained. A value of 0 indicates that ALF filtering is off for the LCU; a value of 1 indicates that ALF filtering is on and the LCU uses the second set of filter coefficients.
If the value of the region coefficient identifier is 2, the LCU coefficient identifiers of all LCUs in the merge region are obtained. A value of 0 indicates that ALF filtering is off for the LCU; a value of 1 indicates that ALF filtering is on and the LCU uses the first set of filter coefficients; a value of 2 indicates that ALF filtering is on and the LCU uses the second set of filter coefficients.
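The decoder-side resolution of the region coefficient identifier and the LCU coefficient identifier into a per-LCU filter choice can be sketched as follows; the function name and the use of None for "ALF off" are assumptions:

```python
def lcu_filter_choice(region_id, lcu_id):
    """Which set of filter coefficients an LCU uses, given the region
    coefficient identifier (0: first set only, 1: second set only,
    2: both) and the LCU coefficient identifier (0: ALF off, else a
    set index). Returns None when ALF is off, otherwise 1 or 2."""
    if lcu_id == 0:
        return None       # ALF filtering is not enabled for this LCU
    if region_id == 0:
        return 1          # only the first set (15 coefficients) exists
    if region_id == 1:
        return 2          # only the second set (9 coefficients) exists
    return lcu_id         # region carries both sets; lcu_id is 1 or 2
```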
Embodiment seventeen: adaptive selection of a set of filter coefficients for each LCU
Encoding end device
Take a maximum of 3 sets of candidate filter coefficients for each LCU as an example. That is, for any LCU, the candidate filter coefficients of the LCU include the filter coefficients of the merge region to which the LCU belongs (referred to as the filter coefficients of the current merge region), the filter coefficients of the previous merge region, and the filter coefficients of the next merge region.
For any LCU, the performance under the 3 sets of filter coefficients may be calculated separately (the RDO decision), and the set of filter coefficients with the best performance is selected.
For example, if the best-performing set is the filter coefficients of the previous merge region, the value of the coefficient selection identifier of the LCU is 0 (i.e., taking the first value as 0 as an example); if it is the filter coefficients of the current merge region, the value is 1 (taking the second value as 1 as an example); if it is the filter coefficients of the next merge region, the value is 2 (taking the third value as 2 as an example).
For example, assume that the current image frame merge region is as shown in fig. 13, i.e., the number of merge regions is 16; when the CTU decision is made, if the LCU currently processed belongs to the merge area 2, the candidate filter coefficients of the LCU may include the filter coefficient of the merge area 1, the filter coefficient of the merge area 2, and the filter coefficient of the merge area 3, and the filter coefficient with the optimal performance may be determined based on the RDO decision, and the LCU may be marked based on the decision result.
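The coefficient selection identifier maps to a merge-region index as follows; clamping at the first and last merge regions is an assumption, since the source does not spell out the boundary handling:

```python
def select_lcu_filter_region(region_idx, select_id, num_regions):
    """Merge region whose filter coefficients the LCU uses:
    select_id 0 -> previous region, 1 -> current region, 2 -> next."""
    offset = {0: -1, 1: 0, 2: 1}[select_id]
    return min(max(region_idx + offset, 0), num_regions - 1)
```

For the fig. 13 example with 16 merge regions, an LCU of merge region 2 can thus reach the coefficients of regions 1, 2 or 3.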
It should be noted that, since each chrominance component (U component or V component) has only one set of filter coefficients, its LCUs may not participate in the selection of filter coefficients; alternatively, the LCUs of the two chrominance components may select the filter coefficients of the other component, i.e., an LCU of the U component may select the filter coefficients of the V component, and an LCU of the V component may select the filter coefficients of the U component.
Decoding end device
The filter coefficients of the merge region to which the current LCU belongs (i.e., the filter coefficients of the current region) and the filter coefficients of the two adjacent merge regions (i.e., of the previous and next merge regions) are obtained, and the filter coefficients used by the current LCU are selected according to the coefficient selection identifier of the LCU.
It should be noted that, since each of the two chrominance components has only one set of filter coefficients, its LCUs may not participate in the selection of filter coefficients; alternatively, when the ALF filter switch is on for both chrominance components, the LCUs on the two chrominance components may select the filter coefficients of the other component, i.e., an LCU of the U component may select the filter coefficients of the V component, and an LCU of the V component may select the filter coefficients of the U component.
Embodiment eighteen: selecting filters of different shapes for each region
Encoding end device
The filter shape may be exemplified by the filter shape shown in fig. 4 or 12, two or more of the 4 filter shapes shown in fig. 14, or other filter shapes than the filter shapes shown in fig. 4, 9, and 10.
For any merge region, N3 sets of filter coefficients can be trained, each set with a different filter shape. For each region, the performance of the merge region under each filter shape is calculated, the set of filter coefficients with the best performance is selected, and the corresponding filter shape and the best-performing filter coefficients of the merge region are sent to the decoding end device through the code stream.
Decoding end device
For any LCU, when determining to start ALF filtering on the LCU, acquiring a filter shape of the merging region based on the merging region to which the LCU belongs, and acquiring a filter coefficient of the merging region based on the filter shape.
When the filter coefficients of the current LCU are determined, ALF filtering may be performed on pixels of the current LCU one by one based on the filter coefficients of the current LCU.
Embodiment nineteen: modifying the symmetric filter into an asymmetric filter
Encoding end device
Taking the filter shape shown in fig. 4A as an example, the weight coefficient Ai of the pixel value of the reference pixel at position Ci (i = 0, 1, …, 13) can be determined, where the reference pixel value corresponding to the symmetric position C28-i has weight 2-Ai.
Illustratively, Ai ∈ {0.5, 0.6, 0.8, 1.2, 1.4, 1.5, 1}.
As shown in fig. 4A, there are 14 pairs of symmetric positions in total; the filter coefficients and the filtering performance of the region can be calculated with each position taking different weight coefficients. The group of filter coefficients with the best filtering performance is selected, and these filter coefficients and the corresponding weight coefficient at each position of the corresponding filter are recorded.
For example, for any weight coefficient, a label (or index) identifying its position in the weight coefficient set may be passed to the decoding end device.
Decoding end device
And acquiring a filter coefficient of each region and a weight coefficient corresponding to each filter coefficient.
And for any merging region, acquiring the filter coefficient and the weight coefficient corresponding to each filter coefficient so as to derive the filter coefficient at the corresponding position.
For any pixel of the LCU, the filtered pixel value of the pixel may be determined by:
filtered value = Σ (i = 0 to 13) Ci × (Ai × Pi + (2 − Ai) × P28−i) + C14 × P14
where Ci is the (i+1)-th filter coefficient among the filter coefficients of the merge region to which the LCU belongs, Pi is the pixel value at the reference pixel position corresponding to the filter coefficient Ci, the reference pixel position corresponding to P28−i and the reference pixel position corresponding to Pi are centrally symmetric about the position of the current filtered pixel, Ai is the weight coefficient of the pixel value at the reference pixel position corresponding to Pi (0 < Ai < 2), P14 is the pixel value of the current filtered pixel, and C14 is the filter coefficient of the current filtered pixel.
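The weighted combination of each symmetric pair can be evaluated in floating point as a sketch; a real codec would use integer coefficients with rounding and shifting, which this toy omits, and the function name is an assumption:

```python
def alf_filter_pixel(coeffs, weights, refs):
    """Asymmetric ALF filtering of one pixel for the 29-tap shape:
    coeffs:  C0~C14, the 15 trained coefficients
    weights: A0~A13, one weight per symmetric pair (0 < Ai < 2)
    refs:    P0~P28, reference pixel values; P14 is the current pixel
    result = sum_i Ci*(Ai*Pi + (2-Ai)*P(28-i)) + C14*P14"""
    acc = coeffs[14] * refs[14]
    for i in range(14):
        acc += coeffs[i] * (weights[i] * refs[i]
                            + (2 - weights[i]) * refs[28 - i])
    return acc
```

With all Ai = 1 the expression reduces to the symmetric filter contribution Ci × (Pi + P28−i) per pair.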
Embodiment twenty, adaptive selection of a set of filter coefficients per LCU
In the process of filtering pixels, for any pixel to be filtered, the maximum and minimum pixel values are taken from the 3×3 pixel block centered on the current pixel to be filtered, excluding the current pixel itself; that is, the maximum and minimum of the pixel values of the 8 pixels other than the center position of that 3×3 block.
If the pixel value of the current pixel to be filtered is larger than the maximum value or smaller than the minimum value, the maximum value or the minimum value, respectively, replaces the pixel value of the current pixel when it participates in filtering: if the pixel value of the current pixel to be filtered is larger than the maximum value, it is replaced by the maximum value; if it is smaller than the minimum value, it is replaced by the minimum value.
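This clamping step can be sketched as follows (a minimal Python sketch; the function name is illustrative):

```python
def clamp_to_neighbourhood(block3x3):
    """Clamp the centre pixel of a 3x3 block to the [min, max] range of
    its 8 neighbours before it participates in filtering."""
    centre = block3x3[1][1]
    neighbours = [block3x3[r][c]
                  for r in range(3) for c in range(3) if (r, c) != (1, 1)]
    lo, hi = min(neighbours), max(neighbours)
    # values inside [lo, hi] pass through unchanged
    return min(max(centre, lo), hi)
```

A centre value that already lies between the neighbourhood minimum and maximum is used as-is; only outliers are replaced.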
The adaptive correction filter decoding process is described in detail below.
Twenty-first embodiment, adaptive correction filter decoding process
1. Adaptive correction filtering parameter definition:
Parameter 1, adaptive correction filtering enable flag (alf_enable_flag): a binary variable. A value of '1' indicates that adaptive correction filtering may be used; a value of '0' indicates that adaptive correction filtering should not be used.
Illustratively, alfEnableFlag is equal to the value of alf_enable_flag.
It should be noted that the value of alf_enable_flag may be obtained from the sequence header; that is, before the whole sequence is compressed, it is determined whether the ALF technique is turned on for the whole video sequence. A value of '1' turns the ALF technique on for the whole video sequence, and a value of '0' turns it off.
Parameter 2, enhanced adaptive correction filtering enable flag (ealf_enable_flag): a binary variable. A value of '1' indicates that enhanced adaptive correction filtering may be used; a value of '0' indicates that enhanced adaptive correction filtering should not be used.
Illustratively, EalfEnableFlag has a value equal to the value of ealf_enable_flag, the syntax of which is described as follows:
if(AlfEnableFlag){
ealf_enable_flag u(1)
}
When the value of AlfEnableFlag is 1, the enhanced adaptive correction filtering enable flag, which is a sequence-header flag, is read from the code stream.
Parameter 3, picture-level adaptive correction filtering enable flag (picture_alf_enable_flag[compIdx]): a binary variable. A value of '1' indicates that the compIdx-th component of the current image may use adaptive correction filtering; a value of '0' indicates that the compIdx-th component of the current image should not use adaptive correction filtering.
Illustratively, pictureAlfEnableFlag [ compIdx ] has a value equal to the value of picture_alf_enable_flag [ compIdx ], the syntax of which is described as follows:
if(AlfEnableFlag){
for(compIdx=0;compIdx<3;compIdx++){
picture_alf_enable_flag[compIdx] u(1)
}
}
When the value of AlfEnableFlag is 1, the picture-level adaptive correction filtering enable flags of the Y, U and V components, which are picture-header flags, are read from the code stream.
Parameter 4, image luma component sample adaptive correction filter coefficients (alf_coeff_luma [ i ] [ j ]): alf_coeff_luma [ i ] [ j ] represents the jth coefficient of the ith adaptive correction filter of the luminance component.
Illustratively, the value of AlfCoeffLuma [ i ] [ j ] is equal to the value of alf_coeff_luma [ i ] [ j ].
For example, if EalfEnableFlag is 0, alfCoeffLuma [ i ] [ j ] (j=0 to 7) should have a value ranging from-64 to 63, and AlfCoeffLuma [ i ] [8] should have a value ranging from-1088 to 1071.
If EalfEnableFlag is 1, the value of AlfCoeffLuma [ i ] [ j ] (j=0 to 13) should be-64 to 63, and the value of AlfCoeffLuma [ i ] [14] should be-1088 to 1071.
Parameter 5, image chroma component adaptive correction filter coefficients (alf_coeff_chroma[0][j], alf_coeff_chroma[1][j]): alf_coeff_chroma[0][j] represents the jth coefficient of the adaptive correction filter of the Cb component, and alf_coeff_chroma[1][j] represents the jth coefficient of the adaptive correction filter of the Cr component.
Illustratively, alfCoeffChroma [0] [ j ] is equal to the value of alf_coeff_chroma [0] [ j ], and AlfCoeffChroma [1] [ j ] is equal to the value of alf_coeff_chroma [1] [ j ].
For example, if EalfEnableFlag has a value of 0, alfCoeffchroma [ i ] [ j ] (i=0-1, j=0-7) should have a value ranging from-64 to 63, and AlfCoeffchroma [ i ] [8] (i=0-1) should have a value ranging from-1088-1071.
If EalfEnableFlag is 1, alfCoeffchroma [ i ] [ j ] (i=0 to 1, j=0 to 13) should be-64 to 63, alfCoeffchroma [ i ] [14] (i=0 to 1) should be-1088 to 1071.
Parameter 6, distance between adjacent adaptive correction filter regions of the image luminance component (alf_region_distance[i]): alf_region_distance[i] represents the difference between the start index of the basic unit of the ith adaptive correction filter region of the luminance component and that of the (i-1)th adaptive correction filter region.
For example, alf_region_distance [ i ] should have a value ranging from 1 to FilterNum-1.
Illustratively, when alf_region_distance [ i ] is not present in the bitstream, if i is equal to 0, then the value of alf_region_distance [ i ] is 0; if i is not equal to 0 and the value of alf_filter_num_minus1 is FilterNum-1, then the value of alf_region_distance [ i ] is 1.
Illustratively, the sum of alf_region_distance [ i ] (i=0 to alf_filter_num_minus1) is less than or equal to FilterNum-1.
Parameter 7, largest coding unit adaptive correction filtering enable flag (alf_lcu_enable_flag[compIdx][LcuIndex]): a binary variable. A value of '1' indicates that samples of the compIdx component of the LcuIndex-th largest coding unit should use adaptive correction filtering; a value of '0' indicates that samples of the compIdx component of the LcuIndex-th largest coding unit should not use adaptive correction filtering.
Illustratively, AlfLCUEnableFlag[compIdx][LcuIndex] has a value equal to the value of alf_lcu_enable_flag[compIdx][LcuIndex], the syntax of which is described as follows:
It should be noted that, if any of the picture-level adaptive correction filtering enable flags of the three components has a value of 1, the adaptive correction filtering parameters are parsed. If EalfEnableFlag = 1, each filter has 15 coefficients and the luminance component is divided into 64 regions; otherwise each filter has 9 coefficients and the luminance component is divided into 16 regions. The filter coefficients of the three components are acquired respectively: for a chrominance component whose picture-level adaptive correction filtering enable flag is on, the filter coefficients alf_coeff_chroma of that component are acquired; for the luminance component, the region merging order flag alf_region_order_idx, the number of filters minus 1 (alf_filter_num_minus1), the region merging result alf_region_distance[i], and each group of filter coefficients alf_coeff_luma need to be acquired.
The syntax description of the adaptive correction filter parameters may be as follows:
2. Adaptive correction filter decoding process
If PictureAlfEnableFlag[compIdx] is 0, the value of the offset sample component is directly taken as the value of the corresponding reconstructed sample component; otherwise, adaptive correction filtering is performed on the corresponding offset sample component.
Illustratively, compIdx equal to 0 represents a luminance component, 1 represents a Cb component, and 2 represents a Cr component.
The adaptive correction filtering unit is derived from the largest coding unit, and the units are processed sequentially in raster scan order. Specifically, the adaptive correction filter coefficients of all components are first obtained according to the adaptive correction filter coefficient decoding process; then the adaptive correction filtering unit is derived, and the adaptive correction filter coefficient index of the luminance component of the current adaptive correction filtering unit is determined; finally, adaptive correction filtering is performed on the luminance and chrominance components of the adaptive correction filtering unit to obtain the reconstructed samples.
An adaptive correction filter coefficient decoding process:
21.1.1. If EalfEnableFlag is equal to 0, the ith group of filter coefficients AlfCoeffLuma[i][j] (i = 0 to alf_filter_num_minus1, j = 0 to 7) of the luminance samples is parsed from the bitstream. The coefficient AlfCoeffLuma[i][8] (i.e., C8 in the filter shown in fig. 12) is processed as follows:
AlfCoeffLuma[i][8] = AlfCoeffLuma[i][8] + 64 - 2 × (AlfCoeffLuma[i][0] + AlfCoeffLuma[i][1] + … + AlfCoeffLuma[i][7])
Wherein AlfCoeffLuma[i][j] (j = 0 to 7) has a bit width of 7 bits and a value range of -64 to 63; the value range of AlfCoeffLuma[i][8] after the processing is 0 to 127.
If EalfEnableFlag is equal to 1, the ith group of filter coefficients AlfCoeffLuma[i][j] (i = 0 to alf_filter_num_minus1, j = 0 to 13) of the luminance samples is parsed from the bitstream. The coefficient AlfCoeffLuma[i][14] (i.e., C14 in the filter shown in fig. 4A) is processed as follows:
AlfCoeffLuma[i][14] = AlfCoeffLuma[i][14] + 64 - 2 × (AlfCoeffLuma[i][0] + AlfCoeffLuma[i][1] + … + AlfCoeffLuma[i][13])
Wherein AlfCoeffLuma[i][j] (j = 0 to 13) has a bit width of 7 bits and a value range of -64 to 63; the value range of AlfCoeffLuma[i][14] after the processing is 0 to 127.
21.1.2. The index array of luminance component adaptive correction filter coefficients (denoted alfCoeffIdxTab[FilterNum]) is obtained from alf_region_distance[i] (i ≥ 1):
count=0
alfCoeffIdxTab[0]=0
for(i=1;i<alf_filter_num_minus1+1;i++){
for(j=0;j<alf_region_distance[i]-1;j++){
alfCoeffIdxTab[count+1]=alfCoeffIdxTab[count]
count=count+1
}
alfCoeffIdxTab[count+1]=alfCoeffIdxTab[count]+1
count=count+1
}
for(i=count;i<FilterNum;i++){
alfCoeffIdxTab[i]=alfCoeffIdxTab[count]
}
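The derivation of alfCoeffIdxTab can be transliterated into Python as follows (a sketch; function and parameter names are illustrative):

```python
def build_coeff_idx_tab(filter_num, alf_filter_num_minus1, region_distance):
    """Map each of filter_num basic regions to the index of the filter it
    uses, from the alf_region_distance values parsed from the bitstream."""
    tab = [0] * filter_num
    count = 0
    for i in range(1, alf_filter_num_minus1 + 1):
        for _ in range(region_distance[i] - 1):
            tab[count + 1] = tab[count]      # region merged with the previous one
            count += 1
        tab[count + 1] = tab[count] + 1      # a new filter starts at this region
        count += 1
    for i in range(count, filter_num):
        tab[i] = tab[count]                  # remaining regions reuse the last filter
    return tab
```

For example, with FilterNum = 16, alf_filter_num_minus1 = 3 and distances [0, 1, 2, 1], regions 1 and 2 share filter 1 and all regions from index 4 onward share filter 3.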
21.1.3. If EalfEnableFlag is equal to 0, the filter coefficients AlfCoeffChroma[0][j] and AlfCoeffChroma[1][j] (j = 0 to 7) of the chroma samples are parsed from the bitstream. The coefficients AlfCoeffChroma[0][8] and AlfCoeffChroma[1][8] (i.e., C8 in the filter of fig. 12) are processed as follows:
AlfCoeffChroma[i][8] = AlfCoeffChroma[i][8] + 64 - 2 × (AlfCoeffChroma[i][0] + AlfCoeffChroma[i][1] + … + AlfCoeffChroma[i][7])
Wherein AlfCoeffChroma[i][j] (j = 0 to 7) has a bit width of 7 bits and a value range of -64 to 63; the value range of AlfCoeffChroma[i][8] after the processing is 0 to 127.
If EalfEnableFlag is equal to 1, the filter coefficients AlfCoeffChroma[0][j] and AlfCoeffChroma[1][j] (j = 0 to 13) of the chroma samples are parsed from the bitstream. The coefficients AlfCoeffChroma[0][14] and AlfCoeffChroma[1][14] (i.e., C14 in the filter of fig. 4A) are processed as follows:
AlfCoeffChroma[i][14] = AlfCoeffChroma[i][14] + 64 - 2 × (AlfCoeffChroma[i][0] + AlfCoeffChroma[i][1] + … + AlfCoeffChroma[i][13])
Wherein AlfCoeffChroma[i][j] (j = 0 to 13) has a bit width of 7 bits and a value range of -64 to 63; the value range of AlfCoeffChroma[i][14] after the processing is 0 to 127.
Deriving an adaptive correction filter unit
An adaptive correction filter unit (which may be as shown in fig. 5) is derived from the current maximum coding unit as follows:
21.2.1, deleting the part of the sample area where the current maximum coding unit C is located beyond the image boundary to obtain a sample area D;
21.2.2, if the sample where the lower boundary of the sample area D is located does not belong to the lower boundary of the image, contracting the lower boundaries of the luminance component and chrominance component sample areas D upwards by four lines to obtain a sample area E1; otherwise, let sample area E1 equal sample area D. The last line of samples of sample region D is the lower boundary of the region;
21.2.3 if the sample where the upper boundary of the sample area E1 is located belongs to the upper boundary of the image, or belongs to the slice boundary and the value of cross_patch_loopfilter_enable_flag is '0', let the sample area E2 be equal to the sample area E1; otherwise, the upper boundaries of the luminance component and chrominance component sample areas E1 are extended up by four lines to obtain sample areas E2. The first row of samples of sample region E1 is the upper boundary of the region;
21.2.4, using the sample area E2 as a current adaptive correction filtering unit. The first line of samples of the image is the upper boundary of the image and the last line of samples is the lower boundary of the image.
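Steps 21.2.1 to 21.2.4 can be sketched for the vertical direction of the luminance component as follows (a minimal Python sketch under the four-line shrink/extend rule stated above; parameter names are illustrative, and the horizontal direction and chrominance handling are omitted):

```python
def alf_unit_vertical_range(lcu_y, lcu_h, pic_h,
                            at_upper_slice_boundary=False,
                            cross_slice_filter=True):
    """Return (top, bottom) sample rows of the adaptive correction filtering
    unit derived from an LCU; bottom is one past the last row."""
    top = lcu_y
    bottom = min(lcu_y + lcu_h, pic_h)   # 21.2.1: clip to the image (region D)
    if bottom != pic_h:                  # 21.2.2: shrink the lower edge by 4 rows
        bottom -= 4                      #          unless it is the image bottom
    if top != 0 and not (at_upper_slice_boundary and not cross_slice_filter):
        top -= 4                         # 21.2.3: extend the upper edge by 4 rows
    return top, bottom                   # 21.2.4: row span of the filtering unit
```

For a 1080-row picture with 64-row LCUs, the first unit covers rows 0 to 59, the next rows 60 to 123, and so on, so each unit "borrows" four rows from the LCU above it.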
Determining brightness component adaptive correction filter unit adaptive correction filter coefficient index
If EalfEnableFlag is equal to 0, the adaptive correction filter coefficient index (denoted filterIdx) of the current luminance component adaptive correction filter unit is calculated according to the following method:
xInterval=((((horizontal_size+(1<<LcuSizeInBit)-1)>>LcuSizeInBit)+1)>>2)<<LcuSizeInBit
yInterval=((((vertical_size+(1<<LcuSizeInBit)-1)>>LcuSizeInBit)+1)>>2)<<LcuSizeInBit
if(xInterval==0&&yInterval==0){
index=15
}
else if(xInterval==0){
index=Min(3,y/yInterval)*4+3
}
else if(yInterval==0){
index=Min(3,x/xInterval)+12
}
else{
index=Min(3,y/yInterval)*4+Min(3,x/xInterval)
}
filterIdx=alfCoeffIdxTab[regionTable[index]]
Wherein (x, y) is the coordinates, in the image, of the top-left sample of the largest coding unit from which the current adaptive correction filtering unit is derived, and regionTable is defined as follows:
regionTable[16]={0,1,4,5,15,2,3,6,14,11,10,7,13,12,9,8}
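The index derivation above can be sketched in Python (the `Min` of the pseudocode is Python's `min`; the function name and the identity coefficient-index table used in the example are illustrative):

```python
REGION_TABLE_16 = [0, 1, 4, 5, 15, 2, 3, 6, 14, 11, 10, 7, 13, 12, 9, 8]

def luma_filter_index_16(x, y, horizontal_size, vertical_size,
                         lcu_size_in_bit, alf_coeff_idx_tab):
    """filterIdx of the luma ALF unit whose LCU top-left sample is (x, y)."""
    lcu = 1 << lcu_size_in_bit
    x_interval = ((((horizontal_size + lcu - 1) >> lcu_size_in_bit) + 1) >> 2) << lcu_size_in_bit
    y_interval = ((((vertical_size + lcu - 1) >> lcu_size_in_bit) + 1) >> 2) << lcu_size_in_bit
    if x_interval == 0 and y_interval == 0:
        index = 15
    elif x_interval == 0:
        index = min(3, y // y_interval) * 4 + 3
    elif y_interval == 0:
        index = min(3, x // x_interval) + 12
    else:
        index = min(3, y // y_interval) * 4 + min(3, x // x_interval)
    return alf_coeff_idx_tab[REGION_TABLE_16[index]]
```

For a 1920×1080 picture with 64-sample LCUs (lcu_size_in_bit = 6), x_interval is 448 and y_interval is 256, so the picture is tiled into a 4×4 grid of regions whose Hilbert-style scan order is given by regionTable.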
If EalfEnableFlag is equal to 1, an adaptive correction filter coefficient index (denoted filterIdx) of the current luminance component adaptive correction filter unit is calculated according to the following method.
lcu_width=1<<LcuSizeInBit
lcu_height=1<<LcuSizeInBit
y_interval=((((vertical_size+lcu_height-1)/lcu_height)+4)/8*lcu_height)
x_interval=((((horizontal_size+lcu_width-1)/lcu_width)+4)/8*lcu_width)
if(y_interval==0){
y_st_offset=0
}
else{
y_cnt=Clip3(0,8,(vertical_size+y_interval-1)/y_interval)
y_st_offset=vertical_size-y_interval*(y_cnt-1)
y_st_offset=(y_st_offset+lcu_height/2)/lcu_height*lcu_height
}
if(x_interval==0){
x_st_offset=0
}
else{
x_cnt=Clip3(0,8,(horizontal_size+x_interval-1)/x_interval)
x_st_offset=horizontal_size-x_interval*(x_cnt-1)
x_st_offset=(x_st_offset+lcu_width/2)/lcu_width*lcu_width
}
y_index=(y_interval==0)?7:Clip3(0,7,y/y_interval)
y_index_offset=y_index<<3
y_index2=(y_interval==0||y<y_st_offset)?0:Clip3(-1,6,(y-y_st_offset)/y_interval)+1
y_index_offset2=y_index2<<3
x_index=(x_interval==0)?7:Clip3(0,7,x/x_interval)
x_index2=(x_interval==0||x<x_st_offset)?0:Clip3(-1,6,(x-x_st_offset)/x_interval)+1
if(AlfRegionOrderIndex==0){
filterIdx=alfCoeffIdxTab[regionTable[0][y_index_offset+x_index]]
}
else if(AlfRegionOrderIndex==1){
filterIdx=alfCoeffIdxTab[regionTable[1][y_index_offset+x_index2]]
}
else if(AlfRegionOrderIndex==2){
filterIdx=alfCoeffIdxTab[regionTable[2][y_index_offset2+x_index2]]
}
else if(AlfRegionOrderIndex==3){
filterIdx=alfCoeffIdxTab[regionTable[3][y_index_offset2+x_index]]
}
Wherein (x, y) is the coordinates, in the image, of the top-left sample of the largest coding unit from which the current adaptive correction filtering unit is derived, and regionTable is defined as follows:
regionTable[4][64]={
{63,60,59,58,5,4,3,0,62,61,56,57,6,7,2,1,49,50,55,54,9,8,13,14,48,51,52,53,10,11,12,15,47,46,33,32,31,30,17,16,44,45,34,35,28,29,18,19,43,40,39,36,27,24,23,20,42,41,38,37,26,25,22,21},
{42,43,44,47,48,49,62,63,41,40,45,46,51,50,61,60,38,39,34,33,52,55,56,59,37,36,35,32,53,54,57,58,26,27,28,31,10,9,6,5,25,24,29,30,11,8,7,4,22,23,18,17,12,13,2,3,21,20,19,16,15,14,1,0},
{21,22,25,26,37,38,41,42,20,23,24,27,36,39,40,43,19,18,29,28,35,34,45,44,16,17,30,31,32,33,46,47,15,12,11,10,53,52,51,48,14,13,8,9,54,55,50,49,1,2,7,6,57,56,61,62,0,3,4,5,58,59,60,63},
{0,1,14,15,16,19,20,21,3,2,13,12,17,18,23,22,4,7,8,11,30,29,24,25,5,6,9,10,31,28,27,26,58,57,54,53,32,35,36,37,59,56,55,52,33,34,39,38,60,61,50,51,46,45,40,41,63,62,49,48,47,44,43,42}}
Adaptive correction filter operation
If the left boundary of the current adaptive correction filtering unit is an image boundary, or is located outside a slice boundary and the value of CplfEnableFlag is 0, the left-boundary outer area does not exist; otherwise, the left-boundary outer area is the area between the current adaptive correction filtering unit shifted left by 3 sample points and the current adaptive correction filtering unit.
If the right boundary of the current adaptive correction filtering unit is an image boundary, or is located outside a slice boundary and the value of CplfEnableFlag is 0, the right-boundary outer area does not exist; otherwise, the right-boundary outer area is the area between the current adaptive correction filtering unit shifted right by 3 sample points and the current adaptive correction filtering unit.
Illustratively, the boundary region includes a left boundary outer region and a right boundary outer region.
If AlfLCUEnableFlag [ compIdx ] [ LcuIndex ] is equal to 1, then adaptive correction filtering is performed on the compIdx component, otherwise, adaptive correction filtering is not performed.
If EalfEnableFlag is equal to 0, when the sample used in the adaptive correction filtering process is the sample in the adaptive correction filtering unit, directly using the sample for filtering; when the samples used in the adaptive correction filtering process do not belong to the samples in the adaptive correction filtering unit, the filtering is performed in the following manner:
21.3.1, if the sample is outside the image boundary, or outside the slice boundary and CplfEnableFlag has a value of 0, filtering using the sample closest to the sample in the adaptive correction filtering unit instead of the sample;
21.3.2, otherwise, if the sample is outside the upper boundary or the lower boundary of the adaptive correction filtering unit, the sample closest to it in the adaptive correction filtering unit is used instead of the sample for filtering;
21.3.3, if none of the conditions in 21.3.1 and 21.3.2 is met, use the samples for filtering. Illustratively, the adaptive correction filtering operation of the luminance component of the adaptive correction filtering unit is as follows:
ptmp=AlfCoeffLuma[filterIdx][8]*p(x,y)
for(j=0;j<8;j++){
ptmp+=AlfCoeffLuma[filterIdx][j]*(p(x-Hor[j],y-Ver[j])+p(x+Hor[j],y+Ver[j]))
}
ptmp=(ptmp+32)>>6
p’(x,y)=Clip3(0,(1<<BitDepth)-1,ptmp)
Where p(x, y) is the sample after offset, p'(x, y) is the reconstructed sample, and Hor[j] and Ver[j] (j = 0 to 7) can be as shown in Table 1. Illustratively, the adaptive correction filtering operation of the chrominance components of the adaptive correction filtering unit is as follows:
ptmp=AlfCoeffChroma[i][8]*p(x,y)
for(j=0;j<8;j++){
ptmp+=AlfCoeffChroma[i][j]*(p(x-Hor[j],y-Ver[j])+p(x+Hor[j],y+Ver[j]))
}
ptmp=(ptmp+32)>>6
p’(x,y)=Clip3(0,(1<<BitDepth)-1,ptmp)
Where p(x, y) is the sample after offset, p'(x, y) is the reconstructed sample, and Hor[j] and Ver[j] (j = 0 to 7) can be as shown in Table 1.
TABLE 1. Sample compensation filter coordinate offset values
j Hor[j] Ver[j]
0 0 3
1 0 2
2 1 1
3 0 1
4 1 -1
5 3 0
6 2 0
7 1 0
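The luma filtering loop above, combined with the offsets of Table 1, can be sketched as follows (a minimal Python sketch; `p` is assumed to be an accessor that already resolves the boundary-padding rules described above, and the function name is illustrative):

```python
HOR = [0, 0, 1, 0, 1, 3, 2, 1]   # Hor[j] from Table 1
VER = [3, 2, 1, 1, -1, 0, 0, 0]  # Ver[j] from Table 1

def alf_filter_pixel_9tap(p, x, y, coeff, bit_depth=8):
    """Apply the 9-coefficient symmetric ALF at sample (x, y).
    coeff[8] is the centre tap; coeff[0..7] each weight a symmetric pair."""
    ptmp = coeff[8] * p(x, y)
    for j in range(8):
        ptmp += coeff[j] * (p(x - HOR[j], y - VER[j]) + p(x + HOR[j], y + VER[j]))
    ptmp = (ptmp + 32) >> 6                     # rounding shift
    return min(max(ptmp, 0), (1 << bit_depth) - 1)  # Clip3 to the sample range
```

On a flat area the filter leaves the sample unchanged whenever the coefficients are DC-normalized, i.e. coeff[8] + 2 × Σ coeff[j] = 64.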
If EalfEnableFlag is equal to 1, when the sample used in the adaptive correction filtering process is the sample in the adaptive correction filtering unit, directly using the sample for filtering; when the samples used in the adaptive correction filtering process do not belong to the samples in the adaptive correction filtering unit, the filtering is performed in the following manner:
21.4.1. If the sample is outside the image boundary, or is outside the slice boundary and the value of CplfEnableFlag is 0:
21.4.1.1. Taking the filter shown in fig. 4A as an example, if the sample corresponds to position C1, C2, C6, C4, C5 or C10 of the filter shape, the sample closest to it in the adaptive correction filtering unit and the boundary area is used instead of the sample for filtering;
21.4.1.2. Otherwise, the sample closest to it in the adaptive correction filtering unit is used instead of the sample for filtering.
21.4.2. Otherwise, if the sample is outside the upper or lower boundary of the adaptive correction filtering unit:
21.4.2.1. Taking the filter shown in fig. 4A as an example, if the sample corresponds to position C1, C2, C6, C4, C5 or C10 of the filter shape, the sample closest to it in the adaptive correction filtering unit and the boundary area is used instead of the sample for filtering;
21.4.2.2. Otherwise, the sample closest to it in the adaptive correction filtering unit is used instead of the sample for filtering.
21.4.3. Otherwise, if neither 21.4.1 nor 21.4.2 is met, the sample is used directly for filtering.
Illustratively, the adaptive correction filtering operation of the luminance component of the adaptive correction filtering unit is as follows:
ptmp=AlfCoeffLuma[filterIdx][14]*p(x,y)
for(j=0;j<14;j++){
ptmp+=AlfCoeffLuma[filterIdx][j]*(p(x-Hor[j],y-Ver[j])+p(x+Hor[j],y+Ver[j]))
}
ptmp=(ptmp+32)>>6
p’(x,y)=Clip3(0,(1<<BitDepth)-1,ptmp)
Where p(x, y) is the sample after offset, p'(x, y) is the reconstructed sample, and Hor[j] and Ver[j] (j = 0 to 13) can be as shown in Table 2.
Illustratively, the adaptive correction filtering operation of the chrominance components of the adaptive correction filtering unit is as follows:
ptmp=AlfCoeffChroma[i][14]*p(x,y)
for(j=0;j<14;j++){
ptmp+=AlfCoeffChroma[i][j]*(p(x-Hor[j],y-Ver[j])+p(x+Hor[j],y+Ver[j]))
}
ptmp=(ptmp+32)>>6
p’(x,y)=Clip3(0,(1<<BitDepth)-1,ptmp)
TABLE 2. Sample compensation filter coordinate offset values
j Hor[j] Ver[j]
0 0 3
1 2 2
2 1 2
3 0 2
4 1 -2
5 2 -2
6 2 1
7 1 1
8 0 1
9 1 -1
10 2 -1
11 3 0
12 2 0
13 1 0
Embodiment twenty-two
First, the image may be subjected to fixed region division, with the result as shown in fig. 2, to obtain an index value for each region. Each of these regions may then be divided in one of the 8 manners shown in fig. 18A, or only some of the division manners of fig. 18A may be retained, as shown in fig. 18B.
For each region, the encoding end device may determine a final division manner based on the RDO decision, and transmit the division manner of each region to the decoding end device through a code stream.
Illustratively, taking the last division manner in fig. 18A as an example, the 16 regions obtained by fixed division can be divided into at most 64 regions.
In the decoding process, the decoding end device may first divide a fixed region, and then read a specific division manner of each region from the code stream to obtain a final division manner of the whole frame.
For example, for each division in fig. 18A, the divided region numbers may be as shown in fig. 18C.
Illustratively, the value of J is the maximum index value of the previous region plus 1.
Embodiment twenty-three
Taking the LCU as the minimum unit of fixed region division, when the image resolution is small, some regions may hold only an index number while containing no image information.
For such a case, after the fixed region division result is determined, it can be determined which regions specifically contain no image information.
By way of example, the determination may be based on the number of LCUs contained in the image width and height. If the image is divided into fixed 4×4 regions with region indices as shown in fig. 2, then when the number of LCUs in the width or the height is less than 4, some columns or rows of regions contain no image information. All regions containing no image information are denoted as the set G. The size of the set G is denoted N7, N7 being a positive integer.
For any index value i among all index values:
23.1, if i is equal to any element in the set G, i = 0;
23.2, otherwise (i is not equal to any element in the set G), if i is greater than k elements in the set G, then i = i - k, where k is less than or equal to N7.
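This index compaction can be sketched as follows (a minimal Python sketch; the function name is illustrative, and the set G of empty regions is passed in as `empty_set`):

```python
def remap_region_indices(indices, empty_set):
    """Remap fixed-division region indices after removing regions that
    contain no image information (the set G of the text)."""
    remapped = []
    for i in indices:
        if i in empty_set:
            remapped.append(0)                       # 23.1: empty region maps to 0
        else:
            k = sum(1 for g in empty_set if g < i)   # 23.2: smaller empty indices
            remapped.append(i - k)
    return remapped
```

For example, with G = {1, 3}, the surviving regions 0, 2 and 4 are renumbered 0, 1 and 2, so the remaining index values stay contiguous.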
It should be noted that, the foregoing embodiments are merely specific examples of implementation manners of the embodiments of the present application, and are not limiting to the scope of the present application, and new embodiments may be obtained by combining the embodiments or modifying the embodiments based on the foregoing embodiments, which all fall within the scope of the present application.
In addition, the implementation flows of the encoding end and the decoding end in the above embodiments may be referred to each other.
The method provided by the application is described above. The device provided by the application is described below:
Referring to fig. 19, fig. 19 is a schematic structural diagram of a filtering apparatus according to an embodiment of the present application, where the filtering apparatus may be applied to a decoding end device, and the apparatus may include:
a dividing unit 1910 for performing region division on a luminance component of a current image frame;
a first determining unit 1920, configured to determine, based on a region category identifier of an LCU obtained by parsing from a code stream, a region category to which the LCU belongs;
A second determining unit 1930, configured to determine a filter coefficient of the LCU based on the region class to which the LCU belongs and a filter coefficient obtained by parsing a code stream;
a filtering unit 1940, configured to perform ALF filtering on pixels of the LCU one by one based on a filter coefficient of the LCU.
In a possible implementation manner, the first determining unit 1920 determines, based on the region class identifier of the LCU obtained by parsing the code stream, a region class to which the LCU belongs, including:
and determining the region category of the LCU based on the region of the LCU and the region category identification of the LCU.
In a possible implementation manner, the region category identification of the LCU is used for identifying a category of the LCU in a region to which the LCU belongs, and the category of the LCU in the region to which the LCU belongs is determined by classifying each LCU in the region to which the LCU belongs;
the first determining unit 1920 determines, based on the region to which the LCU belongs and the region class identifier of the LCU, a region class to which the LCU belongs, including:
And determining the region category of the LCU based on the category number of each region, the region of the LCU and the region category identification of the LCU.
In a possible implementation manner, the first determining unit 1920 determines, based on the number of categories of each region, the region to which the LCU belongs, and the region category identifier of the LCU, a region category to which the LCU belongs, including:
Determining the total number of categories of each region before the region to which the LCU belongs based on the category number of each region before the region to which the LCU belongs;
and determining the region category of the LCU based on the total number of the categories of the regions before the region of the LCU and the region category identification of the LCU.
In one possible implementation manner, before determining the filter coefficient of the LCU based on the region category to which the LCU belongs and the filter coefficient obtained by parsing the code stream, the second determining unit 1930 is further configured to:
determine whether to start ALF filtering for the LCU;
and when it is determined to start ALF filtering for the LCU, perform the operation of determining the filter coefficient of the LCU based on the region category of the LCU and the filter coefficient obtained by parsing the code stream.
In a possible implementation, the second determining unit 1930 determines whether to start ALF filtering for the LCU, including:
Analyzing LCU coefficient identification of the LCU from the code stream; wherein the LCU coefficient identification is used to identify a filter coefficient used by the LCU among at least one set of filter coefficients used by a merge region to which the LCU belongs;
and when the value of the LCU coefficient identifier of the LCU is not the first value, determining to start ALF filtering on the LCU.
In one possible implementation manner, the second determining unit 1930 determines the filter coefficient of the LCU based on the region class to which the LCU belongs and the filter coefficient parsed from the code stream, and includes:
Determining a filter coefficient of the LCU based on the region category of the LCU, the filter coefficient obtained by analyzing the code stream and the region coefficient identification of the merging region of the LCU obtained by analyzing the code stream; the region coefficient identification is used for identifying the filter coefficient used by the merging region to which the LCU belongs in the preset multiple groups of filter coefficients.
In a possible implementation manner, the second determining unit 1930 determines the filter coefficient of the LCU based on the region class to which the LCU belongs, the filter coefficient obtained by parsing the code stream, and the region coefficient identifier of the merge region to which the LCU belongs by parsing the code stream, and includes:
when it is determined, based on the region coefficient identifier of the merge region to which the LCU belongs, that the merge region to which the LCU belongs uses multiple sets of filter coefficients, the filter coefficient of the LCU is determined based on the region category to which the LCU belongs, the filter coefficients obtained by parsing the code stream, and the LCU coefficient identifier of the LCU.
In one possible implementation manner, the second determining unit 1930 determines the filter coefficient of the LCU based on the region class to which the LCU belongs and the filter coefficient parsed from the code stream, and includes:
Determining a filter coefficient of the LCU based on the region category of the LCU, the filter coefficient obtained by analyzing the code stream and the coefficient selection identifier of the LCU; wherein the coefficient selection identifies filter coefficients for identifying a selection of the LCU from among a plurality of sets of candidate filter coefficients.
In a possible implementation manner, the second determining unit 1930 determines the filter coefficient of the LCU based on the region class to which the LCU belongs, the filter coefficient obtained by parsing from the code stream, and the coefficient selection identifier of the LCU, and includes:
When the value of the coefficient selection identifier of the LCU is a first value, determining the filter coefficient of the previous merging region of the merging region to which the LCU belongs as the filter coefficient of the LCU; the previous merging area of the LCU is the merging area corresponding to the previous adjacent index of the merging area of the LCU;
When the value of the coefficient selection identifier of the LCU is a second value, determining the filter coefficient of the merging area to which the LCU belongs as the filter coefficient of the LCU;
And when the value of the coefficient selection identifier of the LCU is a third value, determining the filter coefficient of the next merging region of the merging region to which the LCU belongs as the filter coefficient of the LCU, wherein the next merging region of the merging region to which the LCU belongs is the merging region corresponding to the next adjacent index of the merging region to which the LCU belongs.
In one possible implementation manner, the second determining unit 1930 parses the code stream to obtain filter coefficients, including:
for any merging region, analyzing the filter shape of the merging region from the code stream;
and analyzing the filter coefficient of the merging area from the code stream based on the filter shape.
In one possible implementation manner, the second determining unit 1930 determines the filter coefficient of the LCU based on the region class to which the LCU belongs and the filter coefficient parsed from the code stream, and includes:
determining the filter shape and the filter coefficient of the LCU based on the region category of the LCU and the filter shape and the filter coefficient obtained by analyzing the code stream;
The ALF filtering is carried out on pixels of the LCU one by one based on the filtering coefficient of the LCU, and the ALF filtering comprises the following steps:
ALF filtering is carried out on pixels of the LCU one by one based on the filter shape and the filter coefficient of the LCU.
In one possible implementation, the filtering unit 1940 performs ALF filtering on pixels of the LCU one by one based on a filter coefficient of the LCU, including:
And performing ALF filtering on the pixels of the LCU one by one based on the filter coefficient of the LCU and the weight coefficient of each reference pixel position corresponding to the merging region to which the LCU belongs, parsed from the code stream; the weight coefficient of a reference pixel position is the weight applied to the pixel value of that reference pixel position when it participates in the ALF filtering calculation.
In one possible implementation, for any pixel of the LCU, its filtered pixel value is determined by:
filtered pixel value = Σ_{i=0}^{13} C_i × (a_i × P_i + (2 − a_i) × P_{28−i}) + C_14 × P_14
Wherein C_i is the (i+1)-th filter coefficient among the filter coefficients of the merging region to which the LCU belongs; P_i is the pixel value of the reference pixel position corresponding to the filter coefficient C_i; the reference pixel position corresponding to P_{28−i} and the reference pixel position corresponding to P_i are centrally symmetric about the position of the current filtering pixel; a_i is the weight coefficient of the pixel value of the reference pixel position corresponding to P_i; P_14 is the pixel value of the current filtering pixel; C_14 is the filter coefficient of the current filtering pixel; and 0 < a_i < 2.
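A minimal numeric sketch of this weighted, centrally symmetric filtering, assuming the weight a_i applies to P_i and its complement (2 − a_i) to the symmetric sample P_{28−i} (so that a_i = 1 recovers a plain symmetric filter); the codec's integer rounding, normalization, and clipping are omitted, and this arrangement of the weights is an assumption consistent with the stated constraint 0 < a_i < 2.

```python
def alf_filter_pixel(C, P, a):
    """C: 15 coefficients C_0..C_14; P: 29 samples P_0..P_28 arranged so that
    P[i] and P[28 - i] are centrally symmetric about the current pixel P[14];
    a: 14 weight coefficients with 0 < a[i] < 2 (a[i] == 1 -> symmetric)."""
    acc = C[14] * P[14]                      # current pixel's own tap
    for i in range(14):                      # 14 symmetric reference pairs
        acc += C[i] * (a[i] * P[i] + (2 - a[i]) * P[28 - i])
    return acc
```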
In one possible implementation, the filtering unit 1940 performs ALF filtering on pixels of the LCU one by one based on a filter coefficient of the LCU, including:
for any pixel of the LCU, in the process of performing ALF filtering on the pixel, updating the pixel value of the pixel based on the pixel values of surrounding pixels of the pixel;
And performing ALF filtering on the pixel based on the pixel value updated by the pixel.
In one possible implementation, the filtering unit 1940 updates the pixel value of the pixel based on the pixel values of surrounding pixels of the pixel, including:
Determining a maximum value and a minimum value of the pixel values of the pixels other than the center position in a target pixel block; wherein the target pixel block is a 3×3 pixel block with the pixel at its center position;
Updating the pixel value of the pixel to the maximum value when the pixel value of the pixel is larger than the maximum value;
And when the pixel value of the pixel is smaller than the minimum value, updating the pixel value of the pixel to the minimum value.
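The neighbourhood clamp described in the three steps above can be sketched as follows (illustrative only; the normative process operates on reconstructed samples inside the codec):

```python
def clamp_to_neighbors(block):
    """block: 3x3 nested list with the pixel to update at the centre.
    Returns the centre value clamped into [min, max] of the 8 surrounding
    pixels, i.e. the updated pixel value used for the subsequent ALF step."""
    neighbors = [block[r][c] for r in range(3) for c in range(3)
                 if (r, c) != (1, 1)]        # exclude the centre position
    lo, hi = min(neighbors), max(neighbors)
    return min(max(block[1][1], lo), hi)     # updated centre pixel value
```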
Referring to fig. 20, fig. 20 is a schematic structural diagram of a filtering apparatus according to an embodiment of the present application, where the filtering apparatus may be applied to an encoding/decoding end device, and the apparatus may include:
a filtering unit 2010, configured to, during ALF filtering of any pixel in a current filtering unit, for any reference pixel, when the reference pixel is not in the current filtering unit:
Under the condition that the pixel value of the reference pixel cannot be obtained, the pixel closest to the reference pixel within the current filtering unit and the boundary area is used in place of the reference pixel for filtering; the boundary area includes the area outside the left boundary or outside the right boundary of the current filtering unit, where the area outside the left boundary of the current filtering unit includes part or all of the area in the filtering unit adjacent to the left side of the current filtering unit, and the area outside the right boundary of the current filtering unit includes part or all of the area in the filtering unit adjacent to the right side of the current filtering unit;
Otherwise, the reference pixel is used for filtering.
In one possible implementation manner, the situations in which the pixel value of the reference pixel cannot be obtained include one of the following:
the reference pixel is outside the image boundary of the current image frame; the reference pixel is outside the slice boundary of the current slice and filtering across the slice boundary is not allowed; or the reference pixel is outside the upper boundary or the lower boundary of the current filtering unit.
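One way to realise the "closest pixel within the current filtering unit plus the boundary area" substitution is coordinate clamping; the function and parameter names below are illustrative assumptions, not part of the method as claimed.

```python
def substitute_reference(ref_x, ref_y, x0, x1, y0, y1, ext):
    """Clamp an unavailable reference position to the nearest sample of the
    current filtering unit (columns [x0, x1], rows [y0, y1]), horizontally
    widened by 'ext' columns into the left/right boundary areas."""
    nx = min(max(ref_x, x0 - ext), x1 + ext)   # may reach into left/right areas
    ny = min(max(ref_y, y0), y1)               # vertically stay inside the unit
    return nx, ny
```

For instance, a reference pixel two columns left of a boundary area that extends two columns past the unit is kept as-is, while one further out is pulled to the edge of that area.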
In a possible implementation manner, in a case where the pixel value of the reference pixel cannot be obtained, before the filtering unit 2010 uses the pixel closest to the reference pixel within the current filtering unit and the boundary area in place of the reference pixel for filtering, the method further includes:
Determining whether the reference pixel corresponds to a filter coefficient of a specified position of the filter shape;
if yes, determining to execute the filtering operation by using the current filtering unit and the pixel closest to the reference pixel position in the boundary area to replace the reference pixel.
In one possible implementation, after the filtering unit 2010 determines whether the reference pixel corresponds to a specified position of the filter shape, the method further includes:
And if the reference pixel does not correspond to the filter coefficient of the designated position of the filter shape, using the pixel closest to the reference pixel position in the current filtering unit to replace the reference pixel for filtering.
In a possible implementation manner, in a case where the pixel value of the reference pixel cannot be obtained, before the filtering unit 2010 uses the pixel closest to the reference pixel within the current filtering unit and the boundary area in place of the reference pixel for filtering, the method further includes:
Determining whether the current filtering unit allows the use of enhanced adaptive correction filtering;
if yes, determining to execute the filtering operation by using the current filtering unit and the pixel closest to the reference pixel position in the boundary area to replace the reference pixel;
Alternatively,
if the current filtering unit does not allow enhanced adaptive correction filtering to be used, the pixel closest to the reference pixel position within the current filtering unit is used in place of the reference pixel for filtering.
In one possible embodiment, the specified positions include a first position, a second position, a third position, and symmetrical positions of the first position, the second position, and the third position in the first filter;
The first filter is a centrally symmetric filter whose shape is the combination of a 7×7 cross and a 5×5 square; the first position is the upper-left corner position of the first filter, the second position is the position adjacent to the right of the first position, the third position is the position adjacent below the first position, and the symmetric positions include axially symmetric positions and centrally symmetric positions.
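As a consistency check, a shape formed by a 7×7 cross combined with a 5×5 square contains 29 sample positions, matching the P_0 to P_28 indexing used for the filter coefficients earlier; the coordinate layout below is an assumption for illustration.

```python
def first_filter_positions():
    """Sample positions (dx, dy) of a filter shaped as a 7x7 cross plus a
    5x5 square, both centred on the current pixel at (0, 0)."""
    cross = {(x, 0) for x in range(-3, 4)} | {(0, y) for y in range(-3, 4)}
    square = {(x, y) for x in range(-2, 3) for y in range(-2, 3)}
    return cross | square                    # union: 13 + 25 - 9 = 29 taps
```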
The filtering device provided by the embodiment of the application can be applied to decoding end equipment, and the device may include: a determining unit, an acquiring unit and a filtering unit; wherein:
An obtaining unit, configured to obtain a region coefficient identifier of the merging region to which the current LCU belongs when the determining unit determines that ALF filtering is started for the current LCU of the current image frame;
The obtaining unit is further configured to obtain the filter coefficients of the current LCU based on the region coefficient identifier of the merging region to which the current LCU belongs; the region coefficient identifier is used to identify, among multiple preset sets of filter coefficients, the filter coefficients used by the merging region to which the LCU belongs;
and the filtering unit is used for performing ALF filtering on pixels of the current LCU one by one based on the filtering coefficient of the current LCU.
In a possible implementation manner, the determining unit determines that ALF filtering is started for a current LCU of a current frame image, including:
Analyzing the LCU coefficient identification of the current LCU from the code stream; wherein the LCU coefficient identification is configured to identify, among at least one set of filter coefficients used in a merge region to which the current LCU belongs, a filter coefficient used by the current LCU;
and when the value of the LCU coefficient identifier of the LCU is not the first value, determining to start ALF filtering on the current LCU.
In a possible implementation manner, the obtaining unit obtains a filter coefficient of the current LCU based on a region coefficient identifier of a merging region to which the current LCU belongs, including:
When the fact that the combining region to which the current LCU belongs uses multiple sets of filter coefficients is determined based on the region coefficient identification of the combining region to which the current LCU belongs, the filter coefficients of the current LCU are determined from the multiple sets of filter coefficients used by the combining region to which the current LCU belongs based on the LCU coefficient identification of the current LCU.
The filtering device provided by the embodiment of the application can be applied to decoding end equipment, and the device may include: a determining unit, an acquiring unit and a filtering unit; wherein:
The acquisition unit is used for acquiring a coefficient selection identifier of the current LCU when the determination unit determines that the ALF filtering is started on the current LCU of the current frame image;
A determining unit, configured to determine a filter coefficient of the current LCU based on a merge area to which the current LCU belongs and a coefficient selection identifier of the current LCU; wherein the coefficient selection identification is used for identifying a filter coefficient selected for use by the current LCU in a plurality of sets of candidate filter coefficients;
and the filtering unit is used for performing ALF filtering on pixels of the current LCU one by one based on the filtering coefficient of the current LCU.
In a possible implementation manner, the determining unit determines the filter coefficient of the LCU based on the merge area to which the LCU belongs and the coefficient selection identifier of the LCU, and includes:
When the value of the coefficient selection identifier of the current LCU is a first value, determining the filter coefficient of the previous merging region of the merging region to which the current LCU belongs as the filter coefficient of the current LCU; the previous merging area of the LCU is the merging area corresponding to the previous adjacent index of the merging area of the LCU;
When the value of the coefficient selection identifier of the current LCU is a second value, determining the filter coefficient of the merging area to which the current LCU belongs as the filter coefficient of the current LCU;
And when the value of the coefficient selection identifier of the current LCU is a third value, determining the filter coefficient of the next merging region of the merging region to which the current LCU belongs as the filter coefficient of the current LCU, wherein the next merging region of the merging region to which the LCU belongs is the merging region corresponding to the next adjacent index of the merging region to which the LCU belongs.
The filtering device provided by the embodiment of the application can be applied to decoding end equipment, and the device may include: a determining unit, an acquiring unit and a filtering unit; wherein:
An obtaining unit, configured to obtain, when the determining unit determines that ALF filtering is started for a current LCU of a current frame image, a filter shape of a merge area to which the current LCU belongs based on the merge area to which the current LCU belongs;
The obtaining unit is further configured to obtain a filter coefficient of a merging area to which the current LCU belongs, based on the filter shape;
And the filtering unit is used for carrying out ALF filtering on the pixels of the current LCU one by one based on the filter shape and the filter coefficient.
The filtering device provided by the embodiment of the application can be applied to decoding end equipment, and the device may include: a determining unit, an acquiring unit and a filtering unit; wherein:
The acquisition unit is used for acquiring a filter coefficient of a merging area to which the current LCU belongs and a weight coefficient of each reference pixel position based on the area to which the current LCU belongs when the determination unit determines that the ALF filtering is started on the current LCU of the current frame image;
And the filtering unit is used for carrying out ALF filtering on the pixels of the current LCU one by one based on the filtering coefficient and the weight coefficient of each reference pixel position.
The filtering device provided by the embodiment of the application can be applied to coding end equipment, and the device may include: a dividing unit, a classifying unit, a merging unit and a coding unit; wherein:
a dividing unit for performing region division on a luminance component of a current image frame;
the classifying unit is used for classifying each LCU in any region, and dividing the region into a plurality of region categories based on the category of each LCU;
the merging unit is used for carrying out region merging on each region category and determining the filter coefficient of each merging region;
and the coding unit is used for writing the filter coefficient of each merging region and the region category identification of each LCU into the code stream.
The filtering device provided by the embodiment of the application can be applied to coding end equipment, and the device can comprise: a determining unit and a coding unit; wherein:
a determining unit, configured to determine, for any merging region of the current image frame, a filter coefficient used by the merging region based on an RDO decision;
The determining unit is further configured to determine a region coefficient identifier of the merging region based on a filter coefficient used by the merging region; wherein, the region coefficient mark is used for marking the filter coefficient used by the combining region in the preset multiple groups of filter coefficients;
and the coding unit is used for writing the filter coefficient used by each merging area and the area coefficient identifier of each merging area into the code stream.
In a possible implementation manner, the determining unit is further configured to determine, for any merging area of the current image frame, an LCU coefficient identifier of each LCU based on a filter coefficient used by each LCU in the merging area when the filter coefficient used by the merging area includes multiple sets;
The coding unit is further configured to write the LCU coefficient identifier of each LCU into the code stream.
The filtering device provided by the embodiment of the application can be applied to coding end equipment, and the device can comprise: a determining unit and a coding unit; wherein:
A determining unit, configured to determine, for any merging region of the current image frame, a filter coefficient used by the merging region from a plurality of sets of filter coefficients based on RDO decision;
The determining unit is further configured to determine a coefficient selection identifier of each LCU in the merging area based on a filter coefficient used in the merging area; wherein the coefficient selection identification is used for identifying filter coefficients selected for use by each LCU in a plurality of groups of candidate filter coefficients;
and the coding unit is used for writing the filter coefficient used by each merging area and the coefficient selection identifier of each LCU into the code stream.
In one possible implementation manner, for any merging region, the multiple sets of filter coefficients include:
the filter coefficients obtained by training for the merging region, the filter coefficients of the previous merging region of the merging region, and the filter coefficients of the next merging region of the merging region.
The filtering device provided by the embodiment of the application can be applied to coding end equipment, and the device can comprise: a determining unit and a coding unit; wherein:
A determining unit for determining, for any merging region of the current image frame, a filter shape and filter coefficients used by the merging region based on RDO decision;
and the coding unit is used for writing the filter shape and the filter coefficient used by each merging area into the code stream.
The filtering device provided by the embodiment of the application can be applied to coding end equipment, and the device can comprise:
A determining unit, configured to determine, for any merging region of the current image frame, a filter coefficient used by the merging region and a weight coefficient of each corresponding reference pixel position based on RDO decision;
and the coding unit is used for writing the filter coefficient used by each merging area and the weight coefficient of each corresponding reference pixel position into the code stream.
Fig. 21 is a schematic hardware structure diagram of a decoding end device according to an embodiment of the present application. The decoding end device may include a processor 2101, a machine-readable storage medium 2102 storing machine-executable instructions. The processor 2101 and the machine-readable storage medium 2102 may communicate via the system bus 2103. Also, the processor 2101 may perform the filtering method of the decoding end device described above by reading and executing machine-executable instructions corresponding to the filtering control logic in the machine-readable storage medium 2102.
The machine-readable storage medium 2102 referred to herein may be any electronic, magnetic, optical, or other physical storage device that can contain or store information, such as executable instructions, data, and so on. For example, the machine-readable storage medium may be: RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (e.g., a hard disk drive), a solid-state disk, any type of storage disk (e.g., an optical disk, a DVD, etc.), or a similar storage medium, or a combination thereof.
In some embodiments, a machine-readable storage medium having stored thereon machine-executable instructions that when executed by a processor implement the above-described filtering method of a decoding end device is also provided. For example, the machine-readable storage medium may be ROM, RAM, CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
Fig. 22 is a schematic hardware structure diagram of an encoding end device according to an embodiment of the present application. The encoding end device may include a processor 2201, a machine-readable storage medium 2202 storing machine-executable instructions. The processor 2201 and machine-readable storage medium 2202 may communicate via a system bus 2203. Also, the processor 2201 may perform the filtering method of the encoding end device described above by reading and executing machine-executable instructions corresponding to the filtering control logic in the machine-readable storage medium 2202.
The machine-readable storage medium 2202 referred to herein may be any electronic, magnetic, optical, or other physical storage device that can contain or store information, such as executable instructions, data, and so on. For example, the machine-readable storage medium may be: RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (e.g., a hard disk drive), a solid-state disk, any type of storage disk (e.g., an optical disk, a DVD, etc.), or a similar storage medium, or a combination thereof.
In some embodiments, a machine-readable storage medium having stored thereon machine-executable instructions that when executed by a processor implement the above-described filtering method of an encoding end device is also provided. For example, the machine-readable storage medium may be ROM, RAM, CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
In some embodiments, there is also provided a camera apparatus including the filtering device of any of the above embodiments.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing description is merely of preferred embodiments of the present application and is not intended to limit it; any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the application shall fall within the scope of protection of the application.

Claims (5)

1. A filtering method applied to a decoding end device, the method comprising:
Performing region division on a luminance component of a current image frame;
determining the region category of the self-adaptive correction filter unit based on the region category identification of the self-adaptive correction filter unit obtained by analysis from the code stream;
Determining the filter coefficient of the self-adaptive correction filter unit based on the region category of the self-adaptive correction filter unit and the filter coefficient obtained by analyzing the code stream;
ALF filtering is carried out on pixels of the self-adaptive correction filtering unit one by one based on the filtering coefficient of the self-adaptive correction filtering unit;
the adaptive correction filtering unit derives according to the current maximum coding unit LCU of the current frame image in the following manner:
Deleting the part of the sample area where the current maximum coding unit is located beyond the image boundary to obtain a first sample area;
If the sample where the lower boundary of the first sample area is located does not belong to the lower boundary of the image, the lower boundary of the first sample area of the luminance component and the chrominance component is moved up by four rows to obtain a second sample area;
if the sample where the lower boundary of the first sample area is located belongs to the lower boundary of the image, making the second sample area equal to the first sample area;
If the sample where the upper boundary of the second sample area is located belongs to the upper boundary of the image, or belongs to a slice boundary and the value of the cross-slice boundary filter flag is 0, the third sample area is made equal to the second sample area;
If the sample where the upper boundary of the second sample area is located does not belong to the upper boundary of the image, and the condition that it belongs to a slice boundary and the value of the cross-slice boundary filter flag is 0 is not satisfied, the upper boundary of the second sample area of the luminance component and the chrominance component is extended upwards by four rows to obtain a third sample area;
the third sample area is taken as the current adaptive correction filtering unit.
2. The method according to claim 1, wherein the determining the region class to which the adaptive correction filter unit belongs based on the region class identification of the adaptive correction filter unit parsed from the code stream includes:
and determining the region type of the self-adaptive correction filter unit based on the region of the self-adaptive correction filter unit and the region type identifier of the self-adaptive correction filter unit.
3. The method according to claim 1, wherein the method further comprises:
When it is determined to filter the current adaptive correction filtering unit, if a sample used in the adaptive correction filtering process is a sample within the adaptive correction filtering unit, that sample is used for filtering;
If a sample used in the adaptive correction filtering process is not a sample within the adaptive correction filtering unit, then:
if the sample is outside the image boundary, or is outside the slice boundary and the value of the cross-slice boundary filter flag is 0, the sample closest to it within the adaptive correction filtering unit is used in its place for filtering;
if the sample is neither outside the image boundary nor outside the slice boundary with the value of the cross-slice boundary filter flag equal to 0, and the sample is outside the upper boundary or the lower boundary of the adaptive correction filtering unit, the sample closest to it within the adaptive correction filtering unit is used in its place for filtering;
if the sample is neither outside the image boundary nor outside the slice boundary with the value of the cross-slice boundary filter flag equal to 0, and the sample is neither outside the upper boundary nor outside the lower boundary of the adaptive correction filtering unit, the sample itself is used for filtering.
4. A filtering apparatus for use in a decoding end device, the apparatus comprising:
a dividing unit for performing region division on a luminance component of a current image frame;
the first determining unit is used for determining the region category of the adaptive correction filtering unit based on the region category identification of the adaptive correction filtering unit obtained by analysis from the code stream;
The second determining unit is used for determining the filter coefficient of the adaptive correction filtering unit based on the region category of the adaptive correction filtering unit and the filter coefficient obtained by analysis from the code stream;
A filtering unit for performing ALF filtering on pixels of the adaptive correction filtering unit one by one based on a filter coefficient of the adaptive correction filtering unit;
the adaptive correction filtering unit derives according to the current maximum coding unit LCU of the current frame image in the following manner:
Deleting the part of the sample area where the current maximum coding unit is located beyond the image boundary to obtain a first sample area;
If the sample where the lower boundary of the first sample area is located does not belong to the lower boundary of the image, the lower boundary of the first sample area of the luminance component and the chrominance component is moved up by four rows to obtain a second sample area;
if the sample where the lower boundary of the first sample area is located belongs to the lower boundary of the image, making the second sample area equal to the first sample area;
If the sample where the upper boundary of the second sample area is located belongs to the upper boundary of the image, or belongs to a slice boundary and the value of the cross-slice boundary filter flag is 0, the third sample area is made equal to the second sample area;
If the sample where the upper boundary of the second sample area is located does not belong to the upper boundary of the image, and the condition that it belongs to a slice boundary and the value of the cross-slice boundary filter flag is 0 is not satisfied, the upper boundary of the second sample area of the luminance component and the chrominance component is extended upwards by four rows to obtain a third sample area;
the third sample area is taken as the current adaptive correction filtering unit.
5. A decoding end device comprising a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor for executing the machine-executable instructions to implement the method of any one of claims 1-3.
CN202410175711.6A 2021-02-23 2021-02-23 Filtering method, device and equipment Pending CN118101933A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410175711.6A CN118101933A (en) 2021-02-23 2021-02-23 Filtering method, device and equipment

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202410175711.6A CN118101933A (en) 2021-02-23 2021-02-23 Filtering method, device and equipment
CN202110204290.1A CN114640846A (en) 2021-02-23 2021-02-23 Filtering method, device and equipment

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202110204290.1A Division CN114640846A (en) 2021-02-23 2021-02-23 Filtering method, device and equipment

Publications (1)

Publication Number Publication Date
CN118101933A true CN118101933A (en) 2024-05-28

Family

ID=81945752

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202410175711.6A Pending CN118101933A (en) 2021-02-23 2021-02-23 Filtering method, device and equipment
CN202110204290.1A Pending CN114640846A (en) 2021-02-23 2021-02-23 Filtering method, device and equipment

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202110204290.1A Pending CN114640846A (en) 2021-02-23 2021-02-23 Filtering method, device and equipment

Country Status (1)

Country Link
CN (2) CN118101933A (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118042164A (en) * 2022-11-14 2024-05-14 杭州海康威视数字技术股份有限公司 Filtering method, device and storage medium

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN107750459B (en) * 2015-06-18 2020-09-15 Lg电子株式会社 Adaptive filtering method and device based on image characteristics in image coding system
US11563938B2 (en) * 2016-02-15 2023-01-24 Qualcomm Incorporated Geometric transforms for filters for video coding
CN109862374A (en) * 2019-01-07 2019-06-07 北京大学 A kind of adaptive loop filter method and device

Also Published As

Publication number Publication date
CN114640846A (en) 2022-06-17

Similar Documents

Publication Publication Date Title
US20230344993A1 (en) Video encoding/decoding method and device, and recording medium having bitstream stored therein
US20240155117A1 (en) Image encoding/decoding method and device, and recording medium having bitstream stored therein
CN108184129B (en) Video coding and decoding method and device and neural network for image filtering
CN110024385B (en) Video encoding/decoding method, apparatus, and recording medium storing bit stream
CN103975587B (en) Method and device for encoding/decoding of compensation offsets for a set of reconstructed samples of an image
CN110870319B (en) Method and apparatus for picture coding and decoding
CN103647975B (en) Improved sample adaptive offset filtering method based on histogram analysis
US20230209051A1 (en) Filtering method and apparatus, and device
CN113099221A (en) Cross-component sample point self-adaptive compensation method, coding method and related device
CN114640858B (en) Filtering method, device and equipment
CN118101933A (en) Filtering method, device and equipment
CN112929656B (en) Filtering method, device and equipment
EP3410723A1 (en) A method and a device for picture encoding and decoding
CN114598867B (en) Filtering method, device and equipment
CN114189683B (en) Enhanced filtering method and device
JP7460802B2 (en) Image enhancement method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination