WO2022184109A1 - Method, apparatus and device for filtering - Google Patents
Method, apparatus and device for filtering
- Publication number
- WO2022184109A1 WO2022184109A1 PCT/CN2022/078876 CN2022078876W WO2022184109A1 WO 2022184109 A1 WO2022184109 A1 WO 2022184109A1 CN 2022078876 W CN2022078876 W CN 2022078876W WO 2022184109 A1 WO2022184109 A1 WO 2022184109A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- adaptive correction
- filtering
- filter
- lcu
- value
- Prior art date
Links
- 238000001914 filtration Methods 0.000 title claims abstract description 926
- 238000000034 method Methods 0.000 title claims abstract description 165
- 230000003044 adaptive effect Effects 0.000 claims abstract description 594
- 238000012937 correction Methods 0.000 claims abstract description 555
- 230000008569 process Effects 0.000 claims description 54
- 238000005516 engineering process Methods 0.000 description 32
- 230000000694 effects Effects 0.000 description 29
- 238000010586 diagram Methods 0.000 description 26
- 230000004048 modification Effects 0.000 description 23
- 238000012986 modification Methods 0.000 description 23
- 238000005457 optimization Methods 0.000 description 17
- 238000012549 training Methods 0.000 description 15
- 238000012545 processing Methods 0.000 description 14
- 238000004364 calculation method Methods 0.000 description 8
- 230000003287 optical effect Effects 0.000 description 6
- 238000013139 quantization Methods 0.000 description 6
- 230000009466 transformation Effects 0.000 description 6
- 238000004458 analytical method Methods 0.000 description 3
- 230000006835 compression Effects 0.000 description 2
- 238000007906 compression Methods 0.000 description 2
- 238000013500 data storage Methods 0.000 description 2
- 230000006872 improvement Effects 0.000 description 2
- 238000005192 partition Methods 0.000 description 2
- 238000012805 post-processing Methods 0.000 description 2
- 230000005540 biological transmission Effects 0.000 description 1
- 238000011156 evaluation Methods 0.000 description 1
- 238000012417 linear regression Methods 0.000 description 1
- 230000005012 migration Effects 0.000 description 1
- 238000013508 migration Methods 0.000 description 1
- 230000004044 response Effects 0.000 description 1
- 230000000717 retained effect Effects 0.000 description 1
- 238000005070 sampling Methods 0.000 description 1
- 238000000638 solvent extraction Methods 0.000 description 1
- 230000002123 temporal effect Effects 0.000 description 1
Images
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/117—Filters, e.g. for pre-processing or post-processing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/80—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/136—Incoming video signal characteristics or properties
- H04N19/14—Coding unit complexity, e.g. amount of activity or edge presence estimation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/182—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/80—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
- H04N19/82—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/85—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
- H04N19/86—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
Definitions
- the present application relates to video coding and decoding technology, and in particular, to a method, apparatus and device for filtering.
- Complete video coding generally includes operations such as prediction, transformation, quantization, entropy coding, and filtering. The quantization operation that follows block-based motion compensation introduces coding noise and distorts the video quality. In-loop post-processing techniques are commonly used to reduce the effects of such distortion. In practice, however, the filtering performance of existing in-loop post-processing techniques is poor.
- the present application provides a filtering method, apparatus and device. Specifically, the application is achieved through the following technical solutions:
- a filtering method applied to an encoding/decoding device, the method including: determining whether the current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering; if it is determined that the current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering, using a first filter to perform adaptive correction filtering on the current adaptive correction filtering unit; if it is determined that the current adaptive correction filtering unit does not allow the use of enhanced adaptive correction filtering, using a second filter to perform adaptive correction filtering on the current adaptive correction filtering unit.
- the first filter is a center-symmetric filter shaped as a 7*7 cross plus a 5*5 square;
- the second filter is a center-symmetric filter shaped as a 7*7 cross plus a 3*3 square.
- determining whether the current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering includes: determining the value of a flag bit used to indicate whether the current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering; when the flag bit value is a first value, it is determined that the current adaptive correction filtering unit is allowed to use enhanced adaptive correction filtering; when the flag bit value is a second value, it is determined that the current adaptive correction filtering unit is not allowed to use enhanced adaptive correction filtering.
- the flag bit used to indicate whether the current adaptive correction filtering unit is allowed to use enhanced adaptive correction filtering may be EalfEnableFlag; the value of EalfEnableFlag may be derived by the decoding device, obtained from the code stream by the decoding device, or a constant value.
- obtaining the value of EalfEnableFlag from the code stream at the decoding device may include: determining the value of EalfEnableFlag based on the value of an enhanced adaptive correction filtering enable flag parsed from the code stream; the enhanced adaptive correction filtering enable flag may be a sequence-level parameter.
- the method further includes: in the process of performing adaptive correction filtering on the current filtering pixel in the current adaptive correction filtering unit, for any reference pixel of the current filtering pixel: when the reference pixel is within the current adaptive correction filtering unit, the pixel value of the reference pixel is used to perform adaptive correction filtering; when the reference pixel is not within the current adaptive correction filtering unit and its pixel value cannot be obtained, the pixel in the current adaptive correction filtering unit closest to the reference pixel position is used in place of the reference pixel to perform adaptive correction filtering; when the reference pixel is not within the unit but its pixel value can be obtained, the pixel value of the reference pixel is used to perform adaptive correction filtering.
- the cases in which the pixel value of the reference pixel cannot be obtained include one of the following: the reference pixel is outside the image boundary of the current image frame; the reference pixel is outside the slice boundary of the current slice and filtering across slice boundaries is not allowed; the reference pixel is outside the upper or lower boundary of the current adaptive correction filtering unit.
- the method further includes: if it is determined that the current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering and the pixel value at a pixel position used for adaptive correction filtering of the current adaptive correction filtering unit cannot be obtained, using the pixel in the current adaptive correction filtering unit closest to that reference pixel position in place of the reference pixel to perform adaptive correction filtering; if it is determined that the current adaptive correction filtering unit does not allow the use of enhanced adaptive correction filtering and the pixel value at a pixel position used for adaptive correction filtering cannot be obtained, likewise using the pixel in the current adaptive correction filtering unit closest to that reference pixel position in place of the reference pixel to perform adaptive correction filtering.
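The substitution rule above (using the pixel in the unit closest to an unavailable reference pixel) amounts to clamping the reference coordinates to the unit's bounds. A minimal sketch, assuming integer sample coordinates and inclusive unit bounds; the function name is illustrative, not from the application:

```python
def clamp_reference(x, y, unit_x0, unit_y0, unit_x1, unit_y1):
    """Replace an unavailable reference pixel (x, y) with the pixel in the
    current adaptive correction filtering unit closest to it, by clamping
    each coordinate to the unit's inclusive bounds [x0, x1] x [y0, y1]."""
    cx = min(max(x, unit_x0), unit_x1)
    cy = min(max(y, unit_y0), unit_y1)
    return cx, cy
```

A reference pixel already inside the unit is returned unchanged, so the same call can be applied to every reference position without a separate availability branch.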
- a filtering apparatus applied to an encoding/decoding device, the apparatus comprising: a filtering unit configured to determine whether the current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering; the filtering unit is further configured to perform adaptive correction filtering on the current adaptive correction filtering unit using the first filter if it is determined that the current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering, and to perform adaptive correction filtering on the current adaptive correction filtering unit using the second filter if it is determined that the current adaptive correction filtering unit does not allow the use of enhanced adaptive correction filtering.
- the first filter is a center-symmetric filter shaped as a 7*7 cross plus a 5*5 square;
- the second filter is a center-symmetric filter shaped as a 7*7 cross plus a 3*3 square.
- the filtering unit determining whether the current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering includes: determining the value of the flag bit used to indicate whether the current adaptive correction filtering unit is allowed to use enhanced adaptive correction filtering; when the flag bit value is the first value, it is determined that the current adaptive correction filtering unit is allowed to use enhanced adaptive correction filtering; when the flag bit value is the second value, it is determined that the current adaptive correction filtering unit is not allowed to use enhanced adaptive correction filtering.
- the flag bit used to indicate whether the current adaptive correction filtering unit is allowed to use enhanced adaptive correction filtering may be EalfEnableFlag; the value of EalfEnableFlag may be derived by the decoding device, obtained from the code stream by the decoding device, or a constant value.
- obtaining the value of EalfEnableFlag from the code stream at the decoding device may include: determining the value of EalfEnableFlag based on the value of an enhanced adaptive correction filtering enable flag parsed from the code stream; the enhanced adaptive correction filtering enable flag may be a sequence-level parameter.
- the filtering unit is further configured to: in the process of performing adaptive correction filtering on the current filtering pixel in the current adaptive correction filtering unit, for any reference pixel of the current filtering pixel, when the reference pixel is within the current adaptive correction filtering unit, use the pixel value of the reference pixel to perform adaptive correction filtering; when the reference pixel is not within the current adaptive correction filtering unit and its pixel value cannot be obtained, use the pixel in the current adaptive correction filtering unit closest to the reference pixel position in place of the reference pixel to perform adaptive correction filtering; and when the reference pixel is not within the unit but its pixel value can be obtained, use the pixel value of the reference pixel to perform adaptive correction filtering.
- the cases in which the pixel value of the reference pixel cannot be obtained include one of the following: the reference pixel is outside the image boundary of the current image frame; the reference pixel is outside the slice boundary of the current slice and filtering across slice boundaries is not allowed; the reference pixel is outside the upper or lower boundary of the current adaptive correction filtering unit.
- the filtering unit is further configured to: if it is determined that the current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering and the pixel value at a pixel position used for adaptive correction filtering of the current adaptive correction filtering unit cannot be obtained, use the pixel in the current adaptive correction filtering unit closest to that reference pixel position in place of the reference pixel to perform adaptive correction filtering; if it is determined that the current adaptive correction filtering unit does not allow the use of enhanced adaptive correction filtering and the pixel value at a pixel position used for adaptive correction filtering cannot be obtained, likewise use the pixel in the current adaptive correction filtering unit closest to that reference pixel position in place of the reference pixel to perform adaptive correction filtering.
- a decoding device including a processor and a machine-readable storage medium, where the machine-readable storage medium stores machine-executable instructions that can be executed by the processor; the processor is configured to execute the machine-executable instructions to implement the filtering method provided in the first aspect.
- an encoding device including a processor and a machine-readable storage medium, where the machine-readable storage medium stores machine-executable instructions that can be executed by the processor; the processor is configured to execute the machine-executable instructions to implement the filtering method provided in the first aspect.
- in the filtering method of the embodiments of the present application, it is determined whether the current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering; when enhanced adaptive correction filtering is allowed, the first filter is used to perform adaptive correction filtering on the current adaptive correction filtering unit, and when it is not allowed, the second filter is used. This improves the flexibility of filter selection, optimizes the filtering effect, and improves encoding and decoding performance.
- FIGS. 1A and 1B are schematic flowcharts of video encoding and decoding;
- FIG. 2 is a schematic diagram of region division;
- FIG. 3 is a schematic diagram of region merging;
- FIG. 4A is a schematic diagram of a 7*7 cross plus 5*5 square center-symmetric filter shape;
- FIG. 4B is a schematic diagram of a reference pixel corresponding to the filter shown in FIG. 4A;
- FIG. 4C is a schematic diagram of reference pixel positions for filtering the current adaptive correction filtering unit;
- FIG. 5 is a schematic diagram of a sample filtering compensation unit shown in an exemplary embodiment of the present application.
- FIG. 6A is a schematic flowchart of a filtering method according to an exemplary embodiment of the present application.
- FIG. 6B is a schematic flowchart of a filtering method according to an exemplary embodiment of the present application.
- FIG. 7 is a schematic flowchart of a filtering method according to an exemplary embodiment of the present application.
- FIG. 8 is a schematic flowchart of a filtering method according to an exemplary embodiment of the present application.
- FIG. 9 is a schematic diagram of a 7*7 cross plus a 3*3 square center-symmetric filter shape
- FIG. 10 is a schematic diagram of a merged region shown in an exemplary embodiment of the present application.
- FIGS. 11A to 11D are schematic diagrams of various filter shapes shown in an exemplary embodiment of the present application.
- FIG. 12 is a schematic diagram of a 3*3 pixel block shown in an exemplary embodiment of the present application.
- FIG. 13 is a schematic diagram of a filter with asymmetric filter coefficients according to an exemplary embodiment of the present application.
- FIG. 14A is a schematic diagram of a reference pixel position according to an exemplary embodiment of the present application.
- FIG. 14B is a schematic diagram of another reference pixel position shown in an exemplary embodiment of the present application.
- FIG. 15A and FIG. 15B are schematic diagrams illustrating secondary division of regions obtained by various ways of dividing fixed regions according to an exemplary embodiment of the present application
- FIG. 15C is a schematic diagram showing the area numbers corresponding to each secondary division manner in FIG. 15A according to an exemplary embodiment of the present application;
- FIG. 16 is a schematic structural diagram of a filtering device according to an exemplary embodiment of the present application.
- FIG. 17 is a schematic structural diagram of a filtering device according to an exemplary embodiment of the present application.
- FIG. 18 is a schematic diagram of the hardware structure of a decoding device shown in an exemplary embodiment of the present application.
- FIG. 19 is a schematic diagram of a hardware structure of an encoding device according to an exemplary embodiment of the present application.
- Rate-Distortion Optimization (RDO): the indicators for evaluating coding efficiency are the bit rate and the Peak Signal to Noise Ratio (PSNR). The smaller the bit rate, the greater the compression ratio; the greater the PSNR, the better the reconstructed image quality. In mode selection, the discriminant formula is essentially a comprehensive evaluation of the two: J(mode) = D + λ·R, where:
- D represents the distortion (Distortion), usually measured by the SSE (sum of squared errors) metric;
- SSE refers to the sum of squared differences between the reconstructed block and the source image block;
- λ is the Lagrange multiplier;
- R is the actual number of bits required to code the image block in this mode, including the bits required for coding mode information, motion information, residuals, etc.
- mode selection: if the RDO principle is used to decide among encoding modes, the best encoding performance can usually be guaranteed.
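The discriminant described above can be sketched as a per-mode cost J = D + λ·R, with the mode of minimum cost selected. This is a schematic illustration, not the application's implementation; the candidate mode tuples are invented for the example:

```python
def rd_cost(distortion_sse, bits, lam):
    """Rate-distortion cost J = D + lambda * R."""
    return distortion_sse + lam * bits

def select_mode(candidates, lam):
    """candidates: list of (mode_name, sse, bits) tuples.
    Return the name of the mode with the smallest RD cost."""
    return min(candidates, key=lambda c: rd_cost(c[1], c[2], lam))[0]
```

Note how λ steers the trade-off: a small λ favors the low-distortion mode even if it costs more bits, while a large λ favors the cheaper mode.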
- Coding Tree Unit (CTU): traditional video coding is implemented based on macroblocks. For video in the 4:2:0 sampling format, a macroblock contains one 16×16 luma block and two 8×8 chroma blocks. Considering the characteristics of high-definition and ultra-high-definition video, the CTU is introduced in Versatile Video Coding (VVC); its size is specified by the encoder and is allowed to be larger than that of a macroblock.
- VVC Versatile Video Coding
- the value range of the luminance CTB size is ⁇ 8 ⁇ 8, 16 ⁇ 16, 32 ⁇ 32, 64 ⁇ 64, 128 ⁇ 128 ⁇
- the value range of the chroma CTB size is ⁇ 4 ⁇ 4, 8 ⁇ 8, 16 ⁇ 16, 32 ⁇ 32, 64 ⁇ 64 ⁇
- a larger CTB can be used for better compression.
- Deblocking Filter (DBF): image encoding is performed on a block basis, and each block is encoded relatively independently. Since each block uses different parameters, the distribution characteristics within the blocks are independent of one another, which produces discontinuities at block edges, a phenomenon known as the blocking effect.
- the deblocking filter mainly smooths block boundaries to remove the blocking effect.
- Sample Adaptive Offset (SAO): starting from the pixel domain, the reconstructed image is classified into categories according to its characteristics, and compensation is then performed in the pixel domain, mainly to reduce the ringing effect.
- Adaptive Loop Filter (ALF): applied after DBF and SAO, mainly to further improve objective image quality.
- ALF technology builds a multiple linear regression model based on least squares, and performs filter compensation in the pixel domain.
- in-loop post-processing techniques may include DBF, SAO, and ALF.
- Wiener filtering: its essence is to minimize the mean square value of the estimation error, defined as the difference between the desired response and the actual output of the filter.
- video coding generally includes processes such as prediction, transformation, quantization, and entropy coding. Further, the encoding process can be implemented according to the framework of FIG. 1B .
- Intra-frame prediction uses the surrounding coded blocks as a reference to predict the current uncoded blocks, effectively removing the redundancy in the spatial domain.
- Inter prediction is to use adjacent coded images to predict the current image, effectively removing the redundancy in the temporal domain.
- Transformation refers to converting an image from the spatial domain to the transform domain, and using transformation coefficients to represent the image. Most images contain many flat areas and slowly changing areas. Appropriate transformation can transform the image from a scattered distribution in the spatial domain to a relatively concentrated distribution in the transformed domain. In other words, the transform can remove the frequency-domain correlation between the signals, and by cooperating with the quantization process, the code stream can be effectively compressed.
- Entropy coding is a lossless coding method, which can convert a series of element symbols into a binary code stream for transmission or storage.
- the input symbols may include quantized transform coefficients, motion vector information, prediction mode information, transform- and quantization-related syntax, etc.
- Entropy coding can effectively remove the redundancy of video element symbols.
- video decoding usually includes entropy decoding, prediction, inverse quantization, inverse transformation, filtering and other processes, and the implementation principles of each process in video decoding are the same or similar to those in video encoding.
- the ALF technology used in the framework of the Audio Video Coding Standard calculates, according to the Wiener filtering principle, the optimal linear filter between the original signal and the distorted signal in the mean-square sense.
- the ALF encoding process may include: region division → acquiring reference pixels → region merging and calculating filter coefficients → deciding whether filtering is enabled for each LCU.
- the parameters that need to be calculated and obtained in the whole process are: 1) the number of filter parameters; 2) the region merge flags; 3) each group of filter coefficients; 4) the per-LCU filter enable flag; 5) the filter enable flag for the current component (Y, U, or V).
- the data of the luma component is processed with region division, while the data of the chroma components is processed without region division.
- the specific implementation of the region division may be: dividing the image into 16 regions of substantially equal size, aligned to LCU boundaries.
- the width of each non-rightmost region is ((pic_width_InLcus + 1) / 4) × Lcu_Width (with '/' denoting integer division), where pic_width_InLcus is the number of LCUs across the image width and Lcu_Width is the width of each LCU.
- the width of the rightmost region is the image width minus the total width of the three non-rightmost regions.
- the height of each non-bottom region is ((pic_height_InLcus + 1) / 4) × Lcu_Height, where pic_height_InLcus is the number of LCUs across the image height and Lcu_Height is the height of each LCU.
- the height of the bottommost region is the image height minus the total height of the three non-bottom regions.
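Under the stated rules, the four region spans along one dimension (widths across, or heights down) can be computed as below. This is a sketch assuming '/' in the formulas is integer division, as is usual in codec pseudocode:

```python
def region_spans(pic_size_in_lcus, lcu_size, pic_size):
    """Spans of the 4 regions along one dimension:
    the first three spans are ((N + 1) // 4) * lcu_size, and the last
    span is the remainder of the image (pic_size minus the other three)."""
    span = ((pic_size_in_lcus + 1) // 4) * lcu_size
    return [span, span, span, pic_size - 3 * span]
```

For a 1920-wide image with 64-wide LCUs (30 LCUs across), this yields three 448-wide regions and one 576-wide rightmost region, which together cover the full image width.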
- the region merging operation refers to judging, in order of index value, whether adjacent regions should be merged.
- the purpose of merging is to reduce the number of coefficients to be coded.
- a merge flag is required to indicate whether the current area is merged with the adjacent area.
- a total of 16 regions are included (which may be called 16 categories or 16 groups), with index values 0 to 15 in sequence.
- candidate merges of region 0 with region 1, region 1 with region 2, region 2 with region 3, ..., region 13 with region 14, and region 14 with region 15 are evaluated, and the merge with the smallest error is performed, so that the 16 regions are merged into 15 regions.
- for example, region 14 and region 15 are merged to obtain region 14+15; that is, the merged regions include region 2+3 and region 14+15.
- next, merges of region 1 with region 2+3, region 2+3 with region 4, ..., region 12 with region 13, and region 13 with region 14+15 are evaluated, and again the merge with the smallest error is performed.
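The merging pass described above can be sketched as a greedy step: every merge of adjacent regions is tried in index order, and the one with the smallest error is kept, reducing the region count by one. This is a simplified illustration; `region_error` here is a stand-in SSE cost, whereas the codec evaluates the Wiener filtering error of the merged statistics:

```python
def region_error(vals):
    """Stand-in cost: SSE of the region's samples around their mean."""
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals)

def greedy_merge_step(regions):
    """Try merging each pair of adjacent regions (lists of samples) and
    perform the merge with the smallest error; return the new region list."""
    best_i, best_cost = None, None
    for i in range(len(regions) - 1):
        cost = region_error(regions[i] + regions[i + 1])
        if best_cost is None or cost < best_cost:
            best_i, best_cost = i, cost
    return (regions[:best_i]
            + [regions[best_i] + regions[best_i + 1]]
            + regions[best_i + 2:])
```

Calling this repeatedly (16 → 15 → 14 → ...) mirrors the successive merging rounds, with a merge flag coded per round to signal the chosen pair.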
- the filter coefficients may be calculated according to the Wiener filtering principle based on the reference pixels of the pixels in each region: each pixel is taken in turn as the current pixel, the surrounding pixels within a certain range centered on it are taken as reference pixels, the reference pixels and the current pixel are used as the input, the original value of the pixel is used as the target, and the filter coefficients are calculated by the least squares method.
- FIG. 4A is a schematic diagram of a filter shape. As shown in FIG. 4A, it is a center-symmetric filter shape consisting of a 7*7 cross plus a 5*5 square, and the reference pixels corresponding to the filter can be seen in FIG. 4B.
- Pi denotes a pixel in the reconstructed image before filtering
- Wiener filtering is to linearly combine the reference pixel values around the current pixel to approximate the current pixel value of the original image.
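The least-squares training just described can be sketched with NumPy. This is a toy illustration of solving the Wiener normal equations, not the codec's actual training loop.

```python
import numpy as np

def wiener_coeffs(refs, targets):
    """Solve the normal equations (refs^T refs) w = refs^T targets.

    refs: (num_pixels, num_taps) array with one row of reference-pixel
    values per filtered pixel; targets: (num_pixels,) original pixel
    values. The solution w linearly combines the reference pixels to
    approximate the original values in the least-squares (Wiener) sense.
    """
    return np.linalg.solve(refs.T @ refs, refs.T @ targets)
```

When the targets really are a linear combination of the reference pixels, the recovered coefficients match the generating ones.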
- the ALF technology is processed based on the largest coding unit (Largest Coding Unit, LCU for short). LCUs belonging to the same merged region use the same set of filter coefficients for filtering.
- LCU: Largest Coding Unit
- the adaptive correction filtering unit is derived from the current largest coding unit according to the following steps:
- if the samples on the upper boundary of sample area E1 belong to the upper boundary of the image, or belong to a patch boundary while the value of cross_patch_loopfilter_enable_flag is '0', sample area E2 is set equal to sample area E1; otherwise, for both the luminance and chrominance components, the upper boundary of area E1 is extended upward by four rows of samples to obtain sample area E2 (the first row of samples in sample area E1 is the upper boundary of that area).
- sample area E2 is used as the current adaptive correction filtering unit; the first row of samples of the image is the upper boundary of the image, and the last row of samples of the image is the lower boundary of the image.
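The E1 → E2 derivation above amounts to conditionally extending the unit's top edge; a small sketch with illustrative names follows.

```python
def filter_unit_top(e1_top_row, at_image_top, at_patch_boundary,
                    cross_patch_loopfilter_enable_flag):
    """Return the top sample row of area E2 derived from area E1.

    E2 equals E1 when E1's upper boundary is the image top, or a patch
    boundary while cross_patch_loopfilter_enable_flag is 0; otherwise
    E1 is extended upward by four rows of samples.
    """
    if at_image_top or (at_patch_boundary and cross_patch_loopfilter_enable_flag == 0):
        return e1_top_row
    return e1_top_row - 4  # extend upward by four rows
```

So an interior unit starting at row 64 becomes a unit starting at row 60, while a unit at the image top, or at a patch boundary with cross-patch filtering disabled, is left unchanged.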
- if a reference sample used in the adaptive correction filtering process lies within the adaptive correction filtering unit, the sample is used directly for filtering; otherwise, filtering is performed as follows:
- the LCU is used as the basic unit to determine whether each LCU in the current image uses ALF.
- the encoding device may calculate the rate-distortion cost of the current LCU with ALF turned on and with ALF turned off, to determine whether the current LCU uses ALF. If the current LCU is marked as using ALF, Wiener filtering is performed on each pixel within the LCU.
- the ALF technology transmits only a fixed set of filter coefficients for each region, and the filter shape is fixed. This may cause problems such as: the fixed region partition may fail to group pixels with the same characteristics into the same category, or the filter shape used may be inappropriate. Moreover, each divided region transmits at most one set of filter coefficients; for larger regions or regions with complex image textures, one set of filter coefficients is not enough.
- N is a positive integer. If all the LCUs in each region were divided into the same category (N = 1), this would correspond to the fixed region division scheme of the traditional ALF scheme; to distinguish it from that fixed division method, N ≥ 2.
- Scheme 2: multiple sets of filter coefficients can be transmitted in each region, and the shapes of the filters in each set can be the same or different.
- a set of filter coefficients is adaptively selected based on each LCU, and LCUs in the same region can select filter coefficients in adjacent regions.
- Each region can only transmit one set of filter coefficients, but the filter shape of each region can be different.
- Scheme 5: modify the symmetric filter into an asymmetric filter; instead of constraining the filter coefficients at symmetrical positions to be the same, let them satisfy a certain proportional relationship, such as 0.5:1.5 or 0.6:1.4.
- Scheme 6 Optimize the sample value of the boundary during filtering.
- ALF filtering is performed using the solution provided by the embodiments of the present application at LCU granularity as an example; however, in the embodiments of the present application, other sizes or representations of image blocks may be used instead, such as image blocks of N*M size, where N is a positive integer not exceeding the width of the image frame and M is a positive integer not exceeding the height of the image frame.
- An embodiment of the present application provides a filtering method, wherein the filtering method can be applied to a decoding device, and the filtering method can include the following steps:
- T600: Perform region division on the luminance component of the current image frame.
- T610: Determine the region category to which the LCU belongs based on the region category identifier of the LCU parsed from the code stream.
- based on the pixel characteristics of the pixels in each region, the LCUs in the region can be divided into at least one category; that is, a region may be divided into at least one sub-region (region category) by classifying its LCUs.
- the category of the region to which the LCU belongs may be determined based on the region to which the LCU belongs and the category of the LCU in the region to which it belongs.
- the region type identifier for identifying the region type to which each LCU belongs may be carried in the code stream and sent to the decoding device.
- the decoding device may parse the region category identifier of the LCU from the code stream, and determine the region category to which the LCU belongs based on the region category identifier of the LCU obtained by parsing.
- T620: Determine the filter coefficients of the LCU based on the region category to which the LCU belongs and the filter coefficients parsed from the code stream.
- the encoding device may perform region merging on the region categories to obtain at least one merged region, and determine the filter coefficients of each merged region.
- region merging for each region category is similar to the relevant description in the "region merging" section above, and will not be repeated here.
- for each region category, the encoding apparatus may assign it a coefficient index based on the merged region to which it belongs, where the coefficient index identifies the filter coefficients of that merged region.
- the encoding device can write the filter coefficients of each merged region and the index of each region category into the code stream, and send it to the decoding device.
- the decoding device may determine the coefficient index of the region category to which the LCU belongs based on that region category, and determine the filter coefficients of the LCU based on the coefficient index and the filter coefficients parsed from the code stream.
- T630: Perform ALF filtering on the pixels of the LCU one by one based on the filter coefficients of the LCU.
- ALF filtering may be performed on the pixels of the LCU one by one based on the filter coefficient of the LCU.
- the area division is more in line with the pixel characteristics of each LCU, so that the ALF filtering effect can be optimized and the encoding and decoding performance can be improved.
- determining the area type to which the LCU belongs based on the area type identifier of the LCU obtained by parsing from the code stream includes: determining the area type to which the LCU belongs based on the area to which the LCU belongs and the area type identifier of the LCU.
- the region category to which the LCU belongs may be determined based on the region to which the LCU belongs (the region obtained according to the fixed region division method) and the region category identifier of the LCU.
- the region category identifier of the LCU is used to identify the category of the LCU in the region to which the LCU belongs, and the category of the LCU in the region to which the LCU belongs is determined by classifying each LCU in the region to which the LCU belongs;
- the above-mentioned determining the area type to which the LCU belongs based on the area to which the LCU belongs and the area type identifier of the LCU may include: determining the area type to which the LCU belongs based on the number of types of each area, the area to which the LCU belongs, and the area type identifier of the LCU.
- the decoding device may determine the category of the LCU in the region to which it belongs based on the region category identifier of the LCU parsed from the code stream. For example, assuming the LCUs in a region are divided into at most 2 categories, the region category identifier of an LCU classified into the first category may be 0, and that of an LCU classified into the second category may be 1.
- for any LCU in any region: when the value of the region category identifier of the LCU parsed from the code stream is 0, the category of the LCU in that region is determined to be the first category; when the value is 1, the category of the LCU in that region is determined to be the second category.
- the decoding apparatus may determine the region category to which the LCU belongs based on the number of categories of each region, the region to which the LCU belongs, and the region category identifier of the LCU.
- the above-mentioned determining of the region category to which the LCU belongs, based on the number of categories of each region, the region to which the LCU belongs, and the region category identifier of the LCU, may include: determining the total number of categories of the regions preceding the region to which the LCU belongs, based on the numbers of categories of those regions; and determining the region category to which the LCU belongs based on that total and the region category identifier of the LCU.
- the total number of categories of the regions before the region to which the LCU belongs can be determined, and based on the total number of categories of the regions before the region to which the LCU belongs and the region category identifier of the LCU, Determine the regional category to which the LCU belongs.
- for example, if the luminance component of the current image frame is divided into L regions and the LCUs in each region are divided into N categories, then for any LCU in region K, when the value of the region category identifier of the LCU parsed from the code stream is m, the region category to which the LCU belongs is determined to be N*K+m, where m ∈ [0, N-1], N ≥ 1, K ∈ [0, L-1].
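The index derivation above is a single affine mapping; as a concrete sketch (names illustrative):

```python
def region_category(region_k, class_m, num_classes_n):
    """Region category of an LCU in region K whose in-region class is m.

    With N classes per region, the overall category index is N*K + m,
    as derived above; m must lie in [0, N-1].
    """
    assert num_classes_n >= 1 and 0 <= class_m < num_classes_n
    return num_classes_n * region_k + class_m
```

With 16 regions and N = 2, the categories range from 0 (region 0, class 0) to 31 (region 15, class 1).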
- the method may further include: determining whether to enable ALF filtering for the LCU; when it is determined to enable ALF filtering for the LCU, performing the above-mentioned operation of determining the filter coefficients of the LCU based on the region category to which the LCU belongs and the filter coefficients parsed from the code stream.
- the encoding apparatus may determine whether to enable ALF filtering for that LCU based on the RDO decision.
- the decoding device may first determine whether to enable ALF filtering for the LCU. For example, the decoding device may determine whether to enable ALF filtering for the LCU based on the identifier for identifying whether to enable ALF filtering for the LCU, which is obtained by parsing the code stream.
- the decoding device may determine the filter coefficient of the LCU based on the region category to which the LCU belongs and the filter coefficient parsed from the code stream in the manner described in the above embodiment.
- the above-mentioned determining whether to enable ALF filtering for the LCU may include: parsing the LCU coefficient identifier of the LCU from the code stream, where the LCU coefficient identifier indicates which of the at least one set of filter coefficients used by the merged region to which the LCU belongs is used by the LCU; when the value of the LCU coefficient identifier of the LCU is not the first value, it is determined to enable ALF filtering for the LCU.
- the filter coefficients used in a merge area are no longer limited to one set of filter coefficients, but one or more sets of filter coefficients can be selected and used according to actual conditions.
- the encoding device may train multiple sets of filter coefficients, and determine, based on the RDO decision, to use one of the sets of filter coefficients or multiple sets of filter coefficients for the merged region. For any LCU in the region, the encoding device may identify the filter coefficient used by the LCU among one or more sets of filter coefficients used in the merged region by the LCU coefficient identifier.
- the value of the LCU coefficient identifier of the LCU is the first value, it indicates that ALF filtering is not started for the LCU.
- the value of the LCU coefficient identifier of the LCU parsed by the decoding device from the code stream is not the first value, it may be determined to start ALF filtering for the LCU.
- for example, when the decoding device parses the code stream and obtains the value of the LCU coefficient identifier of the LCU as 0, it may determine not to enable ALF filtering for the LCU; when the value of the LCU coefficient identifier parsed from the code stream is not 0, it may determine to enable ALF filtering for the LCU, and may determine the filter coefficients used by the LCU according to the LCU coefficient identifier.
- if the merged region to which the LCU belongs uses a single set of filter coefficients, the filter coefficients of the LCU are that set; if the merged region to which the LCU belongs uses multiple sets of filter coefficients, the filter coefficients of the LCU are further determined according to the specific value of the LCU coefficient identifier of the LCU.
- determining the filter coefficients of the LCU based on the region category to which the LCU belongs and the filter coefficients parsed from the code stream may include: determining the filter coefficients of the LCU based on the region category to which the LCU belongs, the filter coefficients parsed from the code stream, and the region coefficient identifier, parsed from the code stream, of the merged region to which the LCU belongs.
- the region coefficient identifier is used to identify the filter coefficient used in the merged region to which the LCU belongs among the preset multiple sets of filter coefficients.
- the encoding device may train multiple sets of filter coefficients, determine based on the RDO decision that the merged region uses one or more of those sets, and write into the code stream the region coefficient identifier that identifies the filter coefficients used by the merged region.
- the decoding device may determine the filter coefficient used in the merging region based on the region coefficient identifier of the merging region obtained by parsing from the code stream.
- for example, when the encoding device determines that the merged region uses filter coefficients A, it sets the value of the region coefficient identifier of the merged region to 0; when it determines that the merged region uses filter coefficients B, it sets the value to 1; when it determines that the merged region uses both filter coefficients A and filter coefficients B, it sets the value to 2.
- for any merged region: when the decoding device determines, based on the region coefficient identifier of the merged region parsed from the code stream, that the region uses one set of filter coefficients, then for any LCU of the merged region for which ALF filtering is enabled (i.e., the value of the LCU coefficient identifier of the LCU is not the first value), the filter coefficients used by the LCU are the filter coefficients used by the merged region; when it is determined, based on the parsed region coefficient identifier, that the merged region uses multiple sets of filter coefficients, then when ALF filtering is enabled for the LCU, the filter coefficients used by the LCU (one of the sets used by the merged region) are determined based on the LCU coefficient identifier of the LCU.
- determining the filter coefficients of the LCU based on the region category to which the LCU belongs, the filter coefficients parsed from the code stream, and the region coefficient identifier, parsed from the code stream, of the merged region to which the LCU belongs may include: when it is determined, based on the region coefficient identifier of the merged region to which the LCU belongs, that the merged region uses multiple sets of filter coefficients, determining the filter coefficients of the LCU based on the region category to which the LCU belongs, the filter coefficients parsed from the code stream, and the LCU coefficient identifier of the LCU.
- when the decoding device determines the region category to which the LCU belongs based on the region category identifier of the LCU parsed from the code stream, it may also determine the merged region to which the LCU belongs based on that region category, and determine the filter coefficients used by the region category based on the region coefficient identifier, parsed from the code stream, of the merged region to which the region category belongs.
- for example, a total of 32 region categories are obtained by classifying the LCUs of each region.
- An index table is obtained from the merging of the region categories, and the index table may be a 32-element one-dimensional vector, and each element in the 32-element one-dimensional vector is the index of the merged region to which each region category belongs.
- the encoding device can send the above-mentioned index table to the decoding device through the code stream, so that the decoding device can determine the merged region to which each region category belongs based on the index table parsed from the code stream. Thus, for any LCU, the decoding device can determine the region category to which the LCU belongs based on the region category identifier of the LCU, and determine the merged region to which the LCU belongs according to that region category.
- the decoding device may determine the filter coefficient used by the LCU based on the LCU coefficient identifier of the LCU obtained by parsing from the code stream.
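Putting the index table and the LCU coefficient identifier together, the decode-side lookup can be sketched as follows. The convention that 0 means "ALF off" and k > 0 selects the k-th set follows the description above; the container shapes are illustrative assumptions.

```python
def lcu_filter_coeffs(index_table, region_category, merged_coeffs, lcu_coeff_id):
    """Select the coefficient set an LCU uses.

    index_table: per-region-category index of the merged region (the
    32-element vector described above). merged_coeffs[r]: list of the
    coefficient sets transmitted for merged region r. lcu_coeff_id:
    0 = ALF off for this LCU; k > 0 = use the k-th set of its merged
    region.
    """
    if lcu_coeff_id == 0:
        return None  # ALF filtering not enabled for this LCU
    merged = index_table[region_category]
    return merged_coeffs[merged][lcu_coeff_id - 1]
```

For instance, with categories merged pairwise, category 3 maps to merged region 1, and an identifier of 2 selects that region's second coefficient set.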
- the filter shapes of the sets of filter coefficients used in the merging region may or may not be exactly the same.
- for example, the filter shapes of filter coefficients A, filter coefficients B and filter coefficients C may all be the same, all be different, or be partially the same; for instance, the filter shapes of filter coefficients A and filter coefficients B are the same, while the filter shapes of filter coefficients A and filter coefficients C are different.
- determining the filter coefficient of the LCU based on the region category to which the LCU belongs and the filter coefficient parsed from the code stream may include: based on the region category to which the LCU belongs, the filter coefficient parsed from the code stream, and the coefficient selection flag of the LCU, to determine the filter coefficient of the LCU; wherein, the coefficient selection flag is used to identify the filter coefficient selected and used by the LCU in the multiple groups of candidate filter coefficients.
- the LCU is no longer limited to selecting the filter coefficients of the merged region to which it belongs, but can adaptively select a set of filter coefficients from multiple sets of filter coefficients for ALF filtering.
- the candidate filter coefficients of the LCU may include, but are not limited to, the filter coefficients of the merged region to which it belongs and the filter coefficients of the adjacent merged regions of the merged region to which it belongs.
- one LCU may have multiple groups of candidate filter coefficients, which improves the flexibility of LCU filter coefficient selection, optimizes the ALF filtering effect, and improves encoding and decoding performance.
- the encoding device may determine, based on the RDO decision, filter coefficients used by the LCU in multiple sets of candidate filter coefficients, and write the coefficient selection identifier corresponding to the filter coefficient into the code stream and send it to the decoding device.
- the decoding device may determine the filter coefficient of the LCU based on the region category to which the LCU belongs, the filter coefficient parsed from the code stream, and the coefficient selection flag of the LCU.
- the above-mentioned determining of the filter coefficients of the LCU, based on the region category to which the LCU belongs, the filter coefficients parsed from the code stream, and the coefficient selection flag of the LCU, may include: when the value of the coefficient selection flag of the LCU is the first value, determining the filter coefficients of the merged region preceding the merged region to which the LCU belongs as the filter coefficients of the LCU; when the value is the second value, determining the filter coefficients of the merged region to which the LCU belongs as the filter coefficients of the LCU; when the value is the third value, determining the filter coefficients of the merged region following the merged region to which the LCU belongs as the filter coefficients of the LCU.
- its candidate filter coefficients may include the filter coefficients of the merged region to which it belongs, the filter coefficients of the merged region preceding the merged region to which it belongs, and the filter coefficients of the merged region after the merged region to which it belongs.
- the previous merged region of the merged region to which the LCU belongs is the merged region corresponding to the previous adjacent index of the index of the merged region to which the LCU belongs.
- the next merged region of the merged region to which the LCU belongs is the merged region corresponding to the next adjacent index of the index of the merged region to which the LCU belongs.
- for example, if the LCU belongs to the merged region with index 2, the previous merged region of the merged region to which the LCU belongs is the merged region corresponding to the previous adjacent index (i.e., 1), that is, merged region 1;
- and the next merged region of the merged region to which the LCU belongs is the merged region corresponding to the next adjacent index (i.e., 3), that is, merged region 3.
- the encoding device may determine the filter coefficients used by the LCU based on the RDO decision.
- when the filter coefficients used by the LCU are determined to be the filter coefficients of the merged region preceding the merged region to which the LCU belongs, the value of the coefficient selection flag of the LCU may be set to the first value, such as 0; when they are the filter coefficients of the merged region to which the LCU belongs, the value may be set to the second value, such as 1; when they are the filter coefficients of the merged region following the merged region to which the LCU belongs, the value may be set to the third value, such as 2.
- correspondingly, when the value of the coefficient selection flag of the LCU parsed from the code stream is the first value, the filter coefficients of the merged region preceding the merged region to which the LCU belongs are determined as the filter coefficients of the LCU; when the value is the second value, the filter coefficients of the merged region to which the LCU belongs are determined as the filter coefficients of the LCU; when the value is the third value, the filter coefficients of the merged region following the merged region to which the LCU belongs are determined as the filter coefficients of the LCU.
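The three-way selection above reduces to a flag-to-offset mapping. This is a hypothetical helper; region indices are assumed valid (the first/last merged regions would need clamping in practice).

```python
def select_lcu_coeffs(flag, merged_idx, coeffs_by_region,
                      first=0, second=1, third=2):
    """Pick the previous, own, or next merged region's coefficients.

    A flag equal to the first value selects the preceding merged
    region, the second value the LCU's own merged region, and the
    third value the following merged region, per the description above.
    """
    offset = {first: -1, second: 0, third: +1}[flag]
    return coeffs_by_region[merged_idx + offset]
```

So an LCU in merged region 2 can pick the coefficients of regions 1, 2, or 3 depending on its flag.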
- obtaining filter coefficients by parsing from the code stream may include: for any merged region, parsing the filter shape of the merged region from the code stream, and parsing the filter coefficients of the merged region from the code stream based on that filter shape.
- each merged region is no longer limited to using the same filter shape; that is, different merged regions may use the same or different filter shapes.
- the encoding device may train multiple sets of filter coefficients with different filter shapes, determine the filter shape and filter coefficients used by each merged region based on the RDO decision, and write the filter shape and filter coefficients into the code stream sent to the decoding device.
- when acquiring the filter coefficients of a merged region, the decoding device can parse the filter shape of the merged region from the code stream, and parse the filter coefficients of the region category from the code stream based on that filter shape.
- determining the filter coefficient of the LCU based on the region category to which the LCU belongs and the filter coefficient parsed from the code stream may include: based on the region category to which the LCU belongs and the filter shape and filter coefficient parsed from the code stream , determine the filter shape and filter coefficient of the LCU;
- the above-mentioned performing ALF filtering on the pixels of the LCU one by one based on the filter coefficients of the LCU may include: performing ALF filtering on the pixels of the LCU one by one based on the filter shape and filter coefficients of the LCU.
- the merged region to which the LCU belongs may be determined based on the region category to which the LCU belongs; the filter shape and filter coefficients of the merged region may be parsed from the code stream and determined as the filter shape and filter coefficients of the LCU; and ALF filtering may be performed on the pixels of the LCU one by one based on that filter shape and those filter coefficients.
- a filter shape may also be selected for an image frame, or a filter shape may be selected for a component of an image frame (eg, a luminance component and/or a chrominance component).
- performing ALF filtering on the pixels of the LCU one by one based on the filter coefficients of the LCU may include: performing ALF filtering on the pixels of the LCU one by one based on the filter coefficients of the LCU and the weight coefficients parsed from the code stream.
- the filter used in the ALF filtering is no longer limited to a symmetric filter; an asymmetric filter can be used, that is, the filter coefficients at symmetrical positions can be different while satisfying a certain proportional relationship, such as 0.5:1.5 or 0.6:1.4.
- during filtering, the filtered pixel value is obtained as the sum of the products of each filter coefficient (and the filter coefficient at its symmetrical position) with the reference pixels at the corresponding positions. Therefore, the above ratio can be used as the ratio between the filter coefficients at symmetrical positions, or as the ratio of the weights applied to the pixel values of the reference pixels corresponding to the symmetrically placed filter coefficients when they participate in the ALF filtering calculation.
- asymmetric filter means that the filter coefficients of the symmetrical positions are different, or the pixel values of the reference pixels corresponding to the filter coefficients of the symmetrical positions have different weights when participating in the ALF filtering calculation.
- for the 7*7 cross plus 5*5 square center-symmetric filter shape, the filter coefficient at the position symmetrical to Ci is C28-i;
- the ratio Ci : C28-i is Ai : (2-Ai)
- the ratio of the weighted weights of Pi and P28-i when participating in the ALF filtering calculation is Ai: (2-Ai)
- Pi is the pixel value of the reference pixel position corresponding to Ci
- P28-i is the pixel value of the reference pixel position corresponding to C28-i
- the filtered pixel value Y of the pixel can be determined in the following manner: Y = Σ_{i=0..13} Ci*(Ai*Pi + (2-Ai)*P28-i) + C14*P14, where:
- Ci is the (i+1)th filter coefficient in the filter coefficients of the merged region to which the LCU belongs
- Pi is the pixel value of the reference pixel position corresponding to the filter coefficient Ci
- Pi and P28-i are located at reference pixel positions that are center-symmetric about the position of the current filtered pixel
- Ai is the weight coefficient of the pixel value of the reference pixel position corresponding to Pi
- P14 is the pixel value of the current filter pixel
- C14 is the filter coefficient of the current filter pixel, 0 ⁇ Ai ⁇ 2.
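Using the symbols just defined, the weighted filtering can be sketched as follows; with Ai = 1 for every i it reduces to the ordinary symmetric sum.

```python
def alf_filter_weighted(C, P, A):
    """Weighted ALF output for the 7*7 cross plus 5*5 square shape.

    C: 15 coefficients C0..C14 (C14 is the centre tap).
    P: 29 reference pixel values P0..P28 (P14 is the current pixel).
    A: 14 weights A0..A13 with 0 < Ai < 2; the pair (Pi, P28-i) at
    centre-symmetric positions is weighted in the ratio Ai : (2 - Ai).
    """
    out = C[14] * P[14]
    for i in range(14):
        out += C[i] * (A[i] * P[i] + (2 - A[i]) * P[28 - i])
    return out
```

With all coefficients equal to 1 and all weights equal to 1, the output is simply the sum of the 29 reference pixels, confirming the symmetric special case.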
- the encoding device may determine the filter coefficients and filtering performance of the merged region under different weight coefficients for each position, select the set of filter coefficients with the best filtering performance, record those filter coefficients and the corresponding weight coefficient at each position of the filter, and write them into the code stream sent to the decoding device.
- a set of candidate weight coefficients (such as the above-mentioned value set of Ai) may be constructed in advance, and each weight coefficient may be selected from this set so as to obtain the filter coefficients with the best filtering performance together with the corresponding weight coefficient at each position of the filter; the index of each weight coefficient within the weight coefficient set is then written into the code stream and sent to the decoding device.
- the decoding device can parse the code stream to obtain the filter coefficients of the merged region to which the LCU belongs and the weight coefficients of each reference pixel position corresponding to the merged region to which the LCU belongs, and perform ALF filtering on the pixels of the LCU one by one.
- performing ALF filtering on the pixels of the LCU one by one based on the filter coefficients of the LCU may include: for the current filtered pixel of the LCU, in the process of performing ALF filtering on the pixel, updating the pixel value of the pixel based on the pixel values of its surrounding pixels, and performing ALF filtering on the pixel based on the updated pixel value.
- the filtering performance of the traditional ALF technology at such pixel positions is poor. Therefore, in order to optimize the ALF filtering effect, for the current filtered pixel, in the process of performing ALF filtering on the pixel, the pixel value of the pixel can be updated based on the pixel values of its surrounding pixels, so that the pixel value at that position is smoother relative to the surrounding pixels.
- the above-mentioned updating the pixel value of the pixel based on the pixel value of the surrounding pixels of the pixel may include: determining the maximum value and the minimum value of the pixel value of each pixel in the target pixel block except the center position; Among them, the target pixel block is a 3*3 pixel block with the pixel as the center position; when the pixel value of the pixel is greater than the maximum value, the pixel value of the pixel is updated to the maximum value; when the pixel value of the pixel is less than the minimum value , update the pixel value of the pixel to the minimum value.
- take the surrounding pixels of the pixel as the 8 adjacent pixels of the pixel as an example, that is, the remaining pixels other than the center position in the 3*3 pixel block (herein referred to as the target pixel block) with the pixel at the center position.
- the pixel value of each pixel in the target pixel block except the center position may be determined, and the maximum value and the minimum value of each pixel value may be determined.
- the pixel value of the pixel is greater than the maximum value, the pixel value of the pixel is updated to the maximum value; when the pixel value of the pixel is smaller than the minimum value, the pixel value of the pixel is updated to the minimum value.
- for example, assume the maximum value of the pixel values of the 8 pixels is the pixel value of pixel 1 (assumed to be p1), and the minimum value is the pixel value of pixel 8 (assumed to be p8); the pixel value of pixel 0 (assumed to be p0) can then be compared with p1 and p8. If p0>p1, the pixel value of pixel 0 is updated to p1; if p0<p8, the pixel value of pixel 0 is updated to p8.
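The update rule above (clamping the center pixel to the range of its 8 neighbors) can be sketched as follows; the function name and array layout are illustrative, not part of the described codec:

```python
def clamp_to_neighborhood(pixels, x, y):
    """Clamp the pixel at (x, y) to the [min, max] range of the 8
    surrounding pixels (the 3*3 target pixel block minus the center)."""
    neighbors = [pixels[y + dy][x + dx]
                 for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                 if (dx, dy) != (0, 0)]
    lo, hi = min(neighbors), max(neighbors)
    # greater than the maximum -> maximum; less than the minimum -> minimum
    return max(lo, min(pixels[y][x], hi))
```

For instance, with p1=60 as the neighborhood maximum and p8=50 as the minimum, a center value of 70 is updated to 60, while a center value of 45 would be updated to 50.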
- FIG. 6A is a schematic flowchart of a filtering method provided by an embodiment of the present application, wherein the filtering method can be applied to an encoding/decoding device.
- the filtering method may include the following steps:
- Step S600a, in the process of performing ALF filtering on the current filtered pixel in the current adaptive correction filtering unit, for any reference pixel of the current filtered pixel, when the reference pixel is not in the current adaptive correction filtering unit, go to step S610a.
- Step S610a, determine whether the pixel value of the reference pixel can be obtained; if so, go to step S630a; otherwise, go to step S620a.
- Step S620a, use the pixel closest to the reference pixel in the current adaptive correction filtering unit or the boundary area to replace the reference pixel for filtering.
- Step S630a use the reference pixel to perform filtering.
- the filtering unit may be an LCU, or an image block obtained based on the LCU, for example, an image block obtained by cropping or expanding the LCU.
- considering that, for the boundary pixels of the filtering unit, some reference pixels of the boundary pixels may be outside the filtering unit, that is, not in the filtering unit, the pixel values of these reference pixels may not be obtainable.
- the cases in which the pixel value of the reference pixel cannot be obtained may include, but are not limited to, one of the following: the reference pixel is outside the image boundary; the reference pixel is outside the slice boundary and filtering across the slice boundary is not allowed; the reference pixel is outside the upper or lower boundary of the current adaptive correction filtering unit.
- in such cases, the pixel closest to the position of the reference pixel in the current adaptive correction filtering unit or the boundary area can be used for filtering instead of the reference pixel.
- the distance between pixel locations may be Euclidean distance.
- the boundary area includes outside the left border or outside the right border of the current adaptive correction filtering unit, and outside the left border of the current adaptive correction filtering unit includes part or all of the area in the adjacent filtering units on the left side of the current adaptive correction filtering unit, Outside the right border of the current adaptive correction filtering unit includes part or all of the area in the adjacent filtering units to the right of the current adaptive correction filtering unit.
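The replacement rule of step S620a can be sketched as below, assuming the set of positions whose pixel values are obtainable is already known; `math.dist` computes the Euclidean distance mentioned above (function name and coordinate representation are illustrative assumptions):

```python
import math

def nearest_available(ref_pos, available_positions):
    """Return the obtainable position (inside the current adaptive
    correction filtering unit or the boundary area) that is closest,
    by Euclidean distance, to the unobtainable reference pixel position."""
    return min(available_positions, key=lambda pos: math.dist(pos, ref_pos))
```

For a reference pixel several columns outside the filtering unit, the chosen replacement is simply the nearest column position whose pixel value is still obtainable.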
- for example, the boundary area of the current adaptive correction filtering unit may include the 3 columns of pixels on the left side of the left border of the sample filtering compensation unit shown in FIG. 5 (that is, the 3 columns of pixels in the filtering unit on the left side of the current adaptive correction filtering unit that are close to the current adaptive correction filtering unit, which may be called outside the left border).
- similarly, the boundary area of the current adaptive correction filtering unit may include the 3 columns of pixels on the right side of the right border of the sample filtering compensation unit shown in FIG. 5 (that is, the 3 columns of pixels in the filtering unit on the right side of the current adaptive correction filtering unit that are close to the current adaptive correction filtering unit, which may be called outside the right border).
- the method may further include: determining whether the reference pixel corresponds to a specified position of the filter shape; if so, determining to perform the above-mentioned operation of using the pixel closest to the reference pixel position in the current adaptive correction filtering unit or the boundary area to replace the reference pixel for filtering.
- for a reference pixel at certain positions, when the pixel value of the reference pixel cannot be obtained, the pixel value of the pixel position closest to that position in the boundary area usually cannot be obtained either; for example, a reference pixel directly to the left of, directly to the right of, directly above, or directly below the current filtered pixel position.
- take the current filtered pixel position (i.e., the pixel position corresponding to C14) as an example, and consider the reference pixel position corresponding to C11: since this reference pixel position is on the left side of the current filtered pixel position at a distance of 3 pixels, and the width of a filtering unit is usually greater than 3 pixels, if the pixel value of the reference pixel position corresponding to C11 cannot be obtained, it can be determined that the filtering unit on the left of the current adaptive correction filtering unit is outside the boundary of the current image frame (that is, the picture frame where the current adaptive correction filtering unit is located), or is outside the slice boundary of the current slice (that is, the slice where the current adaptive correction filtering unit is located) and filtering across the slice boundary is not allowed.
- in that case, the pixel value of the pixel position closest to the reference pixel position in the boundary area, that is, the pixel position corresponding to C12, cannot be obtained either.
- therefore, the pixel in the current adaptive correction filtering unit closest to the reference pixel position (the reference pixel position corresponding to C11) is used for filtering instead of the reference pixel.
- for a reference pixel at the upper left, upper right, lower left or lower right of the current filtered pixel position, when its pixel value cannot be obtained, it may be because the reference pixel is outside the upper or lower boundary of the current adaptive correction filtering unit (the pixel values of pixel positions outside the upper or lower boundary of the current adaptive correction filtering unit cannot be obtained); in this case, the pixel position closest to the reference pixel position may be outside the left or right boundary of the current adaptive correction filtering unit, and its pixel value may be obtainable.
- for example, when the current filtered pixel position (i.e., the pixel position corresponding to C14) is at the upper left of the current adaptive correction filtering unit, the reference pixel position corresponding to C1 may be outside the upper boundary of the current adaptive correction filtering unit, so the pixel value of that reference pixel position cannot be obtained.
- however, the pixel position corresponding to C6 may be outside the left boundary of the current adaptive correction filtering unit, and its pixel value may be obtainable, for example, when the left boundary of the current adaptive correction filtering unit is neither an image boundary nor a slice boundary.
- therefore, in this case, for a reference pixel position whose pixel value cannot be obtained, the pixel value of the nearest pixel position in the current adaptive correction filtering unit or in the boundary area can be used to replace the reference pixel for filtering.
- that is, the pixel closest to the reference pixel position in the current adaptive correction filtering unit or the boundary area can be used to perform filtering in place of the reference pixel.
- the pixel closest to the reference pixel position in the current adaptive correction filtering unit may be used instead of the reference pixel for filtering, that is, the pixels in the boundary area are not considered.
- the specified positions may include, but are not limited to, a first position, a second position, a third position, and the symmetrical positions of the first position, the second position, and the third position in the first filter; wherein, when the first filter is a 7*7 cross shape plus 5*5 square centrally symmetric filter, the first position is the upper left corner of the first filter, the second position is the position adjacent to the right of the first position, and the third position is the position adjacent below the first position; the symmetrical positions include axially symmetric positions and centrally symmetric positions.
- for example, the first position is the C1 position and its axially symmetric position is the C5 position; the second position is the C2 position and its axially symmetric position is the C4 position; the third position is the C6 position and its axially symmetric position is the C10 position. That is, the above specified positions may include C1, C2, C6, C4, C5 and C10.
- depending on whether the use of enhanced adaptive correction filtering is allowed, the filters used are typically different.
- the filter used in the case of allowing the use of enhanced adaptive correction filtering may be as shown in FIG. 4A
- the filter used in the case of not allowing the use of enhanced adaptive correction filtering may be as shown in FIG. 9 .
- when the use of enhanced adaptive correction filtering is not allowed, the pixels in the boundary area may not be used to replace the reference pixels whose pixel values cannot be obtained; instead, the pixels in the current adaptive correction filtering unit may be used to replace such reference pixels for filtering. Therefore, for any reference pixel, if the pixel value of the reference pixel cannot be obtained, it can first be determined whether the current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering.
- when EalfEnableFlag is equal to 1, it indicates that enhanced adaptive correction filtering can be used; when EalfEnableFlag is equal to 0, it indicates that enhanced adaptive correction filtering should not be used.
- the value of EalfEnableFlag may be derived at the decoding end, or obtained from the code stream by the decoding end; the value of EalfEnableFlag may also be a constant value.
- the value of EalfEnableFlag may be determined based on the value of the enhanced adaptive correction filtering enable flag (ealf_enable_flag) obtained by parsing from the code stream.
- the "Enhanced Adaptive Correction Filtering Allowed Flag” can be a sequence-level parameter, that is, a value of the "Enhanced Adaptive Correction Filtering Allowed Flag” can be used to indicate whether an image sequence is allowed to use the Enhanced Adaptive Correction Filtering.
- when the use of enhanced adaptive correction filtering is not allowed, the pixel closest to the reference pixel in the current adaptive correction filtering unit can be used to replace the reference pixel for filtering.
- when the use of enhanced adaptive correction filtering is allowed, the pixel closest to the reference pixel position in the current adaptive correction filtering unit or the boundary area can be used to perform filtering instead of the reference pixel.
- if the decoding device determines that the current adaptive correction filtering unit does not allow the use of enhanced adaptive correction filtering, and the pixel value of the reference pixel position used to perform the adaptive correction filtering of the current adaptive correction filtering unit cannot be obtained, the pixel in the current adaptive correction filtering unit that is closest to the reference pixel position may be used instead of the reference pixel to perform adaptive correction filtering; that is, the pixels in the boundary area are not considered.
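The distinction drawn above can be sketched as a choice of candidate replacement pixels; this is an illustrative sketch (the function name and the list-based position representation are assumptions, not from the standard text):

```python
def replacement_candidates(ealf_allowed, unit_positions, boundary_positions):
    """When enhanced adaptive correction filtering is allowed, pixels in the
    current adaptive correction filtering unit and in the boundary area may
    replace an unobtainable reference pixel; when it is not allowed, only
    pixels in the current unit are considered."""
    if ealf_allowed:
        return unit_positions + boundary_positions
    return unit_positions
```

The nearest pixel by Euclidean distance is then picked from the returned candidate set.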
- FIG. 6B is a schematic flowchart of a filtering method provided by an embodiment of the present application, wherein the filtering method can be applied to an encoding/decoding device.
- the filtering method may include the following steps:
- Step S600b determining whether the current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering. If yes, go to step S610b; otherwise, go to step S620b.
- Step S610b using the first filter to perform adaptive correction filtering on the current adaptive correction filtering unit.
- Step S620b using the second filter to perform adaptive correction filtering on the current adaptive correction filtering unit.
- the filter used when the current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering and the filter used when enhanced adaptive correction filtering is not allowed can be different.
- the filter used for performing the adaptive correction filtering on the current adaptive correction filtering unit may be the first filter.
- the first filter may be the filter shown in FIG. 4A .
- the filter used for performing the adaptive correction filtering on the current adaptive correction filtering unit may be the second filter.
- the second filter may be the filter shown in FIG. 9 .
- determining whether the current adaptive correction filtering unit is allowed to use enhanced adaptive correction filtering may include: determining the value of a flag bit used to indicate whether the current adaptive correction filtering unit is allowed to use enhanced adaptive correction filtering; when the flag bit is the first value, determining that the current adaptive correction filtering unit is allowed to use enhanced adaptive correction filtering; when the flag bit is the second value, determining that the current adaptive correction filtering unit is not allowed to use enhanced adaptive correction filtering.
- a flag bit may be used to indicate whether the current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering. Exemplarily, when the value of the flag bit is the first value (e.g., 0), it indicates that the current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering; when the value of the flag bit is the second value (e.g., 1), it indicates that the current adaptive correction filtering unit does not allow the use of enhanced adaptive correction filtering.
- the decoding device may acquire the value of the flag bit, and determine whether the current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering based on the value of the flag bit.
- the cases in which the pixel value of a reference pixel cannot be obtained may include, but are not limited to, one of the following: the reference pixel is outside the image boundary of the current image frame; the reference pixel is outside the slice boundary of the current slice and filtering across slice boundaries is not allowed; the reference pixel is outside the upper or lower boundary of the current adaptive correction filtering unit.
- in such cases, the pixel closest to the reference pixel position in the current adaptive correction filtering unit or the boundary area can be used to replace the reference pixel for filtering.
- the distance between pixel locations may be Euclidean distance.
- the boundary area includes outside the left border or outside the right border of the current adaptive correction filtering unit, and outside the left border of the current adaptive correction filtering unit includes part or all of the area in the adjacent filtering units on the left side of the current adaptive correction filtering unit, Outside the right border of the current adaptive correction filtering unit includes part or all of the area in the adjacent filtering units to the right of the current adaptive correction filtering unit.
- if the decoding device determines that the current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering, and the pixel value of the reference pixel position used to perform the adaptive correction filtering of the current adaptive correction filtering unit cannot be obtained, the pixel in the current adaptive correction filtering unit or the boundary area that is closest to the reference pixel position replaces the reference pixel to perform adaptive correction filtering; if it is determined that the current adaptive correction filtering unit does not allow the use of enhanced adaptive correction filtering, and the pixel value of the reference pixel position used to perform the adaptive correction filtering cannot be obtained, the pixel closest to the reference pixel position in the current adaptive correction filtering unit is used instead of the reference pixel to perform the adaptive correction filtering.
- An embodiment of the present application provides a filtering method, wherein the filtering method can be applied to a decoding device, and the filtering method can include the following steps:
- T710 Obtain the filter coefficients of the current LCU based on the region coefficient identifier of the merged region to which the current LCU belongs, where the region coefficient identifier is used to identify, among the preset multiple sets of filter coefficients, the filter coefficients used by the merged region to which the current LCU belongs.
- the filter coefficients used in a merge area are no longer limited to one set of filter coefficients, and one or more sets of filter coefficients can be selected and used according to actual conditions.
- the encoding device may train multiple sets of filter coefficients, determine, based on the RDO decision, that the merged region uses one or more of the multiple sets of filter coefficients, and write the region coefficient identifier used to identify the filter coefficients used by the merged region into the code stream.
- the decoding device may obtain the region coefficient identifier of the merged region to which the current LCU belongs based on the information parsed from the code stream, and based on the region coefficient identifier , to determine the filter coefficients used in the merge area to which the current LCU belongs.
- the filter coefficient of the current LCU may be determined from the filter coefficients used by the merge area.
- the filter coefficients used in the merged region may be determined as the filter coefficients of the current LCU.
- T720 Perform ALF filtering on the pixels of the current LCU one by one based on the filter coefficients of the current LCU.
- ALF filtering may be performed on the pixels of the current LCU one by one based on the filter coefficient of the current LCU.
- whether each merged region uses one or more of the trained sets of filter coefficients is determined based on the RDO decision, and the decision result is notified to the decoding device through the region coefficient identifier, so that a region is no longer limited to using one set of filter coefficients but can choose to use one or more sets of filter coefficients according to performance, which optimizes the ALF filtering performance and improves the encoding and decoding performance.
- determining to start ALF filtering on the current LCU of the current frame image may include: parsing the LCU coefficient identifier of the current LCU from the code stream, wherein the LCU coefficient identifier is used to identify, among the at least one set of filter coefficients used in the merged region to which the current LCU belongs, the filter coefficients used by the current LCU; when the value of the LCU coefficient identifier of the current LCU is not the first value, determining to start ALF filtering for the current LCU.
- the encoding device may notify the decoding device, through the region coefficient identifier, of the one or more sets of filter coefficients used by the merged region. For any LCU in the region, the encoding device may identify, through the LCU coefficient identifier, the filter coefficient used by the LCU among the one or more sets of filter coefficients used in the merged region.
- the decoding device may determine whether to enable ALF filtering for the LCU and filter coefficients for enabling ALF filtering for the LCU based on the LCU coefficient identifier of the LCU obtained by parsing from the code stream.
- when the value of the LCU coefficient identifier of the LCU is the first value, it indicates that ALF filtering is not started for the LCU.
- when the value of the LCU coefficient identifier of the LCU parsed by the decoding device from the code stream is not the first value, it may be determined to start ALF filtering for the LCU.
- for example, when the value of the LCU coefficient identifier of the LCU obtained by the decoding device from the code stream is 0, the decoding device can determine not to start ALF filtering for the LCU; when the value of the LCU coefficient identifier of the LCU obtained by the decoding device from the code stream is not 0, it can determine to start ALF filtering for the LCU, and the decoding device can determine the filter coefficient used by the LCU according to the LCU coefficient identifier of the LCU.
- if the merged region to which the LCU belongs uses a single set of filter coefficients, the filter coefficient of the LCU is that set of filter coefficients; if the merged region to which the LCU belongs uses multiple sets of filter coefficients, the filter coefficient of the LCU needs to be determined according to the specific value of the LCU coefficient identifier of the LCU.
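The decoding-side decision described above can be sketched as follows, assuming (as in the example) that the first value is 0 and that a non-zero identifier k selects the k-th set of filter coefficients used by the merged region; this 1-based index mapping is an assumption for illustration, not taken from the standard text:

```python
def lcu_filter_decision(lcu_coeff_id, region_coeff_sets):
    """Interpret the LCU coefficient identifier: 0 means ALF filtering is
    not started for this LCU; a non-zero value k selects the k-th set of
    filter coefficients used by the merged region the LCU belongs to
    (assumed 1-based mapping, for illustration only)."""
    if lcu_coeff_id == 0:
        return None  # ALF filtering is not started for this LCU
    return region_coeff_sets[lcu_coeff_id - 1]
```

If the merged region uses only one set of coefficients, any non-zero identifier simply selects that single set.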
- obtaining the filter coefficients of the current LCU based on the region coefficient identifier of the merged region to which the current LCU belongs may include: when it is determined, based on the region coefficient identifier of the merged region to which the current LCU belongs, that the merged region uses multiple sets of filter coefficients, determining the filter coefficient of the current LCU from the multiple sets of filter coefficients used in the merged region to which the current LCU belongs.
- the decoding device may determine the filter coefficient used by the LCU based on the LCU coefficient identifier of the LCU obtained by parsing from the code stream.
- the filter shapes of the sets of filter coefficients used in the merging region may or may not be exactly the same.
- An embodiment of the present application provides a filtering method, wherein the filtering method can be applied to a decoding device, and the filtering method can include the following steps:
- T810 Determine the filter coefficient of the current LCU based on the merged region to which the current LCU belongs and the coefficient selection identifier of the current LCU; wherein the coefficient selection identifier is used to identify the filter coefficient selected and used by the current LCU among the multiple groups of candidate filter coefficients.
- in order to optimize the ALF filtering effect and improve the encoding and decoding performance, the LCU is no longer limited to selecting the filter coefficients of the merged region to which it belongs, but can adaptively select a set of filter coefficients from multiple sets of candidate filter coefficients to perform ALF filtering.
- the candidate filter coefficients of the LCU may include, but are not limited to, the filter coefficients of the merged region to which it belongs and the filter coefficients of the adjacent regions of that merged region. Therefore, in the case where each region transmits one set of filter coefficients, one LCU can have multiple groups of candidate filter coefficients, which improves the flexibility of LCU filter coefficient selection, optimizes the ALF filtering effect, and improves encoding and decoding performance.
- the encoding device may determine, based on the RDO decision, filter coefficients used by the LCU in multiple sets of candidate filter coefficients, and write the coefficient selection identifier corresponding to the filter coefficient into the code stream and send it to the decoding device.
- the decoding device may determine the filter coefficient of the current LCU based on the merged region to which the current LCU belongs and the coefficient selection identifier of the current LCU obtained by parsing from the code stream.
- T820 Perform ALF filtering on the pixels of the current LCU one by one based on the filter coefficients of the current LCU.
- ALF filtering may be performed on the pixels of the current LCU one by one based on the filter coefficient of the current LCU.
- multiple groups of candidate filter coefficients are set for each LCU, the filter coefficients used by each LCU are determined based on the RDO decision, and the decision result is notified to the decoding device through the coefficient selection identifier, thereby improving the flexibility of the filter coefficients used by each LCU, optimizing the ALF filtering performance, and improving the encoding and decoding performance.
- determining the filter coefficients of the current LCU based on the merged region to which the current LCU belongs and the coefficient selection identifier of the current LCU may include: when the value of the coefficient selection identifier of the current LCU is a first value, determining the filter coefficient of the previous merged region of the merged region to which the current LCU belongs as the filter coefficient of the current LCU; when the value of the coefficient selection identifier of the current LCU is a second value, determining the filter coefficient of the merged region to which the current LCU belongs as the filter coefficient of the current LCU; when the value of the coefficient selection identifier of the current LCU is a third value, determining the filter coefficient of the next merged region of the merged region to which the current LCU belongs as the filter coefficient of the current LCU.
- in this case, the candidate filter coefficients of the LCU may include the filter coefficients of the merged region to which it belongs, the filter coefficients of the previous merged region of that merged region, and the filter coefficients of the next merged region of that merged region.
- the previous merged region of the merged region to which the LCU belongs is the merged region corresponding to the previous adjacent index of the index of the merged region to which the LCU belongs.
- the next merged region of the merged region to which the LCU belongs is the merged region corresponding to the next adjacent index of the index of the merged region to which the LCU belongs.
- for example, with merged regions indexed 0 to 15, the next merged region of merged region 15 can be merged region 0, and the previous merged region of merged region 0 can be merged region 15.
- the encoding device may determine the filter coefficient used by the LCU based on the RDO decision; when the filter coefficient of the previous merged region of the merged region to which the LCU belongs is determined as the filter coefficient of the LCU, the value of the coefficient selection identifier of the LCU can be determined as the first value, such as 0; when the filter coefficient of the merged region to which the LCU belongs is determined as the filter coefficient of the LCU, the value of the coefficient selection identifier of the LCU can be determined as the second value, such as 1; when the filter coefficient of the next merged region of the merged region to which the LCU belongs is determined as the filter coefficient of the LCU, the value of the coefficient selection identifier of the LCU can be determined as the third value, such as 3.
- for the decoding device, when the value of the coefficient selection identifier of the LCU parsed from the code stream is the first value, the filter coefficient of the previous merged region of the merged region to which the LCU belongs can be determined as the filter coefficient of the LCU; when the value of the coefficient selection identifier of the LCU parsed from the code stream is the second value, the filter coefficient of the merged region to which the LCU belongs can be determined as the filter coefficient of the LCU; when the value of the coefficient selection identifier of the LCU parsed from the code stream is the third value, the filter coefficient of the next merged region of the merged region to which the LCU belongs can be determined as the filter coefficient of the LCU.
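The mapping from the coefficient selection identifier to a merged region, with wraparound between the first and last regions, can be sketched as follows; the region count of 16 and the identifier values 0/1/3 follow the example values in the text and are otherwise assumptions for illustration:

```python
NUM_REGIONS = 16  # illustrative: merged regions indexed 0..15

def selected_region(region_idx, coeff_select_id):
    """Map the coefficient selection identifier of an LCU to a merged-region
    index: first value -> previous region, second value -> own region,
    third value -> next region, wrapping between region 15 and region 0."""
    if coeff_select_id == 0:        # first value: previous merged region
        return (region_idx - 1) % NUM_REGIONS
    if coeff_select_id == 1:        # second value: current merged region
        return region_idx
    return (region_idx + 1) % NUM_REGIONS  # third value: next merged region
```

The modular arithmetic realizes the wraparound described above (the previous region of region 0 is region 15, and the next region of region 15 is region 0).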
- the filter coefficients of the merged region to which the LCU belongs, of its previous merged region and of its next merged region are used as the candidate filter coefficients of the LCU, and a set of filter coefficients is selected based on the RDO decision as the filter coefficients of the LCU, so that even when one set of filter coefficients is trained per merged region, an LCU in the merged region may have multiple sets of candidate filter coefficients, which improves the flexibility of the filter coefficients, optimizes the ALF filtering performance, and improves the encoding and decoding performance.
- FIG. 7 is a schematic flowchart of a filtering method provided by an embodiment of the present application, wherein the filtering method may be applied to a decoding device.
- the filtering method may include the following steps:
- Step S700 When it is determined that the ALF filtering is started on the current LCU of the current frame image, the filter shape of the merged area to which the current LCU belongs is obtained based on the merged area to which the current LCU belongs.
- Step S710 Based on the filter shape, obtain the filter coefficient of the merged region to which the current LCU belongs.
- in order to optimize the ALF filtering effect, each region is no longer limited to using the same filter shape; different merged regions can selectively use different filter shapes, that is, the filter shapes used by different merged regions can be the same or different.
- the encoding device may train multiple sets of filter coefficients with different filter shapes, determine the filter shape and filter coefficients used in the merged region based on the RDO decision, and write the filter shape and filter coefficients into the code stream to be sent to the decoding device.
- when acquiring the filter coefficients of the merged region, the decoding device can parse the filter shape of the merged region from the code stream, and parse the filter coefficients of the merged region from the code stream based on the filter shape.
- Step S720 Perform ALF filtering on the pixels of the current LCU one by one based on the filter shape and the filter coefficient.
- ALF filtering may be performed on the pixels of the current LCU one by one based on the filter coefficient of the current LCU.
- the decoding device can obtain the filter shape and filter coefficient of each region from the code stream, thereby optimizing the ALF filtering effect and improving the encoding and decoding performance.
- a filter shape may also be selected for an image frame, or a filter shape may be selected for a component of an image frame (eg, a luminance component and/or a chrominance component).
- for example, if image frame A selects the centrally symmetric filter shape of 7*7 cross plus 5*5 square, each LCU in image frame A that enables ALF filtering uses the centrally symmetric filter shape of 7*7 cross plus 5*5 square.
- FIG. 8 is a schematic flowchart of a filtering method provided by an embodiment of the present application, wherein the filtering method may be applied to a decoding device.
- the filtering method may include the following steps:
- Step S800 When it is determined to start ALF filtering on the current LCU of the current frame image, based on the merged area to which the current LCU belongs, the filter coefficients of the merged area to which the current LCU belongs and the weight coefficients of each reference pixel position are obtained.
- Step S810 perform ALF filtering on the pixels of the current LCU one by one based on the filter coefficient and the weight coefficient of each reference pixel position.
- the filters used in ALF filtering are no longer limited to symmetric filters; asymmetric filters can be used, that is, the filter coefficients at symmetrical positions can be different while satisfying a certain proportional relationship, such as 0.5:1.5 or 0.6:1.4.
- when ALF filtering is performed, the filtered pixel value is obtained from the sum of the products of each filter coefficient (and the coefficient at its symmetrical position) with the reference pixels at the corresponding positions. Therefore, the above ratio can be taken as the ratio between the filter coefficients at symmetrical positions, or as the ratio of the weights (which may also be referred to as the weight ratio) with which the pixel values of the reference pixels corresponding to the symmetrical coefficient positions participate in the ALF filtering calculation. That is, the above-mentioned asymmetric filter means that the filter coefficients at symmetrical positions are different, or that the weights of the pixel values of the reference pixels corresponding to the filter coefficients at symmetrical positions are different.
- for the centrosymmetric filter shape of a 7*7 cross plus a 5*5 square, the position symmetrical to filter coefficient Ci is C28-i, and the ratio Ci : C28-i may be Ai : (2-Ai); that is, the ratio of the weights with which Pi and P28-i participate in the ALF filtering calculation is Ai : (2-Ai), where Pi is the pixel value of the reference pixel position corresponding to Ci, and P28-i is the pixel value of the reference pixel position corresponding to C28-i.
- the filtered pixel value of the pixel can be determined in the following manner:
- Ci is the (i+1)th filter coefficient in the filter coefficients of the merged region to which the LCU belongs
- Pi is the pixel value of the reference pixel position corresponding to the filter coefficient Ci
- Pi and P28-i are centrosymmetric about the pixel position of the current filter pixel
- Ai is the weight coefficient of the pixel value of the reference pixel position corresponding to Pi
- P14 is the pixel value of the current filter pixel
- C14 is the filter coefficient of the current filter pixel, and 0 ≤ Ai ≤ 2.
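The weighted filtering relation above can be sketched as follows. This is an illustrative helper, not the normative filtering formula: it assumes the 7*7-cross-plus-5*5-square shape with reference positions P0..P28 (P14 being the current filter pixel), coefficients C0..C14, and weights A0..A13; the fixed-point normalization and clipping of a real codec are omitted.

```python
def alf_filter_asymmetric(p, c, a):
    """Sketch of asymmetric ALF filtering for one pixel.

    p: 29 reference pixel values (p[14] is the pixel being filtered).
    c: 15 filter coefficients C0..C14 (C14 is the centre coefficient).
    a: 14 weight coefficients A0..A13; Ai = 1 for all i reduces to
       the ordinary symmetric filter.
    """
    acc = c[14] * p[14]
    for i in range(14):
        # Pi and P28-i share coefficient Ci but are weighted Ai : (2 - Ai).
        acc += c[i] * (a[i] * p[i] + (2 - a[i]) * p[28 - i])
    return acc
```

With all weights equal to 1 the two members of each symmetric pair contribute equally, which is exactly the symmetric-filter case.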
- for any merged region, the encoding device may determine the filter coefficients and the filtering performance of the merged region under different weight coefficients at each position, select the set of filter coefficients with the best filtering performance, record those filter coefficients and the corresponding weight coefficient at each position of the filter, and write them into the code stream sent to the decoding device.
- alternatively, a weight coefficient set (such as the above-mentioned value set of Ai) may be constructed in advance, each weight coefficient may be selected from the set to obtain the filter coefficients with the best filtering performance and the corresponding weight coefficient at each position of the filter, and the index of each weight coefficient within the weight coefficient set may be written into the code stream sent to the decoding device.
- the decoding device can parse the code stream to obtain the filter coefficients of the merged region to which the LCU belongs and the weight coefficients of each reference pixel position corresponding to the merged region to which the LCU belongs, and perform ALF filtering on the pixels of the LCU one by one.
- in this way, the filters used in each merged region are no longer limited to symmetrical filters: the filter coefficients of reference pixels at symmetrical positions are no longer required to be the same, but only to satisfy a certain proportional relationship. Since the coefficients at symmetrical positions satisfy a proportional relationship, the number of filter coefficients to be transmitted does not increase, which improves the flexibility of the filter coefficients, optimizes the ALF filtering performance, and improves the encoding and decoding performance.
- An embodiment of the present application provides a filtering method, which can be applied to an encoding device, and the filtering method can include the following steps:
- T100 Perform region division on the luminance component of the current image frame.
- T110 For any area, classify each LCU in the area, and divide the area into a plurality of area types based on the type of each LCU.
- for any region, each LCU in the region can be classified based on the pixel characteristics of the LCUs in the region, dividing the LCUs in the region into at least one category; that is, a region can be divided into at least one sub-region (region category) by means of LCU classification.
- T120 Perform region merging on each region category, and determine the filter coefficients of each merged region.
- T130 Write the filter coefficients of each merged region and the region type identifier of each LCU into the code stream.
- when the encoding device classifies the LCUs in each region in the above-mentioned manner, it may perform region merging on the region categories to obtain at least one merged region, and determine the filter coefficients of each merged region.
- region merging for each region category is similar to the relevant description in the "region merging" section above, and will not be repeated here.
- for any LCU, the encoding device may assign it a coefficient index based on the merged region to which it belongs, where the coefficient index identifies the filter coefficients corresponding to that merged region.
- the encoding device may write the filter coefficients of each merged region, the index of each region category, and the region category identifier used to identify the region category to which each LCU belongs, into the code stream, and send it to the decoding device.
- An embodiment of the present application provides a filtering method, which can be applied to an encoding device, and the filtering method can include the following steps:
- T200 For any merged region of the current image frame, determine a filter coefficient used in the merged region based on the RDO decision.
- T210 Determine a region coefficient identifier of the combined region based on the filter coefficient used in the region; wherein the region coefficient identifier is used to identify the filter coefficient used in the combined region among the preset multiple groups of filter coefficients.
- T220 Write the filter coefficients used in each merging area and the area coefficient identifiers of each merging area into the code stream.
- the filter coefficients used in a merge area are no longer limited to one set of filter coefficients, and one or more sets of filter coefficients can be selected and used according to actual conditions.
- in some embodiments, the encoding apparatus may train multiple sets of filter coefficients, determine based on the RDO decision that the merged region uses one or more of the multiple sets, and write the region coefficient identifier, which identifies the filter coefficients used by the merged region, into the code stream.
- the above filtering method may further include: for any merged area of the current image frame, when the filter coefficients used in the merged area include multiple sets, determining the LCU coefficient identifier of each LCU based on the filter coefficients used by each LCU in the merged area; Write the LCU coefficient identifiers of each LCU into the code stream.
- the encoding device may notify the decoding device, through the region coefficient identifier, of the one or more sets of filter coefficients used by the merged region.
- the encoding apparatus may use the LCU coefficient identifier to indicate which of the one or more sets of filter coefficients used in the merged region is used by the LCU.
- that is, the encoding device may notify the decoding device, through the LCU coefficient identifier, of the filter coefficients used by the LCU among the multiple sets of filter coefficients.
- when the encoding device determines not to start ALF filtering for an LCU, it can write the LCU coefficient identifier of the LCU into the code stream with the first value. For example, assuming that the first value is 0, for any LCU, when the encoding device determines not to start ALF filtering for the LCU, the value of the LCU coefficient identifier written in the code stream is 0.
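The identifier semantics described above (0 for off, i in [1, n] for the i-th coefficient set of the region) can be sketched with a hypothetical helper:

```python
def interpret_lcu_coeff_id(lcu_id, n):
    """Interpret the LCU coefficient identifier of a merged region that
    carries n sets of filter coefficients (illustrative helper, not a
    normative parsing routine).

    0            -> ALF filtering is not started for this LCU
    i in [1, n]  -> the LCU uses the i-th set of filter coefficients
    """
    if lcu_id == 0:
        return None                 # ALF off for this LCU
    if 1 <= lcu_id <= n:
        return lcu_id - 1           # 0-based index into the region's sets
    raise ValueError("invalid LCU coefficient identifier")
```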
- An embodiment of the present application provides a filtering method, which can be applied to an encoding device, and the filtering method can include the following steps:
- T300 for any merged region of the current image frame, determine the filter coefficient used in the merged region from multiple sets of filter coefficients based on the RDO decision;
- T310 Determine, based on the filter coefficients used in the region, a coefficient selection flag of each LCU in the merged region; wherein the coefficient selection flag is used to identify the filter coefficients selected and used by each LCU among the multiple groups of candidate filter coefficients.
- T320 Write the filter coefficients used in each merge area and the coefficient selection flags of each LCU into the code stream.
- in order to optimize the ALF filtering effect and improve the encoding and decoding performance, an LCU is no longer limited to selecting the filter coefficients of the merged region to which it belongs, but can adaptively select a set of filter coefficients from multiple sets of candidate filter coefficients to perform ALF filtering.
- the candidate filter coefficients of the LCU may include, but are not limited to, the filter coefficients of the merged region to which it belongs and the filter coefficients of the adjacent regions of the merged region to which it belongs.
- one LCU can have multiple groups of candidate filter coefficients, which improves the flexibility of LCU filter coefficient selection, optimizes the ALF filtering effect, and improves encoding and decoding performance.
- the encoding device may determine the filter coefficients used by the LCU in multiple sets of candidate filter coefficients based on the RDO decision, and write the coefficient selection identifiers corresponding to the filter coefficients into the code stream and send it to the decoding device.
- in some embodiments, the candidate filter coefficients of an LCU may include the filter coefficients of the merged region to which it belongs, the filter coefficients of the preceding merged region, and the filter coefficients of the following merged region, with wrap-around: for example, the merged region following merged region 15 can be merged region 0, and the merged region preceding merged region 0 may be merged region 15.
- the encoding device may determine the filter coefficients used by the LCU based on the RDO decision.
- when the filter coefficients used by the LCU are determined to be those of the merged region preceding the merged region to which the LCU belongs, the value of the coefficient selection flag of the LCU may be determined to be a first value, such as 0; when they are the filter coefficients of the merged region to which the LCU belongs, the value of the coefficient selection flag may be determined to be a second value, such as 1; when they are the filter coefficients of the merged region following the merged region to which the LCU belongs, the value of the coefficient selection flag may be determined to be a third value, such as 3.
- An embodiment of the present application provides a filtering method, which can be applied to an encoding device, and the filtering method can include the following steps:
- T400 For any merged region of the current image frame, determine the filter shape and filter coefficient used in the merged region based on the RDO decision.
- T410 Write the filter shape and filter coefficients used by each merged region into the code stream.
- in order to optimize the ALF filtering effect and improve the encoding and decoding performance, the merged regions are no longer limited to using the same filter shape; that is, the filter shapes used by different merged regions can be the same or different.
- the encoding device may train multiple sets of filter coefficients with different filter shapes, determine the filter shape and filter coefficients used in each merged region based on the RDO decision, and write the selected filter shape and filter coefficients into the code stream sent to the decoding device.
- a filter shape may also be selected for an entire image frame, or for a component of an image frame (eg, the luminance component and/or a chrominance component).
- for example, if image frame A selects the centrosymmetric filter shape of a 7*7 cross plus a 5*5 square, each LCU in image frame A that enables ALF filtering uses that 7*7-cross-plus-5*5-square centrosymmetric filter shape.
- An embodiment of the present application provides a filtering method, which can be applied to an encoding device, and the filtering method can include the following steps:
- T500 for any merged area of the current image frame, determine the filter coefficient used by the merged area and the weight coefficient of each corresponding reference pixel position based on the RDO decision;
- T510 Write the filter coefficients used in each merge area and the corresponding weight coefficients of each reference pixel position into the code stream.
- in order to optimize the ALF filtering effect and improve the encoding and decoding performance, the filters used in ALF filtering are no longer limited to symmetric filters; asymmetric filters can be used, that is, the filter coefficients at symmetrical positions can be different while satisfying a certain proportional relationship, such as 0.5:1.5 or 0.6:1.4.
- the above ratio can be used as the ratio between the filter coefficients at the symmetrical position, or can also be used as the ratio of the weighted weight (also referred to as the weight ratio) when the pixel value of the reference pixel corresponding to the filter coefficient at the symmetrical position participates in the calculation of the ALF filter.
- the above-mentioned asymmetric filter means that the filter coefficients at the symmetrical positions are different, or the weights of the pixel values of the reference pixels corresponding to the filter coefficients at the symmetrical positions are different.
- the filtered pixel value of the pixel can be determined in the following manner:
- Ci is the (i+1)th filter coefficient in the filter coefficients of the merged region to which the LCU belongs
- Pi is the pixel value of the reference pixel position corresponding to the filter coefficient Ci
- Pi and P28-i are centrosymmetric about the pixel position of the current filter pixel
- Ai is the weight coefficient of the pixel value of the reference pixel position corresponding to Pi
- P14 is the pixel value of the current filter pixel
- C14 is the filter coefficient of the current filter pixel, and 0 ≤ Ai ≤ 2.
- Ci : C28-i = Ai : (2-Ai)
- for any merged region, the encoding device may determine the filter coefficients and the filtering performance of the merged region under different weight coefficients at each position, select the set of filter coefficients with the best filtering performance, record those filter coefficients and the corresponding weight coefficient at each position of the filter, and write them into the code stream sent to the decoding device.
- alternatively, a weight coefficient set (such as the above-mentioned value set of Ai) may be constructed in advance, each weight coefficient may be selected from the set to obtain the filter coefficients with the best filtering performance and the corresponding weight coefficient at each position of the filter, and the index of each weight coefficient within the weight coefficient set may be written into the code stream sent to the decoding device.
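Selecting an entry from a pre-built weight coefficient set, and signalling its index rather than the weight value itself, can be sketched as follows (`cost_of` is a placeholder for measuring the filtering performance obtained under a candidate weight; it is not defined by the source):

```python
def select_weight_index(candidates, cost_of):
    """Pick, from a pre-built weight coefficient set, the entry with the
    best (lowest) filtering cost; the returned index is what would be
    written into the code stream instead of the weight value."""
    return min(range(len(candidates)), key=lambda i: cost_of(candidates[i]))
```

The decoder can then recover the weight by looking up the same pre-built set at the signalled index.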
- Scheme 1: For each frame, taking the LCU as the smallest unit, adaptively divide the frame into multiple regions, each of which may include one or more LCUs; each LCU is classified, and the LCUs in the same region are divided into N categories, where N is a positive integer.
- Scheme 2: Multiple sets of filter coefficients can be transmitted in each region, and the shape of each set of filters can be the same or different.
- Scheme 3: A set of filter coefficients is adaptively selected for each LCU, and LCUs in the same region can select filter coefficients of adjacent regions.
- Scheme 4: Each region can only transmit one set of filter coefficients, but the filter shape of each region can be different.
- Scheme 5: Modify the symmetric filter to an asymmetric filter: the filter coefficients at symmetrical positions are no longer required to be the same, but instead satisfy a certain proportional relationship, such as 0.5:1.5 or 0.6:1.4.
- Scheme 6: Optimize the sample values of the boundary during filtering.
- at the encoding end, the ALF switch sequence header can be obtained to determine whether the current sequence needs to enable the ALF technology. If the ALF switch sequence header is off, the ALF technology is turned off, the ALF optimization technology (that is, any one or more of the above-mentioned schemes 1 to 6 by which the ALF filtering solution provided in the embodiments of the present application optimizes the traditional ALF technology) is also turned off, and the ALF switch sequence header is passed to the decoding device. If the ALF switch sequence header is turned on, ALF encoding is entered and the ALF optimization technology sequence header is obtained.
- if the ALF optimization technology sequence header is off, the original ALF technology is used for filtering, and the ALF switch sequence header, the optimization technology sequence header, and the parameters required by the ALF technology are sent to the decoding device.
- if the ALF optimization technology sequence header is on, the following schemes can be used for optimization, and the ALF switch sequence header, the optimization technology sequence header, and the parameters required by the optimized ALF technology are passed to the decoding device.
- the optimized technical sequence header may also not exist. In this case, if the ALF switch sequence header is turned on, it is determined to use the optimized ALF technical solution.
- the luminance component is divided into fixed regions to obtain multiple regions; LCUs belonging to the same region are divided again (that is, the LCUs in the same region are classified), so that the same region can be further divided into at most N1 categories, where N1 is a positive integer.
- the encoding device may mark the result of division of the LCUs in each region, and send it to the decoding device (that is, send the region type identifier of each LCU to the decoding device).
- for any merged region, at most n sets of filter coefficients can be transmitted, and each LCU in the merged region carries an identifier: 0 means off, that is, ALF filtering is not started; i means that the current LCU uses the i-th set of filter coefficients of this region, and the value range of i is [1, n].
- each set of filter coefficients may be obtained in the following manner: in the first training pass, ALF filtering is enabled by default for all LCUs; in the second pass, LCUs with the same label jointly train the same set of filter coefficients; the third training pass is based on the result of the second decision.
- the image frame or the combined region uses at most n groups of filter coefficients, and finally the filter coefficients corresponding to each combined region are written into the code stream.
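The multi-pass training loop described above can be sketched structurally as follows; `train` and `decide` are placeholders for the actual coefficient training (e.g. least-squares fitting) and the RDO on/off decision, neither of which is specified here:

```python
def multipass_train(lcus, train, decide, passes=3):
    """Structural sketch of multi-pass coefficient training.

    Pass 1: ALF is assumed enabled for every LCU; in each later pass,
    only LCUs for which the previous decision kept ALF on contribute
    to training, and the on/off decision is then re-made per LCU.
    """
    enabled = {lcu: True for lcu in lcus}      # pass 1: all LCUs on
    coeffs = None
    for _ in range(passes):
        active = [l for l in lcus if enabled[l]]
        coeffs = train(active)                 # retrain on active LCUs
        enabled = {l: decide(l, coeffs) for l in lcus}
    return coeffs, enabled
```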
- Each LCU adaptively selects a set of filter coefficients
- for the luminance component, any LCU can choose, at decision time, to use the filter coefficients of the region where the current LCU is located or the filter coefficients of other regions. Assuming there are N2 (N2 ≥ 2) groups of candidate filter coefficients, an RDO decision is made under the N2 groups, the group with the best performance is selected, and the optimal selection result of the current LCU is sent to the decoding device (that is, the coefficient selection identifier is sent to the decoding device), where N2 is less than or equal to the number of merged regions.
- Each region transmits a set of filter coefficients, so for any LCU, the encoding device can notify the decoding device to enable ALF filtering or not (ie, disable) ALF filtering for this LCU through a flag bit.
- for any merged region, the filter coefficients under N3 (N3 ≥ 2) different filter shapes can be calculated separately, the filtering performance under each of the N3 filter shapes can be evaluated, and the filter shape with the best performance is chosen. Then, the best-performing filter shape and filter coefficients of each region are notified to the decoding device through the code stream.
- the encoding device may also select filters of different shapes at the frame level, or select filters of different shapes for the Y, U, and V components. Taking frame-level selection as an example, for any image frame, the filter coefficients of each region under N4 (N4 ≥ 2) different filter shapes can be calculated separately, the filtering performance of the image frame under each filter shape can be evaluated, and the filter shape with the best performance is selected. Then, the best-performing filter shape of the image frame and the filter coefficients of each region are notified to the decoding device through the code stream.
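The shape selection itself reduces to picking the minimum-cost candidate; `rd_cost` below is a placeholder for training coefficients under a shape and measuring the resulting rate-distortion cost, and the shape names are illustrative only:

```python
def select_filter_shape(shapes, rd_cost):
    """Evaluate the RD cost of each candidate filter shape for a merged
    region (or a whole frame) and keep the cheapest one."""
    return min(shapes, key=rd_cost)
```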
- the filter coefficients at symmetrical positions are allowed to satisfy different proportional relationships, so that when performing coefficient training, only the coefficients on one side of each symmetric pair (plus the centre coefficient) need to be trained.
- Ci and C28-i are symmetrical positions.
- the proportional relationship of the filter coefficients at each symmetrical position can be selected through the RDO decision, and the filter coefficients of each position and the ratio of the filter coefficients of the symmetrical positions are sent to the decoding device through the code stream.
- if the ratio of the filter coefficients at all symmetrical positions is 1:1, the filter obtained by training is still a symmetric filter.
- the ALF switch sequence header can be read from the code stream to determine whether the current sequence needs to enable ALF technology. If the ALF switch sequence header is off, the ALF technology is off. If the ALF switch sequence header is turned on, you can continue to read the ALF technology optimized sequence header.
- if the ALF technology optimized sequence header is off, the filtering parameters required by the original ALF technology are obtained; if the ALF technology optimized sequence header is on, the filtering parameters required by the optimized ALF technology are read.
- sequence header of the ALF optimization technique may also not exist. In this case, if the sequence header of the ALF switch is turned on, the filtering parameters required by the optimized ALF technique are read.
- the luminance component is divided into fixed regions to obtain multiple regions; the filter coefficients of all regions are read from the code stream, and the region category identifiers of all LCUs that enable ALF filtering are read from the code stream. According to the fixed region division results and the region category identifier of each LCU, the region category to which the LCU belongs is determined, the corresponding filter coefficients are obtained according to that region category, and ALF filtering is performed on the pixels of the LCU one by one.
- frame-level or region-level coefficient identifiers are read from the code stream, the multiple sets of filter coefficients of each merged region are obtained from the code stream according to those identifiers, and the number of selectable filters (that is, the number of filter coefficient groups) is determined from the frame-level or region-level coefficient identifiers.
- the LCU coefficient identifier of each LCU is obtained from the code stream, and a set of filter coefficients is selected according to the LCU coefficient identifier of each LCU.
- Each LCU adaptively selects a set of filter coefficients
- the maximum number of optional filters is N2 (that is, the number of candidate filter coefficients is at most N2 groups).
- the filter shape of the merged region can be read from the code stream, and the filter coefficient of the merged region can be read according to the filter shape.
- the filter coefficients of each merged region are read from the code stream, along with the scale coefficients of the filter coefficients at the symmetrical positions; the filter coefficient at each position is then derived from the read coefficients and the scale coefficients at the symmetrical positions, and the pixels of each LCU in the merged region are ALF filtered one by one.
- Embodiment 12 Sample value optimization scheme 1 for the filtering boundary
- the filter shape is assumed as shown in Fig. 4A. If a sample used in the adaptive correction filtering process (that is, a reference pixel used when filtering the current filter pixel in the adaptive correction filtering unit) is a sample in the adaptive correction filtering unit (that is, the reference pixel is inside the adaptive correction filtering unit), the sample is used directly for filtering; if a sample used in the adaptive correction filtering process does not belong to the adaptive correction filtering unit (that is, the reference pixel is outside the adaptive correction filtering unit), filtering is performed in the following manner:
- Embodiment 13 Sample value optimization scheme 2 for the filtering boundary
- the filter shape is assumed as shown in Fig. 4A. If a sample used in the adaptive correction filtering process (that is, a reference pixel used when filtering the current filter pixel in the adaptive correction filtering unit) is a sample in the adaptive correction filtering unit (that is, the reference pixel is inside the adaptive correction filtering unit), the sample is used directly for filtering; if a sample used in the adaptive correction filtering process does not belong to the adaptive correction filtering unit (that is, the reference pixel is outside the adaptive correction filtering unit), filtering is performed in the following manner:
- Embodiment 14 Sample value optimization scheme 3 for the filtering boundary
- the filter shape is assumed as shown in Fig. 4A. If a sample used in the adaptive correction filtering process (that is, a reference pixel used when filtering the current filter pixel in the adaptive correction filtering unit) is a sample in the adaptive correction filtering unit (that is, the reference pixel is inside the adaptive correction filtering unit), the sample is used directly for filtering; if a sample used in the adaptive correction filtering process does not belong to the adaptive correction filtering unit (that is, the reference pixel is outside the adaptive correction filtering unit), filtering is performed in the following manner:
- EalfEnableFlag: an encodable flag bit for enhanced adaptive correction filtering.
- if EalfEnableFlag is equal to 0, the sample in the adaptive correction filtering unit closest to the unavailable sample is used instead for filtering;
- if EalfEnableFlag is equal to 1, the sample closest to the unavailable sample within the adaptive correction filtering unit and its boundary area is used instead for filtering;
- EalfEnableFlag refers to a flag whose value can be '1' or '0'. When EalfEnableFlag is equal to 1, it means that enhanced adaptive correction filtering can be used; when EalfEnableFlag is equal to 0, it means that enhanced adaptive correction filtering should not be used.
- the value of EalfEnableFlag may be derived at the decoding end, or obtained from the code stream by the decoding end, and the value of EalfEnableFlag may also be a constant value.
- EalfEnableFlag can be equal to the value of ealf_enable_flag (that is, the enhanced adaptive correction filtering enable flag); when EalfEnableFlag is equal to 1, it means that enhanced adaptive correction filtering can be used; when EalfEnableFlag is equal to 0, it means that enhanced adaptive correction filtering should not be used.
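The two replacement rules can be sketched as a clamping operation on the reference pixel position (a hypothetical helper; the boundary-area width of 1 sample is an assumption made here for illustration, not a value from the source):

```python
def replacement_position(y, x, unit, ealf_enable_flag, boundary=1):
    """If reference sample (y, x) lies outside the adaptive correction
    filtering unit, find the nearest available position to use instead.

    unit = (top, bottom, left, right) inclusive bounds of the unit.
    EalfEnableFlag == 0: clamp into the unit itself;
    EalfEnableFlag == 1: clamp into the unit extended by the boundary area.
    """
    top, bottom, left, right = unit
    if ealf_enable_flag:
        top, bottom = top - boundary, bottom + boundary
        left, right = left - boundary, right + boundary
    cy = min(max(y, top), bottom)   # clamp row into the allowed range
    cx = min(max(x, left), right)   # clamp column into the allowed range
    return cy, cx
```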
- Embodiment 15 Sample value optimization scheme 4 for the filtering boundary
- when a sample cannot obtain its true value, the sample in the current adaptive correction filtering unit area that is closest to that sample is used for filtering instead.
- the current adaptive correction filtering unit may use the first filter shown in FIG. 4A for filtering, and the current adaptive correction filtering unit may also use the second filter shown in FIG. 9 for filtering.
- the current adaptive correction filtering unit uses the first filter shown in FIG. 4A for filtering, and the shape of the filter is shown in FIG. 4A .
- if any sample used in the adaptive correction filtering process (a reference pixel sample used when filtering the current filter pixel in the current adaptive correction filtering unit) is a sample in the current adaptive correction filtering unit, the sample is directly used for filtering;
- otherwise, filtering is performed as follows:
- CplfEnableFlag indicates whether to allow cross-patch boundary values, which may also be referred to as cross_patch_loopfilter_enable_flag.
- assuming that the pixel position where adaptive correction filtering is performed in the current adaptive correction filtering unit is sample position 14, reference pixel samples outside the current adaptive correction filtering unit need to be obtained. As shown in FIG. 4C, for example, the reference pixel samples 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 are outside the upper boundary of the current adaptive correction filtering unit. When the sample values at these positions cannot be obtained, other sample values need to be found instead.
- when the filter sample is in the upper left corner of the current adaptive correction filtering unit, for example, the sample values of reference pixel samples 1, 2, 6 and 7 cannot be obtained; then the sample values at the positions within the current adaptive correction filtering unit closest to samples 1, 2, 6 and 7 are used for filtering. Sample position 14 is the position in the current adaptive correction filtering unit closest to these four samples, so the sample value at position 14 can be used in place of the values of these four samples for filtering.
- the sample values at positions 4, 5, 9, and 10 of the reference pixel samples cannot be obtained, the sample values at the sample positions closest to 4, 5, 9, and 10 in the current adaptive correction filtering unit are used for filtering.
- sample position 15 is the position in the current adaptive correction filtering unit closest to samples 4 and 9, so the sample value at position 15 can be used as the values of samples 4 and 9 for filtering; sample position 16 is the position in the current adaptive correction filtering unit closest to samples 5 and 10, so the sample value at position 16 can be used as the values of samples 5 and 10 for filtering.
- when the sample values at reference pixel sample positions 0, 3 and 8 cannot be obtained, the sample value at the position within the current adaptive correction filtering unit closest to samples 0, 3 and 8 is used for filtering; sample position 14 is that closest position in the current adaptive correction filtering unit.
- the ALF filtering optimization scheme is described in detail below.
- Embodiment 16 Adaptive area division with LCU as the smallest unit
- the fixed area division may be performed according to the manner described in the "area division" section above.
- the luminance component is divided into 16 regions, each region is marked as K, K ∈ [0, 15], and each region contains 1 or more LCUs.
- the LCUs belonging to the same area are divided again.
- the intra-region LCU division can use an LCU merging method: calculate the cost of merging LCUs in pairs, merge the two LCUs with the smallest cost, and so on; calculate the total cost for each case where only [1, N6] classes remain after merging, choose the division with the smallest cost among all of them, and mark each LCU with its final selection.
- each LCU is marked as 1 or 0, that is, the value of the area type identifier of the above LCU includes 1 or 0.
- for the LCU marked as 0 (that is, the value of the area category identifier is 0), the area category to which it belongs is 2K; for the LCU marked as 1 (that is, the value of the area category identifier is 1), the category to which it belongs is 2K+1.
- the luminance component is divided into 32 regions at most. After the regions are divided, the 32 regions are merged to calculate the filter coefficient of each merged region.
- the LCU area division result (that is, the area type identifier of each LCU) and the filter coefficient obtained after the area division are sent to the decoding device through the code stream.
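The pairwise merging described above can be sketched as a greedy loop. As an illustrative assumption, each class is summarized by (sample count, sum, sum of squares) and the merge cost is the increase in sum of squared error around the class mean, a simple stand-in for the real rate-distortion cost of retraining a filter for the merged class:

```python
def sse(n, s, s2):
    # sum of squared error of a group around its own mean
    return s2 - (s * s) / n if n else 0.0

def merge_cost(a, b):
    # cost increase incurred by merging groups a and b
    n, s, s2 = a[0] + b[0], a[1] + b[1], a[2] + b[2]
    return sse(n, s, s2) - sse(*a) - sse(*b)

def greedy_merge(groups, target):
    """Repeatedly merge the pair of groups with the smallest cost
    increase until only `target` groups remain (proxy for the LCU
    merging described above)."""
    groups = list(groups)
    while len(groups) > target:
        best = None
        for i in range(len(groups)):
            for j in range(i + 1, len(groups)):
                c = merge_cost(groups[i], groups[j])
                if best is None or c < best[0]:
                    best = (c, i, j)
        _, i, j = best
        a, b = groups[i], groups[j]
        groups[j:j + 1] = []                       # drop group j
        groups[i] = (a[0] + b[0], a[1] + b[1], a[2] + b[2])  # keep the merge
    return groups
```

In the real scheme the cost would be evaluated for every remaining class count in [1, N6] and the cheapest division kept; the sketch stops at one target count for brevity.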
- a fixed area division can be performed in the manner described in the "area division" section above.
- the luminance component is divided into 16 regions, and a schematic diagram of the division result is shown in FIG. 2 .
- the area type identifier of each LCU can be obtained; the LCU whose area type identifier is 0 belongs to area category 2K, and the LCU whose area type identifier is 1 belongs to area category 2K+1.
- the filter coefficient of each LCU is determined, and the pixels of each LCU are ALF filtered one by one based on the filter coefficient of each LCU.
- Embodiment 17 An optimization scheme that can transmit 2 groups of filter coefficients in each region
- for each LCU, assume that 0 means ALF filtering is turned off, 1 means the first group of filter coefficients is used, and 2 means the second group of filter coefficients is used; that is, the value of the LCU coefficient identifier of the LCU includes 0, 1 or 2.
- the first value is 0, and the non-first values include 1 and 2.
- a coefficient identifier is transmitted for each region: 0 means only the first group of filter coefficients is used, 1 means only the second group of filter coefficients is used, and 2 means both groups of filter coefficients are used.
- the filter shape of the first group of filter coefficients may be as shown in FIG. 4A
- the filter shape of the second group of filter coefficients may be as shown in FIG. 12 (a centro-symmetric filter shape of a 7*7 cross plus a 3*3 square).
- the filtering parameters are first trained using all the pixels of the LCUs in the current region. After the LCU-level decision is made, when training the first group of coefficients only the LCUs whose identifier value is 1 participate in the training, and when training the second group of coefficients only the LCUs whose identifier value is 2 participate in the training. Finally, through an RDO decision, it is determined whether the current region uses only one of the groups of filters or achieves better performance using both groups of filters.
- the value of the LCU coefficient identifier of each LCU is 0 or 1; if only the performance of the second group of filter coefficients is optimal, the value of the regional coefficient identifier is determined to be 1 and written into the code stream, the second group of filter coefficients is written into the code stream, and the LCU coefficient identifier of each LCU is written into the code stream, the value of the LCU coefficient identifier of each LCU being 0 or 1; if the performance of both groups of filter coefficients together is optimal, the value of the regional coefficient identifier is determined to be 2 and written into the code stream, both groups of filter coefficients are written into the code stream, and the LCU coefficient identifier of each LCU is written into the code stream, the value of the LCU coefficient identifier of each LCU being 0, 1 or 2.
- for any merged area, read the area coefficient identifier of the merged area from the code stream. If the value of the area coefficient identifier is 0, the merged area obtains 15 filter coefficients (i.e., the first group of filter coefficients); if the value of the area coefficient identifier is 1, the merged area obtains 9 filter coefficients (i.e., the second group of filter coefficients); if the value of the area coefficient identifier is 2, the merged area obtains both 9 filter coefficients and 15 filter coefficients.
- the LCU coefficient identifiers of all LCUs in the merged area are obtained; if the value of the LCU coefficient identifier is 0, it means that ALF filtering is turned off for the LCU, that is, ALF filtering is not started for the LCU; if the value of the LCU coefficient identifier is 1, it means that ALF filtering is turned on for the LCU, that is, ALF filtering is started for the LCU, and the LCU uses the first group of filter coefficients.
- the LCU coefficient identifiers of all LCUs in the merged area are obtained; if the value of the LCU coefficient identifier is 0, it means that ALF filtering is disabled for the LCU, that is, ALF filtering is not started for the LCU; if the value of the LCU coefficient identifier is 1, it means that ALF filtering is turned on for the LCU, that is, ALF filtering is started for the LCU, and the LCU uses the second group of filter coefficients.
- the LCU coefficient identifiers of all LCUs in the merged area are obtained; if the value of the LCU coefficient identifier is 0, it means that ALF filtering is disabled for the LCU, that is, ALF filtering is not enabled for the LCU; if the value of the LCU coefficient identifier is 1, it means that ALF filtering is turned on for the LCU, that is, ALF filtering is started for the LCU, and the LCU uses the first group of filter coefficients; if the value of the LCU coefficient identifier is 2, it means that ALF filtering is turned on for the LCU, that is, ALF filtering is enabled for the LCU, and the LCU uses the second group of filter coefficients.
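The decoder-side mapping in the three cases above can be condensed into one function (the function name and the "first"/"second" labels are illustrative; None stands for ALF being turned off for the LCU):

```python
def lcu_filter_choice(region_flag, lcu_flag):
    """Map the region coefficient identifier (0: only first group,
    1: only second group, 2: both groups) and an LCU coefficient
    identifier to the filter group the LCU actually uses."""
    if lcu_flag == 0:
        return None                      # ALF filtering is off for this LCU
    if region_flag == 0:
        return "first"                   # 15-coefficient filter
    if region_flag == 1:
        return "second"                  # 9-coefficient filter
    # region transmits both groups: LCU flag 1 -> first, 2 -> second
    return "first" if lcu_flag == 1 else "second"
```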
- Embodiment 18 Each LCU adaptively selects a set of filter coefficients
- each LCU can select up to 3 sets of filter coefficients.
- the candidate filter coefficients of the LCU include the filter coefficients of the merged region to which the LCU belongs (which may be referred to as the filter coefficients of the current merged region), the filter coefficients of the merged region before the merged region to which the LCU belongs (which may be referred to as the filter coefficients of the previous merged region), and the filter coefficients of the merged region after the merged region to which the LCU belongs (which may be referred to as the filter coefficients of the latter merged region).
- the performance under 3 sets of filter coefficients can be calculated respectively, and a set of filter coefficients with the best performance can be selected.
- if the filter coefficient with the best performance is the filter coefficient of the previous merged region, the value of the coefficient selection flag of the LCU is 0 (that is, taking the above first value being 0 as an example);
- if the filter coefficient with the best performance is the filter coefficient of the current merged region, the value of the coefficient selection flag of the LCU is 1 (that is, taking the above second value being 1 as an example);
- if the filter coefficient with the best performance is the filter coefficient of the latter merged region, the value of the coefficient selection flag of the LCU is 2 (that is, taking the above third value being 2 as an example).
- the candidate filter coefficients of the LCU may include the filter coefficients of merged region 1, the filter coefficients of merged region 2, and the filter coefficients of merged region 3; the filter coefficient with the best performance can be determined based on the RDO decision, and the LCU can be marked based on the decision result.
- since the chrominance component (U component or V component) has only one set of filter coefficients, its LCUs may not participate in the selection of filter coefficients, or the LCUs of the two chrominance components may select the filter coefficients of the other component, that is, the LCU of the U component can select the filter coefficients of the V component, and the LCU of the V component can select the filter coefficients of the U component.
- alternatively, the LCU may not participate in the selection of filter coefficients, or may select the filter coefficients of the other component as above, that is, the LCU of the U component can select the filter coefficients of the V component, and the LCU of the V component can select the filter coefficients of the U component.
- Embodiment 19 Each region selects filters of different shapes
- the filter shape can be selected from the filter shapes shown in FIG. 4A or FIG. 9; two or more of the four filter shapes shown in FIG. 11A to FIG. 11D can also be selected, or alternatively filter shapes other than those shown in FIG. 4A, FIG. 9 and FIG. 11A to FIG. 11D can be used.
- N3 sets of filter coefficients can be trained.
- the filter shapes of the N3 sets of filter coefficients are different.
- the performance of the merged region under each filter shape is calculated, and the best-performing set of filter coefficients is selected; the corresponding filter shape is sent to the decoding device through the code stream, and the best-performing filter coefficients of the merged region are sent to the decoding device.
- the filter shape of the combined region is acquired based on the combined region to which the LCU belongs, and the filter coefficient of the combined region is acquired based on the filter shape.
- ALF filtering may be performed on the pixels of the current LCU one by one based on the filter coefficient of the current LCU.
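A hypothetical decoder-side sketch of reading a merged region's filter shape and then the matching number of coefficients. The symbol-iterator bitstream model and the shape indices are illustrative assumptions; the coefficient counts follow the two shapes discussed in Embodiment 17 (15 for the FIG. 4A shape, 9 for the FIG. 12 shape):

```python
# transmitted coefficient count per candidate shape: centro-symmetric
# filters transmit only half the taps plus the centre coefficient
SHAPE_COEFF_COUNT = {0: 15,   # 7*7 cross plus 5*5 square (Fig. 4A)
                     1: 9}    # 7*7 cross plus 3*3 square (Fig. 12)

def read_region_filter(symbols):
    """symbols is a hypothetical iterator of already-parsed values:
    first the merged region's shape index, then its coefficients."""
    shape = next(symbols)
    coeffs = [next(symbols) for _ in range(SHAPE_COEFF_COUNT[shape])]
    return shape, coeffs
```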
- Embodiment 20 Modify a symmetric filter to an asymmetric filter
- the weight is 2-Ai.
- Ai ∈ {0.5, 0.6, 0.8, 1.2, 1.4, 1.5, 1}.
- the filtering coefficient and filtering performance of the region can be calculated under different weighting coefficients corresponding to each position.
- a set of filter coefficients with the best filtering performance is selected, and the filter coefficients and the corresponding weight coefficients at each position of the corresponding filter are recorded.
- a label (or referred to as an index) for identifying the position of the weight coefficient in the weight coefficient set may be transmitted to the decoding device.
- the decoding device acquires filter coefficients of each region and weight coefficients corresponding to each filter coefficient.
- the filtered pixel value of the pixel can be determined by: p'(x, y) = Σ(i=0..13) Ci × (Ai × Pi + (2 - Ai) × P̄i) + C14 × P14
- Ci is the (i+1)th filter coefficient in the filter coefficients of the merged region to which the LCU belongs
- Pi is the pixel value of the reference pixel position corresponding to the filter coefficient Ci, and P̄i is the pixel value of the reference pixel position that is centrally symmetric to Pi about the pixel position of the current filter pixel
- Ai is the weight coefficient of the pixel value of the reference pixel position corresponding to Pi
- P14 is the pixel value of the current filter pixel
- C14 is the filter coefficient of the current filter pixel, 0 < Ai < 2.
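Under this reading, each centro-symmetric pair is weighted by Ai on one side and 2 - Ai on the other, so Ai = 1 recovers the symmetric filter. A minimal sketch (the function name and argument layout are illustrative assumptions):

```python
def asymmetric_filter(pairs, center_pixel, coeffs, weights, c14):
    """pairs[i] = (Pi, Pi_bar): a reference sample and its centro-symmetric
    counterpart about the current filter pixel; coeffs[i] = Ci;
    weights[i] = Ai with 0 < Ai < 2.
    Computes p' = sum_i Ci*(Ai*Pi + (2-Ai)*Pi_bar) + C14*P14."""
    acc = c14 * center_pixel
    for (p, p_bar), c, a in zip(pairs, coeffs, weights):
        acc += c * (a * p + (2 - a) * p_bar)
    return acc
```

With weights[i] = 1 for every i, the pair term reduces to Ci*(Pi + Pi_bar), i.e. the ordinary symmetric filter.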
- Embodiment 21 Clipping the pixel to be filtered to the range of its 3*3 neighborhood
- for any pixel to be filtered, the maximum value and the minimum value of the pixel values in its 3*3 pixel block are taken (the 3*3 pixel block with the current pixel to be filtered as the center point, excluding the current pixel to be filtered), that is, the maximum value and the minimum value among the pixel values of the 8 pixels other than the center position in the 3*3 pixel block centered on the current pixel to be filtered.
- the maximum or minimum value replaces the pixel value of the current pixel to be filtered when it participates in filtering: if the pixel value of the current pixel to be filtered is greater than the maximum value, the pixel value of the current pixel to be filtered is replaced with the maximum value to participate in the filtering; if the pixel value of the current pixel to be filtered is less than the minimum value, the pixel value of the current pixel to be filtered is replaced with the minimum value to participate in the filtering.
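The replacement rule above is exactly a clamp of the centre pixel into the range spanned by its 8 neighbours. A minimal sketch (the function name and the row-list block layout are illustrative):

```python
def clip_to_neighbourhood(block):
    """block: 3*3 list of pixel rows centred on the pixel to be filtered.
    Clamp the centre value into [min, max] of its 8 neighbours before it
    participates in filtering, as described above."""
    neighbours = [block[r][c] for r in range(3) for c in range(3)
                  if (r, c) != (1, 1)]        # exclude the centre pixel
    lo, hi = min(neighbours), max(neighbours)
    return min(max(block[1][1], lo), hi)
```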
- Embodiment 22 Adaptive correction filtering and decoding process
- Adaptive correction filtering enable flag (alf_enable_flag): binary variable. A value of '1' indicates that adaptive correction filtering can be used; a value of '0' indicates that adaptive correction filtering should not be used.
- the value of AlfEnableFlag is equal to the value of alf_enable_flag.
- the value of alf_enable_flag can be obtained from the sequence header, that is, before the entire sequence starts to be compressed: a value of '1' indicates that the ALF technology is turned on for the entire video sequence, and a value of '0' indicates that it is turned off; this is a sequence header flag.
- Enhanced adaptive correction filtering enable flag (ealf_enable_flag): a binary variable. A value of '1' indicates that enhanced adaptive correction filtering can be used; a value of '0' indicates that enhanced adaptive correction filtering should not be used.
- EalfEnableFlag is equal to the value of ealf_enable_flag, and its syntax is described as follows:
- the enhanced adaptive correction filtering permission flag is read from the code stream, which is the sequence header flag.
- Picture-level adaptive correction filtering enable flag (picture_alf_enable_flag[compIdx]): a binary variable. A value of '1' indicates that adaptive correction filtering can be used for the compIdx-th component of the current image; a value of '0' indicates that adaptive correction filtering should not be used for the compIdx-th component of the current image.
- PictureAlfEnableFlag[compIdx] is equal to the value of picture_alf_enable_flag[compIdx]
- its syntax is described as follows:
- the image-level adaptive correction filtering enable flag of the three components of Y, U, and V is read from the code stream, which is the image header flag.
- Image luminance component sample adaptive correction filter coefficient (alf_coeff_luma[i][j]): alf_coeff_luma[i][j] represents the jth coefficient of the ith adaptive correction filter of the luminance component.
- AlfCoeffLuma[i][j] is equal to the value of alf_coeff_luma[i][j].
- Image chroma component adaptive correction filter coefficient (alf_coeff_chroma[0][j], alf_coeff_chroma[1][j]): alf_coeff_chroma[0][j] represents the jth adaptive correction filter coefficient of the Cb component, alf_coeff_chroma [1][j] represents the j-th adaptive correction filter coefficient of the Cr component.
- the value of AlfCoeffChroma[0][j] is equal to the value of alf_coeff_chroma[0][j]
- the value of AlfCoeffChroma[1][j] is equal to the value of alf_coeff_chroma[1][j].
- alf_region_distance[i] represents the difference between the starting label of the i-th adaptive correction filtering region basic unit of the luminance component and the starting label of the (i-1)th adaptive correction filtering region basic unit.
- the value range of alf_region_distance[i] should be 1 to FilterNum-1.
- when alf_region_distance[i] does not exist in the bit stream: if i is equal to 0, the value of alf_region_distance[i] is 0; if i is not equal to 0 and the value of alf_filter_num_minus1 is FilterNum-1, the value of alf_region_distance[i] is 1.
- Adaptive Modified Filtering Enable Flag of the Maximum Coding Unit (alf_lcu_enable_flag[compIdx][LcuIndex]): a binary variable.
- a value of '1' indicates that the samples of the compIdx-th component of the LcuIndex-th largest coding unit should use adaptive correction filtering; a value of '0' indicates that the samples of the compIdx-th component of the LcuIndex-th largest coding unit should not use adaptive correction filtering.
- AlfLCUEnableFlag[compIdx][LcuIndex] is equal to the value of alf_lcu_enable_flag[compIdx][LcuIndex]
- its syntax is described as follows:
- if the picture-level adaptive correction filtering enable flag is on, then for the chrominance components the filter coefficients alf_coeff_chroma on each component need to be obtained; for the luminance component, the region merging mode flag alf_region_order_idx, the number of filter coefficients minus 1 (alf_filter_num_minus1), the region merging result alf_region_distance[i], and each filter coefficient alf_coeff_luma need to be obtained.
- if the value of PictureAlfEnableFlag[compIdx] is 0, the value of the offset sample component is directly used as the value of the corresponding reconstructed sample component; otherwise, adaptive correction filtering is performed on the corresponding offset sample component.
- compIdx equal to 0 represents the luminance component, equal to 1 represents the Cb component, and equal to 2 represents the Cr component.
- the unit of adaptive correction filtering is the adaptive correction filtering unit derived from the maximum coding unit, processed in raster scan order. First, the adaptive correction filter coefficients of each component are obtained according to the decoding process of the adaptive correction filter coefficients; then the adaptive correction filtering unit is derived; next, the adaptive correction filter coefficient index of the luminance component of the current adaptive correction filtering unit is obtained; finally, adaptive correction filtering is performed on the luminance and chrominance components of the adaptive correction filtering unit to obtain the reconstructed samples.
- alfCoeffIdxTab[count+1] = alfCoeffIdxTab[count]
- alfCoeffIdxTab[count+1] = alfCoeffIdxTab[count] + 1
- alfCoeffIdxTab[i] = alfCoeffIdxTab[count]
- the coefficients AlfCoeffChroma[0][14] and AlfCoeffChroma[1][14] are processed as follows:
- the adaptive correction filtering unit (as shown in Figure 5) is derived according to the following steps:
- if the sample row where the upper boundary of sample area E1 is located belongs to the upper boundary of the image, or belongs to a slice boundary and the value of cross_patch_loopfilter_enable_flag is '0', make sample area E2 equal to sample area E1; otherwise,
- the upper boundary of the component sample area E1 is extended upwards by four lines to obtain the sample area E2.
- the first row of samples in the sample area E1 is the upper boundary of the area;
- the sample area E2 is the current adaptive correction filtering unit.
- the first row of samples of the image is the upper boundary of the image, and the last row of samples is the lower boundary of the image.
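The derivation of sample area E2 from E1 can be sketched for the vertical direction (the boolean flags and the half-open row-coordinate convention are illustrative assumptions):

```python
def derive_filter_unit(y0, y1, at_picture_top, at_patch_top, cross_patch_ok):
    """Derive the vertical extent [y0, y1) of adaptive correction
    filtering unit E2 from sample area E1: extend the upper boundary up
    by four rows unless that boundary is the picture top, or a patch
    (slice) boundary with cross-patch loop filtering disabled."""
    if at_picture_top or (at_patch_top and not cross_patch_ok):
        return y0, y1          # E2 equals E1
    return y0 - 4, y1          # upper boundary extended upwards by 4 rows
```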
- the adaptive correction filter coefficient index (referred to as filterIdx) of the current luminance component adaptive correction filter unit is calculated according to the following method:
- (x, y) is the coordinate in the image of the upper left corner sample of the maximum coding unit that derives the current adaptive correction filter unit, and the regionTable is defined as follows:
- regionTable[16] = {0, 1, 4, 5, 15, 2, 3, 6, 14, 11, 10, 7, 13, 12, 9, 8}
- the adaptive correction filter coefficient index (denoted as filterIdx) of the current luminance component adaptive correction filter unit is calculated according to the following method.
- y_interval = ((((vertical_size + lcu_height - 1) / lcu_height) + 4) / 8 * lcu_height)
- x_interval = ((((horizontal_size + lcu_width - 1) / lcu_width) + 4) / 8 * lcu_width)
- y_cnt = Clip3(0, 8, (vertical_size + y_interval - 1) / y_interval)
- y_st_offset = vertical_size - y_interval * (y_cnt - 1)
- y_st_offset = (y_st_offset + lcu_height / 2) / lcu_height * lcu_height
- x_cnt = Clip3(0, 8, (horizontal_size + x_interval - 1) / x_interval)
- x_st_offset = horizontal_size - x_interval * (x_cnt - 1)
- x_st_offset = (x_st_offset + lcu_width / 2) / lcu_width * lcu_width
- (x, y) is the coordinate in the image of the upper left corner sample of the maximum coding unit that derives the current adaptive correction filter unit, and the regionTable is defined as follows:
- regionTable[4][64] = {{63, 60, 59, 58, 5, 4, 3, 0, 62, 61, 56, 57, 6, 7, 2, 1, 49, 50, 55, 54, 9, 8, 13, 14, 48, 51, 52, 53, 10, 11, 12, 15, 47, 46, 33, 32, 31, 30, 17, 16, 44, 45, 34, 35, 28, 29, 18, 19, 43, 40, 39, 36, 27, 24, 23, 20, 42, 41, 38, 37, 26, 25, 22, 21}, {42, 43, 44, 47, 48, 49, 62, 63, 41, 40, 45, 46, 51, 50, 61, 60, 38, 39, 34, 33, 52, 55, 56, 59, 37, 36, 35, 32, 53, 54, 57, 58, 26, 27, 28, 31, 10, 9, 6, 5, 25, 24, 29, 30, 11, 8, 7, 4, 22, 23, 18, 17, 12, 13, 2, 3, 21, 20, 19, 16, 15, 14, 1, 0}, {
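The interval and offset formulas above can be evaluated directly; in this sketch '/' from the text is taken as truncating integer division, as is conventional in codec specifications (the function wrapper and parameter order are illustrative):

```python
def Clip3(lo, hi, v):
    # clamp v into [lo, hi], as defined in video coding specifications
    return max(lo, min(hi, v))

def region_grid(vertical_size, horizontal_size, lcu_height, lcu_width):
    """Evaluate the y/x interval, count, and start-offset formulas used
    to derive the region index of the luminance component."""
    y_interval = (((vertical_size + lcu_height - 1) // lcu_height) + 4) // 8 * lcu_height
    x_interval = (((horizontal_size + lcu_width - 1) // lcu_width) + 4) // 8 * lcu_width
    y_cnt = Clip3(0, 8, (vertical_size + y_interval - 1) // y_interval)
    y_st_offset = vertical_size - y_interval * (y_cnt - 1)
    y_st_offset = (y_st_offset + lcu_height // 2) // lcu_height * lcu_height
    x_cnt = Clip3(0, 8, (horizontal_size + x_interval - 1) // x_interval)
    x_st_offset = horizontal_size - x_interval * (x_cnt - 1)
    x_st_offset = (x_st_offset + lcu_width // 2) // lcu_width * lcu_width
    return y_interval, y_cnt, y_st_offset, x_interval, x_cnt, x_st_offset
```

For a 1920x1080 picture with 64x64 LCUs this yields a grid of at most 8x8 = 64 regions, matching the 64-entry regionTable above.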
- if the left boundary of the current adaptive correction filtering unit is the image boundary, or is located outside the slice boundary and the value of CplfEnableFlag is 0, the area outside the left boundary does not exist; otherwise, the area outside the left boundary is the area from the current adaptive correction filtering unit moved left by 3 sample points to the current adaptive correction filtering unit.
- if the right boundary of the current adaptive correction filtering unit is outside the image boundary, or is located outside the slice boundary and the value of CplfEnableFlag is 0, the area outside the right boundary does not exist; otherwise, the area outside the right boundary is the area from the current adaptive correction filtering unit moved right by 3 sample points to the current adaptive correction filtering unit.
- the border area includes a left border area and a right border area.
- if EalfEnableFlag is equal to 0: when a sample used in the adaptive correction filtering process is a sample in the adaptive correction filtering unit, the sample is directly used for filtering; when a sample used in the adaptive correction filtering process does not belong to the adaptive correction filtering unit, filtering is performed as follows:
- the adaptive modification filtering operation of the luminance component of the adaptive modification filtering unit is as follows:
- p(x, y) is the sample after offset
- p'(x, y) is the reconstructed sample
- the adaptive correction filtering operation of the chrominance component of the adaptive correction filtering unit is as follows:
- p(x, y) is the sample after offset
- p'(x, y) is the reconstructed sample
- if EalfEnableFlag is equal to 1: when a sample used in the adaptive correction filtering process is a sample in the adaptive correction filtering unit, the sample is directly used for filtering; when a sample used in the adaptive correction filtering process does not belong to the adaptive correction filtering unit, filtering is performed as follows:
- the adaptive modification filtering operation of the luminance component of the adaptive modification filtering unit is as follows:
- p(x, y) is the sample after offset
- p'(x, y) is the reconstructed sample
- the adaptive correction filtering operation of the chrominance component of the adaptive correction filtering unit is as follows:
- the image can be divided into fixed regions, and the result of the fixed region division can be shown in Figure 2, and the index value of each region can be obtained.
- the regions can be considered to be divided into the 8 types shown in FIG. 15A, or only part of the division methods shown in FIG. 15A can be retained, as shown in FIG. 15B.
- the encoding device may determine the final division mode based on the RDO decision, and transmit the division mode of each region to the decoding device through the code stream.
- the 16 areas obtained by the fixed division method can thus be divided into a maximum of 64 areas.
- the decoding device can first perform fixed area division, and then read the specific division method of each area from the code stream to obtain the final division method of the entire frame.
- the divided area numbers may be as shown in FIG. 15C .
- the value of J is the maximum index value of the previous region+1.
- the number of LCUs contained in the image width and height can be determined according to the image width and height. If the fixed area division uses fixed 4*4 areas, with each area index as shown in Figure 2, then when the number of LCUs in the width or height direction is less than 4, there will be areas in some columns or rows that do not contain image information. All these regions that do not contain image information are denoted as set G; the size of the set G is denoted as N7, and N7 is a positive integer.
- An embodiment of the present application provides a filtering device, wherein the filtering device can be applied to an encoding device or a decoding device, and the device can include a filtering unit, configured to: determine whether the current adaptive modification filtering unit allows the use of enhanced adaptive modification filtering; If it is determined that the current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering, the first filter is used to perform adaptive correction filtering on the current adaptive correction filtering unit; if it is determined that the current adaptive correction filtering unit does not allow the use of enhanced adaptive correction filtering If adaptive correction filtering is performed, the second filter is used to perform adaptive correction filtering on the current adaptive correction filtering unit.
- the first filter may be a center-symmetric filter of 7*7 cross and 5*5 square; the second filter may be a center-symmetric filter of 7*7 cross and 3*3 square.
- the filtering unit is further configured to, in the process of performing adaptive correction filtering on the current filter pixel in the current adaptive correction filtering unit, for any reference pixel of the current filter pixel: when the reference pixel is in the current adaptive correction filtering unit, use the pixel value of the reference pixel to perform adaptive correction filtering; when the reference pixel is not in the current adaptive correction filtering unit and the pixel value of the reference pixel cannot be obtained, use the pixel closest to the reference pixel position in the current adaptive correction filtering unit in place of the reference pixel to perform adaptive correction filtering; when the pixel value of the reference pixel can be obtained, use the pixel value of the reference pixel to perform adaptive correction filtering.
- the situation that the pixel value of the reference pixel cannot be obtained includes one of the following: the reference pixel is outside the image boundary of the current image frame; the reference pixel is outside the slice boundary of the current slice and Filtering across slice boundaries is not allowed; the reference pixel is outside the upper boundary of the current adaptive correction filtering unit; or the reference pixel is outside the lower boundary of the current adaptive correction filtering unit.
- the filtering unit is further configured to: if it is determined that the current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering, and the pixel value of a pixel position used for performing the adaptive correction filtering of the current adaptive correction filtering unit cannot be obtained, use the pixel closest to the reference pixel position in the current adaptive correction filtering unit in place of the reference pixel for adaptive correction filtering; if it is determined that the current adaptive correction filtering unit does not allow the use of enhanced adaptive correction filtering, and the pixel value of a pixel position used for performing the adaptive correction filtering of the current adaptive correction filtering unit cannot be obtained, use the pixel closest to the reference pixel position in the current adaptive correction filtering unit in place of the reference pixel for adaptive correction filtering.
- the filtering unit determines whether the current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering, including: determining the value of a flag bit used to indicate whether the current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering; when the value of the flag bit is the first value, it is determined that the current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering, and when the value of the flag bit is the second value, it is determined that the current adaptive correction filtering unit does not allow the use of enhanced adaptive correction filtering.
- the flag bit used to indicate whether the current adaptive correction filtering unit is allowed to use enhanced adaptive correction filtering is EalfEnableFlag; the value of EalfEnableFlag is derived by the decoding device, or the decoding device obtains the value of EalfEnableFlag from the code stream, or the value of EalfEnableFlag is a constant value.
- obtaining the value of EalfEnableFlag from the code stream at the decoding device includes: determining the value of EalfEnableFlag based on the value of the enhanced adaptive correction filtering permission flag parsed from the code stream.
- the enhanced adaptive correction filtering permission flag is a sequence level parameter.
- FIG. 16 is a schematic structural diagram of a filtering apparatus provided by an embodiment of the present application, wherein the filtering apparatus may be applied to an encoding/decoding apparatus, and the apparatus may include a filtering unit 1610 configured to: in the process of performing ALF filtering on the current filter pixel in the current adaptive correction filtering unit, for any reference pixel of the current filter pixel, when the reference pixel is in the current adaptive correction filtering unit, use the pixel value of the reference pixel for filtering; when the reference pixel is not in the current adaptive correction filtering unit and the pixel value of the reference pixel cannot be obtained, use the pixel closest to the reference pixel position in the current adaptive correction filtering unit for filtering instead of the reference pixel; when the pixel value of the reference pixel can be obtained, use the pixel value of the reference pixel for filtering.
- the situation in which the pixel value of the reference pixel cannot be obtained includes one of the following: the reference pixel is outside the image boundary of the current image frame; the reference pixel is outside the slice boundary of the current slice and filtering across slice boundaries is not allowed; or the reference pixel is outside the upper boundary or the lower boundary of the current adaptive correction filtering unit.
- FIG. 17 is a schematic structural diagram of a filtering apparatus provided by an embodiment of the present application, wherein the filtering apparatus may be applied to an encoding/decoding apparatus, and the apparatus may include a filtering unit 1710, configured to: determine the current Whether the adaptive correction filtering unit allows the use of enhanced adaptive correction filtering; if it is determined that the current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering, use the first filter to perform adaptive correction filtering on the current adaptive correction filtering unit ; If it is determined that the current adaptive correction filtering unit does not allow the use of enhanced adaptive correction filtering, use the second filter to perform adaptive correction filtering on the current adaptive correction filtering unit.
- The first filter is a center-symmetric filter shaped as a 7*7 cross plus a 5*5 square.
- The second filter is a center-symmetric filter shaped as a 7*7 cross plus a 3*3 square.
- The filtering unit 1710 determining whether the current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering includes: determining the value of a flag bit used to indicate whether the current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering; when the flag bit takes a first value, determining that the current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering; when the flag bit takes a second value, determining that the current adaptive correction filtering unit does not allow the use of enhanced adaptive correction filtering.
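The flag-driven choice between the two filter shapes can be sketched as follows; the concrete flag values (1 for the "first value", 0 for the "second value") and the names are assumptions for illustration, since the text does not fix them:

```python
# Hypothetical concrete values: the text only says "first value" / "second value".
FIRST_VALUE = 1
SECOND_VALUE = 0

def select_filter_shape(ealf_enable_flag: int) -> str:
    """Return the filter shape used for the current adaptive correction
    filtering unit, following the flag semantics described above."""
    if ealf_enable_flag == FIRST_VALUE:
        return "7*7 cross + 5*5 square"  # enhanced filtering allowed -> first filter
    return "7*7 cross + 3*3 square"      # not allowed -> second filter

print(select_filter_shape(FIRST_VALUE))   # 7*7 cross + 5*5 square
print(select_filter_shape(SECOND_VALUE))  # 7*7 cross + 3*3 square
```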
- The filtering unit 1710 is further configured to: in the process of performing adaptive correction filtering on a current filtering pixel in the current adaptive correction filtering unit, for any reference pixel of the current filtering pixel, when the reference pixel is within the current adaptive correction filtering unit, use the pixel value of the reference pixel for adaptive correction filtering; when the reference pixel is not within the current adaptive correction filtering unit: if the pixel value of the reference pixel cannot be obtained, use the pixel within the current adaptive correction filtering unit closest to the position of the reference pixel in place of the reference pixel for adaptive correction filtering; if the pixel value of the reference pixel can be obtained, use the pixel value of the reference pixel for adaptive correction filtering.
- The cases in which the pixel value of the reference pixel cannot be obtained include one of the following: the reference pixel is outside the image boundary of the current image frame; the reference pixel is outside the slice boundary of the current slice and filtering across the slice boundary is not allowed; or the reference pixel is outside the upper boundary or the lower boundary of the current adaptive correction filtering unit.
- The filtering unit 1710 is further configured to: if it is determined that the current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering, and the pixel value of a pixel position used for the adaptive correction filtering of the current adaptive correction filtering unit cannot be obtained, use the pixel within the current adaptive correction filtering unit closest to the position of the reference pixel in place of the reference pixel for adaptive correction filtering; if it is determined that the current adaptive correction filtering unit does not allow the use of enhanced adaptive correction filtering, and the pixel value of a pixel position used for the adaptive correction filtering of the current adaptive correction filtering unit cannot be obtained, likewise use the pixel within the current adaptive correction filtering unit closest to the position of the reference pixel in place of the reference pixel for adaptive correction filtering.
- FIG. 18 is a schematic diagram of a hardware structure of a decoding device according to an embodiment of the present application.
- the decoding apparatus may include a processor 1801 and a machine-readable storage medium 1802 having machine-executable instructions stored thereon.
- The processor 1801 and the machine-readable storage medium 1802 may communicate via a system bus 1803. By reading and executing the machine-executable instructions corresponding to the filtering control logic stored in the machine-readable storage medium 1802, the processor 1801 can perform the filtering method applied to the decoding apparatus described above.
- the machine-readable storage medium 1802 referred to herein can be any electronic, magnetic, optical, or other physical storage device that can contain or store information, such as executable instructions, data, and the like.
- The machine-readable storage medium may be: RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (such as a hard disk drive), a solid-state drive, any type of storage disc (such as an optical disc or DVD), or similar storage media, or a combination thereof.
- A machine-readable storage medium stores machine-executable instructions that, when executed by a processor, implement the filtering method applied to the decoding apparatus described above.
- the machine-readable storage medium may be ROM, RAM, CD-ROM, magnetic tapes, floppy disks, optical data storage devices, and the like.
- FIG. 19 is a schematic diagram of a hardware structure of an encoding device according to an embodiment of the present application.
- the encoding apparatus may include a processor 1901 and a machine-readable storage medium 1902 having machine-executable instructions stored thereon.
- The processor 1901 and the machine-readable storage medium 1902 may communicate via a system bus 1903. By reading and executing the machine-executable instructions corresponding to the filtering control logic stored in the machine-readable storage medium 1902, the processor 1901 can perform the filtering method applied to the encoding apparatus described above.
- the machine-readable storage medium 1902 referred to herein can be any electronic, magnetic, optical, or other physical storage device that can contain or store information, such as executable instructions, data, and the like.
- The machine-readable storage medium may be: RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (such as a hard disk drive), a solid-state drive, any type of storage disc (such as an optical disc or DVD), or similar storage media, or a combination thereof.
- A machine-readable storage medium has machine-executable instructions stored therein which, when executed by a processor, implement the filtering method applied to the encoding apparatus described above.
- the machine-readable storage medium may be ROM, RAM, CD-ROM, magnetic tapes, floppy disks, optical data storage devices, and the like.
- a camera device including the filtering device in any of the above-mentioned embodiments.
Description
if(AlfEnableFlag){ | |
ealf_enable_flag | u(1) |
} |
if(AlfEnableFlag){ | |
for(compIdx=0;compIdx<3;compIdx++){ | |
picture_alf_enable_flag[compIdx] | u(1) |
} |
Definition of adaptive correction filtering parameters | Descriptor |
alf_parameter_set(){ | |
if(EalfEnableFlag){ | |
coeffNum=15 | |
FilterNum=64 | |
} | |
else{ | |
FilterNum=16 | |
coeffNum=9 | |
} | |
if(PictureAlfEnableFlag[0]==1){ | |
alf_filter_num_minus1 | ue(v) |
if(EalfEnableFlag){ | |
alf_region_order_idx | u(2) |
} | |
for(i=0;i<alf_filter_num_minus1+1;i++){ | |
if((i>0)&&(alf_filter_num_minus1!=FilterNum)) | |
alf_region_distance[i] | ue(v) |
for(j=0;j<coeffNum;j++) | |
alf_coeff_luma[i][j] | se(v) |
} | |
} | |
if(PictureAlfEnableFlag[1]==1){ | |
for(j=0;j<coeffNum;j++) | |
alf_coeff_chroma[0][j] | se(v) |
} | |
if(PictureAlfEnableFlag[2]==1){ | |
for(j=0;j<coeffNum;j++) | |
alf_coeff_chroma[1][j] | se(v) |
} | |
} |
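The opening branch of alf_parameter_set() above fixes the number of filters and the number of coefficients per filter from EalfEnableFlag. A minimal sketch of that derivation (the values mirror the syntax table: enhanced ALF uses 64 filters with 15 coefficients each, otherwise 16 filters with 9 coefficients each):

```python
def alf_parameter_counts(ealf_enable_flag: bool):
    """Mirror the first branch of alf_parameter_set(): return
    (coeffNum, FilterNum) for the given EalfEnableFlag."""
    if ealf_enable_flag:
        coeff_num, filter_num = 15, 64
    else:
        coeff_num, filter_num = 9, 16
    return coeff_num, filter_num

print(alf_parameter_counts(True))   # (15, 64)
print(alf_parameter_counts(False))  # (9, 16)
```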
Value of j | Value of Hor[j] | Value of Ver[j] |
0 | 0 | 3 |
1 | 0 | 2 |
2 | 1 | 1 |
3 | 0 | 1 |
4 | 1 | -1 |
5 | 3 | 0 |
6 | 2 | 0 |
7 | 1 | 0 |
Value of j | Value of Hor[j] | Value of Ver[j] |
0 | 0 | 3 |
1 | 2 | 2 |
2 | 1 | 2 |
3 | 0 | 2 |
4 | 1 | -2 |
5 | 2 | -2 |
6 | 2 | 1 |
7 | 1 | 1 |
8 | 0 | 1 |
9 | 1 | -1 |
10 | 2 | -1 |
11 | 3 | 0 |
12 | 2 | 0 |
13 | 1 | 0 |
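The two tables above list half of the tap offsets of the center-symmetric filter supports: the first (j = 0..7) corresponds to the 7*7 cross plus 3*3 square shape, the second (j = 0..13) to the 7*7 cross plus 5*5 square shape. Assuming each (Hor[j], Ver[j]) entry is mirrored through the center tap (which is what center symmetry implies), the second table expands to the full support as sketched below:

```python
# Hor[j] / Ver[j] offsets from the second table above
# (7*7 cross plus 5*5 square shape, j = 0..13).
HOR = [0, 2, 1, 0, 1, 2, 2, 1, 0, 1, 2, 3, 2, 1]
VER = [3, 2, 2, 2, -2, -2, 1, 1, 1, -1, -1, 0, 0, 0]

def expand_symmetric(hor, ver):
    """Mirror every listed offset through the center (center symmetry)
    and add the center tap (0, 0)."""
    taps = {(0, 0)}
    for h, v in zip(hor, ver):
        taps.add((h, v))
        taps.add((-h, -v))
    return taps

taps = expand_symmetric(HOR, VER)
# 29 taps in total: the full 5*5 square (25 positions) plus the outer
# cross arms (0, 3), (0, -3), (3, 0), (-3, 0).
print(len(taps))  # 29
```

The same expansion applied to the first table yields 17 taps (the 3*3 square plus the cross arms), matching coeffNum = 9 (8 symmetric pairs plus the center) versus coeffNum = 15 (14 pairs plus the center) in alf_parameter_set().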
Claims (18)
- A filtering method, applied to an encoding device or a decoding device, characterized in that the method comprises: determining whether a current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering; if it is determined that the current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering, performing adaptive correction filtering on the current adaptive correction filtering unit using a first filter; if it is determined that the current adaptive correction filtering unit does not allow the use of enhanced adaptive correction filtering, performing adaptive correction filtering on the current adaptive correction filtering unit using a second filter; wherein the first filter is a center-symmetric filter shaped as a 7*7 cross plus a 5*5 square, and the second filter is a center-symmetric filter shaped as a 7*7 cross plus a 3*3 square.
- The method according to claim 1, characterized in that, in the process of performing adaptive correction filtering on a current filtering pixel in the current adaptive correction filtering unit, for any reference pixel of the current filtering pixel: when the reference pixel is within the current adaptive correction filtering unit, the pixel value of the reference pixel is used for adaptive correction filtering; when the reference pixel is not within the current adaptive correction filtering unit: if the pixel value of the reference pixel cannot be obtained, the pixel within the current adaptive correction filtering unit closest to the position of the reference pixel is used in place of the reference pixel for adaptive correction filtering; if the pixel value of the reference pixel can be obtained, the pixel value of the reference pixel is used for adaptive correction filtering.
- The method according to claim 2, characterized in that the cases in which the pixel value of the reference pixel cannot be obtained include one of the following: the reference pixel is outside the image boundary of the current image frame; the reference pixel is outside the slice boundary of the current slice and filtering across the slice boundary is not allowed; the reference pixel is outside the upper boundary of the current adaptive correction filtering unit; or the reference pixel is outside the lower boundary of the current adaptive correction filtering unit.
- The method according to claim 2 or 3, characterized in that the method further comprises: if it is determined that the current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering, and the pixel value of a pixel position used for the adaptive correction filtering of the current adaptive correction filtering unit cannot be obtained, using the pixel within the current adaptive correction filtering unit closest to the position of the reference pixel in place of the reference pixel for adaptive correction filtering; if it is determined that the current adaptive correction filtering unit does not allow the use of enhanced adaptive correction filtering, and the pixel value of a pixel position used for the adaptive correction filtering of the current adaptive correction filtering unit cannot be obtained, using the pixel within the current adaptive correction filtering unit closest to the position of the reference pixel in place of the reference pixel for adaptive correction filtering.
- The method according to claim 1, characterized in that determining whether the current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering comprises: determining the value of a flag bit used to indicate whether the current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering; when the flag bit takes a first value, determining that the current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering; when the flag bit takes a second value, determining that the current adaptive correction filtering unit does not allow the use of enhanced adaptive correction filtering.
- The method according to claim 5, characterized in that the flag bit used to indicate whether the current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering is EalfEnableFlag; the value of EalfEnableFlag is derived by the decoding device, or the value of EalfEnableFlag is obtained from a bitstream at the decoding device, or the value of EalfEnableFlag is a constant value.
- The method according to claim 6, characterized in that obtaining the value of EalfEnableFlag from the bitstream at the decoding device comprises: determining the value of EalfEnableFlag based on the value of an enhanced adaptive correction filtering enable flag parsed from the bitstream, wherein the enhanced adaptive correction filtering enable flag is a sequence-level parameter.
- A filtering apparatus, applied to an encoding device or a decoding device, characterized in that the apparatus comprises: a filtering unit configured to determine whether a current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering; the filtering unit is further configured to: if it is determined that the current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering, perform adaptive correction filtering on the current adaptive correction filtering unit using a first filter; if it is determined that the current adaptive correction filtering unit does not allow the use of enhanced adaptive correction filtering, perform adaptive correction filtering on the current adaptive correction filtering unit using a second filter; wherein the first filter is a center-symmetric filter shaped as a 7*7 cross plus a 5*5 square, and the second filter is a center-symmetric filter shaped as a 7*7 cross plus a 3*3 square.
- The apparatus according to claim 8, characterized in that the filtering unit is further configured to: in the process of performing adaptive correction filtering on a current filtering pixel in the current adaptive correction filtering unit, for any reference pixel of the current filtering pixel: when the reference pixel is within the current adaptive correction filtering unit, use the pixel value of the reference pixel for adaptive correction filtering; when the reference pixel is not within the current adaptive correction filtering unit: if the pixel value of the reference pixel cannot be obtained, use the pixel within the current adaptive correction filtering unit closest to the position of the reference pixel in place of the reference pixel for adaptive correction filtering; if the pixel value of the reference pixel can be obtained, use the pixel value of the reference pixel for adaptive correction filtering.
- The apparatus according to claim 9, characterized in that the cases in which the pixel value of the reference pixel cannot be obtained include one of the following: the reference pixel is outside the image boundary of the current image frame; the reference pixel is outside the slice boundary of the current slice and filtering across the slice boundary is not allowed; the reference pixel is outside the upper boundary of the current adaptive correction filtering unit; or the reference pixel is outside the lower boundary of the current adaptive correction filtering unit.
- The apparatus according to claim 9 or 10, characterized in that the filtering unit is further configured to: if it is determined that the current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering, and the pixel value of a pixel position used for the adaptive correction filtering of the current adaptive correction filtering unit cannot be obtained, use the pixel within the current adaptive correction filtering unit closest to the position of the reference pixel in place of the reference pixel for adaptive correction filtering; if it is determined that the current adaptive correction filtering unit does not allow the use of enhanced adaptive correction filtering, and the pixel value of a pixel position used for the adaptive correction filtering of the current adaptive correction filtering unit cannot be obtained, use the pixel within the current adaptive correction filtering unit closest to the position of the reference pixel in place of the reference pixel for adaptive correction filtering.
- The apparatus according to claim 8, characterized in that the filtering unit determining whether the current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering comprises: determining the value of a flag bit used to indicate whether the current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering; when the flag bit takes a first value, determining that the current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering; when the flag bit takes a second value, determining that the current adaptive correction filtering unit does not allow the use of enhanced adaptive correction filtering.
- The apparatus according to claim 12, characterized in that the flag bit used to indicate whether the current adaptive correction filtering unit allows the use of enhanced adaptive correction filtering is EalfEnableFlag; the value of EalfEnableFlag is derived by the decoding device, or the value of EalfEnableFlag is obtained from a bitstream at the decoding device, or the value of EalfEnableFlag is a constant value.
- The apparatus according to claim 13, characterized in that obtaining the value of EalfEnableFlag from the bitstream at the decoding device comprises: determining the value of EalfEnableFlag based on the value of an enhanced adaptive correction filtering enable flag parsed from the bitstream, wherein the enhanced adaptive correction filtering enable flag is a sequence-level parameter.
- A decoding device, characterized by comprising a processor and a machine-readable storage medium, wherein the machine-readable storage medium stores machine-executable instructions executable by the processor, and the processor is configured to execute the machine-executable instructions to implement the filtering method according to any one of claims 1 to 7.
- An encoding device, characterized by comprising a processor and a machine-readable storage medium, wherein the machine-readable storage medium stores machine-executable instructions executable by the processor, and the processor is configured to execute the machine-executable instructions to implement the filtering method according to any one of claims 1 to 7.
- A non-volatile storage medium having machine-executable instructions stored thereon which, when executed by a processor, cause the processor to implement the filtering method according to any one of claims 1 to 7.
- A camera device, comprising the filtering apparatus according to any one of claims 8 to 14.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2023547310A JP2024506002A (ja) | 2021-03-05 | 2022-03-02 | Method, apparatus and device for filtering
US18/262,227 US20240146916A1 (en) | 2021-03-05 | 2022-03-02 | Filtering method and apparatus and devices |
KR1020237024597A KR20230119718A (ko) | 2021-03-05 | 2022-03-02 | Method, apparatus and device used for filtering
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110247471.2A CN114640858B (zh) | 2021-03-05 | 2021-03-05 | Filtering method, apparatus and device
CN202110247471.2 | 2021-03-05 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022184109A1 true WO2022184109A1 (zh) | 2022-09-09 |
Family
ID=78980555
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/078876 WO2022184109A1 (zh) | Method, apparatus and device for filtering | 2021-03-05 | 2022-03-02 |
Country Status (6)
Country | Link |
---|---|
US (1) | US20240146916A1 (zh) |
JP (1) | JP2024506002A (zh) |
KR (1) | KR20230119718A (zh) |
CN (2) | CN114640858B (zh) |
TW (1) | TWI806468B (zh) |
WO (1) | WO2022184109A1 (zh) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114640858B (zh) | 2021-03-05 | 2023-05-26 | Hangzhou Hikvision Digital Technology Co., Ltd. | Filtering method, apparatus and device |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120177104A1 (en) * | 2011-01-12 | 2012-07-12 | Madhukar Budagavi | Reduced Complexity Adaptive Loop Filter (ALF) for Video Coding |
US20120189064A1 (en) * | 2011-01-14 | 2012-07-26 | Ebrisk Video Inc. | Adaptive loop filtering using multiple filter shapes |
CN104702963A (zh) * | 2015-02-13 | 2015-06-10 | 北京大学 | 一种自适应环路滤波的边界处理方法及装置 |
CN105306957A (zh) * | 2015-10-23 | 2016-02-03 | 北京中星微电子有限公司 | 自适应环路滤波方法和设备 |
US20180324420A1 (en) * | 2015-11-10 | 2018-11-08 | Vid Scale, Inc. | Systems and methods for coding in super-block based video coding framework |
EP3481064A1 (en) * | 2017-11-06 | 2019-05-08 | Dolby Laboratories Licensing Corp. | Adaptive loop filtering for high dynamic range video |
CN113824956A (zh) * | 2020-08-24 | 2021-12-21 | 杭州海康威视数字技术股份有限公司 | 滤波方法、装置、设备及机器可读存储介质 |
CN113852831A (zh) * | 2021-03-05 | 2021-12-28 | 杭州海康威视数字技术股份有限公司 | 滤波方法、装置、设备及机器可读存储介质 |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102857751B (zh) * | 2011-07-01 | 2015-01-21 | 华为技术有限公司 | 一种视频编解码方法和装置 |
KR102276854B1 (ko) * | 2014-07-31 | 2021-07-13 | 삼성전자주식회사 | 인루프 필터 파라미터 예측을 사용하는 비디오 부호화 방법 및 그 장치, 비디오 복호화 방법 및 그 장치 |
WO2018170801A1 (zh) * | 2017-03-22 | 2018-09-27 | 华为技术有限公司 | 图像滤波方法及装置 |
WO2019204672A1 (en) * | 2018-04-19 | 2019-10-24 | Huawei Technologies Co., Ltd. | Interpolation filter for an intra prediction apparatus and method for video coding |
WO2020185879A1 (en) * | 2019-03-11 | 2020-09-17 | Dolby Laboratories Licensing Corporation | Video coding using reference picture resampling supporting region of interest |
US11546587B2 (en) * | 2019-04-11 | 2023-01-03 | Mediatek Inc. | Adaptive loop filter with adaptive parameter set |
-
2021
- 2021-03-05 CN CN202110247471.2A patent/CN114640858B/zh active Active
- 2021-03-05 CN CN202111146284.1A patent/CN113852831B/zh active Active
-
2022
- 2022-03-02 KR KR1020237024597A patent/KR20230119718A/ko active Search and Examination
- 2022-03-02 WO PCT/CN2022/078876 patent/WO2022184109A1/zh active Application Filing
- 2022-03-02 US US18/262,227 patent/US20240146916A1/en active Pending
- 2022-03-02 JP JP2023547310A patent/JP2024506002A/ja active Pending
- 2022-03-03 TW TW111107827A patent/TWI806468B/zh active
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120177104A1 (en) * | 2011-01-12 | 2012-07-12 | Madhukar Budagavi | Reduced Complexity Adaptive Loop Filter (ALF) for Video Coding |
US20120189064A1 (en) * | 2011-01-14 | 2012-07-26 | Ebrisk Video Inc. | Adaptive loop filtering using multiple filter shapes |
CN104702963A (zh) * | 2015-02-13 | 2015-06-10 | 北京大学 | 一种自适应环路滤波的边界处理方法及装置 |
CN105306957A (zh) * | 2015-10-23 | 2016-02-03 | 北京中星微电子有限公司 | 自适应环路滤波方法和设备 |
US20180324420A1 (en) * | 2015-11-10 | 2018-11-08 | Vid Scale, Inc. | Systems and methods for coding in super-block based video coding framework |
EP3481064A1 (en) * | 2017-11-06 | 2019-05-08 | Dolby Laboratories Licensing Corp. | Adaptive loop filtering for high dynamic range video |
CN113824956A (zh) * | 2020-08-24 | 2021-12-21 | 杭州海康威视数字技术股份有限公司 | 滤波方法、装置、设备及机器可读存储介质 |
CN113852831A (zh) * | 2021-03-05 | 2021-12-28 | 杭州海康威视数字技术股份有限公司 | 滤波方法、装置、设备及机器可读存储介质 |
Non-Patent Citations (2)
Title |
---|
D. SOCEK (INTEL), A. PURI (INTEL): "Alternate ALF filter shapes for luma", 125. MPEG MEETING; 20190114 - 20190118; MARRAKECH; (MOTION PICTURE EXPERT GROUP OR ISO/IEC JTC1/SC29/WG11), 8 January 2019 (2019-01-08), pages 1 - 4, XP030214025 * |
J. TAQUET (CANON), P. ONNO (CANON), C. GISQUET (CANON), G. LAROCHE (CANON): "CE5-4: alternative luma filter sets and alternative chroma filters for ALF", 15. JVET MEETING; 20190703 - 20190712; GOTHENBURG; (THE JOINT VIDEO EXPLORATION TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ), 18 June 2019 (2019-06-18), Gothenburg SE, pages 1 - 6, XP030205630 * |
Also Published As
Publication number | Publication date |
---|---|
TWI806468B (zh) | 2023-06-21 |
CN113852831B (zh) | 2023-03-28 |
KR20230119718A (ko) | 2023-08-16 |
US20240146916A1 (en) | 2024-05-02 |
TW202241128A (zh) | 2022-10-16 |
CN114640858A (zh) | 2022-06-17 |
CN113852831A (zh) | 2021-12-28 |
CN114640858B (zh) | 2023-05-26 |
JP2024506002A (ja) | 2024-02-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
- KR20190043482A (ko) | Image encoding/decoding method and apparatus, and recording medium storing a bitstream | |
- CN110024385B (zh) | Video encoding/decoding method and apparatus, and recording medium storing a bitstream | |
- WO2022104498A1 (zh) | Intra prediction method, encoder, decoder, and computer storage medium | |
- WO2018219663A1 (en) | A method and a device for picture encoding and decoding | |
- CN113727106B (zh) | Video encoding and decoding methods and apparatuses, electronic device, and storage medium | |
- US20230209051A1 (en) | Filtering method and apparatus, and device | |
- KR20200010113A (ko) | Method and apparatus for effective video encoding/decoding using local illumination compensation | |
- WO2018219664A1 (en) | A method and a device for picture encoding and decoding | |
- WO2023065891A1 (zh) | Multimedia data processing method, apparatus, device, computer-readable storage medium, and computer program product | |
- JP2022544438A (ja) | In-loop filtering method and in-loop filtering apparatus | |
- WO2022184109A1 (zh) | Method, apparatus and device for filtering | |
- CN112929656B (zh) | Filtering method, apparatus and device | |
- CN114598867B (zh) | Filtering method, apparatus and device | |
- WO2022174469A1 (zh) | Illumination compensation method, encoder, decoder, and storage medium | |
- CN114640846A (zh) | Filtering method, apparatus and device | |
- CN118101933A (zh) | Filtering method, apparatus and device | |
- TWI826792B (zh) | Image enhancement method and apparatus | |
- TWI834773B (zh) | Method, apparatus and computer-readable storage medium for encoding and decoding one or more portions of an image using an adaptive loop filter | |
- CN113727103B (zh) | Video encoding and decoding methods and apparatuses, electronic device, and storage medium | |
- WO2024007116A1 (zh) | Decoding method, encoding method, decoder, and encoder |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22762572 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 20237024597 Country of ref document: KR Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 18262227 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2023547310 Country of ref document: JP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 22762572 Country of ref document: EP Kind code of ref document: A1 |