WO2021134706A1 - Method and apparatus for loop filtering - Google Patents

Method and apparatus for loop filtering

Info

Publication number
WO2021134706A1
WO2021134706A1 (PCT/CN2019/130954)
Authority
WO
WIPO (PCT)
Prior art keywords
alf
component
current block
target filter
chrominance component
Prior art date
Application number
PCT/CN2019/130954
Other languages
English (en)
French (fr)
Inventor
马思伟
孟学苇
郑萧桢
王苫社
Original Assignee
Peking University (北京大学)
SZ DJI Technology Co., Ltd. (深圳市大疆创新科技有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University (北京大学) and SZ DJI Technology Co., Ltd. (深圳市大疆创新科技有限公司)
Priority to KR1020227022560A priority Critical patent/KR20220101743A/ko
Priority to CN201980051177.5A priority patent/CN112544081B/zh
Priority to JP2022537322A priority patent/JP2023515742A/ja
Priority to CN202311663799.8A priority patent/CN117596413A/zh
Priority to PCT/CN2019/130954 priority patent/WO2021134706A1/zh
Priority to EP19958181.0A priority patent/EP4087243A4/en
Priority to CN202311663855.8A priority patent/CN117596414A/zh
Publication of WO2021134706A1 publication Critical patent/WO2021134706A1/zh
Priority to US17/853,906 priority patent/US20220345699A1/en


Classifications

    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals (H — Electricity; H04 — Electric communication technique; H04N — Pictorial communication, e.g. television)
    • H04N19/82 Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation, involving filtering within a prediction loop
    • H04N19/132 Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H04N19/176 Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/70 Coding characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H04N19/117 Filters, e.g. for pre-processing or post-processing
    • H04N19/186 Adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component

Definitions

  • the present invention relates to the technical field of digital video coding, and more specifically, to a method and device for loop filtering.
  • the video coding compression process includes: block division, prediction, transformation, quantization, and entropy coding processes to form a hybrid video coding framework.
  • With the development of digital video technology, video coding and decoding standards have gradually taken shape.
  • Mainstream video coding standards include the international standards H.264/MPEG-AVC and H.265/MPEG-HEVC, the Chinese standard AVS2, and the H.266/VVC international standard and AVS3 Chinese standard currently under development.
  • The loop filter includes a deblocking filter (DBF), a sample adaptive offset (SAO) filter, and an adaptive loop filter (ALF); the filtering process still has room for improvement.
  • The present invention provides a method and device for loop filtering which, compared with the prior art, can reduce the complexity of loop filtering and improve the filtering effect.
  • a method for loop filtering including:
  • Encoding is performed according to the filtered chrominance component of the current block, and the total number of the multiple cross-component ALF filters is encoded as a syntax element, wherein the bitstream of one frame of image contains only one syntax element for indicating the total number of the multiple cross-component ALF filters.
  • a method of loop filtering including:
  • The total number of cross-component ALF filters and the index of the target filter are decoded from the code stream, the target filter being the ALF filter used by the chrominance component of the current block, wherein the code stream of one frame of image contains only one syntax element for indicating the total number of cross-component ALF filters;
  • a device for loop filtering including: a memory for storing code;
  • the processor is configured to execute the code stored in the memory to perform the following operations:
  • Encoding is performed according to the filtered chrominance component of the current block, and the total number of the multiple cross-component ALF filters is encoded as a syntax element, wherein the bitstream of one frame of image contains only one syntax element for indicating the total number of the multiple cross-component ALF filters.
  • a loop filtering device including:
  • Memory used to store code
  • the processor is configured to execute the code stored in the memory to perform the following operations:
  • The total number of cross-component ALF filters and the index of the target filter are decoded from the code stream, the target filter being the ALF filter used by the chrominance component of the current block, wherein the code stream of one frame of image contains only one syntax element for indicating the total number of cross-component ALF filters;
  • The technical solutions of the embodiments of the present application improve coding and decoding performance by optimizing the encoding method used in the loop filtering process.
  • Fig. 1 is a structural diagram of a technical solution applying an embodiment of the present application.
  • Fig. 2 is a schematic diagram of a video coding framework according to an embodiment of the present application.
  • Fig. 3 is a schematic diagram of a video decoding framework according to an embodiment of the present application.
  • Fig. 4 is a schematic diagram of a Wiener filter according to an embodiment of the present application.
  • Fig. 5a is a schematic diagram of an ALF filter according to an embodiment of the present application.
  • Fig. 5b is a schematic diagram of another ALF filter according to an embodiment of the present application.
  • Fig. 6 is a schematic flowchart of a loop filtering method according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of the shape of a CC-ALF filter according to an embodiment of the present application.
  • FIG. 8 is a schematic flowchart of a loop filtering method according to another embodiment of the present application.
  • FIG. 9 is a schematic flowchart of a loop filtering method according to another embodiment of the present application.
  • FIG. 10 is a schematic flowchart of a loop filtering device according to another embodiment of the present application.
  • FIG. 11 is a schematic flowchart of a loop filtering device according to another embodiment of the present application.
  • the embodiments of the present application can be applied to standard or non-standard image or video encoders.
  • For example, the encoder of the VVC standard.
  • The sequence numbers of the processes do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
  • Fig. 1 is a structural diagram of a technical solution applying an embodiment of the present application.
  • the system 100 can receive the data 102 to be processed, process the data 102 to be processed, and generate processed data 108.
  • the system 100 may receive the data to be encoded and encode the data to be encoded to generate encoded data, or the system 100 may receive the data to be decoded and decode the data to be decoded to generate decoded data.
  • the components in the system 100 may be implemented by one or more processors.
  • the processor may be a processor in a computing device or a processor in a mobile device (such as a drone).
  • the processor may be any type of processor, which is not limited in the embodiment of the present invention.
  • the processor may include an encoder, a decoder, or a codec, etc.
  • One or more memories may also be included in the system 100.
  • the memory can be used to store instructions and data, for example, computer-executable instructions that implement the technical solutions of the embodiments of the present invention, to-be-processed data 102, processed data 108, and so on.
  • the memory may be any type of memory, which is not limited in the embodiment of the present invention.
  • the data to be encoded may include text, images, graphic objects, animation sequences, audio, video, or any other data that needs to be encoded.
  • The data to be encoded may include sensor data from sensors, such as vision sensors (for example, cameras, infrared sensors), microphones, near-field sensors (for example, ultrasonic sensors, radars), position sensors, temperature sensors, touch sensors, and so on.
  • the data to be encoded may include information from the user, for example, biological information, which may include facial features, fingerprint scans, retinal scans, voice recordings, DNA sampling, and the like.
  • Fig. 2 is a schematic diagram of a video coding framework 2 according to an embodiment of the present application.
  • each frame in the video to be coded is coded in sequence.
  • the current coded frame mainly undergoes processing such as prediction (Prediction), transformation (Transform), quantization (Quantization), and entropy coding (Entropy Coding), and finally the bit stream of the current coded frame is output.
  • the decoding process usually decodes the received code stream according to the inverse process of the above process to recover the video frame information before decoding.
  • the video encoding framework 2 includes an encoding control module 201, which is used to perform decision-making control actions and parameter selection in the encoding process.
  • The encoding control module 201 controls the parameters used in transformation, quantization, inverse quantization, and inverse transformation; controls the selection of intra-frame or inter-frame modes; and controls the parameters of motion estimation and filtering.
  • The control parameters of the encoding control module 201 are also input to the entropy encoding module and encoded to form a part of the encoded bitstream.
  • The frame to be coded undergoes division 202: specifically, it is first divided into slices, which are then divided into blocks.
  • The frame to be encoded is divided into multiple non-overlapping coding tree units (CTUs), and each CTU can be further divided iteratively by quad-tree, binary-tree, or ternary-tree partitioning into a series of smaller coding units (Coding Unit, CU).
  • the CU may also include a prediction unit (Prediction Unit, PU) and a transformation unit (Transform Unit, TU) associated with it.
  • PU is the basic unit of prediction
  • TU is the basic unit of transformation and quantization.
  • The PU and TU are respectively obtained by further dividing the CU into one or more blocks, where one PU includes one or more prediction blocks (PB) and the related syntax elements.
  • the PU and TU may be the same, or obtained by the CU through different division methods.
  • at least two of the CU, PU, and TU are the same.
  • CU, PU, and TU are not distinguished, and prediction, quantization, and transformation are all performed in units of CU.
  • the CTU, CU, or other data units formed are all referred to as coding blocks in the following.
  • the data unit for video encoding may be a frame, a slice, a coding tree unit, a coding unit, a coding block, or any group of the above.
  • the size of the data unit can vary.
  • a prediction process is performed to remove the spatial and temporal redundant information of the current frame to be encoded.
  • predictive coding methods include intra-frame prediction and inter-frame prediction.
  • Intra-frame prediction uses only the reconstructed information in the current frame to predict the current coding block
  • inter-frame prediction uses the information in other previously reconstructed frames (also called reference frames) to predict the current coding block.
  • Specifically, in the embodiment of the present application, the encoding control module 201 decides whether to select intra prediction or inter prediction.
  • The process of intra-frame prediction 203 includes: obtaining the reconstructed blocks of the coded neighboring blocks around the current coding block as reference blocks; calculating a predicted value from the pixel values of the reference blocks using a prediction mode to generate the prediction block; and subtracting the corresponding pixel values of the prediction block from the current coding block to obtain the residual of the current coding block. The residual of the current coding block is transformed 204, quantized 205, and entropy coded 210 to form the code stream of the current coding block. Further, all coded blocks of the current frame to be coded undergo the above coding process to form a part of the coded stream of the frame. In addition, the control and reference data generated in intra-frame prediction 203 are also encoded by entropy encoding 210 to form a part of the encoded bitstream.
  • the transform 204 is used to remove the correlation of the residual of the image block, so as to improve the coding efficiency.
  • Two-dimensional discrete cosine transform (DCT) and two-dimensional discrete sine transform (DST) are usually used.
  • Specifically, the residual information is multiplied by an N×M transformation matrix and its transpose, and the transform coefficients of the current coding block are obtained after the multiplication.
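The multiplication by a transformation matrix and its transpose described above can be sketched as a plain separable 2D transform. This is a toy illustration only; the actual standards define specific integer transform matrices, not the identity or Hadamard-like example used here:

```python
def transpose(M):
    # columns of M become rows
    return [list(row) for row in zip(*M)]

def matmul(A, B):
    # plain dense matrix product A * B
    Bt = transpose(B)
    return [[sum(a * b for a, b in zip(row, col)) for col in Bt] for row in A]

def transform_2d(T, X):
    # forward 2D separable transform: T * X * T^T
    return matmul(matmul(T, X), transpose(T))
```

With the identity matrix the residual block passes through unchanged; with a Hadamard-like matrix `[[1, 1], [1, -1]]` the top-left output coefficient collects the block sum (the "DC" term), mirroring how DCT/DST concentrate energy.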
  • the quantization 205 is used to further improve the compression efficiency.
  • The transform coefficients can be quantized to obtain the quantized coefficients, and the quantized coefficients are then entropy encoded 210 to obtain the residual code stream of the current coding block, where the entropy coding method includes but is not limited to context-adaptive binary arithmetic coding (CABAC).
  • The coded neighboring block in the intra prediction 203 process is a neighboring block that was coded before the current coding block; the residual generated when coding that neighboring block is transformed 204, quantized 205, inverse quantized 206, and inverse transformed 207, then added to the prediction block of the neighboring block to obtain its reconstructed block.
  • The inverse quantization 206 and the inverse transformation 207 are the inverse processes of the quantization 205 and the transformation 204, and are used to restore the residual data before quantization and transformation.
  • the inter prediction process includes motion estimation 208 and motion compensation 209. Specifically, the motion estimation is performed 208 according to the reference frame image in the reconstructed video frame, and the image block most similar to the current encoding block is searched for in one or more reference frame images according to a certain matching criterion as a matching block.
  • the relative displacement with the current coding block is the motion vector (Motion Vector, MV) of the current block to be coded.
  • Motion Compensation is performed 209 on the frame to be coded based on the motion vector and the reference frame to obtain the prediction value of the frame to be coded.
  • the original value of the pixel of the frame to be coded is subtracted from the corresponding predicted value to obtain the residual of the frame to be coded.
  • the residual of the current frame to be encoded is transformed 204, quantized 205, and entropy encoding 210 to form a part of the encoded bitstream of the frame to be encoded.
  • the control and reference data generated in the motion compensation 209 are also encoded by the entropy encoding 210 to form a part of the encoded bitstream.
  • the reconstructed video frame is a video frame obtained after filtering 211.
  • the filtering 211 is used to reduce compression distortion such as blocking effects and ringing effects generated in the encoding process.
  • the reconstructed video frame is used to provide a reference frame for inter-frame prediction; in the decoding process, the reconstructed video frame is output as the final decoded video after post-processing.
  • the filtering 211 includes at least one of the following filtering techniques: deblocking DB filtering, adaptive sample compensation offset SAO filtering, adaptive loop filtering ALF, cross-component ALF (Cross-Component ALF, CC-ALF).
  • ALF is set after DB and/or SAO.
  • For CC-ALF, the luminance component before ALF is used to filter the chrominance component after ALF.
  • the filter parameters in the process of filtering 211 are also transmitted to the entropy coding for coding, forming a part of the coded bitstream.
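As a rough sketch of the CC-ALF idea described above (pre-ALF luma samples refining the post-ALF chroma samples), the following hypothetical snippet weights luma differences around a co-located sample and adds the result to the chroma value. The filter support, the 128 coefficient scaling, and the direct co-location (ignoring 4:2:0 chroma subsampling) are assumptions for illustration, not the exact CC-ALF definition:

```python
def cc_alf_refine(luma, chroma_alf, x, y, coeffs):
    """Derive a chroma correction from co-located pre-ALF luma samples.

    luma       -- 2D list of luma samples before ALF, indexed luma[y][x]
    chroma_alf -- 2D list of chroma samples after ALF
    coeffs     -- dict {(i, j): w'} of integer weights scaled by 128 (assumed)
    """
    c = luma[y][x]
    # weight the differences between neighboring luma samples and the center
    delta = sum(w * (luma[y + j][x + i] - c) for (i, j), w in coeffs.items())
    # round and scale back by 128, then add the correction to the chroma sample
    return chroma_alf[y][x] + ((delta + 64) >> 7)
```

On a flat luma region all differences vanish and the chroma sample is passed through unchanged, which matches the intent of a cross-component *correction* rather than a replacement.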
  • Fig. 3 is a schematic diagram of a video decoding framework 3 according to an embodiment of the present application.
  • video decoding executes operation steps corresponding to video encoding.
  • the residual data undergoes inverse quantization 302 and inverse transformation 303 to obtain original residual data information.
  • If intra-frame prediction is used, the reconstructed image blocks in the current frame are used to construct the prediction information according to the intra-frame prediction method; if inter-frame prediction is used, the reference block in the reconstructed image is determined according to the decoded motion compensation syntax to obtain the prediction information. The prediction information is then superimposed on the residual information and filtered 311 to obtain the reconstructed video frame, which, after post-processing 306, yields the decoded video.
  • the filter 311 may be the same as the filter 211 in FIG. 2 and includes at least one of the following: deblocking DB filter, adaptive sample compensation offset SAO filter, adaptive loop filter ALF, cross-component ALF (Cross-Component ALF, CC-ALF).
  • the filter parameters and control parameters in the filter 311 can be obtained by entropy decoding the coded code stream, and filtering is performed based on the obtained filter parameters and control parameters respectively.
  • the DB filter is used to process pixels on the boundary between the prediction unit PU and the transformation unit TU, and a low-pass filter obtained by training is used to perform nonlinear weighting of boundary pixels, thereby reducing blocking effects.
  • SAO filtering takes a coding block in the frame image as the unit, classifies the pixel values within the coding block, and adds a compensation value to each class of pixels. Different coding blocks use different filtering forms, and the compensation values of the different pixel classes within a coding block differ, so that the reconstructed frame image is closer to the original frame image and the ringing effect is avoided.
  • ALF filtering is a Wiener filtering process.
  • Filter coefficients are calculated so as to minimize the mean-square error (MSE) between the reconstructed frame image and the original frame image.
  • Suppose a pixel signal in the current original encoding frame is X, the reconstructed pixel signal after encoding, DB filtering, and SAO filtering is Y, and the noise or distortion introduced in this process is e, so that Y = X + e.
  • The reconstructed pixel signal Y is filtered with the filter coefficients f of the Wiener filter to form the ALF reconstructed signal X̂, such that the mean-square error between X̂ and the original pixel signal X is minimized; the resulting f is the set of ALF filter coefficients.
  • The calculation formula of f is as follows: f = argmin E[(X − f·Y)²].
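The Wiener criterion above — choose f so that the mean-square error between the filtered reconstruction and the original is smallest — can be illustrated with a hypothetical single-coefficient case, where the least-squares solution has a simple closed form (the real ALF solves a multi-tap system, not this scalar toy):

```python
def wiener_1tap(X, Y):
    """Single-coefficient Wiener fit: f minimizing sum((x - f*y)^2).

    Setting the derivative to zero gives f = sum(x*y) / sum(y*y).
    X -- original pixel samples, Y -- reconstructed (distorted) samples.
    """
    num = sum(x * y for x, y in zip(X, Y))
    den = sum(y * y for y in Y)
    return num / den
```

For example, if every original sample is exactly twice its reconstruction, the fit recovers f = 2 and the residual error is zero.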
  • A filter composed of a set of ALF filter coefficients is shown in Figures 5a and 5b: either 13 filter coefficients C0 to C12 distributed symmetrically, with a filter length L of 7, or 7 filter coefficients C0 to C6 distributed symmetrically, with a filter length L of 5.
  • The filter shown in Figure 5a is also called a 7×7 filter and is applied to the luminance component of the encoded frame.
  • The filter shown in Figure 5b is also called a 5×5 filter and is applied to the chrominance component of the encoded frame.
  • The filter composed of the ALF filter coefficients may also take other forms, for example a symmetrically distributed filter of length 9; this is not limited here.
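The coefficient counts above follow from the diamond shape and its point symmetry; a small sketch, assuming a diamond support |i| + |j| ≤ L/2 as in Figures 5a and 5b:

```python
def diamond_taps(length):
    # all (i, j) offsets inside a diamond of the given filter length
    r = length // 2
    return [(i, j) for i in range(-r, r + 1) for j in range(-r, r + 1)
            if abs(i) + abs(j) <= r]

def unique_coeff_count(length):
    # point-symmetric pairs (i, j) / (-i, -j) share one coefficient;
    # the center tap stands alone
    return (len(diamond_taps(length)) + 1) // 2
```

The 7×7 diamond covers 25 taps but only 13 unique coefficients (C0–C12), and the 5×5 diamond covers 13 taps with 7 unique coefficients (C0–C6), matching the figure description.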
  • The weighted average of the surrounding pixels is used to obtain the filtered result of the current point, that is, the corresponding pixel in the ALF reconstructed image frame.
  • the pixel I (x, y) in the reconstructed image frame is the current pixel to be filtered
  • (x, y) is the position coordinate of the current pixel to be filtered in the encoding frame
  • The filter coefficient at the center of the filter corresponds to the current pixel, and the other filter coefficients correspond one-to-one to the pixels around I(x, y).
  • The filter coefficient values in the filter are the weight values: each filter coefficient is multiplied by its corresponding pixel value, and the weighted sum is the filtered pixel value O(x, y) of the current pixel I(x, y) to be filtered.
  • The specific calculation formula is as follows: O(x, y) = Σ_{(i,j)} w(i, j) · I(x + i, y + j).
  • w(i,j) represents any filter coefficient in the filter
  • (i,j) represents the relative position of the filter coefficient in the filter from the center point
  • i and j are integers greater than −L/2 and less than L/2, where L is the length of the filter.
  • the filter coefficient C12 at the center of the filter is represented as w(0,0)
  • the filter coefficient C6 above C12 is represented as w(0,1)
  • the filter coefficient C11 to the right of C12 is expressed as w(1, 0).
  • each pixel in the reconstructed image frame is filtered in turn to obtain the filtered ALF reconstructed image frame.
  • In some embodiments, the filter coefficient w(i, j) of the filter is a value in the interval [−1, 1).
  • In some embodiments, the filter coefficient w(i, j) is multiplied by 128 and then rounded to obtain w'(i, j), where w'(i, j) is an integer in [−128, 128).
  • Encoding and transmitting the scaled w'(i, j) facilitates hardware encoding and decoding; filtering with the scaled w'(i, j) gives O(x, y) as follows: O(x, y) = ( Σ_{(i,j)} w'(i, j) · I(x + i, y + j) + 64 ) >> 7.
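The integer filtering described above (coefficients scaled by 128, then the weighted sum rounded and shifted back) can be sketched as follows; the rounding offset 64 and right-shift by 7 undo the 128 scaling, and the plain 2D-list pixel layout is for illustration only:

```python
def alf_filter_int(rec, x, y, coeffs):
    """Apply integer ALF at pixel (x, y).

    rec    -- 2D list of reconstructed pixels, indexed rec[y][x]
    coeffs -- dict {(i, j): w'} of integer weights scaled by 128
    """
    acc = sum(w * rec[y + j][x + i] for (i, j), w in coeffs.items())
    return (acc + 64) >> 7  # round, then divide by 128
```

An identity filter `{(0, 0): 128}` passes the pixel through, and two weights of 64 average the corresponding neighbors, just as the fractional weights 1.0 and 0.5 would.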
  • In nonlinear ALF, the filter coefficients are no longer used directly as weights in a weighted average of multiple pixels to obtain the filtered result.
  • Nonlinear parameter factors are introduced to optimize the filtering effect.
  • Filtering I(x, y) with the nonlinear ALF filter gives O'(x, y) as follows: O'(x, y) = I(x, y) + Σ_{(i,j)≠(0,0)} w(i, j) · K( I(x + i, y + j) − I(x, y), k(i, j) ), where K(d, b) = min(b, max(−b, d)).
  • Here again the filter coefficient w(i, j) takes values in [−1, 1).
  • k(i, j) represents the loop filter ALF clipping parameter, hereinafter also referred to as the correction parameter or clip parameter; each filter coefficient w(i, j) corresponds to one clip parameter.
  • For the luminance component, the clip parameter is selected from {1024, 181, 32, 6}; for the chrominance component, the clip parameter is selected from {1024, 161, 25, 4}. The index corresponding to the selected clip parameter, that is, the clip index parameter, is written into the code stream: if the clip parameter is 1024, clip index 0 is written; similarly, if it is 181, 1 is written. The clip index parameters for both the luminance and the chrominance classifications are therefore integers between 0 and 3.
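The nonlinear filtering above can be sketched as follows: each neighbor's difference from the center pixel is clipped by its clip parameter before weighting. Integer weights scaled by 128 are assumed here, as in the linear case:

```python
def clip_diff(d, b):
    # clipping function K(d, b) = min(b, max(-b, d))
    return max(-b, min(b, d))

def alf_filter_nonlinear(rec, x, y, coeffs, clips):
    """Nonlinear ALF: filter clipped differences against the center pixel.

    coeffs -- dict {(i, j): w'} of integer weights scaled by 128, (0, 0) excluded
    clips  -- dict {(i, j): b} of clip parameters, one per coefficient
    """
    c = rec[y][x]
    acc = sum(w * clip_diff(rec[y + j][x + i] - c, clips[(i, j)])
              for (i, j), w in coeffs.items())
    return c + ((acc + 64) >> 7)
```

Because only *differences* are filtered, a flat region passes through untouched, and a large outlier neighbor contributes at most the clip value — the purpose of the nonlinearity.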
  • the encoding frame of the luminance Y component can correspond to 25 sets of filters at most, and the encoding frame of the chrominance UV component corresponds to a set of filters.
  • the pixel category may be a category corresponding to the luminance Y component, but the embodiment of the present application is not limited to this, and the pixel category may also be a category corresponding to other components or all components.
  • the following takes the classification and division and ALF filtering of the coded frame of the luminance Y component as an example for description.
  • The reconstructed image frame after DB filtering and SAO filtering is divided into multiple 4×4 pixel blocks, and these 4×4 blocks are then classified.
  • Each 4×4 block can be classified according to the Laplacian direction: C = 5D + Â,
  • where D is the Laplacian direction and Â is the result of the fine (activity) sub-classification performed after the direction D classification; Â can be obtained in many ways, and only the sub-classification result is used here.
  • The direction D is calculated as follows. First, the Laplacian gradients of the current 4×4 block in different directions are calculated (with k the horizontal and l the vertical coordinate):
  • V_{k,l} = |2R(k, l) − R(k, l−1) − R(k, l+1)|, the vertical Laplacian gradient of the pixel at (k, l);
  • H_{k,l} = |2R(k, l) − R(k−1, l) − R(k+1, l)|, the horizontal Laplacian gradient of the pixel at (k, l);
  • D1_{k,l} = |2R(k, l) − R(k−1, l−1) − R(k+1, l+1)|, the 135-degree Laplacian gradient of the pixel at (k, l);
  • D2_{k,l} = |2R(k, l) − R(k−1, l+1) − R(k+1, l−1)|, the 45-degree Laplacian gradient of the pixel at (k, l);
  • where i and j are the coordinates of the upper-left pixel of the current 4×4 block and R(k, l) is the reconstructed pixel value at position (k, l) in the 4×4 block.
  • the calculated g v represents the Laplacian gradient of the current 4*4 block in the vertical direction.
  • g h represents the Laplacian gradient of the current 4*4 block in the horizontal direction.
  • g d1 represents the Laplacian gradient of the current 4*4 block in the direction of 135 degrees.
  • g d2 represents the Laplacian gradient of the current 4*4 block in the direction of 45 degrees.
  • r_{h,v} represents the ratio of the Laplacian gradients in the horizontal and vertical directions.
  • r_{d1,d2} represents the ratio of the Laplacian gradients in the 135-degree and 45-degree directions.
  • t1 and t2 represent preset thresholds.
  • The classification value C is an integer in the range 0 to 24.
  • The 4×4 blocks in one frame of image are divided into at most 25 categories.
  • Each class of 4×4 blocks has a corresponding set of ALF filter coefficients, giving N sets in total, where N is an integer between 1 and 25.
  • the number of classifications can be classified into any other number in addition to 25 types, which is not limited in the embodiment of the present application.
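The directional classification above can be sketched by accumulating the four Laplacian gradients over a 4×4 block. This toy version visits every pixel of the block and uses a plain row/column indexing convention, whereas the actual standard may subsample the positions it evaluates:

```python
def laplacian_gradients(R, i0, j0):
    """Directional Laplacian gradient sums for the 4x4 block at (i0, j0).

    R is indexed R[row][col]; a one-pixel border around the block is required.
    Returns (g_v, g_h, g_d1, g_d2).
    """
    gv = gh = gd1 = gd2 = 0
    for k in range(i0, i0 + 4):
        for l in range(j0, j0 + 4):
            c = 2 * R[k][l]
            gv += abs(c - R[k - 1][l] - R[k + 1][l])        # vertical
            gh += abs(c - R[k][l - 1] - R[k][l + 1])        # horizontal
            gd1 += abs(c - R[k - 1][l - 1] - R[k + 1][l + 1])  # 135 degrees
            gd2 += abs(c - R[k - 1][l + 1] - R[k + 1][l - 1])  # 45 degrees
    return gv, gh, gd1, gd2
```

A flat block yields zero in every direction, while a checkerboard pattern produces strong vertical and horizontal gradients but none along the diagonals; the direction D is then derived from the ratios of these sums against the thresholds t1 and t2.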
  • ALF filtering can be divided into frame-based ALF, block-based ALF and quad-tree-based ALF.
  • Frame-based ALF uses one set of filter coefficients to filter the entire frame.
  • Block-based ALF divides the coded frame into image blocks of equal size and determines for each image block whether to perform ALF filtering.
  • Quad-tree-based ALF divides the coded frame into image blocks of different sizes using quad-tree partitioning, and likewise decides for each block whether to perform ALF filtering.
  • the frame-based ALF calculation is simple, but the filtering effect is not good, and the quad-tree-based ALF calculation is more complicated. Therefore, in some standards or technologies, such as the latest VVC standard under study, its reference software VTM uses block-based ALF.
  • take the block-based ALF in VTM as an example.
  • a coded frame has a frame-level ALF filter flag and block-level ALF filter flags.
  • the block level may be a CTU, a CU, or an image block in another partition mode, which is not limited in the embodiments of the present application.
  • the CTU-level ALF filter flag is used as an example for illustration below.
  • when the frame-level ALF filter flag indicates that ALF filtering is not performed, the CTU-level ALF filter flags in the coded frame are not signalled.
  • when the frame-level ALF filter flag indicates that ALF filtering is performed, the CTU-level ALF filter flags in the coded frame are signalled, each indicating whether the corresponding CTU performs ALF filtering.
  • the coded frame includes Z CTUs.
  • the N groups of ALF filter coefficients of the coded frame are calculated as follows: the Z CTUs are combined according to whether each CTU performs ALF filtering, and for each combination mode, the N groups of ALF filter coefficients and the rate-distortion cost (RD cost) of the coded frame under that mode are calculated.
  • the i-th group of ALF coefficients in each set is calculated as follows: under the current CTU combination mode, the pixels of class i in the CTUs that perform ALF filtering participate in the computation of f, while the pixels of class i in the other CTUs that do not perform ALF filtering do not participate, yielding the i-th group of ALF coefficients under the current combination mode. It should be understood that the N groups of ALF filter coefficients calculated under different combination modes may differ from one another.
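The per-class coefficient computation described above is a least-squares (Wiener) fit of filter taps to the original pixels. A minimal sketch, assuming the pixels of one class have already been gathered into rows of reconstructed neighbourhood samples; the CTU on/off combination search and integer scaling are omitted:

```python
import numpy as np

def wiener_coeffs(patches, originals):
    """Least-squares (Wiener) filter coefficients for one pixel class.

    patches:   (num_pixels, num_taps) reconstructed neighbourhoods Y
    originals: (num_pixels,) original pixel values X
    Solves min_f E[(X - Y f)^2] via the normal equations (Y^T Y) f = Y^T X.
    """
    Y = np.asarray(patches, dtype=float)
    X = np.asarray(originals, dtype=float)
    R = Y.T @ Y              # autocorrelation matrix
    p = Y.T @ X              # cross-correlation vector
    f, *_ = np.linalg.lstsq(R, p, rcond=None)
    return f
```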
  • when the combination mode with the smallest RD cost has at least one of the Z CTUs performing ALF filtering, the frame-level ALF flag of the coded frame is set to indicate that ALF filtering is performed, and the CTU-level ALF flags in the CTU data in turn indicate whether each CTU performs ALF filtering. For example, when a flag is marked 0, ALF filtering is not performed; when a flag is marked 1, ALF filtering is performed.
  • in particular, when the combination mode with the smallest RD cost has none of the Z CTUs performing ALF filtering, the coded frame is not ALF-filtered, the frame-level ALF flag of the coded frame is set to indicate that ALF filtering is not performed, and the CTU-level ALF flags are not signalled.
  • the ALF in the embodiments of the present application is not only applicable to the VVC standard, but also applicable to other technical solutions or standards using block-based ALF.
  • Cross-Component ALF (CC-ALF)
  • CC-ALF is used to adjust the chrominance component by using the value of the luminance component to improve the quality of the chrominance component.
  • the current block includes a luminance component and a chrominance component, where the chrominance component includes a first chrominance component (for example, Cb in FIG. 6) and a second chrominance component (for example, Cr in FIG. 6).
  • the luminance component is filtered through SAO and ALF in sequence.
  • the first chrominance component is filtered through SAO and ALF in sequence.
  • the second chrominance component is filtered through SAO and ALF in sequence.
  • a CC-ALF filter is also used to perform CC-ALF on the chrominance components.
  • the shape of the CC-ALF filter may be as shown in FIG. 7.
  • the CC-ALF filter uses a 3x4 diamond shape with a total of 8 coefficients.
  • the position marked 2 is the current pixel of the first chrominance component or the second chrominance component, and the weighted average of the surrounding 7 points is used to obtain the filtered result of the pixel at the position marked 2.
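The weighted combination over the 3x4 diamond can be sketched as follows. The `DIAMOND` tap layout, the 4:2:0 co-location rule and the scaling shift are illustrative assumptions, not the exact arrangement of FIG. 7:

```python
import numpy as np

# assumed 3x4 diamond support: (dy, dx) offsets around the co-located luma sample
DIAMOND = [(-1, 0),
           (0, -1), (0, 0), (0, 1),
           (1, -1), (1, 0), (1, 1),
           (2, 0)]

def cc_alf(luma_sao, chroma_alf, coeffs, shift=7):
    """Add a luma-derived correction to each ALF-filtered chroma sample.

    luma_sao:   luma plane after SAO, before luma ALF (4:2:0 -> twice chroma size)
    chroma_alf: chroma plane (Cb or Cr) after its own ALF
    coeffs:     the 8 integer taps of the selected CC-ALF filter
    """
    H, W = chroma_alf.shape
    out = chroma_alf.copy()
    for y in range(H):
        for x in range(W):
            ly, lx = 2 * y, 2 * x          # co-located luma position (4:2:0)
            acc = 0
            for c, (dy, dx) in zip(coeffs, DIAMOND):
                yy = min(max(ly + dy, 0), luma_sao.shape[0] - 1)  # clamp at edges
                xx = min(max(lx + dx, 0), luma_sao.shape[1] - 1)
                acc += c * int(luma_sao[yy, xx])
            out[y, x] = chroma_alf[y, x] + (acc >> shift)
    return out
```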
  • the first chrominance component and the second chrominance component may select the same or different target filters from the same set of CC-ALF filters for filtering, or may each select a target filter from different sets of CC-ALF filters.
  • the total number of CC-ALF filters used in the current image needs to be written into the bitstream, where this total may include the total number of CC-ALF filters of the first chrominance component and/or the total number of CC-ALF filters of the second chrominance component.
  • when the total number of CC-ALF filters of the first chrominance component equals that of the second chrominance component, or when the two chrominance components can select their target filters from the same set of CC-ALF filters, a single CC-ALF filter total may be used as the indication.
  • the index of the target filter selected by the current block is also encoded into the bitstream.
  • when the indexes of the target filters selected by the first chrominance component and the second chrominance component are the same or different, the indexes of the target filters of the two chrominance components may be encoded into the bitstream separately.
  • alternatively, when the two indexes are the same, only one index may be encoded into the bitstream, the index indicating the target filter of both chrominance components.
  • for the first chrominance component: the target filter of the first chrominance component of the current block is determined from multiple CC-ALF filters; the target filter coefficients of the first chrominance component are determined according to the luminance component without ALF (for example, after SAO and without ALF) and the ALF-filtered first chrominance component of the current block.
  • the first chrominance component is filtered according to the target filter and the target filter coefficients of the first chrominance component.
  • the filtering result of the first chrominance component is then determined.
  • for the second chrominance component: the target filter of the second chrominance component of the current block is determined from multiple CC-ALF filters; the target filter coefficients of the second chrominance component are determined according to the luminance component without ALF (for example, after SAO and without ALF) and the ALF-filtered second chrominance component of the current block.
  • the second chrominance component is filtered according to the target filter and the target filter coefficients of the second chrominance component.
  • the filtering result of the second chrominance component is then determined.
  • the total number of the multiple CC-ALF filters is encoded into the bitstream as a syntax element, and the index of the target filter selected by the first chrominance component of the current block and the index of the target filter selected by the second chrominance component are encoded into the bitstream as syntax elements.
  • there is only one syntax element in the bitstream of one frame of image used to indicate the total number of the multiple CC-ALF filters.
  • the syntax element used to indicate the total number of filters of multiple CC-ALFs is located in the adaptation parameter set (Adaptation parameter set syntax) of the image.
  • the syntax element used to indicate the total number of the plurality of cross-component ALF filters does not exist in the image header and/or the slice header.
  • a truncated binary code may be used to encode the syntax element indicating the total number of the multiple CC-ALF filters.
  • a truncated binary code may be used to encode the index of the target filter.
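A truncated binary code spends either ⌊log2 n⌋ or ⌊log2 n⌋+1 bits on a value in [0, n), which is why it suits the filter-total and filter-index syntax elements mentioned above. A minimal bit-string sketch of the encode/decode pair (the actual bitstream packing is not addressed here):

```python
def tb_encode(v, n):
    """Truncated binary codeword for v in [0, n) as a bit string."""
    if n <= 1:
        return ""                       # a single possible value costs 0 bits
    k = n.bit_length() - 1              # floor(log2(n))
    u = (1 << (k + 1)) - n              # number of short (k-bit) codewords
    if v < u:
        return format(v, "b").zfill(k)
    return format(v + u, "b").zfill(k + 1)

def tb_decode(bits, n):
    """Inverse of tb_encode; returns (value, bits_consumed)."""
    if n <= 1:
        return 0, 0
    k = n.bit_length() - 1
    u = (1 << (k + 1)) - n
    prefix = int(bits[:k], 2)
    if prefix < u:
        return prefix, k
    return int(bits[:k + 1], 2) - u, k + 1
```

For example, with n = 5 the codewords are 00, 01, 10, 110, 111 for the values 0 through 4.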
  • the target filter coefficient of the first chrominance component and the target filter coefficient of the second chrominance component of the current block are also encoded into the bitstream.
  • after receiving the bitstream, the decoder decodes from it the index of the target filter selected by the chrominance component of the current block and the total number of CC-ALF filters, and determines the CC-ALF filter of the chrominance component of the current block according to the index and the total number.
  • the decoding end also decodes the target filter coefficient of the chrominance component of the current block from the code stream, so as to filter the ALF chrominance component of the current block according to the target filter and the target filter coefficient.
  • the technical solutions of the embodiments of the present application can be applied to both the encoding end and the decoding end.
  • the following describes the technical solutions of the embodiments of the present application from the encoding end and the decoding end respectively.
  • FIG. 8 shows a schematic flowchart of a method 200 for loop filtering according to an embodiment of the present application.
  • the method 200 may be executed by the encoding end. For example, it can be executed by the system 100 shown in FIG. 1 when performing an encoding operation.
  • S210: Determine a target filter for the chrominance component of the current block from a plurality of cross-component adaptive loop filter (ALF) filters.
  • S220: Determine the target filter coefficients of the chrominance component of the current block according to the ALF-filtered chrominance component of the current block and the non-ALF-filtered luminance component of the current block.
  • S230: Filter the ALF-filtered chrominance component of the current block according to the target filter and the target filter coefficients.
  • S240: Determine the filtered chrominance component of the current block according to the chrominance component filtered with the target filter coefficients and the ALF-filtered chrominance component of the current block.
  • S250: Encode according to the filtered chrominance component of the current block, and encode the total number of the multiple cross-component ALF filters as a syntax element, wherein the bitstream of one frame of image contains only one syntax element used to indicate the total number of the multiple cross-component ALF filters.
  • the syntax element used to indicate the total number of the multiple cross-component ALF filters is located in the adaptation parameter set syntax of the image.
  • the syntax element used to indicate the total number of the plurality of cross-component ALF filters does not exist in the image header and/or the slice header.
  • the ALF chrominance component of the current block is specifically the chrominance component of the current block that has undergone adaptive sample compensation filtering SAO and ALF in sequence.
  • the luminance component of the current block without ALF is specifically the luminance component of the current block after SAO and without ALF.
  • the encoding the total number of the multiple cross-component ALF filters as a syntax element includes: encoding the total number of the multiple cross-component ALF filters by using a truncated binary code.
  • the method further includes: encoding the index of the target filter as a syntax element.
  • the encoding the index of the target filter as a syntax element includes: encoding the index of the target filter by using a truncated binary code.
  • the method further includes: encoding the target filter coefficient of the chrominance component of the current block into a bitstream.
  • FIG. 9 shows a schematic flowchart of a method 300 for loop filtering according to an embodiment of the present application.
  • the method 300 may be executed by the decoding end. For example, it can be executed by the system 100 shown in FIG. 1 when performing a decoding operation.
  • S310: Decode the total number of cross-component ALF filters and the index of the target filter from the bitstream, where the target filter is the ALF filter used by the chrominance component of the current block; the bitstream of one frame of image contains only one syntax element indicating the total number of cross-component ALF filters.
  • S320 Decode the target filter coefficient of the chrominance component of the current block from the code stream, where the target filter coefficient is a coefficient in the target filter.
  • S330 Perform cross-component filtering on the ALF chrominance component of the current block according to the target filter and the target filter coefficient.
  • S340 Determine the filtered chrominance component of the current block according to the chrominance component filtered by the target filter coefficient and the ALF chrominance component of the current block.
  • the syntax element used to indicate the total number of the cross-component ALF filters is located in the adaptation parameter set syntax of the image.
  • the syntax element used to indicate the total number of the cross-component ALF filters does not exist in the image header and/or the slice header.
  • the ALF chrominance component of the current block is specifically the chrominance component of the current block that has undergone adaptive sample compensation filtering SAO and ALF in sequence.
  • the luminance component of the current block without ALF is specifically the luminance component of the current block after SAO and without ALF.
  • said decoding the total number of cross-component ALF filters and the index of the target filter from the bitstream includes: decoding the total number of cross-component ALF filters and/or the index of the target filter using a truncated binary code.
  • Fig. 10 is a schematic block diagram of a device 30 for loop filtering at the encoding end according to an embodiment of the present application.
  • the device 30 for loop filtering is a device for loop filtering at the video encoding end.
  • the loop filtering device 30 may correspond to the method 200 of loop filtering.
  • the loop filtering device 30 includes: a processor 31 and a memory 32;
  • the memory 32 may be used to store programs, and the processor 31 may be used to execute the programs stored in the memory to perform the following operations:
  • encoding is performed according to the filtered chrominance component of the current block, and the total number of the multiple cross-component ALF filters is encoded as a syntax element, wherein the bitstream of one frame of image contains only one syntax element used to indicate the total number of the multiple cross-component ALF filters.
  • the syntax element used to indicate the total number of the plurality of cross-component ALF filters does not exist in the image header and/or the slice header.
  • the ALF chrominance component of the current block is specifically the chrominance component of the current block that has undergone adaptive sample compensation filtering SAO and ALF in sequence.
  • the luminance component of the current block without ALF is specifically the luminance component of the current block after SAO and without ALF.
  • the encoding the total number of the multiple cross-component ALF filters as a syntax element includes:
  • a truncated binary code is used to encode the total number of the multiple cross-component ALF filters.
  • the processor is further configured to:
  • the index of the target filter is coded as a syntax element.
  • the encoding the index of the target filter as a syntax element includes: encoding the index of the target filter using a truncated binary code.
  • FIG. 11 is a schematic block diagram of a device 40 for loop filtering at the decoding end according to an embodiment of the present application.
  • the device 40 for loop filtering may correspond to the method 300 for loop filtering.
  • the loop filtering device 40 includes: a processor 41 and a memory 42;
  • the memory 42 may be used to store programs, and the processor 41 may be used to execute the programs stored in the memory to perform the following operations:
  • the total number of cross-component ALF filters and the index of the target filter are decoded from the bitstream.
  • the target filter is the ALF filter used by the chrominance component of the current block; the bitstream of one frame of image contains only one syntax element used to indicate the total number of cross-component ALF filters;
  • the syntax element used to indicate the total number of the cross-component ALF filters is located in the adaptation parameter set syntax of the image.
  • the ALF chrominance component of the current block is specifically the chrominance component of the current block that has undergone adaptive sample compensation filtering SAO and ALF in sequence.
  • the luminance component of the current block without ALF is specifically the luminance component of the current block after SAO and without ALF.
  • the decoding of the total number of cross-component ALF filters and the index of the target filter from the code stream includes:
  • the truncated binary code is used to decode the total number of the cross-component ALF filters and/or the index of the target filter.
  • the memory of the embodiment of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory.
  • the non-volatile memory can be read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), or flash memory.
  • the volatile memory may be random access memory (RAM), which is used as an external cache.
  • the embodiment of the present application also proposes a computer program, which includes instructions, when the computer program is executed by a computer, the computer can execute the method of the embodiments shown in FIG. 6 to FIG. 14.
  • An embodiment of the present application also provides a chip that includes an input and output interface, at least one processor, at least one memory, and a bus.
  • the at least one memory is used to store instructions, and the at least one processor is used to call the instructions stored in the at least one memory to execute the methods of the embodiments shown in FIG. 6 to FIG. 14.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.


Abstract

A method and apparatus for loop filtering are provided, which reduce the complexity of loop filtering and improve coding and decoding performance by optimizing the signalling used in the codec loop-filtering process. One method for loop filtering includes: determining a target filter for the chrominance component of a current block from among a plurality of cross-component adaptive loop filter (ALF) filters; determining target filter coefficients for the chrominance component of the current block according to the ALF-filtered chrominance component of the current block and the non-ALF-filtered luminance component of the current block; filtering the ALF-filtered chrominance component of the current block according to the target filter and the target filter coefficients; determining the filtered chrominance component of the current block according to the chrominance component filtered with the target filter coefficients and the ALF-filtered chrominance component of the current block; and encoding according to the filtered chrominance component of the current block, and encoding the total number of the plurality of cross-component ALF filters as a syntax element, wherein the bitstream of one frame of image contains only one syntax element used to indicate the total number of the plurality of cross-component ALF filters.

Description

Method and Apparatus for Loop Filtering
Copyright Notice
The disclosure of this patent document contains material that is subject to copyright protection. The copyright belongs to the copyright owner. The copyright owner has no objection to the reproduction by anyone of this patent document or this patent disclosure as it appears in the official records and files of the Patent and Trademark Office.
Technical Field
The present invention relates to the technical field of digital video coding, and more specifically, to a method and apparatus for loop filtering.
Background
At present, to reduce the bandwidth occupied by video storage and transmission, video data needs to be encoded and compressed. In the commonly used coding techniques, the video compression process includes block partitioning, prediction, transform, quantization and entropy coding, which together form a hybrid video coding framework. On the basis of this framework, video codec standards have gradually taken shape over decades of development. Current mainstream video codec standards include the international video coding standards H.264/MPEG-AVC and H.265/MPEG-HEVC, the Chinese audio-video coding standard AVS2, as well as the H.266/VVC international standard and the AVS3 Chinese standard currently under development.
In the coding process of block partitioning, prediction, transform, quantization and entropy coding, quantization introduces compression distortion such as blocking artifacts and ringing artifacts into the decoded reconstructed video; moreover, in inter prediction modes, the compression distortion in the reconstructed video affects the coding quality of subsequent pictures. Therefore, to reduce compression distortion, in-loop filter techniques are introduced into the codec framework to improve the quality of the current decoded picture, to provide high-quality reference pictures for subsequently coded pictures, and to improve compression efficiency.
In the Versatile Video Coding (VVC) standard currently being drafted and in parts of the High Efficiency Video Coding (HEVC) standard, the loop filters include the deblocking filter (DBF), sample adaptive offset (SAO) and the adaptive loop filter (ALF). The filtering process still has room for improvement.
Summary of the Invention
The present invention provides a method and apparatus for loop filtering which, compared with the prior art, can reduce the complexity of loop filtering and improve the filtering effect.
In a first aspect, a method for loop filtering is provided, including:
determining a target filter for the chrominance component of a current block from among a plurality of cross-component adaptive loop filter (ALF) filters;
determining target filter coefficients for the chrominance component of the current block according to the ALF-filtered chrominance component of the current block and the non-ALF-filtered luminance component of the current block;
filtering the ALF-filtered chrominance component of the current block according to the target filter and the target filter coefficients;
determining the filtered chrominance component of the current block according to the chrominance component filtered with the target filter coefficients and the ALF-filtered chrominance component of the current block;
encoding according to the filtered chrominance component of the current block, and encoding the total number of the plurality of cross-component ALF filters as a syntax element, wherein the bitstream of one frame of image contains only one syntax element used to indicate the total number of the plurality of cross-component ALF filters.
In a second aspect, a method for loop filtering is provided, including:
decoding, from a bitstream, the total number of cross-component ALF filters and the index of a target filter, the target filter being the ALF filter used by the chrominance component of a current block, wherein the bitstream of one frame of image contains only one syntax element used to indicate the total number of cross-component ALF filters;
determining target filter coefficients for the chrominance component of the current block according to the ALF-filtered chrominance component of the current block and the non-ALF-filtered luminance component of the current block;
filtering the ALF-filtered chrominance component of the current block according to the target filter and the target filter coefficients;
determining the filtered chrominance component of the current block according to the chrominance component filtered with the target filter coefficients and the ALF-filtered chrominance component of the current block.
In a third aspect, an apparatus for loop filtering is provided, including: a memory for storing code;
a processor for executing the code stored in the memory to perform the following operations:
determining a target filter for the chrominance component of a current block from among a plurality of cross-component adaptive loop filter (ALF) filters;
determining target filter coefficients for the chrominance component of the current block according to the ALF-filtered chrominance component of the current block and the non-ALF-filtered luminance component of the current block;
filtering the ALF-filtered chrominance component of the current block according to the target filter and the target filter coefficients;
determining the filtered chrominance component of the current block according to the chrominance component filtered with the target filter coefficients and the ALF-filtered chrominance component of the current block;
encoding according to the filtered chrominance component of the current block, and encoding the total number of the plurality of cross-component ALF filters as a syntax element, wherein the bitstream of one frame of image contains only one syntax element used to indicate the total number of the plurality of cross-component ALF filters.
In a fourth aspect, an apparatus for loop filtering is provided, including:
a memory for storing code;
a processor for executing the code stored in the memory to perform the following operations:
decoding, from a bitstream, the total number of cross-component ALF filters and the index of a target filter, the target filter being the ALF filter used by the chrominance component of a current block, wherein the bitstream of one frame of image contains only one syntax element used to indicate the total number of cross-component ALF filters;
determining target filter coefficients for the chrominance component of the current block according to the ALF-filtered chrominance component of the current block and the non-ALF-filtered luminance component of the current block;
filtering the ALF-filtered chrominance component of the current block according to the target filter and the target filter coefficients;
determining the filtered chrominance component of the current block according to the chrominance component filtered with the target filter coefficients and the ALF-filtered chrominance component of the current block.
The technical methods of the embodiments of the present application improve coding and decoding performance by optimizing the signalling used in the codec loop-filtering process.
Brief Description of the Drawings
FIG. 1 is an architecture diagram to which the technical solutions of the embodiments of the present application are applied.
FIG. 2 is a schematic diagram of a video encoding framework according to an embodiment of the present application.
FIG. 3 is a schematic diagram of a video decoding framework according to an embodiment of the present application.
FIG. 4 is a schematic diagram of a Wiener filter according to an embodiment of the present application.
FIG. 5a is a schematic diagram of one ALF filter according to an embodiment of the present application.
FIG. 5b is a schematic diagram of another ALF filter according to an embodiment of the present application.
FIG. 6 is a schematic flowchart of a method for loop filtering according to an embodiment of the present application.
FIG. 7 is a schematic diagram of the shape of a CC-ALF filter according to an embodiment of the present application.
FIG. 8 is a schematic flowchart of a method for loop filtering according to another embodiment of the present application.
FIG. 9 is a schematic flowchart of a method for loop filtering according to another embodiment of the present application.
FIG. 10 is a schematic block diagram of an apparatus for loop filtering according to another embodiment of the present application.
FIG. 11 is a schematic block diagram of an apparatus for loop filtering according to another embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below with reference to the accompanying drawings.
The embodiments of the present application are applicable to standard and non-standard image or video encoders, for example, encoders of the VVC standard.
It should be understood that the specific examples herein are only intended to help those skilled in the art better understand the embodiments of the present application, rather than to limit their scope.
It should also be understood that the formulas in the embodiments of the present application are only examples rather than limitations of the scope of the embodiments; each formula may be transformed, and such transformations shall also fall within the protection scope of the present application.
It should also be understood that, in the various embodiments of the present application, the sequence numbers of the processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and shall not constitute any limitation on the implementation process of the embodiments of the present application.
It should also be understood that the various implementations described in this specification may be implemented individually or in combination, which is not limited by the embodiments of the present application.
Unless otherwise specified, all technical and scientific terms used in the embodiments of the present application have the same meanings as commonly understood by those skilled in the technical field of the present application. The terms used in the present application are only for the purpose of describing specific embodiments and are not intended to limit the scope of the present application. The term "and/or" used in the present application includes any and all combinations of one or more of the associated listed items.
FIG. 1 is an architecture diagram to which the technical solutions of the embodiments of the present application are applied.
As shown in FIG. 1, the system 100 may receive data 102 to be processed, process it, and produce processed data 108. For example, the system 100 may receive data to be encoded and encode it to produce encoded data, or the system 100 may receive data to be decoded and decode it to produce decoded data. In some embodiments, the components in the system 100 may be implemented by one or more processors, which may be processors in a computing device or in a mobile device (for example, an unmanned aerial vehicle). The processor may be any kind of processor, which is not limited in the embodiments of the present invention. In some possible designs, the processor may include an encoder, a decoder, a codec, or the like. The system 100 may further include one or more memories. The memories may be used to store instructions and data, for example, computer-executable instructions implementing the technical solutions of the embodiments of the present invention, the data 102 to be processed, the processed data 108, and so on. The memories may be any kind of memory, which is also not limited in the embodiments of the present invention.
The data to be encoded may include text, images, graphic objects, animation sequences, audio, video, or any other data that needs to be encoded. In some cases, the data to be encoded may include sensor data from sensors, which may be vision sensors (for example, cameras or infrared sensors), microphones, near-field sensors (for example, ultrasonic sensors or radar), position sensors, temperature sensors, touch sensors, and so on. In some cases, the data to be encoded may include information from a user, for example, biological information, which may include facial features, fingerprint scans, retina scans, voice recordings, DNA samples, and so on.
FIG. 2 is a schematic diagram of a video encoding framework 2 according to an embodiment of the present application. As shown in FIG. 2, after the video to be encoded is received, each frame of the video is encoded in turn, starting from the first frame. The current frame being encoded mainly goes through prediction, transform, quantization and entropy coding, and the bitstream of the current frame is finally output. Correspondingly, the decoding process usually decodes the received bitstream according to the inverse of the above process to recover the video frame information.
Specifically, as shown in FIG. 2, the video encoding framework 2 includes an encoding control module 201 for decision-making control actions and parameter selection during encoding. For example, as shown in FIG. 2, the encoding control module 201 controls the parameters used in transform, quantization, inverse quantization and inverse transform, controls the selection of intra or inter mode, and controls the parameters of motion estimation and filtering; the control parameters of the encoding control module 201 are also input into the entropy coding module and encoded to form part of the encoded bitstream.
When encoding of the current frame starts, the frame to be encoded is partitioned (202). Specifically, it is first divided into slices, which are then divided into blocks. Optionally, in one example, the frame to be encoded is divided into multiple non-overlapping largest coding tree units (CTUs), each of which may be iteratively divided, in a quadtree, binary-tree or ternary-tree manner, into a series of smaller coding units (CUs). In some examples, a CU may further contain a prediction unit (PU) and a transform unit (TU) associated with it, where the PU is the basic unit of prediction and the TU is the basic unit of transform and quantization. In some examples, the PU and the TU are each obtained by dividing a CU into one or more blocks, where one PU contains multiple prediction blocks (PBs) and associated syntax elements. In some examples, the PU and the TU may be the same, or may be obtained from the CU by different partition methods. In some examples, at least two of the CU, PU and TU are the same; for example, CU, PU and TU are not distinguished, and prediction, quantization and transform are all performed in units of CUs. For convenience of description, CTUs, CUs and other data units formed are hereinafter all referred to as coding blocks.
It should be understood that, in the embodiments of the present application, the data unit targeted by video encoding may be a frame, a slice, a coding tree unit, a coding unit, a coding block, or a group of any of the above. In different embodiments, the size of the data unit may vary.
Specifically, as shown in FIG. 2, after the frame to be encoded is divided into coding blocks, a prediction process is performed to remove the spatial- and temporal-domain redundancy of the current frame. The commonly used predictive coding methods include intra prediction and inter prediction. Intra prediction predicts the current coding block using only reconstructed information within the current frame, while inter prediction uses information from other frames that have been reconstructed before (also called reference frames) to predict the current coding block. Specifically, in the embodiments of the present application, the encoding control module 201 decides whether to select intra prediction or inter prediction.
When intra prediction mode is selected, the intra prediction process 203 includes: obtaining reconstructed blocks of encoded neighbouring blocks around the current coding block as reference blocks; computing a prediction value based on the pixel values of the reference blocks using a prediction-mode method to generate a prediction block; and subtracting the corresponding pixel values of the prediction block from the current coding block to obtain the residual of the current coding block, which goes through transform 204, quantization 205 and entropy coding 210 to form the bitstream of the current coding block. Further, after all coding blocks of the current frame go through the above encoding process, they form part of the encoded bitstream of the frame. In addition, the control and reference data generated in intra prediction 203 are also encoded by entropy coding 210 to form part of the encoded bitstream.
Specifically, transform 204 is used to remove the correlation of the residual of the image block in order to improve coding efficiency. The transform of the residual data of the current coding block usually adopts the two-dimensional discrete cosine transform (DCT) and the two-dimensional discrete sine transform (DST); for example, the encoder multiplies the residual information of the block to be encoded by an N×M transform matrix and its transpose, obtaining the transform coefficients of the current coding block.
After the transform coefficients are generated, quantization 205 is used to further improve compression efficiency. The transform coefficients are quantized to obtain quantized coefficients, which are then entropy-coded 210 to obtain the residual bitstream of the current coding block, where the entropy coding methods include, but are not limited to, context-adaptive binary arithmetic coding (CABAC).
Specifically, the encoded neighbouring blocks in the intra prediction 203 process are neighbouring blocks that were encoded before the current coding block; the residual generated when encoding a neighbouring block is passed through transform 204, quantization 205, inverse quantization 206 and inverse transform 207, and then added to the prediction block of the neighbouring block to obtain its reconstructed block. Correspondingly, inverse quantization 206 and inverse transform 207 are the inverse processes of quantization 205 and transform 204, used to restore the residual data before quantization and transform.
As shown in FIG. 2, when inter prediction mode is selected, the inter prediction process includes motion estimation 208 and motion compensation 209. Specifically, motion estimation 208 is performed according to reference frame images among the reconstructed video frames; in one or more reference frame images, the image block most similar to the current coding block is found according to certain matching criteria as the matching block, and the relative displacement between the matching block and the current coding block is the motion vector (MV) of the current block to be encoded. After motion estimation is performed on all coding blocks of the frame to be encoded, motion compensation 209 is performed on the current frame based on the motion vectors and the reference frames to obtain the prediction values of the frame. The residual of the frame is obtained by subtracting the corresponding prediction values from the original pixel values of the frame. The residual of the current frame goes through transform 204, quantization 205 and entropy coding 210 to form part of the encoded bitstream of the frame. In addition, the control and reference data generated in motion compensation 209 are also encoded by entropy coding 210 to form part of the encoded bitstream.
As shown in FIG. 2, the reconstructed video frames are the video frames obtained after filtering 211. Filtering 211 is used to reduce compression distortion, such as blocking and ringing artifacts, generated during encoding. During encoding, the reconstructed video frames provide reference frames for inter prediction; during decoding, the reconstructed video frames are post-processed and output as the final decoded video. In the embodiments of the present application, filtering 211 includes at least one of the following filtering techniques: deblocking (DB) filtering, sample adaptive offset (SAO) filtering, adaptive loop filtering (ALF), and cross-component ALF (CC-ALF). In one example, the ALF is placed after the DB and/or SAO. In one example, the luminance component before ALF is used to filter the chrominance component after ALF. The filter parameters of the filtering 211 process are likewise passed to entropy coding to form part of the encoded bitstream.
FIG. 3 is a schematic diagram of a video decoding framework 3 according to an embodiment of the present application. As shown in FIG. 3, video decoding performs the operation steps corresponding to video encoding. First, entropy decoding 301 is used to obtain one or more kinds of data information from the encoded bitstream: residual data, prediction syntax, intra prediction syntax, motion compensation syntax and filtering syntax. The residual data goes through inverse quantization 302 and inverse transform 303 to obtain the original residual data information. In addition, whether the current decoded block uses intra prediction or inter prediction is determined according to the prediction syntax. If it is intra prediction 304, prediction information is constructed according to the intra prediction method, using the reconstructed image blocks in the current frame and the decoded intra prediction syntax; if it is inter prediction, reference blocks are determined in the reconstructed images according to the decoded motion compensation syntax to obtain the prediction information. Next, the prediction information is superimposed on the residual information and passed through the filtering 311 operation to obtain the reconstructed video frames, which are post-processed 306 to obtain the decoded video.
Specifically, in the embodiments of the present application, filtering 311 may be the same as filtering 211 in FIG. 2, including at least one of the following: deblocking (DB) filtering, sample adaptive offset (SAO) filtering, adaptive loop filtering (ALF), and cross-component ALF (CC-ALF). The filter parameters and control parameters in filtering 311 can be obtained by entropy-decoding the encoded bitstream, and filtering is performed based on the obtained filter parameters and control parameters.
In one example, DB filtering is used to process the pixels at the boundaries of prediction units (PUs) and transform units (TUs), performing nonlinear weighting of the boundary pixels with trained low-pass filters to reduce blocking artifacts. In one example, SAO filtering takes the coding blocks in a frame as units, classifies the pixel values within a coding block and adds a compensation value to each class of pixels; different coding blocks adopt different filtering forms, and different classes of pixels in different coding blocks have different compensation values, which makes the reconstructed frame closer to the original frame and avoids ringing artifacts. In one example, ALF filtering is a Wiener filtering process: the filter coefficients are calculated according to the Wiener filtering principle, mainly to minimize the mean-square error (MSE) between the reconstructed frame and the original frame, thereby further improving the image quality of the reconstructed frame, improving the accuracy of motion estimation and motion compensation, and effectively improving the coding efficiency of the whole coding system. At the same time, however, ALF filtering is highly complex and time-consuming, and has certain drawbacks in practical applications.
For ease of understanding, an example of the ALF filtering process is described below with reference to FIG. 4, FIG. 5a and FIG. 5b.
Principle of ALF filter coefficient calculation
First, the method of calculating the ALF filter coefficients according to the Wiener filtering principle is described. As shown in FIG. 4, let X be a pixel signal of the original frame currently being encoded, and let Y be the corresponding reconstructed pixel signal after encoding, DB filtering and SAO filtering, where e is the noise or distortion introduced into Y in this process. Filtering the reconstructed pixel signal with the coefficients f of the Wiener filter forms the ALF reconstructed signal X̂ = f * Y. The coefficients f that minimize the mean square error between the ALF reconstructed signal X̂ and the original pixel signal are the ALF filter coefficients; specifically, f is calculated as

    f = argmin_f E[ (X − f * Y)^2 ].

Optionally, in one possible implementation, a filter composed of one set of ALF filter coefficients is as shown in FIG. 5a or FIG. 5b: either 13 symmetrically distributed filter coefficients C0 to C12 with filter length L = 7, or 7 symmetrically distributed filter coefficients C0 to C6 with filter length L = 5. Optionally, the filter shown in FIG. 5a, also called a 7*7 filter, is applied to the luminance component of the coded frame, and the filter shown in FIG. 5b, also called a 5*5 filter, is applied to the chrominance components of the coded frame.
It should be understood that, in the embodiments of the present application, the filter composed of the ALF filter coefficients may also take other forms, for example a symmetrically distributed filter of length 9, which is not limited in the embodiments of the present application.
Optionally, in a linear ALF filtering process, for a pixel to be filtered in the reconstructed picture, the weighted average of the surrounding pixels is used to obtain the filtered result of the current pixel, i.e. the corresponding pixel in the ALF reconstructed picture. Specifically, let I(x, y) be the current pixel to be filtered in the reconstructed picture, with (x, y) its position coordinates in the coded frame. The filter coefficient at the centre of the filter corresponds to it, and the other filter coefficients correspond one-to-one to the pixels around I(x, y); the filter coefficient values are the weights, and multiplying each filter coefficient by its corresponding pixel and summing yields the filtered pixel value O(x, y) of the current pixel I(x, y):

    O(x, y) = Σ_(i,j) w(i, j) · I(x + i, y + j),

where w(i, j) is any filter coefficient in the filter, (i, j) is the position of the filter coefficient relative to the centre, and i and j are integers greater than −L/2 and smaller than L/2, L being the filter length. For example, for the filter shown in FIG. 5a, the centre coefficient C12 is denoted w(0, 0), the coefficient C6 above C12 is denoted w(0, 1), and the coefficient C11 to the right of C12 is denoted w(1, 0).
In this manner, each pixel in the reconstructed picture is filtered in turn, obtaining the filtered ALF reconstructed picture.
Optionally, in one possible implementation, the filter coefficients w(i, j) take values in [−1, 1).
Optionally, in one possible implementation, the filter coefficients w(i, j) are amplified by a factor of 128 and rounded to obtain w'(i, j), where w'(i, j) is an integer in [−128, 128). Encoding and transmitting the amplified w'(i, j) is easy for hardware codecs to implement, and filtering with the amplified w'(i, j) yields O(x, y) as

    O(x, y) = ( Σ_(i,j) w'(i, j) · I(x + i, y + j) + 64 ) >> 7.

Optionally, in another, nonlinear ALF filtering process, the filter coefficients are no longer used directly as weights with the filtered result obtained as a weighted average of multiple pixels. Instead, nonlinear parameter factors are introduced to optimize the filtering effect. Specifically, filtering I(x, y) with nonlinear ALF yields O'(x, y) as

    O'(x, y) = I(x, y) + Σ_((i,j)≠(0,0)) w(i, j) · K( I(x + i, y + j) − I(x, y), k(i, j) ),

where the filter coefficients w(i, j) take values in [−1, 1) and K(d, b) = min(b, max(−b, d)) is a clipping operation.
Specifically, in the K(d, b) clip operation, k(i, j) denotes the loop-filter ALF clip parameter (hereinafter also simply the clip parameter); each filter coefficient w(i, j) corresponds to one clip parameter. For the luminance component of the coded frame, the clip parameter is selected from {1024, 181, 32, 6}; for the chrominance components, the clip parameter is selected from {1024, 161, 25, 4}; and the index corresponding to each clip parameter, i.e. the clip index parameter, needs to be written into the bitstream. If the clip parameter is 1024, the clip index parameter 0 is written into the bitstream; likewise, if it is 181, 1 is written. It can thus be seen that the clip index parameters for both the luminance and chrominance classifications of the coded frame are integers between 0 and 3.
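The nonlinear filtering and clipping above can be sketched as follows. This is a toy implementation: the dictionary-based tap layout is an illustrative simplification, and the final shift assumes the 128-scaled integer weights described earlier.

```python
def clip_k(d, b):
    """K(d, b) = min(b, max(-b, d)): clip a sample difference to [-b, b]."""
    return min(b, max(-b, d))

def nonlinear_alf_pixel(I, x, y, taps, clips):
    """Nonlinear ALF output for pixel (x, y) of plane I (a 2-D list/array).

    taps:  {(i, j): w} integer filter weights, excluding the centre (0, 0)
    clips: {(i, j): k} clip parameter per weight
    Implements O'(x,y) = I(x,y)
        + sum_{(i,j) != (0,0)} w(i,j) * K(I(x+i, y+j) - I(x,y), k(i,j)),
    with weights assumed scaled by 128, hence the rounding shift at the end.
    """
    centre = I[y][x]
    acc = 0
    for (i, j), w in taps.items():
        d = I[y + j][x + i] - centre    # difference to the neighbour
        acc += w * clip_k(d, clips[(i, j)])
    return centre + ((acc + 64) >> 7)
```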
Pixel classification
Second, if one set of ALF filter coefficients were calculated for each individual pixel, the computation would be complex and time-consuming, and writing the ALF coefficients of every pixel into the bitstream would incur enormous overhead. Therefore, the pixels in the reconstructed picture need to be classified, and each class of pixels uses the same set of ALF filter coefficients (one filter), which reduces computational complexity and improves coding efficiency.
Optionally, there can be many ways of classifying pixels. For example, only the luminance (Y) component of a pixel may be classified while the chrominance (UV) components are not. For example, the luminance component is divided into 25 classes while the chrominance components are not divided and have only one class. In other words, for one frame of image, the coded frame of the luminance component can correspond to at most 25 sets of filters, and the coded frame of the chrominance components corresponds to one set of filters.
It should be understood that, in the embodiments of the present application, the pixel classes may be classes corresponding to the luminance component, but the embodiments of the present application are not limited thereto; the pixel classes may also be classes corresponding to other components or all components. For convenience of description, classification and ALF filtering of the coded frame of the luminance component are taken as an example below.
Optionally, in one possible implementation, the reconstructed picture after DB filtering and SAO filtering is divided into multiple blocks of 4*4 pixels, and these 4*4 blocks are classified.
For example, each 4*4 block can be classified according to its Laplacian directions:

    C = 5D + Â,

where C is the class to which the pixel block belongs, D is the Laplacian direction, and Â is the sub-classification result after the direction (D) classification; Â can be obtained in multiple ways and here only represents the sub-classification result.
The direction D is calculated as follows. First, the Laplacian gradients of the current 4*4 block in different directions are calculated:

    V_k,l = | 2R(k, l) − R(k, l−1) − R(k, l+1) |,
    H_k,l = | 2R(k, l) − R(k−1, l) − R(k+1, l) |,
    D1_k,l = | 2R(k, l) − R(k−1, l−1) − R(k+1, l+1) |,
    D2_k,l = | 2R(k, l) − R(k−1, l+1) − R(k+1, l−1) |,

    g_v = Σ_{k=i−2..i+3} Σ_{l=j−2..j+3} V_k,l,    g_h = Σ_{k=i−2..i+3} Σ_{l=j−2..j+3} H_k,l,
    g_d1 = Σ_{k=i−2..i+3} Σ_{l=j−2..j+3} D1_k,l,  g_d2 = Σ_{k=i−2..i+3} Σ_{l=j−2..j+3} D2_k,l,

where i and j are the coordinates of the top-left pixel of the current 4*4 block.
R(k, l) denotes the reconstructed pixel value at position (k, l) in the 4*4 block. V_k,l, H_k,l, D1_k,l and D2_k,l denote the Laplacian gradients of the pixel at coordinate (k, l) in the 4*4 block in the vertical, horizontal, 135-degree and 45-degree directions, respectively.
Correspondingly, the calculated g_v denotes the Laplacian gradient of the current 4*4 block in the vertical direction, g_h in the horizontal direction, g_d1 in the 135-degree direction, and g_d2 in the 45-degree direction.
Then, the direction D is determined from the extreme-value ratios of the Laplacian gradients in the four directions:

    g^max_h,v = max(g_h, g_v),    g^min_h,v = min(g_h, g_v),
    g^max_d0,d1 = max(g_d1, g_d2),  g^min_d0,d1 = min(g_d1, g_d2),
    R_h,v = g^max_h,v / g^min_h,v,
    R_d0,d1 = g^max_d0,d1 / g^min_d0,d1,

where g^max_h,v and g^min_h,v are the maximum and minimum of the horizontal and vertical Laplacian gradient values, g^max_d0,d1 and g^min_d0,d1 are the maximum and minimum of the 45-degree and 135-degree Laplacian gradient values, R_h,v is the ratio of the horizontal and vertical Laplacian gradients, and R_d0,d1 is the ratio of the 45-degree and 135-degree Laplacian gradients.
If R_h,v ≤ t1 and R_d0,d1 ≤ t1, D is set to 0.
If R_h,v > R_d0,d1 and t1 < R_h,v ≤ t2, D is set to 1.
If R_h,v > R_d0,d1 and R_h,v > t2, D is set to 2.
If R_h,v ≤ R_d0,d1 and t1 < R_d0,d1 ≤ t2, D is set to 3.
If R_h,v ≤ R_d0,d1 and R_d0,d1 > t2, D is set to 4.
t1 and t2 are preset thresholds.
Optionally, in one possible implementation, Â is calculated as follows:

    A = Σ_{k=i−2..i+3} Σ_{l=j−2..j+3} ( V_k,l + H_k,l ),

and A is quantized to an integer between 0 and 4 to obtain Â.
Therefore, combining the values of D and Â, the value of C is an integer between 0 and 24; in the embodiments of the present application, the 4*4 blocks in one frame of image are divided into at most 25 classes.
Optionally, in one possible implementation, the coded frame has N classes of 4*4 blocks, and each class of 4*4 blocks has one set of ALF filter coefficients, where N is an integer between 1 and 25.
It should be understood that, in the embodiments of the present application, besides being divided into multiple 4*4 blocks, the whole frame may also be divided into blocks of other pixel sizes, for example multiple 8*8 or 16*16 blocks, which is not limited in the embodiments of the present application.
It should also be understood that, in the embodiments of the present application, besides the above classification according to Laplacian directions, other classification methods may also be used to classify the blocks, which is not limited in the embodiments of the present application.
It should also be understood that, in the embodiments of the present application, the number of classes may be any number other than 25, which is also not limited in the embodiments of the present application.
Block-based ALF filtering
ALF filtering can be divided into frame-based ALF, block-based ALF and quadtree-based ALF. Frame-based ALF uses one set of filter coefficients to filter the entire frame; block-based ALF divides the coded frame into image blocks of equal size and decides for each image block whether to perform ALF filtering; quadtree-based ALF divides the coded frame into image blocks of different sizes according to a quadtree partition and decides for each block whether to perform ALF filtering. Frame-based ALF is simple to compute but its filtering effect is poor, while quadtree-based ALF has high computational complexity. Therefore, in some standards or technologies, for example the latest VVC standard under study, the reference software VTM uses block-based ALF.
The block-based ALF in VTM is taken as an example. In VTM, a coded frame has a frame-level ALF filter flag as well as block-level ALF filter flags. Optionally, the block level may be a CTU, a CU or an image block in another partition mode, which is not limited in the embodiments of the present application; for convenience of description, the CTU-level ALF filter flag is used as an example below.
Specifically, when the frame-level ALF filter flag indicates that ALF filtering is not performed, the CTU-level ALF filter flags in the coded frame are not signalled; when the frame-level ALF filter flag indicates that ALF filtering is performed, the CTU-level ALF filter flags in the coded frame are signalled to indicate whether the current CTU performs ALF filtering.
Optionally, the coded frame includes Z CTUs, and the N groups of ALF filter coefficients of the coded frame are calculated as follows: the Z CTUs of the coded frame are combined according to whether or not each performs ALF filtering, and for each combination mode, the N groups of ALF filter coefficients and the rate-distortion cost (RD cost) of the coded frame under that mode are calculated. The i-th group of ALF coefficients in each set of ALF filter coefficients is calculated as follows: under the current CTU combination mode, the pixels of class i in the CTUs that perform ALF filtering participate in the computation of f, while the pixels of class i in the other CTUs that do not perform ALF filtering do not participate, yielding the i-th group of ALF coefficients under the current combination mode. It should be understood that the N groups of ALF filter coefficients calculated under different combination modes may differ from one another.
The RD costs of the multiple combination modes are compared, and the combination mode with the smallest RD cost is determined as the final combination mode; the N groups of ALF filter coefficients calculated under that combination mode are the adaptively optimal ALF filter coefficients.
When the combination mode with the smallest RD cost has at least one of the Z CTUs performing ALF filtering, the frame-level ALF flag of the coded frame is set to indicate that ALF filtering is performed, and the CTU-level ALF flags in the CTU data in turn indicate whether ALF filtering is performed. For example, when a flag is marked 0, ALF filtering is not performed; when a flag is marked 1, ALF filtering is performed.
In particular, when the combination mode with the smallest RD cost has none of the Z CTUs performing ALF filtering, the coded frame is not ALF-filtered, the frame-level ALF flag of the coded frame is set to indicate that ALF filtering is not performed, and the CTU-level ALF flags are not signalled.
It should be understood that the ALF in the embodiments of the present application is not only applicable to the VVC standard, but also applicable to other technical solutions or standards that use block-based ALF.
跨分量ALF(Cross-Component ALF,CC-ALF)
一个示例中,CC-ALF用于利用亮度分量的数值来对色度分量进行调整,提升色度分量质量。为方便理解,下面结合图6对CC-ALF和ALF过程的一个示例进行描述。当前块包括亮度分量和色度分量,其中色度分量包括第一色度分量(例如图6中的Cb)和第二色度分量(例如图6中的Cr)。
亮度分量依次经过SAO和ALF进行滤波。第一色度分量依次经过SAO和ALF进行滤波。第二色度分量依次经过SAO和ALF进行滤波。另外,还采用CC-ALF滤波器对色度分量进行CC-ALF。
在一个示例中,CC-ALF滤波器的形状可以如图7所示。该CC-ALF滤波器中采用3x4菱形,共8个系数。图中标识2所在位置为当前的第一色度分量或者第二色度分量的像素点,使用周围的7个点的加权平均得到中间标识2所在位置像素点滤波之后的结果。
A frame may have multiple sets of filters in total. The first chroma component and the second chroma component may each select a target filter (the same one or different ones) from the same pool of CC-ALF filters, or may each select a target filter from different pools of CC-ALF filters.
The total number of CC-ALF filters used by the current picture needs to be written into the bitstream, where this total may include the total number of CC-ALF filters of the first chroma component and/or the total number of CC-ALF filters of the second chroma component. When the two totals are the same, or when the first and second chroma components may select target filters from the same pool of CC-ALF filters, a single CC-ALF filter total may be used as the indication.
For the current block, the index of the target filter selected by the current block is also encoded into the bitstream. Whether the indices of the target filters selected by the first and second chroma components are the same or different, the two indices may be encoded into the bitstream separately. Alternatively, when the indices selected by the first and second chroma components are the same, only one index may be encoded into the bitstream, and this index indicates the target filter of both chroma components.
A specific explanation is given below with reference to FIG. 6.
Specifically, for the first chroma component: a target filter for the first chroma component of the current block is determined from the multiple CC-ALF filters; the target filter coefficients of the first chroma component are determined according to the luma component that has not undergone ALF (e.g., after SAO and before ALF) and the ALF-filtered first chroma component of the current block. The first chroma component is filtered according to its target filter and target filter coefficients. Then, the filtering result of the first chroma component is determined according to the first chroma component filtered by the target filter and target coefficients together with the first chroma component after ALF (e.g., after SAO and then ALF).
For the second chroma component: a target filter for the second chroma component of the current block is determined from the multiple CC-ALF filters; the target filter coefficients of the second chroma component are determined according to the luma component that has not undergone ALF (e.g., after SAO and before ALF) and the ALF-filtered second chroma component of the current block. The second chroma component is filtered according to its target filter and target filter coefficients. Then, the filtering result of the second chroma component is determined according to the second chroma component filtered by the target filter and target coefficients together with the second chroma component after ALF (e.g., after SAO and then ALF).
When the current block is encoded, the total number of the multiple CC-ALF filters is encoded into the bitstream as a syntax element, and the index of the target filter selected by the first chroma component of the current block and the index of the target filter selected by the second chroma component are encoded into the bitstream as syntax elements.
In one example, the bitstream of a picture contains only one syntax element indicating the total number of the multiple CC-ALF filters. In one example, the syntax element indicating the total number of the multiple CC-ALF filters is located in the adaptation parameter set syntax of the picture. In one example, the syntax element indicating the total number of the multiple cross-component ALF filters is not present in the picture header and/or the slice header.
In one example, a truncated binary code may be used to encode the syntax element indicating the total number of the multiple CC-ALF filters. In one example, a truncated binary code may be used to encode the index of the target filter.
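Truncated binary coding, as referenced above, writes a value from an alphabet of size n with either floor(log2 n) or floor(log2 n)+1 bits, so it is shorter than fixed-length binary whenever n is not a power of two. A minimal sketch (not the codec's actual entropy-coding path):

```python
def truncated_binary_encode(value, n):
    """Encode `value` in [0, n) with a truncated binary code.

    With k = floor(log2(n)), the first 2**(k+1) - n symbols get k-bit
    codewords and the remaining symbols get (k+1)-bit codewords.
    Returns the codeword as a bit string.
    """
    assert 0 <= value < n
    k = n.bit_length() - 1          # floor(log2(n))
    u = (1 << (k + 1)) - n          # number of short (k-bit) codewords
    if value < u:
        return format(value, "0{}b".format(k)) if k > 0 else ""
    return format(value + u, "0{}b".format(k + 1))
```

For n = 5, the codewords are 00, 01, 10, 110, 111, so filter totals and indices near the start of the alphabet cost fewer bits.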
For the current block, the target filter coefficients of the first chroma component and of the second chroma component of the current block are also encoded into the bitstream.
After receiving the bitstream, the decoding end decodes from it the index of the target filter selected by the chroma component of the current block and the total number of CC-ALF filters, and determines the CC-ALF filter of the chroma component of the current block according to the index and the total. The decoding end also decodes from the bitstream the target filter coefficients of the chroma component of the current block, so as to filter the ALF-filtered chroma component of the current block according to the target filter and the target filter coefficients.
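The decoder-side flow just described can be sketched end-to-end as follows. This is a schematic illustration only: the value iterator stands in for entropy decoding, and the field order and the `apply_cc_alf` callback are assumptions, not the actual bitstream syntax.

```python
def decode_and_filter_block(values, alf_chroma, pre_alf_luma, apply_cc_alf):
    """Toy decoder flow for one block's chroma CC-ALF.

    `values` is an iterator of already-parsed syntax values (a stand-in
    for entropy decoding); `apply_cc_alf(chroma, luma, coeffs)` is a
    stand-in for the actual diamond filter. Field order is assumed.
    """
    total_filters = next(values)             # one per picture, from the APS
    filter_index = next(values)              # per-block target-filter index
    assert 0 <= filter_index < total_filters
    coeffs = next(values)                    # target filter coefficients
    # Cross-component correction on the ALF-filtered chroma,
    # driven by the pre-ALF luma samples.
    return apply_cc_alf(alf_chroma, pre_alf_luma, coeffs)
```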
The technical solutions of the embodiments of the present application can be applied at the encoding end as well as at the decoding end. The technical solutions of the embodiments of the present application are described below from the encoding end and the decoding end, respectively.
FIG. 8 shows a schematic flowchart of a loop filtering method 200 according to an embodiment of the present application. The method 200 may be performed by the encoding end, for example, by the system 100 shown in FIG. 1 when performing an encoding operation.
S210: Determine, from multiple cross-component adaptive loop filter (ALF) filters, a target filter for the chroma component of the current block.
S220: Determine the target filter coefficients of the chroma component of the current block according to the ALF-filtered chroma component of the current block and the non-ALF-filtered luma component of the current block.
S230: Filter the ALF-filtered chroma component of the current block according to the target filter and the target filter coefficients.
S240: Determine the filtered chroma component of the current block according to the chroma component filtered with the target filter coefficients and the ALF-filtered chroma component of the current block.
S250: Encode according to the filtered chroma component of the current block, and encode the total number of the multiple cross-component ALF filters as a syntax element, where the bitstream of a picture contains only one syntax element indicating the total number of the multiple cross-component ALF filters.
Optionally, the syntax element indicating the total number of the multiple cross-component ALF filters is located in the adaptation parameter set syntax of the picture.
Optionally, the syntax element indicating the total number of the multiple cross-component ALF filters is not present in the picture header and/or the slice header.
Optionally, the ALF-filtered chroma component of the current block is specifically the chroma component of the current block after sample adaptive offset (SAO) filtering followed by ALF.
Optionally, the non-ALF-filtered luma component of the current block is specifically the luma component of the current block after SAO and before ALF.
Optionally, encoding the total number of the multiple cross-component ALF filters as a syntax element includes: encoding the total number of the multiple cross-component ALF filters with a truncated binary code.
Optionally, the method further includes: encoding the index of the target filter as a syntax element.
Optionally, encoding the index of the target filter as a syntax element includes: encoding the index of the target filter with a truncated binary code.
Optionally, the method further includes: encoding the target filter coefficients of the chroma component of the current block into the bitstream.
FIG. 9 shows a schematic flowchart of a loop filtering method 300 according to an embodiment of the present application. The method 300 may be performed by the decoding end, for example, by the system 100 shown in FIG. 1 when performing a decoding operation.
S310: Decode, from the bitstream, the total number of cross-component ALF filters and the index of a target filter, the target filter being the ALF filter used by the chroma component of the current block, where the bitstream of a picture contains only one syntax element indicating the total number of cross-component ALF filters.
S320: Decode, from the bitstream, the target filter coefficients of the chroma component of the current block, the target filter coefficients being the coefficients of the target filter.
S330: Perform cross-component filtering on the ALF-filtered chroma component of the current block according to the target filter and the target filter coefficients.
S340: Determine the filtered chroma component of the current block according to the chroma component filtered with the target filter coefficients and the ALF-filtered chroma component of the current block.
Optionally, the syntax element indicating the total number of cross-component ALF filters is located in the adaptation parameter set syntax of the picture.
Optionally, the syntax element indicating the total number of cross-component ALF filters is not present in the picture header and/or the slice header.
Optionally, the ALF-filtered chroma component of the current block is specifically the chroma component of the current block after sample adaptive offset (SAO) filtering followed by ALF.
Optionally, the non-ALF-filtered luma component of the current block is specifically the luma component of the current block after SAO and before ALF.
Optionally, decoding the total number of cross-component ALF filters and the index of the target filter from the bitstream includes: decoding the total number of cross-component ALF filters and/or the index of the target filter with a truncated binary code.
FIG. 10 is a schematic block diagram of another loop filtering apparatus 30 at the encoding end according to an embodiment of the present application. The loop filtering apparatus 30 is a loop filtering apparatus in a video encoder. Optionally, the loop filtering apparatus 30 may correspond to the loop filtering method 200.
As shown in FIG. 10, the loop filtering apparatus 30 includes a processor 31 and a memory 32.
The memory 32 may be configured to store a program, and the processor 31 may be configured to execute the program stored in the memory to perform the following operations:
determining, from multiple cross-component adaptive loop filter (ALF) filters, a target filter for the chroma component of the current block;
determining the target filter coefficients of the chroma component of the current block according to the ALF-filtered chroma component of the current block and the non-ALF-filtered luma component of the current block;
filtering the ALF-filtered chroma component of the current block according to the target filter and the target filter coefficients;
determining the filtered chroma component of the current block according to the chroma component filtered with the target filter coefficients and the ALF-filtered chroma component of the current block;
encoding according to the filtered chroma component of the current block, and encoding the total number of the multiple cross-component ALF filters as a syntax element, where the bitstream of a picture contains only one syntax element indicating the total number of the multiple cross-component ALF filters.
Optionally, the syntax element indicating the total number of the multiple cross-component ALF filters is located in the adaptation parameter set syntax of the picture.
Optionally, the syntax element indicating the total number of the multiple cross-component ALF filters is not present in the picture header and/or the slice header.
Optionally, the ALF-filtered chroma component of the current block is specifically the chroma component of the current block after sample adaptive offset (SAO) filtering followed by ALF.
Optionally, the non-ALF-filtered luma component of the current block is specifically the luma component of the current block after SAO and before ALF.
Optionally, encoding the total number of the multiple cross-component ALF filters as a syntax element includes: encoding the total number of the multiple cross-component ALF filters with a truncated binary code.
Optionally, the processor is further configured to: encode the index of the target filter as a syntax element.
Optionally, encoding the index of the target filter as a syntax element includes: encoding the index of the target filter with a truncated binary code.
It should be understood that the apparatus embodiments correspond to the method embodiments, and for similar descriptions reference may be made to the method embodiments.
FIG. 11 is a schematic block diagram of a loop filtering apparatus 40 at the decoding end according to an embodiment of the present application. Optionally, the loop filtering apparatus 40 may correspond to the loop filtering method 300.
As shown in FIG. 11, the loop filtering apparatus 40 includes a processor 41 and a memory 42.
The memory 42 may be configured to store a program, and the processor 41 may be configured to execute the program stored in the memory to perform the following operations:
decoding, from the bitstream, the total number of cross-component ALF filters and the index of a target filter, the target filter being the ALF filter used by the chroma component of the current block, where the bitstream of a picture contains only one syntax element indicating the total number of cross-component ALF filters;
decoding, from the bitstream, the target filter coefficients of the chroma component of the current block, the target filter coefficients being the coefficients of the target filter;
filtering the ALF-filtered chroma component of the current block according to the target filter and the target filter coefficients;
determining the filtered chroma component of the current block according to the chroma component filtered with the target filter coefficients and the ALF-filtered chroma component of the current block.
Optionally, the syntax element indicating the total number of cross-component ALF filters is located in the adaptation parameter set syntax of the picture.
Optionally, the syntax element indicating the total number of cross-component ALF filters is not present in the picture header and/or the slice header.
Optionally, the ALF-filtered chroma component of the current block is specifically the chroma component of the current block after sample adaptive offset (SAO) filtering followed by ALF.
Optionally, the non-ALF-filtered luma component of the current block is specifically the luma component of the current block after SAO and before ALF.
Optionally, decoding the total number of cross-component ALF filters and the index of the target filter from the bitstream includes: decoding the total number of cross-component ALF filters and/or the index of the target filter with a truncated binary code.
An embodiment of the present application further provides an electronic device, which may include the loop filtering apparatus of the various embodiments of the present application described above.
It should be understood that the processor in the embodiments of the present application may be an integrated circuit chip with signal processing capability. In an implementation, the steps of the above method embodiments may be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor includes, but is not limited to: a general-purpose processor, a central processing unit (CPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The methods, steps, and logical block diagrams disclosed in the embodiments of the present application may be implemented or executed by such processors. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed in the embodiments of the present application may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above methods in combination with its hardware.
It can be understood that the memory in the embodiments of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memories. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which is used as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM). It should be noted that the memories of the systems and methods described herein are intended to include, but are not limited to, these and any other suitable types of memory.
An embodiment of the present application further provides a computer-readable storage medium storing one or more programs, the one or more programs including instructions that, when executed by a portable electronic device including multiple applications, enable the portable electronic device to perform the methods of the embodiments shown in FIG. 6 to FIG. 14.
An embodiment of the present application further provides a computer program including instructions that, when executed by a computer, enable the computer to perform the methods of the embodiments shown in FIG. 6 to FIG. 14.
An embodiment of the present application further provides a chip including an input/output interface, at least one processor, at least one memory, and a bus, the at least one memory being configured to store instructions, and the at least one processor being configured to invoke the instructions in the at least one memory to perform the methods of the embodiments shown in FIG. 6 to FIG. 14.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or a combination of computer software and electronic hardware. Whether these functions are executed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present application.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the systems, apparatuses, and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above are only specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art can easily conceive of changes or substitutions within the technical scope disclosed in the present application, and these should all be covered within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (28)

  1. A loop filtering method, comprising:
    determining, from multiple cross-component adaptive loop filter (ALF) filters, a target filter for a chroma component of a current block;
    determining target filter coefficients of the chroma component of the current block according to an ALF-filtered chroma component of the current block and a non-ALF-filtered luma component of the current block;
    filtering the ALF-filtered chroma component of the current block according to the target filter and the target filter coefficients;
    determining a filtered chroma component of the current block according to the chroma component filtered with the target filter coefficients and the ALF-filtered chroma component of the current block;
    encoding according to the filtered chroma component of the current block, and encoding a total number of the multiple cross-component ALF filters as a syntax element, wherein a bitstream of a picture contains only one syntax element indicating the total number of the multiple cross-component ALF filters.
  2. The loop filtering method according to claim 1, wherein
    the syntax element indicating the total number of the multiple cross-component ALF filters is located in adaptation parameter set syntax of the picture.
  3. The loop filtering method according to claim 1 or 2, wherein
    the syntax element indicating the total number of the multiple cross-component ALF filters is not present in a picture header and/or a slice header.
  4. The method according to claim 1, wherein the ALF-filtered chroma component of the current block is specifically the chroma component of the current block after sample adaptive offset (SAO) filtering followed by ALF.
  5. The method according to claim 1, wherein the non-ALF-filtered luma component of the current block is specifically the luma component of the current block after SAO and before ALF.
  6. The method according to claim 1, wherein encoding the total number of the multiple cross-component ALF filters as a syntax element comprises:
    encoding the total number of the multiple cross-component ALF filters with a truncated binary code.
  7. The method according to claim 1, further comprising:
    encoding an index of the target filter as a syntax element.
  8. The method according to claim 7, wherein encoding the index of the target filter as a syntax element comprises:
    encoding the index of the target filter with a truncated binary code.
  9. A loop filtering method, comprising:
    decoding, from a bitstream, a total number of cross-component ALF filters and an index of a target filter, the target filter being the ALF filter used by a chroma component of a current block, wherein the bitstream of a picture contains only one syntax element indicating the total number of cross-component ALF filters;
    decoding, from the bitstream, target filter coefficients of the chroma component of the current block, the target filter coefficients being coefficients of the target filter;
    filtering the ALF-filtered chroma component of the current block according to the target filter and the target filter coefficients;
    determining a filtered chroma component of the current block according to the chroma component filtered with the target filter coefficients and the ALF-filtered chroma component of the current block.
  10. The loop filtering method according to claim 9, wherein
    the syntax element indicating the total number of cross-component ALF filters is located in adaptation parameter set syntax of the picture.
  11. The loop filtering method according to claim 9 or 10, wherein
    the syntax element indicating the total number of cross-component ALF filters is not present in a picture header and/or a slice header.
  12. The method according to claim 9, wherein the ALF-filtered chroma component of the current block is specifically the chroma component of the current block after sample adaptive offset (SAO) filtering followed by ALF.
  13. The method according to claim 9, wherein the non-ALF-filtered luma component of the current block is specifically the luma component of the current block after SAO and before ALF.
  14. The method according to claim 9, wherein decoding the total number of cross-component ALF filters and the index of the target filter from the bitstream comprises:
    decoding the total number of cross-component ALF filters and/or the index of the target filter with a truncated binary code.
  15. A loop filtering apparatus, comprising:
    a memory configured to store code; and
    a processor configured to execute the code stored in the memory to perform the following operations:
    determining, from multiple cross-component adaptive loop filter (ALF) filters, a target filter for a chroma component of a current block;
    determining target filter coefficients of the chroma component of the current block according to an ALF-filtered chroma component of the current block and a non-ALF-filtered luma component of the current block;
    filtering the ALF-filtered chroma component of the current block according to the target filter and the target filter coefficients;
    determining a filtered chroma component of the current block according to the chroma component filtered with the target filter coefficients and the ALF-filtered chroma component of the current block;
    encoding according to the filtered chroma component of the current block, and encoding a total number of the multiple cross-component ALF filters as a syntax element, wherein a bitstream of a picture contains only one syntax element indicating the total number of the multiple cross-component ALF filters.
  16. The loop filtering apparatus according to claim 15, wherein
    the syntax element indicating the total number of the multiple cross-component ALF filters is located in adaptation parameter set syntax of the picture.
  17. The loop filtering apparatus according to claim 15 or 16, wherein
    the syntax element indicating the total number of the multiple cross-component ALF filters is not present in a picture header and/or a slice header.
  18. The apparatus according to claim 15, wherein the ALF-filtered chroma component of the current block is specifically the chroma component of the current block after sample adaptive offset (SAO) filtering followed by ALF.
  19. The apparatus according to claim 15, wherein the non-ALF-filtered luma component of the current block is specifically the luma component of the current block after SAO and before ALF.
  20. The apparatus according to claim 15, wherein encoding the total number of the multiple cross-component ALF filters as a syntax element comprises:
    encoding the total number of the multiple cross-component ALF filters with a truncated binary code.
  21. The apparatus according to claim 15, wherein the processor is further configured to:
    encode an index of the target filter as a syntax element.
  22. The apparatus according to claim 21, wherein encoding the index of the target filter as a syntax element comprises:
    encoding the index of the target filter with a truncated binary code.
  23. A loop filtering apparatus, comprising:
    a memory configured to store code; and
    a processor configured to execute the code stored in the memory to perform the following operations:
    decoding, from a bitstream, a total number of cross-component ALF filters and an index of a target filter, the target filter being the ALF filter used by a chroma component of a current block, wherein the bitstream of a picture contains only one syntax element indicating the total number of cross-component ALF filters;
    decoding, from the bitstream, target filter coefficients of the chroma component of the current block, the target filter coefficients being coefficients of the target filter;
    filtering the ALF-filtered chroma component of the current block according to the target filter and the target filter coefficients;
    determining a filtered chroma component of the current block according to the chroma component filtered with the target filter coefficients and the ALF-filtered chroma component of the current block.
  24. The loop filtering apparatus according to claim 23, wherein
    the syntax element indicating the total number of cross-component ALF filters is located in adaptation parameter set syntax of the picture.
  25. The loop filtering apparatus according to claim 23 or 24, wherein
    the syntax element indicating the total number of cross-component ALF filters is not present in a picture header and/or a slice header.
  26. The apparatus according to claim 23, wherein the ALF-filtered chroma component of the current block is specifically the chroma component of the current block after sample adaptive offset (SAO) filtering followed by ALF.
  27. The apparatus according to claim 23, wherein the non-ALF-filtered luma component of the current block is specifically the luma component of the current block after SAO and before ALF.
  28. The apparatus according to claim 23, wherein decoding the total number of cross-component ALF filters and the index of the target filter from the bitstream comprises:
    decoding the total number of cross-component ALF filters and/or the index of the target filter with a truncated binary code.
PCT/CN2019/130954 2019-12-31 2019-12-31 Loop filtering method and apparatus WO2021134706A1 (zh)

Priority Applications (8)

Application Number Priority Date Filing Date Title
KR1020227022560A KR20220101743A (ko) 2019-12-31 2019-12-31 Loop filtering method and non-transitory computer storage medium
CN201980051177.5A CN112544081B (zh) 2019-12-31 2019-12-31 Loop filtering method and apparatus
JP2022537322A JP2023515742A (ja) 2019-12-31 2019-12-31 In-loop filtering method, computer-readable storage medium, and program
CN202311663799.8A CN117596413A (zh) 2019-12-31 2019-12-31 Video processing method and apparatus
PCT/CN2019/130954 WO2021134706A1 (zh) 2019-12-31 2019-12-31 Loop filtering method and apparatus
EP19958181.0A EP4087243A4 (en) 2019-12-31 2019-12-31 LOOP FILTRATION METHOD AND APPARATUS
CN202311663855.8A CN117596414A (zh) 2019-12-31 2019-12-31 Video processing method and apparatus
US17/853,906 US20220345699A1 (en) 2019-12-31 2022-06-29 In-loop filtering method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/130954 WO2021134706A1 (zh) 2019-12-31 2019-12-31 环路滤波的方法与装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/853,906 Continuation US20220345699A1 (en) 2019-12-31 2022-06-29 In-loop filtering method and device

Publications (1)

Publication Number Publication Date
WO2021134706A1 true WO2021134706A1 (zh) 2021-07-08

Family

ID=75013407

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/130954 WO2021134706A1 (zh) 2019-12-31 2019-12-31 Loop filtering method and apparatus

Country Status (6)

Country Link
US (1) US20220345699A1 (zh)
EP (1) EP4087243A4 (zh)
JP (1) JP2023515742A (zh)
KR (1) KR20220101743A (zh)
CN (3) CN117596413A (zh)
WO (1) WO2021134706A1 (zh)



Also Published As

Publication number Publication date
US20220345699A1 (en) 2022-10-27
CN112544081B (zh) 2023-12-22
CN117596413A (zh) 2024-02-23
KR20220101743A (ko) 2022-07-19
CN117596414A (zh) 2024-02-23
JP2023515742A (ja) 2023-04-14
EP4087243A1 (en) 2022-11-09
EP4087243A4 (en) 2023-09-06
CN112544081A (zh) 2021-03-23


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19958181; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2022537322; Country of ref document: JP; Kind code of ref document: A)
ENP Entry into the national phase (Ref document number: 20227022560; Country of ref document: KR; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 2019958181; Country of ref document: EP; Effective date: 20220801)