WO2023240618A1 - A filtering method, decoder, encoder and computer-readable storage medium - Google Patents

A filtering method, decoder, encoder and computer-readable storage medium

Info

Publication number
WO2023240618A1
Authority
WO
WIPO (PCT)
Prior art keywords
block
frame
scaling factor
level
level scaling
Prior art date
Application number
PCT/CN2022/099527
Other languages
English (en)
French (fr)
Inventor
戴震宇
Original Assignee
Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Priority to PCT/CN2022/099527, published as WO2023240618A1
Priority to TW112122079A, published as TW202404349A
Publication of WO2023240618A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/117 Filters, e.g. for pre-processing or post-processing

Definitions

  • Embodiments of the present application relate to video coding technology, including but not limited to a filtering method, a decoder, an encoder, and a computer-readable storage medium.
  • loop filters are used to improve the subjective and objective quality of reconstructed images.
  • the traditional loop filter module mainly includes DeBlocking Filter (DBF), Sample Adaptive Offset (SAO) and Adaptive Loop Filter (ALF).
  • the image-level scaling factor is usually calculated by traversing all pixels in the current image, while video encoding is performed at the block level, for example per Coding Tree Unit (CTU), in combination with the image-level scaling factor.
  • the image-level scaling factor reflects the overall scaling of the image.
  • if the block-level scaling factors of some CTU units differ greatly from those of the other CTU units, the calculated image-level scaling factor will be affected by them and deviate significantly from the scaling factors of most CTU units.
  • when the image-level scaling factor is then used to encode those other CTU units, the error in the image-level scaling factor reduces the accuracy of video encoding and decoding.
  • Embodiments of the present application provide a filtering method, a decoder, an encoder and a computer-readable storage medium, which can improve the accuracy of encoding and decoding.
  • embodiments of the present application provide a filtering method applied to a decoder.
  • the method includes:
  • parsing the code stream and determining the frame-level scaling factor and the initial residual of the current block; performing image block reconstruction and filtering based on the initial residual to determine a first reconstructed image block and a first filtered image block; and using the frame-level scaling factor, the first reconstructed image block and the first filtered image block to perform scaling processing to obtain a modified image block corresponding to the current block.
  • embodiments of the present application also provide a filtering method applied to an encoder.
  • the method includes:
  • screening the block-level scaling factors corresponding to the blocks of the current frame to determine a first frame-level scaling factor, where the first frame-level scaling factor does not include the block-level scaling factor corresponding to a difference block, and a difference block is a block in the current frame that differs from the other blocks;
  • the frame-level scaling factor is determined based on the first frame-level scaling factor and the second frame-level scaling factor.
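The screening-and-selection step above is only described at a high level. A minimal Python sketch, under the assumption that difference blocks are those whose factors deviate beyond a threshold and that the final frame-level choice is cost-based, might look like this (all names and the threshold are illustrative, not from the patent):

```python
def screen_block_factors(block_factors, threshold=0.5):
    """Drop block-level factors that deviate strongly from the rest
    (the 'difference blocks'), then average the survivors to obtain
    a first frame-level scaling factor."""
    mean_all = sum(block_factors) / len(block_factors)
    kept = [f for f in block_factors if abs(f - mean_all) <= threshold]
    return sum(kept) / len(kept) if kept else mean_all

def choose_frame_factor(first_sf, second_sf, cost):
    """Pick whichever candidate frame-level factor gives the lower
    distortion cost; `cost` is a caller-supplied evaluation function."""
    return first_sf if cost(first_sf) <= cost(second_sf) else second_sf
```

The second frame-level factor of the text would be computed over the whole frame without screening; the comparison step then keeps whichever corrects distortion better.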
  • embodiments of the present application provide a decoder, including:
  • the parsing part is configured to parse the code stream and determine the frame-level scaling factor and the initial residual of the current block; wherein the frame-level scaling factor is determined from the first frame-level scaling factor and the second frame-level scaling factor corresponding to the current frame, the first frame-level scaling factor is obtained by screening the block-level scaling factors in the current frame and does not include the block-level scaling factor corresponding to the difference block, and the difference block is a block that differs from the other blocks in the current frame;
  • the first reconstruction and filtering part is configured to perform image block reconstruction and filtering based on the initial residual, and determine the first reconstructed image block and the first filtered image block;
  • the first determining part is configured to perform scaling processing using the frame-level scaling factor, the first reconstructed image block and the first filtered image block to obtain a modified image block corresponding to the current block.
  • an encoder including:
  • the second reconstruction and filtering part is configured to perform image reconstruction and filtering based on the initial residual of each block in the current frame, and determine the second reconstructed image block and the second filtered image block corresponding to each block;
  • the second determination part is configured to perform scaling factor calculation based on the second reconstructed image block and the second filtered image block and determine the block-level scaling factor corresponding to each block; to screen the block-level scaling factors corresponding to the blocks of the current frame to determine the first frame-level scaling factor, where the first frame-level scaling factor does not include the block-level scaling factor corresponding to the difference block, and the difference block is a block in the current frame that differs from the other blocks; to perform scaling factor calculation based on the second reconstructed image and the second filtered image corresponding to the current frame to obtain the second frame-level scaling factor corresponding to the current frame, where the second reconstructed image is determined from the second reconstructed image blocks corresponding to the blocks and the second filtered image is determined from the second filtered image blocks corresponding to the blocks; and to determine the frame-level scaling factor based on the first frame-level scaling factor and the second frame-level scaling factor.
  • embodiments of the present application also provide a decoder, including:
  • the first memory stores a computer program that can be run on a first processor, and when the first processor executes the program, the filtering method of the decoder is implemented.
  • embodiments of the present application also provide an encoder, including:
  • the second memory stores a computer program executable on a second processor, and when the second processor executes the program, the filtering method of the encoder is implemented.
  • Embodiments of the present application provide a computer-readable storage medium on which a computer program is stored.
  • when the computer program is executed by a first processor, the filtering method of the decoder is implemented; or, when the computer program is executed by a second processor, the filtering method of the encoder is implemented.
  • Embodiments of the present application provide a filtering method, a decoder, an encoder, and a computer-readable storage medium.
  • the frame-level scaling factor parsed from the code stream by the decoder is determined by combining the first frame-level scaling factor and the second frame-level scaling factor corresponding to the current frame.
  • the first frame-level scaling factor is obtained by screening the block-level scaling factors of the current frame and does not include the block-level scaling factors corresponding to the difference blocks, which reduces the error that the difference blocks' factors would otherwise introduce into the frame-level calculation; the calculated first frame-level scaling factor can therefore represent more accurately the magnitude of correction required by most blocks in the current frame, improving the accuracy of the first frame-level scaling factor.
  • since the frame-level scaling factor is determined by comparing the first frame-level scaling factor with the second frame-level scaling factor, the distortion-correction performance of the frame-level scaling factor is further improved, which in turn improves the encoding performance and encoding accuracy of the encoder.
  • at the decoder end, the frame-level scaling factor is used to scale the first reconstructed image block and the first filtered image block obtained by image block reconstruction and filtering from the initial residual, thereby filtering the current block of the current frame during decoding; obtaining the corrected image block corresponding to the current block in this way improves the accuracy of the corrected image block, thereby improving decoding performance and decoding accuracy.
  • Figure 1 is a schematic diagram of the application of a coding framework provided by an embodiment of the present application;
  • Figure 2 is a schematic diagram of the application of another coding framework provided by an embodiment of the present application;
  • Figure 3 is a network structure diagram in which the loop filter of the multi-layer convolution network provided by the embodiment of the present application performs filtering optimization on the input reconstructed image;
  • Figure 4 is a network structure diagram of a loop filter based on a multi-layer residual network provided by an embodiment of the present application
  • Figure 5 is a schematic diagram of the division of exemplary coding units provided by the embodiment of the present application.
  • Figure 6A is a schematic distribution diagram of block-level scaling factors provided by an embodiment of the present application.
  • Figure 6B is another schematic diagram of the distribution of block-level scaling factors provided by an embodiment of the present application.
  • Figure 7A is a detailed framework diagram of a video coding system provided by an embodiment of the present application.
  • Figure 7B is a detailed framework diagram of a video decoding system provided by an embodiment of the present application.
  • Figure 8 is an optional flow diagram of the filtering method provided by the embodiment of the present application.
  • Figure 9 is an optional flow diagram of the filtering method provided by the embodiment of the present application.
  • Figure 10 is an optional flow diagram of the filtering method provided by the embodiment of the present application.
  • Figure 11 is an optional flow diagram of the filtering method provided by the embodiment of the present application.
  • Figure 12 is an optional flow diagram of the filtering method provided by the embodiment of the present application.
  • Figure 13 is an optional flow diagram of the filtering method provided by the embodiment of the present application.
  • Figure 14 is an optional flow diagram of the filtering method provided by the embodiment of the present application.
  • Figure 15 is a schematic diagram of an exemplary coding framework provided by the embodiment of the present application.
  • Figure 16 is an optional flow diagram of the filtering method provided by the embodiment of the present application.
  • Figure 17 is a schematic structural diagram of a decoder provided by an embodiment of the present application.
  • Figure 18 is a schematic structural diagram 2 of a decoder provided by an embodiment of the present application.
  • Figure 19 is a schematic structural diagram of an encoder provided by an embodiment of the present application.
  • Figure 20 is a schematic structural diagram 2 of an encoder provided by an embodiment of the present application.
  • VVC Versatile Video Coding
  • VTM VVC reference software test platform (VVC Test Model)
  • AVS Audio Video coding Standard
  • HPM High-Performance Model (AVS reference software test platform)
  • HPM-ModAI High-Performance Model with Modular Artificial Intelligence
  • ALF Adaptive Loop Filter
  • QP Quantization Parameter
  • MAE Mean Absolute Error
  • digital video compression technology mainly compresses huge digital image and video data to facilitate transmission and storage.
  • although digital video compression standards can already save a lot of video data, it is still necessary to pursue better digital video compression technology to reduce the bandwidth and traffic pressure of video transmission.
  • the encoder reads pixels in different color formats from the original video sequence, containing luma components and chroma components; that is, the encoder reads a black-and-white or color image. The image is then divided into blocks, and the block data is handed to the encoder for encoding.
  • Today's encoders usually adopt a hybrid coding framework, which generally includes operations such as intra-frame prediction and inter-frame prediction, transformation/quantization, inverse quantization/inverse transformation, loop filtering and entropy coding; please refer to Figure 1 for the specific processing flow.
  • intra-frame prediction refers only to information of the same frame image and predicts pixel information within the current divided block to eliminate spatial redundancy;
  • inter-frame prediction can include motion estimation and motion compensation, which refer to image information of different frames: motion estimation searches for the motion vector information that best matches the current divided block, which is used to eliminate temporal redundancy; transformation converts the predicted image blocks into the frequency domain, where the energy is redistributed, and combined with quantization, information that is insensitive to the human eye can be removed.
  • the traditional loop filter module mainly includes the deblocking filter (DeBlocking Filter, DBF), the sample adaptive offset filter (Sample Adaptive Offset, SAO) and the adaptive loop filter (Adaptive Loop Filter, ALF).
  • the Convolutional Neural Network based Loop Filter (CNNLF), built on a residual neural network, is also used as the baseline solution of the intelligent loop filtering module and is placed between SAO filtering and ALF filtering, as shown in Figure 2.
  • the scenario where filtering processing can be performed may be the AVS-based reference software test platform HPM, or the Versatile Video Coding (VVC)-based reference software test platform (VVC Test Model, VTM).
  • Neural network loop filtering tools usually include loop filters of multi-layer convolutional networks and loop filters of multi-layer residual networks.
  • the loop filter with a multi-layer convolutional network structure can be the network model shown in Figure 3.
  • the network model contains 12 hidden layers and 1 3x3 convolutional layer. Each hidden layer consists of a 3x3 convolutional layer and an activation layer (Leaky ReLU), and each convolutional layer contains 96 channels.
  • the input of the network model is the reconstructed image 1 (NxN size) containing 4 luma sub-blocks (Y) and 2 chroma blocks (U, V).
  • the network model performs filtering optimization on the input reconstructed image 1 through the 12 stacked hidden network layers to obtain a filtered image. The residual between the input reconstructed image 1 and the filtered image is then corrected according to the image-level scaling factor (SF), and the final output image 2 (NxN size) of the network is obtained from the corrected residual and the reconstructed image, so that the final output image 2 is closer to the original image, bringing better encoding performance.
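The SF correction described above can be sketched in a few lines. This assumes, as the text suggests, that the corrected output is the reconstruction plus the SF-scaled residual between the filtered and reconstructed images; it is a sketch, not the patent's normative formula:

```python
def sf_correct(rec, filtered, sf):
    """Scaling-factor correction: output = rec + sf * (filtered - rec),
    applied element-wise over the pixels of one image or block."""
    return [r + sf * (f - r) for r, f in zip(rec, filtered)]
```

With sf = 1.0 the filtered image is used unchanged, while sf = 0.0 effectively bypasses the filter; intermediate values blend the two, which is what lets the scaling factor trade off filter strength against distortion.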
  • the loop filter of the multi-layer residual network may be a network model as shown in Figure 4.
  • the input of the network model also includes block division information par_yuv, prediction information pred_yuv, and QP information.
  • QP information can include basic QP (Base QP) information and slice QP (Slice QP) information.
  • the network model concatenates (cat) the above information and filters the combined information through a multi-layer residual network to obtain the filtered image output_yuv.
  • the network model also corrects the residual between the input reconstructed image rec_yuv and the filtered image output_yuv based on the image-level scaling factor, and obtains the final output image of the network based on the corrected residual and the reconstructed image.
  • encoding is performed in block units.
  • the encoder first reads the image information and divides the image into several coding tree units (Coding Tree Unit, CTU); a coding tree unit can be further divided into several coding units (CU), which can be rectangular or square blocks. The specific relationship can be seen in Figure 5.
  • Encoders generally perform encoding on a block-level basis such as CTU units or CU units. Based on the current block-level encoding process, if block-level scaling factors can be used, the performance of correcting distortion will be better in theory. However, the use of more block-level scaling factors requires encoding more bits, thus reducing bit rate performance. Considering the overall performance of bit rate and distortion, most neural network-based loop filtering tools use image-level scaling factors.
  • the image-level scaling factor is obtained by directly traversing all pixels in the image when it is derived. It can be seen that, when each CTU unit or CU unit in the image is encoded, if the block-level scaling factors of some CTU units differ too much from those of other CTU units, there will be a large error in the calculated image-level scaling factor.
  • an image is divided into a 4x2 layout, with a total of 8 CTU block units.
  • the block-level scaling factor of each CTU unit can be obtained by traversing each CTU unit, as shown in Figure 6A and Figure 6B.
  • the image-level scaling factor can be obtained by traversing each pixel in the entire image.
  • the image-level scaling factor of Figure 6A is 0.26
  • the image-level scaling factor of Figure 6B is 0.18.
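A hypothetical numeric illustration of the problem (the per-CTU values below are invented, not the actual values behind Figures 6A/6B, and a simple mean stands in for the pixel-traversal derivation):

```python
# Six CTUs need roughly the same correction, two outlier CTUs do not.
block_factors = [0.25, 0.24, 0.26, 0.25, 0.24, 0.26, 0.90, 0.95]

# A single image-level factor is pulled toward the outliers...
image_level = sum(block_factors) / len(block_factors)

# ...so it misrepresents the correction needed by the six typical CTUs.
typical = sum(block_factors[:6]) / 6
```

Here the image-level value lands near 0.42 while the six typical CTUs want about 0.25, which is exactly the error the screening step in this application is designed to avoid.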
  • Embodiments of the present application provide a filtering method, a decoder, an encoder, and a computer-readable storage medium, which can improve the accuracy of video encoding and decoding.
  • the video encoding system 100 includes a transformation and quantization unit 101, an intra estimation unit 102, an intra prediction unit 103, a motion compensation unit 104, a motion estimation unit 105, an inverse transformation and inverse quantization unit 106, a filter control analysis unit 107, a filtering unit 108, an encoding unit 109 and a decoded image cache unit 110.
  • a video coding block can be obtained by dividing a coding tree unit (CTU); the residual pixel information obtained after intra-frame or inter-frame prediction is then processed by the transformation and quantization unit 101, which transforms the video coding block, including transforming the residual information from the pixel domain to the transform domain, and quantizes the resulting transform coefficients to further reduce the bit rate;
  • the intra-frame estimation unit 102 and the intra-frame prediction unit 103 are used to perform intra prediction on the video encoding block; specifically, they are used to determine the intra prediction mode to be used to encode the video encoding block;
  • the motion compensation unit 104 and the motion estimation unit 105 are used to perform inter-frame prediction encoding of the received video encoding block relative to one or more blocks in one or more reference frames, to provide temporal prediction information; the motion estimation performed by the motion estimation unit 105 generates a motion vector that estimates the motion of the video encoding block, and the motion compensation unit 104 then performs motion compensation based on that motion vector; after determining the intra prediction mode, the intra prediction unit 103 also provides the selected intra prediction data to the encoding unit 109, and the motion estimation unit 105 sends the calculated motion vector data to the encoding unit 109; in addition, the inverse transformation and inverse quantization unit 106 is used for reconstruction of the video coding block: the residual block is reconstructed in the pixel domain, block-effect artifacts are removed from the reconstructed residual block through the filter control analysis unit 107 and the filtering unit 108, and the reconstructed residual block is then added to a predictive block in the frame stored in the decoded image cache unit 110 to generate a reconstructed video encoding block; the encoding unit 109 is used to encode various encoding parameters and quantized transform coefficients.
  • the contextual content can be based on adjacent coding blocks and can be used to encode information indicating the determined intra prediction mode and to output the code stream of the video signal; the decoded image buffer unit 110 is used to store the reconstructed video coding blocks for prediction reference. As video image encoding proceeds, new reconstructed video encoding blocks are continuously generated and stored in the decoded image cache unit 110.
  • the video decoding system 200 includes a decoding unit 201, an inverse transform and inverse quantization unit 202, an intra prediction unit 203, a motion compensation unit 204, a filtering unit 205, a decoded image cache unit 206, etc., wherein the decoding unit 201 can implement header information decoding and CABAC decoding, and the filtering unit 205 can implement DBF filtering/SAO filtering/ALF filtering.
  • after encoding, the code stream of the video signal is output; the code stream is input into the video decoding system 200 and first passes through the decoding unit 201 to obtain decoded transform coefficients; the inverse transform and inverse quantization unit 202 processes the transform coefficients to generate a residual block in the pixel domain; the intra prediction unit 203 may generate prediction data for the current video decoding block based on the determined intra prediction mode and data from previously decoded blocks of the current frame or picture; the motion compensation unit 204 determines prediction information for the video decoding block by parsing motion vectors and other associated syntax elements, and uses the prediction information to generate the predictive block for the video decoding block being decoded.
  • a decoded video block is formed by summing the residual block from the inverse transform and inverse quantization unit 202 and the corresponding predictive block generated by the intra prediction unit 203 or the motion compensation unit 204; the video quality of the decoded signal can be improved by the filtering unit 205, which removes blocking artifacts; the decoded video blocks are then stored in the decoded image buffer unit 206, which stores reference images for subsequent intra prediction or motion compensation and is also used for the output of the video signal, that is, the restored original video signal is obtained.
  • the method provided by the embodiment of the present application can be applied to the filtering unit 108 part as shown in Figure 7A (indicated by a black bold box).
  • the method in the embodiment of the present application can be applied to a video coding system (referred to as "encoder” for short).
  • the "current block” specifically refers to the block currently to be encoded in the video image (which may also be referred to as the "encoding block” for short).
  • filtering processing is performed on the current block, which mainly acts on the filtering unit 108 of the video encoding system 100.
  • the filtering method provided by the embodiment of the present application is implemented through the filtering unit 108.
  • the embodiment of the present application provides a filtering method, which is applied to a video encoding device, that is, an encoder.
  • the functions implemented by this method can be implemented by calling the program code by the second processor in the video encoding device.
  • the program code can be stored in a computer-readable storage medium.
  • the video encoding device at least includes a second processor and a second storage medium. In the following, the current decoding block and the current encoding block are both referred to as the current block.
  • the current frame is the original video frame.
  • the encoder divides the input original video frame into at least one video encoding block, and encodes each block in the at least one video encoding block through each block-level encoding process.
  • the encoder obtains the residual pixel information of the current block through intra-frame or inter-frame prediction as the initial residual.
  • in intra-frame prediction, the encoder predicts the current block with reference to the image information of adjacent blocks of the current frame to obtain a prediction block; residual calculation between the prediction block and the current block, that is, the original image block, yields the initial residual.
  • the encoder performs image reconstruction and filtering on the initial residual of the current block in the current frame, determines the second reconstructed image block and the second filtered image block corresponding to the current block, and continues to perform image block reconstruction and filtering on the next block in the current frame until the current frame is fully processed, obtaining the second reconstructed image block and the second filtered image block corresponding to each block.
  • the encoder performs image reconstruction and filtering on the initial residual of the current block in the current frame.
  • the process of obtaining the second reconstructed image block and the second filtered image block corresponding to the current block may include:
  • the encoder performs image reconstruction on the initial residual of the current block to obtain a second reconstructed image block; it filters the second reconstructed image block to obtain a second filtered image block.
  • the process by which the encoder performs image reconstruction on the initial residual of the current block may include: performing image reconstruction on the initial residual of the current block to obtain a second initial reconstructed image block; performing deblocking filtering on the second initial reconstructed image block to obtain a second deblocked image block; and performing sample adaptive offset filtering on the second deblocked image block to obtain the second reconstructed image block.
  • the encoder may filter the second reconstructed image block to obtain a second filtered image block corresponding to the current block.
  • the encoder performs image reconstruction on the initial residual of the current block to obtain a second initial reconstructed image block, and uses it as the input of the filtering unit in the encoder: the DBF filter module produces the second deblocked image block, and SAO filtering on the second deblocked image block produces the second reconstructed image block.
  • the second reconstructed image block is then used as the input of the loop filter module in the filtering unit, and a loop filter module based on a multi-layer convolutional neural network, or a loop filter module based on a multi-layer residual neural network (CNNLF), performs loop filtering on the second reconstructed image block to obtain the second filtered image block output by the loop filtering module.
  • the encoder calculates the scaling factor within the pixel range of each block based on the second reconstructed image block and the second filtered image block corresponding to each block in the current frame, and determines the block-level scaling factor corresponding to each block.
  • the encoder's calculation process of the block-level scaling factor includes: a pixel residual calculation process, a block-level residual calculation process, and a block-level residual fitting process.
  • within the block-level pixel range corresponding to the current block, the encoder performs, at each pixel position, residual calculation among the corresponding pixels of the current block (that is, the original image block), the second reconstructed image block and the second filtered image block, obtaining for each pixel position the original pixel residual that represents the difference between the original pixel and the reconstructed pixel, and the filtered pixel residual that represents the difference between the filtered pixel and the reconstructed pixel.
  • At least one block-level residual is obtained by fusing the original pixel residual and the filtered pixel residual at each pixel position within the pixel range of each block. For example, the original pixel residual and the filtered pixel residual at each pixel position within the pixel range of each block are traversed and weighted to obtain the block-level original pixel residual, the block-level filtered pixel residual, the first block-level weighted residual and the second block-level weighted residual.
  • the first block-level weighted residual and the second block-level weighted residual are obtained by applying different weighting methods to the original pixel residual and the filtered pixel residual at each pixel position within the current block pixel range.
  • the encoder may fit at least one block-level residual calculated by the block-level residual calculation process to obtain the block-level scaling factor.
  • the block-level scaling factor is obtained by fitting using the least squares method in combination with a preset value range (exemplarily, including a preset upper limit value and a preset lower limit value).
  • the encoder performs scaling factor calculation based on the second reconstructed image block and the second filtered image block.
  • the process of determining the block-level scaling factor corresponding to each block can be expressed as formulas (1) to (7), as follows:
  • cnn(xi, yi) represents the pixel corresponding to the i-th pixel position in the second filtered image block, that is, the second filtered pixel.
  • the encoder uses formula (2) to determine the filtered pixel residual cnnResi(xi, yi).
  • Wk represents the width value of each block (taking the k-th block as an example), and Hk represents the height value of each block.
  • through formula (3), the encoder can traverse the original pixel residual orgResi(xi, yi) at each pixel position of each block to determine the block-level original pixel residual sum_orgResi corresponding to each block; and, through formula (4), traverse the filtered pixel residual cnnResi(xi, yi) at each pixel position to determine the block-level filtered pixel residual sum_cnnResi corresponding to each block.
  • the encoder performs weighting processing based on the block-level original pixel residual and the block-level filtered pixel residual to obtain the block-level scaling factor corresponding to each block.
• the encoder uses formula (6) to weight and fuse the original pixel residual orgResi(x_i, y_i) and the filtered pixel residual cnnResi(x_i, y_i) corresponding to each pixel position in each block, so as to determine the second block-level weighted residual sum_crossMulti corresponding to each block.
• SF_bottom is the preset lower limit value, and SF_up is the preset upper limit value.
• the encoder, in combination with the preset upper limit value and the preset lower limit value, performs least squares processing on the first block-level weighted residual sum_cnnMulti, the second block-level weighted residual sum_crossMulti, the block-level original pixel residual sum_orgResi, and the block-level filtered pixel residual sum_cnnResi to obtain the block-level scaling factor BSF corresponding to each block.
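As one plausible reading of the least-squares fit described above, the following Python sketch derives a block-level scaling factor from the two residuals of a block. The function name, the clamp bounds (0.5/1.5), and the use of the cross/auto residual products as the closed-form solution are illustrative assumptions, not the normative formulas (1)-(7):

```python
import numpy as np

def block_scaling_factor(org, rec, cnn, sf_bottom=0.5, sf_up=1.5):
    """Sketch of a block-level scaling factor (BSF) fit.

    org: original pixels, rec: second reconstructed pixels,
    cnn: second filtered (network output) pixels -- all the same shape.
    """
    org_resi = org.astype(np.float64) - rec   # original pixel residual
    cnn_resi = cnn.astype(np.float64) - rec   # filtered pixel residual

    sum_cnn_multi = np.sum(cnn_resi * cnn_resi)    # weighted residual of cnn_resi with itself
    sum_cross_multi = np.sum(org_resi * cnn_resi)  # weighted residual fusing both residuals

    if sum_cnn_multi == 0:
        return 1.0  # nothing to scale; leave the block unchanged
    # least-squares minimizer of sum((org_resi - s * cnn_resi)^2) over s
    bsf = sum_cross_multi / sum_cnn_multi
    # restrict the derived value to the preset lower/upper limits
    return float(np.clip(bsf, sf_bottom, sf_up))
```

A factor clipped to exactly SF_bottom or SF_up is what the later screening steps treat as marking a "difference block".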
• in the above-mentioned process in which the encoder calculates the scaling factor based on the second reconstructed image block and the second filtered image block and determines the block-level scaling factor corresponding to each block, the original pixel residual and the filtered pixel residual can be calculated separately on each of the at least one color component to obtain the original pixel residual and the filtered pixel residual corresponding to each color component. Then, at each pixel position within the pixel range of each block, the original pixel residuals and the filtered pixel residuals corresponding to each color component are traversed and weighted to obtain the block-level scaling factor corresponding to each color component.
• when the block-level scaling factors include the block-level scaling factor corresponding to each color component, the encoder, when fusing the filtered block-level scaling factors, can perform the fusion for each color component separately to obtain the first frame-level scaling factor corresponding to each color component.
• when the encoder completes the filtering process for each block in the current frame through the filtering unit, it can traverse the second reconstructed image blocks corresponding to each block obtained through each block-level encoding process to obtain the second reconstructed image corresponding to the current frame; similarly, by traversing the second filtered image blocks corresponding to each block, the second filtered image corresponding to the current frame is obtained, that is, the video image signal output by the filtering unit on the encoder side.
• the encoder, based on the second reconstructed image and the second filtered image corresponding to the current frame, performs the scaling factor calculation over the entire image (frame-level) range using the pixels corresponding to each pixel position in the current frame, the second reconstructed image, and the second filtered image, to obtain the second frame-level scaling factor.
  • the encoder's process of calculating the second frame-level scaling factor is similar to the above-mentioned process of calculating the block-level scaling factor, including: a pixel residual calculation process, a block-level residual calculation process, and a block-level residual fitting process.
• the encoder can determine, based on the original pixel corresponding to each pixel position in the current frame, the original pixel residual between the original pixel and the second reconstructed pixel corresponding to the same pixel position in the second reconstructed image; and determine, based on the second filtered pixel corresponding to each pixel position in the second filtered image, the filtered pixel residual between the second filtered pixel and the second reconstructed pixel corresponding to the same pixel position.
• the original pixel residual and the filtered pixel residual at each pixel position within the pixel range of the current frame are traversed and weighted to obtain the frame-level original pixel residual, the frame-level filtered pixel residual, the first frame-level weighted residual, and the second frame-level weighted residual; the first frame-level weighted residual and the second frame-level weighted residual are obtained by applying different weighting methods to the original pixel residual and the filtered pixel residual at each pixel position within the current frame range.
• the second reconstructed image is obtained based on the second reconstructed image block corresponding to each block, and the second filtered image is obtained based on the second filtered image block corresponding to each block; the second frame-level scaling factor is then calculated based on the second reconstructed image and the second filtered image. Therefore, the encoder can also directly use the filtered pixel residual and the original pixel residual corresponding to each pixel position in each block, obtained in the pixel residual calculation process during the calculation of the block-level scaling factor of each block, to determine the frame-level original pixel residual and the frame-level filtered pixel residual, thereby reducing the computational workload.
• the first frame-level weighted residual can be obtained by weighting and fusing the filtered pixel residuals corresponding to each pixel position in the second filtered image; the filtered pixel residual corresponding to each pixel position is weighted and fused with the original pixel residual corresponding to each pixel position in the second reconstructed image to obtain the second frame-level weighted residual; then, in combination with the preset upper limit value and the preset lower limit value, least squares processing is performed on the first frame-level weighted residual, the second frame-level weighted residual, the frame-level original pixel residual, and the frame-level filtered pixel residual to obtain the second frame-level scaling factor.
• the calculation formula of the second frame-level scaling factor may be similar to the above-mentioned formulas (1) to (7). It should be noted that when calculating the second frame-level scaling factor, (x_i, y_i) in formulas (1) to (7) represents the i-th pixel position within the pixel range of the current frame; W_k can be W_m, representing the width value of the current frame (taking the m-th frame in the current video stream sequence as an example); and H_k can be H_m, representing the height value of the current frame (taking the m-th frame in the current video stream sequence as an example).
  • the above two calculation methods of the second frame-level scaling factor can be selected according to the specific actual situation, and are not limited in the embodiment of the present application.
• the second frame-level scaling factor can also be obtained by performing the calculation separately on each of the at least one color component; the original pixel residual and the filtered pixel residual corresponding to each color component are traversed and weighted at each pixel position within the pixel range of the current frame to obtain the second frame-level scaling factor corresponding to each color component.
• S105: Determine the frame-level scaling factor based on the first frame-level scaling factor and the second frame-level scaling factor.
• the first frame-level scaling factor is obtained by screening the block-level scaling factors corresponding to each block; the second frame-level scaling factor is obtained by calculating the scaling factor for pixels within the image-level range.
  • the encoder can compare the distortion correction performance based on the first frame-level scaling factor and the second frame-level scaling factor, and determine the scaling factor with better distortion correction performance as the final frame-level scaling factor.
• the distortion correction performance can be compared by calculating the distortion costs of the first frame-level scaling factor and the second frame-level scaling factor respectively, or other related indicators can be used to measure and compare the distortion correction performance of the first frame-level scaling factor and the second frame-level scaling factor. The specific selection is made according to the actual situation and is not limited by the embodiments of this application.
  • S105 can be implemented by executing S1051-S1053, as follows:
• the encoder can use the first frame-level scaling factor and the second frame-level scaling factor to scale the second reconstructed image and the second filtered image respectively, to obtain the first modified image corresponding to the first frame-level scaling factor and the second modified image corresponding to the second frame-level scaling factor.
  • the first frame-level scaling factor includes the first frame-level scaling factor corresponding to each color component.
• the encoder can, for each pixel position in the second reconstructed image and the second filtered image, based on the first frame-level scaling factor corresponding to each color component, perform corresponding scaling and superimposition processing on the second reconstructed pixel on each color component in the second reconstructed image and the second filtered pixel on each color component in the second filtered image, to obtain the first corrected pixel corresponding to each pixel position and containing each color component; that is, the first frame-level scaling factor is used to scale the second reconstructed image and the second filtered image to obtain the first corrected image.
• the encoder uses the first frame-level scaling factor to perform scaling processing on the second reconstructed image and the second filtered image; the process of obtaining the first modified image can be implemented through formula (8), as follows:
  • RSF is the first frame-level scaling factor
• (x_i, y_i) represents the i-th pixel position within the pixel range of the current frame; cnn(x_i, y_i) represents the second filtered pixel corresponding to the i-th pixel position in the second filtered image; rec(x_i, y_i) represents the second reconstructed pixel corresponding to the i-th pixel position in the second reconstructed image.
• the encoder uses formula (8) to determine the second pixel residual cnn(x_i, y_i) - rec(x_i, y_i), uses the first frame-level scaling factor RSF to scale the second pixel residual, and superimposes it with the second reconstructed pixel rec(x_i, y_i) to obtain the first corrected pixel output_1(x_i, y_i) corresponding to each pixel position; then, by traversing the first corrected pixel corresponding to each pixel position, the first corrected image is obtained.
• the encoder uses the first frame-level scaling factor to scale the second reconstructed image and the second filtered image; the process of obtaining the first modified image can also be implemented through formula (9), as follows:
• where 1 is the preset coefficient; that is, the preset coefficient takes a value of 1.
• the encoder uses formula (9) to scale the second reconstructed pixel rec(x_i, y_i) by the difference (1 - RSF) between the preset coefficient (1) and the first frame-level scaling factor.
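The two correction forms described for formulas (8) and (9) can be sketched as follows; the function names and the scalar per-pixel treatment are illustrative assumptions:

```python
def correct_pixel_f8(rec, cnn, rsf):
    """Formula (8)-style: scale the second pixel residual, then superimpose."""
    return rec + rsf * (cnn - rec)

def correct_pixel_f9(rec, cnn, rsf):
    """Formula (9)-style: blend with weights (1 - RSF) and RSF."""
    return (1 - rsf) * rec + rsf * cnn
```

Both produce the same first corrected pixel, since rec + RSF * (cnn - rec) expands to (1 - RSF) * rec + RSF * cnn; the two formulas are algebraic rearrangements of one correction.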
• the encoder uses the second frame-level scaling factor to perform scaling processing on the second reconstructed image and the second filtered image; the process of obtaining the second corrected image is consistent with the above-mentioned process of using the first frame-level scaling factor to obtain the first corrected image, and will not be repeated here.
• the encoder determines the distortion cost between the first modified image and the current frame to obtain the first distortion cost corresponding to the first frame-level scaling factor; and determines the distortion cost between the second modified image and the current frame to obtain the second distortion cost corresponding to the second frame-level scaling factor.
• the encoder determines the first distortion cost and the second distortion cost by calculating the MSE or MAE at each pixel position; other error calculation methods can also be used to obtain a distortion cost characterizing the error or degree of distortion between the modified image and the original image. The specific selection is made according to the actual situation and is not limited by the embodiments of this application.
• the encoder compares the first distortion cost and the second distortion cost to evaluate which of the first frame-level scaling factor and the second frame-level scaling factor corresponds to higher coding performance, thereby determining the frame-level scaling factor from the first frame-level scaling factor and the second frame-level scaling factor.
• if the first distortion cost is not greater than the second distortion cost, it means that the distortion correction performance of the first frame-level scaling factor is higher, and the encoder determines the first frame-level scaling factor as the frame-level scaling factor; if the first distortion cost is greater than the second distortion cost, it means that the distortion correction performance of the second frame-level scaling factor is higher than that of the first frame-level scaling factor, and the encoder determines the second frame-level scaling factor as the frame-level scaling factor.
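The cost comparison can be sketched as follows, using MSE as the distortion measure (one of the options the text mentions); the function name and the formula (8)-style construction of the modified images are assumptions:

```python
import numpy as np

def pick_frame_scaling_factor(org, rec, cnn, rsf, sf):
    """Return whichever of RSF and SF yields the lower distortion cost.

    org: current (original) frame, rec: second reconstructed image,
    cnn: second filtered image, rsf/sf: candidate frame-level scaling factors.
    """
    def mse(a, b):
        return float(np.mean((a - b) ** 2))

    img_rsf = rec + rsf * (cnn - rec)  # first modified image
    img_sf = rec + sf * (cnn - rec)    # second modified image
    d_rsf = mse(img_rsf, org)          # first distortion cost
    d_sf = mse(img_sf, org)            # second distortion cost
    return rsf if d_rsf <= d_sf else sf
```

The factor that better corrects the distortion of this particular frame wins; only the winner is subsequently written into the code stream.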
• after the encoder determines the frame-level scaling factor, it encodes the frame-level scaling factor into the code stream for the decoder to read.
• the decoder can parse the current frame and the frame-level scaling factor corresponding to the current frame from the code stream, and the filtering unit on the decoder side uses the frame-level scaling factor to perform residual correction on the reconstructed image in the decoding process.
• when image reconstruction and filtering are performed based on the initial residual of each block in the current frame, the encoder can also obtain the second filtered image corresponding to the current frame based on the second filtered image block corresponding to each block, that is, by traversing the second filtered image blocks corresponding to each block.
  • the above-mentioned filtering unit may also include an ALF filter.
  • image reconstruction and filtering are performed based on the initial residual of each block in the current frame to obtain the second reconstructed image block and the second filtered image block corresponding to each block.
  • the encoder can also perform adaptive filtering on the second filtered image block corresponding to each block during the filtering process in the encoding process corresponding to each block, to obtain an adaptive filtered image block corresponding to each block.
• adaptive filtering is performed on the second filtered image block corresponding to the current block to obtain the adaptive filtered image block corresponding to the current block, and the adaptive filtering is continued on the second filtered image block corresponding to the next block in the current frame to obtain the adaptive filtered image block corresponding to the next block, until the current frame is processed and the adaptive filtered image block corresponding to each block is obtained.
• the encoder can obtain the second filtered image corresponding to the current frame by traversing the adaptive filtered image blocks corresponding to each block.
• image reconstruction and filtering are performed based on the initial residual of each block in the current frame, and the second reconstructed image block and the second filtered image block corresponding to each block are determined; based on the second reconstructed image block and the second filtered image block, the scaling factor is calculated within the range of the image block, and the block-level scaling factor corresponding to each block is determined. Furthermore, the block-level scaling factors corresponding to each block in the current frame are screened to determine the first frame-level scaling factor, so that the first frame-level scaling factor does not include the scaling factors corresponding to the difference blocks, thereby reducing the influence of the scaling factors corresponding to the difference blocks in the current frame on the calculation of the frame-level scaling factor of the entire image and improving the accuracy of the first frame-level scaling factor.
• the embodiment of the present application determines the frame-level scaling factor by comparing the first frame-level scaling factor with the second frame-level scaling factor calculated within the image-level range, thereby improving the distortion correction performance of the frame-level scaling factor, improving the encoding performance, and ultimately improving the encoding accuracy.
  • N is a positive integer greater than 0.
  • the encoder determines and discards the block-level scaling factors corresponding to the difference blocks from the block-level scaling factors corresponding to each block through boundary value screening, and obtains N candidate block-level scaling factors.
  • S201 can be implemented through S2011-S2013, as follows:
• S2012: Determine the first proportion, that is, the proportion of the number of block-level scaling factors corresponding to the difference blocks to the total number of block-level scaling factors corresponding to each block.
• the encoder counts the number of block-level scaling factors corresponding to the difference blocks and calculates its proportion of the total number of block-level scaling factors corresponding to each block; the total number of block-level scaling factors corresponding to each block is the same as the number of blocks contained in the current frame.
• the difference blocks are discarded from the block-level scaling factors corresponding to each block, and the block-level scaling factors other than those corresponding to the difference blocks are used as the N candidate block-level scaling factors.
• the preset proportion threshold may include a preset upper limit proportion threshold TH_up and a preset lower limit proportion threshold TH_bottom.
  • the specific selection is based on the actual situation and is not limited in the embodiments of this application.
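The boundary-value screening of S2011-S2013 can be sketched as follows: block-level factors pinned to the preset upper or lower limit are treated as belonging to difference blocks and, when their proportion is below a threshold, discarded. The limit values (0.5/1.5), the single proportion threshold, and the function name are illustrative assumptions:

```python
def boundary_screen(ctu_sf, sf_bottom=0.5, sf_up=1.5, th_ratio=0.25):
    """Drop factors equal to the clamp bounds when they are a small minority."""
    boundary = [s for s in ctu_sf if s in (sf_bottom, sf_up)]
    first_proportion = len(boundary) / len(ctu_sf)
    if first_proportion < th_ratio:
        # few difference blocks: discard them for the sake of the global image
        return [s for s in ctu_sf if s not in (sf_bottom, sf_up)]
    return list(ctu_sf)  # many boundary blocks: keep everything as candidates
```

When the proportion is not below the threshold, the pinned factors are evidently representative of the frame rather than outliers, so all L factors remain candidates.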
  • S201 can be implemented through S2014-S2017, as follows:
• the encoder traverses the block-level scaling factors corresponding to each block and determines at least one of the maximum block-level scaling factor corresponding to the maximum value and the minimum block-level scaling factor corresponding to the minimum value as the at least one block-level scaling factor.
• the encoder may determine the maximum block-level scaling factor corresponding to the maximum value among the block-level scaling factors corresponding to each block as the at least one block-level scaling factor; it may also determine the minimum block-level scaling factor corresponding to the minimum value as the at least one block-level scaling factor; or it may determine both the maximum block-level scaling factor and the minimum block-level scaling factor among the block-level scaling factors corresponding to each block as the at least one block-level scaling factor.
  • the encoder performs mean calculation on the block-level scaling factors except at least one block-level scaling factor among the block-level scaling factors corresponding to each block, and obtains the first mean value.
• the encoder calculates the difference between the at least one block-level scaling factor, that is, at least one of the maximum block-level scaling factor and the minimum block-level scaling factor, and the first mean value. If the difference is greater than the preset difference threshold, it means that the maximum block-level scaling factor or the minimum block-level scaling factor differs significantly from the other block-level scaling factors; therefore, the at least one block-level scaling factor is used as the block-level scaling factor corresponding to the difference block.
• if the difference between the maximum block-level scaling factor and the first mean value is greater than the preset difference threshold, the maximum block-level scaling factor is used as the block-level scaling factor corresponding to the difference block; if the difference between the minimum block-level scaling factor and the first mean value is greater than the preset difference threshold, the minimum block-level scaling factor is used as the block-level scaling factor corresponding to the difference block; if both differences are greater than the preset difference threshold, the maximum block-level scaling factor and the minimum block-level scaling factor are both used as the block-level scaling factors corresponding to the difference blocks.
• the preset difference threshold may include a preset maximum difference threshold TH_max and a preset minimum difference threshold TH_min.
  • the specific selection is based on the actual situation and is not limited in the embodiments of this application.
  • the block-level scaling factors other than the block-level scaling factors corresponding to the difference blocks are used as N candidate block-level scaling factors.
  • the encoder discards the block-level scaling factors corresponding to the difference blocks filtered by the maximum value and the minimum value from the block-level scaling factors corresponding to each block, and obtains N candidate block-level scaling factors.
• the number of maximum block-level scaling factors or minimum block-level scaling factors may be more than one; each such scaling factor is used as a block-level scaling factor corresponding to a difference block, and among the block-level scaling factors corresponding to each block, the block-level scaling factors other than those corresponding to the difference blocks are used as the N candidate block-level scaling factors.
• in the embodiment of the present application, the block-level scaling factors corresponding to the difference blocks are determined through the maximum scaling factor and the minimum scaling factor, and among the block-level scaling factors corresponding to each block, those other than the block-level scaling factors corresponding to the difference blocks are used as the N candidate block-level scaling factors; the first frame-level scaling factor is then calculated based on the N candidate block-level scaling factors, which reduces the impact of the differences of a few blocks on the calculation of the first frame-level scaling factor and improves the accuracy of the first frame-level scaling factor, thereby improving the coding accuracy.
  • S202 may be as shown in Figure 13, including S2021, as follows:
• the encoder may average the N candidate block-level scaling factors screened according to the above-mentioned preset upper limit value and preset lower limit value to obtain the first frame-level scaling factor; or average the N candidate block-level scaling factors screened out based on the above-mentioned maximum block-level scaling factor and minimum block-level scaling factor to obtain the first frame-level scaling factor.
• after the screening based on the preset upper limit value and the preset lower limit value, the encoder further screens the N candidate block-level scaling factors, determining at least one of the largest candidate block-level scaling factor corresponding to the maximum value and the smallest candidate block-level scaling factor corresponding to the minimum value as at least one candidate block-level scaling factor.
  • the process by which the encoder determines at least one candidate block-level scaling factor among N candidate block-level scaling factors is consistent with the above description of the process of determining at least one block-level scaling factor among the block-level scaling factors corresponding to each block, No further details will be given here.
• the encoder uses the at least one candidate block-level scaling factor as the scaling factor corresponding to a difference block and discards it from the N candidate block-level scaling factors to obtain M updated block-level scaling factors; that is, among the N candidate block-level scaling factors, those other than the at least one candidate block-level scaling factor are determined as the M updated block-level scaling factors.
  • M is a positive integer greater than 0 and less than or equal to N.
• S2025: Average the M updated block-level scaling factors to obtain the first frame-level scaling factor.
  • the encoder averages the M updated block-level scaling factors obtained through two screenings to obtain the first frame-level scaling factor.
• the encoder can also, based on the preset upper limit value and the preset lower limit value, further screen the N candidate block-level scaling factors to obtain M updated block-level scaling factors, and then average the M updated block-level scaling factors to obtain the first frame-level scaling factor. No further details will be given here.
• the N candidate block-level scaling factors are screened again through the maximum scaling factor and the minimum scaling factor, further reducing the impact of block differences on the calculation of the first frame-level scaling factor and improving the accuracy of the first frame-level scaling factor, thereby improving the coding accuracy.
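The second-stage screening and averaging (S2022-S2025) can be sketched as follows: the maximum and minimum candidates are dropped when they deviate from the mean of the remaining candidates by more than a threshold, and the survivors are averaged into the first frame-level scaling factor RSF. The threshold value, the function name, and the removal of all equal-valued extremes are illustrative assumptions:

```python
def first_frame_scaling_factor(candidates, th_diff=0.2):
    """Screen max/min outliers among the candidates, then average to get RSF."""
    cand = list(candidates)
    for extreme in (max(cand), min(cand)):
        rest = [s for s in cand if s != extreme]
        # compare the extreme against the mean of the other candidates
        if rest and abs(extreme - sum(rest) / len(rest)) > th_diff:
            cand = rest  # treat the extreme as a difference block and discard it
    return sum(cand) / len(cand)  # mean of the M updated block-level scaling factors
```

Note that, as written, removing an extreme value removes every candidate equal to it, matching the observation above that the number of maximum or minimum factors may be more than one.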
  • an encoder framework is provided, as shown in Figure 15.
• the Neural Network based in-Loop Filter (NNLF) can be implemented through a loop filter based on a multi-layer convolutional network or a loop filter based on a multi-layer residual network.
  • the scaling factor calculation is used to determine the second frame-level scaling factor SF and the first frame-level scaling factor RSF, and the frame-level scaling factor is determined based on SF and RSF.
  • the enabling of scaling factor calculation does not depend on the switches of DB, SAO, ALF, and NNLF. It is placed after NNLF and before ALF.
• the scaling factor calculation process includes: through each block-level coding process, for each pixel position in each block of the current frame in the current sequence, each color component of each pixel corresponding to each pixel position in each block, the second reconstructed image block, and the second filtered image block is traversed, and the block-level scaling factor corresponding to each block is calculated through formulas (1)-(7).
  • L block-level scaling factors are obtained.
• the derivation range of the block-level scaling factors is limited by the preset upper limit value and the preset lower limit value, and the block-level scaling factors are then screened.
• for each ctuSF(j) among the L block-level scaling factors, if ctuSF(j) is equal to the preset upper limit value or the preset lower limit value, it means that the magnitude of correction required for the second reconstructed image block corresponding to ctuSF(j) reaches the critical value that can be corrected, and it is regarded as a special block, that is, a difference block.
• the proportion of the special blocks among the L blocks, that is, the first proportion, is determined and counted. If the proportion is less than the preset proportion threshold, the special blocks are discarded from the L block-level scaling factors out of consideration for the global image blocks.
• by traversing the remaining block-level scaling factors, that is, the at least one candidate block-level scaling factor, the maximum value ctuSF_max and the minimum value ctuSF_min can be obtained; an average value is calculated over the at least one candidate block-level scaling factor to obtain the second mean value. If the difference between ctuSF_max or ctuSF_min and the second mean value exceeds the preset difference threshold, ctuSF_max or ctuSF_min is treated as a special block, and the special block is discarded from the at least one candidate block-level scaling factor out of consideration for the global image blocks.
  • the updated block-level scaling factor set ctuSF_SET is obtained, that is, M updated block-level scaling factors.
  • the first frame-level scaling factor RSF is obtained by averaging ctuSF_SET.
• each color component of each pixel corresponding to each pixel position in each block in the current frame, the second reconstructed image, and the second filtered image is traversed to calculate the second frame-level scaling factor SF corresponding to the current frame.
• the first corrected image and the second corrected image are respectively compared with the original image and the corresponding costs are calculated; for example, the first distortion cost D_RSF corresponding to RSF and the second distortion cost D_SF corresponding to SF are obtained, and D_RSF and D_SF are compared.
• the applicant conducted tests under the common test condition Random Access (RA) configuration, using a network model integrated with the scaling factor calculation method of the present application, on some data sets in the universal sequence Class D WQVGA specified by JVET.
  • the test is based on the VTM11.0-nnvc reference platform.
• the embodiment of the present application reduces the impact of a few special blocks on the calculation of the frame-level scaling factor by considering the differences among the block-level scaling factors, so that the ultimately derived optimized frame-level scaling factor, that is, the first frame-level scaling factor, is more suitable for scaling most image blocks.
  • the scaling factor calculation method in the embodiment of the present application only runs on the encoding side, and the calculated first frame-level scaling factor is image-level, so there is no need to consume encoding bits and has no impact on decoding complexity.
• with the scaling factor optimization method in the embodiment of the present application, the coding performance can be greatly improved on the basis of the neural network loop filter; in practical applications, performance can be further improved through the design of the optimization algorithm and the setting of threshold values.
  • FIG. 16 is an optional flow diagram of the filtering method provided by the embodiment of the present application, including S301-S303, as follows:
• the frame-level scaling factor is determined from the first frame-level scaling factor and the second frame-level scaling factor corresponding to the current frame; the first frame-level scaling factor is obtained by screening the block-level scaling factors in the current frame, where the first frame-level scaling factor does not include the block-level scaling factor corresponding to a difference block, and a difference block is a block in the current frame that differs from the other blocks.
• the decoder receives and parses the code stream to obtain the frame-level scaling factor corresponding to the current frame, and performs image reconstruction on the current block in the current frame, such as inverse transformation and inverse quantization, to obtain the initial residual of the current block in the current frame.
  • the frame-level scaling factor is determined by comparing the distortion costs corresponding to the first frame-level scaling factor and the second frame-level scaling factor of the current frame in the encoder-side filtering method described above.
  • the first frame-level scaling factor is obtained by screening the block-level scaling factors in the current frame and fusing the screened block-level scaling factors.
  • the screened block-level scaling factors do not include the block-level scaling factor corresponding to the difference block; the difference block is a block that differs from the other blocks in the current frame. Therefore, the first frame-level scaling factor does not include the block-level scaling factor corresponding to the difference block.
  • the decoder performs image block reconstruction based on the initial residual, and obtains the first reconstructed image block by superimposing the initial residual on the predicted image block.
  • the decoder filters the first reconstructed image block to obtain a first filtered image block.
  • the decoder performs image block reconstruction on the initial residual to obtain a first initial reconstructed image block; performs deblocking filtering on the first initial reconstructed image block to obtain a first deblocked image block; and performs sample adaptive offset filtering on the first deblocked image block to obtain the first reconstructed image block.
  • the decoder performs loop filtering on the first reconstructed image block to obtain the first filtered image block.
  • the decoder determines the first reconstructed pixel and the first filtered pixel corresponding to each pixel position based on the first reconstructed image block and the first filtered image block; uses the frame-level scaling factor to perform scaling processing on the first reconstructed pixel and the first filtered pixel to determine the corrected pixel corresponding to each pixel position; and determines the corrected image block based on the corrected pixel corresponding to each pixel position.
  • the decoder may, for each pixel position, determine the first pixel residual between the first reconstructed pixel and the first filtered pixel; use the frame-level scaling factor to scale the first pixel residual; and superimpose the scaled residual on the first reconstructed pixel to obtain the corrected pixel corresponding to each pixel position.
  • the decoder can use the difference between the frame-level scaling factor and a preset coefficient to scale the first reconstructed pixel to obtain a first reconstructed corrected pixel; use the frame-level scaling factor to scale the first filtered pixel to obtain a first filtered corrected pixel; and combine the first reconstructed corrected pixel with the first filtered corrected pixel to obtain the corrected pixel.
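The residual form and the combined form of pixel correction described above are algebraically equivalent when the preset coefficient is 1. As a hedged illustration only (the function name, a preset coefficient of 1, and floating-point arithmetic are assumptions, not the normative fixed-point implementation):

```python
import numpy as np

def correct_block(recon: np.ndarray, filtered: np.ndarray, s: float) -> np.ndarray:
    """Correct a block of pixels with a frame-level scaling factor s.

    Residual form:  corrected = recon + s * (filtered - recon)
    Combined form:  corrected = (1 - s) * recon + s * filtered
    The two forms are algebraically identical.
    """
    residual_form = recon + s * (filtered - recon)
    combined_form = (1.0 - s) * recon + s * filtered
    assert np.allclose(residual_form, combined_form)
    return residual_form
```

In a real codec the arithmetic would be integer-based with rounding and bit shifts; the floating-point form is used here only to make the equivalence of the two descriptions visible.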
  • the frame-level scaling factor includes: a frame-level scaling factor corresponding to each color component.
  • for each pixel position, the decoder performs scaling processing on the first pixel of the first reconstructed pixel on each color component and the second pixel of the first filtered pixel on each color component, according to the frame-level scaling factor corresponding to that color component, to obtain the corrected pixel corresponding to each color component.
  • the decoder can also perform image block reconstruction and filtering on the next block in the current frame, and use the frame-level scaling factor to scale the first reconstructed image block and the first filtered image block of the next block to obtain the corrected image block corresponding to the next block, until the current frame is fully processed.
  • the first filtered image of the current frame is then obtained, that is, the video image signal output by the filtering unit on the decoder side.
  • the decoder can also perform adaptive filtering on the corrected image block to obtain a second filtered image block, and continue filtering the next block in the current frame to obtain the second filtered image block corresponding to the next block, until the current frame is fully processed.
  • the first filtered image of the current frame is obtained from the second filtered image block corresponding to each block in the current frame.
  • the decoder-side filtering method provided by the embodiment of the present application is applied to the filtering unit on the decoder side.
  • the description of the relevant decoder-side steps is consistent with the description of the corresponding steps performed by the encoder-side filtering unit, and will not be repeated here.
  • the frame-level scaling factor parsed from the code stream by the decoder is determined by the first frame-level scaling factor and the second frame-level scaling factor corresponding to the current frame.
  • the first frame-level scaling factor is obtained by screening the block-level scaling factors of the current frame and does not include the block-level scaling factors corresponding to difference blocks, thereby reducing the error that the block-level scaling factors of difference blocks introduce into the calculation of the frame-level scaling factor.
  • the calculated first frame-level scaling factor can therefore more accurately represent the magnitude of correction required by most blocks in the current frame, improving the accuracy of the first frame-level scaling factor.
  • since the frame-level scaling factor is determined by comparing the first frame-level scaling factor with the second frame-level scaling factor, the distortion correction performance of the frame-level scaling factor is further improved.
  • using the frame-level scaling factor to filter the current block improves the accuracy of the corrected image block, thereby improving decoding performance and decoding accuracy.
  • this embodiment of the present application provides a decoder 1, including:
  • the parsing part 10 is configured to parse the code stream and determine the frame-level scaling factor and the initial residual of the current block; wherein the frame-level scaling factor is determined from the first frame-level scaling factor and the second frame-level scaling factor corresponding to the current frame, and the first frame-level scaling factor is obtained by screening the block-level scaling factors in the current frame, wherein the first frame-level scaling factor does not include the block-level scaling factor corresponding to the difference block;
  • the difference block is a block that differs from the other blocks in the current frame;
  • the first reconstruction and filtering part 11 is configured to perform image block reconstruction and filtering based on the initial residual, and determine the first reconstructed image block and the first filtered image block;
  • the first determining part 12 is configured to perform scaling processing using the frame-level scaling factor, the first reconstructed image block, and the first filtered image block to obtain a corrected image block corresponding to the current block.
  • the first determining part 12 is further configured to determine, based on the first reconstructed image block and the first filtered image block, the first reconstructed pixel and the first filtered pixel corresponding to each pixel position; perform scaling processing on the first reconstructed pixel and the first filtered pixel using the frame-level scaling factor to determine the corrected pixel corresponding to each pixel position; and determine the corrected image block according to the corrected pixel corresponding to each pixel position.
  • the first determining part 12 is further configured to determine, for each pixel position, a first pixel residual between the first reconstructed pixel and the first filtered pixel; use the frame-level scaling factor to scale the first pixel residual; and superimpose the scaled residual on the first reconstructed pixel to obtain the corrected pixel corresponding to each pixel position.
  • the first determining part 12 is also configured to use the difference between the frame-level scaling factor and the preset coefficient to scale the first reconstructed pixel to obtain a first reconstructed corrected pixel; use the frame-level scaling factor to scale the first filtered pixel to obtain a first filtered corrected pixel; and combine the first reconstructed corrected pixel with the first filtered corrected pixel to obtain the corrected pixel.
  • the frame-level scaling factor includes: a frame-level scaling factor corresponding to each color component; the first determining part 12 is also configured to, for each pixel position, perform scaling processing on the first pixel of the first reconstructed pixel on each color component and the second pixel of the first filtered pixel on each color component according to the frame-level scaling factor corresponding to each color component, to obtain the corrected pixel corresponding to each color component.
  • the first reconstruction and filtering part 11 is also configured to perform image block reconstruction based on the initial residual to obtain a first reconstructed image block, and to filter the first reconstructed image block to obtain the first filtered image block.
  • the first reconstruction and filtering part 11 is also configured to perform image block reconstruction on the initial residual to obtain a first initial reconstructed image block; perform deblocking filtering on the first initial reconstructed image block to obtain a first deblocked image block; and perform sample adaptive offset filtering on the first deblocked image block to obtain the first reconstructed image block.
  • the parsing part 10, the first reconstruction and filtering part 11, and the first determining part 12 are also configured to, after the corrected image block corresponding to the current block is obtained by scaling with the frame-level scaling factor, the first reconstructed image block, and the first filtered image block, perform image block reconstruction and filtering on the next block in the current frame, and use the frame-level scaling factor to scale the first reconstructed image block and the first filtered image block of the next block to obtain the corrected image block corresponding to the next block, until the current frame is fully processed; the first filtered image of the current frame is obtained from the corresponding corrected image blocks.
  • the decoder 1 further includes a first adaptive filtering part configured to perform adaptive filtering on the corrected image block to obtain a second filtered image block, and to continue filtering the next block in the current frame to obtain the second filtered image block corresponding to the next block, until the current frame is fully processed; the first filtered image of the current frame is obtained from the second filtered image blocks.
  • the embodiment of this application also provides a decoder, including:
  • a first memory 14 and a first processor 15;
  • the first memory 14 stores a computer program that can be run on the first processor 15, and when the first processor 15 executes the program, the filtering method corresponding to the decoder is implemented.
  • the first processor 15 can be implemented by software, hardware, firmware, or a combination thereof, and may use circuits, single or multiple application-specific integrated circuits (ASICs), single or multiple general-purpose integrated circuits, single or multiple microprocessors, single or multiple programmable logic devices, or a combination of the aforementioned circuits or devices, or other suitable circuits or devices, so that the first processor 15 can perform the corresponding steps of the decoder-side filtering method in the aforementioned embodiments.
  • the embodiment of the present application provides an encoder 2, as shown in Figure 19, including:
  • the second reconstruction and filtering part 20 is configured to perform image reconstruction and filtering based on the initial residual of each block in the current frame, and determine the second reconstructed image block and the second filtered image block corresponding to each block;
  • the second determination part 21 is configured to perform scaling factor calculation based on the second reconstructed image block and the second filtered image block, and determine the block-level scaling factor corresponding to each block;
  • the second determination part 21 is further configured to screen the block-level scaling factor corresponding to each block to determine the first frame-level scaling factor; the first frame-level scaling factor does not include the block-level scaling factor corresponding to the difference block; the difference block is a block in the current frame that differs from the other blocks;
  • perform scaling factor calculation based on the second reconstructed image and the second filtered image corresponding to the current frame to obtain the second frame-level scaling factor corresponding to the current frame; the second reconstructed image is determined from the second reconstructed image block corresponding to each block; the second filtered image is determined from the second filtered image block corresponding to each block; and determine the frame-level scaling factor based on the first frame-level scaling factor and the second frame-level scaling factor.
  • the second determining part 21 is also configured to perform boundary value screening on the block-level scaling factors corresponding to each block to obtain N candidate block-level scaling factors, where N is a positive integer greater than 0, and to perform averaging on the N candidate block-level scaling factors to obtain the first frame-level scaling factor.
  • the second determining part 21 is also configured to take the block-level scaling factors equal to the preset upper limit value or the preset lower limit value, among the block-level scaling factors corresponding to each block, as the block-level scaling factors corresponding to difference blocks; determine the first proportion of the number of block-level scaling factors corresponding to difference blocks in the total number of block-level scaling factors corresponding to all blocks; and, if the first proportion does not exceed a preset proportion threshold, use the block-level scaling factors other than those corresponding to the difference blocks as the N candidate block-level scaling factors.
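The boundary-value screening just described can be sketched as follows. This is an illustrative sketch only; the function name, the threshold values, and the fallback of keeping all factors when the proportion exceeds the threshold are assumptions not specified in this passage:

```python
def boundary_screen(factors, lower, upper, ratio_threshold):
    """First screening pass: block-level factors stuck at the preset lower
    or upper bound are treated as belonging to difference blocks. If their
    share of all blocks does not exceed the threshold, only the remaining
    factors are kept as the N candidate block-level scaling factors."""
    boundary = [f for f in factors if f == lower or f == upper]
    if len(boundary) / len(factors) <= ratio_threshold:
        return [f for f in factors if f != lower and f != upper]
    return list(factors)  # assumption: too many boundary factors, keep all
```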
  • the second determining part 21 is further configured to determine at least one of the maximum block-level scaling factor and the minimum block-level scaling factor among the block-level scaling factors corresponding to each block; average the block-level scaling factors other than the at least one block-level scaling factor to obtain a first mean value; if the difference between the at least one block-level scaling factor and the first mean value is greater than a preset difference threshold, take the at least one block-level scaling factor as the block-level scaling factor corresponding to a difference block; and, among the block-level scaling factors corresponding to each block, use the block-level scaling factors other than those corresponding to the difference blocks as the N candidate block-level scaling factors.
  • the second determining part 21 is further configured to determine at least one of the largest candidate block-level scaling factor and the smallest candidate block-level scaling factor among the N candidate block-level scaling factors; average the candidate block-level scaling factors other than the at least one candidate block-level scaling factor to obtain a second mean value; if the difference between the at least one candidate block-level scaling factor and the second mean value is greater than the preset difference threshold, determine the candidate block-level scaling factors other than the at least one candidate block-level scaling factor as M updated block-level scaling factors, where M is greater than 0 and less than or equal to N; and average the M updated block-level scaling factors to obtain the first frame-level scaling factor.
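The extreme-value screening and the final averaging can be sketched as follows. This is a simplified sketch: an extreme candidate is dropped when it deviates from the mean of the others by more than the threshold, and the survivors are fused by averaging; the function names and the handling of repeated extreme values are assumptions:

```python
def screen_extremes(candidates, diff_threshold):
    """Drop the largest/smallest candidate when it deviates from the mean
    of the other candidates by more than the preset difference threshold.
    Note: all copies of a dropped extreme value are removed."""
    kept = list(candidates)
    for extreme in (max(candidates), min(candidates)):
        others = [f for f in kept if f != extreme]
        if others and abs(extreme - sum(others) / len(others)) > diff_threshold:
            kept = others
    return kept

def first_frame_level_factor(candidates, diff_threshold):
    """Fuse the M updated block-level scaling factors by averaging."""
    kept = screen_extremes(candidates, diff_threshold)
    return sum(kept) / len(kept)
```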
  • the second determining part 21 is further configured to use the first frame-level scaling factor to scale the second reconstructed image and the second filtered image to obtain a first corrected image;
  • use the second frame-level scaling factor to scale the second reconstructed image and the second filtered image to obtain a second corrected image; determine, from the first corrected image and the second corrected image respectively, the first distortion cost corresponding to the first frame-level scaling factor and the second distortion cost corresponding to the second frame-level scaling factor; and determine the frame-level scaling factor from the first frame-level scaling factor and the second frame-level scaling factor by comparing the first distortion cost with the second distortion cost.
  • the second determination part 21 is further configured to determine the first frame-level scaling factor as the frame-level scaling factor if the first distortion cost is less than or equal to the second distortion cost, and to determine the second frame-level scaling factor as the frame-level scaling factor if the first distortion cost is greater than the second distortion cost.
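A sketch of this cost-based selection follows. The sum of squared differences against the original image is assumed here as the distortion cost (the passage does not fix the metric), and the tie goes to the first factor, matching "less than or equal to" above:

```python
import numpy as np

def ssd(a: np.ndarray, b: np.ndarray) -> float:
    """Sum of squared differences, assumed here as the distortion cost."""
    return float(np.sum((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def select_frame_factor(recon, filtered, original, s1, s2):
    """Apply each candidate frame-level factor, measure distortion of the
    corrected image against the original, and keep the cheaper factor."""
    cost1 = ssd(recon + s1 * (filtered - recon), original)
    cost2 = ssd(recon + s2 * (filtered - recon), original)
    return s1 if cost1 <= cost2 else s2
```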
  • the second determining part 21 is further configured to determine, for each pixel position in each block, the original pixel residual between the second reconstructed pixel of the second reconstructed image block and the original pixel, and the filtered pixel residual corresponding to the second filtered pixel of the second filtered image block; determine the block-level original pixel residual and the block-level filtered pixel residual corresponding to each block; and perform weighting processing based on the block-level original pixel residual and the block-level filtered pixel residual to obtain the block-level scaling factor corresponding to each block.
  • the second determination part 21 is also configured to weight and fuse the filtered pixel residuals corresponding to each pixel position in each block to determine the first block-level weighted residual corresponding to each block; weight and fuse the original pixel residual and the filtered pixel residual corresponding to each pixel position in each block to determine the second block-level weighted residual corresponding to each block; and, in combination with the preset upper limit value and the preset lower limit value, perform least squares processing on the first block-level weighted residual, the second block-level weighted residual, the block-level original pixel residual, and the block-level filtered pixel residual to obtain the block-level scaling factor corresponding to each block.
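The weighted residuals above correspond to the cross- and auto-correlation terms of a least-squares fit. A minimal sketch follows, assuming uniform weights, a simple clip to the preset bounds, and a fallback for degenerate blocks; the passage's exact weighting and fixed-point details are not reproduced:

```python
import numpy as np

def block_scaling_factor(recon, filtered, original, lower, upper):
    """Least-squares s minimising ||original - (recon + s*(filtered - recon))||^2:
        s = sum((o - r) * (f - r)) / sum((f - r)^2)
    clipped to the preset [lower, upper] range."""
    r = recon.astype(np.float64)
    f = filtered.astype(np.float64)
    o = original.astype(np.float64)
    denom = np.sum((f - r) ** 2)           # auto-correlation of the filtered residual
    if denom == 0.0:
        return lower                        # assumption: degenerate block falls back to the lower bound
    s = np.sum((o - r) * (f - r)) / denom   # cross-correlation with the original residual
    return float(np.clip(s, lower, upper))
```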
  • the second determination part 21 is also configured to traverse the original pixel residual corresponding to each pixel position in the second reconstructed image to determine the frame-level original pixel residual; the original pixel residual corresponding to each pixel position in the second reconstructed image is obtained from the original pixel residuals corresponding to each pixel position in the second reconstructed image block of each block; traverse the filtered pixel residual corresponding to each pixel position in the second filtered image to determine the frame-level filtered pixel residual; the filtered pixel residual corresponding to each pixel position in the second filtered image is obtained from the filtered pixel residuals corresponding to each pixel position in the second filtered image block; weight and fuse the filtered pixel residuals corresponding to each pixel position in the second filtered image to obtain the first frame-level weighted residual; weight and fuse the filtered pixel residuals corresponding to each pixel position in the second filtered image with the original pixel residuals corresponding to each pixel position in the second reconstructed image to obtain the second frame-level weighted residual; and, in combination with the preset upper limit value and the preset lower limit value, perform least squares processing on the first frame-level weighted residual, the second frame-level weighted residual, the frame-level original pixel residual, and the frame-level filtered pixel residual to obtain the second frame-level scaling factor.
  • the first frame-level scaling factor includes a first frame-level scaling factor corresponding to each color component; the second determining part 21 is also configured to, for each pixel position in the second reconstructed image and the second filtered image, perform corresponding scaling and superposition processing on each second reconstructed pixel of the second reconstructed image on each color component and each second filtered pixel of the second filtered image on each color component, according to the first frame-level scaling factor corresponding to each color component, to obtain the first corrected pixel including each color component corresponding to each pixel position; and traverse the first corrected pixels corresponding to each pixel position to obtain the first corrected image.
  • the second reconstruction and filtering part 20 is also configured to perform image reconstruction and filtering on the initial residual of the current block in the current frame to obtain the second reconstructed image block and the second filtered image block corresponding to the current block, and to continue to perform image block reconstruction and filtering on the next block in the current frame until the current frame is fully processed, obtaining the second reconstructed image block and the second filtered image block corresponding to each block.
  • the second reconstruction and filtering part 20 is also configured to perform image reconstruction on the initial residual of the current block to obtain the second reconstructed image block, and to filter the second reconstructed image block to obtain the second filtered image block.
  • the encoder 2 also includes a filtered image output part configured to, after image reconstruction and filtering are performed based on the initial residual of each block in the current frame to obtain the second reconstructed image block and the second filtered image block corresponding to each block, obtain a second filtered image corresponding to the current frame based on the second filtered image block corresponding to each block.
  • the filtered image output part is also configured to, after image reconstruction and filtering are performed on the initial residual of each block in the current frame to obtain the second reconstructed image block and the second filtered image block corresponding to each block, perform adaptive filtering on the second filtered image block corresponding to each block to obtain an adaptive filtered image block corresponding to each block, and traverse the adaptive filtered image blocks corresponding to each block to obtain the second filtered image corresponding to the current frame.
  • this embodiment of the present application also provides an encoder, including:
  • a second memory 25 and a second processor 26; the second memory 25 stores a computer program that can be run on the second processor 26, and the second processor 26 implements the filtering method corresponding to the encoder when executing the program.
  • Embodiments of the present application provide a computer-readable storage medium on which a computer program is stored.
  • when the computer program is executed by a first processor, the filtering method corresponding to the decoder is implemented; or, when the computer program is executed by a second processor, the filtering method corresponding to the encoder is implemented.
  • Each component in the embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above integrated units can be implemented in the form of hardware or software function modules.
  • if the integrated unit is implemented in the form of a software function module and is not sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • based on this understanding, the technical solution of this embodiment, or the part that contributes to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes a number of instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to execute all or part of the steps of the method described in this embodiment.
  • the aforementioned computer-readable storage media include: ferromagnetic random access memory (FRAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory, magnetic surface memory, optical discs, compact disc read-only memory (CD-ROM), and other media that can store program code; the embodiments of the present application are not limited thereto.
  • Embodiments of the present application provide a filtering method, a decoder, an encoder, and a computer-readable storage medium.
  • the frame-level scaling factor parsed from the code stream by the decoder is determined from the first frame-level scaling factor and the second frame-level scaling factor corresponding to the current frame.
  • the first frame-level scaling factor is obtained by screening the block-level scaling factors of the current frame and does not include the block-level scaling factors corresponding to difference blocks, thereby reducing the error that the block-level scaling factors of difference blocks introduce into the calculation of the frame-level scaling factor.
  • the calculated first frame-level scaling factor can therefore more accurately represent the magnitude of correction required by most blocks in the current frame, improving the accuracy of the first frame-level scaling factor.
  • since the frame-level scaling factor is determined by comparing the first frame-level scaling factor with the second frame-level scaling factor, the distortion correction performance of the frame-level scaling factor is further improved.
  • using the frame-level scaling factor to filter the current block improves the accuracy of the corrected image block, thereby improving decoding performance and decoding accuracy.
  • the encoder performs image reconstruction and filtering based on the initial residual of each block in the current frame, and determines the second reconstructed image block and the second filtered image block corresponding to each block; based on the second reconstructed image block and the second filtered image block, scaling factor calculation is performed within the range of the image block, and the block-level scaling factor corresponding to each block is determined.
  • the block-level scaling factors corresponding to each block in the current frame are screened to determine the first frame-level scaling factor, so that the first frame-level scaling factor does not include the scaling factors corresponding to difference blocks, thereby reducing the impact of the scaling factors of difference blocks in the current frame on the calculation of the frame-level scaling factor for the entire image and improving the accuracy of the first frame-level scaling factor.
  • moreover, since the frame-level scaling factor is determined by comparing the two frame-level scaling factors, the performance of the frame-level scaling factor in correcting distortion is improved, thereby improving encoding performance and ultimately encoding accuracy.
  • the screened block-level scaling factors are fused to obtain the first frame-level scaling factor, so that no extra coding bits need to be consumed when the first frame-level scaling factor is encoded into the code stream for transmission; encoding performance and accuracy are improved while coding efficiency is ensured.
  • one round of screening is performed using the preset upper limit value and the preset lower limit value, and the N candidate block-level scaling factors remaining after that screening are screened again using the maximum scaling factor and the minimum scaling factor. This further reduces the impact of the differences of a few blocks on the calculation of the first frame-level scaling factor, improves the accuracy of the first frame-level scaling factor, and further improves coding accuracy.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Embodiments of the present application provide a filtering method, a decoder, an encoder, and a computer-readable storage medium, which can improve coding and decoding accuracy. The method includes: parsing a code stream to determine a frame-level scaling factor and an initial residual of a current block, wherein the frame-level scaling factor is determined from a first frame-level scaling factor and a second frame-level scaling factor corresponding to a current frame, the first frame-level scaling factor is obtained by screening the block-level scaling factors in the current frame, the first frame-level scaling factor does not include the block-level scaling factor corresponding to a difference block, and the difference block is a block in the current frame that differs from other blocks (S301); performing image block reconstruction and filtering based on the initial residual to determine a first reconstructed image block and a first filtered image block (S302); and performing scaling processing using the frame-level scaling factor, the first reconstructed image block, and the first filtered image block to obtain a corrected image block corresponding to the current block (S303).

Description

A filtering method, decoder, encoder, and computer-readable storage medium
Technical Field
Embodiments of the present application relate to video coding technology, including but not limited to a filtering method, a decoder, an encoder, and a computer-readable storage medium.
Background
Currently, in video coding standards such as Versatile Video Coding (VVC), loop filters are used to improve the subjective and objective quality of reconstructed images. The traditional loop filtering module mainly includes the deblocking filter (DBF), sample adaptive offset (SAO), and adaptive loop filter (ALF). With the development of deep learning, exploration of neural-network-based loop filters has also gradually begun.
For a coded frame, the image-level scaling factor is usually calculated by traversing all pixels in the current image, and video coding is performed at the block level, for example the coding tree unit (CTU), in combination with the image-level scaling factor.
However, the image-level scaling factor reflects the overall scaling behaviour of the image. When the block-level scaling factors of a few CTUs in the coded image differ too much from those of the other CTUs, the calculated image-level scaling factor is affected by them and deviates significantly from the scaling factors of most CTUs. When the image-level scaling factor is then used to encode the remaining CTUs, its error reduces the accuracy of video coding and decoding.
Summary
Embodiments of the present application provide a filtering method, a decoder, an encoder, and a computer-readable storage medium, which can improve coding and decoding accuracy.
In a first aspect, an embodiment of the present application provides a filtering method applied to a decoder, the method including:
parsing a code stream to determine a frame-level scaling factor and an initial residual of a current block; wherein the frame-level scaling factor is determined from a first frame-level scaling factor and a second frame-level scaling factor corresponding to a current frame, and the first frame-level scaling factor is obtained by screening block-level scaling factors in the current frame, wherein the first frame-level scaling factor does not include a block-level scaling factor corresponding to a difference block; the difference block is a block in the current frame that differs from other blocks;
performing image block reconstruction and filtering based on the initial residual to determine a first reconstructed image block and a first filtered image block;
performing scaling processing using the frame-level scaling factor, the first reconstructed image block, and the first filtered image block to obtain a corrected image block corresponding to the current block.
In a second aspect, an embodiment of the present application further provides a filtering method applied to an encoder, the method including:
performing image reconstruction and filtering based on an initial residual of each block in a current frame to determine a second reconstructed image block and a second filtered image block corresponding to each block;
performing scaling factor calculation based on the second reconstructed image block and the second filtered image block to determine a block-level scaling factor corresponding to each block;
screening the block-level scaling factor corresponding to each block of the current frame to determine a first frame-level scaling factor; the first frame-level scaling factor does not include a block-level scaling factor corresponding to a difference block; the difference block is a block in the current frame that differs from other blocks;
performing scaling factor calculation based on a second reconstructed image and a second filtered image corresponding to the current frame to obtain a second frame-level scaling factor corresponding to the current frame; the second reconstructed image is determined from the second reconstructed image block corresponding to each block; the second filtered image is determined from the second filtered image block corresponding to each block;
determining the frame-level scaling factor based on the first frame-level scaling factor and the second frame-level scaling factor.
第三方面,本申请实施例提供了一种解码器,包括:
解析部分,被配置为解析码流,确定帧级伸缩因子和当前块的初始残差;其中,所述帧级伸缩因子是通过当前帧对应的第一帧级伸缩因子与第二帧级伸缩因子确定的,所述第一帧级伸缩因子是对所述当前帧中的块级伸缩因子进行筛选得到的,其中,所述第一帧级伸缩因子不包括差异块对应的块级伸缩因子;所述差异块为所述当前帧中与其他块具有差异性的块;
第一重建与滤波部分,被配置为基于所述初始残差进行图像块重建与滤波,确定第一重建图像块与第一滤波图像块;
第一确定部分,被配置为利用所述帧级伸缩因子、所述第一重建图像块和所述第一滤波图像块进行缩放处理,得到所述当前块对应的修正图像块。
第四方面,本申请实施例提供了一种编码器,包括:
第二重建与滤波部分,被配置为基于当前帧中每一块的初始残差进行图像重建与滤波,确定所述每一块对应的第二重建图像块与第二滤波图像块;
第二确定部分,被配置为基于所述第二重建图像块与所述第二滤波图像块进行伸缩因子计算,确定所述每一块对应的块级伸缩因子;对当前帧对应的所述每一块对应的块级伸缩因子进行筛选,确定第一帧级伸缩因子;所述第一帧级伸缩因子不包括差异块对应的块级伸缩因子;所述差异块为所述当前帧中与其他块具有差异性的块;基于所述当前帧对应的第二重建图像与第二滤波图像进行伸缩因子计算,得到所述当前帧对应的第二帧级伸缩因子;所述第二重建图像通过每一块对应的第二重建图像块确定;所述第二滤波图像通过每一块对应的第二滤波图像块确定;基于所述第一帧级伸缩因子与所述第二帧级伸缩因子,确定所述帧级伸缩因子。
第五方面,本申请实施例还提供了一种解码器,包括:
第一存储器和第一处理器;
所述第一存储器存储有可在第一处理器上运行的计算机程序,所述第一处理器执行所述程序时实现解码器的所述滤波方法。
第六方面,本申请实施例还提供了一种编码器,包括:
第二存储器和第二处理器;
所述第二存储器存储有可在第二处理器上运行的计算机程序，所述第二处理器执行所述程序时实现编码器的所述滤波方法。
本申请实施例提供了一种计算机可读存储介质，其上存储有计算机程序，该计算机程序被第一处理器执行时，实现解码器的所述滤波方法；或者，该计算机程序被第二处理器执行时，实现编码器的所述滤波方法。
本申请实施例提供了一种滤波方法、解码器、编码器及计算机可读存储介质，解码器端从码流中解析出的帧级伸缩因子是通过当前帧对应的第一帧级伸缩因子与第二帧级伸缩因子确定的。其中，第一帧级伸缩因子是对当前帧的块级伸缩因子进行筛选得到的，不包含差异块对应的块级伸缩因子，从而减小了差异块对应的块级伸缩因子对计算整个帧级伸缩因子造成的误差，使得计算得到的第一帧级伸缩因子能够更准确地表征当前帧中大部分块所需修正的幅度，提高了第一帧级伸缩因子的准确性。并且，由于帧级伸缩因子是通过两种帧级伸缩因子的对比，在第一帧级伸缩因子和第二帧级伸缩因子中确定的，进一步提高了帧级伸缩因子的失真修正性能，因此提升了编码器端的编码性能和编码准确性。如此，在解码器端利用帧级伸缩因子，对通过初始残差进行图像块重建与滤波得到的第一重建图像块和第一滤波图像块进行缩放处理，以实现对当前帧中当前块的解码滤波，得到当前块对应的修正图像块，能够提高修正图像块的准确性，进而提高解码性能与解码准确性。
附图说明
图1为提供的一种编码框架的应用示意图;
图2为提供的另一种编码框架的应用示意图;
图3为本申请实施例提供的多层卷积网络的环路滤波器对输入的重建图像进行滤波优化的网络结构图;
图4为本申请实施例提供的基于多层残差网络的环路滤波器的网络结构图;
图5为本申请实施例提供的示例性的编码单元的划分示意图;
图6A为本申请实施例提供的块级伸缩因子的一种分布示意图;
图6B为本申请实施例提供的块级伸缩因子的另一种分布示意图;
图7A为本申请实施例提供的一种视频编码系统的详细框架示意图;
图7B为本申请实施例提供的一种视频解码系统的详细框架示意图;
图8为本申请实施例提供的滤波方法的一种可选的流程示意图;
图9为本申请实施例提供的滤波方法的一种可选的流程示意图;
图10为本申请实施例提供的滤波方法的一种可选的流程示意图;
图11为本申请实施例提供的滤波方法的一种可选的流程示意图;
图12为本申请实施例提供的滤波方法的一种可选的流程示意图;
图13为本申请实施例提供的滤波方法的一种可选的流程示意图;
图14为本申请实施例提供的滤波方法的一种可选的流程示意图;
图15为本申请实施例提供的示例性的一种编码框架示意图;
图16为本申请实施例提供的滤波方法的一种可选的流程示意图;
图17为本申请实施例提供的一种解码器的结构示意图一;
图18为本申请实施例提供的一种解码器的结构示意图二;
图19为本申请实施例提供的一种编码器的结构示意图一;
图20为本申请实施例提供的一种编码器的结构示意图二。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述。可以理解的是,此处所描述的具体实施例仅仅用于解释相关申请,而非对该申请的限定。另外还需要说明的是,为了便于描述,附图中仅示出了与有关申请相关的部分。
需要说明的是,说明书通篇中提到的“第一”、“第二”、“第三”等,仅仅是为了区分不同的特征,不具有限定优先级、先后顺序、大小关系等功能。
对本申请实施例中涉及的名词和术语进行说明,本申请实施例中涉及的名词和术语适用于如下的解释:
新一代视频编码标准H.266/多功能视频编码(Versatile Video Coding,VVC)
VVC的参考软件测试平台(VVC Test Model,VTM)
音视频编码标准(Audio Video coding Standard,AVS)
AVS的高性能测试模型(High-Performance Model,HPM)
AVS的高性能-模块化智能编码测试模型(High Performance-Modular Artificial Intelligence Model,HPM-ModAI)
基于残差神经网络的环路滤波器(Convolutional Neural Network based in-Loop Filter,CNNLF)
去块滤波器(DeBlocking Filter,DBF)
样值自适应补偿(Sample adaptive Offset,SAO)
自适应修正滤波器(Adaptive loop filter,ALF)
量化参数(Quantization Parameter,QP)
编码单元(Coding Unit,CU)
编码树单元(Coding Tree Unit,CTU)
均方误差(Mean Squared Error,MSE)
平均绝对误差(MAE,Mean Absolute Error)
可以理解,数字视频压缩技术主要是将庞大的数字影像视频数据进行压缩,以便于传输以及存储等。随着互联网视频的激增以及人们对视频清晰度的要求越来越高,尽管已有的数字视频压缩标准能够节省不少视频数据,但目前仍然需要追求更好的数字视频压缩技术,以减少数字视频传输的带宽和流量压力。
在数字视频编码过程中,编码器对不同颜色格式的原始视频序列读取不相等的像素,其中,包含亮度分量和色度分量,即编码器读取一副黑白或者彩色图像。然后将该图像进行划分成块,将块数据交由编码器进行编码,如今编码器通常为混合框架编码模式,一般可以包含帧内预测与帧间预测、变换/量化、反量化/逆变换、环路滤波及熵编码等操作,处理流程具体可参考图1所示。这里,帧内预测只参考同一帧图像的信息,预测当前划分块内的像素信息,用于消除空间冗余;帧间预测可以包括运动估计和运动补偿,其可参考不同帧的图像信息,利用运动估计搜索最匹配当前划分块的运动矢量信息,用于消除时间冗余;变换将预测后的图像块转换到频率域,能量重新分布,结合量化可以将人眼不敏感的信息去除,用于消除视觉冗余;熵编码可以根据当前上下文模型以及二进制码流的概率信息消除字符冗余;环路滤波模块则主要对反变换与反量化后的像素进行处理,弥补失真信息,为后续编码像素提供更好的参考。
对于AVS3而言,在环路滤波部分,传统环路滤波模块主要包含去块滤波器(DeBlocking Filter, DBF)、样值自适应补偿滤波器(Sample adaptive Offset,SAO)和自适应修正滤波器(Adaptive loop filter,ALF)。在HPM-ModAI的应用中,还采用了基于残差神经网络的环路滤波器(Neural Network based Loop Filter,CNNLF)作为智能环路滤波模块的基线方案,并设置于SAO滤波和ALF滤波之间,具体详见图2所示。在编码测试时,按照智能编码通用测试条件,对于全帧内(All Intra)配置,打开ALF,关闭DBF和SAO;对于随机接入(Random Access)和低延迟(Low Delay)配置,打开I帧的DBF,打开ALF,关闭SAO。
在一些实施例中,可进行滤波处理的场景可以是基于AVS的参考软件测试平台HPM或基于多功能视频编码(Versatile Video Coding,VVC)的VVC参考软件测试平台(VVC TEST MODEL,VTM),本申请实施例不作限制。
随着深度学习技术的发展,对基于神经网络来实现上述环路滤波模块的探索工作也逐渐展开。神经网络环路滤波工具通常包括多层卷积网络的环路滤波器以及多层残差网络的环路滤波器,在一些实施例中,多层卷积网络的结构的环路滤波器可以是如图3所示的网络模型。该网络模型包含12个隐藏层和1个3x3卷积层。每个隐藏层由一个3x3卷积层和一个激活层(Leaky ReLU)组成,每个卷积层包含96个通道。网络模型的输入为包含4个亮度子块(Y)和2个色度块(U、V)的重建图像1(NxN尺寸)。网络模型通过12层堆叠的隐藏网络层,对输入的重建图像1进行滤波优化,得到滤波图像。再根据图像级伸缩因子(Scaling Factor,SF),对输入的重建图像1与滤波图像之间的残差进行修正,根据修正后的残差和重建图像得到网络最终的输出图像2(NxN尺寸),以使最终的输出图像2更加接近于原始图像,带来更好的编码性能。
在一些实施例中,多层残差网络的环路滤波器可以是如图4所示的网络模型。该网络模型的输入除了重建图像rec_yuv之外,还包括块划分信息par_yuv、预测信息pred_yuv、以及QP信息。这里,QP信息可以包括基础QP(Base QP)信息和分片QP(Slice QP)信息。网络模型对上述信息进行结合(cat),通过多层残差网络对结合后的信息进行滤波,得到滤波图像output_yuv。网络模型同样根据图像级伸缩因子,对输入的重建图像rec_yuv与滤波图像output_yuv之间的残差进行修正,根据修正后的残差和重建图像得到网络最终的输出图像。
可以看出,上述图3与图4中,无论是多层卷积网络的环路滤波器,还是多层残差网络的环路滤波器,仅在得到滤波图像的过程上存在网络结构和处理方法的差异,而在计算伸缩因子时,都是通过直接遍历重建图像与滤波图像中所有对应的像素点,来计算得到图像级的伸缩因子。
然而,在当前的混合编解码框架的视频编码过程中,是以块为单位进行编码的。编码器首先读取图像信息,将图像划分成若干个编码树单元(Coding Tree Unit,CTU),而一个编码树单元又可以继续划分成若干个编码单元(CU),这些编码单元可以为矩形块也可以为方形块,具体关系可以参考图5所示。编码器一般基于块级例如CTU单元或CU单元来进行编码。基于目前的块级编码过程,如果能使用块级的伸缩因子,理论上修正失真的性能是更好的。然而更多块级伸缩因子的使用需要编码更多的比特数,从而降低码率性能。综合考虑码率和失真的整体性能,基于神经网络的环路滤波工具中,大多使用了图像级的伸缩因子。
基于上述内容,图像级的伸缩因子在导出时,是通过直接遍历图像内的所有像素点得到的。可以看出,在图像中的每个CTU单元或CU单元进行编码时,如果图像中一些CTU单元的块级伸缩因子与其他CTU单元的块级伸缩因子差距过大,会使得图像级伸缩因子的计算出现较大的误差。
示例性地，如图6A和图6B所示，一张图像被划分为4x2的布局，共有8个CTU块单元。通过遍历各个CTU单元可以得到每个CTU单元的块级伸缩因子，如图6A和图6B所示。通过遍历整幅图像中的每个像素可以得到图像级的伸缩因子。这里，图6A的图像级伸缩因子为0.26，图6B的图像级伸缩因子为0.18。
可以看出，图6A中，位于右上角的块级伸缩因子0.9与图6A中其他块级伸缩因子的数值差距较大。这样，尽管图6A和图6B中除右上角之外的其他CTU单元的块级伸缩因子的数值都相同，图6A中计算得到的图像级的伸缩因子仍然受到右上角CTU单元的影响而放大，相比图6B放大为约1.44倍（(0.26/0.18)×100%≈144%）。因此，在计算图像级伸缩因子时，可能由于少数块的差异性或特殊性，使得计算出的图像级伸缩因子出现较大误差，进而降低了根据该图像级伸缩因子对整幅图像中大部分块进行失真修正的性能，降低了编解码准确性。
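少数差异块对图像级伸缩因子的拉动效应，可以用下面的简单数值计算来体会。这里假设图像级伸缩因子近似为各块级伸缩因子的平均值，且具体数值均为假设值，仅作示意，并非标准中实际的伸缩因子导出方式：

```python
# 假设值：7 个普通块的块级伸缩因子均为 0.18，1 个差异块为 0.9（仅作示意）
normal_blocks = [0.18] * 7
with_outlier = normal_blocks + [0.9]

# 无差异块时的近似图像级伸缩因子
mean_without = sum(normal_blocks) / len(normal_blocks)
# 含差异块时被拉高的近似图像级伸缩因子
mean_with = sum(with_outlier) / len(with_outlier)

print(round(mean_without, 2))              # 0.18
print(round(mean_with, 2))                 # 0.27
print(round(mean_with / mean_without, 2))  # 1.5：单个差异块就把均值放大了约 50%
```

可以看到，仅一个差异块就会使整幅图像导出的伸缩因子明显偏离其余大多数块的真实修正幅度。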
本申请实施例提供一种滤波方法、解码器、编码器及计算机可读存储介质,能够提高视频编解码的准确性。下面将结合附图对本申请各实施例进行详细说明。
参见图7A，其示出了本申请实施例提供的一种视频编码系统的详细框架示意图。如图7A所示，该视频编码系统100包括变换与量化单元101、帧内估计单元102、帧内预测单元103、运动补偿单元104、运动估计单元105、反变换与反量化单元106、滤波器控制分析单元107、滤波单元108、编码单元109和解码图像缓存单元110等，其中，滤波单元108可以实现DBF滤波/SAO滤波/ALF滤波，编码单元109可以实现头信息编码及基于上下文的自适应二进制算术编码(Context-based Adaptive Binary Arithmetic Coding，CABAC)。针对输入的原始视频信号，通过编码树单元(Coding Tree Unit，CTU)的划分可以得到一个视频编码块，然后对经过帧内或帧间预测后得到的残差像素信息通过变换与量化单元101对该视频编码块进行变换，包括将残差信息从像素域变换到变换域，并对所得的变换系数进行量化，用以进一步减少比特率；帧内估计单元102和帧内预测单元103是用于对该视频编码块进行帧内预测；明确地说，帧内估计单元102和帧内预测单元103用于确定待用以编码该视频编码块的帧内预测模式；运动补偿单元104和运动估计单元105用于执行所接收的视频编码块相对于一或多个参考帧中的一或多个块的帧间预测编码以提供时间预测信息；由运动估计单元105执行的运动估计为产生运动向量的过程，所述运动向量可以估计该视频编码块的运动，然后由运动补偿单元104基于由运动估计单元105所确定的运动向量执行运动补偿；在确定帧内预测模式之后，帧内预测单元103还用于将所选择的帧内预测数据提供到编码单元109，而且运动估计单元105将所计算确定的运动向量数据也发送到编码单元109；此外，反变换与反量化单元106是用于该视频编码块的重构建，在像素域中重构建残差块，该重构建残差块通过滤波器控制分析单元107和滤波单元108去除方块效应伪影，然后将该重构残差块添加到解码图像缓存单元110的帧中的一个预测性块，用以产生经重构建的视频编码块；编码单元109是用于编码各种编码参数及量化后的变换系数，在基于CABAC的编码算法中，上下文内容可基于相邻编码块，可用于编码指示所确定的帧内预测模式的信息，输出该视频信号的码流；而解码图像缓存单元110是用于存放重构建的视频编码块，用于预测参考。随着视频图像编码的进行，会不断生成新的重构建的视频编码块，这些重构建的视频编码块都会被存放在解码图像缓存单元110中。
参见图7B,其示出了本申请实施例提供的一种视频解码系统的详细框架示意图。如图7B所示,该视频解码系统200包括解码单元201、反变换与反量化单元202、帧内预测单元203、运动补偿单元204、滤波单元205和解码图像缓存单元206等,其中,解码单元201可以实现头信息解码以及CABAC解码,滤波单元205可以实现DBF滤波/SAO滤波/ALF滤波。输入的视频信号经过图7A的编码处理之后,输出该视频信号的码流;该码流输入视频解码系统200中,首先经过解码单元201,用于得到解码后的变换系数;针对该变换系数通过反变换与反量化单元202进行处理,以便在像素域中产生残差块;帧内预测单元203可用于基于所确定的帧内预测模式和来自当前帧或图片的先前经解码块的数据而产生当前视频解码块的预测数据;运动补偿单元204是通过剖析运动向量和其他关联语法元素来确定用于视频解码块的预测信息,并使用该预测信息以产生正被解码的视频解码块的预测性块;通过对来自反变换与反量化单元202的残差块与由帧内预测单元203或运动补偿单元204产生的对应预测性块进行求和,而形成解码的视频块;该解码的视频信号通过滤波单元205以便去除方块效应伪影,可以改善视频质量;然后将经解码的视频块存储于解码图像缓存单元206中,解码图像缓存单元206存储用于后续帧内预测或运动补偿的参考图像,同时也用于视频信号的输出,即得到了所恢复的原始视频信号。
需要说明的是,本申请实施例提供的方法,可以应用在如图7A所示的滤波单元108部分(用黑色加粗方框表示)。也就是说,本申请实施例中的方法,可以应用于视频编码系统(简称为“编码器”)。还需要说明的是,当本申请实施例应用于编码器时,“当前块”具体是指视频图像中的当前待编码的块(也可以简称为“编码块”)。
基于此,在进行滤波方法时对当前块实行滤波处理,主要作用于视频编码系统1的滤波单元108。通过滤波单元108实现本申请实施例提供的滤波方法。
本申请实施例提供一种滤波方法,该方法应用于视频编码设备,即编码器。该方法所实现的功能可以通过视频编码设备中的第二处理器调用程序代码来实现,当然程序代码可以保存在计算机可读存储介质中,可见,该视频编码设备至少包括第二处理器和第二存储介质。其中,当前解码块和当前编码块下述均用当前块来表示。
图8为本申请实施例提供的滤波方法的一种可选的流程示意图,应用于编码器,该方法包括:
S101、基于当前帧中每一块的初始残差进行图像重建与滤波,确定所述每一块对应的第二重建图像块与第二滤波图像块。
S101中，对于编码器来说，当前帧为原始视频帧。编码器将输入的原始视频帧划分为至少一个视频编码块，通过每个块级编码过程，对至少一个视频编码块中的每一块进行编码。其中，对于当前帧中的当前块，编码器通过帧内或帧间预测，得到当前块的残差像素信息，作为初始残差。示例性地，对于帧内预测，编码器参考当前帧的相邻块图像信息对当前块进行预测，得到预测块；将预测块与当前块也即原始图像块进行残差计算，得到初始残差。
编码器对当前帧中当前块的初始残差进行图像重建与滤波,确定当前块对应的第二重建图像块与第二滤波图像块;并继续对当前帧中的下一块进行图像块重建与滤波,直至对当前帧处理完成,得到每一块对应的第二重建图像块与第二滤波图像块。
在一些实施例中,编码器对当前帧中的当前块的初始残差进行图像重建与滤波,得到当前块对应的第二重建图像块与第二滤波图像块的过程可以包括:
编码器对当前块的初始残差进行图像重建,得到第二重建图像块;对第二重建图像块进行滤波,得到第二滤波图像块。
在一些实施例中,编码器对当前块的初始残差进行图像重建的过程可以包括:对当前块的初始残差进行图像重建,得到第二初始重建图像块;对第二初始重建图像块进行去块滤波,得到第二去块图像块;对第二去块图像块进行样值自适应补偿滤波,得到第二重建图像块。编码器可以对第二重建图像块进行滤波,得到当前块对应第二滤波图像块。
示例性地,在当前块级编码过程中,编码器对当前块的初始残差进行图像重建,得到第二初始重建图像块,将第二初始重建图像块作为编码器中滤波单元的输入,通过DBF滤波模块,得到第二去块图像块;对第二去块图像块进行SAO滤波,得到第二重建图像块。将第二重建图像块作为滤波单元中环路滤波模块的输入,利用基于多层卷积神经网络的环路滤波模块或基于多层残差神经网络的环路滤波模块(CNNLF),对第二重建图像块进行环路滤波,得到环路滤波模块输出的第二滤波图像块。
可理解,编码器对当前帧中每一块的初始残差进行图像重建与滤波的过程与上述对当前块进行图像块重建与滤波的过程描述一致,从而得到当前帧中每一块对应的第二重建图像块与第二滤波图像块。
S102、基于第二重建图像块与第二滤波图像块进行伸缩因子计算,确定每一块对应的块级伸缩因子。
S102中,编码器基于当前帧中每一块对应的第二重建图像块与第二滤波图像块,在每一块的像素范围内进行伸缩因子计算,确定每一块对应的块级伸缩因子。
在一些实施例中,编码器对块级伸缩因子的计算过程包括:像素残差计算过程、块级残差计算过程与块级残差拟合过程。在像素残差计算过程中,对于当前帧中的当前块,编码器可以在当前块对应的块级像素范围内,根据块级像素范围中每个像素位置分别在当前块(也即原始图像块)、第二重建图像块与第二滤波图像块中对应的各个像素进行残差计算,得到每个像素位置对应的,表征原始像素与重建像素之间差异的原始像素残差,以及表征滤波像素与重建像素之间差异的滤波像素残差。在块级残差计算过程中,通过对每一块的像素范围内每个像素位置的原始像素残差与滤波像素残差进行融合处理,得到至少一种块级残差。示例性地,对每一块的像素范围内每个像素位置的原始像素残差与滤波像素残差进行遍历和加权等处理,得到块级原始像素残差、块级滤波像素残差、第一块级加权残差与第二块级加权残差。这里,第一块级加权残差与第二块级加权残差通过对当前块像素范围内每个像素位置的原始像素残差与滤波像素残差进行不同的加权方法得到。
在块级残差拟合过程中,编码器可以对块级残差计算过程计算得到的至少一种块级残差进行拟合,得到块级伸缩因子。示例性地,通过最小二乘法,结合预设取值范围(示例性地,包括预设上限值与预设下限值)来进行拟合,得到块级伸缩因子。
需要说明的是,S102中,编码器计算每一块对应的块级伸缩因子的过程,是在当前帧中对该块的块级编码过程实现的;具体地,是在对该块的块级编码过程的滤波过程中实现的。在当前块级编码过程中,计算出当前帧中的当前块对应的块级伸缩因子;继续进行下一块级编码过程,计算出当前帧中的下一块对应的块级伸缩因子,直至对当前帧处理完成,得到每一块对应的块级伸缩因子。
在一些实施例中,编码器基于第二重建图像块与第二滤波图像块进行伸缩因子计算,确定每一块对应的块级伸缩因子的过程可以如公式(1)至(7)所示,如下:
orgResi(x_i, y_i) = [org(x_i, y_i) - rec(x_i, y_i)]      (1)
公式(1)中，(x_i, y_i)表示每一块中的第i个像素位置，也即第二重建图像块与第二滤波图像块中第i个像素位置的水平坐标值和垂直坐标值；org(x_i, y_i)表示第i个像素位置在每一块对应的像素，为原始像素；rec(x_i, y_i)表示第i个像素位置在第二重建图像块中对应的像素，为第二重建像素。编码器通过公式(1)，根据每一块中每个像素位置对应的原始像素，确定与第二重建图像块的第二重建像素之间的原始像素残差orgResi(x_i, y_i)。也就是说，编码器计算每个像素位置对应的原始像素与相同像素位置对应的第二重建像素之间的像素差，得到原始像素残差。
cnnResi(x_i, y_i) = [cnn(x_i, y_i) - rec(x_i, y_i)]      (2)
公式(2)中，cnn(x_i, y_i)表示第i个像素位置在第二滤波图像块中对应的像素，为第二滤波像素。编码器通过公式(2)，根据第二滤波图像块中每个像素位置对应的第二滤波像素，确定与第二重建像素rec(x_i, y_i)之间的滤波像素残差cnnResi(x_i, y_i)。
S102中,编码器可以基于上述得到的原始像素残差与滤波像素残差,确定每一块对应的块级原始像素残差与块级滤波像素残差,如公式(3)和公式(4)所示,如下:
sum_orgResi = Σ_{i=0}^{W_k×H_k-1} orgResi(x_i, y_i)      (3)
sum_cnnResi = Σ_{i=0}^{W_k×H_k-1} cnnResi(x_i, y_i)      (4)
公式(3)和公式(4)中，W_k表示每一块（以第k块为例）的宽度值，H_k表示每一块的高度值。编码器可以通过公式(3)，遍历每一块中每个像素位置上的原始像素残差orgResi(x_i, y_i)，确定每一块对应的块级原始像素残差sum_orgResi；并通过公式(4)，遍历每个像素位置上的滤波像素残差cnnResi(x_i, y_i)，确定每一块对应的块级滤波像素残差sum_cnnResi。
在一些实施例中,编码器基于块级原始像素残差与块级滤波像素残差进行加权处理,得到每一块对应的块级伸缩因子。
这里,编码器可以通过多种可选的加权和统计算法,基于块级原始像素残差与块级滤波像素残差进行处理,得到每一块对应的块级伸缩因子。具体的根据实际情况进行选择,本申请实施例不作限定。示例性地,可以通过公式(5)-公式(7)来实现加权处理,得到块级伸缩因子,如下:
sum_cnnMulti = Σ_{i=0}^{W_k×H_k-1} cnnResi(x_i, y_i) × cnnResi(x_i, y_i)      (5)
编码器通过公式(5)，对每一块中每个像素位置对应的滤波像素残差cnnResi(x_i, y_i)进行加权与融合，确定每一块对应的第一块级加权残差sum_cnnMulti。
sum_crossMulti = Σ_{i=0}^{W_k×H_k-1} orgResi(x_i, y_i) × cnnResi(x_i, y_i)      (6)
编码器通过公式(6)，对每一块中每个像素位置对应的原始像素残差orgResi(x_i, y_i)和滤波像素残差cnnResi(x_i, y_i)进行加权与融合，确定每一块对应的第二块级加权残差sum_crossMulti。
BSF = Clip3(SF_bottom, SF_up, (W_k×H_k×sum_crossMulti - sum_orgResi×sum_cnnResi) / (W_k×H_k×sum_cnnMulti - sum_cnnResi×sum_cnnResi))      (7)
公式(7)中，SF_bottom为预设下限值，SF_up为预设上限值。编码器通过公式(7)，结合预设上限值与预设下限值，对第一块级加权残差sum_cnnMulti、第二块级加权残差sum_crossMulti、块级原始像素残差sum_orgResi与块级滤波像素残差sum_cnnResi进行最小二乘处理，得到每一块对应的块级伸缩因子BSF。
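公式(1)至(7)的块级伸缩因子计算过程可以用如下 Python 草稿示意。这里按含截距的最小二乘拟合理解公式(7)中对四个统计量的处理，函数名、预设上下限数值以及分母为零时的退化处理均为假设，仅作示意，并非标准中的实际实现：

```python
def compute_block_sf(org, rec, cnn, sf_bottom=0.0, sf_up=2.0):
    """按公式(1)-(7)的思路计算一个块的块级伸缩因子（示意实现）。
    org/rec/cnn 为同一块内按相同顺序排列的原始、重建、滤波像素序列。"""
    org_resi = [o - r for o, r in zip(org, rec)]   # 公式(1)：原始像素残差
    cnn_resi = [c - r for c, r in zip(cnn, rec)]   # 公式(2)：滤波像素残差
    n = len(org)                                   # 对应 W_k × H_k
    sum_org = sum(org_resi)                        # 公式(3)：块级原始像素残差
    sum_cnn = sum(cnn_resi)                        # 公式(4)：块级滤波像素残差
    sum_cnn_multi = sum(c * c for c in cnn_resi)   # 公式(5)：第一块级加权残差
    sum_cross = sum(o * c for o, c in zip(org_resi, cnn_resi))  # 公式(6)
    denom = n * sum_cnn_multi - sum_cnn * sum_cnn
    if denom == 0:
        return sf_bottom  # 残差退化时的处理方式为假设
    sf = (n * sum_cross - sum_org * sum_cnn) / denom  # 最小二乘拟合斜率
    return min(sf_up, max(sf_bottom, sf))             # 公式(7)：限制在预设上下限内

# 示例：原始残差恰为滤波残差的 0.5 倍时，块级伸缩因子应为 0.5
print(compute_block_sf([1, 2, 3, 4], [0, 0, 0, 0], [2, 4, 6, 8]))  # 0.5
```

当拟合结果超出[SF_bottom, SF_up]时会被夹取到边界值，这正是后文边界值筛选用来识别差异块的依据。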
需要说明的是,本申请实施例中的每个像素可以包括至少一个颜色分量对应的像素值。示例性地,对于颜色编码方法为YUV或YCbCr格式的图像,至少一个颜色分量可以包括:表征明亮度(Luma)的颜色分量Y,表征色度(Chroma)的一个蓝色色度分量和一个红色色度分量。其中,蓝色色度分量通常使用符号Cb或者U表示,红色色度分量通常使用符号Cr或者V表示;U和V表示为色度用于描述色彩及饱和度。
这样，在一些实施例中，上述编码器基于第二重建图像块与第二滤波图像块进行伸缩因子计算，确定每一块对应的块级伸缩因子的过程中，可以在至少一个颜色分量中的各个颜色分量上分别计算原始像素残差与滤波像素残差，得到各个颜色分量对应的原始像素残差与滤波像素残差，进而在每一块的像素范围内的每个像素位置上，通过对各个颜色分量对应的原始像素残差与滤波像素残差进行遍历与加权统计，得到各个颜色分量对应的块级伸缩因子。
S103、对当前帧对应的每一块对应的块级伸缩因子进行筛选,确定第一帧级伸缩因子;第一帧级伸缩因子不包括差异块对应的块级伸缩因子;差异块为当前帧中与其他块具有差异性的块。
S103中,由于编码器是在每个块级编码过程中,在通过滤波单元对当前帧中的每一块进行滤波处理,直至对当前帧处理完成的;这样,在编码器对当前帧中的每一块全部处理完成,得到每一块对应的块级伸缩因子时,编码器对每一块对应的伸缩因子,也即当前帧中全部的块级伸缩因子进行筛选。编码器根据不同的块级伸缩因子所表征出差异性,在当前帧中确定出与其他块具有差异性的块,进而基于每一块对应的块级伸缩因子中,除差异块对应的块级伸缩因子之外的块级伸缩因子来确定第一帧级伸缩因子。
在一些实施例中,编码器可以通过边界值筛选、最大值最小值筛选等等方法,从每一块对应的块级伸缩因子中确定差异块对应的块级伸缩因子,将每一块对应的块级伸缩因子中,除差异块对应的块级伸缩因子之外的块级伸缩因子作为筛选后的块级伸缩因子,再对筛选后的块级伸缩因子进行融合处理得到第一帧级伸缩因子。
在一些实施例中,对于块级伸缩因子包括各个颜色分量对应的块级伸缩因子的情况,编码器在对筛选后的块级伸缩因子进行融合时,可以在各个颜色分量进行融合,得到各个颜色分量对应的第一帧级伸缩因子。
可以看出,S103中,由于第一帧级伸缩因子是通过对每一块对应的块级伸缩因子进行了筛选,利用不包含差异块对应的块级伸缩因子的筛选后的块级伸缩因子计算得到的,从而减少了少数块的差异性对计算第一帧级伸缩因子的影响,提高了第一帧级伸缩因子的准确性,进而提高了编码准确性。并且,由于将筛选后的块级伸缩因子融合得到第一帧级伸缩因子,在将第一帧级伸缩因子编入码流以进行传输时,无需消耗多余的编码比特数,在保证编码效率不降低的基础上提高了编码性能和准确性。
S104、基于当前帧对应的第二重建图像与第二滤波图像进行伸缩因子计算,得到当前帧对应的第二帧级伸缩因子;第二重建图像通过每一块对应的第二重建图像块确定;第二滤波图像通过每一块对应的第二滤波图像块确定。
S104中,编码器在通过滤波单元,对当前帧中的每一块完成滤波处理时,可以遍历通过每个块级编码过程得到的每一块对应的第二重建图像块,得到当前帧对应的第二重建图像。同样地,通过遍历每一块对应的第二滤波图像块,得到当前帧对应的第二滤波图像,也即编码器侧的滤波单元输出的视频图像信号。
编码器基于当前帧对应的第二重建图像与第二滤波图像,在整个图像(帧级)范围内,通过每个像素位置在当前帧、第二重建图像与第二滤波图像上对应的像素进行伸缩因子计算,得到第二帧级伸缩因子。
在一些实施例中，编码器计算第二帧级伸缩因子的过程与上述计算块级伸缩因子的过程类似，包括：像素残差计算过程、块级残差计算过程与块级残差拟合过程。其中，在像素残差计算过程中，编码器可以根据当前帧中每个像素位置对应的原始像素，确定与第二重建图像中相同像素位置对应的第二重建像素之间的原始像素残差；根据第二滤波图像中每个像素位置对应的第二滤波像素，确定与相同像素位置对应的第二重建像素之间的滤波像素残差。在块级残差计算过程中，对当前帧的像素范围内每个像素位置的原始像素残差与滤波像素残差进行遍历和加权等处理，得到帧级原始像素残差、帧级滤波像素残差、第一帧级加权残差与第二帧级加权残差。这里，第一帧级加权残差与第二帧级加权残差通过对当前帧范围内每个像素位置的原始像素残差与滤波像素残差进行不同的加权方法得到。
在一些实施例中，由于编码器是在对当前帧中的每一块全部处理完成，基于每一块对应的第二重建图像块得到第二重建图像，基于每一块对应的第二滤波图像块得到第二滤波图像，进而基于第二重建图像与第二滤波图像进行第二帧级伸缩因子的计算的，故，编码器也可以直接利用每一块的块级伸缩因子计算过程中，像素残差计算过程所得到的每一块中每个像素位置对应的滤波像素残差和原始像素残差，确定帧级原始像素残差和帧级滤波像素残差，以减少计算工作量。这里，第二重建图像中每个像素位置对应的原始像素残差可以通过遍历每一块对应的第二重建图像块中每个像素位置对应的原始像素残差得到；第二滤波图像中每个像素位置对应的滤波像素残差可以通过遍历第二滤波图像块中每个像素位置对应的滤波像素残差得到。进而，在块级残差拟合过程中，可以通过对第二滤波图像中每个像素位置对应的滤波像素残差进行加权与融合，得到第一帧级加权残差；对第二滤波图像中每个像素位置对应的滤波像素残差，与第二重建图像中每个像素位置对应的原始像素残差进行加权与融合，得到第二帧级加权残差；结合预设上限值与预设下限值，对第一帧级加权残差、第二帧级加权残差、帧级原始像素残差与帧级滤波像素残差进行最小二乘处理，得到第二帧级伸缩因子。
示例性地，第二帧级伸缩因子的计算公式可以与上述公式(1)至(7)类似。需要说明的是，在计算第二帧级伸缩因子时，公式(1)至(7)中的(x_i, y_i)表示当前帧的像素范围内的第i个像素位置，W_k可以为W_m，表示当前帧（以当前视频流序列中的第m帧为例）的宽度值；H_k可以为H_m，表示当前帧的高度值。
本申请实施例中，上述两种第二帧级伸缩因子的计算方法可以根据具体的实际情况进行选择，本申请实施例不作限定。
需要说明的是,在一些实施例中,与S102中在至少一个颜色分量上计算块级伸缩因子的过程类似,第二帧级伸缩因子也可以通过在至少一个颜色分量的各个颜色分量上分别得到各个颜色分量对应的原始像素残差与滤波像素残差,进而在当前帧的像素范围内的每个像素位置上,通过对各个颜色分量对应的原始像素残差与滤波像素残差进行遍历与加权统计,得到各个颜色分量对应的第二帧级伸缩因子。
S105、基于第一帧级伸缩因子与第二帧级伸缩因子,确定帧级伸缩因子。
S105中,第一帧级伸缩因子是通过对每一块对应的伸缩因子进行筛选得到的;第二帧级伸缩因子是通过图像级范围内的像素进行伸缩因子计算得到的。编码器可以基于第一帧级伸缩因子与第二帧级伸缩因子进行修正失真性能的比较,从中确定出修正失真性能更好的伸缩因子,作为最终的帧级伸缩因子。
在一些实施例中,可以通过分别计算第一帧级伸缩因子与第二帧级伸缩因子的失真代价的方法,来进行修正失真性能的比较,也可以利用其他相关指标来衡量和对比第一帧级伸缩因子与第二帧级伸缩因子的修正失真性能。具体的根据实际情况进行选择,本申请实施例不作限定。
在一些实施例中,如图9所示,S105可以通过执行S1051-S1053来实现,如下:
S1051、利用第一帧级伸缩因子,对第二重建图像与第二滤波图像进行缩放处理,得到第一修正图像;利用第二帧级伸缩因子,对第二重建图像与第二滤波图像进行缩放处理,得到第二修正图像。
S1051,编码器可以分别利用第一帧级伸缩因子和第二帧级伸缩因子,对第二重建图像与第二滤波图像进行缩放处理,得到第一帧级伸缩因子对应的第一修正图像,以及第二帧级伸缩因子对应的第二修正图像。
在一些实施例中，基于上述S103中的一些实施例，第一帧级伸缩因子包括各个颜色分量对应的第一帧级伸缩因子。对于利用第一帧级伸缩因子得到第一修正图像的过程，编码器可以对于第二重建图像与第二滤波图像中的每个像素位置，根据各个颜色分量对应的第一帧级伸缩因子，对第二重建图像中每个第二重建像素在各个颜色分量上的第一重建像素，以及第二滤波图像中每个第二滤波像素在各个颜色分量上的第二滤波像素进行对应的缩放与叠加处理，得到每个像素位置对应的、包含各个颜色分量的第一修正像素；遍历每个像素位置对应的第一修正像素，得到第一修正图像。
在一些实施例中,编码器利用第一帧级伸缩因子,对第二重建图像与第二滤波图像进行缩放处理,得到第一修正图像的过程可以通过公式(8)来实现,如下:
output_1(x_i, y_i) = rec(x_i, y_i) + RSF × [cnn(x_i, y_i) - rec(x_i, y_i)]      (8)
公式(8)中，RSF为第一帧级伸缩因子；(x_i, y_i)表示当前帧的像素范围内的第i个像素位置；cnn(x_i, y_i)表示第i个像素位置在第二滤波图像中对应的第二滤波像素；rec(x_i, y_i)表示第i个像素位置在第二重建图像中对应的第二重建像素。编码器通过公式(8)，对当前帧的像素范围内的每个像素位置，确定第二重建像素和第二滤波像素之间的第二像素残差cnn(x_i, y_i)-rec(x_i, y_i)；并利用第一帧级伸缩因子RSF，对第二像素残差进行缩放处理，并与第二重建像素rec(x_i, y_i)进行叠加，得到每个像素位置对应的第一修正像素output_1(x_i, y_i)，进而通过遍历每个像素位置对应的第一修正像素，得到第一修正图像。
在一些实施例中,编码器利用第一帧级伸缩因子,对第二重建图像与第二滤波图像进行缩放处理,得到第一修正图像的过程还可以通过公式(9)来实现,如下:
output_1(x_i, y_i) = (1 - RSF) × rec(x_i, y_i) + RSF × cnn(x_i, y_i)      (9)
公式(9)中，1为预设系数。示例性地，对于伸缩因子大于0且小于1的情况，预设系数取值为1。编码器通过公式(9)，利用第一帧级伸缩因子与预设系数(1)之间的差值(1-RSF)，对第二重建像素rec(x_i, y_i)进行缩放处理，得到第二重建修正像素(1-RSF)×rec(x_i, y_i)；利用第一帧级伸缩因子RSF，对第二滤波像素cnn(x_i, y_i)进行缩放处理，得到第二滤波修正像素RSF×cnn(x_i, y_i)；结合第二重建修正像素与第二滤波修正像素，得到第一修正像素output_1(x_i, y_i)，进而通过遍历每个像素位置对应的第一修正像素，得到第一修正图像。
这里,编码器利用第二帧级伸缩因子,对第二重建图像与第二滤波图像进行缩放处理,得到第二修正图像的过程与上述利用第一帧级伸缩因子得到第一修正图像的过程描述一致,此处不再赘述。
S1052、分别确定第一修正图像和第二修正图像,与当前帧之间的失真代价,得到第一帧级伸缩因子对应的第一失真代价,以及与第二帧级伸缩因子对应的第二失真代价。
S1052中,编码器确定第一修正图像与当前帧之间的失真代价,得到第一帧级伸缩因子对应的第一失真代价;编码器确定第二修正图像与当前帧之间的失真代价,得到第二帧级伸缩因子对应的第二失真代价。
在一些实施例中,编码器通过计算每个像素位置的MSE或MAE,来确定第一失真代价与第二失真代价,也可以通过其他误差计算方法,来得到用于表征修正图像与原始图像之间误差或失真程序的失真代价。具体的根据实际情况进行选择,本申请实施例不作限定。
S1053、通过对比第一失真代价与第二失真代价,从第一帧级伸缩因子与第二帧级伸缩因子中确定帧级伸缩因子。
S1053中,编码器通过对比第一失真代价与第二失真代价,评估第一帧级伸缩因子与第二帧级伸缩因子中的哪一个对应的编码性能更高,从而从第一帧级伸缩因子与第二帧级伸缩因子中确定帧级伸缩因子。
在一些实施例中,若第一失真代价小于或等于第二失真代价,说明第一帧级伸缩因子的失真修正性能高于或等于第二帧级伸缩因子,编码器将第一帧级伸缩因子确定为帧级伸缩因子。若第一失真代价大于第二失真代价,说明第二帧级伸缩因子的失真修正性能高于第一帧级伸缩因子,编码器将第二帧级伸缩因子确定为帧级伸缩因子。
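S1051至S1053的选择过程可以用如下 Python 草稿示意。这里以公式(8)的形式做残差修正，并以 MSE 作为失真代价；函数名与示例数据均为假设，仅作示意：

```python
def correct_image(rec, cnn, sf):
    """按公式(8)的形式，用伸缩因子 sf 对重建像素与滤波像素之间的残差做修正。"""
    return [r + sf * (c - r) for r, c in zip(rec, cnn)]

def mse(a, b):
    """均方误差，作为示意用的失真代价。"""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def select_frame_sf(org, rec, cnn, rsf, sf):
    """对比 RSF 与 SF 各自的失真代价；代价相等时优先取 RSF（对应 D_RSF <= D_SF 的判定）。"""
    d_rsf = mse(correct_image(rec, cnn, rsf), org)
    d_sf = mse(correct_image(rec, cnn, sf), org)
    return rsf if d_rsf <= d_sf else sf

# 示例：真实需要的修正幅度为 0.5，则 RSF=0.5 的失真代价更小，被选为帧级伸缩因子
print(select_frame_sf([1, 2], [0, 0], [2, 4], rsf=0.5, sf=0.8))  # 0.5
```

选中的帧级伸缩因子随后被编码进码流，供解码器在对应的缩放处理中直接使用。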
本申请实施例中,编码器确定出帧级伸缩因子之后,会将帧级伸缩因子编入码流供解码器读取。这样,解码器可以从码流中解析出当前帧以及当前帧对应的帧级伸缩因子,在对当前帧的解码过程中,通过解码器端的滤波单元,根据帧级伸缩因子对解码过程中的重建图像进行残差修正。
在一些实施例中,S101中基于当前帧中每一块的初始残差进行图像重建与滤波,得到每一块对应的第二重建图像块与第二滤波图像块之后,编码器还可以基于每一块对应的第二滤波图像块,也即通过遍历每一块对应的第二滤波图像块,得到当前帧对应的第二滤波图像。
或者，在一些实施例中，上述滤波单元还可以包括ALF滤波器，S101中基于当前帧中每一块的初始残差进行图像重建与滤波，得到每一块对应的第二重建图像块与第二滤波图像块之后，编码器还可以在每一块对应的编码过程中的滤波过程中，对每一块对应的第二滤波图像块进行自适应滤波，得到每一块对应的自适应滤波图像块。其中，对当前块对应的第二滤波图像块进行自适应滤波，得到当前块对应的自适应滤波图像块，继续对当前帧中的下一块对应的第二滤波图像块进行自适应滤波，得到下一块对应的自适应滤波图像块，直至对当前帧处理完毕，得到每一块对应的自适应滤波图像块。这样，编码器可以通过遍历每一块对应的自适应滤波图像块，得到当前帧对应的第二滤波图像。
可以理解的是,本申请实施例中,通过基于当前帧中每一块的初始残差进行图像重建与滤波,确定每一块对应的第二重建图像块与第二滤波图像块;基于第二重建图像块与第二滤波图像块,在图像块范围内进行伸缩因子计算,确定每一块对应的块级伸缩因子。进而,对当前帧中每一块对应的块级伸缩因子进行筛选,确定第一帧级伸缩因子,使得第一帧级伸缩因子中不包含差异块对应的伸缩因子,从而减少了当前帧中差异块对应的伸缩因子对计算整幅图像的帧级伸缩因子的影响,提高了第一帧级伸缩因子的准确性。本申请实施例通过将第一帧级伸缩因子与对图像级范围内计算得到的第二帧级伸缩因子进行对比,从中确定出帧级伸缩因子,提高了帧级伸缩因子修正失真的性能,从而提高了编码性能,最终提高了编码准确性。
在一些实施例中,基于图8或图9,如图10所示,上述S103可以通过执行S201-S202来实现,如下:
S201、对每一块对应的块级伸缩因子进行边界值筛选,得到N个候选的块级伸缩因子。
S201中,N为大于0的正整数。编码器通过边界值筛选的方式,从每一块对应的块级伸缩因子 中确定并舍弃差异块对应的块级伸缩因子,得到N个候选的块级伸缩因子。
在一些实施例中,基于图10,如图11所示,S201可以通过S2011-S2013来实现,如下:
S2011、在每一块对应的块级伸缩因子中,确定出等于预设上限值或预设下限值的块级伸缩因子,作为差异块对应的块级伸缩因子。
S2011中，基于上述计算块级伸缩因子的公式(7)，对于通过公式(7)计算得到的等于预设上限值SF_up或预设下限值SF_bottom的块级伸缩因子，说明这些伸缩因子对应的第二重建图像块需要做修正的幅度达到了能修正的临界值。编码器在每一块对应的块级伸缩因子中，确定出等于预设上限值或预设下限值的块级伸缩因子，作为差异块对应的块级伸缩因子。
S2012、确定差异块对应的块级伸缩因子的数量,与每一块对应的块级伸缩因子的总数量的第一占比。
S2013、若第一占比不超过预设占比阈值,则将每一块对应的块级伸缩因子中,除差异块对应的块级伸缩因子之外的块级伸缩因子,作为N个候选的块级伸缩因子。
S2012和S2013中,编码器统计差异块对应的块级伸缩因子的数量,并计算差异块对应的块级伸缩因子的数量,与每一块对应的块级伸缩因子的总数量,也即与当前帧中全部块级伸缩因子的总数量的占比。若第一占比不超过预设占比阈值,说明差异块的数量占比不高,为了减少差异块对计算全局块伸缩因子的影响,则从每一块对应的块级伸缩因子中舍弃差异块对应的块级伸缩因子,将差异块对应的块级伸缩因子之外的块级伸缩因子,作为N个候选的块级伸缩因子。
在一些实施例中，预设占比阈值可以包括预设上限占比阈值TH_up和第一预设下限占比阈值TH_bottom，具体的根据实际情况进行选择，本申请实施例不作限定。
可以理解的是,通过预设上限值和预设下限值确定出差异块对应的块级伸缩因子,并将每一块对应的块级伸缩因子中,除差异块对应的块级伸缩因子之外的块级伸缩因子作为N个候选的块级伸缩因子,基于N个候选的块级伸缩因子计算得到第一帧级伸缩因子,减少了少数块的差异性对计算第一帧级伸缩因子的影响,提高了第一帧级伸缩因子的准确性,进而提高了编码准确性。
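S2011至S2013的边界值筛选可以用如下 Python 草稿示意。占比超过阈值时的处理方式原文未明确，这里假设保留全部块级伸缩因子；函数名与示例阈值均为假设，仅作示意：

```python
def screen_by_bounds(block_sfs, sf_bottom, sf_up, ratio_th):
    """剔除等于预设上限值或预设下限值的差异块伸缩因子（当其占比不超过阈值时）。"""
    diff = [s for s in block_sfs if s == sf_bottom or s == sf_up]
    ratio = len(diff) / len(block_sfs)  # 第一占比
    if ratio <= ratio_th:
        # 剔除差异块对应的块级伸缩因子，剩余即 N 个候选的块级伸缩因子
        return [s for s in block_sfs if s != sf_bottom and s != sf_up]
    return list(block_sfs)  # 假设：占比过高时不做剔除

print(screen_by_bounds([0.2, 0.3, 1.0, 0.25], sf_bottom=0.0, sf_up=1.0, ratio_th=0.3))
# [0.2, 0.3, 0.25]
```

示例中伸缩因子1.0达到了预设上限值、且占比为25%不超过阈值30%，因此被视为差异块并剔除。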
在一些实施例中,基于图10,如图12所示,S201可以通过S2014-S2017来实现,如下:
S2014、在每一块对应的块级伸缩因子中,确定最大块级伸缩因子与最小块级伸缩因子中的至少一个块级伸缩因子。
S2014中,编码器遍历每一块对应的块级伸缩因子中,从中确定出最大值对应的最大块级伸缩因子,与最小值对应的最小块级伸缩因子中的至少一个,作为至少一个块级伸缩因子。
示例性地,编码器可以确定每一块对应的块级伸缩因子中最大值对应的最大块级伸缩因子,作为至少一个块级伸缩因子;也可以确定每一块对应的块级伸缩因子中最小值对应的最小块级伸缩因子,作为至少一个块级伸缩因子;也可以确定每一块对应的块级伸缩因子中的最大块级伸缩因子与最小块级伸缩因子,作为至少一个块级伸缩因子。
S2015、在每一块对应的块级伸缩因子中,对于除至少一个块级伸缩因子之外的块级伸缩因子进行平均处理,得到第一均值。
S2015中,编码器在每一块对应的块级伸缩因子中,对于除至少一个块级伸缩因子之外的其余块级伸缩因子进行均值计算,得到第一均值。
S2016、若至少一个块级伸缩因子与第一均值的差值大于预设差值阈值,则将至少一个块级伸缩因子,作为差异块对应的块级伸缩因子。
S2016中,编码器计算至少一个块级伸缩因子,也即最大块级伸缩因子与最小块级伸缩因子中的至少一个,与第一均值的差值。若差值大于预设差值阈值,说明最大块级伸缩因子或最小块级伸缩因子与其余块级伸缩因子的差异性较大,故将至少一个块级伸缩因子,作为差异块对应的块级伸缩因子。
示例性地,若最大块级伸缩因子与第一均值的差值大于预设差值阈值,则将最大块级伸缩因子作为差异块对应的块级伸缩因子;若最小块级伸缩因子与第一均值的差值大于预设差值阈值,则将最小块级伸缩因子作为差异块对应的块级伸缩因子;若最大块级伸缩因子与第一均值的差值,以及最小块级伸缩因子与第一均值的差值均大于预设差值阈值,则将最大块级伸缩因子与最小块级伸缩因子均作为差异块对应的块级伸缩因子。
在一些实施例中，预设差值阈值可以包括预设最大差值阈值TH_max和预设最小差值阈值TH_min，具体的根据实际情况进行选择，本申请实施例不作限定。
S2017、将每一块对应的块级伸缩因子中，除差异块对应的块级伸缩因子之外的块级伸缩因子，作为N个候选的块级伸缩因子。
S2017中,编码器从每一块对应的块级伸缩因子中,舍弃由最大值和最小值筛选得到的差异块对应的块级伸缩因子,得到N个候选的块级伸缩因子。
在一些实施例中,最大块级伸缩因子或最小块级伸缩因子的数量可以是多个,如最大值的块级伸缩因子在每一块对应的块级伸缩因子中有多个。故,S2016中将至少一个块级伸缩因子,作为差异块对应的块级伸缩因子之后,还可以计算由最大块级伸缩因子与最小块级伸缩因子中的至少一个确定出的差异块对应的块级伸缩因子的数量,与每一块对应的块级伸缩因子的总数量之间的第二占比;若第二占比不超过预设占比阈值,则将最大块级伸缩因子或最小块级伸缩因子作为差异块对应的块级伸缩因子,将每一块对应的块级伸缩因子中,除差异块对应的块级伸缩因子之外的块级伸缩因子,作为N个候选的块级伸缩因子。
可以理解的是,通过最大伸缩因子和最小伸缩因子确定出差异块对应的块级伸缩因子,并将每一块对应的块级伸缩因子中,除差异块对应的块级伸缩因子之外的块级伸缩因子作为N个候选的块级伸缩因子,基于N个候选的块级伸缩因子计算得到第一帧级伸缩因子,减少了少数块的差异性对计算第一帧级伸缩因子的影响,提高了第一帧级伸缩因子的准确性,进而提高了编码准确性。
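S2014至S2017的最大值/最小值筛选可以用如下 Python 草稿示意。为简化起见，这里假设最大、最小块级伸缩因子各只出现一次；函数名与示例阈值均为假设，仅作示意：

```python
def screen_by_extremes(block_sfs, diff_th):
    """若最大或最小块级伸缩因子与其余因子均值（第一均值）的差超过阈值，
    则将其视为差异块对应的伸缩因子并剔除。"""
    drop = set()
    for ext in {max(block_sfs), min(block_sfs)}:
        others = [s for s in block_sfs if s != ext]  # 假设极值唯一
        if not others:
            continue  # 所有因子相等时无可比较对象
        mean_others = sum(others) / len(others)      # 第一均值
        if abs(ext - mean_others) > diff_th:
            drop.add(ext)
    return [s for s in block_sfs if s not in drop]

print(screen_by_extremes([0.2, 0.25, 0.3, 0.9], diff_th=0.5))  # [0.2, 0.25, 0.3]
```

示例中最大值0.9与其余因子均值0.25的差为0.65，超过阈值0.5而被剔除；最小值0.2与其余因子均值的差未超阈值，因此被保留。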
S202、基于N个候选的块级伸缩因子进行平均处理,得到第一帧级伸缩因子。
S202中,编码器基于N个候选的块级伸缩因子进行平均处理,将N个候选的块级伸缩因子融合为第一帧级伸缩因子。
在一些实施例中,基于图11和图12,S202可以如图13所示,包括S2021,如下:
S2021、对N个候选的块级伸缩因子进行平均处理,得到第一帧级伸缩因子。
S2021中,编码器可以对根据上述预设上限值和预设下限值筛选得到的N个候选的块级伸缩因子进行平均化处理,得到第一帧级伸缩因子;或者,对根据上述最大块级伸缩因子与最小块级伸缩因子筛选得到的N个候选的块级伸缩因子进行平均化处理,得到第一帧级伸缩因子。
在一些实施例中,基于图11,也即基于根据上述预设上限值和预设下限值筛选得到的N个候选的块级伸缩因子,还可以如图14所示,通过执行S2022-S2025来实现S202,如下:
S2022、在N个候选的块级伸缩因子中,确定最大候选的块级伸缩因子与最小候选的块级伸缩因子中的至少一个候选的块级伸缩因子。
S2022中,基于图11中的S2013之后,编码器基于由预设上限值和预设下限值筛选得到的N个候选的块级伸缩因子进行进一步筛选,在N个候选的块级伸缩因子中,确定最大值对应的最大候选的块级伸缩因子,以及最小值对应的最小候选的块级伸缩因子中的至少一个,作为至少一个候选的块级伸缩因子。
这里,编码器在N个候选的块级伸缩因子中确定至少一个候选的块级伸缩因子的过程,与上述在每一块对应的块级伸缩因子中确定至少一个块级伸缩因子的过程描述一致,此处不再赘述。
S2023、在N个候选的块级伸缩因子中,对于除至少一个候选的块级伸缩因子之外的候选的块级伸缩因子进行平均处理,得到第二均值。
S2023中,编码器对N个候选的块级伸缩因子中,对于除至少一个候选的块级伸缩因子之外的候选的块级伸缩因子进行平均处理,得到第二均值的过程与上述S2015中计算第一均值的过程描述一致,此处不再赘述。
S2024、若至少一个候选的块级伸缩因子与第二均值的差值大于预设差值阈值,则将N个候选的块级伸缩因子中,除至少一个候选的块级伸缩因子之外的候选的块级伸缩因子,确定为M个更新的块级伸缩因子。
S2024中,若至少一个候选的块级伸缩因子与第二均值的差值大于预设差值阈值,编码器则将至少一个候选的块级伸缩因子作为差异块对应的伸缩因子,在N个候选的块级伸缩因子中,舍弃至少一个候选的块级伸缩因子,得到M个更新的块级伸缩因子;也即将N个候选的块级伸缩因子中,除至少一个候选的块级伸缩因子之外的候选的块级伸缩因子,确定为M个更新的块级伸缩因子。这里,M为大于0,且小于或等于N的正整数。
S2025、对M个更新的块级伸缩因子进行平均处理,得到第一帧级伸缩因子。
S2025中,编码器对于通过两次筛选得到的M个更新的块级伸缩因子进行平均处理,得到第一帧级伸缩因子。
同样地，在一些实施例中，基于图12，也即基于根据上述最大块级伸缩因子与最小块级伸缩因子筛选得到的N个候选的块级伸缩因子，编码器也可以通过预设上限值和预设下限值，对N个候选的块级伸缩因子进行进一步筛选，得到M个更新的块级伸缩因子，进而对M个更新的块级伸缩因子进行平均处理，得到第一帧级伸缩因子。此处不再赘述。
需要说明的是,上述的图13和图14是并列的方法流程,实际应用中可以根据实际情况选择其中的一种方法流程来执行,本申请实施例不作限定。
可以理解的是,通过预设上限值和预设下限值进行一次筛选,在筛选后的N个候选的块级伸缩因子通过最大伸缩因子和最小伸缩因子进行又一次筛选,进一步减少了少数块的差异性对计算第一帧级伸缩因子的影响,提高了第一帧级伸缩因子的准确性,进而提高了编码准确性。
在本申请实施例的一些实施例中，提供了一种编码器框架，如图15所示。其中，神经网络环路滤波(Neural Network based in-Loop Filter，NNLF)可以通过基于多层卷积网络的环路滤波器实现，也可以通过多层残差网络的环路滤波器来实现。图15中，伸缩因子计算用于确定第二帧级伸缩因子SF与第一帧级伸缩因子RSF，基于SF和RSF确定帧级伸缩因子。伸缩因子计算的使能不依赖于DBF、SAO、ALF、NNLF的开关，在位置上置于NNLF之后，ALF之前。
如图15所示,编码端进入环路滤波时,按照既定的滤波器顺序进行处理,当进入伸缩因子计算时,伸缩因子计算的处理过程包括:通过每个块级编码过程,对于当前序列的当前帧的每个块中每个像素位置,遍历每个块中每个像素位置在每个块、第二重建图像块与第二滤波图像块中对应的每个像素的每个颜色分量,通过公式(1)-(7)计算出每一块对应的块级伸缩因子。在对当前帧中的每一块都处理完成时,得到L个块级伸缩因子。
在伸缩因子计算过程中，对于当前帧对应的L个块级伸缩因子，通过预设上限值和预设下限值来限制块级伸缩因子的导出范围，并据此对L个块级伸缩因子进行一次筛选。对于L个块级伸缩因子中的第j个块级伸缩因子ctuSF(j)，如果ctuSF(j)等于预设上限值或预设下限值，则说明ctuSF(j)对应的第二重建图像块需要做修正的幅度达到了能修正的临界值，将其作为特殊块，即差异块。通过将块级伸缩因子中的每一个块级伸缩因子与预设上限值和预设下限值对比，确定并统计特殊块在L个块中的占比，即第一占比。如果占比小于预设占比阈值，为了全局图像块的考虑，从L个块级伸缩因子中舍弃特殊块。
在伸缩因子计算过程中,在完成预设上限值与预设下限值的筛选后,通过遍历剩余的块级伸缩因子,也即通过遍历至少一个候选的块级伸缩因子,可以得到最大块级伸缩因子ctuSF_max或最小块级伸缩因子ctuSF_min。对于至少一个候选的块级伸缩因子进行均值计算,得到第二均值。如果ctuSF_max或ctuSF_min与第二均值之间的差值超过预设差值阈值,则将ctuSF_max或ctuSF_min看作特殊块,为了全局图像块的考虑,从至少一个候选的块级伸缩因子中舍弃特殊块,得到更新的块级伸缩因子集合ctuSF_SET,也即M个更新的块级伸缩因子。对ctuSF_SET求平均得到第一帧级伸缩因子RSF。
在伸缩因子计算过程中,基于当前帧中每个像素位置,遍历每个块中每个像素位置在当前帧、第二重建图像与第二滤波图像中对应的每个像素的每个颜色分量,计算出当前帧对应的第二帧级伸缩因子SF。分别使用SF和RSF对第二重建图像和第二滤波图像的残差进行修正,得到RSF对应的第一修正图像与SF对应的第二修正图像。将第一修正图像与第二修正图像分别与原始图像相比较并计算代价,示例性地,得到RSF对应的第一失真代价D_RSF,以及SF对应的第二失真代价D_SF。对D_RSF和D_SF进行比较,如果D_RSF<=D_SF,则采用RSF为当前帧的当前颜色分量最终使用的伸缩因子;否则如果D_RSF>D_SF,则采用SF为最终所使用的伸缩因子,并把选中的伸缩因子编码到码流中。若当前帧已完成伸缩因子的计算,则加载下一帧进行处理。
在一些实施例中，申请人在通用测试条件(Random Access，RA)配置下，使用集成了本申请伸缩因子计算方法的网络模型，在JVET规定的通用序列Class D WQVGA中的部分数据集上进行了测试。测试基于VTM11.0-nnvc参考平台，上述预设占比阈值被配置为TH_bottom=TH_up=10%，且预设差值阈值被配置为TH_min=TH_max=0.5。根据测试结果，集成了本申请伸缩因子计算方法的网络模型，相较于相关技术中的网络模型，对于颜色分量Y、U和V各自在RA性能上的提升如表1所示(表1中的数据与RA性能负相关)：
表1
测试数据集 Y分量 U分量 V分量
BasketballPass 0.00% -0.19% 0.03%
BQSquare 0.01% -0.22% -0.29%
BlowingBubbles 0.02% -0.22% 0.04%
RaceHorses -0.01% -0.10% -0.15%
从表1可以看出，相较于相关技术中的网络模型，对于不同的测试数据集，集成了本申请伸缩因子计算模块的网络模型在U分量和V分量上的提升非常明显，在U分量上的性能分别提升了-0.19%、-0.22%、-0.22%与-0.10%；在V分量上的提升分别达到了-0.29%与-0.15%。这一数据说明提升了编解码性能，提高了编解码准确性。
可以理解的是,本申请实施例通过考虑各个块级伸缩因子的差异性,降低了少数特殊块对帧级伸缩因子计算的影响,使得最终导出的优化帧级伸缩因子,即第一帧级伸缩因子,更加适用于大多数的图像块进行缩放处理。利用第一帧级伸缩因子对重建图像和神经网络环路滤波图像的残差进行修正,能够使得修正后的图像更加接近于原始图像,以获得更好的编码性能。同时,本申请实施例的伸缩因子计算方法仅在编码端运行,计算得到的第一帧级伸缩因子是图像级的,因此无需消耗编码比特数,对解码复杂度没有影响。通过引入本申请实施例对伸缩因子的优化方法,能够在神经网络环路滤波器的基础上有较大的编码性能提升,在实际应用中,还可以通过优化算法和门限值的设定进一步提高性能。
本申请实施例提供一种滤波方法,应用于解码器,图16为本申请实施例提供的滤波方法的一种可选的流程示意图,包括S301-S303,如下:
S301、解析码流,确定帧级伸缩因子和当前块的初始残差;其中,帧级伸缩因子是通过当前帧对应的第一帧级伸缩因子与第二帧级伸缩因子确定的,第一帧级伸缩因子是对当前帧中的块级伸缩因子进行筛选得到的,其中,第一帧级伸缩因子不包括差异块对应的块级伸缩因子;差异块为当前帧中与其他块具有差异性的块。
S301中,解码器通过接收并解析码流,得到当前帧对应的帧级伸缩因子,并通过当前帧中的当前块进行图像重建,如反变换与反量化等步骤,得到当前帧中的当前块的初始残差。这里,帧级伸缩因子即为上述的编码器滤波方法中,通过对比当前帧对应的第一帧级伸缩因子与第二帧级伸缩因子各自对应的失真代价确定的。其中,第一帧级伸缩因子是通过对当前帧中的块级伸缩因子进行筛选,基于筛选后的块级伸缩因子进行融合得到的。这里,筛选后的块级伸缩因子不包含差异块对应的块级伸缩因子;差异块为当前帧中与其他块具有差异性的块。故,第一帧级伸缩因子不包括差异块对应的块级伸缩因子。
S302、基于初始残差进行图像块重建与滤波,确定第一重建图像块与第一滤波图像块;
S302中,解码器基于初始残差进行图像块重建,通过将初始残差叠加在预测得到的预测图像块上,得到第一重建图像块。解码器对第一重建图像块进行滤波,得到第一滤波图像块。
在一些实施例中，解码器对初始残差进行图像块重建，得到第一初始重建图像块；对第一初始重建图像块进行去块滤波，得到第一去块图像块；对第一去块图像块进行样值自适应补偿滤波，得到第一重建图像块。解码器对第一重建图像块进行环路滤波，得到第一滤波图像块。
S303、利用帧级伸缩因子、第一重建图像块和第一滤波图像块进行缩放处理,得到当前块对应的修正图像块。
S303中,解码器基于第一重建图像块和第一滤波图像块,确定每个像素位置对应的第一重建像素和第一滤波像素;利用帧级伸缩因子,对第一重建像素和第一滤波像素进行缩放处理,确定每个像素位置对应的修正像素;根据每个像素位置对应的修正像素,确定修正图像块。
在一些实施例中,解码器可以对于每个像素位置,确定第一重建像素和第一滤波像素之间的第一像素残差;利用帧级伸缩因子,对第一像素残差进行缩放处理,并与第一重建像素进行叠加,得到每个像素位置对应的修正像素。
在一些实施例中,解码器可以利用帧级伸缩因子与预设系数之间的差值,对第一重建像素进行缩放处理,得到第一重建修正像素;利用帧级伸缩因子,对第一滤波像素进行缩放处理,得到第一滤波修正像素;结合第一重建修正像素与第一滤波修正像素,得到修正像素。
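S303中描述的两种逐像素修正方式（残差缩放叠加，以及按预设系数加权混合）可以用如下 Python 草稿示意。当预设系数取1时，两种形式在数值上等价；数值与函数名均为假设示例：

```python
def correct_pixel_residual(rec, cnn, sf):
    """形式一：对滤波像素与重建像素之间的第一像素残差做缩放，再叠加回重建像素。"""
    return rec + sf * (cnn - rec)

def correct_pixel_blend(rec, cnn, sf, coeff=1.0):
    """形式二：用 (coeff - sf) 与 sf 分别缩放重建像素与滤波像素后相加。"""
    return (coeff - sf) * rec + sf * cnn

print(correct_pixel_residual(100, 110, 0.5))  # 105.0
print(correct_pixel_blend(100, 110, 0.5))     # 105.0
```

解码器对块内每个像素位置执行同样的缩放处理，即可得到当前块对应的修正图像块。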
在本申请的一些实施例中,帧级伸缩因子包括:各个颜色分量对应的帧级伸缩因子。解码器对于每个像素位置,根据各个颜色分量对应的帧级伸缩因子,对第一重建像素在各个颜色分量上的第一像素,与第一滤波像素在各个颜色分量上的第二像素进行对应的缩放处理,得到各个颜色分量对应的修正像素。
本申请实施例中,解码器还可以对当前帧中的下一块进行图像块重建与滤波,并利用帧级伸缩因子对下一块的第一重建图像块和第一滤波图像块进行缩放处理,得到下一块对应的修正图像块,直至对当前帧处理完成,根据当前帧中每一块对应的修正图像块,得到当前帧的第一滤波图像,也即解码器侧的滤波单元输出的视频图像信号。
或者，在利用帧级伸缩因子、第一重建图像块和第一滤波图像块进行缩放处理，得到当前块对应的修正图像块之后，解码器还可以对修正图像块进行自适应滤波，得到第二滤波图像块；对当前帧中下一块继续进行滤波处理，得到下一块对应的第二滤波图像块，继续处理直至对当前帧处理完成，根据当前帧中每一块对应的第二滤波图像块，得到当前帧的第一滤波图像。
需要说明的是,本申请实施例提供的解码器侧的滤波方法应用于解码器侧的滤波单元中,上述解码器侧的相关步骤的描述与编码器侧的滤波单元执行对应步骤时的描述一致,此处不再赘述。
可以理解的是，由于解码器端从码流中解析出的帧级伸缩因子是通过当前帧对应的第一帧级伸缩因子与第二帧级伸缩因子确定的。其中，第一帧级伸缩因子是对当前帧的块级伸缩因子进行筛选得到的，不包含差异块对应的块级伸缩因子，从而减小了差异块对应的块级伸缩因子对计算整个帧级伸缩因子造成的误差，使得计算得到的第一帧级伸缩因子能够更准确地表征当前帧中大部分块所需修正的幅度，提高了第一帧级伸缩因子的准确性。并且，由于帧级伸缩因子是通过两种帧级伸缩因子的对比，在第一帧级伸缩因子和第二帧级伸缩因子中确定的，进一步提高了帧级伸缩因子的失真修正性能，提高了利用帧级伸缩因子对当前块进行滤波，得到修正图像块的准确性，进而提高了解码性能与解码准确性。
基于前述实施例的实现基础,如图17所示,本申请实施例提供了一种解码器1,包括:
解析部分10,被配置为解析码流,确定帧级伸缩因子和当前块的初始残差;其中,所述帧级伸缩因子是通过当前帧对应的第一帧级伸缩因子与第二帧级伸缩因子确定的,所述第一帧级伸缩因子是对所述当前帧中的块级伸缩因子进行筛选得到的,其中,所述第一帧级伸缩因子不包括差异块对应的块级伸缩因子;所述差异块为所述当前帧中与其他块具有差异性的块;
第一重建与滤波部分11,被配置为基于所述初始残差进行图像块重建与滤波,确定第一重建图像块与第一滤波图像块;
第一确定部分12,被配置为利用所述帧级伸缩因子、所述第一重建图像块和所述第一滤波图像块进行缩放处理,得到所述当前块对应的修正图像块。
在本申请的一些实施例中,所述第一确定部分12,还被配置为基于所述第一重建图像块和所述第一滤波图像块,确定每个像素位置对应的第一重建像素和第一滤波像素;利用所述帧级伸缩因子,对所述第一重建像素和所述第一滤波像素进行缩放处理,确定所述每个像素位置对应的修正像素;根据所述每个像素位置对应的修正像素,确定所述修正图像块。
在本申请的一些实施例中,所述第一确定部分12,还被配置为对于所述每个像素位置,确定所述第一重建像素和所述第一滤波像素之间的第一像素残差;利用所述帧级伸缩因子,对所述第一像素残差进行缩放处理,并与所述第一重建像素进行叠加,得到所述每个像素位置对应的修正像素。
在本申请的一些实施例中,所述第一确定部分12,还被配置为利用所述帧级伸缩因子与预设系数之间的差值,对所述第一重建像素进行缩放处理,得到第一重建修正像素;利用所述帧级伸缩因子,对所述第一滤波像素进行缩放处理,得到第一滤波修正像素;结合所述第一重建修正像素与所述第一滤波修正像素,得到所述修正像素。
在本申请的一些实施例中,所述帧级伸缩因子包括:各个颜色分量对应的帧级伸缩因子;所述第一确定部分12,还被配置为对于所述每个像素位置,根据所述各个颜色分量对应的帧级伸缩因子,对所述第一重建像素在所述各个颜色分量上的第一像素,与所述第一滤波像素在所述各个颜色分量上的第二像素进行对应的缩放处理,得到所述各个颜色分量对应的所述修正像素。
在本申请的一些实施例中,所述第一重建与滤波部分11,还被配置为基于所述初始残差进行图像块重建,得到第一重建图像块;对所述第一重建图像块进行滤波,得到所述第一滤波图像块。
在本申请的一些实施例中,所述第一重建与滤波部分11,还被配置为对所述初始残差进行图像块重建,得到第一初始重建图像块;对所述第一初始重建图像块进行去块滤波,得到第一去块图像块;对所述第一去块图像块进行样值自适应补偿滤波,得到所述第一重建图像块。
在本申请的一些实施例中,所述解析部分10、所述第一重建与滤波部分11以及所述第一确定部分12,还被配置为利用所述帧级伸缩因子、所述第一重建图像块和所述第一滤波图像块进行缩放处理,得到所述当前块对应的修正图像块之后,对所述当前帧中的下一块进行图像块重建与滤波,并利用所述帧级伸缩因子对所述下一块的第一重建图像块和第一滤波图像块进行缩放处理,得到所述下一块对应的修正图像块,直至对所述当前帧处理完成,根据所述当前帧中每一块对应的修正图像块,得到当前帧的第一滤波图像。
在本申请的一些实施例中,所述解码器1还包括第一自适应滤波部分,所述第一自适应滤波部分,被配置为对所述修正图像块进行自适应滤波,得到第二滤波图像块;对所述当前帧中下一块继续进行滤波处理,得到所述下一块对应的第二滤波图像块,继续处理直至对所述当前帧处理完成,根据所述当前帧中每一块对应的第二滤波图像块,得到当前帧的第一滤波图像。
在本申请的实际应用中,如图18所示,本申请实施例还提供了一种解码器,包括:
第一存储器14和第一处理器15;
所述第一存储器14存储有可在第一处理器15上运行的计算机程序,所述第一处理器15执行所述程序时实现解码器对应的滤波方法。
其中,第一处理器15可以通过软件、硬件、固件或者其组合实现,可以使用电路、单个或多个专用集成电路(application specific integrated circuits,ASIC)、单个或多个通用集成电路、单个或多个微处理器、单个或多个可编程逻辑器件、或者前述电路或器件的组合、或者其他适合的电路或器件,从而使得该第一处理器15可以执行前述实施例中的解码器侧的滤波方法的相应步骤。
需要说明的是,以上解码器实施例的描述,与上述方法实施例的描述是类似的,具有同方法实施例相似的有益效果。对于本发明解码器实施例中未披露的技术细节,请参照本发明方法实施例的描述而理解。
本申请实施例提供了一种编码器2,如图19所示,包括:
第二重建与滤波部分20,被配置为基于当前帧中每一块的初始残差进行图像重建与滤波,确定所述每一块对应的第二重建图像块与第二滤波图像块;
第二确定部分21,被配置为基于所述第二重建图像块与所述第二滤波图像块进行伸缩因子计算,确定所述每一块对应的块级伸缩因子;对当前帧对应的所述每一块对应的块级伸缩因子进行筛选,确定第一帧级伸缩因子;所述第一帧级伸缩因子不包括差异块对应的块级伸缩因子;所述差异块为所述当前帧中与其他块具有差异性的块;基于所述当前帧对应的第二重建图像与第二滤波图像进行伸缩因子计算,得到所述当前帧对应的第二帧级伸缩因子;所述第二重建图像通过每一块对应的第二重建图像块确定;所述第二滤波图像通过每一块对应的第二滤波图像块确定;基于所述第一帧级伸缩因子与所述第二帧级伸缩因子,确定所述帧级伸缩因子。
在本申请的一些实施例中,所述第二确定部分21,还被配置为对所述每一块对应的块级伸缩因子进行边界值筛选,得到N个候选的块级伸缩因子;N为大于0的正整数;基于所述N个候选的块级伸缩因子进行平均处理,得到所述第一帧级伸缩因子。
在本申请的一些实施例中,所述第二确定部分21,还被配置为在每一块对应的块级伸缩因子中,确定出等于预设上限值或预设下限值的块级伸缩因子,作为差异块对应的块级伸缩因子;确定所述差异块对应的块级伸缩因子的数量,与所述每一块对应的块级伸缩因子的总数量的第一占比;若所述第一占比不超过预设占比阈值,则将所述每一块对应的块级伸缩因子中,除所述差异块对应的块级伸缩因子之外的块级伸缩因子,作为所述N个候选的块级伸缩因子。
在本申请的一些实施例中,所述第二确定部分21,还被配置为在所述每一块对应的块级伸缩因子中,确定最大块级伸缩因子与最小块级伸缩因子中的至少一个块级伸缩因子;在所述每一块对应的块级伸缩因子中,对于除所述至少一个块级伸缩因子之外的块级伸缩因子进行平均处理,得到第一均值;若所述至少一个块级伸缩因子与所述第一均值的差值大于预设差值阈值,则将所述至少一个块级伸缩因子,作为差异块对应的块级伸缩因子;将所述每一块对应的块级伸缩因子中,除所述差异块对应的块级伸缩因子之外的块级伸缩因子,作为所述N个候选的块级伸缩因子。
在本申请的一些实施例中,所述第二确定部分21,还被配置为在所述N个候选的块级伸缩因子中,确定最大候选的块级伸缩因子与最小候选的块级伸缩因子中的至少一个候选的块级伸缩因子;
在所述N个候选的块级伸缩因子中,对于除所述至少一个候选的块级伸缩因子之外的候选的块级伸缩因子进行平均处理,得到第二均值;若所述至少一个候选的块级伸缩因子与所述第二均值的差值大于预设差值阈值,则将所述N个候选的块级伸缩因子中,除所述至少一个候选的块级伸缩因子之外的候选的块级伸缩因子,确定为M个更新的块级伸缩因子;M大于0,且小于或等于N;对所述M个更新的块级伸缩因子进行平均处理,得到所述第一帧级伸缩因子。
在本申请的一些实施例中,所述第二确定部分21,还被配置为利用所述第一帧级伸缩因子,对所述第二重建图像与所述第二滤波图像进行缩放处理,得到第一修正图像;
利用所述第二帧级伸缩因子,对所述第二重建图像与所述第二滤波图像进行缩放处理,得到第二修正图像;分别确定所述第一修正图像和所述第二修正图像,与所述当前帧之间的失真代价,得到所述第一帧级伸缩因子对应的第一失真代价,以及与所述第二帧级伸缩因子对应的第二失真代价;通过对比所述第一失真代价与第二失真代价,从所述第一帧级伸缩因子与所述第二帧级伸缩因子中确定所述帧级伸缩因子。
在本申请的一些实施例中,所述第二确定部分21,还被配置为若所述第一失真代价小于或等于所述第二失真代价,则将所述第一帧级伸缩因子确定为所述帧级伸缩因子;若所述第一失真代价大 于所述第二失真代价,则将所述第二帧级伸缩因子确定为所述帧级伸缩因子。
在本申请的一些实施例中,所述第二确定部分21,还被配置为根据所述每一块中每个像素位置对应的原始像素,确定与第二重建图像块的第二重建像素之间的原始像素残差;根据所述第二滤波图像块中每个像素位置对应的第二滤波像素,确定与所述第二重建像素之间的滤波像素残差;基于所述原始像素残差与所述滤波像素残差,确定所述每一块对应的块级原始像素残差与块级滤波像素残差;基于所述块级原始像素残差与所述块级滤波像素残差进行加权处理,得到所述每一块对应的块级伸缩因子。
在本申请的一些实施例中,所述第二确定部分21,还被配置为对所述每一块中每个像素位置对应的所述滤波像素残差进行加权与融合,确定所述每一块对应的第一块级加权残差;对所述每一块中每个像素位置对应的所述原始像素残差和所述滤波像素残差进行加权与融合,确定所述每一块对应的第二块级加权残差;结合预设上限值与预设下限值,对所述第一块级加权残差、所述第二块级加权残差、所述块级原始像素残差与所述块级滤波像素残差进行最小二乘处理,得到所述每一块对应的块级伸缩因子。
在本申请的一些实施例中,所述第二确定部分21,还被配置为遍历所述第二重建图像中每个像素位置对应的原始像素残差,确定帧级原始像素残差;所述第二重建图像中每个像素位置对应的原始像素残差基于所述每一块对应的第二重建图像块中每个像素位置对应的原始像素残差得到;遍历所述第二滤波图像中每个像素位置对应的滤波像素残差,确定帧级滤波像素残差;所述第二滤波图像中每个像素位置对应的滤波像素残差基于所述第二滤波图像块中每个像素位置对应的滤波像素残差得到;对所述第二滤波图像中每个像素位置对应的滤波像素残差进行加权与融合,得到第一帧级加权残差;对所述第二滤波图像中每个像素位置对应的滤波像素残差,与所述第二重建图像中每个像素位置对应的原始像素残差进行加权与融合,得到第二帧级加权残差;结合预设上限值与预设下限值,对所述第一帧级加权残差、所述第二帧级加权残差、所述帧级原始像素残差与所述帧级滤波像素残差进行最小二乘处理,得到所述第二帧级伸缩因子。
在本申请的一些实施例中,所述第一帧级伸缩因子包括各个颜色分量对应的第一帧级伸缩因子;所述第二确定部分21,还被配置为对于所述第二重建图像与所述第二滤波图像中的每个像素位置,根据所述各个颜色分量对应的第一帧级伸缩因子,对所述第二重建图像中每个第二重建像素在所述各个颜色分量上的第一重建像素,以及所述第二滤波图像中每个第二滤波像素在所述各个颜色分量上的第二滤波像素进行对应的缩放与叠加处理,得到每个像素位置对应的,包含所述各个颜色分量的第一修正像素;遍历每个像素位置对应的第一修正像素,得到第一修正图像。
在本申请的一些实施例中，所述第二重建与滤波部分20，还被配置为对所述当前帧中当前块的初始残差进行图像重建与滤波，得到所述当前块对应的第二重建图像块与第二滤波图像块，继续对所述当前帧中的下一块进行图像块重建与滤波，直至对所述当前帧处理完成，得到所述每一块对应的第二重建图像块与第二滤波图像块。
在本申请的一些实施例中，所述第二重建与滤波部分20，还被配置为对所述当前块的初始残差进行图像重建，得到所述第二重建图像块；对所述第二重建图像块进行滤波，得到所述第二滤波图像块。
在本申请的一些实施例中,所述编码器2还包括滤波图像输出部分,所述滤波图像输出部分,被配置为在所述基于当前帧中每一块的初始残差进行图像重建与滤波,得到所述每一块对应的第二重建图像块与第二滤波图像块之后,基于所述每一块对应的第二滤波图像块,得到所述当前帧对应的第二滤波图像。
在本申请的一些实施例中,所述滤波图像输出部分,还被配置为在对所述当前帧中每一块的初始残差进行图像重建与滤波,得到所述每一块对应的第二重建图像块与第二滤波图像块之后,对所述每一块对应的第二滤波图像块进行自适应滤波,得到每一块对应的自适应滤波图像块,遍历所述每一块对应的自适应滤波图像块,得到所述当前帧对应的第二滤波图像。
在实际应用中,如图20所示,本申请实施例还提供了一种编码器,包括:
第二存储器25和第二处理器26;
所述第二存储器25存储有可在第二处理器26上运行的计算机程序，所述第二处理器26执行所述程序时实现编码器对应的滤波方法。
需要说明的是,以上编码器实施例的描述,与上述方法实施例的描述是类似的,具有同方法实施例相似的有益效果。对于本发明编码器实施例中未披露的技术细节,请参照本发明方法实施例的描述而理解。
本申请实施例提供了一种计算机可读存储介质，其上存储有计算机程序，该计算机程序被第一处理器执行时，实现解码器对应的所述滤波方法；或者，该计算机程序被第二处理器执行时，实现编码器对应的所述滤波方法。
在本申请实施例中的各组成部分可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。
所述集成的单元如果以软件功能模块的形式实现并非作为独立的产品进行销售或使用时,可以存储在一个计算机可读存储介质中,基于这样的理解,本实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)或processor(处理器)执行本实施例所述方法的全部或部分步骤。而前述的计算机可读存储介质包括:磁性随机存取存储器(FRAM,ferromagnetic random access memory)、只读存储器(ROM,Read Only Memory)、可编程只读存储器(PROM,Programmable Read-Only Memory)、可擦除可编程只读存储器(EPROM,Erasable Programmable Read-Only Memory)、电可擦除可编程只读存储器(EEPROM,Electrically Erasable Programmable Read-Only Memory)、快闪存储器(Flash Memory)、磁表面存储器、光盘、或只读光盘(CD-ROM,Compact Disc Read-Only Memory)等各种可以存储程序代码的介质,本公开实施例不作限制。
以上所述,仅为本申请的实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。
Industrial applicability
Embodiments of the present application provide a filtering method, a decoder, an encoder and a computer-readable storage medium. The frame-level scaling factor parsed from the bitstream at the decoder side is determined from the first frame-level scaling factor and the second frame-level scaling factor corresponding to the current frame. The first frame-level scaling factor is obtained by screening the block-level scaling factors of the current frame and does not include the block-level scaling factors corresponding to outlier blocks, which reduces the error that the block-level scaling factors of outlier blocks would otherwise introduce into the computation of the overall frame-level scaling factor; the computed first frame-level scaling factor therefore characterises more accurately the correction amplitude required by most blocks in the current frame, improving its accuracy. Moreover, since the frame-level scaling factor is selected between the first and second frame-level scaling factors by comparing the two, the distortion-correction performance of the frame-level scaling factor is further improved, the corrected picture block obtained by filtering the current block with the frame-level scaling factor is more accurate, and decoding performance and decoding accuracy are thereby improved.
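As a rough illustration of the decoder-side correction summarised above (with hypothetical names; a real codec would operate on clipped integer samples), scaling the residual between the filtered and reconstructed pixels by the frame-level scaling factor `s` and superposing it onto the reconstructed pixel is equivalent to the weighted form `(1 - s) * recon + s * filt`:

```python
def correct_block(recon, filt, s):
    """Illustrative sketch: per-pixel correction with a frame-level scaling factor.

    corrected = recon + s * (filt - recon), which expands to
    (1 - s) * recon + s * filt, so s = 0 keeps the reconstructed
    block and s = 1 keeps the filtered block.
    """
    return [r + s * (f - r) for r, f in zip(recon, filt)]
```

Values of `s` between 0 and 1 attenuate the filter's effect; values above 1 amplify it, which is why the scaling factors elsewhere in this application are clipped to preset limits.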
Moreover, the encoder performs picture reconstruction and filtering based on the initial residual of each block in the current frame to determine the second reconstructed picture block and the second filtered picture block corresponding to each block, and computes a scaling factor within the scope of a picture block based on these two blocks to determine the block-level scaling factor corresponding to each block. It then screens the block-level scaling factors corresponding to the blocks in the current frame to determine the first frame-level scaling factor, so that the first frame-level scaling factor does not include the scaling factors of outlier blocks, which reduces the influence of the outlier blocks' scaling factors on the computation of the frame-level scaling factor of the whole picture and improves the accuracy of the first frame-level scaling factor. Furthermore, by comparing the first frame-level scaling factor with the second frame-level scaling factor computed over the picture-level scope, and determining the frame-level scaling factor from them, the distortion-correction performance of the frame-level scaling factor is improved, which improves encoding performance and ultimately encoding accuracy. In addition, since the screened block-level scaling factors are fused into the first frame-level scaling factor, encoding the first frame-level scaling factor into the bitstream for transmission consumes no extra coding bits, improving encoding performance and accuracy without reducing encoding efficiency. Further, when screening the scaling factors of outlier blocks, a first screening is performed using a preset upper limit and a preset lower limit, and the N candidate block-level scaling factors that survive are screened again using the maximum and minimum scaling factors, which further reduces the influence of a few divergent blocks on the computation of the first frame-level scaling factor and further improves its accuracy and, in turn, the encoding accuracy.
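The two-stage screening summarised above can be sketched loosely as follows. This is a hypothetical rendering, not the normative procedure: the first pass discards block-level scaling factors stuck at the preset upper or lower limit (falling back when their proportion is too high), and the second pass re-checks the maximum and minimum candidates against the mean of the remaining ones. All function, argument and threshold names and default values are illustrative.

```python
def first_frame_level_factor(block_factors, lo=0.0, hi=2.0,
                             ratio_thresh=0.25, diff_thresh=0.5):
    """Illustrative two-stage screening of block-level scaling factors.

    Returns the first frame-level scaling factor, or None when too many
    blocks hit the preset limits (a caller could then rely on the
    picture-level factor instead).
    """
    # Stage 1: boundary-value screening. Factors equal to the preset
    # limits are treated as belonging to outlier blocks.
    outliers = [s for s in block_factors if s in (lo, hi)]
    if len(outliers) / len(block_factors) > ratio_thresh:
        return None
    cand = [s for s in block_factors if s not in (lo, hi)]

    # Stage 2: re-screen the max/min candidates - drop an extreme
    # candidate when it strays too far from the mean of the others.
    for extreme in (max(cand), min(cand)):
        if extreme not in cand:
            continue
        rest = [s for s in cand if s != extreme]
        if rest and abs(extreme - sum(rest) / len(rest)) > diff_thresh:
            cand = rest

    # Fuse the surviving candidates by averaging.
    return sum(cand) / len(cand)
```

Note that, for simplicity, the sketch removes every candidate equal to the extreme value, whereas an implementation counting duplicates separately would remove only one; this detail is not specified here.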

Claims (29)

  1. A filtering method, applied to a decoder, comprising:
    parsing a bitstream to determine a frame-level scaling factor and an initial residual of a current block; wherein the frame-level scaling factor is determined from a first frame-level scaling factor and a second frame-level scaling factor corresponding to a current frame, the first frame-level scaling factor is obtained by screening block-level scaling factors in the current frame, and the first frame-level scaling factor does not include a block-level scaling factor corresponding to an outlier block, the outlier block being a block in the current frame that differs from other blocks;
    performing picture-block reconstruction and filtering based on the initial residual to determine a first reconstructed picture block and a first filtered picture block;
    performing scaling processing using the frame-level scaling factor, the first reconstructed picture block and the first filtered picture block to obtain a corrected picture block corresponding to the current block.
  2. The method according to claim 1, wherein performing the scaling processing using the frame-level scaling factor, the first reconstructed picture block and the first filtered picture block to obtain the corrected picture block corresponding to the current block comprises:
    determining, based on the first reconstructed picture block and the first filtered picture block, a first reconstructed pixel and a first filtered pixel corresponding to each pixel position;
    performing scaling processing on the first reconstructed pixel and the first filtered pixel using the frame-level scaling factor to determine a corrected pixel corresponding to each pixel position;
    determining the corrected picture block according to the corrected pixel corresponding to each pixel position.
  3. The method according to claim 2, wherein performing the scaling processing on the first reconstructed pixel and the first filtered pixel using the frame-level scaling factor to determine the corrected pixel corresponding to each pixel position comprises:
    for each pixel position, determining a first pixel residual between the first reconstructed pixel and the first filtered pixel;
    performing scaling processing on the first pixel residual using the frame-level scaling factor, and superposing the result with the first reconstructed pixel, to obtain the corrected pixel corresponding to each pixel position.
  4. The method according to claim 2, wherein performing the scaling processing on the first reconstructed pixel and the first filtered pixel using the frame-level scaling factor to determine the corrected pixel corresponding to each pixel position comprises:
    performing scaling processing on the first reconstructed pixel using a difference between the frame-level scaling factor and a preset coefficient to obtain a first reconstructed corrected pixel;
    performing scaling processing on the first filtered pixel using the frame-level scaling factor to obtain a first filtered corrected pixel;
    combining the first reconstructed corrected pixel and the first filtered corrected pixel to obtain the corrected pixel.
  5. The method according to claim 2, wherein the frame-level scaling factor comprises frame-level scaling factors corresponding to respective colour components;
    performing the scaling processing on the first reconstructed pixel and the first filtered pixel using the frame-level scaling factor to determine the corrected pixel corresponding to each pixel position comprises:
    for each pixel position, performing, according to the frame-level scaling factor corresponding to each colour component, corresponding scaling processing on a first pixel of the first reconstructed pixel on each colour component and a second pixel of the first filtered pixel on each colour component, to obtain the corrected pixel corresponding to each colour component.
  6. The method according to any one of claims 1 to 5, wherein performing the picture-block reconstruction and filtering based on the initial residual to determine the first reconstructed picture block and the first filtered picture block comprises:
    performing picture-block reconstruction based on the initial residual to obtain the first reconstructed picture block;
    filtering the first reconstructed picture block to obtain the first filtered picture block.
  7. The method according to claim 6, wherein performing the picture-block reconstruction based on the initial residual to obtain the first reconstructed picture block comprises:
    performing picture-block reconstruction on the initial residual to obtain a first initial reconstructed picture block;
    performing deblocking filtering on the first initial reconstructed picture block to obtain a first deblocked picture block;
    performing sample adaptive offset filtering on the first deblocked picture block to obtain the first reconstructed picture block.
  8. The method according to any one of claims 1 to 5 or claim 7, wherein after performing the scaling processing using the frame-level scaling factor, the first reconstructed picture block and the first filtered picture block to obtain the corrected picture block corresponding to the current block, the method further comprises:
    performing picture-block reconstruction and filtering on a next block in the current frame, and performing scaling processing on the first reconstructed picture block and the first filtered picture block of the next block using the frame-level scaling factor to obtain a corrected picture block corresponding to the next block, until the current frame has been processed, and obtaining a first filtered picture of the current frame according to the corrected picture block corresponding to each block in the current frame.
  9. The method according to any one of claims 1 to 5 or claim 7, wherein after performing the scaling processing using the frame-level scaling factor, the first reconstructed picture block and the first filtered picture block to obtain the corrected picture block corresponding to the current block, the method further comprises:
    performing adaptive filtering on the corrected picture block to obtain a second filtered picture block;
    continuing the filtering processing on a next block in the current frame to obtain a second filtered picture block corresponding to the next block, continuing until the current frame has been processed, and obtaining a first filtered picture of the current frame according to the second filtered picture block corresponding to each block in the current frame.
  10. A filtering method, applied to an encoder, comprising:
    performing picture reconstruction and filtering based on an initial residual of each block in a current frame to determine a second reconstructed picture block and a second filtered picture block corresponding to each block;
    performing scaling-factor computation based on the second reconstructed picture block and the second filtered picture block to determine a block-level scaling factor corresponding to each block;
    screening the block-level scaling factors corresponding to the blocks of the current frame to determine a first frame-level scaling factor, wherein the first frame-level scaling factor does not include a block-level scaling factor corresponding to an outlier block, the outlier block being a block in the current frame that differs from other blocks;
    performing scaling-factor computation based on a second reconstructed picture and a second filtered picture corresponding to the current frame to obtain a second frame-level scaling factor corresponding to the current frame, wherein the second reconstructed picture is determined from the second reconstructed picture block corresponding to each block, and the second filtered picture is determined from the second filtered picture block corresponding to each block;
    determining the frame-level scaling factor based on the first frame-level scaling factor and the second frame-level scaling factor.
  11. The method according to claim 10, wherein screening the block-level scaling factors corresponding to the blocks of the current frame to determine the first frame-level scaling factor comprises:
    performing boundary-value screening on the block-level scaling factor corresponding to each block to obtain N candidate block-level scaling factors, N being a positive integer greater than 0;
    performing averaging based on the N candidate block-level scaling factors to obtain the first frame-level scaling factor.
  12. The method according to claim 11, wherein performing the boundary-value screening on the block-level scaling factor corresponding to each block to obtain the N candidate block-level scaling factors comprises:
    determining, among the block-level scaling factors corresponding to the blocks, block-level scaling factors equal to a preset upper limit or a preset lower limit as block-level scaling factors corresponding to outlier blocks;
    determining a first ratio of the number of block-level scaling factors corresponding to the outlier blocks to the total number of block-level scaling factors corresponding to the blocks;
    if the first ratio does not exceed a preset ratio threshold, taking, among the block-level scaling factors corresponding to the blocks, the block-level scaling factors other than those corresponding to the outlier blocks as the N candidate block-level scaling factors.
  13. The method according to claim 11, wherein performing the boundary-value screening on the block-level scaling factor corresponding to each block to obtain the N candidate block-level scaling factors comprises:
    determining, among the block-level scaling factors corresponding to the blocks, at least one of a maximum block-level scaling factor and a minimum block-level scaling factor;
    performing averaging on the block-level scaling factors other than the at least one block-level scaling factor to obtain a first mean value;
    if a difference between the at least one block-level scaling factor and the first mean value is greater than a preset difference threshold, taking the at least one block-level scaling factor as a block-level scaling factor corresponding to an outlier block;
    taking, among the block-level scaling factors corresponding to the blocks, the block-level scaling factors other than those corresponding to the outlier block as the N candidate block-level scaling factors.
  14. The method according to claim 12, wherein performing the averaging based on the N candidate block-level scaling factors to obtain the first frame-level scaling factor comprises:
    determining, among the N candidate block-level scaling factors, at least one of a maximum candidate block-level scaling factor and a minimum candidate block-level scaling factor;
    performing averaging on the candidate block-level scaling factors other than the at least one candidate block-level scaling factor to obtain a second mean value;
    if a difference between the at least one candidate block-level scaling factor and the second mean value is greater than a preset difference threshold, determining, among the N candidate block-level scaling factors, the candidate block-level scaling factors other than the at least one candidate block-level scaling factor as M updated block-level scaling factors, M being greater than 0 and less than or equal to N;
    performing averaging on the M updated block-level scaling factors to obtain the first frame-level scaling factor.
  15. The method according to any one of claims 10 to 14, wherein determining the frame-level scaling factor based on the first frame-level scaling factor and the second frame-level scaling factor comprises:
    performing scaling processing on the second reconstructed picture and the second filtered picture using the first frame-level scaling factor to obtain a first corrected picture;
    performing scaling processing on the second reconstructed picture and the second filtered picture using the second frame-level scaling factor to obtain a second corrected picture;
    determining distortion costs between the first corrected picture and the current frame and between the second corrected picture and the current frame respectively, to obtain a first distortion cost corresponding to the first frame-level scaling factor and a second distortion cost corresponding to the second frame-level scaling factor;
    determining the frame-level scaling factor from the first frame-level scaling factor and the second frame-level scaling factor by comparing the first distortion cost and the second distortion cost.
  16. The method according to claim 15, wherein determining the frame-level scaling factor from the first frame-level scaling factor and the second frame-level scaling factor by comparing the first distortion cost and the second distortion cost comprises:
    if the first distortion cost is less than or equal to the second distortion cost, determining the first frame-level scaling factor as the frame-level scaling factor;
    if the first distortion cost is greater than the second distortion cost, determining the second frame-level scaling factor as the frame-level scaling factor.
  17. The method according to any one of claims 10 to 14 or claim 16, wherein performing the scaling-factor computation based on the second reconstructed picture block and the second filtered picture block to determine the block-level scaling factor corresponding to each block comprises:
    determining, according to the original pixel corresponding to each pixel position in each block, an original pixel residual relative to a second reconstructed pixel of the second reconstructed picture block;
    determining, according to the second filtered pixel corresponding to each pixel position in the second filtered picture block, a filtered pixel residual relative to the second reconstructed pixel;
    determining a block-level original pixel residual and a block-level filtered pixel residual corresponding to each block based on the original pixel residual and the filtered pixel residual;
    performing weighting processing based on the block-level original pixel residual and the block-level filtered pixel residual to obtain the block-level scaling factor corresponding to each block.
  18. The method according to claim 17, wherein performing the weighting processing based on the block-level original pixel residual and the block-level filtered pixel residual to obtain the block-level scaling factor corresponding to each block comprises:
    performing weighting and fusion on the filtered pixel residuals corresponding to the pixel positions in each block to determine a first block-level weighted residual corresponding to each block;
    performing weighting and fusion on the original pixel residuals and the filtered pixel residuals corresponding to the pixel positions in each block to determine a second block-level weighted residual corresponding to each block;
    performing, in combination with a preset upper limit and a preset lower limit, least-squares processing on the first block-level weighted residual, the second block-level weighted residual, the block-level original pixel residual and the block-level filtered pixel residual to obtain the block-level scaling factor corresponding to each block.
  19. The method according to any one of claims 10 to 14 or claim 16, wherein performing the scaling-factor computation based on the second reconstructed picture and the second filtered picture corresponding to the current frame to obtain the second frame-level scaling factor corresponding to the current frame comprises:
    traversing the original pixel residual corresponding to each pixel position in the second reconstructed picture to determine a frame-level original pixel residual, wherein the original pixel residual corresponding to each pixel position in the second reconstructed picture is obtained based on the original pixel residual corresponding to each pixel position in the second reconstructed picture block corresponding to each block;
    traversing the filtered pixel residual corresponding to each pixel position in the second filtered picture to determine a frame-level filtered pixel residual, wherein the filtered pixel residual corresponding to each pixel position in the second filtered picture is obtained based on the filtered pixel residual corresponding to each pixel position in the second filtered picture block;
    performing weighting and fusion on the filtered pixel residuals corresponding to the pixel positions in the second filtered picture to obtain a first frame-level weighted residual;
    performing weighting and fusion on the filtered pixel residuals corresponding to the pixel positions in the second filtered picture and the original pixel residuals corresponding to the pixel positions in the second reconstructed picture to obtain a second frame-level weighted residual;
    performing, in combination with a preset upper limit and a preset lower limit, least-squares processing on the first frame-level weighted residual, the second frame-level weighted residual, the frame-level original pixel residual and the frame-level filtered pixel residual to obtain the second frame-level scaling factor.
  20. The method according to claim 15, wherein the first frame-level scaling factor comprises first frame-level scaling factors corresponding to respective colour components;
    performing the scaling processing on the second reconstructed picture and the second filtered picture using the first frame-level scaling factor to obtain the first corrected picture comprises:
    for each pixel position in the second reconstructed picture and the second filtered picture, performing, according to the first frame-level scaling factor corresponding to each colour component, corresponding scaling and superposition on the first reconstructed pixel of each second reconstructed pixel in the second reconstructed picture on each colour component and the second filtered pixel of each second filtered pixel in the second filtered picture on each colour component, to obtain, for each pixel position, a first corrected pixel containing each colour component;
    traversing the first corrected pixel corresponding to each pixel position to obtain the first corrected picture.
  21. The method according to any one of claims 10 to 14, 16, 18 or 20, wherein performing the picture reconstruction and filtering based on the initial residual of each block in the current frame to determine the second reconstructed picture block and the second filtered picture block corresponding to each block comprises:
    performing picture reconstruction and filtering on the initial residual of a current block in the current frame to obtain a second reconstructed picture block and a second filtered picture block corresponding to the current block, and continuing picture-block reconstruction and filtering on a next block in the current frame until the current frame has been processed, to obtain the second reconstructed picture block and the second filtered picture block corresponding to each block.
  22. The method according to claim 21, wherein performing the picture reconstruction and filtering on the initial residual of the current block in the current frame to obtain the second reconstructed picture block and the second filtered picture block corresponding to the current block comprises:
    performing picture reconstruction on the initial residual of the current block to obtain the second reconstructed picture block;
    filtering the second reconstructed picture block to obtain the second filtered picture block.
  23. The method according to any one of claims 10 to 14, 16, 18 or 20 to 22, wherein after performing the picture reconstruction and filtering based on the initial residual of each block in the current frame to determine the second reconstructed picture block and the second filtered picture block corresponding to each block, the method further comprises:
    obtaining a second filtered picture corresponding to the current frame based on the second filtered picture block corresponding to each block.
  24. The method according to any one of claims 10 to 14, 16, 18 or 20 to 22, wherein after performing the picture reconstruction and filtering based on the initial residual of each block in the current frame to determine the second reconstructed picture block and the second filtered picture block corresponding to each block, the method further comprises:
    performing adaptive filtering on the second filtered picture block corresponding to each block to obtain an adaptively filtered picture block corresponding to each block, and traversing the adaptively filtered picture blocks corresponding to the blocks to obtain a second filtered picture corresponding to the current frame.
  25. A decoder, comprising:
    a parsing part, configured to parse a bitstream to determine a frame-level scaling factor and an initial residual of a current block, wherein the frame-level scaling factor is determined from a first frame-level scaling factor and a second frame-level scaling factor corresponding to a current frame, the first frame-level scaling factor is obtained by screening block-level scaling factors in the current frame, and the first frame-level scaling factor does not include a block-level scaling factor corresponding to an outlier block, the outlier block being a block in the current frame that differs from other blocks;
    a first reconstruction and filtering part, configured to perform picture-block reconstruction and filtering based on the initial residual to determine a first reconstructed picture block and a first filtered picture block;
    a first determining part, configured to perform scaling processing using the frame-level scaling factor, the first reconstructed picture block and the first filtered picture block to obtain a corrected picture block corresponding to the current block.
  26. An encoder, comprising:
    a second reconstruction and filtering part, configured to perform picture reconstruction and filtering based on an initial residual of each block in a current frame to determine a second reconstructed picture block and a second filtered picture block corresponding to each block;
    a second determining part, configured to: perform scaling-factor computation based on the second reconstructed picture block and the second filtered picture block to determine a block-level scaling factor corresponding to each block; screen the block-level scaling factors corresponding to the blocks of the current frame to determine a first frame-level scaling factor, wherein the first frame-level scaling factor does not include a block-level scaling factor corresponding to an outlier block, the outlier block being a block in the current frame that differs from other blocks; perform scaling-factor computation based on a second reconstructed picture and a second filtered picture corresponding to the current frame to obtain a second frame-level scaling factor corresponding to the current frame, wherein the second reconstructed picture is determined from the second reconstructed picture block corresponding to each block and the second filtered picture is determined from the second filtered picture block corresponding to each block; and determine the frame-level scaling factor based on the first frame-level scaling factor and the second frame-level scaling factor.
  27. A decoder, comprising:
    a first memory and a first processor;
    wherein the first memory stores a computer program executable on the first processor, and the first processor, when executing the program, implements the method according to any one of claims 1 to 9.
  28. An encoder, comprising:
    a second memory and a second processor;
    wherein the second memory stores a computer program executable on the second processor, and the second processor, when executing the program, implements the method according to any one of claims 10 to 24.
  29. A computer-readable storage medium on which a computer program is stored, wherein when the computer program is executed by a first processor, the method according to any one of claims 1 to 9 is implemented; or, when the computer program is executed by a second processor, the method according to any one of claims 10 to 24 is implemented.
PCT/CN2022/099527 2022-06-17 2022-06-17 一种滤波方法、解码器、编码器及计算机可读存储介质 WO2023240618A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2022/099527 WO2023240618A1 (zh) 2022-06-17 2022-06-17 一种滤波方法、解码器、编码器及计算机可读存储介质
TW112122079A TW202404349A (zh) 2022-06-17 2023-06-13 一種濾波方法、解碼器、編碼器及電腦可讀儲存媒介

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/099527 WO2023240618A1 (zh) 2022-06-17 2022-06-17 一种滤波方法、解码器、编码器及计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2023240618A1 true WO2023240618A1 (zh) 2023-12-21

Family

ID=89192983

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/099527 WO2023240618A1 (zh) 2022-06-17 2022-06-17 一种滤波方法、解码器、编码器及计算机可读存储介质

Country Status (2)

Country Link
TW (1) TW202404349A (zh)
WO (1) WO2023240618A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170006283A1 (en) * 2015-06-30 2017-01-05 Microsoft Technology Licensing, Llc Computationally efficient sample adaptive offset filtering during video encoding
CN112655212A (zh) * 2018-11-28 2021-04-13 Oppo广东移动通信有限公司 视频编码优化方法、装置及计算机存储介质
CN114025164A (zh) * 2021-09-30 2022-02-08 浙江大华技术股份有限公司 图像编码方法、图像解码方法、编码器以及解码器
US20220103816A1 (en) * 2020-09-29 2022-03-31 Qualcomm Incorporated Filtering process for video coding
WO2022071847A1 (en) * 2020-09-29 2022-04-07 Telefonaktiebolaget Lm Ericsson (Publ) Filter strength control for adaptive loop filtering
CN114365490A (zh) * 2019-09-09 2022-04-15 北京字节跳动网络技术有限公司 高精度图像和视频编解码的系数缩放


Also Published As

Publication number Publication date
TW202404349A (zh) 2024-01-16


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22946301

Country of ref document: EP

Kind code of ref document: A1