WO2024077576A1 - Neural-network-based loop filtering, video encoding and decoding method, apparatus and system - Google Patents

Neural-network-based loop filtering, video encoding and decoding method, apparatus and system

Info

Publication number
WO2024077576A1
Authority
WO
WIPO (PCT)
Prior art keywords
residual
nnlf
image
adjustment
reconstructed image
Prior art date
Application number
PCT/CN2022/125231
Other languages
English (en)
French (fr)
Inventor
戴震宇
Original Assignee
Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Priority to PCT/CN2022/125231
Publication of WO2024077576A1

Links

Images

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • H04N19/42: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/70: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • the embodiments of the present disclosure relate to but are not limited to video technology, and more specifically, to a loop filtering method based on a neural network, a video encoding and decoding method, device and system.
  • Digital video compression technology compresses massive digital video data for transmission and storage.
  • the original video sequence contains luminance and chrominance components.
  • the encoder reads black-and-white or color images and divides each frame into largest coding units (LCU: largest coding unit) of the same size (such as 128×128 or 64×64).
  • Each largest coding unit can be divided into rectangular coding units (CU: coding unit) according to the rules, and can be further divided into prediction units (PU: prediction unit), transform units (TU: transform unit), etc.
  • the hybrid coding framework includes prediction, transform, quantization, entropy coding, in-loop filter and other modules.
  • the prediction module can use intra prediction and inter prediction.
  • Intra prediction predicts the pixels of the current block from information in the same image, eliminating spatial redundancy. Inter prediction can reference the information of different images and uses motion estimation to search for the motion vector information that best matches the current block, eliminating temporal redundancy. The transform converts the prediction residual into the frequency domain to redistribute its energy and, combined with quantization, removes information to which the human eye is not sensitive, eliminating visual redundancy. Entropy coding eliminates statistical redundancy based on the current context model and the probability information of the binary bitstream, generating the code stream.
  • An embodiment of the present disclosure provides a neural-network-based loop filtering (NNLF) method, which is applied to an NNLF filter at a decoding end.
  • the NNLF filter includes a neural network and a skip connection branch from an input to an output of the NNLF filter.
  • the method includes:
  • a residual adjustment usage flag roflag of the reconstructed image is decoded, wherein roflag is used to indicate whether residual adjustment is required when performing NNLF on the reconstructed image;
  • when roflag indicates that residual adjustment is not required, the first mode is used to perform NNLF on the reconstructed image;
  • when roflag indicates that residual adjustment is required, the second mode is used to perform NNLF on the reconstructed image;
  • wherein the first mode is an NNLF mode that does not perform residual adjustment on the residual image output by the neural network, and the second mode is an NNLF mode that performs residual adjustment on the residual image.
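The decoder-side logic above can be sketched as follows. This is an illustrative sketch only: the function names and the concrete form of the residual adjustment (a simple scaling of the residual) are assumptions for demonstration, not details taken from the disclosure.

```python
# Decoder-side NNLF mode dispatch based on a decoded roflag.
# The scaling used as the "residual adjustment" is hypothetical.

def nnlf_first_mode(rec, res):
    """First mode: add the network's residual image to the reconstruction as-is."""
    return [r + d for r, d in zip(rec, res)]

def nnlf_second_mode(rec, res, scale=0.5):
    """Second mode: apply a (hypothetical) residual adjustment before the
    skip-connection addition."""
    return [r + scale * d for r, d in zip(rec, res)]

def decode_side_nnlf(rec, res, roflag):
    # roflag is decoded from the bitstream: 0 -> first mode, 1 -> second mode.
    if roflag == 0:
        return nnlf_first_mode(rec, res)
    return nnlf_second_mode(rec, res)
```

Either way the skip connection branch (rec plus a residual) is preserved; the flag only selects whether the residual is adjusted first.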
  • An embodiment of the present disclosure further provides a loop filtering method based on a neural network, which is applied to an NNLF filter at a decoding end, wherein the NNLF filter includes a neural network and a skip connection branch from the input to the output of the NNLF filter.
  • the method includes performing the following processing on each component when performing NNLF on a reconstructed image including three components input to the neural network:
  • a residual adjustment usage flag roflag for the component of the reconstructed image is decoded, wherein roflag is used to indicate whether residual adjustment is required when performing NNLF on the component of the reconstructed image;
  • when roflag indicates that residual adjustment is not required, the first mode is used to perform NNLF on the component of the reconstructed image;
  • when roflag indicates that residual adjustment is required, the second mode is used to perform NNLF on the component of the reconstructed image;
  • wherein the first mode is an NNLF mode that does not perform residual adjustment on the component of the residual image output by the neural network, and the second mode is an NNLF mode that performs residual adjustment on the component of the residual image.
  • An embodiment of the present disclosure also provides a video decoding method, which is applied to a video decoding device, including: when performing neural network-based loop filtering NNLF on a reconstructed image, performing the following processing: when NNLF allows residual adjustment, performing NNLF on the reconstructed image according to the NNLF method described in any embodiment of the present disclosure applied to the NNLF filter at the decoding end.
  • An embodiment of the present disclosure further provides a loop filtering method based on a neural network, which is applied to an NNLF filter at an encoding end, wherein the NNLF filter includes a neural network and a skip connection branch from an input to an output of the NNLF filter, and the method includes:
  • when the rate-distortion cost cost1 of performing NNLF in the first mode is less than or equal to the cost cost2 of the second mode, the first mode is selected to perform NNLF on the reconstructed image;
  • when cost2 is smaller than cost1, the second mode is selected to perform NNLF on the reconstructed image.
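The encoder-side decision can be sketched as a standard rate-distortion comparison: filter with both modes, cost each result, and keep the cheaper one. The distortion measure (sum of squared errors), the lambda value, and the assumption that each mode costs one flag bit are illustrative choices, not specifics from the disclosure.

```python
# Minimal sketch of encoder-side NNLF mode selection via J = D + lambda * R.

def sse(a, b):
    # Sum of squared errors against the original, used as the distortion D.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def rd_cost(distortion, bits, lam):
    return distortion + lam * bits

def select_nnlf_mode(orig, filtered1, filtered2, lam=10.0, flag_bits=1):
    """Return 1 (first mode) or 2 (second mode), preferring mode 1 on ties."""
    cost1 = rd_cost(sse(orig, filtered1), flag_bits, lam)
    cost2 = rd_cost(sse(orig, filtered2), flag_bits, lam)
    return 1 if cost1 <= cost2 else 2
```

The selected mode then determines the value of the usage flag written to the bitstream.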
  • An embodiment of the present disclosure further provides a loop filtering method based on a neural network, which is applied to an NNLF filter at an encoding end, wherein the NNLF filter includes a neural network and a skip connection branch from an input to an output of the NNLF filter, and the method includes:
  • the reconstructed image and the residual image both include three components;
  • the first mode is an NNLF mode in which residual adjustment is not performed on the component of the residual image
  • the second mode is an NNLF mode in which residual adjustment is performed on the component of the residual image
  • when the rate-distortion cost cost2 of performing NNLF in the second mode is smaller than the cost cost1 of the first mode, the second mode is selected to perform NNLF on the component of the reconstructed image; otherwise, the first mode is selected.
  • An embodiment of the present disclosure further provides a video encoding method, which is applied to a video encoding device, comprising: when performing a neural network-based loop filter NNLF on a reconstructed image, performing the following processing:
  • when NNLF allows residual adjustment, a residual adjustment usage flag of the reconstructed image is encoded to indicate whether residual adjustment is required when performing NNLF on the reconstructed image.
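The conditional signalling described here can be sketched as a matched encoder/decoder pair: the usage flag is written only when residual adjustment is allowed at a higher level, and the decoder mirrors the same condition, inferring "no adjustment" when the flag is absent. The bit-list API is purely hypothetical.

```python
# Hypothetical sketch of conditional flag signalling for the roflag.

def encode_roflag(bits, allow_adjustment, use_adjustment):
    # The flag is present in the bitstream only when the tool is enabled.
    if allow_adjustment:
        bits.append(1 if use_adjustment else 0)
    return bits

def decode_roflag(bits, pos, allow_adjustment):
    if not allow_adjustment:
        return 0, pos          # flag absent: inferred as "no adjustment"
    return bits[pos], pos + 1  # consume one bit
```

Keeping the presence condition identical on both sides is what keeps the encoder and decoder bitstream positions in sync.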
  • An embodiment of the present disclosure further provides a code stream, wherein the code stream is generated by the video encoding method described in any embodiment of the present disclosure.
  • An embodiment of the present disclosure further provides a neural network-based loop filter, comprising a processor and a memory storing a computer program, wherein the processor, when executing the computer program, can implement the neural network-based loop filtering method described in any embodiment of the present disclosure.
  • An embodiment of the present disclosure further provides a video decoding device, including a processor and a memory storing a computer program, wherein the processor can implement the video decoding method as described in any embodiment of the present disclosure when executing the computer program.
  • An embodiment of the present disclosure further provides a video encoding device, including a processor and a memory storing a computer program, wherein the processor can implement the video encoding method as described in any embodiment of the present disclosure when executing the computer program.
  • An embodiment of the present disclosure further provides a video encoding and decoding system, comprising the video encoding device described in any embodiment of the present disclosure and the video decoding device described in any embodiment of the present disclosure.
  • An embodiment of the present disclosure further provides a non-volatile computer-readable storage medium, wherein the computer-readable storage medium stores a computer program, wherein the computer program, when executed by a processor, can implement the neural network-based loop filtering method as described in any embodiment of the present disclosure, or implement the video decoding method as described in any embodiment of the present disclosure, or implement the video encoding method as described in any embodiment of the present disclosure.
  • FIG. 1A is a schematic diagram of a coding and decoding system according to an embodiment;
  • FIG. 1B is a framework diagram of the encoding end in FIG. 1A;
  • FIG. 1C is a framework diagram of the decoding end in FIG. 1A;
  • FIG. 2 is a block diagram of a filter unit according to an embodiment;
  • FIG. 3A is a network structure diagram of an NNLF filter according to an embodiment;
  • FIG. 3B is a structure diagram of a residual block in FIG. 3A;
  • FIG. 3C is a schematic diagram of the input and output of the NNLF filter in FIG. 3A;
  • FIG. 4A is a structural diagram of the backbone network in an NNLF filter of another embodiment;
  • FIG. 4B is a structural diagram of a residual block in FIG. 4A;
  • FIG. 4C is a schematic diagram of the input and output of the NNLF filter of FIG. 4A;
  • FIG. 5 is a schematic diagram of the basic network structure of a residual network;
  • FIG. 6A is a schematic diagram of iterative training of an NNLF model according to an embodiment;
  • FIG. 6B is a schematic diagram of encoding testing of the NNLF model in FIG. 6A;
  • FIG. 7 is a flowchart of an NNLF method applied to an encoding end according to an embodiment of the present disclosure;
  • FIG. 8 is a flowchart of an NNLF method applied to an encoding end according to another embodiment of the present disclosure;
  • FIG. 9 is a flowchart of a video encoding method according to an embodiment of the present disclosure;
  • FIG. 10 is a flowchart of an NNLF method applied to a decoding end according to an embodiment of the present disclosure;
  • FIG. 11 is a schematic diagram of an NNLF capable of mode selection according to an embodiment of the present disclosure;
  • FIG. 12 is a flowchart of an NNLF method applied to a decoding end according to another embodiment of the present disclosure;
  • FIG. 13 is a flowchart of a video decoding method according to an embodiment of the present disclosure;
  • FIG. 14 is a schematic diagram of the structure of a filter unit according to an embodiment of the present disclosure;
  • FIG. 15 is a schematic diagram of an NNLF filter according to an embodiment of the present disclosure;
  • FIG. 16 is a schematic diagram of residual value adjustment according to an embodiment of the present disclosure.
  • words such as “exemplary” or “for example” are used to indicate examples, illustrations or explanations. Any embodiment described as “exemplary” or “for example” in the present disclosure should not be interpreted as being more preferred or advantageous than other embodiments.
  • "And/or" in this document describes the association relationship between associated objects, indicating that three relationships may exist. For example, "A and/or B" can represent: A exists alone, both A and B exist, or B exists alone. "Multiple" means two or more.
  • words such as “first” and “second” are used to distinguish between identical or similar items with basically the same functions and effects. Those skilled in the art can understand that words such as “first” and “second” do not limit the quantity and execution order, and words such as “first” and “second” do not limit them to be necessarily different.
  • the specification may have presented the method and/or process as a specific sequence of steps. However, to the extent that the method or process does not rely on the specific order of steps described herein, the method or process should not be limited to the steps of the specific order. As will be understood by those of ordinary skill in the art, other sequences of steps are also possible. Therefore, the specific sequence of steps set forth in the specification should not be interpreted as a limitation to the claims. In addition, the claims for the method and/or process should not be limited to the steps of performing them in the order written, and those of ordinary skill in the art can easily understand that these sequences can change and still remain within the spirit and scope of the disclosed embodiments.
  • the loop filtering method and video coding method based on neural network in the disclosed embodiments can be applied to various video coding standards, such as: H.264/Advanced Video Coding (Advanced Video Coding, AVC), H.265/High Efficiency Video Coding (High Efficiency Video Coding, HEVC), H.266/Versatile Video Coding (Versatile Video Coding, VVC), AVS (Audio Video Coding Standard), and other standards formulated by MPEG (Moving Picture Experts Group), AOM (Alliance for Open Media), JVET (Joint Video Experts Team) and extensions of these standards, or any other customized standards.
  • FIG. 1A is a block diagram of a video encoding and decoding system that can be used in an embodiment of the present disclosure. As shown in the figure, the system is divided into an encoding end 1 and a decoding end 2, and the encoding end 1 generates a code stream.
  • the decoding end 2 can decode the code stream.
  • the decoding end 2 can receive the code stream from the encoding end 1 via a link 3.
  • the link 3 includes one or more media or devices that can move the code stream from the encoding end 1 to the decoding end 2.
  • the link 3 includes one or more communication media that enable the encoding end 1 to send the code stream directly to the decoding end 2.
  • the encoding end 1 modulates the code stream according to the communication standard and sends the modulated code stream to the decoding end 2.
  • the one or more communication media may include wireless and/or wired communication media, and may form part of a packet network.
  • the code stream can also be output from the output interface 15 to a storage device, and the decoding end 2 can read the stored data from the storage device via streaming or downloading.
  • the encoding end 1 includes a data source 11, a video encoding device 13 and an output interface 15.
  • the data source 11 includes a video capture device (such as a camera), an archive containing previously captured data, a feed interface for receiving data from a content provider, a computer graphics system for generating data, or a combination of these sources.
  • the video encoding device 13 may also be called a video encoder, which is used to encode the data from the data source 11 and output it to the output interface 15.
  • the output interface 15 may include at least one of a modulator, a modem and a transmitter.
  • the decoding end 2 includes an input interface 21, a video decoding device 23 and a display device 25.
  • the input interface 21 includes at least one of a receiver and a modem.
  • the input interface 21 can receive a code stream via a link 3 or from a storage device.
  • the video decoding device 23 is also called a video decoder, which is used to decode the received code stream.
  • the display device 25 is used to display the decoded data.
  • the display device 25 can be integrated with other devices of the decoding end 2 or set separately.
  • the display device 25 is optional for the decoding end. In other examples, the decoding end may include other devices or equipment that apply the decoded data.
  • FIG1B is a block diagram of an exemplary video encoding device that can be used in an embodiment of the present disclosure.
  • the video encoding device 10 includes:
  • the division unit 101 is configured to cooperate with the prediction unit 100 to divide the received video data into slices, coding tree units (CTUs) or other larger units.
  • the received video data may be a video sequence including video frames such as I frames, P frames or B frames.
  • the prediction unit 100 is configured to divide the CTU into coding units (CUs) and perform intra-frame prediction coding or inter-frame prediction coding on the CU.
  • the CU may be divided into one or more prediction units (PUs).
  • the prediction unit 100 includes an inter prediction unit 121 and an intra prediction unit 126 .
  • the inter-frame prediction unit 121 is configured to perform inter-frame prediction on the PU and generate prediction data of the PU, wherein the prediction data includes a prediction block of the PU, motion information of the PU, and various syntax elements.
  • the inter-frame prediction unit 121 may include a motion estimation (ME: motion estimation) unit and a motion compensation (MC: motion compensation) unit.
  • the motion estimation unit may be used for motion estimation to generate a motion vector
  • the motion compensation unit may be used to obtain or generate a prediction block according to the motion vector.
  • the intra prediction unit 126 is configured to perform intra prediction on the PU and generate prediction data of the PU.
  • the prediction data of the PU may include a prediction block of the PU and various syntax elements.
  • the residual generating unit 102 (indicated by the circle with a plus sign after the dividing unit 101 in the figure) is configured to generate a residual block of the CU based on the original block of the CU minus the prediction block of the PU into which the CU is divided.
  • the transform processing unit 104 is configured to divide the CU into one or more transform units (TUs), and the division of the prediction unit and the transform unit may be different.
  • the residual block associated with the TU is a sub-block obtained by dividing the residual block of the CU.
  • the coefficient block associated with the TU is generated by applying one or more transforms to the residual block associated with the TU.
  • the quantization unit 106 is configured to quantize the coefficients in the coefficient block based on the quantization parameter.
  • the quantization degree of the coefficient block can be changed by adjusting the quantization parameter (QP: Quantizer Parameter).
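The QP-to-step-size relationship can be made concrete with a small sketch. In HEVC/VVC-style codecs the quantization step size roughly doubles for every increase of 6 in QP, i.e. Qstep ≈ 2^((QP − 4) / 6); the rounding-based quantizer below is a simplified illustration, not the standard's exact integer arithmetic.

```python
# Simplified QP-driven quantization, following the HEVC/VVC convention that
# the step size doubles every 6 QP: Qstep ~ 2^((QP - 4) / 6).

def quant_step(qp):
    return 2.0 ** ((qp - 4) / 6.0)

def quantize(coeff, qp):
    # Larger QP -> larger step -> coarser coefficients -> fewer bits.
    return round(coeff / quant_step(qp))

def dequantize(level, qp):
    return level * quant_step(qp)
```

This is why adjusting QP directly controls the quantization degree of the coefficient block, trading reconstruction quality against bitrate.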
  • the inverse quantization unit 108 and the inverse transformation unit 110 are respectively configured to apply inverse quantization and inverse transformation to the coefficient block to obtain a reconstructed residual block associated with the TU.
  • the reconstruction unit 112 (indicated by the circle with a plus sign after the inverse transform processing unit 110 in the figure) is configured to add the reconstructed residual block and the prediction block generated by the prediction unit 100 to generate a reconstructed image.
  • the filter unit 113 is configured to perform loop filtering on the reconstructed image.
  • the decoded image buffer 114 is configured to store the reconstructed image after loop filtering.
  • the intra prediction unit 126 may extract a reference image of a block adjacent to the current block from the decoded image buffer 114 to perform intra prediction.
  • the inter prediction unit 121 may use the reference image of the previous frame cached in the decoded image buffer 114 to perform inter prediction on the PU of the current frame image.
  • the entropy coding unit 115 is configured to perform entropy coding operations on received data (such as syntax elements, quantized coefficient blocks, motion information, etc.) to generate a video bit stream.
  • the video encoding device 10 may include more, fewer, or different functional components than in this example; for example, the transform processing unit 104 and the inverse transform processing unit 110 may be eliminated in some cases.
  • FIG1C is a block diagram of an exemplary video decoding device that can be used in an embodiment of the present disclosure. As shown in the figure, the video decoding device 15 includes:
  • the entropy decoding unit 150 is configured to perform entropy decoding on the received encoded video bitstream, extract syntax elements, quantized coefficient blocks, motion information of PU, etc.
  • the prediction unit 152, the inverse quantization unit 154, the inverse transform processing unit 156, the reconstruction unit 158 and the filter unit 159 can all perform corresponding operations based on the syntax elements extracted from the bitstream.
  • the inverse quantization unit 154 is configured to perform inverse quantization on the coefficient block associated with the quantized TU.
  • the inverse transform processing unit 156 is configured to apply one or more inverse transforms to the inverse quantized coefficient block to generate a reconstructed residual block of the TU.
  • the prediction unit 152 includes an inter-prediction unit 162 and an intra-prediction unit 164. If the current block is encoded using intra prediction, the intra-prediction unit 164 determines the intra-prediction mode of the PU based on the syntax elements decoded from the code stream, and performs intra prediction in combination with the reconstructed reference information of blocks neighboring the current block obtained from the decoded image buffer 160. If the current block is encoded using inter prediction, the inter-prediction unit 162 determines the reference block of the current block based on the motion information of the current block and the corresponding syntax elements, and performs inter prediction using the reference block obtained from the decoded image buffer 160.
  • the reconstruction unit 158 (represented by a circle with a plus sign after the inverse transform processing unit 156 in the figure) is configured to add the reconstructed residual block associated with the TU to the prediction block of the current block generated by the prediction unit 152 through intra prediction or inter prediction, so as to obtain a reconstructed image.
  • the filter unit 159 is configured to perform loop filtering on the reconstructed image.
  • the decoded image buffer 160 is configured to store the reconstructed image after loop filtering as a reference image for subsequent motion compensation, intra-frame prediction, inter-frame prediction, etc.
  • the filtered reconstructed image can also be output as decoded video data for presentation on a display device.
  • the video decoding device 15 may include more, fewer or different functional components.
  • the inverse transform processing unit 156 may be eliminated in some cases.
  • the current block can be a block-level coding unit such as the current coding tree unit (CTU), the current coding unit (CU), the current prediction unit (PU), etc. in the current image.
  • At the encoding end, a frame of image is divided into blocks; intra prediction, inter prediction or another algorithm is performed on the current block to generate a prediction block of the current block; the prediction block is subtracted from the original block of the current block to obtain a residual block; the residual block is transformed and quantized to obtain quantized coefficients; and the quantized coefficients are entropy encoded to generate a bit stream.
  • At the decoding end, intra prediction or inter prediction is performed on the current block to generate a prediction block of the current block; meanwhile, the quantized coefficients obtained by decoding the bit stream are inverse quantized and inverse transformed to obtain a residual block; the prediction block and the residual block are added to obtain a reconstructed block; the reconstructed blocks constitute a reconstructed image; and the reconstructed image is loop filtered, on an image basis or a block basis, to obtain a decoded image.
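The two pipelines above form a closed loop: the decoder's reconstruction is the prediction plus the dequantized residual. A toy sketch (transform and entropy coding omitted, a uniform step-size quantizer assumed) makes the round trip explicit; with a step size of 1 the loop is lossless.

```python
# Toy residual-coding round trip: encode = quantize(original - prediction),
# decode = prediction + dequantized residual. Transform/entropy stages omitted.

def encode_block(orig, pred, qstep):
    return [round((o - p) / qstep) for o, p in zip(orig, pred)]

def decode_block(levels, pred, qstep):
    return [p + l * qstep for p, l in zip(levels, pred)]
```

With qstep greater than 1 the reconstruction differs from the original, which is exactly the distortion the loop filters are later asked to reduce.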
  • the encoding end also obtains a decoded image through operations similar to those of the decoding end, which can also be called a reconstructed image after loop filtering.
  • the reconstructed image after loop filtering can be used as a reference frame for inter-frame prediction of subsequent frames.
  • the block division information determined by the encoding end, the mode information and parameter information such as prediction, transformation, quantization, entropy coding, and loop filtering can be written into the bit stream.
  • the decoding end determines, by decoding the bit stream or by analysis according to preset information, the block division information as well as the mode information and parameter information of prediction, transform, quantization, entropy coding, loop filtering, etc. used by the encoding end, thereby ensuring that the decoded image obtained at the encoding end is identical to the decoded image obtained at the decoding end.
  • the embodiments of the present disclosure relate to, but are not limited to, the filter unit in the above-mentioned encoding end and decoding end (the filter unit may also be referred to as a loop filtering module) and a corresponding loop filtering method.
  • the filter units at the encoding end and the decoding end include tools such as a deblocking filter (DBF: DeBlocking Filter) 20, a sample adaptive offset filter (SAO: Sample Adaptive Offset) 22, and an adaptive loop filter (ALF: Adaptive Loop Filter) 26.
  • the filter unit performs loop filtering on the reconstructed image, which can compensate for the distortion information and provide a better reference for the subsequent encoding pixels.
  • a neural-network-based loop filtering (NNLF) solution is provided, and the model used adopts the filtering network shown in FIG. 3A.
  • this NNLF is denoted as NNLF1 herein, and the filter that executes NNLF1 is called the NNLF1 filter.
  • the backbone network (backbone) of the filtering network includes a plurality of residual blocks (ResBlock) connected in sequence, and also includes a convolution layer (represented by Conv in the figure), an activation function layer (ReLU in the figure), a merging (concat) layer (represented by Cat in the figure), and a pixel reorganization layer (represented by PixelShuffle in the figure).
  • each residual block is shown in Figure 3B, including a convolution layer with a convolution kernel size of 1 ⁇ 1, a ReLU layer, a convolution layer with a convolution kernel size of 1 ⁇ 1, and a convolution layer with a convolution kernel size of 3 ⁇ 3 connected in sequence.
  • the input of the NNLF1 filter includes the luminance information (i.e., the Y component) and chrominance information (i.e., the U and V components) of the reconstructed image (rec_YUV), as well as various auxiliary information, such as the luminance and chrominance information of the predicted image (pred_YUV), QP information, and frame type information.
  • the QP information includes the default baseline quantization parameter (BaseQP: Base Quantization Parameter) in the encoding profile and the slice quantization parameter (SliceQP: Slice Quantization Parameter) of the current slice
  • the frame type information includes the slice type (SliceType), i.e., the type of frame to which the current slice belongs.
  • the output of the model is the filtered image (output_YUV) after NNLF1 filtering.
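The inputs listed above could be assembled into network input channels as sketched below. Representing the scalar side information (BaseQP, SliceQP, SliceType) as constant planes matching the spatial size of the image planes is a common convention and is assumed here, rather than quoted from the NNLF1 software.

```python
# Hypothetical assembly of NNLF1-style network inputs as a list of planes.

def make_plane(value, h, w):
    # Broadcast a scalar (QP, slice type, ...) to a constant h x w plane.
    return [[value] * w for _ in range(h)]

def build_nnlf1_input(rec_yuv, pred_yuv, base_qp, slice_qp, slice_type):
    h, w = len(rec_yuv[0]), len(rec_yuv[0][0])
    channels = list(rec_yuv) + list(pred_yuv)       # 3 + 3 image planes
    channels += [make_plane(base_qp, h, w),
                 make_plane(slice_qp, h, w),
                 make_plane(slice_type, h, w)]      # 3 side-information planes
    return channels
```

Feeding QP as an input plane lets a single model adapt its filtering strength across quantization levels instead of training one model per QP.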
  • the filtered image output by the NNLF1 filter can also be used as the reconstructed image input to the subsequent filter.
  • NNLF1 uses a single model to filter the YUV components of the reconstructed image (rec_YUV) and outputs the YUV components of the filtered image (out_YUV), as shown in FIG. 3C, in which auxiliary input information such as the YUV components of the predicted image is omitted.
  • the filtering network of this model has a skip connection branch between the input reconstructed image and the output filtered image, as shown in FIG. 3A.
  • NNLF2 uses two models, one model is used to filter the luminance component of the reconstructed image, and the other model is used to filter the two chrominance components of the reconstructed image.
  • the two models can use the same filtering network, and there is also a skip connection branch between the reconstructed image input to the NNLF2 filter and the filtered image output by the NNLF2 filter.
  • the backbone network of the filtering network includes a plurality of residual blocks (AttRes Block) with attention mechanisms connected in sequence, a convolutional layer (Conv 3 ⁇ 3) for realizing feature mapping, and a reorganization layer (Shuffle).
  • each residual block with attention mechanism is shown in FIG. 4B, including a convolutional layer (Conv 3×3), an activation layer (PReLU), a convolutional layer (Conv 3×3) and an attention layer (Attention) connected in sequence, where M represents the number of feature maps and N represents the number of samples in one dimension.
  • Model 1 of NNLF2 for filtering the luminance component of the reconstructed image is shown in FIG4C , and its input information includes the luminance component of the reconstructed image (rec_Y), and the output is the luminance component of the filtered image (out_Y). Auxiliary input information such as the luminance component of the predicted image is omitted in the figure.
  • Model 2 of NNLF2 for filtering the two chrominance components of the reconstructed image is shown in FIG4C , and its input information includes the two chrominance components of the reconstructed image (rec_UV), and the luminance component of the reconstructed image as auxiliary input information (rec_Y), and the output of model 2 is the two chrominance components of the filtered image (out_UV).
  • Model 1 and model 2 may also include other auxiliary input information, such as QP information, block partition image, deblocking filter boundary strength information, etc.
  • the NNLF1 and NNLF2 schemes can be implemented using the neural network based common software (NCS: Neural Network based Common Software) in neural network based video coding (NNVC: Neural Network based Video Coding), and serve as the baseline tool, namely the baseline NNLF, in the NNVC reference software testing platform.
  • NNLF1 and NNLF2 draw on the concept of residual learning. See Figure 5.
  • Their filter networks include a neural network (NN) and a skip connection branch from the reconstructed image input to the filter to the filtered image output by the filter.
  • The filtered image output by NNLF1 and NNLF2 can be expressed as rec + res, where rec represents the input reconstructed image and res represents the residual image output by the neural network.
  • the neural network includes other parts of the filter network except the skip connection branch.
  • the neural network has the function of predicting residual information.
  • NNLF1 and NNLF2 predict the residual information of the input reconstructed image relative to the original image through the neural network, that is, the residual image, and then superimpose the residual image on the input reconstructed image (that is, add it to the reconstructed image) to obtain the filtered image output by the filter, which can make the quality of the filtered image closer to the original image.
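  • The residual-learning structure described above can be sketched as follows; this is a minimal Python illustration, not the actual NNVC implementation, and the residual predictor stands in for the neural network:

```python
def nnlf_filter(rec, predict_residual):
    """Residual-learning NNLF: the neural network predicts a residual image and
    the skip connection adds it back onto the input reconstructed image."""
    res = predict_residual(rec)                  # residual image output by the NN
    return [r + d for r, d in zip(rec, res)]     # filtered image = rec + res
```

  • For example, with a predictor that outputs [1, -2, 0] for a three-sample reconstruction [10, 20, 30], the filtered output is [11, 18, 30].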
  • inter-frame prediction technology enables the current frame to refer to the image information of the previous frame, which improves the coding performance.
  • the coding effect of the previous frame will also affect the coding effect of the subsequent frame.
  • the training process of the model includes an initial training stage and an iterative training stage, with multiple rounds of training.
  • In the initial training stage, the model to be trained has not yet been deployed in the encoder; the first round of training is performed on the collected sample data of reconstructed images to obtain the model after the first round of training.
  • In the iterative training stage, the models are deployed in the encoder.
  • the model after the first round of training is first deployed in the encoder, the sample data of the reconstructed image is recollected, and the model after the first round of training is trained in the second round to obtain the model after the second round of training; then, the model after the second round of training is deployed in the encoder, the sample data of the reconstructed image is recollected, and the model after the second round of training is trained in the third round to obtain the model after the third round of training, and the training is iteratively carried out; finally, the model after each round of training is encoded on the validation set to find the model with the best encoding performance for actual deployment.
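  • The round-based procedure above can be sketched as follows; the function parameters (train, deploy, collect, evaluate) are hypothetical stand-ins for the training, deployment, sample-collection and validation-set coding steps described in the text:

```python
def multi_round_training(initial_model, train, deploy, collect, evaluate, rounds):
    """Initial stage: round 1 trains on samples collected without any deployed
    model. Iterative stage: each later round deploys the latest model in the
    encoder, recollects reconstructed-image samples, and retrains. Finally the
    model with the best coding performance on the validation set is returned."""
    samples = collect(deployed_model=None)        # initial training stage
    models = [train(initial_model, samples)]      # model after round 1
    for _ in range(1, rounds):                    # iterative training stage
        deploy(models[-1])
        samples = collect(deployed_model=models[-1])
        models.append(train(models[-1], samples))
    return max(models, key=evaluate)              # coding test on validation set
```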
  • FIG6A is a schematic diagram of the N+1th round of training.
  • the model model_N after the Nth training is deployed in the encoder, and the training data of multiple frames of reconstructed images are collected.
  • the boxes marked with 0, 1, 2, ... in the figure represent the reconstructed images of the 1st frame, the 2nd frame, the 3rd frame, ..., and the model model_N+1 after the N+1th round of training is obtained by training. Assuming that the performance of model_N+1 is the best after the coding test, the training is completed.
  • When model_N+1 is tested for coding, it is deployed in the encoder or decoder. As shown in FIG6B, the preceding frame referenced by the current frame using inter-frame prediction coding is generated based on loop filtering with model_N+1; that is, training lags behind testing. However, for model_N+1, its applicable environment is the environment of the (N+1)-th round of training, in which the preceding frame referenced by the current frame is loop filtered using model_N, which differs from the environment of the coding test of model_N+1.
  • Since the performance of model_N+1 is better than that of model_N, during the coding test the preceding frame referenced by the current frame is filtered using model_N+1, so the quality of the preceding frame is further improved. This improves the quality of the input reconstructed image (its residual with respect to the original image becomes smaller) when model_N+1 is tested, which differs from the quality expected in the training environment.
  • model_N+1 nevertheless predicts the residual according to its trained ability, so the residual output by the neural network in model_N+1 may be too large, and the related schemes do not attempt to adjust this residual.
  • Residual adjustment that makes the residual of the residual image smaller means that the residual values in the residual image move closer to 0, that is, their absolute values become smaller, for example 3 becomes 2 and -3 becomes -2; it does not mean a change from -3 to -4.
  • The residual reduction refers to the residual image as a whole: apart from zero residual values, the absolute values of the residual values of some pixels may become smaller while those of other pixels remain unchanged; the absolute values of all non-zero residual values may become smaller, or only those of some non-zero residual values. For example, the residual values in the intervals [1, 2] and [-2, -1] can remain unchanged, while the absolute values of the residual values greater than or equal to 3 or less than or equal to -3 become smaller.
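  • The "shrink toward zero" convention above can be illustrated directly; the interval choice here mirrors the example in the text but is otherwise illustrative:

```python
def shrink_residual(res):
    """Move residual values toward 0: values with |v| >= 3 shrink by 1 and
    values in [-2, 2] (including 0) are left unchanged."""
    out = []
    for v in res:
        if v >= 3:
            out.append(v - 1)
        elif v <= -3:
            out.append(v + 1)
        else:
            out.append(v)   # zero and small residuals unchanged
    return out
```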
  • An embodiment of the present disclosure provides a loop filtering method based on a neural network, which is applied to a NNLF filter at an encoding end.
  • the NNLF filter includes a neural network and a skip connection branch from an input to an output of the NNLF filter.
  • the method includes:
  • Step S110 inputting the reconstructed image into the neural network to obtain a residual image output by the neural network
  • Step S120 calculating a rate-distortion cost cost1 of performing NNLF on the reconstructed image using the first mode, and a rate-distortion cost cost2 of performing NNLF on the reconstructed image using the second mode;
  • the first mode is an NNLF mode in which residual adjustment is not performed on the residual image
  • the second mode is an NNLF mode in which residual adjustment is performed on the residual image
  • With this neural-network-based loop filtering method, the encoding end can select, between the mode with residual adjustment and the mode without residual adjustment, the mode with the lower rate-distortion cost to perform NNLF, which compensates to a certain extent for the performance loss caused by the lag of NNLF model training relative to the coding test, thereby improving the NNLF filtering effect and enhancing coding performance.
  • the residual adjustment in this article refers to the residual adjustment made to the residual image when the reconstructed image is subjected to loop filtering based on a neural network.
  • the reconstructed image is a reconstructed image of a current frame or a current slice or a current block, but may also be a reconstructed image of other coding units.
  • the reconstructed image for the NNLF filter in this article may be a coding unit at different levels, such as an image level (including a frame, a slice), a block level, etc.
  • the residual adjustment reduces the residual in the residual image.
  • Calculating the rate-distortion cost cost1 of performing NNLF on the reconstructed image using the first mode includes: adding the residual image to the reconstructed image to obtain a first filtered image; and calculating cost1 according to the difference between the first filtered image and the corresponding original image.
  • In the case where the reconstructed image and the residual image both include three components, such as the Y, U and V components, cost1 can be obtained by calculating the sum of squared errors (SSD) between the first filtered image and the original image on the three components and then weighting and adding the SSDs on the three components.
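  • The weighted-SSD computation of cost1 might be sketched as below; the component weights are assumptions for illustration, not values from the source:

```python
def ssd(a, b):
    """Sum of squared errors between two equal-length lists of samples."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def cost_mode1(rec, res, orig, weights=(0.75, 0.125, 0.125)):
    """cost1 of the first mode: per component, add the residual image to the
    reconstruction, then weight and add the per-component SSDs."""
    total = 0.0
    for comp, w in zip(("Y", "U", "V"), weights):
        filtered = [r + d for r, d in zip(rec[comp], res[comp])]
        total += w * ssd(filtered, orig[comp])
    return total
```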
  • the selection of the first mode to perform NNLF on the reconstructed image includes: using the first filtered image obtained by adding the residual image to the reconstructed image as the filtered image output after performing NNLF on the reconstructed image.
  • the filtering method of NNLF1 or NNLF2 mentioned above can be used, or other filtering methods that do not perform residual adjustment on the residual image can be used.
  • Calculating the rate-distortion cost cost2 of performing NNLF on the reconstructed image using the second mode includes:
  • for each set residual adjustment method, performing residual adjustment on the residual image and adding it to the reconstructed image to obtain a second filtered image, and calculating a rate-distortion cost according to the difference between the second filtered image and the original image;
  • taking the minimum rate-distortion cost among all the calculated rate-distortion costs as cost2.
  • The residual image being residually adjusted and added to the reconstructed image can mean that the residual image is first residually adjusted and the adjustment result is then added to the reconstructed image.
  • The residual adjustment method is, for example, to subtract 1 from the positive residual values in the residual image and add 1 to the negative residual values, that is, residual values of 0 are not adjusted, so that the residual in the residual image is reduced as a whole.
  • Alternatively, the residual image may first be added to the reconstructed image; since the summed image contains the residual image, the residual adjustment, for example subtracting 1 from the residual value of a pixel, is then performed on that residual.
  • the residual adjustment is performed on the residual image (or its components) and the residual image (or its components) is added to the reconstructed image (or its components).
  • the reconstructed image and the residual image both include three components
  • the rate-distortion cost is calculated according to the difference between the second filtered image and the original image, including: calculating the sum of squared errors (SSD) between the second filtered image and the original image on the three components, and then weighting and adding the SSDs on the three components to obtain the rate-distortion cost.
  • the set residual adjustment method includes one or more of the following types of residual adjustment methods:
  • a fixed value is added to or subtracted from the non-zero residual values in the residual image so that their absolute values become smaller; for example, 1 is subtracted from the positive residual values in the residual image and 1 is added to the negative residual values.
  • the non-zero residual value in the residual image is added or subtracted from the adjustment value corresponding to the interval according to the interval in which it is located, so that the absolute value of the non-zero residual value becomes smaller; wherein there are multiple intervals, and the larger the value in the interval, the larger the corresponding adjustment value.
  • the residual value in the residual image that is greater than or equal to 1 and less than or equal to 5 is reduced by 1
  • the residual value that is greater than 5 is reduced by 2
  • the residual value that is less than or equal to -1 and greater than or equal to -5 is increased by 1
  • the residual value that is less than -5 is increased by 2.
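  • The interval-based example above (shrink magnitudes 1..5 by 1 and magnitudes above 5 by 2) can be written directly:

```python
def interval_adjust(res):
    """Interval-based residual adjustment from the example above: magnitudes
    1..5 shrink by 1, magnitudes above 5 shrink by 2, zero is unchanged."""
    out = []
    for v in res:
        if v == 0:
            out.append(0)
        elif 1 <= v <= 5:
            out.append(v - 1)
        elif v > 5:
            out.append(v - 2)
        elif -5 <= v <= -1:
            out.append(v + 1)
        else:  # v < -5
            out.append(v + 2)
    return out
```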
  • the above-mentioned embodiment improves the coding performance by adjusting the residual information output by the filtering network.
  • the residual of the residual image output by the neural network may be too large.
  • the residual adjustment can reduce the residual and help improve the coding performance.
  • the residual value of each pixel therein may be positive or negative.
  • A fixed value (a positive number) can be subtracted from the positive residual values, and the fixed value can be added to the negative residual values.
  • If the residual value is 0, no adjustment is made, so that the residual values become smaller overall, that is, closer to 0.
  • For example, when the fixed value is 1, the residual value of each pixel in the original residual image is shown on the left, and the residual value of each pixel in the residual image after residual adjustment is shown on the right.
  • the residual image after residual adjustment is superimposed on the input reconstructed image to obtain a filtered image, which can also be called a reconstructed image after NNLF.
  • Multiple fixed values can be set to correspond to multiple residual adjustment methods.
  • After calculating the rate-distortion cost under each residual adjustment method, the fixed value adopted by the residual adjustment method with the smallest rate-distortion cost is selected, and the index corresponding to that residual adjustment method is encoded into the bitstream for the decoding end to read and process.
  • In addition to residual adjustment methods using a fixed value, other types of residual adjustment methods can also be used. For example, the residual values can be segmented according to their size, and adjustment operations with different precisions can be tried: for a residual value with a larger absolute value, an adjustment value with a larger absolute value is set; for a residual value with a smaller absolute value, an adjustment value with a smaller absolute value is set.
  • the adjustment value to be decided is RO_FACTOR.
  • the specific strategy for deriving the adjustment value is as follows.
  • {x1, x2, x3} represents positive residual values
  • {y1, y2, y3} represents negative residual values
  • {a1, a2, a3} and {b1, b2, b3} are preset candidate fixed values.
  • the above scheme does not adjust the residual value of zero, and for the non-zero residual value, finds the interval it falls into (a total of 6 intervals are set) to determine the adjustment value to be used.
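  • The six-interval derivation of RO_FACTOR might look like the sketch below; the interval bounds and adjustment values are illustrative placeholders standing in for {x1, x2, x3}, {y1, y2, y3}, {a1, a2, a3} and {b1, b2, b3}, which the source leaves unspecified:

```python
def ro_factor(v, pos_bounds=(2, 5), pos_adj=(1, 2, 3),
              neg_bounds=(-2, -5), neg_adj=(1, 2, 3)):
    """Return the signed adjustment RO_FACTOR for residual value v: zero is
    untouched; non-zero values fall into one of six intervals (three per sign)
    and shrink toward 0 by that interval's adjustment value."""
    if v == 0:
        return 0
    if v > 0:                        # positive intervals, larger value -> larger adjustment
        if v <= pos_bounds[0]:
            return -pos_adj[0]
        if v <= pos_bounds[1]:
            return -pos_adj[1]
        return -pos_adj[2]
    if v >= neg_bounds[0]:           # negative intervals, symmetric behavior
        return neg_adj[0]
    if v >= neg_bounds[1]:
        return neg_adj[1]
    return neg_adj[2]
```

  • The adjusted residual is then v + ro_factor(v), e.g. -4 becomes -2 under these placeholder values.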
  • the set multiple residual adjustment modes may include one type of residual adjustment mode, or may include multiple types of residual adjustment modes.
  • the above embodiment adjusts the three components in the residual image uniformly, using the same residual adjustment method.
  • Performing residual adjustment on the residual image with this method gives the best overall result under the premise of adjusting the three components uniformly, but it is not necessarily the best residual adjustment method for an individual component of the residual image. Accordingly, whether to perform residual adjustment, and which residual adjustment method to use, can be selected for each component separately to further optimize encoding performance.
  • An embodiment of the present disclosure provides a loop filtering method based on a neural network, which is applied to a NNLF filter at an encoding end.
  • the NNLF filter includes a neural network and a skip connection branch from the input to the output of the NNLF filter. As shown in FIG8, the method includes:
  • Step S210 inputting the reconstructed image into the neural network to obtain a residual image output by the neural network;
  • the reconstructed image and the residual image both include three components, such as a Y component, a U component and a V component;
  • Step S220 performing the following processing, which may be referred to as mode selection processing, on each of the three components:
  • calculating a rate-distortion cost cost1 of performing NNLF on the component of the reconstructed image using the first mode, and a rate-distortion cost cost2 of performing NNLF on the component using the second mode;
  • wherein the first mode is an NNLF mode in which residual adjustment is not performed on the component of the residual image, and the second mode is an NNLF mode in which residual adjustment is performed on the component of the residual image;
  • when cost1 ≤ cost2, selecting the first mode to perform NNLF on the component of the reconstructed image; when cost2 < cost1, selecting the second mode to perform NNLF on the component of the reconstructed image.
  • This embodiment can perform mode selection processing on each component separately, so that coding performance can be further optimized relative to the aforementioned embodiment that adjusts the three components uniformly. Since the relevant operations are performed at the output end of the NNLF filter, the impact on computational complexity is small.
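  • The per-component encoder decision can be sketched as follows, using SSD as the distortion measure; the method list and function names are illustrative, not from the source:

```python
def ssd(a, b):
    """Sum of squared errors between two equal-length lists of samples."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def select_component_mode(rec_c, res_c, orig_c, adjust_methods):
    """Encoder-side decision for one component: cost1 is the SSD without
    residual adjustment; cost2 is the best SSD over the candidate adjustment
    methods. Returns (roflag, method_index, filtered_component)."""
    filt1 = [r + d for r, d in zip(rec_c, res_c)]
    cost1 = ssd(filt1, orig_c)
    cost2 = best_idx = best_filt = None
    for idx, adjust in enumerate(adjust_methods):
        filt = [r + d for r, d in zip(rec_c, adjust(res_c))]
        c = ssd(filt, orig_c)
        if cost2 is None or c < cost2:
            cost2, best_idx, best_filt = c, idx, filt
    if cost1 <= cost2:
        return 0, None, filt1         # first mode: no residual adjustment
    return 1, best_idx, best_filt     # second mode: signalled adjustment
```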
  • the reconstructed image is a reconstructed image of a current frame or a current slice or a current block.
  • the residual adjustment reduces the residual in the components of the residual image.
  • Calculating the rate-distortion cost cost1 includes: adding the component of the residual image to the component of the reconstructed image to obtain the filtered component; and calculating cost1 according to the difference between the filtered component and the component of the corresponding original image. In one example, the difference is represented by the SSD, that is, the SSD between the filtered component and the component of the corresponding original image is used as cost1. In other examples, the difference can also be represented by other indicators such as the mean squared error (MSE: Mean Squared Error) or the mean absolute error (MAE: Mean Absolute Error), and the present disclosure is not limited thereto; the same applies to the other embodiments of the present disclosure.
  • the first mode is selected to perform NNLF on the component of the reconstructed image, including: adding the component of the residual image and the component of the reconstructed image to obtain the filtered component, as the component of the filtered image output after the reconstructed image is subjected to NNLF.
  • Performing NNLF on the component of the reconstructed image according to the first mode does not perform residual adjustment on the component of the residual image, so the resulting component of the filtered image can be the same as that obtained by an NNLF scheme without residual adjustment (such as NNLF1 or NNLF2).
  • Calculating the rate-distortion cost cost2 of performing NNLF on the component of the reconstructed image using the second mode includes:
  • for each residual adjustment method set for the component, performing residual adjustment on the component of the residual image and adding it to the component of the reconstructed image to obtain the filtered component, and then calculating a rate-distortion cost of the component according to the difference between the filtered component and the component of the corresponding original image, for example using the SSD between them as the rate-distortion cost of the component; and
  • using the minimum rate-distortion cost among all the calculated rate-distortion costs of the component as cost2 of the component; wherein one or more residual adjustment methods are set for the component.
  • the selecting the second mode to perform NNLF on the component of the reconstructed image includes:
  • The component of the residual image is subjected to residual adjustment in accordance with the residual adjustment method corresponding to cost2 of the component, the adjustment result is added to the component of the reconstructed image to obtain the filtered component, and the filtered component is used as the component of the filtered image output after NNLF is performed on the reconstructed image.
  • the component of the residual image is residually adjusted and added to the component of the reconstructed image.
  • the residual adjustment may be performed on the component of the residual image and then the result of the residual adjustment is added to the component of the reconstructed image.
  • it is not necessary to calculate in this order.
  • the residual adjustment methods set for the three components are the same or different; for example, the residual adjustment method set for the Y component is: subtract 1 from the positive residual value in the residual image, and add 1 to the negative residual value; and the residual adjustment method set for the U component and the V component is: subtract 1 from the residual value greater than 2 in the residual image, and add 1 to the residual value less than -2.
  • the residual adjustment method set for at least one of the three components includes one or more of the following types of residual adjustment methods:
  • the non-zero residual value in the residual image is added or subtracted with the adjustment value corresponding to the interval according to the interval in which the non-zero residual value is located, so that the absolute value of the non-zero residual value becomes smaller; wherein there are multiple intervals, and the larger the value in the interval, the larger the corresponding adjustment value.
  • An embodiment of the present disclosure further provides a video encoding method, which is applied to a video encoding device, comprising: when performing a neural network-based loop filter NNLF on a reconstructed image, as shown in FIG8 , performing the following processing:
  • Step S310 when NNLF allows residual adjustment, performing NNLF on the reconstructed image according to the NNLF method described in any embodiment of the present disclosure
  • Step S320 Encode a residual adjustment usage flag of the reconstructed image to indicate whether residual adjustment is required when performing NNLF on the reconstructed image.
  • the disclosed embodiment can choose to adjust or not adjust the residual image according to the rate-distortion cost, which can compensate for the performance loss caused by the lag of NNLF mode training relative to the coding test and improve the coding performance.
  • the residual adjustment usage flag is a picture level syntax element or a block level syntax element.
  • The decoded picture-level residual adjustment permission flag is used to determine, according to its value, whether NNLF allows residual adjustment.
  • For example, it may be a necessary condition for NNLF to allow residual adjustment that the frame containing the input reconstructed image is an inter-frame coded frame, and so on.
  • the residual adjustment of NNLF may also be enabled all the time. In this case, there is no need to judge by a flag, and the residual adjustment of NNLF is allowed by default.
  • the method further includes: when it is determined that NNLF does not allow residual adjustment, skipping the encoding of the residual adjustment use flag, and adding the reconstructed image input to the neural network and the residual image output by the neural network to obtain the filtered image output after performing NNLF on the reconstructed image. That is, in this case, NNLF without residual adjustment is used to filter the reconstructed image.
  • In this embodiment, NNLF is performed on the reconstructed image according to any of the above-mentioned embodiments that uniformly perform residual adjustment on the three components of the residual image; one residual adjustment use flag roflag is used for the reconstructed image. When the first mode is selected to perform NNLF on the reconstructed image, roflag is set to a value indicating that residual adjustment is not required, such as 0; when the second mode is selected, roflag is set to a value indicating that residual adjustment is required, such as 1.
  • the method further includes: when the roflag is set to a value indicating that residual adjustment is required, and there are multiple residual adjustment methods set, continue to encode the residual adjustment method index of the reconstructed image, and the residual adjustment method index is used to indicate the residual adjustment method based on which the residual adjustment is performed.
  • the residual adjustment method index can be a 2-bit flag, and when the value of the flag is 0, 1, and 2, it represents the three residual adjustment methods respectively, and the corresponding relationship between the value and the residual adjustment method is agreed upon in advance at the encoding end and the decoding end, such as defined in the standard or protocol.
  • This embodiment uses two flags, namely, a residual adjustment use flag and a residual adjustment method index, to respectively indicate whether residual adjustment is required and the residual adjustment method based on which the residual adjustment is performed (when multiple residual adjustment methods are set).
  • the residual adjustment use flag is also used to indicate the residual adjustment method based on which the residual adjustment is performed, that is, this embodiment uses the residual adjustment use flag to simultaneously indicate whether residual adjustment is required and the residual adjustment method based on which the residual adjustment is performed.
  • a 2-bit residual adjustment use flag roflag is used, and the four values of roflag can respectively indicate that residual adjustment is not required, residual adjustment is performed using the first residual adjustment method, residual adjustment is performed using the second residual adjustment method, and residual adjustment is performed using the third residual adjustment method.
  • When residual adjustment is not performed, the embodiment using two flags only needs to encode a 1-bit flag, namely the residual adjustment use flag, and does not need to encode the residual adjustment method index, while the embodiment using one flag needs to encode a 2-bit residual adjustment use flag.
  • When residual adjustment is performed, the embodiment using two flags needs to encode a 1-bit residual adjustment use flag and a 2-bit residual adjustment method index, while the embodiment using one flag still only needs to encode the 2-bit residual adjustment use flag.
  • the method performs NNLF on the reconstructed image according to any of the above-mentioned embodiments of performing residual adjustment on the three components of the residual image respectively;
  • When the first mode is selected to perform NNLF on the j-th component of the reconstructed image, roflag(j) is set to a value indicating that residual adjustment is not required, such as 0; when the second mode is selected to perform NNLF on the j-th component, roflag(j) is set to a value indicating that residual adjustment is required, such as 1.
  • the method also includes: when the roflag(j) is set to a value indicating that residual adjustment needs to be performed and there are multiple residual adjustment methods set for the j-th component, continue to encode the residual adjustment method index index(j) of the j-th component of the reconstructed image to indicate the residual adjustment method based on which the residual adjustment is performed on the j-th component of the residual image.
  • the residual adjustment usage flag of the j-th component is also used to indicate the residual adjustment method based on which the residual adjustment is performed, that is, this embodiment uses the residual adjustment usage flag of the j-th component to simultaneously indicate whether residual adjustment is required and the residual adjustment method based on which the residual adjustment is performed.
  • An embodiment of the present disclosure further provides a loop filtering method based on a neural network, which is applied to a NNLF filter at a decoding end, wherein the NNLF filter includes a neural network and a skip connection branch from the input to the output of the NNLF filter, as shown in FIG10, and the method includes:
  • Step S410 decoding the residual adjustment use flag roflag of the reconstructed image, wherein roflag is used to indicate whether residual adjustment is required when performing NNLF on the reconstructed image;
  • Step S420 when it is determined according to the roflag that residual adjustment is not required, the first mode is used to perform NNLF on the reconstructed image; when it is determined according to the roflag that residual adjustment is required, the second mode is used to perform NNLF on the reconstructed image;
  • the first mode is a NNLF mode that does not perform residual adjustment on the residual image output by the neural network
  • the second mode is a NNLF mode that performs residual adjustment on the residual image.
  • FIG11 is a schematic diagram of the decoding end of this embodiment performing NNLF on the reconstructed image.
  • where NN denotes the neural network, and RO denotes the residual adjustment module used to perform residual adjustment.
  • According to the neural-network-based loop filtering method of this embodiment, a better mode is selected from the two modes of performing residual adjustment and not performing residual adjustment, which can enhance the filtering effect of NNLF and improve the quality of the decoded image.
  • roflag can be a 1-bit flag, and the value of roflag can indicate whether residual adjustment is required. For example, when the value of roflag is 1, it is determined that residual adjustment is required, and when the value of roflag is 0, it is determined that residual adjustment is not required. The same applies to other embodiments of the present disclosure.
  • the residual adjustment reduces the residual of the residual image.
  • the reconstructed image is a reconstructed image of a current frame or a current slice or a current block.
  • the residual adjustment usage flag is a picture level syntax element or a block level syntax element.
  • the first mode is used to perform NNLF on the reconstructed image, including: adding the residual image output by the neural network to the reconstructed image input to the neural network, to obtain a filtered image output after the reconstructed image is subjected to NNLF.
  • the second mode is used to perform NNLF on the reconstructed image, including: performing residual adjustment on the residual image according to one of the set residual adjustment methods and adding the residual image to the reconstructed image, to obtain a filtered image output after the reconstructed image is subjected to NNLF.
  • Performing residual adjustment on the residual image according to one of the set residual adjustment methods includes: continuing to decode the residual adjustment method index of the reconstructed image, where the index indicates the residual adjustment method on which the residual adjustment is based; and performing residual adjustment on the residual image according to the residual adjustment method indicated by the index.
  • The above is based on the case of using two flags to indicate whether residual adjustment is required and the residual adjustment method on which it is based.
  • In another embodiment, the encoding end uses one flag, i.e., the residual adjustment use flag roflag, to indicate both whether residual adjustment is required and the residual adjustment method on which it is based.
  • In that case, the decoding end continues to determine, according to the roflag of the reconstructed image, the residual adjustment method on which the residual adjustment is based, and performs residual adjustment on the residual image according to the determined residual adjustment method.
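  • The decoder-side application of the parsed syntax might be sketched as follows; bitstream parsing is abstracted away, and the function and method names are illustrative:

```python
def apply_nnlf_decoder(rec, res, roflag, method_index, adjust_methods):
    """Decoder-side NNLF output: with roflag = 0 the residual image is added
    unchanged (first mode); with roflag = 1 it is first adjusted by the
    signalled method, then added (second mode)."""
    if roflag == 0:
        return [r + d for r, d in zip(rec, res)]
    adjusted = adjust_methods[method_index](res)
    return [r + d for r, d in zip(rec, adjusted)]
```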
  • the set residual adjustment method includes one or more of the following types of residual adjustment methods:
  • the non-zero residual value in the residual image is added or subtracted with the adjustment value corresponding to the interval according to the interval in which the non-zero residual value is located, so that the absolute value of the non-zero residual value becomes smaller; wherein there are multiple intervals, and the larger the value in the interval, the larger the corresponding adjustment value.
  • An embodiment of the present disclosure further provides a neural-network-based loop filtering NNLF method, which is applied to an NNLF filter at a decoding end, wherein the NNLF filter includes a neural network and a skip connection branch from the input to the output of the NNLF filter.
  • the method includes, when performing NNLF on a reconstructed image including three components input to the neural network, as shown in FIG. 12, performing the following processing on each component:
  • Step S510 decoding the residual adjustment use flag roflag of the component of the reconstructed image, wherein the roflag is used to indicate whether residual adjustment is required when performing NNLF on the component of the reconstructed image;
  • Step S520 when it is determined according to the roflag that residual adjustment is not required, the first mode is used to perform NNLF on the component of the reconstructed image; when it is determined according to the roflag that residual adjustment is required, the second mode is used to perform NNLF on the component of the reconstructed image;
  • the first mode is an NNLF mode that does not perform residual adjustment on the component of the residual image output by the neural network
  • the second mode is an NNLF mode that performs residual adjustment on the component of the residual image.
  • In the neural-network-based loop filtering method of this embodiment, a better mode is selected for each component from the two NNLF modes, performing residual adjustment and not performing residual adjustment, to perform NNLF on that component.
  • Compared with performing mode selection uniformly for multiple components, this per-component selection can further enhance the filtering effect of NNLF and improve the quality of the decoded image.
  • the residual adjustment reduces the residual in the components of the residual image.
  • the reconstructed image is a reconstructed image of a current frame or a current slice or a current block.
  • the residual adjustment usage flag is a picture level syntax element or a block level syntax element.
  • the adopting the first mode to perform NNLF on the component of the reconstructed image includes: adding the component of the residual image to the component of the reconstructed image to obtain the component of the filtered image output after performing NNLF on the reconstructed image;
  • using the second mode to perform NNLF on the component of the reconstructed image includes: performing residual adjustment on the component of the residual image according to one of the residual adjustment methods set for the component, and adding the adjusted component to the corresponding component of the reconstructed image, so as to obtain the component of the filtered image output after NNLF of the reconstructed image; there are one or more residual adjustment methods set for the component.
  • the residual adjustment of the component of the residual image according to one of the set residual adjustment methods includes: continuing to decode the residual adjustment method index of the component of the reconstructed image, the index being used to indicate the residual adjustment method on which the residual adjustment is based; and performing residual adjustment on the component of the residual image according to the residual adjustment method indicated by the index.
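A minimal decoder-side sketch of the two modes for one color component; the helper names (`rec`, `res`, `adjust_fns`) and the flat-list sample representation are assumptions for illustration, not names from the disclosure:

```python
def apply_nnlf_component(rec, res, roflag, adjust_fns, index=0):
    """Decoder-side NNLF for one component (hypothetical helper).

    rec, res   : flat lists of reconstructed / network-residual samples
    roflag     : decoded residual adjustment use flag (0 selects the first mode)
    adjust_fns : residual adjustment methods set for this component;
                 `index` plays the role of the decoded method index.
    """
    if roflag:
        adjust = adjust_fns[index]            # second mode: adjust first
        res = [adjust(r) for r in res]
    return [x + r for x, r in zip(rec, res)]  # skip-connection addition
```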
  • In an example, the picture header is defined as shown in the following table, where ro_enable_flag is the sequence-level residual adjustment permission flag, picture_ro_enable_flag is the residual adjustment use flag (equivalent to roflag in other embodiments), and picture_ro_index is the residual adjustment method index.
  • the compIdx in the above table indicates the serial number of the color component. For YUV format images, it is usually 0/1/2.
  • NNLF may be performed in units of blocks (such as CTUs), and in this case, a residual adjustment usage flag and a residual adjustment mode index are defined as block-level syntax elements.
  • This embodiment uses two syntax elements: a flag indicating whether residual adjustment is required for a component, and an index indicating the residual adjustment method on which the residual adjustment is based.
  • In an alternative implementation, the encoding end uses a single flag, the residual adjustment flag roflag, to simultaneously indicate both whether residual adjustment is required for the component and the residual adjustment method on which the residual adjustment is based;
  • the decoding end then determines the residual adjustment method from the roflag of the component, and performs residual adjustment on the component of the residual image according to the determined residual adjustment method.
  • the residual adjustment methods set for the three components are the same or different; the residual adjustment method set for at least one of the three components includes one or more of the following types of residual adjustment methods:
  • each non-zero residual value in the residual image has the adjustment value corresponding to its interval added to or subtracted from it, according to the interval in which the non-zero residual value lies, so that the absolute value of the non-zero residual value becomes smaller; there are multiple intervals, and the larger the values in an interval, the larger the corresponding adjustment value.
  • An embodiment of the present disclosure further provides a video decoding method, which is applied to a video decoding device, comprising: when performing a neural network-based loop filtering on a reconstructed image, as shown in FIG13 , performing the following processing:
  • Step S610 determining whether NNLF allows residual adjustment
  • Step S620 When NNLF allows residual adjustment, perform NNLF on the reconstructed image according to the NNLF method described in any embodiment of the present disclosure applied to the NNLF filter at the decoding end.
  • The video decoding method of this embodiment decodes the residual adjustment flag and, according to it, selects for each component the better of the two NNLF modes, performing residual adjustment and not performing residual adjustment, to perform NNLF on that component.
  • Compared with performing mode selection uniformly for multiple components, this can further enhance the filtering effect of NNLF and improve the quality of the decoded image.
  • In an example, a picture-level residual adjustment permission flag is decoded, and whether NNLF allows residual adjustment is determined according to the permission flag.
  • In another example, the frame containing the input reconstructed image being an inter-coded frame may be taken as a necessary condition for NNLF to allow residual adjustment, among other conditions.
  • In an example, the sequence header of the video sequence is defined as shown in the following table, where ro_enable_flag is the sequence-level residual adjustment permission flag.
  • the method further includes: when NNLF does not allow residual adjustment, adding the reconstructed image input to the neural network and the residual image output by the neural network to obtain a filtered image output after NNLF is performed on the reconstructed image.
  • the NNLF filter is arranged after the deblocking filter or the sample adaptive offset filter and before the adaptive correction filter.
  • the structure of the filter unit (or loop filter module, see FIG. 1B and FIG. 1C) is shown in FIG. 14, where DBF represents the deblocking filter, SAO represents the sample adaptive offset filter, and ALF represents the adaptive correction filter.
  • NN represents the neural network used for loop filtering in the NNLF filter, which can be the same as the neural network of an NNLF filter that does not perform residual adjustment, such as NNLF1 or NNLF2.
  • the NNLF filter also includes a residual adjustment module (RO: Residual Offset) and two skip connection branches from the NN input to the two path outputs respectively; RO is used to perform residual adjustment on the residual image output by the neural network.
  • These filters are all components of the filter unit that filters the reconstructed image. During loop filtering, some or all of DBF, SAO and ALF may be disabled.
  • the location where the NNLF filter is deployed is not limited to the location described in the present embodiment. It is easy to understand that the implementation of the NNLF method of the present disclosure is not limited to its deployment location.
  • the filters in the filter unit are not limited to those shown in FIG. 14 , and there may be more or fewer filters, or other types of filters.
  • An embodiment of the present disclosure provides a neural-network-based loop filtering method. When the encoder performs loop filtering on the reconstructed image, the encoding end processes the filters in their deployed order; when the NNLF filter is reached, the following processing is performed:
  • the first step is to determine whether residual adjustment is allowed in the current sequence according to the sequence-level residual adjustment permission flag ro_enable_flag. If ro_enable_flag is "1", it means that residual adjustment is allowed for the current sequence, and jump to the second step; if ro_enable_flag is "0", it means that residual adjustment is not allowed for the current sequence, and the process ends (skipping subsequent processing);
  • the second step is to input the reconstructed image of the current frame into the neural network of NNLF for prediction, obtain the residual image from the output of NNLF, and superimpose the residual image on the input reconstructed image to obtain the first filtered image;
  • the third step is to perform residual adjustment on the residual image and then superimpose it on the input reconstructed image to obtain a second filtered image;
  • the fourth step is to compare the first filtered image with the original image of the current frame to calculate the rate-distortion cost C_NNLF, and to compare the second filtered image with the original image of the current frame to calculate the rate-distortion cost C_RO;
  • the fifth step is to compare the two costs: if C_RO < C_NNLF, the second filtered image is used as the filtered image output by the NNLF filter, that is, the second mode is selected to perform NNLF on the reconstructed image; if C_RO ≥ C_NNLF, the first filtered image is used as the filtered image output by the filter, that is, the first mode is selected to perform NNLF on the reconstructed image.
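The second through fifth steps above reduce to comparing two candidate filtered images under a rate-distortion cost. A minimal sketch, where `cost_fn` and `adjust` are assumed stand-ins for the cost computation and the chosen residual adjustment method (names not from the disclosure), and ties go to the first mode since C_RO must be strictly smaller to select the second:

```python
def select_nnlf_mode(rec, res, org, cost_fn, adjust):
    """Encoder-side choice between the two NNLF modes (illustrative sketch).

    Returns (flag, filtered): flag plays the role of the residual adjustment
    use flag written to the bitstream (0 = first mode, 1 = second mode).
    """
    first = [x + r for x, r in zip(rec, res)]            # no adjustment
    second = [x + adjust(r) for x, r in zip(rec, res)]   # adjusted residual
    c_nnlf = cost_fn(first, org)
    c_ro = cost_fn(second, org)
    if c_ro < c_nnlf:
        return 1, second
    return 0, first
```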
  • The rate-distortion cost may be computed as cost = Wy·SSD(Y) + Wu·SSD(U) + Wv·SSD(V), where SSD(*) denotes the sum of squared differences over a given color component: SSD = Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} (rec(x,y) - org(x,y))².
  • Wy, Wu and Wv represent the weight values of the SSD of the Y component, the U component and the V component, respectively, such as 10:1:1 or 8:1:1.
  • M and N represent the length and width of the reconstructed image of the current frame, and rec(x,y) and org(x,y) represent the pixel values of the reconstructed image and the original image at the pixel point (x,y), respectively.
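The weighted-SSD cost just described can be sketched as follows, assuming each component is represented as a flat list of sample values (the dict layout and function name are illustrative assumptions):

```python
def weighted_ssd_cost(filtered, org, weights=(10, 1, 1)):
    """Rate-distortion cost as the weighted sum of per-component SSDs.

    filtered / org : dicts {'Y': [...], 'U': [...], 'V': [...]} holding the
    flattened samples of an image; weights gives Wy:Wu:Wv, e.g., 10:1:1.
    """
    cost = 0
    for comp, w in zip(('Y', 'U', 'V'), weights):
        ssd = sum((a - b) ** 2 for a, b in zip(filtered[comp], org[comp]))
        cost += w * ssd
    return cost
```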
  • Step 6 Encode the residual adjustment use flag picture_ro_enable_flag of the current frame and the residual adjustment method index picture_ro_index into the bitstream;
  • Step 7 If all blocks in the current frame have been processed, the processing of the current frame is terminated, and the next frame can be loaded for processing. If there are still blocks in the current frame that have not been processed, return to step 2.
  • NNLF processing is performed based on the reconstructed image of the current frame.
  • NNLF processing may also be performed based on other coding units such as blocks (such as CTU) and slices in the current frame.
  • This embodiment selects the NNLF1 baseline tool as a comparison.
  • mode selection processing is performed on inter-frame coded frames (i.e., non-I frames), and two residual adjustment methods using fixed values are set, and the fixed values are set to 1 and 2 respectively.
  • The results are shown in Tables 1 and 2.
  • EncT (Encoding Time): 10X% means that after the technique of this embodiment is integrated, the encoding time is 10X% of that before integration, which means the encoding time increases by X%.
  • DecT (Decoding Time): 10X% means that after the technique of this embodiment is integrated, the decoding time is 10X% of that before integration, which means the decoding time increases by X%.
  • ClassA1 and Class A2 are test video sequences with a resolution of 3840x2160
  • ClassB is a test sequence with a resolution of 1920x1080
  • ClassC is 832x480
  • ClassD is 416x240
  • ClassE is 1280x720
  • ClassF is a screen content sequence with several different resolutions.
  • Y, U, V are the three color components.
  • the columns labeled Y, U and V give the BD-rate indicator of the test results on the Y, U and V components; the smaller the value, the better the encoding performance.
  • the method of this embodiment can also be used to select the NNLF mode for intra-coded frames (I frames).
  • An embodiment of the present disclosure further provides a code stream, wherein the code stream is generated by the video encoding method described in any embodiment of the present disclosure.
  • An embodiment of the present disclosure also provides a loop filter based on a neural network, as shown in Figure 15, including a processor and a memory storing a computer program, wherein the processor can implement the loop filtering method based on a neural network described in any embodiment of the present disclosure when executing the computer program.
  • the processor and the memory are connected via a system bus, and the loop filter may also include other components such as a memory and a network interface.
  • An embodiment of the present disclosure further provides a video decoding device, see FIG. 15 , comprising a processor and a memory storing a computer program, wherein the processor can implement the video decoding method as described in any embodiment of the present disclosure when executing the computer program.
  • An embodiment of the present disclosure further provides a video encoding device, see FIG. 15 , comprising a processor and a memory storing a computer program, wherein the processor can implement the video encoding method as described in any embodiment of the present disclosure when executing the computer program.
  • the processor of the above-mentioned embodiments of the present disclosure may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP for short), a microprocessor, etc., or another conventional processor; the processor may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), discrete logic or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or other equivalent integrated or discrete logic circuits, or a combination of the above devices. That is, the processor of the above-mentioned embodiments may be any processing device or device combination for implementing the various methods, steps and logic block diagrams disclosed in the embodiments of the present disclosure.
  • the instructions for the software may be stored in a suitable non-volatile computer-readable storage medium, and one or more processors may be used to execute the instructions in hardware to implement the methods of the embodiments of the present disclosure.
  • processors used herein may refer to the above-mentioned structure or any other structure suitable for implementing the technology described herein.
  • An embodiment of the present disclosure further provides a video encoding and decoding system, see FIG. 1A , which includes the video encoding device described in any embodiment of the present disclosure and the video decoding device described in any embodiment of the present disclosure.
  • An embodiment of the present disclosure further provides a non-volatile computer-readable storage medium, wherein the computer-readable storage medium stores a computer program, wherein the computer program, when executed by a processor, can implement the neural network-based loop filtering method as described in any embodiment of the present disclosure, or implement the video decoding method as described in any embodiment of the present disclosure, or implement the video encoding method as described in any embodiment of the present disclosure.
  • the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on a computer-readable medium or transmitted via a computer-readable medium as one or more instructions or codes, and executed by a hardware-based processing unit.
  • a computer-readable medium may include a computer-readable storage medium corresponding to a tangible medium such as a data storage medium, or a communication medium including any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol. In this way, a computer-readable medium may generally correspond to a non-transitory tangible computer-readable storage medium or a communication medium such as a signal or carrier wave.
  • a data storage medium may be any available medium that can be accessed by one or more computers or one or more processors to retrieve instructions, codes, and/or data structures for implementing the technology described in the present disclosure.
  • a computer program product may include a computer-readable medium.
  • such computer-readable storage media may include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage, flash memory, or any other medium that can be used to store the desired program code in the form of instructions or data structures and can be accessed by a computer.
  • any connection may also properly be termed a computer-readable medium. For example, if instructions are transmitted from a website, server or other remote source using coaxial cable, fiber-optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio and microwave, then the coaxial cable, fiber-optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio and microwave are included in the definition of medium.
  • disks and optical disks include compact disks (CDs), laser disks, optical disks, digital versatile disks (DVDs), floppy disks, or Blu-ray disks, etc., where disks typically reproduce data magnetically, while optical disks use lasers to reproduce data optically. Combinations of the above should also be included within the scope of computer-readable media.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A neural-network-based loop filtering method, video encoding and decoding methods, apparatuses and system. When the encoding end performs NNLF on a reconstructed image, it can select an NNLF mode that performs residual adjustment or an NNLF mode that does not perform residual adjustment; the decoding end, according to a flag, selects one of the two modes to perform NNLF on the reconstructed image. Encoding performance can be improved.

Description

Loop filtering based on a neural network, video encoding and decoding methods, apparatuses and system

Technical Field

The embodiments of the present disclosure relate to, but are not limited to, video technology, and more specifically to a neural-network-based loop filtering method, video encoding and decoding methods, apparatuses and a system.

Background

Digital video compression technology mainly compresses huge amounts of digital image and video data to facilitate transmission, storage, etc. The images of an original video sequence contain a luma component and chroma components. In the digital video encoding process, the encoder reads black-and-white or color images and splits each frame of image into largest coding units (LCU: largest coding unit) of equal size (e.g., 128x128, 64x64). Each largest coding unit can be divided into rectangular coding units (CU: coding unit) according to rules, and can be further divided into prediction units (PU: prediction unit), transform units (TU: transform unit), etc. The hybrid coding framework includes modules such as prediction, transform, quantization, entropy coding and in-loop filtering. The prediction module can use intra prediction and inter prediction. Intra prediction predicts the pixel information within the current block based on information of the same image, to remove spatial redundancy; inter prediction can refer to information of different images and uses motion estimation to search for the motion vector information that best matches the current block, to remove temporal redundancy; the transform converts the predicted residual into the frequency domain to redistribute its energy, and combined with quantization can remove information to which the human eye is insensitive, to remove visual redundancy; entropy coding can remove character redundancy according to the current context model and the probability information of the binary bitstream, to generate the bitstream.

With the surge of Internet video and people's increasing demand for video definition, although existing digital video compression standards can save considerable video data, better digital video compression techniques are still needed to reduce the bandwidth and traffic pressure of digital video transmission.
Summary

The following is an overview of the subject matter described in detail herein. This overview is not intended to limit the scope of protection of the claims.
An embodiment of the present disclosure provides a neural-network-based loop filtering (NNLF: Neural Network based Loop Filter) method, applied to an NNLF filter at the decoding end, the NNLF filter including a neural network and a skip connection branch from the input to the output of the NNLF filter, the method including:

decoding a residual adjustment use flag roflag of a reconstructed image, the roflag indicating whether residual adjustment is required when performing NNLF on the reconstructed image;

when it is determined according to the roflag that residual adjustment is not required, performing NNLF on the reconstructed image in a first mode; when it is determined according to the roflag that residual adjustment is required, performing NNLF on the reconstructed image in a second mode;

wherein the first mode is an NNLF mode that does not perform residual adjustment on the residual image output by the neural network, and the second mode is an NNLF mode that performs residual adjustment on the residual image.

An embodiment of the present disclosure further provides a neural-network-based loop filtering method, applied to an NNLF filter at the decoding end, the NNLF filter including a neural network and a skip connection branch from the input to the output of the NNLF filter, the method including, when performing NNLF on a reconstructed image including three components input to the neural network, performing the following processing on each component:

decoding the residual adjustment use flag roflag of the component of the reconstructed image, the roflag indicating whether residual adjustment is required when performing NNLF on the component of the reconstructed image;

when it is determined according to the roflag that residual adjustment is not required, performing NNLF on the component of the reconstructed image in a first mode; when it is determined according to the roflag that residual adjustment is required, performing NNLF on the component of the reconstructed image in a second mode;

wherein the first mode is an NNLF mode that does not perform residual adjustment on the component of the residual image output by the neural network, and the second mode is an NNLF mode that performs residual adjustment on the component of the residual image.

An embodiment of the present disclosure further provides a video decoding method, applied to a video decoding apparatus, including: when performing neural-network-based loop filtering NNLF on a reconstructed image, performing the following processing: when NNLF allows residual adjustment, performing NNLF on the reconstructed image according to the NNLF method of any embodiment of the present disclosure applied to the NNLF filter at the decoding end.
An embodiment of the present disclosure further provides a neural-network-based loop filtering method, applied to an NNLF filter at the encoding end, the NNLF filter including a neural network and a skip connection branch from the input to the output of the NNLF filter, the method including:

inputting a reconstructed image into the neural network and obtaining the residual image output by the neural network;

calculating a rate-distortion cost cost1 of performing NNLF on the reconstructed image in a first mode, and a rate-distortion cost cost2 of performing NNLF on the reconstructed image in a second mode; wherein the first mode is an NNLF mode that does not perform residual adjustment on the residual image, and the second mode is an NNLF mode that performs residual adjustment on the residual image;

when cost1 < cost2, selecting the first mode to perform NNLF on the reconstructed image; when cost2 < cost1, selecting the second mode to perform NNLF on the reconstructed image; when cost1 = cost2, selecting the first mode or the second mode to perform NNLF on the reconstructed image.

An embodiment of the present disclosure further provides a neural-network-based loop filtering method, applied to an NNLF filter at the encoding end, the NNLF filter including a neural network and a skip connection branch from the input to the output of the NNLF filter, the method including:

inputting a reconstructed image into the neural network and obtaining the residual image output by the neural network, the reconstructed image and the residual image each including three components; and

performing the following processing on each of the three components:

calculating a rate-distortion cost cost1 of performing NNLF on the component of the reconstructed image in a first mode, and a rate-distortion cost cost2 of performing NNLF on the component of the reconstructed image in a second mode; the first mode being an NNLF mode that does not perform residual adjustment on the component of the residual image, and the second mode being an NNLF mode that performs residual adjustment on the component of the residual image;

when cost1 < cost2, selecting the first mode to perform NNLF on the component of the reconstructed image; when cost2 < cost1, selecting the second mode to perform NNLF on the component of the reconstructed image; when cost1 = cost2, selecting the first mode or the second mode to perform NNLF on the reconstructed image.

An embodiment of the present disclosure further provides a video encoding method, applied to a video encoding apparatus, including: when performing neural-network-based loop filtering NNLF on a reconstructed image, performing the following processing:

when NNLF allows residual adjustment, performing NNLF on the reconstructed image according to the NNLF method of any embodiment of the present disclosure applied to the NNLF filter at the encoding end; and

encoding the residual adjustment use flag of the reconstructed image, to indicate whether residual adjustment is required when performing NNLF on the reconstructed image.
An embodiment of the present disclosure further provides a bitstream, wherein the bitstream is generated by the video encoding method of any embodiment of the present disclosure.

An embodiment of the present disclosure further provides a neural-network-based loop filter, including a processor and a memory storing a computer program, wherein the processor, when executing the computer program, can implement the neural-network-based loop filtering method of any embodiment of the present disclosure.

An embodiment of the present disclosure further provides a video decoding apparatus, including a processor and a memory storing a computer program, wherein the processor, when executing the computer program, can implement the video decoding method of any embodiment of the present disclosure.

An embodiment of the present disclosure further provides a video encoding apparatus, including a processor and a memory storing a computer program, wherein the processor, when executing the computer program, can implement the video encoding method of any embodiment of the present disclosure.

An embodiment of the present disclosure further provides a video encoding and decoding system, including the video encoding apparatus of any embodiment of the present disclosure and the video decoding apparatus of any embodiment of the present disclosure.

An embodiment of the present disclosure further provides a non-transitory computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, can implement the neural-network-based loop filtering method of any embodiment of the present disclosure, or the video decoding method of any embodiment of the present disclosure, or the video encoding method of any embodiment of the present disclosure.

Other aspects will become apparent upon reading and understanding the accompanying drawings and the detailed description.
Brief Description of the Drawings

The accompanying drawings are provided for an understanding of the embodiments of the present disclosure, constitute a part of the specification, and together with the embodiments of the present disclosure serve to explain the technical solutions of the present disclosure, without limiting them.

FIG. 1A is a schematic diagram of a coding and decoding system according to an embodiment; FIG. 1B is a block diagram of the encoding end in FIG. 1A; FIG. 1C is a block diagram of the decoding end in FIG. 1A;

FIG. 2 is a module diagram of a filter unit according to an embodiment;

FIG. 3A is a network structure diagram of an NNLF filter according to an embodiment; FIG. 3B is a structure diagram of the residual block in FIG. 3A; FIG. 3C is a schematic diagram of the input and output of the NNLF filter in FIG. 3A;

FIG. 4A is a structure diagram of the backbone network in an NNLF filter according to another embodiment; FIG. 4B is a structure diagram of the residual block in FIG. 4A; FIG. 4C is a schematic diagram of the input and output of the NNLF filter of FIG. 4A;

FIG. 5 is a schematic diagram of the basic network structure of a residual network;

FIG. 6A is a schematic diagram of iterative training of an NNLF model according to an embodiment; FIG. 6B is a schematic diagram of a coding test of the NNLF model in FIG. 6A;

FIG. 7 is a flowchart of an NNLF method applied to the encoding end according to an embodiment of the present disclosure;

FIG. 8 is a flowchart of an NNLF method applied to the encoding end according to another embodiment of the present disclosure;

FIG. 9 is a flowchart of a video encoding method according to an embodiment of the present disclosure;

FIG. 10 is a flowchart of an NNLF method applied to the decoding end according to an embodiment of the present disclosure;

FIG. 11 is a schematic diagram of an NNLF capable of mode selection according to an embodiment of the present disclosure;

FIG. 12 is a flowchart of an NNLF method applied to the decoding end according to another embodiment of the present disclosure;

FIG. 13 is a flowchart of a video decoding method according to an embodiment of the present disclosure;

FIG. 14 is a schematic structural diagram of a filter unit according to an embodiment of the present disclosure;

FIG. 15 is a schematic diagram of an NNLF filter according to an embodiment of the present disclosure;

FIG. 16 is a schematic diagram of residual value adjustment according to an embodiment of the present disclosure.
Detailed Description

The present disclosure describes a number of embodiments, but the description is exemplary rather than limiting, and it will be apparent to those of ordinary skill in the art that there can be more embodiments and implementations within the scope of the embodiments described in the present disclosure.

In the description of the present disclosure, words such as "exemplary" or "for example" are used to indicate an example, illustration or explanation. Any embodiment described as "exemplary" or "for example" in the present disclosure should not be construed as preferred over or more advantageous than other embodiments. Herein, "and/or" describes an association relation between associated objects and indicates that three relations may exist; for example, A and/or B can mean: A alone, both A and B, or B alone. "Multiple" means two or more. In addition, to describe the technical solutions of the embodiments of the present disclosure clearly, words such as "first" and "second" are used to distinguish identical or similar items whose functions and effects are substantially the same. Those skilled in the art can understand that words such as "first" and "second" do not limit quantity or execution order, nor do they imply that the items are necessarily different.

In describing representative exemplary embodiments, the specification may have presented the method and/or process as a particular sequence of steps. However, to the extent that the method or process does not depend on the particular order of the steps described herein, the method or process should not be limited to the steps in the particular order described. As one of ordinary skill in the art will appreciate, other orders of steps are also possible. Therefore, the particular order of the steps set forth in the specification should not be construed as a limitation on the claims. Furthermore, the claims directed to the method and/or process should not be limited to performing their steps in the order written; those skilled in the art can readily appreciate that these orders may vary while still remaining within the spirit and scope of the embodiments of the present disclosure.

The neural-network-based loop filtering method and the video encoding and decoding methods of the embodiments of the present disclosure can be applied to various video coding standards, for example: H.264/Advanced Video Coding (AVC), H.265/High Efficiency Video Coding (HEVC), H.266/Versatile Video Coding (VVC), AVS (Audio Video coding Standard), other standards formulated by MPEG (Moving Picture Experts Group), AOM (Alliance for Open Media) and JVET (Joint Video Experts Team), extensions of these standards, or any other custom standards.
FIG. 1A is a block diagram of a video coding and decoding system usable in embodiments of the present disclosure. As shown in the figure, the system is divided into an encoding end 1 and a decoding end 2; the encoding end 1 generates a bitstream, and the decoding end 2 can decode the bitstream. The decoding end 2 can receive the bitstream from the encoding end 1 via a link 3. The link 3 includes one or more media or devices capable of moving the bitstream from the encoding end 1 to the decoding end 2. In one example, the link 3 includes one or more communication media that enable the encoding end 1 to send the bitstream directly to the decoding end 2. The encoding end 1 modulates the bitstream according to a communication standard and sends the modulated bitstream to the decoding end 2. The one or more communication media may include wireless and/or wired communication media and may form part of a packet network. In another example, the bitstream may also be output from an output interface 15 to a storage device, from which the decoding end 2 can read the stored data via streaming or downloading.

As shown in the figure, the encoding end 1 includes a data source 11, a video encoding device 13 and an output interface 15. The data source 11 includes a video capture device (e.g., a camera), an archive containing previously captured data, a feed interface for receiving data from a content provider, a computer graphics system for generating data, or a combination of these sources. The video encoding device 13, which may also be called a video encoder, encodes the data from the data source 11 and outputs it to the output interface 15, which may include at least one of a regulator, a modem and a transmitter. The decoding end 2 includes an input interface 21, a video decoding device 23 and a display device 25. The input interface 21 includes at least one of a receiver and a modem, and can receive the bitstream via the link 3 or from a storage device. The video decoding device 23, also called a video decoder, decodes the received bitstream. The display device 25 displays the decoded data; it may be integrated with other devices of the decoding end 2 or provided separately, and is optional for the decoding end. In other examples, the decoding end may include other devices or apparatuses that apply the decoded data.
FIG. 1B is a block diagram of an exemplary video encoding device usable in embodiments of the present disclosure. As shown in the figure, the video encoding device 10 includes:

a partitioning unit 101, configured to cooperate with a prediction unit 100 to partition received video data into slices, coding tree units (CTU: Coding Tree Unit) or other larger units. The received video data may be a video sequence including video frames such as I frames, P frames or B frames;

the prediction unit 100, configured to partition a CTU into coding units (CU: Coding Unit) and perform intra-prediction coding or inter-prediction coding on the CUs. When performing intra and inter prediction on a CU, the CU may be partitioned into one or more prediction units (PU: prediction unit).

The prediction unit 100 includes an inter-prediction unit 121 and an intra-prediction unit 126.

The inter-prediction unit 121 is configured to perform inter prediction on a PU to generate prediction data of the PU, the prediction data including a prediction block of the PU, motion information of the PU, and various syntax elements. The inter-prediction unit 121 may include a motion estimation (ME: motion estimation) unit and a motion compensation (MC: motion compensation) unit. The motion estimation unit may be used for motion estimation to generate motion vectors, and the motion compensation unit may be used to obtain or generate the prediction block according to the motion vectors.

The intra-prediction unit 126 is configured to perform intra prediction on a PU to generate prediction data of the PU; the prediction data of the PU may include a prediction block of the PU and various syntax elements.

A residual generation unit 102 (represented in the figure by the circle with a plus sign after the partitioning unit 101) is configured to generate the residual block of a CU by subtracting the prediction blocks of the PUs into which the CU is partitioned from the original block of the CU.

A transform processing unit 104 is configured to partition a CU into one or more transform units (TU: Transform Unit); the partitioning of prediction units and transform units may differ. The residual block associated with a TU is a sub-block obtained by partitioning the residual block of the CU. A coefficient block associated with the TU is generated by applying one or more transforms to the residual block associated with the TU.

A quantization unit 106 is configured to quantize the coefficients in the coefficient block based on a quantization parameter; the degree of quantization of the coefficient block can be changed by adjusting the quantization parameter (QP: Quantizer Parameter).

An inverse quantization unit 108 and an inverse transform unit 110 are configured to apply inverse quantization and inverse transform, respectively, to the coefficient block to obtain the reconstructed residual block associated with the TU.

A reconstruction unit 112 (represented in the figure by the circle with a plus sign after the inverse transform processing unit 110) is configured to add the reconstructed residual block to the prediction block generated by the prediction unit 100 to generate a reconstructed image.

A filter unit 113 is configured to perform loop filtering on the reconstructed image.

A decoded picture buffer 114 is configured to store the loop-filtered reconstructed image. The intra-prediction unit 126 can extract from the decoded picture buffer 114 reference images of blocks adjacent to the current block to perform intra prediction. The inter-prediction unit 121 can use the reference image of the previous frame buffered in the decoded picture buffer 114 to perform inter prediction on the PUs of the current frame image.

An entropy coding unit 115 is configured to perform entropy coding on the received data (e.g., syntax elements, quantized coefficient blocks, motion information) to generate the video bitstream.

In other examples, the video encoding device 10 may include more, fewer or different functional components than in this example; for instance, the transform processing unit 104, the inverse transform processing unit 110, etc. may be eliminated.
FIG. 1C is a block diagram of an exemplary video decoding device usable in embodiments of the present disclosure. As shown in the figure, the video decoding device 15 includes:

an entropy decoding unit 150, configured to perform entropy decoding on the received encoded video bitstream and extract syntax elements, quantized coefficient blocks, motion information of PUs, etc. A prediction unit 152, an inverse quantization unit 154, an inverse transform processing unit 156, a reconstruction unit 158 and a filter unit 159 can all perform corresponding operations based on the syntax elements extracted from the bitstream.

The inverse quantization unit 154 is configured to inversely quantize the quantized coefficient block associated with a TU.

The inverse transform processing unit 156 is configured to apply one or more inverse transforms to the inversely quantized coefficient block to generate the reconstructed residual block of the TU.

The prediction unit 152 includes an inter-prediction unit 162 and an intra-prediction unit 164. If the current block is coded with intra prediction, the intra-prediction unit 164 determines the intra-prediction mode of the PU based on the syntax elements decoded from the bitstream and performs intra prediction in combination with the reconstructed reference information adjacent to the current block obtained from a decoded picture buffer 160. If the current block is coded with inter prediction, the inter-prediction unit 162 determines the reference block of the current block based on the motion information of the current block and the corresponding syntax elements, and performs inter prediction on the reference block obtained from the decoded picture buffer 160.

The reconstruction unit 158 (represented in the figure by the circle with a plus sign after the inverse transform processing unit 156) obtains the reconstructed image based on the reconstructed residual block associated with the TU and the prediction block of the current block generated by the prediction unit 152 through intra or inter prediction.

The filter unit 159 is configured to perform loop filtering on the reconstructed image.

The decoded picture buffer 160 is configured to store the loop-filtered reconstructed image as a reference image for subsequent motion compensation, intra prediction, inter prediction, etc.; the filtered reconstructed image can also be output as decoded video data for presentation on a display device.

In other embodiments, the video decoding device 15 may include more, fewer or different functional components; for example, the inverse transform processing unit 156 may be eliminated in some cases.

Herein, the current block may be a block-level coding unit such as the current coding tree unit (CTU), the current coding unit (CU) or the current prediction unit (PU) in the current image.

Based on the above video encoding device and video decoding device, the following basic encoding and decoding flow can be performed. At the encoding end, a frame of image is partitioned into blocks; intra prediction, inter prediction or another algorithm is performed on the current block to generate its prediction block; the prediction block is subtracted from the original block of the current block to obtain a residual block; the residual block is transformed and quantized to obtain quantized coefficients; and the quantized coefficients are entropy-coded to generate the bitstream. At the decoding end, intra or inter prediction is performed on the current block to generate its prediction block; meanwhile, the quantized coefficients obtained by decoding the bitstream are inversely quantized and inversely transformed to obtain a residual block; the prediction block and the residual block are added to obtain a reconstructed block; reconstructed blocks compose the reconstructed image; and loop filtering is performed on the reconstructed image on a picture or block basis to obtain the decoded image. The encoding end likewise obtains the decoded image, which may also be called the loop-filtered reconstructed image, through operations similar to those of the decoding end. The loop-filtered reconstructed image can serve as a reference frame for inter prediction of subsequent frames. The block partitioning information, and the mode and parameter information of prediction, transform, quantization, entropy coding, loop filtering, etc. determined by the encoding end can be written into the bitstream. The decoding end determines the same information by decoding the bitstream or analyzing according to preset information, so as to guarantee that the decoded image obtained by the encoding end is identical to the decoded image obtained by the decoding end.
Although the above takes the block-based hybrid coding framework as an example, the embodiments of the present disclosure are not limited thereto; with the development of technology, one or more modules of the framework, and one or more steps of the flow, may be replaced or optimized.

The embodiments of the present disclosure relate to, but are not limited to, the filter unit (also called the loop filtering module) in the above encoding end and decoding end and the corresponding loop filtering methods.

In an embodiment, the filter units of the encoding end and the decoding end include tools such as a deblocking filter (DBF: DeBlocking Filter) 20, a sample adaptive offset filter (SAO: Sample adaptive Offset) 22 and an adaptive correction filter (ALF: Adaptive loop filter) 26; between SAO and ALF there is also a neural-network-based loop filter (NNLF: Neural Network based Loop Filter), as shown in FIG. 2. The filter unit performs loop filtering on the reconstructed image, which can compensate for distortion information and provide a better reference for subsequently coded pixels.
An exemplary embodiment provides a neural-network-based loop filtering NNLF scheme whose model uses the filter network shown in FIG. 3A; herein this NNLF is denoted NNLF1, and the filter executing NNLF1 is called the NNLF1 filter. As shown in the figure, the backbone of the filter network includes multiple sequentially connected residual blocks (ResBlock), as well as convolutional layers (Conv in the figure), activation layers (e.g., ReLU in the figure), concatenation layers (Cat in the figure), and a pixel shuffle layer (PixelShuffle in the figure). The structure of each residual block is shown in FIG. 3B, including a sequentially connected convolutional layer with kernel size 1x1, a ReLU layer, a convolutional layer with kernel size 1x1 and a convolutional layer with kernel size 3x3.

As shown in FIG. 3A, the input of the NNLF1 filter includes the luma information (the Y component) and the chroma information (the U and V components) of the reconstructed image (rec_YUV), and various auxiliary information, for example the luma and chroma information of the prediction image (pred_YUV), QP information and frame type information. The QP information includes the base quantization parameter (BaseQP: Base Quantization Parameter) defaulted in the coding configuration file and the slice quantization parameter (SliceQP: Slice Quantization Parameter) of the current slice; the frame type information includes the slice type (SliceType), i.e., the type of the frame to which the current slice belongs. The output of the model is the filtered image after NNLF1 filtering (output_YUV); the filtered image output by the NNLF1 filter can also serve as the reconstructed image input to subsequent filters.

NNLF1 uses one model to filter the YUV components of the reconstructed image (rec_YUV) and outputs the YUV components of the filtered image (out_YUV), as shown in FIG. 3C, where auxiliary input information such as the YUV components of the prediction image is omitted. The filter network of this model has a skip connection branch between the input reconstructed image and the output filtered image, as shown in FIG. 3A.
Another exemplary embodiment provides another NNLF scheme, denoted NNLF2. NNLF2 uses two models: one model filters the luma component of the reconstructed image, and the other filters its two chroma components. The two models can use the same filter network, and there is likewise a skip connection branch between the reconstructed image input to the NNLF2 filter and the filtered image output by the NNLF2 filter. As shown in FIG. 4A, the backbone of this filter network includes multiple sequentially connected residual blocks with an attention mechanism (AttRes Block), a convolutional layer (Conv 3x3) for feature mapping, and a shuffle layer (Shuffle). The structure of each attention residual block is shown in FIG. 4B, including a sequentially connected convolutional layer (Conv 3x3), activation layer (PReLU), convolutional layer (Conv 3x3) and attention layer (Attention); M denotes the number of feature maps and N the number of samples in one dimension.

Model one of NNLF2, used to filter the luma component of the reconstructed image, is shown in FIG. 4C: its input includes the luma component of the reconstructed image (rec_Y) and its output is the luma component of the filtered image (out_Y); auxiliary input information such as the luma component of the prediction image is omitted in the figure. Model two of NNLF2, used to filter the two chroma components of the reconstructed image, is also shown in FIG. 4C: its input includes the two chroma components of the reconstructed image (rec_UV) and, as auxiliary input information, the luma component of the reconstructed image (rec_Y); the output of model two is the two chroma components of the filtered image (out_UV). Model one and model two can also include other auxiliary input information, such as QP information, the block partition image and deblocking boundary strength information.

The above NNLF1 and NNLF2 schemes can be implemented, in neural-network-based video coding (NNVC: Neural Network based Video Coding), with the neural-network-based common software (NCS: Neural Network based Common Software), serving as baseline tools, i.e., baseline NNLF, in the reference software test platform of NNVC.

In the deep learning field, the concept of residual learning was proposed: through a simple skip connection structure from the input end to the output end, the network focuses on learning the residual information of the image, improving the network's learning ability and prediction performance; the basic structure of the residual network (ResNet) is shown in FIG. 5. NNLF1 and NNLF2 draw on the idea of residual learning; referring to FIG. 5, their filter networks include a neural network (NN) and a skip connection branch from the reconstructed image input to the filter to the filtered image output by the filter. The filtered image output by NNLF1 and NNLF2 is cnn = rec + res, where rec denotes the input reconstructed image and res denotes the residual image output by the neural network; the neural network comprises all parts of the filter network other than the above skip connection branch and has the function of predicting residual information. NNLF1 and NNLF2 use the neural network to predict the residual information of the input reconstructed image relative to the original image, i.e., the residual image, and then superimpose the residual image on the input reconstructed image (i.e., add it to the reconstructed image) to obtain the filtered image output by the filter, which can make the quality of the filtered image closer to the original image.
In video coding, inter-prediction techniques allow the current frame to reference the image information of preceding frames, improving coding performance, and the coding effect of preceding frames also influences the coding effect of subsequent frames. In the NNLF1 and NNLF2 schemes, so that the filter network can adapt to the influence of inter prediction, the model training process includes an initial training stage and an iterative training stage, using multiple rounds of training. In the initial training stage, the model to be trained is not yet deployed in the encoder; the model is trained for a first round on collected sample data of reconstructed images, yielding the model after the first round of training. In the iterative training stage, the model is deployed in the encoder: first the model after the first round is deployed in the encoder, sample data of reconstructed images is collected anew, and a second round of training yields the model after the second round; then the model after the second round is deployed in the encoder, sample data of reconstructed images is collected anew, and a third round of training yields the model after the third round, and so on iteratively. Finally, a coding test on a validation set is performed for the model after each round of training, and the model with the best coding performance is found for actual deployment.

However, with this multi-round training procedure, training still lags behind the coding test to some extent. The analysis is as follows: FIG. 6A is a schematic diagram of the (N+1)-th training round. As shown in the figure, in round N+1, the model model_N obtained after round N is deployed in the encoder and training data of multi-frame reconstructed images is collected; the boxes marked 0, 1, 2, ... denote the reconstructed images of the 1st, 2nd, 3rd, ... frames. Training yields the model model_N+1 after round N+1. Suppose the coding test finds that model_N+1 performs best, and training is complete.

When model_N+1 is coding-tested, the model model_N+1 is deployed in the encoder or decoder, as shown in FIG. 6B: the preceding frames referenced by an inter-coded current frame are generated with loop filtering based on model_N+1, so training lags behind testing. For model_N+1, however, the environment it is suited to is that of training round N+1, in which the preceding frames referenced by the current frame were loop-filtered with model_N, which differs from the environment in which model_N+1 is coding-tested. Since the performance of model_N+1 is better than that of model_N, during the coding test the preceding frames referenced by the current frame, having been filtered with model_N+1, are further improved, so that the quality of the reconstructed image input when coding-testing model_N+1 is higher (its residual relative to the original image is smaller) than the quality expected in the training environment. Yet model_N+1 still predicts residuals according to its trained capability, so the residual output by the neural network in model_N+1 may be too large; so far no scheme has attempted to adjust this residual.

Herein, with regard to the residual values of the residual image, residual adjustment making the residual of the residual image smaller means that the residual values in the residual image become closer to 0, i.e., the absolute values of the residual values become smaller, e.g., 3 becomes 2 and -3 becomes -2, not a change such as -3 becoming -4. Moreover, "the residual becomes smaller" refers to the residual image as a whole: the absolute values of the residual values of some pixels may decrease while the residual values of other pixels stay unchanged. Apart from zero residual values, which do not change, the absolute values of all non-zero residual values may decrease, or only the absolute values of some non-zero residual values may decrease; for example, the residual values in the intervals [1, 2] and [-2, -1] may remain unchanged while the absolute values of residual values greater than or equal to 3 or less than or equal to -3 decrease.
An embodiment of the present disclosure provides a neural-network-based loop filtering method, applied to an NNLF filter at the encoding end, the NNLF filter including a neural network and a skip connection branch from the input to the output of the NNLF filter, the method including:

Step S110: inputting a reconstructed image into the neural network and obtaining the residual image output by the neural network;

Step S120: calculating a rate-distortion cost cost1 of performing NNLF on the reconstructed image in a first mode, and a rate-distortion cost cost2 of performing NNLF on the reconstructed image in a second mode;

wherein the first mode is an NNLF mode that does not perform residual adjustment on the residual image, and the second mode is an NNLF mode that performs residual adjustment on the residual image;

Step S130: when cost1 < cost2, selecting the first mode to perform NNLF on the reconstructed image; when cost2 < cost1, selecting the second mode to perform NNLF on the reconstructed image; when cost1 = cost2, selecting the first mode or the second mode to perform NNLF on the reconstructed image.

With the neural-network-based loop filtering method of this embodiment, the encoding end can select, between the mode that performs residual adjustment and the mode that does not, the one with the smaller rate-distortion cost to perform NNLF, which compensates to a certain extent for the performance loss caused by the lag of the NNLF model's training relative to the coding test, thereby improving the NNLF filtering effect and the coding performance.
Unless limited to the contrary, residual adjustment herein refers to the residual adjustment made to the residual image when performing neural-network-based loop filtering on the reconstructed image.

In an exemplary embodiment of the present disclosure, the reconstructed image is the reconstructed image of the current frame, the current slice or the current block, but it may also be the reconstructed image of another coding unit. Herein, the reconstructed image subjected to NNLF filtering may be a coding unit of different levels, such as picture level (including frame and slice) or block level.

In an exemplary embodiment of the present disclosure, the residual adjustment makes the residual in the residual image smaller.

In an exemplary embodiment of the present disclosure, calculating the rate-distortion cost cost1 of performing NNLF on the reconstructed image in the first mode includes: adding the residual image to the reconstructed image to obtain a first filtered image; and calculating cost1 according to the difference between the first filtered image and the corresponding original image. In the case where the reconstructed image and the residual image each include three components, such as the Y, U and V components, cost1 can be obtained by computing the sum of squared differences (SSD) between the first filtered image and the original image on the three components, then weighting the SSDs on the three components and adding them. In this embodiment, selecting the first mode to perform NNLF on the reconstructed image includes: taking the first filtered image obtained by adding the residual image to the reconstructed image as the filtered image output after performing NNLF on the reconstructed image. When performing NNLF on the reconstructed image in the first mode, this embodiment may use the filtering method of NNLF1 or NNLF2 above, or another filtering method that does not perform residual adjustment on the residual image.
In an exemplary embodiment of the present disclosure, calculating the rate-distortion cost cost2 of performing NNLF on the reconstructed image in the second mode includes:

for each of the set residual adjustment methods, performing residual adjustment on the residual image and adding it to the reconstructed image to obtain a second filtered image, then calculating a rate-distortion cost according to the difference between the second filtered image and the original image; and

taking the smallest of all the calculated rate-distortion costs as cost2.

In one example of this embodiment, there is one set residual adjustment method, so one rate-distortion cost is calculated, and that rate-distortion cost is cost2. In another example of this embodiment, there are multiple set residual adjustment methods; supposing there are two, two rate-distortion costs are calculated, and the smaller of the two rate-distortion costs is taken as cost2.

In one example of this embodiment, performing residual adjustment on the residual image and adding it to the reconstructed image can mean performing residual adjustment on the residual image first and then adding the adjustment result to the reconstructed image; the residual adjustment may, for example, subtract 1 from positive residual values and add 1 to negative residual values in the residual image, i.e., make no adjustment to zero residual values, so that the residual of the residual image becomes smaller overall. In a concrete implementation, however, the computation need not follow this order: the residual image may first be added to the reconstructed image, the resulting image also containing the residual image, and the residual adjustment may then be performed on the residual part, which is equally feasible. Taking any pixel of the image as an example: suppose the value of the pixel in the residual image (the residual value) is x, the value of the pixel in the reconstructed image (the reconstruction value) is y, and the residual adjustment subtracts 1 from the pixel's residual value; then, when calculating the value of the pixel in the second filtered image, subtracting 1 from x and then adding y, or adding y to x and then subtracting 1, gives the same result. The same holds for the concrete implementation of performing residual adjustment on the residual image (or a component thereof) and adding it to the reconstructed image (or a component thereof) in other embodiments of the present disclosure, including embodiments at the decoding end.

In one example of this embodiment, the reconstructed image and the residual image each include three components, and calculating a rate-distortion cost according to the difference between the second filtered image and the original image includes: computing the sum of squared differences SSD between the second filtered image and the original image on the three components, then weighting the SSDs on the three components and adding them to obtain the rate-distortion cost.
在本公开一示例性的实施例中,所述设定的残差调整方式包括以下一种或多种类型的残差调整方式:
将残差图像中的非零残差值加上或减去固定值,使得所述非零残差值的绝对值变小;例如,将残差图像中正的残差值减1,负的残差值加1。
将残差图像中的非零残差值按照其所在区间加上或减去该区间对应的调整值,使得所述非零残差值的绝对值变小;其中,所述区间有多个,且区间中的值越大,对应的调整值也越大。例如,将残差图像中大于等于1且小于等于5的残差值减1,大于5的残差值减2,小于等于-1且大于等于-5的残差值加1,小于-5的残差值加2。
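上述按区间的残差调整方式，按本段示例数值可以写成如下Python草图（区间边界与调整值取自上文示例，函数名为假设）：

```python
def interval_adjust(res):
    # 文中示例: |res|在[1,5]内调整1，|res|大于5调整2，为0不调整
    if res == 0:
        return 0
    step = 1 if abs(res) <= 5 else 2
    return res - step if res > 0 else res + step
```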
本实施例通过调整滤波网络输出的残差信息来提升编码性能。如上文所述，神经网络输出的残差图像的残差可能偏大，通过残差调整将残差调小，有助于提升编码性能。对于一残差图像，其中的每个像素点的残差值可能为正也可能为负，调小残差时，可以将正的残差值减去固定值（正数），将负的残差值加上该固定值，残差值为0时不做调整，使得整体的残差值变小即更接近于0。如图16所示，假设固定值为(+1)，原始残差图像中各像素点的残差值见图左，经过残差调整后的残差图像中各像素点的残差值见图右。将经残差调整后的残差图像叠加到输入的重建图像上，得到滤波图像，也可以称为经NNLF的重建图像。可以设置多个固定值，对应多种残差调整方式，通过计算每一种残差调整方式下的率失真代价，选择率失真代价最小的一种残差调整方式所采用的固定值，并把该残差调整方式对应的索引编入码流中，供解码端读取并处理。
除了采用固定值的残差调整方式外,也可以采用其他类型的残差调整方式,例如:根据残差值的大小,将其进行分段,尝试不同精度的调整操作,例如对于绝对值较大的残差值,设置绝对值较大的调整值;对于绝对值较小的残差值,设置绝对值较小的调整值。
一个示例的伪代码如下:
假设对于当前帧的当前像素点,其对应的残差值为res,需要决策出的调整值为RO_FACTOR,具体的调整值导出的策略如下。
if (res == 0)            RO_FACTOR = 0;
else if (0 < res <= x1)  RO_FACTOR = a1;
else if (x1 < res <= x2) RO_FACTOR = a2;
else if (x2 < res)       RO_FACTOR = a3;
else if (y1 <= res < 0)  RO_FACTOR = b1;
else if (y2 <= res < y1) RO_FACTOR = b2;
else if (res < y2)       RO_FACTOR = b3;
其中，x1、x2为正残差值的分段阈值，y1、y2为负残差值的分段阈值（y2<y1<0），{a1,a2,a3}和{b1,b2,b3}均为预设的调整值。
以上方案是对为零的残差值不做调整,对非零残差值,查找其落入的区间(共设定6个区间),从而确定应使用的调整值。
本实施例中,设定的多种残差调整方式可以包括一种类型的残差调整方式,也可以包括多种类型的残差调整方式。
上述实施例是对残差图像中的3个分量统一进行调整,使用相同的残差调整方式。使用该残差调整方式对残差图像进行残差调整,是在3分量统一调整的前提下整体上最优的结果,但是对于残差图像中的具体分量来说,并不一定是最优的残差调整方式。对此,可以对每个分量单独进行是否进行残差调整以及残差调整方式的选择,使编码性能进一步优化。
本公开一实施例提供了一种基于神经网络的环路滤波方法,应用于编码端的NNLF滤波器,所述NNLF滤波器包括神经网络,以及一条从NNLF滤波器的输入到输出的跳连接支路,如图8所示,所述方法包括:
步骤S210,将重建图像输入所述神经网络,获取所述神经网络输出的残差图像;所述重建图像和残差图像均包括3个分量,如Y分量、U分量和V分量;及
步骤S220，对所述3个分量中的每一分量分别执行以下处理，该处理可称为模式选择处理：
计算采用第一模式对所述重建图像的该分量进行NNLF的率失真代价cost 1,及采用第二模式对所述重建图像的该分量进行NNLF的率失真代价cost 2;第一模式是不对所述残差图像的该分量进行残差调整的NNLF模式,第二模式是对所述残差图像的该分量进行残差调整的NNLF模式;
在cost 1<cost 2的情况下，选择所述第一模式对所述重建图像的该分量进行NNLF；在cost 2<cost 1的情况下，选择所述第二模式对所述重建图像的该分量进行NNLF；在cost 1=cost 2的情况下，选择所述第一模式或第二模式对所述重建图像的该分量进行NNLF。
本实施例对每个分量单独进行模式选择处理，可以使编码性能在对3个分量统一调整的前述实施例的基础上进一步优化；由于相关运算是在NNLF滤波器的输出端进行的，对运算复杂度的影响不大。
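对每个分量独立进行模式选择，可以用如下Python片段示意（per_component_select为假设的函数名，入参为各分量在两种模式下已算出的率失真代价）：

```python
def per_component_select(costs1, costs2):
    # costs1[j]/costs2[j]: 第j个分量采用第一/第二模式的率失真代价
    # cost相等时两种模式均可，这里取第一模式
    return ["mode1" if c1 <= c2 else "mode2"
            for c1, c2 in zip(costs1, costs2)]
```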
在本公开一示例性的实施例中,所述重建图像为当前帧或当前切片或当前块的重建图像。
在本公开一示例性的实施例中,所述残差调整使得所述残差图像的分量中的残差变小。
在本公开一示例性的实施例中，所述计算采用第一模式对所述重建图像的该分量进行NNLF的率失真代价cost 1，包括：将所述残差图像的该分量与所述重建图像的该分量相加，得到滤波后的该分量；及，根据滤波后的该分量与相应原始图像的该分量之间的差异，计算得到所述cost 1。在一个示例中，所述差异用SSD表示，即将滤波后的该分量与相应原始图像的该分量之间的SSD作为所述cost 1。在其他示例中，所述差异也可以用均方误差(MSE:Mean Squared Error)、平均绝对误差(MAE:Mean Absolute Error)等其他的指标来表示，本公开对此不作限定。本公开的其他实施例同此。
本实施例的一示例中，选择所述第一模式对所述重建图像的该分量进行NNLF，包括：将所述残差图像的该分量与所述重建图像的该分量相加得到的滤波后的该分量，作为对所述重建图像进行NNLF后输出的滤波图像的该分量。本实施例按照第一模式对重建图像的该分量进行NNLF得到的滤波图像的该分量，没有对残差图像的该分量进行残差调整，因此滤波图像的该分量与不进行残差调整的NNLF方案（如NNLF1、NNLF2）得到的滤波图像的该分量可以相同。
在本公开一示例性的实施例中,所述计算采用第二模式对所述重建图像的该分量进行NNLF的率失真代价cost 2,包括:
按照为该分量设定的每一种残差调整方式,对所述残差图像的该分量进行残差调整并与所述重建图像的该分量相加,得到滤波后的该分量;再根据滤波后的该分量与相应原始图像的该分量之间的差异,计算该分量的一个率失真代价;例如,将滤波后的该分量与相应原始图像的该分量之间的SSD作为该分量的率失真代价;及
将计算得到的该分量的所有率失真代价中最小的率失真代价,作为该分量的cost 2;其中,为该分量设定的残差调整方式有一种或多种。
在本实施例的一示例中,所述选择所述第二模式对所述重建图像的该分量进行NNLF,包括:
将按照该分量的cost 2对应的残差调整方式,对所述残差图像的该分量进行残差调整并与所述重建图像的该分量相加得到的滤波后的该分量,作为对所述重建图像进行NNLF后输出的滤波图像的该分量。
在本实施例的一示例中,对所述残差图像的该分量进行残差调整并与所述重建图像的该分量相加,可以是对所述残差图像的该分量进行残差调整后,将残差调整的结果与所述重建图像的该分量相加,但在具体实现时,并不一定要按照该顺序计算。
在本实施例的一示例中,为所述3个分量设定的残差调整方式相同或不同;例如,对于Y分量设定的残差调整方式为:将残差图像中正的残差值减1,负的残差值加1;而为U分量和V分量设定的残差调整方式为:将残差图像中大于2的残差值减1,小于-2的残差值加1。
在本实施例的一示例中,为所述3个分量中至少一个分量设定的残差调整方式包括以下一种或多种类型的残差调整方式:
将残差图像中的非零残差值加上或减去固定值,使得所述非零残差值的绝对值变小;
将残差图像中的非零残差值按照其所在区间加上或减去该区间对应的调整值,使得所述非零残差值的绝对值变小;其中,所述区间有多个,且区间中的值越大,对应的调整值也越大。
本公开一实施例还提供了一种视频编码方法,应用于视频编码装置,包括:对重建图像进行基于神经网络的环路滤波NNLF时,如图8所示,执行以下处理:
步骤S310,在NNLF允许残差调整的情况下,按照本公开任一实施例所述的NNLF方法对所述重建图像进行NNLF;
步骤S320,编码所述重建图像的残差调整使用标志,以表示对所述重建图像进行NNLF时是否需要进行残差调整。
本公开实施例在对重建图像进行基于神经网络的环路滤波时,可以根据率失真代价选择对残差图像进行调整或者不进行调整,可以补偿NNLF模式的训练相对编码测试的滞后性带来的性能损失,提高编码性能。
在本公开一示例性的实施例中,所述残差调整使用标志为图像级语法元素或者块级语法元素。
在本公开一示例性的实施例中,在满足以下一种或多种条件的情况下,确定NNLF允许残差调整:
解码序列级的残差调整允许标志,根据该残差调整允许标志的值确定NNLF允许残差调整;
解码图像级的残差调整允许标志,根据该残差调整允许标志的值确定NNLF允许残差调整。
除上述条件外,还可以增加其他条件,例如,将输入的重建图像所在的帧为帧间编码帧作为NNLF允许残差调整的必要条件,等。
在本公开的其他实施例中,也可以一直启用NNLF的残差调整,此时不需要通过标志来判断,默认NNLF允许残差调整。
在本公开一示例性的实施例中，所述方法还包括：确定NNLF不允许残差调整的情况下，跳过对所述残差调整使用标志的编码，将输入所述神经网络的所述重建图像与所述神经网络输出的残差图像相加，得到对所述重建图像进行NNLF后输出的滤波图像。即此时可使用不进行残差调整的NNLF实现对重建图像的滤波。
在本公开一示例性的实施例中,所述方法是按照上述对残差图像的3个分量统一进行残差调整的任一实施例对所述重建图像进行NNLF;所述重建图像的残差调整使用标志roflag的个数为1个;在选择所述第一模式对所述重建图像进行NNLF的情况下,所述roflag被置为表示不需要进行残差调整的值如0;在选择所述第二模式对所述重建图像进行NNLF的情况下,所述roflag被置为表示需要进行残差调整的值如1。
在本实施例的一示例中，所述方法还包括：在所述roflag被置为表示需要进行残差调整的值，且所述设定的残差调整方式有多种的情况下，继续编码所述重建图像的残差调整方式索引，所述残差调整方式索引用于指示进行残差调整时所基于的残差调整方式。例如，设定的残差调整方式有3种时，残差调整方式索引可以是2bit标志，在该标志的值为0,1,2时分别表示3种残差调整方式，值与残差调整方式的对应关系在编码端和解码端事先约定好，例如在标准、协议中定义。
本实施例是采用两个标志,即残差调整使用标志和残差调整方式索引来分别表示是否需要进行残差调整,以及进行残差调整所基于的残差调整方式(设定有多种残差调整方式时)。但在本公开另一示例性的实施例中,在设定的残差调整方式有多种的情况下,所述残差调整使用标志还用于表示进行残差调整所基于的残差调整方式,即本实施例是使用残差调整使用标志同时表示是否需要进行残差调整及残差调整所基于的残差调整方式。例如,在设定的残差调整方式有3种的情况下,使用2bit的残差调整使用标志roflag,该roflag的4个值可以分别表示不需要进行残差调整,使用第1种残差调整方式进行残差调整,使用第2种残差调整方式进行残差调整,及使用第3种残差调整方式进行残差调整。
在设定的残差调整方式有3种、不需要进行残差调整的情况下，使用2个标志的实施例只需要编码1个1bit的标志即残差调整使用标志，不需要编码残差调整方式索引；而使用1个标志的实施例，需要编码1个2bit的残差调整使用标志。在设定的残差调整方式有3种、需要进行残差调整的情况下，使用2个标志的实施例需要编码1个1bit的残差调整使用标志和1个2bit的残差调整方式索引，使用1个标志的实施例需要编码1个2bit的残差调整使用标志。
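上段两种标志方案的bit开销比较可以用如下Python片段核算（按定长编码假设，bits_two_flag/bits_one_flag为示意用的函数名）：

```python
def bits_two_flag(adjust_used, num_modes=3):
    # 两标志方案：1bit使用标志，需要调整时再加定长索引
    index_bits = (num_modes - 1).bit_length()   # 3种方式 -> 2bit索引
    return 1 + (index_bits if adjust_used else 0)

def bits_one_flag(adjust_used, num_modes=3):
    # 单标志方案：一个定长标志覆盖 num_modes+1 个取值（含"不调整"）
    return num_modes.bit_length()               # 即 ceil(log2(num_modes+1))
```

与正文一致：不需要调整时，两标志方案1bit、单标志方案2bit；需要调整时，分别为3bit与2bit。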
在本公开一示例性的实施例中,所述方法是按照上述对残差图像的3个分量分别进行残差调整的任一实施例对所述重建图像进行NNLF;所述重建图像的残差调整使用标志roflag(j)的个数为3,j=1,2,3,roflag(j)用于表示对所述重建图像的第j个分量进行NNLF时是否需要进行残差调整;在选择所述第一模式对所述重建图像的第j个分量进行NNLF的情况下,roflag(j)被置为表示不需要进行残差调整的值如0,在选择所述第二模式对所述重建图像的第j个分量进行NNLF的情况下,roflag(j)被置为表示需要进行残差调整的值如1。
在本实施例的一示例中,所述方法还包括:在所述roflag(j)被置为表示需要进行残差调整的值,且为第j个分量设定的残差调整方式有多种的情况下,继续编码所述重建图像的第j个分量的残差调整方式索引index(j),以指示对所述残差图像的第j个分量进行残差调整时所基于的残差调整方式。
在本公开另一示例性的实施例中,在为第j个分量设定的残差调整方式有多种的情况下,所述第j个分量的残差调整使用标志还用于表示进行残差调整所基于的残差调整方式,即本实施例使用第j个分量的残差调整使用标志同时表示是否需要进行残差调整及残差调整所基于的残差调整方式。
本公开一实施例还提供了一种基于神经网络的环路滤波方法,应用于解码端的NNLF滤波器,所述NNLF滤波器包括神经网络,以及一条从NNLF滤波器的输入到输出的跳连接支路,如图10所示,所述方法包括:
步骤S410,解码重建图像的残差调整使用标志roflag,所述roflag用于表示对重建图像进行NNLF时是否需要进行残差调整;
步骤S420,根据所述roflag确定不需要进行残差调整的情况下,采用第一模式对所述重建图像进行NNLF;根据所述roflag确定需要进行残差调整的情况下,采用第二模式对所述重建图像进行NNLF;
其中,所述第一模式是不对神经网络输出的残差图像进行残差调整的NNLF模式,所述第二模式是对所述残差图像进行残差调整的NNLF模式。
图11是本实施例解码端对重建图像进行NNLF的示意图,如图所示,NNLF存在两种路径,一条是将神经网络(NN)输出的残差图像与重建图像相加,作为NNLF输出的滤波图像;另一条需要对残差图像进行残差调整(图中的RO表示残差调整模块,用于进行残差调整)并与重建图像相加,而两条路径的选择根据解码的残差调整使用标志确定。本实施例是在NNLF输出端进行模式选择,标志解码可以在得到滤波图像之前,但不局限于此。
本实施例基于神经网络的环路滤波方法,通过解码残差调整使用标志,从进行残差调整和不进行残差调整的两种模式中选择一种较优的模式,可以增强NNLF的滤波效果,提升解码的图像的质量。
在本公开一示例性的实施例中，roflag可以采用1位的标志，则根据roflag的值就可以表示是否需要进行残差调整。例如，在roflag的值为1时，确定需要进行残差调整，而在roflag的值为0时，确定不需要进行残差调整。本公开的其他实施例同此。
在本公开一示例性的实施例中,所述残差调整使得所述残差图像的残差变小。
在本公开一示例性的实施例中,所述重建图像为当前帧或当前切片或当前块的重建图像。
在本公开一示例性的实施例中,所述残差调整使用标志为图像级语法元素或者块级语法元素。
在本公开一示例性的实施例中,所述采用第一模式对所述重建图像进行NNLF,包括:将所述神经网络输出的残差图像与输入所述神经网络的所述重建图像相加,得到对所述重建图像进行NNLF后输出的滤波图像。所述采用第二模式对所述重建图像进行NNLF,包括:按照设定的残差调整方式中的一种对所述残差图像进行残差调整并与所述重建图像相加,得到对所述重建图像进行NNLF后输出的滤波图像。
在本实施例的一示例中,所述设定的残差调整方式有多种,所述按照设定的残差调整方式中的一种对所述残差图像进行残差调整,包括:继续解码所述重建图像的残差调整方式索引index,所述index用于指示进行残差调整时所基于的残差调整方式;及,根据所述index指示的残差调整方式对所述残差图像进行残差调整。
本实施例是基于使用2个标志分别表示是否需要进行残差调整以及残差调整所基于的残差调整方式。在另一示例性的实施例中,在设定的残差调整方式有多种时,编码端使用1个标志即残差调整使用标志roflag同时表示是否需要进行残差调整以及残差调整所基于的残差调整方式,此时解码端继续根据所述重建图像的roflag确定进行残差调整时所基于的残差调整方式,及根据确定的残差调整方式对所述残差图像进行残差调整。
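解码端依据roflag（及可选的index）选择滤波路径，可以用如下Python草图表示（decode_nnlf及adjusters列表均为说明用的假设，调整方式需与编码端事先约定一致）：

```python
def decode_nnlf(recon, residual, roflag, index=None, adjusters=None):
    # roflag为0: 第一模式，直接将残差叠加到重建图像
    if roflag == 0:
        return [r + d for r, d in zip(recon, residual)]
    # roflag为1: 第二模式，按index指示的调整方式先调整残差再叠加
    adj = adjusters[index]
    return [r + adj(d) for r, d in zip(recon, residual)]
```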
在本公开一示例性的实施例中,所述设定的残差调整方式包括以下一种或多种类型的残差调整方式:
将残差图像中的非零残差值加上或减去固定值,使得所述非零残差值的绝对值变小;
将残差图像中的非零残差值按照其所在区间加上或减去该区间对应的调整值,使得所述非零残差值的绝对值变小;其中,所述区间有多个,且区间中的值越大,对应的调整值也越大。
本公开一实施例还提供了一种基于神经网络的环路滤波NNLF方法,应用于解码端的NNLF滤波器,所述NNLF滤波器包括神经网络,以及一条从NNLF滤波器的输入到输出的跳连接支路,所述方法包括,对输入神经网络的包括3分量的重建图像进行NNLF时,如图12所示,对每一分量分别执行以下处理:
步骤S510,解码重建图像的该分量的残差调整使用标志roflag,所述roflag用于表示对所述重建图像的该分量进行NNLF时是否需要进行残差调整;
步骤S520,根据所述roflag确定不需要进行残差调整的情况下,采用第一模式对所述重建图像的该分量进行NNLF;根据所述roflag确定需要进行残差调整的情况下,采用第二模式对所述重建图像的该分量进行NNLF;
其中,所述第一模式是不对神经网络输出的残差图像的该分量进行残差调整的NNLF模式,所述第二模式是对所述残差图像的该分量进行残差调整的NNLF模式。
本实施例基于神经网络的环路滤波方法,通过解码残差调整使用标志,对每一分量,从进行残差调整和不进行残差调整的两种NNLF模式中选择一种较优的模式对该分量进行NNLF,相对多分量统一进行模式选择,可以进一步增强NNLF的滤波效果,提升解码图像的质量。
在本公开一示例性的实施例中,所述残差调整使得所述残差图像的分量中的残差变小。
在本公开一示例性的实施例中,所述重建图像为当前帧或当前切片或当前块的重建图像。
在本公开一示例性的实施例中,所述残差调整使用标志为图像级语法元素或者块级语法元素。
在本公开一示例性的实施例中,所述采用第一模式对所述重建图像的该分量进行NNLF,包括:将所述残差图像的该分量与所述重建图像的该分量相加,得到对所述重建图像进行NNLF后输出的滤波图像的该分量;
所述采用第二模式对所述重建图像的该分量进行NNLF，包括：按照为该分量设定的残差调整方式中的一种，对所述残差图像的该分量进行残差调整并与所述重建图像的该分量相加，得到对所述重建图像进行NNLF后输出的滤波图像的该分量；所述为该分量设定的残差调整方式有一种或多种。
在本实施例的一示例中,所述为该分量设定的残差调整方式有多种,所述按照设定的残差调整方式中的一种对所述残差图像的该分量进行残差调整,包括:继续解码所述重建图像的该分量的残差调整方式索引index,所述index用于指示进行残差调整时所基于的残差调整方式;及,根据所述index指示的残差调整方式对所述残差图像的该分量进行残差调整。
本实施例的一示例中,图像头如下表所示:
Figure PCTCN2022125231-appb-000001
表中,ro_enable_flag表示序列级的残差调整允许标志,ro_enable_flag为1时,再定义以下语义:
残差调整使用标志picture_ro_enable_flag(相当于其他实施例的roflag);
当picture_ro_enable_flag为1时,定义以下语义:
残差调整方式索引picture_ro_index。
上表中的compIdx表示颜色分量的序号。对于YUV格式的图像来说,一般取0/1/2。
在其他示例,可以以块(如CTU)为单位进行NNLF,此时残差调整使用标志、残差调整方式索引定义为块级的语法元素。
本实施例是基于使用2个标志分别表示分量是否需要进行残差调整以及残差调整所基于的残差调整方式。在另一示例性的实施例中,为该分量设定的残差调整方式有多种时,编码端使用1个标志即残差调整使用标志roflag同时表示该分量是否需要进行残差调整以及残差调整所基于的残差调整方式,此时解码端根据该分量的roflag确定进行残差调整时所基于的残差调整方式,及根据确定的残差调整方式对所述残差图像的该分量进行残差调整。
在本公开一示例性的实施例中,为所述3个分量设定的残差调整方式相同或不同;为所述3个分量中至少一个分量设定的残差调整方式包括以下一种或多种类型的残差调整方式:
将残差图像中的非零残差值加上或减去固定值,使得所述非零残差值的绝对值变小;
将残差图像中的非零残差值按照其所在区间加上或减去该区间对应的调整值,使得所述非零残差值的绝对值变小;其中,所述区间有多个,且区间中的值越大,对应的调整值也越大。
本公开一实施例还提供了一种视频解码方法,应用于视频解码装置,包括:对重建图像进行基于神经网络的环路滤波时,如图13所示,执行以下处理:
步骤S610,确定NNLF是否允许残差调整;
步骤S620,在NNLF允许残差调整的情况下,按照本公开应用于解码端NNLF滤波器的任一实施例所述的NNLF方法对所述重建图像进行NNLF。
本实施例视频解码方法,通过解码残差调整使用标志,对每一分量,从进行残差调整和不进行残差调整的两种NNLF模式中选择一种较优的模式对该分量进行NNLF,相对多分量统一进行模式选择,可以进一步增强NNLF的滤波效果,提升解码图像的质量。
在本公开一示例性的实施例中,在满足以下一种或多种条件的情况下,确定NNLF允许残差调整:
解码序列级的残差调整允许标志,根据该残差调整允许标志确定NNLF允许残差调整;
解码图像级的残差调整允许标志,根据该残差调整允许标志确定NNLF允许残差调整。
除上述条件外,还可以增加其他条件,例如,将输入的重建图像所在的帧为帧间编码帧作为NNLF允许残差调整的必要条件,等。
使用序列级的残差调整允许标志的一示例中,视频序列的序列头如下表所示:
Figure PCTCN2022125231-appb-000002
表中的ro_enable_flag即序列级的残差调整允许标志。
在本公开一示例性的实施例中,所述方法还包括:在NNLF不允许残差调整的情况下,将输入所述神经网络的所述重建图像与所述神经网络输出的所述残差图像相加,得到对所述重建图像进行NNLF后输出的滤波图像。
在本公开一示例性的实施例中，所述NNLF滤波器设置在去块滤波器或样值自适应补偿滤波器之后，及在自适应修正滤波器之前。在本实施例的一示例中，滤波器单元（或称环路滤波模块，参见图1B和图1C）的结构如图14所示，图中的DBF表示去块滤波器，SAO表示样值自适应补偿滤波器，ALF表示自适应修正滤波器。NN表示NNLF滤波器中用于环路滤波的神经网络，可以与NNLF1、NNLF2等不进行残差调整的NNLF滤波器的神经网络相同。NNLF滤波器还包括残差调整模块(RO:Residual Offset)和两条分别从NN输入到两条路径输出的跳连接支路，RO用于对神经网络输出的残差图像进行残差调整。这些滤波器均属于重建图像的滤波器单元的组成部分。在环路滤波时，DBF、SAO、ALF中的部分或全部可以不启用。NNLF滤波器部署的位置并不局限于本实施例所述的位置，容易理解，本公开NNLF方法的实现并不受限于其部署位置。此外滤波器单元中的滤波器也不局限于图14所示，可以有更多、更少的滤波器，或其他类型的滤波器。
本公开一实施例提供了一种基于神经网络的环路滤波方法,编码端对重建图像进行环路滤波时,按照部署的滤波器顺序进行处理,当进入NNLF时,执行以下处理:
第一步,根据序列级的残差调整允许标志ro_enable_flag判断当前序列下是否允许残差调整。若ro_enable_flag为“1”,表示允许对当前序列尝试进行残差调整,跳至第二步;若ro_enable_flag为“0”,表示当前序列不允许进行残差调整,结束(略过后续处理);
第二步，将当前帧的重建图像输入NNLF的神经网络进行预测，从神经网络的输出得到残差图像，将残差图像叠加到输入的重建图像上，得到第一滤波图像；
第三步,对残差图像进行残差调整再叠加到输入的重建图像上,得到第二滤波图像;
第四步,将第一滤波图像与当前帧的原始图像相比较,计算率失真代价C NNLF;将第二滤波图像与当前帧的原始图像比较,计算率失真代价C RO
第五步,比较两种代价,如果C RO<C NNLF,将第二滤波图像作为NNLF滤波器输出的滤波图像,即选择第二模式对重建图像进行NNLF;如果C RO≥C NNLF,将第一滤波图像作为滤波器输出的滤波图像,即选择第一模式对重建图像进行NNLF;
本实施例的率失真代价cost的计算公式为:
cost=Wy*SSD(Y)+Wu*SSD(U)+Wv*SSD(V)
其中,SSD(*)表示对于某颜色分量求SSD;Wy,Wu,Wv分别表示Y分量、U分量和V分量的SSD的权重值,如可以取10:1:1或8:1:1等。
其中,SSD的计算公式如下:
SSD = Σ_{x=0..M-1} Σ_{y=0..N-1} (rec(x,y) - org(x,y))²
其中，M表示当前帧重建图像的长度，N表示当前帧重建图像的宽度，rec(x,y)和org(x,y)分别表示重建图像和原始图像在像素点(x,y)处的像素值。
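上述加权SSD形式的率失真代价可以用如下Python片段示意（rd_cost为假设的函数名，权重取文中示例的10:1:1）：

```python
def rd_cost(filtered, original, weights=(10, 1, 1)):
    # filtered/original: 各为(Y, U, V)三个分量的像素值列表
    def ssd(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    # cost = Wy*SSD(Y) + Wu*SSD(U) + Wv*SSD(V)
    return sum(w * ssd(f, o) for w, f, o in zip(weights, filtered, original))
```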
第六步,将当前帧的残差调整使用标志picture_ro_enable_flag,及残差调整方式索引picture_ro_index编入码流中;
第七步,若当前帧中的块均已完成处理,则结束当前帧的处理,之后可以继续加载下一帧进行处理,若当前帧还有块没有处理,则返回第二步。
本实施例是以当前帧的重建图像为单位进行NNLF处理,在其他实施例中,也可以基于当前帧中的块(如CTU)、切片等其他编码单位进行NNLF处理。
本实施例选择NNLF1基线工具作为对比,在NNLF1的基础上,对帧间编码帧(即非I帧)进行模式选择处理,设定2种采用固定值的残差调整方式,固定值分别设置为1和2。在通用测试条件随机接入(Random Access)和低延迟(Low Delay)B配置下,对联合视频专家组(JVET:Joint Video Experts Team)规定的通用序列进行测试,对比的锚(anchor)为NNLF1,结果如表1和表2所示。
表1:Random Access配置下本实施例对比基线NNLF1的性能
Figure PCTCN2022125231-appb-000004
表2 Low Delay B配置下本实施例对比基线NNLF1的性能
Figure PCTCN2022125231-appb-000005
Figure PCTCN2022125231-appb-000006
表中的参数含义如下:
EncT：Encoding Time，编码时间，10X%代表集成本实施例的残差调整技术后，与未集成前相比，编码时间为10X%，这意味着有X%的编码时间增加。
DecT：Decoding Time，解码时间，10X%代表集成本实施例的残差调整技术后，与未集成前相比，解码时间为10X%，这意味着有X%的解码时间增加。
ClassA1和Class A2是分辨率为3840x2160的测试视频序列,ClassB为1920x1080分辨率的测试序列,ClassC为832x480,ClassD为416x240,ClassE为1280x720;ClassF为若干个不同分辨率的屏幕内容序列(Screen content)。
Y,U,V是颜色三分量，Y,U,V所在列表示测试结果在Y,U,V上的BD-rate指标，值越小表示编码性能越好。
分析两表的数据可以看到,通过引入残差调整的优化方法,能够在NNLF1的基础上,进一步提升编码性能,尤其是在色度分量上。本实施例的残差调整对解码复杂度影响不大。
对帧内编码帧(I帧)也可以使用本实施例方法进行NNLF模式选择。
本公开一实施例还提供了一种码流,其中,所述码流通过本公开任一实施例所述的视频编码方法生成。
本公开一实施例还提供了一种基于神经网络的环路滤波器,如图15所示,包括处理器以及存储有计算机程序的存储器,其中,所述处理器执行所述计算机程序时能够实现本公开任一实施例所述的基于神经网络的环路滤波方法。如图所示,处理器和存储器通过系统总线相连,该环路滤波器还可以包括内存、网络接口等其他部件。
本公开一实施例还提供了一种视频解码装置,参见图15,包括处理器以及存储有计算机程序的存储器,其中,所述处理器执行所述计算机程序时能实现如本公开任一实施例所述的视频解码方法。
本公开一实施例还提供了一种视频编码装置,参见图15,包括处理器以及存储有计算机程序的存储器,其中,所述处理器执行所述计算机程序时能够实现如本公开任一实施例所述的视频编码方法。
本公开上述实施例的处理器可以是通用处理器,包括中央处理器(CPU)、网络处理器(Network Processor,简称NP)、微处理器等等,也可以是其他常规的处理器等;所述处理器还可以是数字信号处理器(DSP)、专用集成电路(ASIC)、现成可编程门阵列(FPGA)、离散逻辑或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件,或其它等效集成或离散的逻辑电路,也可以是上述器件的组合。即上述实施例的处理器可以是实现本发明实施例中公开的各方法、步骤及逻辑框图的任何处理器件或器件组合。如果部分地以软件来实施本公开实施例,那么可将用于软件的指令存储在合适的非易失性计算机可读存储媒体中,且可使用一个或多个处理器在硬件中执行所述指令从而实施本公开实施例的方法。本文中所使用的术语“处理器”可指上述结构或适合于实施本文中所描述的技术的任意其它结构。
本公开一实施例还提供了一种视频编解码系统,参见图1A,包括本公开任一实施例所述的视频编码装置和本公开任一实施例所述的视频解码装置。
本公开一实施例还提供了一种非瞬态计算机可读存储介质，所述计算机可读存储介质存储有计算机程序，其中，所述计算机程序被处理器执行时能够实现如本公开任一实施例所述的基于神经网络的环路滤波方法，或实现如本公开任一实施例所述的视频解码方法，或实现如本公开任一实施例所述的视频编码方法。
在以上一个或多个示例性实施例中,所描述的功能可以硬件、软件、固件或其任一组合来实施。如果以软件实施,那么功能可作为一个或多个指令或代码存储在计算机可读介质上或经由计算机可读介质传输,且由基于硬件的处理单元执行。计算机可读介质可包含对应于例如数据存储介质等有形介质的计算机可读存储介质,或包含促进计算机程序例如根据通信协议从一处传送到另一处的任何介质的通信介质。以此方式,计算机可读介质通常可对应于非暂时性的有形计算机可读存储介质或例如信号或载波等通信介质。数据存储介质可为可由一个或多个计算机或者一个或多个处理器存取以检索用于实施本公开中描述的技术的指令、代码和/或数据结构的任何可用介质。计算机程序产品可包含计算机可读介质。
举例来说且并非限制，此类计算机可读存储介质可包括RAM、ROM、EEPROM、CD-ROM或其它光盘存储装置、磁盘存储装置或其它磁性存储装置、快闪存储器或可用来以指令或数据结构的形式存储所要程序代码且可由计算机存取的任何其它介质。而且，还可以将任何连接称作计算机可读介质。举例来说，如果使用同轴电缆、光纤电缆、双绞线、数字订户线(DSL)或例如红外线、无线电及微波等无线技术从网站、服务器或其它远程源传输指令，则同轴电缆、光纤电缆、双绞线、DSL或例如红外线、无线电及微波等无线技术包含于介质的定义中。然而应了解，计算机可读存储介质和数据存储介质不包含连接、载波、信号或其它瞬时（瞬态）介质，而是针对非瞬时有形存储介质。如本文中所使用，磁盘及光盘包含压缩光盘(CD)、激光光盘、光学光盘、数字多功能光盘(DVD)、软磁盘或蓝光光盘等，其中磁盘通常以磁性方式再生数据，而光盘使用激光以光学方式再生数据。上文的组合也应包含在计算机可读介质的范围内。

Claims (41)

  1. 一种基于神经网络的环路滤波NNLF方法,应用于解码端的NNLF滤波器,所述NNLF滤波器包括神经网络,以及一条从NNLF滤波器的输入到输出的跳连接支路,所述方法包括:
    解码重建图像的残差调整使用标志roflag,所述roflag用于表示对重建图像进行NNLF时是否需要进行残差调整;
    根据所述roflag确定不需要进行残差调整的情况下,采用第一模式对所述重建图像进行NNLF;根据所述roflag确定需要进行残差调整的情况下,采用第二模式对所述重建图像进行NNLF;
    其中,所述第一模式是不对所述神经网络输出的残差图像进行残差调整的NNLF模式,所述第二模式是对所述残差图像进行残差调整的NNLF模式。
  2. 如权利要求1所述的方法,其特征在于:
    所述残差调整使得所述残差图像的残差变小;
    所述重建图像为当前帧或当前切片或当前块的重建图像;所述残差调整使用标志为图像级语法元素或者块级语法元素。
  3. 如权利要求1所述的方法,其特征在于:
    所述采用第一模式对所述重建图像进行NNLF,包括:将所述神经网络输出的残差图像与输入所述神经网络的所述重建图像相加,得到对所述重建图像进行NNLF后输出的滤波图像;
    所述采用第二模式对所述重建图像进行NNLF,包括:按照设定的残差调整方式中的一种对所述残差图像进行残差调整并与所述重建图像相加,得到对所述重建图像进行NNLF后输出的滤波图像;其中,所述设定的残差调整方式有一种或多种。
  4. 如权利要求3所述的方法,其特征在于:
    所述设定的残差调整方式有多种,所述按照设定的残差调整方式中的一种对所述残差图像进行残差调整,包括:继续解码所述重建图像的残差调整方式索引index,所述index用于指示进行残差调整时所基于的残差调整方式;及,根据所述index指示的残差调整方式对所述残差图像进行残差调整。
  5. 如权利要求1所述的方法,其特征在于:
    所述设定的残差调整方式包括以下一种或多种类型的残差调整方式:
    将残差图像中的非零残差值加上或减去固定值,使得所述非零残差值的绝对值变小;
    将残差图像中的非零残差值按照其所在区间加上或减去该区间对应的调整值,使得所述非零残差值的绝对值变小;其中,所述区间有多个,且区间中的值越大,对应的调整值也越大。
  6. 一种基于神经网络的环路滤波NNLF方法,应用于解码端的NNLF滤波器,所述NNLF滤波器包括神经网络,以及一条从NNLF滤波器的输入到输出的跳连接支路,所述方法包括,对输入神经网络的包括3分量的重建图像进行NNLF时,对每一分量分别执行以下处理:
    解码重建图像的该分量的残差调整使用标志roflag,所述roflag用于表示对所述重建图像的该分量进行NNLF时是否需要进行残差调整;
    根据所述roflag确定不需要进行残差调整的情况下,采用第一模式对所述重建图像的该分量进行NNLF;根据所述roflag确定需要进行残差调整的情况下,采用第二模式对所述重建图像的该分量进行NNLF;
    其中,所述第一模式是不对神经网络输出的残差图像的该分量进行残差调整的NNLF模式,所述第二模式是对所述残差图像的该分量进行残差调整的NNLF模式。
  7. 如权利要求6所述的方法,其特征在于:
    所述残差调整使得所述残差图像的分量中的残差变小;
    所述重建图像为当前帧或当前切片或当前块的重建图像;所述残差调整使用标志为图像级语法元素或者块级语法元素。
  8. 如权利要求6所述的方法,其特征在于:
    所述采用第一模式对所述重建图像的该分量进行NNLF,包括:将所述残差图像的该分量与所述重建图像的该分量相加,得到对所述重建图像进行NNLF后输出的滤波图像的该分量;
    所述采用第二模式对所述重建图像的该分量进行NNLF,包括:按照为该分量设定的残差调整方式中的一种,对所述残差图像的该分量进行残差调整并与所述重建图像的该分量相加,得到对所述重建图像进行NNLF后输出的滤波图像的该分量;所述为该分量设定的残差调整方式有一种或多种。
  9. 如权利要求8所述的方法,其特征在于:
    所述为该分量设定的残差调整方式有多种,所述按照设定的残差调整方式中的一种对所述残差图像的该分量进行残差调整,包括:继续解码所述重建图像的该分量的残差调整方式索引index,所述index用于指示进行残差调整时所基于的残差调整方式;及,根据所述index指示的残差调整方式对所述残差图像的该分量进行残差调整。
  10. 如权利要求6所述的方法,其特征在于:
    为所述3个分量设定的残差调整方式相同或不同;
    为所述3个分量中至少一个分量设定的残差调整方式包括以下一种或多种类型的残差调整方式:
    将残差图像中的非零残差值加上或减去固定值,使得所述非零残差值的绝对值变小;
    将残差图像中的非零残差值按照其所在区间加上或减去该区间对应的调整值,使得所述非零残差值的绝对值变小;其中,所述区间有多个,且区间中的值越大,对应的调整值也越大。
  11. 一种视频解码方法,应用于视频解码装置,包括:对重建图像进行基于神经网络的环路滤波NNLF时,执行以下处理:
    在NNLF允许残差调整的情况下,按照如权利要求1至10中任一所述的方法对所述重建图像进行NNLF。
  12. 如权利要求11所述的方法,其特征在于:
    在满足以下一种或多种条件的情况下,确定NNLF允许残差调整:
    解码序列级的残差调整允许标志,根据该残差调整允许标志确定NNLF允许残差调整;
    解码图像级的残差调整允许标志,根据该残差调整允许标志确定NNLF允许残差调整。
  13. 如权利要求11所述的方法,其特征在于:
    所述方法还包括:在NNLF不允许残差调整的情况下,将输入所述神经网络的所述重建图像与所述神经网络输出的所述残差图像相加,得到对所述重建图像进行NNLF后输出的滤波图像。
  14. 如权利要求10所述的方法,其特征在于:
    所述NNLF滤波器设置在去块滤波器或样值自适应补偿滤波器之后，及在自适应修正滤波器之前。
  15. 一种基于神经网络的环路滤波NNLF方法,应用于编码端的NNLF滤波器,所述NNLF滤波器包括神经网络,以及一条从NNLF滤波器的输入到输出的跳连接支路,所述方法包括:
    将重建图像输入所述神经网络,获取所述神经网络输出的残差图像;
    计算采用第一模式对所述重建图像进行NNLF的率失真代价cost 1,及采用第二模式对所述重建图像进行NNLF的率失真代价cost 2;其中,所述第一模式是不对所述残差图像进行残差调整的NNLF 模式,所述第二模式是对所述残差图像进行残差调整的NNLF模式;
    在cost 1<cost 2的情况下,选择所述第一模式对所述重建图像进行NNLF;在cost 2<cost 1的情况下,选择所述第二模式对所述重建图像进行NNLF;在cost 1=cost 2的情况下,选择所述第一模式或第二模式对所述重建图像进行NNLF。
  16. 如权利要求15所述的方法,其特征在于:
    所述重建图像为当前帧或当前切片或当前块的重建图像;所述残差调整使得所述残差图像中的残差变小。
  17. 如权利要求15所述的方法,其特征在于:
    所述计算采用第一模式对所述重建图像进行NNLF的率失真代价cost 1,包括:将所述残差图像与所述重建图像相加得到一第一滤波图像;及,根据该第一滤波图像与相应原始图像的差异计算得到所述cost 1
    所述选择所述第一模式对所述重建图像进行NNLF,包括:将所述残差图像与所述重建图像相加得到的第一滤波图像,作为对所述重建图像进行NNLF后输出的滤波图像。
  18. 如权利要求17所述的方法,其特征在于:
    所述计算采用第二模式对所述重建图像进行NNLF的率失真代价cost 2,包括:
    按照设定的每一种残差调整方式,对所述残差图像进行残差调整并与所述重建图像相加,得到一第二滤波图像,再根据该第二滤波图像与所述原始图像的差异计算一率失真代价;及
    将计算得到的所有率失真代价中最小的率失真代价作为cost 2;其中,所述设定的残差调整方式有一种或多种。
  19. 如权利要求18所述的方法,其特征在于:
    所述选择所述第二模式对所述重建图像进行NNLF,包括:
    将按照cost 2对应的残差调整方式对所述残差图像进行残差调整并与所述重建图像相加得到的第二滤波图像,作为对所述重建图像进行NNLF后输出的滤波图像。
  20. 如权利要求18所述的方法,其特征在于:
    所述重建图像和所述残差图像均包括3个分量;
    所述cost 1是通过计算所述第一滤波图像与所述原始图像在3个分量上的平方误差和SSD,及对所述3个分量上的SSD加权后相加而得到;
    所述根据该第二滤波图像与所述原始图像的差异计算一率失真代价,包括:计算该第二滤波图像与所述原始图像在3个分量上的平方误差和SSD,再对所述3个分量上的SSD加权后相加,得到该率失真代价。
  21. 如权利要求18所述的方法,其特征在于:
    所述设定的残差调整方式包括以下一种或多种类型的残差调整方式:
    将残差图像中的非零残差值加上或减去固定值,使得所述非零残差值的绝对值变小;
    将残差图像中的非零残差值按照其所在区间加上或减去该区间对应的调整值,使得所述非零残差值的绝对值变小;其中,所述区间有多个,且区间中的值越大,对应的调整值也越大。
  22. 一种基于神经网络的环路滤波NNLF方法,应用于编码端的NNLF滤波器,所述NNLF滤波器包括神经网络,以及一条从NNLF滤波器的输入到输出的跳连接支路,所述方法包括:
    将重建图像输入所述神经网络,获取所述神经网络输出的残差图像;所述重建图像和残差图像均包括3个分量;及
    对所述3个分量中的每一分量分别执行以下处理:
    计算采用第一模式对所述重建图像的该分量进行NNLF的率失真代价cost 1,及采用第二模式对所述重建图像的该分量进行NNLF的率失真代价cost 2;所述第一模式是不对所述残差图像的该分量进行残差调整的NNLF模式,所述第二模式是对所述残差图像的该分量进行残差调整的NNLF模式;
    在cost 1<cost 2的情况下,选择所述第一模式对所述重建图像的该分量进行NNLF;在cost 2<cost 1的情况下,选择所述第二模式对所述重建图像的该分量进行NNLF;在cost 1=cost 2的情况下,选择所述第一模式或第二模式对所述重建图像进行NNLF。
  23. 如权利要求22所述的方法,其特征在于:
    所述重建图像为当前帧或当前切片或当前块的重建图像;所述残差调整使得所述残差图像的分量中的残差变小。
  24. 如权利要求22所述的方法，其特征在于：
    所述计算采用第一模式对所述重建图像的该分量进行NNLF的率失真代价cost 1,包括:将所述残差图像的该分量与所述重建图像的该分量相加,得到滤波后的该分量;及,根据滤波后的该分量与相应原始图像的该分量之间的差异,计算得到所述cost 1
    所述选择所述第一模式对所述重建图像的该分量进行NNLF,包括:将所述残差图像的该分量与所述重建图像的该分量相加得到的滤波后的该分量,作为对所述重建图像进行NNLF后输出的滤波图像的该分量。
  25. 如权利要求22所述的方法,其特征在于:
    所述计算采用第二模式对所述重建图像的该分量进行NNLF的率失真代价cost 2,包括:
    按照为该分量设定的每一种残差调整方式,对所述残差图像的该分量进行残差调整并与所述重建图像的该分量相加,得到滤波后的该分量;再根据滤波后的该分量与相应原始图像的该分量之间的差异,计算该分量的一个率失真代价;
    将计算得到的该分量的所有率失真代价中最小的率失真代价,作为该分量的cost 2;其中,为该分量设定的残差调整方式有一种或多种。
  26. 如权利要求25所述的方法,其特征在于:
    所述选择所述第二模式对所述重建图像的该分量进行NNLF,包括:
    将按照该分量的cost 2对应的残差调整方式对所述残差图像的该分量进行残差调整,并与所述重建图像的该分量相加得到的滤波后的该分量,作为对所述重建图像进行NNLF后输出的滤波图像的该分量。
  27. 如权利要求25所述的方法,其特征在于:
    为所述3个分量设定的残差调整方式相同或不同;
    为所述3个分量中至少一个分量设定的残差调整方式包括以下一种或多种类型的残差调整方式:
    将残差图像中的非零残差值加上或减去固定值,使得所述非零残差值的绝对值变小;
    将残差图像中的非零残差值按照其所在区间加上或减去该区间对应的调整值,使得所述非零残差值的绝对值变小;其中,所述区间有多个,且区间中的值越大,对应的调整值也越大。
  28. 一种视频编码方法,应用于视频编码装置,包括:对重建图像进行基于神经网络的环路滤波NNLF时,执行以下处理:
    在NNLF允许残差调整的情况下,按照如权利要求15至27中任一所述的方法对所述重建图像进行NNLF;
    编码所述重建图像的残差调整使用标志,以表示对重建图像进行NNLF时是否需要进行残差调整。
  29. 如权利要求28所述的方法,其特征在于:
    所述残差调整使用标志为图像级语法元素或者块级语法元素。
  30. 如权利要求28所述的方法,其特征在于:
    在满足以下一种或多种条件的情况下,确定NNLF允许残差调整:
    解码序列级的残差调整允许标志,根据该残差调整允许标志确定NNLF允许残差调整;
    解码图像级的残差调整允许标志,根据该残差调整允许标志确定NNLF允许残差调整。
  31. 如权利要求28所述的方法,其特征在于:
    所述方法还包括：确定NNLF不允许残差调整的情况下，跳过对所述残差调整使用标志的编码，将输入所述神经网络的所述重建图像与所述神经网络输出的残差图像相加，得到对所述重建图像进行NNLF后输出的滤波图像。
  32. 如权利要求28所述的方法,其特征在于:
    所述方法是按照如权利要求15至21所述的方法对所述重建图像进行NNLF;
    所述重建图像的残差调整使用标志roflag的个数为1个;在选择所述第一模式对所述重建图像进行NNLF的情况下,所述roflag被置为表示不需要进行残差调整的值;在选择所述第二模式对所述重建图像进行NNLF的情况下,所述roflag被置为表示需要进行残差调整的值。
  33. 如权利要求32所述的方法,其特征在于:
    所述方法是按照如权利要求18至21所述的方法对所述重建图像进行NNLF;
    所述方法还包括:在所述roflag被置为表示需要进行残差调整的值,且所述设定的残差调整方式有多种的情况下,继续编码所述重建图像的残差调整方式索引,以指示进行残差调整时所基于的残差调整方式。
  34. 如权利要求28所述的方法,其特征在于:
    所述方法是按照如权利要求22至27所述的方法对所述重建图像进行NNLF;
    所述重建图像的残差调整使用标志roflag(j)的个数为3,j=1,2,3,roflag(j)用于表示对所述重建图像的第j个分量进行NNLF时是否需要进行残差调整;在选择所述第一模式对所述重建图像的第j个分量进行NNLF的情况下,roflag(j)被置为表示不需要进行残差调整的值,在选择所述第二模式对所述重建图像的第j个分量进行NNLF的情况下,roflag(j)被置为表示需要进行残差调整的值。
  35. 如权利要求34所述的方法,其特征在于:
    所述方法是按照如权利要求25至27所述的方法对所述重建图像进行NNLF;
    所述方法还包括:在所述roflag(j)被置为表示需要进行残差调整的值,且为第j个分量设定的残差调整方式有多种的情况下,继续编码所述重建图像的第j个分量的残差调整方式索引index(j),以指示对所述残差图像的第j个分量进行残差调整时所基于的残差调整方式。
  36. 一种码流,其中,所述码流通过如权利要求28至35中任一所述的视频编码方法生成。
  37. 一种基于神经网络的环路滤波器,包括处理器以及存储有计算机程序的存储器,其中,所述处理器执行所述计算机程序时能够实现如权利要求1至10、15至27中任一所述的基于神经网络的环路滤波方法。
  38. 一种视频解码装置,包括处理器以及存储有计算机程序的存储器,其中,所述处理器执行所述计算机程序时能够实现如权利要求11至14中任一所述的视频解码方法。
  39. 一种视频编码装置,包括处理器以及存储有计算机程序的存储器,其中,所述处理器执行所 述计算机程序时能够实现如权利要求28至35中任一所述的视频编码方法。
  40. 一种视频编解码系统,其中,包括如权利要求39所述的视频编码装置和如权利要求38所述的视频解码装置。
  41. 一种非瞬态计算机可读存储介质，所述计算机可读存储介质存储有计算机程序，其中，所述计算机程序被处理器执行时能够实现如权利要求1至10、15至27中任一所述的基于神经网络的环路滤波方法，或实现如权利要求11至14中任一所述的视频解码方法，或实现如权利要求28至35中任一所述的视频编码方法。
PCT/CN2022/125231 2022-10-13 2022-10-13 基于神经网络的环路滤波、视频编解码方法、装置和系统 WO2024077576A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/125231 WO2024077576A1 (zh) 2022-10-13 2022-10-13 基于神经网络的环路滤波、视频编解码方法、装置和系统

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/125231 WO2024077576A1 (zh) 2022-10-13 2022-10-13 基于神经网络的环路滤波、视频编解码方法、装置和系统

Publications (1)

Publication Number Publication Date
WO2024077576A1 true WO2024077576A1 (zh) 2024-04-18

Family

ID=90668431

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/125231 WO2024077576A1 (zh) 2022-10-13 2022-10-13 基于神经网络的环路滤波、视频编解码方法、装置和系统

Country Status (1)

Country Link
WO (1) WO2024077576A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021249290A1 (zh) * 2020-06-10 2021-12-16 华为技术有限公司 环路滤波方法和装置
WO2022072239A1 (en) * 2020-09-29 2022-04-07 Qualcomm Incorporated Filtering process for video coding
WO2022166462A1 (zh) * 2021-02-08 2022-08-11 华为技术有限公司 编码、解码方法和相关设备
US20220286695A1 (en) * 2021-03-04 2022-09-08 Lemon Inc. Neural Network-Based In-Loop Filter With Residual Scaling For Video Coding

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021249290A1 (zh) * 2020-06-10 2021-12-16 华为技术有限公司 环路滤波方法和装置
WO2022072239A1 (en) * 2020-09-29 2022-04-07 Qualcomm Incorporated Filtering process for video coding
WO2022166462A1 (zh) * 2021-02-08 2022-08-11 华为技术有限公司 编码、解码方法和相关设备
US20220286695A1 (en) * 2021-03-04 2022-09-08 Lemon Inc. Neural Network-Based In-Loop Filter With Residual Scaling For Video Coding

Similar Documents

Publication Publication Date Title
KR102288109B1 (ko) 비디오 압축에서의 양방향 예측
US9955186B2 (en) Block size decision for video coding
RU2580066C2 (ru) Эффективное по памяти моделирование контекста
TW201841503A (zh) 視頻寫碼中之內濾波旗標
TW202005399A (zh) 基於區塊之自適應迴路濾波器(alf)之設計及發信令
WO2021139770A1 (en) Signaling quantization related parameters
KR20170123632A (ko) 비디오 인코딩을 위한 적응적 모드 체킹 순서
WO2015031499A1 (en) Residual prediction for intra block copying
KR20220036982A (ko) 비디오 인코딩 및 디코딩을 위한 이차 변환
US20130128971A1 (en) Transforms in video coding
KR20170126889A (ko) 저 복잡도 샘플 적응 오프셋 (sao) 코딩
KR20220016935A (ko) 영상 복호화 장치
US11611765B2 (en) Refinement mode processing in video encoding and decoding
CN111903124B (zh) 图像处理装置和图像处理方法
US20210037247A1 (en) Encoding and decoding with refinement of the reconstructed picture
WO2024077576A1 (zh) 基于神经网络的环路滤波、视频编解码方法、装置和系统
WO2024077575A1 (zh) 基于神经网络的环路滤波、视频编解码方法、装置和系统
WO2024077574A1 (zh) 基于神经网络的环路滤波、视频编解码方法、装置和系统
CN116982262A (zh) 视频编码中依赖性量化的状态转换
KR102020953B1 (ko) 카메라 영상의 복호화 정보 기반 영상 재 부호화 방법 및 이를 이용한 영상 재부호화 시스템
WO2024138705A1 (zh) 一种帧内预测方法、视频编解码方法、装置和系统
WO2024007157A1 (zh) 多参考行索引列表排序方法、视频编解码方法、装置和系统
WO2023245349A1 (zh) 一种局部光照补偿方法、视频编解码方法、装置和系统
WO2023130226A1 (zh) 一种滤波方法、解码器、编码器及计算机可读存储介质
WO2024145744A1 (zh) 编解码方法、装置、编码设备、解码设备以及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22961765

Country of ref document: EP

Kind code of ref document: A1