WO2024077574A1 - Neural network-based loop filtering, video encoding/decoding method, apparatus and system - Google Patents


Info

Publication number
WO2024077574A1
Authority
WO
WIPO (PCT)
Prior art keywords
mode
nnlf
reconstructed image
flag
adjustment
Prior art date
Application number
PCT/CN2022/125229
Other languages
English (en)
French (fr)
Inventor
Dai Zhenyu (戴震宇)
Original Assignee
Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Priority to PCT/CN2022/125229
Publication of WO2024077574A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/117 Filters, e.g. for pre-processing or post-processing

Definitions

  • the embodiments of the present disclosure relate to but are not limited to video technology, and more specifically, to a loop filtering method based on a neural network, a video encoding and decoding method, device and system.
  • Digital video compression technology mainly compresses huge volumes of digital video data to facilitate transmission and storage.
  • the image of the original video sequence contains a luminance component and chrominance components.
  • the encoder reads black-and-white or color images and divides each frame into largest coding units (LCU: largest coding unit) of the same size (such as 128x128 or 64x64).
  • Each largest coding unit can be divided into rectangular coding units (CU: coding unit) according to the rules, and can be further divided into prediction units (PU: prediction unit), transform units (TU: transform unit), etc.
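As a rough illustration of the block partitioning described above, the sketch below splits a frame into raster-ordered LCUs. The function name and the edge handling (smaller blocks at the right and bottom borders) are illustrative assumptions, not taken from the patent.

```python
def split_into_lcus(width, height, lcu_size=64):
    """Return (x, y, w, h) tuples for each LCU in raster-scan order."""
    lcus = []
    for y in range(0, height, lcu_size):
        for x in range(0, width, lcu_size):
            w = min(lcu_size, width - x)   # clip at the right border
            h = min(lcu_size, height - y)  # clip at the bottom border
            lcus.append((x, y, w, h))
    return lcus

# A 1920x1080 frame with 64x64 LCUs gives 30 columns x 17 rows = 510 LCUs;
# the bottom row is only 1080 - 16*64 = 56 samples tall.
blocks = split_into_lcus(1920, 1080, 64)
```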
  • the hybrid coding framework includes prediction, transform, quantization, entropy coding, in-loop filter and other modules.
  • the prediction module can use intra prediction and inter prediction.
  • Intra-frame prediction predicts the pixel information within the current block based on information from the same image, to eliminate spatial redundancy; inter-frame prediction can refer to information from different images and use motion estimation to search for the motion vector information that best matches the current block, to eliminate temporal redundancy; the transform converts the prediction residual into the frequency domain to redistribute its energy and, combined with quantization, removes information to which the human eye is not sensitive, to eliminate visual redundancy; entropy coding eliminates character redundancy based on the current context model and the probability information of the binary code stream, generating a code stream.
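The energy-redistribution role of the transform mentioned above can be seen in a small numerical sketch. An orthonormal 8x8 DCT-II is used here as a representative block transform; the standards above use integer approximations of it, so this is illustrative only.

```python
import numpy as np

N = 8
k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
D = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
D[0, :] *= 1.0 / np.sqrt(2.0)          # orthonormal DCT-II basis matrix

block = 100.0 + np.arange(N)[None, :] * np.ones((N, 1))  # smooth ramp block
coeffs = D @ block @ D.T               # 2-D separable transform

# For a smooth block, almost all energy collapses into the DC coefficient,
# which is what makes the subsequent quantization step effective.
dc_energy = coeffs[0, 0] ** 2
total_energy = np.sum(coeffs ** 2)
```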
  • An embodiment of the present disclosure provides a Neural Network based Loop Filter (NNLF) method, which is applied to a video decoding device.
  • the method includes:
  • the NNLF mode includes a first mode and a second mode, the second mode includes a chrominance information adjustment mode, and, compared with the first mode, the chrominance information adjustment mode adds a process of performing a specified adjustment on the input chrominance information before filtering.
  • An embodiment of the present disclosure further provides a video decoding method applied to a video decoding device, comprising: when performing a neural network-based loop filter NNLF on a reconstructed image, performing the following processing:
  • when NNLF allows adjustment of chrominance information:
  • NNLF is performed on the reconstructed image according to the NNLF method described in any embodiment of the present disclosure applied to the decoding end.
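A minimal sketch of one plausible reading of the decoding-side behaviour described above (not the patent's normative procedure): the decoder selects the NNLF mode from the parsed flag, and in the chrominance-adjustment mode the U and V planes are swapped before being fed to the filter network, with the two chroma outputs swapped back so the result is always in U-then-V order. All names here are assumptions.

```python
import numpy as np

MODE_FIRST = 0        # first mode: chroma fed as-is
MODE_CHROMA_SWAP = 1  # second mode: U/V order swapped before filtering

def nnlf_decode_side(rec_yuv, mode_flag, nnlf_model):
    y, u, v = rec_yuv
    if mode_flag == MODE_CHROMA_SWAP:
        out_y, out_c0, out_c1 = nnlf_model(y, v, u)  # swapped input order
        return out_y, out_c1, out_c0                 # restore U-then-V order
    return nnlf_model(y, u, v)

# Identity "model" stands in for the neural network, just to exercise dispatch.
identity = lambda y, c0, c1: (y, c0, c1)
y, u, v = (np.full((4, 4), s, dtype=np.int16) for s in (1, 2, 3))
out = nnlf_decode_side((y, u, v), MODE_CHROMA_SWAP, identity)
```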
  • An embodiment of the present disclosure further provides a loop filtering method based on a neural network, which is applied to a video encoding device.
  • the method includes:
  • the first mode and the second mode are both set NNLF modes, and the second mode includes a chrominance information adjustment mode.
  • the chrominance information adjustment mode adds a process of specifying adjustment of the input chrominance information before filtering.
  • An embodiment of the present disclosure further provides a video encoding method, which is applied to a video encoding device, comprising: when performing a neural network-based loop filter NNLF on a reconstructed image, performing the following processing:
  • when NNLF allows adjustment of chrominance information:
  • a first flag of the reconstructed image is encoded, wherein the first flag includes information of a NNLF mode used when performing NNLF on the reconstructed image.
  • An embodiment of the present disclosure further provides a code stream, wherein the code stream is generated by the video encoding method described in any embodiment of the present disclosure.
  • An embodiment of the present disclosure further provides a neural network-based loop filter, comprising a processor and a memory storing a computer program, wherein the processor, when executing the computer program, can implement the neural network-based loop filtering method described in any embodiment of the present disclosure.
  • An embodiment of the present disclosure further provides a video decoding device, including a processor and a memory storing a computer program, wherein the processor can implement the video decoding method as described in any embodiment of the present disclosure when executing the computer program.
  • An embodiment of the present disclosure further provides a video encoding device, including a processor and a memory storing a computer program, wherein the processor can implement the video encoding method as described in any embodiment of the present disclosure when executing the computer program.
  • An embodiment of the present disclosure further provides a video encoding and decoding system, comprising the video encoding device described in any embodiment of the present disclosure and the video decoding device described in any embodiment of the present disclosure.
  • An embodiment of the present disclosure further provides a non-volatile computer-readable storage medium, wherein the computer-readable storage medium stores a computer program, wherein the computer program, when executed by a processor, can implement the neural network-based loop filtering method as described in any embodiment of the present disclosure, or implement the video decoding method as described in any embodiment of the present disclosure, or implement the video encoding method as described in any embodiment of the present disclosure.
  • FIG. 1A is a schematic diagram of a coding and decoding system according to an embodiment
  • FIG. 1B is a framework diagram of an encoding end in FIG. 1A
  • FIG. 1C is a framework diagram of a decoding end in FIG. 1A ;
  • FIG2 is a block diagram of a filter unit according to an embodiment
  • FIG3A is a network structure diagram of an NNLF filter according to an embodiment, and FIG3B is a structure diagram of a residual block in FIG3A ;
  • FIG3C is a schematic diagram of input and output of the NNLF filter in FIG3A ;
  • FIG4A is a structural diagram of a backbone network in a NNLF filter according to another embodiment, and FIG4B is a structural diagram of a residual block in FIG4A ;
  • FIG4C is a schematic diagram of input and output of the NNLF filter in FIG4A ;
  • FIG5 is a schematic diagram of an arrangement of feature graphs of input information
  • FIG6 is a flow chart of an NNLF method applied to an encoding end according to an embodiment of the present disclosure
  • FIG7 is a module diagram of a filter unit according to an embodiment of the present disclosure.
  • FIG8 is a flow chart of a video encoding method according to an embodiment of the present disclosure.
  • FIG9 is a flow chart of an NNLF method applied to a decoding end according to an embodiment of the present disclosure
  • FIG10 is a flowchart of a video decoding method according to an embodiment of the present disclosure.
  • FIG11 is a schematic diagram of the hardware structure of a NNLF filter according to an embodiment of the present disclosure.
  • FIG12A is a schematic diagram of the input and output of the NNLF filter when the order of chrominance information is not adjusted according to an embodiment of the present disclosure
  • FIG. 12B is a schematic diagram of the input and output of the NNLF filter when the order of chrominance information is adjusted according to an embodiment of the present disclosure.
  • words such as “exemplary” or “for example” are used to indicate examples, illustrations or explanations. Any embodiment described as “exemplary” or “for example” in the present disclosure should not be interpreted as being more preferred or advantageous than other embodiments.
  • "And/or" in this document describes an association relationship between associated objects, indicating that three relationships may exist. For example, "A and/or B" can represent: A exists alone, both A and B exist, or B exists alone. "Multiple" means two or more.
  • words such as “first” and “second” are used to distinguish between identical or similar items with substantially the same functions and effects. Those skilled in the art can understand that words such as “first” and “second” do not limit the quantity and execution order, and words such as “first” and “second” do not limit them to be necessarily different.
  • the specification may have presented the method and/or process as a specific sequence of steps. However, to the extent that the method or process does not rely on the specific order of steps described herein, the method or process should not be limited to the steps of the specific order. As will be understood by those of ordinary skill in the art, other sequences of steps are also possible. Therefore, the specific sequence of steps set forth in the specification should not be interpreted as a limitation to the claims. In addition, the claims for the method and/or process should not be limited to the steps of performing them in the order written, and those of ordinary skill in the art can easily understand that these sequences can change and still remain within the spirit and scope of the disclosed embodiments.
  • the neural-network-based loop filtering method and video coding method in the disclosed embodiments can be applied to various video coding standards, such as H.264/Advanced Video Coding (AVC), H.265/High Efficiency Video Coding (HEVC), H.266/Versatile Video Coding (VVC), AVS (Audio Video coding Standard), other standards formulated by MPEG (Moving Picture Experts Group), AOM (Alliance for Open Media) and JVET (Joint Video Experts Team), extensions of these standards, or any other customized standards.
  • FIG. 1A is a block diagram of a video encoding and decoding system that can be used in an embodiment of the present disclosure. As shown in the figure, the system is divided into an encoding end 1 and a decoding end 2, and the encoding end 1 generates a code stream.
  • the decoding end 2 can decode the code stream.
  • the decoding end 2 can receive the code stream from the encoding end 1 via a link 3.
  • the link 3 includes one or more media or devices that can move the code stream from the encoding end 1 to the decoding end 2.
  • the link 3 includes one or more communication media that enable the encoding end 1 to send the code stream directly to the decoding end 2.
  • the encoding end 1 modulates the code stream according to the communication standard and sends the modulated code stream to the decoding end 2.
  • the one or more communication media may include wireless and/or wired communication media, and may form part of a packet network.
  • the code stream can also be output from the output interface 15 to a storage device, and the decoding end 2 can read the stored data from the storage device via streaming or downloading.
  • the encoding end 1 includes a data source 11, a video encoding device 13 and an output interface 15.
  • the data source 11 includes a video capture device (such as a camera), an archive containing previously captured data, a feed interface for receiving data from a content provider, a computer graphics system for generating data, or a combination of these sources.
  • the video encoding device 13 may also be called a video encoder, which is used to encode the data from the data source 11 and output it to the output interface 15.
  • the output interface 15 may include at least one of a modulator, a modem, and a transmitter.
  • the decoding end 2 includes an input interface 21, a video decoding device 23 and a display device 25.
  • the input interface 21 includes at least one of a receiver and a modem.
  • the input interface 21 can receive a code stream via a link 3 or from a storage device.
  • the video decoding device 23 is also called a video decoder, which is used to decode the received code stream.
  • the display device 25 is used to display the decoded data.
  • the display device 25 can be integrated with other devices of the decoding end 2 or set separately.
  • the display device 25 is optional for the decoding end. In other examples, the decoding end may include other devices or equipment that apply the decoded data.
  • FIG1B is a block diagram of an exemplary video encoding device that can be used in an embodiment of the present disclosure.
  • the video encoding device 10 includes:
  • the division unit 101 is configured to cooperate with the prediction unit 100 to divide the received video data into slices, coding tree units (CTUs) or other larger units.
  • the received video data may be a video sequence including video frames such as I frames, P frames or B frames.
  • the prediction unit 100 is configured to divide the CTU into coding units (CUs) and perform intra-frame prediction coding or inter-frame prediction coding on the CU.
  • the CU may be divided into one or more prediction units (PUs).
  • the prediction unit 100 includes an inter prediction unit 121 and an intra prediction unit 126 .
  • the inter-frame prediction unit 121 is configured to perform inter-frame prediction on the PU and generate prediction data of the PU, wherein the prediction data includes a prediction block of the PU, motion information of the PU, and various syntax elements.
  • the inter-frame prediction unit 121 may include a motion estimation (ME: motion estimation) unit and a motion compensation (MC: motion compensation) unit.
  • the motion estimation unit may be used for motion estimation to generate a motion vector
  • the motion compensation unit may be used to obtain or generate a prediction block according to the motion vector.
  • the intra prediction unit 126 is configured to perform intra prediction on the PU and generate prediction data of the PU.
  • the prediction data of the PU may include a prediction block of the PU and various syntax elements.
  • the residual generating unit 102 (indicated by the circle with a plus sign after the dividing unit 101 in the figure) is configured to generate the residual block of the CU by subtracting, from the original block of the CU, the prediction blocks of the PUs into which the CU is divided.
  • the transform processing unit 104 is configured to divide the CU into one or more transform units (TUs), and the division of the prediction unit and the transform unit may be different.
  • the residual block associated with the TU is a sub-block obtained by dividing the residual block of the CU.
  • the coefficient block associated with the TU is generated by applying one or more transforms to the residual block associated with the TU.
  • the quantization unit 106 is configured to quantize the coefficients in the coefficient block based on the quantization parameter.
  • the quantization degree of the coefficient block can be changed by adjusting the quantization parameter (QP: Quantizer Parameter).
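As a hedged sketch of how the QP controls the degree of quantization: in HEVC/VVC-style codecs the quantization step size approximately doubles for every increase of 6 in QP (Qstep ≈ 2^((QP - 4) / 6)). The scalar quantizer below is a simplification; real codecs use integer arithmetic and scaling lists.

```python
def qstep(qp):
    # Step size roughly doubles every 6 QP in HEVC/VVC-style codecs.
    return 2.0 ** ((qp - 4) / 6.0)

def quantize(coeff, qp):
    return round(coeff / qstep(qp))

def dequantize(level, qp):
    return level * qstep(qp)

# Larger QP -> coarser quantization -> more distortion, fewer bits.
c = 100.0
fine = dequantize(quantize(c, 22), 22)
coarse = dequantize(quantize(c, 37), 37)
```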
  • the inverse quantization unit 108 and the inverse transformation unit 110 are respectively configured to apply inverse quantization and inverse transformation to the coefficient block to obtain a reconstructed residual block associated with the TU.
  • the reconstruction unit 112 (indicated by the circle with a plus sign after the inverse transform processing unit 110 in the figure) is configured to add the reconstructed residual block and the prediction block generated by the prediction unit 100 to generate a reconstructed image.
  • the filter unit 113 is configured to perform loop filtering on the reconstructed image.
  • the decoded image buffer 114 is configured to store the reconstructed image after loop filtering.
  • the intra prediction unit 126 may extract a reference image of a block adjacent to the current block from the decoded image buffer 114 to perform intra prediction.
  • the inter prediction unit 121 may use the reference image of the previous frame cached in the decoded image buffer 114 to perform inter prediction on the PU of the current frame image.
  • the entropy coding unit 115 is configured to perform entropy coding operations on received data (such as syntax elements, quantized coefficient blocks, motion information, etc.) to generate a video bit stream.
  • the video encoding device 10 may include more, fewer, or different functional components than in this example; for example, the transform processing unit 104 and the inverse transform unit 110 may be eliminated in some cases.
  • FIG1C is a block diagram of an exemplary video decoding device that can be used in an embodiment of the present disclosure. As shown in the figure, the video decoding device 15 includes:
  • the entropy decoding unit 150 is configured to perform entropy decoding on the received encoded video bitstream, extract syntax elements, quantized coefficient blocks, motion information of PU, etc.
  • the prediction unit 152, the inverse quantization unit 154, the inverse transform processing unit 156, the reconstruction unit 158 and the filter unit 159 can all perform corresponding operations based on the syntax elements extracted from the bitstream.
  • the inverse quantization unit 154 is configured to perform inverse quantization on the coefficient block associated with the quantized TU.
  • the inverse transform processing unit 156 is configured to apply one or more inverse transforms to the inverse quantized coefficient block to generate a reconstructed residual block of the TU.
  • the prediction unit 152 includes an inter-prediction unit 162 and an intra-prediction unit 164. If the current block is encoded using intra-prediction, the intra-prediction unit 164 determines the intra-prediction mode of the PU based on the syntax elements decoded from the code stream, and performs intra-prediction in combination with the reconstructed reference information of the current block neighboring the current block obtained from the decoded image buffer 160. If the current block is encoded using inter-prediction, the inter-prediction unit 162 determines the reference block of the current block based on the motion information of the current block and the corresponding syntax elements, and performs inter-prediction on the reference block obtained from the decoded image buffer 160.
  • the reconstruction unit 158 (represented by a circle with a plus sign after the inverse transform processing unit 156 in the figure) is configured to add the reconstructed residual block associated with the TU to the prediction block of the current block obtained by the prediction unit 152 through intra-frame prediction or inter-frame prediction, so as to obtain a reconstructed image.
  • the filter unit 159 is configured to perform loop filtering on the reconstructed image.
  • the decoded image buffer 160 is configured to store the reconstructed image after loop filtering as a reference image for subsequent motion compensation, intra-frame prediction, inter-frame prediction, etc.
  • the filtered reconstructed image can also be output as decoded video data for presentation on a display device.
  • the video decoding device 15 may include more, fewer or different functional components.
  • the inverse transform processing unit 156 may be eliminated in some cases.
  • At the encoding end, a frame of image is divided into blocks; intra-frame prediction, inter-frame prediction or another algorithm is performed on the current block to generate its prediction block; the original block of the current block minus the prediction block gives a residual block; the residual block is transformed and quantized to obtain quantized coefficients; and the quantized coefficients are entropy-encoded to generate a bit stream.
  • At the decoding end, on the one hand, intra-frame prediction or inter-frame prediction is performed on the current block to generate its prediction block; on the other hand, the quantized coefficients decoded from the bit stream are inverse-quantized and inverse-transformed to obtain a residual block. The prediction block and the residual block are added to obtain a reconstructed block; reconstructed blocks constitute a reconstructed image; and the reconstructed image is loop-filtered on an image or block basis to obtain a decoded image.
  • the encoding end also obtains a decoded image through operations similar to those of the decoding end, which can also be called a reconstructed image after loop filtering.
  • the reconstructed image after loop filtering can be used as a reference frame for inter-frame prediction of subsequent frames.
  • the block division information determined by the encoding end, the mode information and parameter information such as prediction, transformation, quantization, entropy coding, and loop filtering can be written into the bit stream.
  • the decoding end determines the block division information, prediction, transformation, quantization, entropy coding, loop filtering and other mode information and parameter information used by the encoding end by decoding the bit stream or analyzing according to the setting information, so as to ensure that the decoded image obtained by the encoding end is the same as the decoded image obtained by the decoding end.
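The point made above, that encoder and decoder reconstruct through the same steps and therefore obtain identical reference images even though quantization is lossy, can be sketched with a toy 1-D example. The fixed step size and the names are illustrative assumptions; no transform is included.

```python
QSTEP = 10  # toy fixed quantization step

def encode(orig, pred):
    residual = [o - p for o, p in zip(orig, pred)]
    levels = [int(round(r / QSTEP)) for r in residual]  # lossy step
    return levels                                       # "bitstream" payload

def reconstruct(levels, pred):
    # Identical dequantization + addition on both encoder and decoder side.
    return [p + l * QSTEP for p, l in zip(pred, levels)]

orig = [117, 98, 132, 77]
pred = [100, 100, 128, 80]
levels = encode(orig, pred)
rec_encoder = reconstruct(levels, pred)  # encoder-side reference image
rec_decoder = reconstruct(levels, pred)  # decoder-side decoded image
```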
  • the embodiments of the present disclosure relate to, but are not limited to, the filter unit in the above-mentioned encoding end and decoding end (the filter unit may also be referred to as a loop filtering module) and a corresponding loop filtering method.
  • the filter units at the encoding end and the decoding end include tools such as a deblocking filter (DBF: DeBlocking Filter) 20, a sample adaptive offset filter (SAO: Sample Adaptive Offset) 22, and an adaptive loop filter (ALF: Adaptive Loop Filter) 26.
  • the filter unit performs loop filtering on the reconstructed image, which can compensate for the distortion information and provide a better reference for the subsequent encoding pixels.
  • a neural network-based loop filtering NNLF solution is provided, and the model used (also referred to as a network model) uses a filtering network as shown in FIG3A .
  • the NNLF is denoted as NNLF1 in the text, and the filter that executes NNLF1 is referred to as a NNLF1 filter.
  • the backbone network (backbone) of the filtering network includes multiple residual blocks (ResBlock) connected in sequence, and also includes convolution layers (represented by Conv in the figure), activation function layers (ReLU in the figure), a concatenation (concat) layer (represented by Cat in the figure), and a pixel reorganization layer (represented by PixelShuffle in the figure).
  • the structure of each residual block is shown in FIG3B , including a convolution layer with a convolution kernel size of 1 ⁇ 1, a ReLU layer, a convolution layer with a convolution kernel size of 1 ⁇ 1, and a convolution layer with a convolution kernel size of 3 ⁇ 3 connected in sequence.
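A rough parameter accounting for the residual block described above (1x1 conv, ReLU, 1x1 conv, 3x3 conv in sequence). The channel width M is an assumption for illustration; the patent does not fix it here, and equal input/output widths are assumed.

```python
def conv_params(in_ch, out_ch, k):
    # Weights (in_ch * out_ch * k * k) plus one bias per output channel.
    return in_ch * out_ch * k * k + out_ch

M = 64  # assumed feature-map count
resblock_params = (
    conv_params(M, M, 1)   # 1x1 conv
    + conv_params(M, M, 1) # 1x1 conv (after the ReLU, which has no params)
    + conv_params(M, M, 3) # 3x3 conv
)
```

Note how the single 3x3 layer dominates the count, which is why such blocks mix cheap 1x1 layers with one spatial 3x3 layer.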
  • the input of the NNLF1 filter includes the brightness information (i.e., Y component) and chrominance information (i.e., U component and V component) of the reconstructed image (rec_YUV), as well as various auxiliary information, such as the brightness information and chrominance information of the predicted image (pred_YUV), QP information, and frame type information.
  • the QP information includes the default baseline quantization parameter (BaseQP: Base Quantization Parameter) in the encoding profile and the slice quantization parameter (SliceQP: Slice Quantization Parameter) of the current slice
  • the frame type information includes the slice type (SliceType), i.e., the type of frame to which the current slice belongs.
  • the output of the model is the filtered image (output_YUV) after NNLF1 filtering.
  • the filtered image output by the NNLF1 filter can also be used as the reconstructed image input to the subsequent filter.
  • NNLF1 uses a single model to filter the YUV components of the reconstructed image (rec_YUV) and outputs the YUV components of the filtered image (out_YUV), as shown in FIG. 3C, in which auxiliary input information such as the YUV components of the predicted image is omitted.
  • the filtering network of this model has a skip connection branch between the input reconstructed image and the output filtered image, as shown in FIG. 3A.
  • Another provided NNLF solution, denoted herein as NNLF2 (the filter that executes NNLF2 is referred to as the NNLF2 filter), needs to train two models separately: one model is used to filter the luminance component of the reconstructed image, and the other model is used to filter the two chrominance components of the reconstructed image.
  • the two models can use the same filtering network, and there is also a skip connection branch between the reconstructed image input to the NNLF2 filter and the filtered image output by the NNLF2 filter.
  • the backbone network of the filtering network includes a plurality of residual blocks (AttRes Block) with attention mechanisms connected in sequence, a convolutional layer (Conv 3 ⁇ 3) for realizing feature mapping, and a reorganization layer (Shuffle).
  • the structure of each residual block with an attention mechanism is shown in FIG. 4B, including a convolutional layer (Conv 3×3), an activation layer (PReLU), a convolutional layer (Conv 3×3) and an attention layer (Attention) connected in sequence; M represents the number of feature maps, and N represents the number of samples in one dimension.
  • Model 1 of NNLF2 for filtering the luminance component of the reconstructed image is shown in FIG4C , and its input information includes the luminance component of the reconstructed image (rec_Y), and the output is the luminance component of the filtered image (out_Y). Auxiliary input information such as the luminance component of the predicted image is omitted in the figure.
  • Model 2 of NNLF2 for filtering the two chrominance components of the reconstructed image is shown in FIG4C , and its input information includes the two chrominance components of the reconstructed image (rec_UV), and the luminance component of the reconstructed image as auxiliary input information (rec_Y), and the output of model 2 is the two chrominance components of the filtered image (out_UV).
  • Model 1 and model 2 may also include other auxiliary input information, such as QP information, block partition image, deblocking filter boundary strength information, etc.
  • the NNLF1 scheme and the NNLF2 scheme can be implemented using the neural-network-based common software (NCS: Neural Network based Common Software) in neural-network-based video coding (NNVC: Neural Network based Video Coding), and serve as the baseline tools in the NNVC reference software test platform, namely the baseline NNLF.
  • Comparing NNLF1 and NNLF2: for the luminance component and chrominance components of the reconstructed image input into the neural network, NNLF1 adopts a joint input method, as shown in FIG. 3C, and only one network model needs to be trained; in NNLF2, the luminance component and the chrominance components of the reconstructed image are input separately, as shown in FIG. 4C, and two models need to be trained.
  • In both NNLF1 and NNLF2, the two chrominance components adopt a joint input method in a fixed order; that is, there is a binding relationship between the U component and the V component, as shown in FIG. 5.
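The fixed U-then-V ordering ("binding") can be pictured as a simple channel stacking. Plane sizes and the exact set of input channels below are assumptions for illustration; in real 4:2:0 video the chroma planes would first be brought to a common size with luma.

```python
import numpy as np

H = W = 8
rec_y = np.zeros((H, W)); rec_u = np.ones((H, W)); rec_v = 2 * np.ones((H, W))
pred_y = np.zeros((H, W)); pred_u = np.ones((H, W)); pred_v = 2 * np.ones((H, W))

# Channel axis first, with chroma always entering in U-then-V order:
# [rec_Y, rec_U, rec_V, pred_Y, pred_U, pred_V]
net_input = np.stack([rec_y, rec_u, rec_v, pred_y, pred_u, pred_v], axis=0)
```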
  • the disclosed embodiment proposes a method for adjusting chrominance information, by adjusting the chrominance information input to the NNLF filter, such as swapping the order of the U component and the V component, to further optimize the encoding performance of the NNLF filter.
  • An embodiment of the present disclosure provides a neural network-based loop filtering (NNLF) method, which is applied to a video encoding device. As shown in FIG6 , the method includes:
  • the first mode and the second mode are both set NNLF modes, and the second mode includes a chrominance information adjustment mode.
  • the chrominance information adjustment mode adds a process of specifying adjustment of the input chrominance information before filtering.
  • This embodiment can select an optimal mode from the second mode of adjusting the chrominance information and the first mode of not adjusting the chrominance information according to the rate-distortion cost, thereby improving the encoding performance.
  • the specified adjustment of the chrominance information includes any one or more of the following adjustment methods:
  • the order of the two chrominance components of the reconstructed image is swapped, for example, the sorting order of the U component first and the V component second is adjusted to the sorting order of the V component first and the U component second;
  • the weighted average value and the squared error value of the two chrominance components of the reconstructed image are calculated, and the weighted average value and the squared error value are used as the chrominance information input in the NNLF mode.
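Sketches of the two adjustment methods listed above: the swap is exactly as described, while for the second method an equal-weight average paired with a half-difference plane is shown as one invertible choice of combined chroma representation. The patent's precise weights and second quantity may differ, so this is an assumption-labelled illustration.

```python
import numpy as np

def swap_uv(u, v):
    # Adjustment method 1: swap the order of the two chrominance components.
    return v, u

def combine_uv(u, v):
    # Adjustment method 2 (assumed weights): average plus difference planes.
    avg = (u + v) / 2.0
    diff = (u - v) / 2.0
    return avg, diff

def uncombine_uv(avg, diff):
    # Inverse of combine_uv: the pair is a lossless re-parameterization.
    return avg + diff, avg - diff

u = np.array([[10.0, 20.0]])
v = np.array([[30.0, 40.0]])
u2, v2 = uncombine_uv(*combine_uv(u, v))  # lossless round trip
```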
  • the reconstructed image is a reconstructed image of a current frame or a current slice or a current block, but may also be a reconstructed image of other coding units.
  • the reconstructed image for which the NNLF filter is applied herein may belong to coding units of different levels, such as image level (including frame, slice), block level, etc.
  • the network structures of the models used in the first mode and the second mode are the same or different.
  • the second mode further includes a chrominance information fusion mode
  • the training data used by the model used in the chrominance information fusion mode during training includes the expanded data obtained by performing the specified adjustment on the chrominance information of the reconstructed image in the original data, or includes the original data and the expanded data;
  • the network model used in the first mode is trained using the original data (i.e., the expanded data obtained after the chrominance information adjustment is not used as training data).
  • the second mode further includes a chrominance information adjustment and fusion mode
  • the chrominance information adjustment and fusion mode adds a process of performing a specified adjustment on the input chrominance information before filtering, and the training data used in the training of the model used in the chrominance information adjustment and fusion mode includes the expanded data obtained after performing the specified adjustment on the chrominance information of the reconstructed image in the original data, or includes the original data and the expanded data;
  • the network model used in the first mode is trained using the original data.
  • calculating the rate-distortion cost cost1 of performing NNLF on the input reconstructed image using the first mode includes: obtaining a first filtered image output after performing NNLF on the reconstructed image using the first mode, and calculating cost1 according to the difference between the first filtered image and the corresponding original image;
  • calculating the rate-distortion cost cost2 of performing NNLF on the reconstructed image using the second mode includes: for each second mode, obtaining a second filtered image output after performing NNLF on the reconstructed image using that second mode, and calculating the cost2 of that second mode according to the difference between the second filtered image and the original image.
  • the mode with the minimum rate-distortion cost among the first mode and the second modes is the mode corresponding to the minimum value among the calculated cost1 and the multiple cost2 values.
  • the difference is represented by the sum of squared errors (SSD).
  • the difference can also be represented by other indicators such as mean square error (MSE: Mean Squared Error) and mean absolute error (MAE: Mean Absolute Error), and the present disclosure is not limited to this. The same applies to other embodiments of the present disclosure.
  • the method further includes: using the filtered image obtained when the mode with the smallest rate distortion cost is used to perform NNLF on the reconstructed image as the filtered image output after performing NNLF on the reconstructed image.
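The cost1/cost2 comparison described above can be sketched as follows; SSD is used as the difference measure (as the text suggests), and the function names are illustrative assumptions rather than the disclosure's API.

```python
import numpy as np

def ssd(a, b):
    """Sum of squared errors between two images."""
    d = a.astype(np.float64) - b.astype(np.float64)
    return float(np.sum(d * d))

def select_nnlf_mode(original, first_filtered, second_filtered_list):
    """Return (index, costs): index 0 means the first mode wins,
    index i >= 1 means the i-th second mode has the minimum cost."""
    costs = [ssd(first_filtered, original)]
    costs += [ssd(f, original) for f in second_filtered_list]
    return int(np.argmin(costs)), costs
```

The filtered image produced by the winning mode is then used as the NNLF output for the reconstructed image.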
  • the NNLF filter used by the encoding end and/or the decoding end to perform NNLF processing is set after the deblocking filter or the sample adaptive compensation filter, and before the adaptive correction filter.
  • the structure of the filter unit (or loop filter module, see Figure 1B and Figure 1C) is shown in Figure 7, where DBF represents a deblocking filter, SAO represents a sample adaptive compensation filter, and ALF represents an adaptive correction filter.
  • NNLF-A represents an NNLF filter using a first mode
  • NNLF-B represents an NNLF filter using a second mode such as a chroma information adjustment mode, which can also be called a chroma information adjustment module (Chroma Adjustment, CA).
  • the NNLF-A filter can be the same as the aforementioned NNLF1, NNLF2, etc., which do not perform chroma information adjustment.
  • although the NNLF-A filter and the NNLF-B filter are shown separately in FIG. 7, a single model can be used in actual implementation, and the NNLF-B filter can be regarded as the NNLF-A filter with chrominance information adjustment added.
  • when the input chrominance information is not adjusted (e.g., no exchange of the UV order), loop filtering is performed once using the model (i.e., NNLF is performed on the reconstructed image using the first mode, or the NNLF-A filter), and the rate-distortion cost cost1 is calculated based on the difference between the output first filtered image and the original image, such as the sum of squared errors (SSD); when the input chrominance information is adjusted, loop filtering is performed once more using the model (i.e., NNLF is performed on the reconstructed image using the second mode, or the NNLF-B filter), and the rate-distortion cost cost2 is calculated based on the difference between the output second filtered image and the original image.
  • the information of the used NNLF mode (i.e., the selected NNLF mode) is represented by the first flag and encoded into the bitstream for the decoder to read.
  • the NNLF mode actually used by the encoding end is determined by decoding the first flag, and the determined NNLF mode is used to perform NNLF on the input reconstructed image.
  • the first mode is used to perform NNLF on the reconstructed image
  • the second mode is used to perform NNLF on the reconstructed image
  • the deployment position of the NNLF filter is not limited to the position shown in the figure; it is easy to understand that the implementation of the NNLF method of the present disclosure is not limited by its deployment position.
  • the filter in the filter unit is not limited to the filter shown in Figure 7, and there may be more or fewer filters, or other types of filters may be used.
  • An embodiment of the present disclosure further provides a video encoding method, which is applied to a video encoding device, comprising: when performing a neural network-based loop filter NNLF on a reconstructed image, as shown in FIG8 , performing the following processing:
  • Step S210 when NNLF allows adjustment of chrominance information, performing NNLF on the reconstructed image according to any NNLF method applied to the encoding end of the present disclosure
  • Step S220 Encode a first flag of the reconstructed image, where the first flag includes information about the NNLF mode used when performing NNLF on the reconstructed image.
  • the first mode and the second mode are both set NNLF modes, and the second mode includes a chrominance information adjustment mode.
  • the chrominance information adjustment mode adds a process of specifying adjustment of the input chrominance information before filtering.
  • an optimal mode can be selected from the second mode for adjusting chrominance information and the first mode for not adjusting chrominance information according to the rate-distortion cost, and the selected mode information can be encoded into the bitstream, thereby improving the encoding performance.
  • the first flag is a picture-level syntax element or a block-level syntax element.
  • determining whether NNLF allows chrominance information adjustment includes:
  • decoding a picture-level chrominance information adjustment permission flag, and determining, based on the permission flag, whether NNLF allows chrominance information adjustment.
  • the method further includes: when it is determined that NNLF does not allow chrominance information adjustment, using the first mode to perform NNLF on the reconstructed image, and skipping encoding of the chrominance information adjustment permission flag.
  • the second mode is a chromaticity information adjustment mode
  • the first flag is used to indicate whether to perform chromaticity information adjustment when performing NNLF on the reconstructed image
  • encoding the first flag of the reconstructed image includes: when it is determined that the first mode is used to perform NNLF on the reconstructed image, setting the first flag to a value indicating that no chrominance information adjustment is performed; when it is determined that the second mode is used to perform NNLF on the reconstructed image, setting the first flag to a value indicating that chrominance information adjustment is performed.
  • the first flag is used to indicate whether the second mode is used when performing NNLF on the reconstructed image
  • encoding the first flag of the reconstructed image includes:
  • when it is determined that a second mode is used to perform NNLF on the reconstructed image, setting the first flag to a value indicating use of the second mode, and further encoding a second flag, where the second flag contains the index information of the second mode with the minimum rate-distortion cost.
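The signalling just described (first flag, then an optional mode index when several second modes exist) can be sketched as follows; `write_bit` and `write_index` are hypothetical bitstream-writing callbacks, not an API defined by the disclosure.

```python
def write_nnlf_flags(selected, write_bit, write_index):
    """Encoder-side sketch: selected is ("first", None) or ("second", idx).

    The first flag says whether any second mode is used; when it is,
    a second flag carries the index of the minimum-cost second mode.
    """
    mode, idx = selected
    if mode == "first":
        write_bit(0)
    else:
        write_bit(1)
        write_index(idx)
```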
  • An embodiment of the present disclosure provides a video encoding method applied to an encoding end, which mainly involves the processing of NNLF.
  • the second mode of this embodiment is a chrominance information adjustment mode.
  • the processing is performed in the prescribed filter order.
  • after the data input to the NNLF filter, such as the reconstructed image (which may be the filtered image output by other filters), is obtained, the following processing is performed:
  • Step a) judging whether NNLF in the current sequence allows chroma information adjustment according to the sequence-level chroma information adjustment permission flag ca_enable_flag; if ca_enable_flag is "1", then the chroma information adjustment process is attempted for the current sequence, and the process jumps to step b); if ca_enable_flag is "0", then NNLF in the current sequence does not allow chroma information adjustment, and at this time, the first mode is used to perform NNLF on the reconstructed image, and the encoding of the first flag is skipped, and the process ends;
  • Step b) for the reconstructed image of the current frame of the current sequence, first use the first mode to perform NNLF, that is, input the original input information into the NNLF model for filtering, and obtain the first filtered image from the output of the model;
  • Step c) using the second mode (the chrominance information adjustment mode in this embodiment) to perform NNLF, that is, the order of the U component and the V component of the input reconstructed image is interchanged, and then input into the NNLF model for filtering, and a second filtered image is obtained from the output of the model;
  • Step d) calculating a rate-distortion cost C_NNLF according to the difference between the first filtered image and the original image, and calculating a rate-distortion cost C_CA according to the difference between the second filtered image and the original image; comparing the two rate-distortion costs: if C_CA < C_NNLF, determining to use the second mode to perform NNLF on the reconstructed image, and using the second filtered image as the filtered image finally output after NNLF filtering of the reconstructed image; if C_CA ≥ C_NNLF, determining to use the first mode to perform NNLF on the reconstructed image, and using the first filtered image as the filtered image finally output after NNLF filtering of the reconstructed image;
  • SSD(*) indicates the SSD for a certain color component
  • Wy, Wu, and Wv represent the weight values of the SSD of the Y component, U component, and V component, respectively, such as 10:1:1 or 8:1:1.
  • M represents the length of the reconstructed image of the current frame
  • N represents the width of the reconstructed image of the current frame
  • rec(x, y) and org(x, y) represent the pixel values of the reconstructed image and the original image at the pixel point (x, y), respectively.
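The weighted rate-distortion cost used in steps b)–d) can be sketched as follows. The dict-based YUV layout and the function names are assumptions for illustration; the 10:1:1 weights follow the example weights given above.

```python
import numpy as np

def ssd(rec, org):
    """Sum over pixels of (rec(x, y) - org(x, y))^2 for one color component."""
    d = rec.astype(np.float64) - org.astype(np.float64)
    return float(np.sum(d * d))

def weighted_cost(rec, org, weights=(10.0, 1.0, 1.0)):
    """C = Wy*SSD(Y) + Wu*SSD(U) + Wv*SSD(V).

    rec/org are assumed to be dicts {"Y": ..., "U": ..., "V": ...}
    of numpy arrays holding the reconstructed and original components.
    """
    wy, wu, wv = weights
    return (wy * ssd(rec["Y"], org["Y"])
            + wu * ssd(rec["U"], org["U"])
            + wv * ssd(rec["V"], org["V"]))
```

Computing this cost once for the first-mode output and once for the second-mode output gives C_NNLF and C_CA, whose comparison decides the mode.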
  • Step e) encoding a first flag according to the mode used by the current frame (i.e., the selected mode) to indicate whether to perform chrominance information adjustment.
  • the first flag may also be referred to as a chrominance information adjustment flag picture_ca_enable_flag, which is used to indicate whether chrominance information adjustment is performed when NNLF is performed on the reconstructed image;
  • the next frame of reconstructed image is loaded and processed in the same way.
  • NNLF processing is performed based on the reconstructed image of the current frame.
  • NNLF processing may also be performed based on other coding units such as blocks (such as CTU) and slices in the current frame.
  • the arrangement order of its input information can be ⁇ recY, recU, recV, predY, predU, predV, baseQP, sliceQP, slicetype,... ⁇
  • the arrangement order of its output information can be ⁇ cnnY, cnnU, cnnV ⁇ , where rec represents the reconstructed image, pred represents the predicted image, and cnn represents the output filtered image.
  • the order of filter input information is adjusted to ⁇ recY, recV, recU, predY, predV, predU, baseQP, sliceQP, slicetype, ... ⁇ , and the order of output network information is adjusted to ⁇ cnnY, cnnV, cnnU ⁇ , as shown in FIG12B .
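The reordering between FIG. 12A and FIG. 12B amounts to a simple rearrangement of the filter's input list; the function name and dict layout below are illustrative assumptions.

```python
def nnlf_inputs(rec, pred, base_qp, slice_qp, slice_type, swap_uv=False):
    """Order the filter inputs as {recY, recU, recV, predY, predU, predV,
    baseQP, sliceQP, slicetype}; with swap_uv=True, produce the second-mode
    order {recY, recV, recU, predY, predV, predU, ...}."""
    u, v = ("V", "U") if swap_uv else ("U", "V")
    return [rec["Y"], rec[u], rec[v], pred["Y"], pred[u], pred[v],
            base_qp, slice_qp, slice_type]
```

On the output side, the same swap is applied in reverse so that {cnnY, cnnV, cnnU} is mapped back to the normal component order.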
  • This embodiment explores the generalization of NNLF input by adjusting the input order of U component and V component in chrominance information, which can further improve the filtering performance of neural network. At the same time, only a few bits of flag need to be encoded as a control switch, which has little effect on decoding complexity.
  • the three YUV components of the image are decided jointly by a single rate-distortion cost. In other embodiments, more refined processing can also be attempted: for each component, the rate-distortion cost in different modes is calculated separately, and the mode with the smallest rate-distortion cost is selected for that component.
  • An embodiment of the present disclosure further provides a neural network-based loop filtering (NNLF) method, which is applied to a video decoding device. As shown in FIG9 , the method includes:
  • Step S310 decoding a first flag of a reconstructed image, where the first flag includes information of an NNLF mode used when performing NNLF on the reconstructed image;
  • Step S320 determining an NNLF mode to be used when performing NNLF on the reconstructed image according to the first flag, and performing NNLF on the reconstructed image according to the determined NNLF mode;
  • the NNLF mode includes a first mode and a second mode, the second mode includes a chrominance information adjustment mode, and the chrominance information adjustment mode adds a process of performing specified adjustments on the input chrominance information before filtering compared to the first mode.
  • the loop filtering method of the neural network in this embodiment selects a better mode from two modes of adjusting chrominance information and not adjusting chrominance information through the first flag, which can enhance the filtering effect of NNLF and improve the quality of the decoded image.
  • the specified adjustment of the chromaticity information includes any one or more of the following adjustment methods:
  • the weighted average value and the square error value of the two chrominance components of the reconstructed image are calculated, and the weighted average value and the square error value are used as the chrominance information input in the NNLF mode.
  • the reconstructed image is a reconstructed image of a current frame or a current slice or a current block.
  • the first flag is a picture-level syntax element or a block-level syntax element.
  • the second mode further includes a chrominance information fusion mode
  • the training data used by the model used in the chrominance information fusion mode during training includes the expanded data obtained by performing the specified adjustment on the chrominance information of the reconstructed image in the original data, or includes the original data and the expanded data;
  • the model used in the first mode is trained using the original data.
  • the second mode further includes a chrominance information adjustment and fusion mode
  • the chromaticity information adjustment and fusion mode adds a process of performing a specified adjustment on the input chromaticity information before filtering, and the training data used in the training of the model used in the chromaticity information fusion mode includes the expanded data obtained by performing the specified adjustment on the chromaticity information of the reconstructed image in the original data, or includes the original data and the expanded data;
  • the model used in the first mode is trained using the original data.
  • there is one first mode and one or more second modes; the network structures of the models used in the first mode and the second mode are the same or different.
  • the second mode is a chromaticity information adjustment mode
  • the first flag is used to indicate whether to perform chromaticity information adjustment when performing NNLF on the reconstructed image
  • Determining the NNLF mode to be used when performing NNLF on the reconstructed image according to the first flag includes: when the first flag indicates to adjust the chromaticity information, determining to use the second mode when performing NNLF on the reconstructed image; when the first flag indicates not to adjust the chromaticity information, determining to use the first mode when performing NNLF on the reconstructed image.
  • the image header is defined as follows:
  • the ca_enable_flag in the table is a sequence-level chrominance information adjustment permission flag.
  • picture_ca_enable_flag indicates the image-level chrominance information adjustment usage flag, that is, the first flag mentioned above.
  • when picture_ca_enable_flag is 1, it indicates that chrominance information adjustment is performed when NNLF is performed on the reconstructed image, that is, the second mode (the chrominance information adjustment mode in this embodiment) is used; when picture_ca_enable_flag is 0, it indicates that chrominance information adjustment is not performed when NNLF is performed on the reconstructed image, that is, the first mode is used.
  • when ca_enable_flag is 0, the decoding and encoding of picture_ca_enable_flag are skipped.
  • the first flag is used to indicate whether the second mode is used when performing NNLF on the reconstructed image
  • determining, according to the first flag, the NNLF mode used when performing NNLF on the reconstructed image includes:
  • when the first flag indicates use of the second mode, determining to use the second mode when performing NNLF on the reconstructed image.
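The decoder-side decision described above can be sketched as follows. `read_bit` and `read_index` are hypothetical bitstream-reading callbacks; the second-flag branch applies only in the variant where several second modes exist and an index is signalled.

```python
def parse_nnlf_mode(ca_enable_flag, read_bit, read_index):
    """Decoder-side sketch: if the sequence-level ca_enable_flag is 0,
    skip the first flag and use the first mode; otherwise the first flag
    (picture_ca_enable_flag) selects first vs. second mode, and a second
    flag carries the mode index when several second modes exist."""
    if ca_enable_flag == 0:
        return ("first", None)
    first_flag = read_bit()  # picture_ca_enable_flag
    if first_flag == 0:
        return ("first", None)
    return ("second", read_index())  # index of the chosen second mode
```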
  • An embodiment of the present disclosure further provides a video decoding method, which is applied to a video decoding device, comprising: when performing a neural network-based loop filter NNLF on a reconstructed image, as shown in FIG10 , performing the following processing:
  • Step S410 determining whether NNLF allows chrominance information adjustment
  • Step S420 When NNLF allows adjustment of chrominance information, perform NNLF on the reconstructed image according to the NNLF method described in any embodiment of the present disclosure applied to the decoding end.
  • the video decoding method of this embodiment selects a better mode from two modes of adjusting chrominance information and not adjusting chrominance information through the first flag, which can enhance the filtering effect of NNLF and improve the quality of the decoded image.
  • determining whether NNLF allows chrominance information adjustment includes:
  • decoding a picture-level chrominance information adjustment permission flag, and determining, based on the permission flag, whether NNLF allows chrominance information adjustment.
  • sequence header of the video sequence is shown in the following table:
  • the ca_enable_flag in the table is the sequence-level chrominance information adjustment enable flag.
  • the method further includes: when NNLF does not allow adjustment of chrominance information, skipping decoding of the first flag, and performing NNLF on the reconstructed image using the first mode.
  • the NNLF1 baseline tool is selected as a comparison.
  • a mode selection process including a chrominance information adjustment mode is performed on the inter-coded frames (i.e., non-I frames).
  • JVET Joint Video Experts Team
  • the anchor for comparison is NNLF1. The results are shown in Tables 1 and 2.
  • EncT (Encoding Time): 10X% means that after the tested technique is integrated, the encoding time is 10X% of that before integration, i.e., the encoding time increases by X%.
  • DecT (Decoding Time): 10X% means that after the tested technique is integrated, the decoding time is 10X% of that before integration, i.e., the decoding time increases by X%.
  • ClassA1 and Class A2 are test video sequences with a resolution of 3840x2160
  • ClassB is a test sequence with a resolution of 1920x1080
  • ClassC is 832x480
  • ClassD is 416x240
  • ClassE is 1280x720
  • ClassF is a screen content sequence with several different resolutions.
  • the method of this embodiment can also be used to select the NNLF mode for intra-coded frames (I frames).
  • An embodiment of the present disclosure further provides a code stream, wherein the code stream is generated by the video encoding method described in any embodiment of the present disclosure.
  • An embodiment of the present disclosure also provides a loop filter based on a neural network, as shown in Figure 11, including a processor and a memory storing a computer program, wherein the processor can implement the loop filtering method based on a neural network described in any embodiment of the present disclosure when executing the computer program.
  • the processor and the memory are connected via a system bus, and the loop filter may also include other components such as a memory and a network interface.
  • An embodiment of the present disclosure further provides a video decoding device, see FIG11 , comprising a processor and a memory storing a computer program, wherein the processor can implement the video decoding method as described in any embodiment of the present disclosure when executing the computer program.
  • An embodiment of the present disclosure further provides a video encoding device, see FIG11 , comprising a processor and a memory storing a computer program, wherein the processor can implement the video encoding method as described in any embodiment of the present disclosure when executing the computer program.
  • the processor of the above-mentioned embodiment of the present disclosure may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP for short), a microprocessor, etc., or other conventional processors, etc.; the processor may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), discrete logic or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or other equivalent integrated or discrete logic circuits, or a combination of the above devices. That is, the processor of the above-mentioned embodiment may be any processing device or device combination for implementing the various methods, steps and logic block diagrams disclosed in the embodiment of the present invention.
  • DSP digital signal processor
  • ASIC application-specific integrated circuit
  • FPGA field-programmable gate array
  • the instructions for the software may be stored in a suitable non-volatile computer-readable storage medium, and one or more processors may be used to execute the instructions in hardware to implement the method of the embodiment of the present disclosure.
  • processors used herein may refer to the above-mentioned structure or any other structure suitable for implementing the technology described herein.
  • An embodiment of the present disclosure further provides a video encoding and decoding system, see FIG. 1A , which includes the video encoding device described in any embodiment of the present disclosure and the video decoding device described in any embodiment of the present disclosure.
  • An embodiment of the present disclosure further provides a non-volatile computer-readable storage medium, wherein the computer-readable storage medium stores a computer program, wherein the computer program, when executed by a processor, can implement the neural network-based loop filtering method as described in any embodiment of the present disclosure, or implement the video decoding method as described in any embodiment of the present disclosure, or implement the video encoding method as described in any embodiment of the present disclosure.
  • the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or codes on a computer-readable medium or transmitted via a computer-readable medium and executed by a hardware-based processing unit.
  • a computer-readable medium may include a computer-readable storage medium corresponding to a tangible medium such as a data storage medium, or a communication medium that facilitates the transmission of a computer program from one place to another, such as according to a communication protocol. In this manner, a computer-readable medium may generally correspond to a non-transitory tangible computer-readable storage medium or a communication medium such as a signal or carrier wave.
  • a data storage medium may be any available medium that can be accessed by one or more computers or one or more processors to retrieve instructions, codes, and/or data structures for implementing the techniques described in the present disclosure.
  • a computer program product may include a computer-readable medium.
  • such computer-readable storage media may include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage, flash memory, or any other medium that can be used to store the desired program code in the form of instructions or data structures and can be accessed by a computer.
  • any connection may also be referred to as a computer-readable medium.
  • coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio and microwave
  • coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio and microwave are included in the definition of medium.
  • disks and discs as used herein include compact discs (CD), laser discs, optical discs, digital versatile discs (DVD), floppy disks, and Blu-ray discs, where disks typically reproduce data magnetically, while discs use lasers to reproduce data optically. Combinations of the above should also be included within the scope of computer-readable media.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A neural-network-based loop filtering method, and video encoding and decoding methods, apparatuses and a system. When performing NNLF on a reconstructed image, the encoding end can select an NNLF mode that performs chrominance information adjustment or an NNLF mode that does not; the decoding end, according to a corresponding flag, selects one of these modes to perform NNLF on the reconstructed image. Encoding performance can thereby be improved.

Description

Neural-Network-Based Loop Filtering, Video Encoding and Decoding Methods, Apparatuses and System

Technical Field

The embodiments of the present disclosure relate to, but are not limited to, video technology, and more specifically, to a neural-network-based loop filtering method, video encoding and decoding methods, apparatuses and a system.
Background

Digital video compression technology mainly compresses huge amounts of digital video data to facilitate transmission and storage. The pictures of an original video sequence contain a luminance component and chrominance components. In digital video encoding, the encoder reads black-and-white or color pictures and partitions each picture into largest coding units (LCU: largest coding unit) of equal size (e.g., 128x128 or 64x64). Each largest coding unit can be partitioned according to rules into rectangular coding units (CU: coding unit), which can be further partitioned into prediction units (PU: prediction unit), transform units (TU: transform unit), and so on. The hybrid coding framework includes modules such as prediction, transform, quantization, entropy coding, and in-loop filtering. The prediction module can use intra prediction and inter prediction. Intra prediction predicts the pixel information within the current block based on information of the same picture, to remove spatial redundancy; inter prediction can refer to information of different pictures and use motion estimation to search for the motion vector information that best matches the current block, to remove temporal redundancy. The transform converts the predicted residual into the frequency domain to redistribute its energy; combined with quantization, information to which the human eye is insensitive can be removed, eliminating visual redundancy. Entropy coding removes character redundancy according to the current context model and the probability information of the binary bitstream, generating the bitstream.

With the surge of Internet video and people's ever-increasing requirements on video quality, although existing digital video compression standards can already save considerable video data, better digital video compression techniques are still needed to reduce the bandwidth and traffic pressure of digital video transmission.
Summary of the Invention

The following is an overview of the subject matter described in detail herein. This overview is not intended to limit the scope of protection of the claims.

An embodiment of the present disclosure provides a neural-network-based loop filtering (NNLF: Neural Network based Loop Filter) method, applied to a video decoding apparatus, the method including:

decoding a first flag of a reconstructed image, the first flag containing information of the NNLF mode used when performing NNLF on the reconstructed image;

determining, according to the first flag, the NNLF mode used when performing NNLF on the reconstructed image, and performing NNLF on the reconstructed image according to the determined NNLF mode;

wherein the NNLF modes include a first mode and a second mode, the second mode includes a chrominance information adjustment mode, and, compared with the first mode, the chrominance information adjustment mode adds a process of performing a specified adjustment on the input chrominance information before filtering.

An embodiment of the present disclosure further provides a video decoding method, applied to a video decoding apparatus, including: when performing neural-network-based loop filtering (NNLF) on a reconstructed image, performing the following processing:

when NNLF allows chrominance information adjustment, performing NNLF on the reconstructed image according to the NNLF method of any embodiment of the present disclosure applied to the decoding end.

An embodiment of the present disclosure further provides a neural-network-based loop filtering method, applied to a video encoding apparatus, the method including:

calculating the rate-distortion cost of performing NNLF on an input reconstructed image using a first mode, and the rate-distortion cost of performing NNLF on the reconstructed image using a second mode;

determining to perform NNLF on the reconstructed image using the mode with the minimum rate-distortion cost among the first mode and the second mode;

wherein the first mode and the second mode are both set NNLF modes, the second mode includes a chrominance information adjustment mode, and, compared with the first mode, the chrominance information adjustment mode adds a process of performing a specified adjustment on the input chrominance information before filtering.

An embodiment of the present disclosure further provides a video encoding method, applied to a video encoding apparatus, including: when performing neural-network-based loop filtering (NNLF) on a reconstructed image, performing the following processing:

when NNLF allows chrominance information adjustment, performing NNLF on the reconstructed image according to the NNLF method of any embodiment of the present disclosure applied to the encoding end;

encoding a first flag of the reconstructed image, the first flag containing information of the NNLF mode used when performing NNLF on the reconstructed image.

An embodiment of the present disclosure further provides a bitstream, wherein the bitstream is generated by the video encoding method of any embodiment of the present disclosure.

An embodiment of the present disclosure further provides a neural-network-based loop filter, including a processor and a memory storing a computer program, wherein the processor, when executing the computer program, can implement the neural-network-based loop filtering method of any embodiment of the present disclosure.

An embodiment of the present disclosure further provides a video decoding apparatus, including a processor and a memory storing a computer program, wherein the processor, when executing the computer program, can implement the video decoding method of any embodiment of the present disclosure.

An embodiment of the present disclosure further provides a video encoding apparatus, including a processor and a memory storing a computer program, wherein the processor, when executing the computer program, can implement the video encoding method of any embodiment of the present disclosure.

An embodiment of the present disclosure further provides a video encoding and decoding system, including the video encoding apparatus of any embodiment of the present disclosure and the video decoding apparatus of any embodiment of the present disclosure.

An embodiment of the present disclosure further provides a non-transitory computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, can implement the neural-network-based loop filtering method of any embodiment of the present disclosure, or the video decoding method of any embodiment of the present disclosure, or the video encoding method of any embodiment of the present disclosure.

Other aspects will become apparent upon reading and understanding the drawings and the detailed description.
Brief Description of the Drawings

The accompanying drawings are provided for an understanding of the embodiments of the present disclosure and constitute a part of the specification. Together with the embodiments of the present disclosure, they serve to explain the technical solutions of the present disclosure and do not constitute a limitation thereon.

FIG. 1A is a schematic diagram of an encoding and decoding system of an embodiment; FIG. 1B is a framework diagram of the encoding end in FIG. 1A; FIG. 1C is a framework diagram of the decoding end in FIG. 1A;

FIG. 2 is a block diagram of a filter unit of an embodiment;

FIG. 3A is a network structure diagram of an NNLF filter of an embodiment; FIG. 3B is a structure diagram of the residual block in FIG. 3A; FIG. 3C is a schematic diagram of the input and output of the NNLF filter in FIG. 3A;

FIG. 4A is a structure diagram of the backbone network in an NNLF filter of another embodiment; FIG. 4B is a structure diagram of the residual block in FIG. 4A; FIG. 4C is a schematic diagram of the input and output of the NNLF filter in FIG. 4A;

FIG. 5 is a schematic diagram of a feature-map arrangement of input information;

FIG. 6 is a flowchart of an NNLF method applied to the encoding end according to an embodiment of the present disclosure;

FIG. 7 is a block diagram of a filter unit according to an embodiment of the present disclosure;

FIG. 8 is a flowchart of a video encoding method according to an embodiment of the present disclosure;

FIG. 9 is a flowchart of an NNLF method applied to the decoding end according to an embodiment of the present disclosure;

FIG. 10 is a flowchart of a video decoding method according to an embodiment of the present disclosure;

FIG. 11 is a schematic diagram of the hardware structure of an NNLF filter according to an embodiment of the present disclosure;

FIG. 12A is a schematic diagram of the input and output of the NNLF filter when the order of the chrominance information is not adjusted, according to an embodiment of the present disclosure;

FIG. 12B is a schematic diagram of the input and output of the NNLF filter when the order of the chrominance information is adjusted, according to an embodiment of the present disclosure.
Detailed Description

The present disclosure describes a number of embodiments, but the description is exemplary rather than limiting, and it will be apparent to those of ordinary skill in the art that there can be more embodiments and implementations within the scope covered by the embodiments described in the present disclosure.

In the description of the present disclosure, words such as "exemplary" or "for example" are used to indicate an example, illustration, or explanation. Any embodiment described as "exemplary" or "for example" in the present disclosure should not be construed as preferred or advantageous over other embodiments. "And/or" herein describes an association relationship between associated objects, indicating that three relationships can exist; for example, A and/or B can represent: A alone, both A and B, or B alone. "Multiple" means two or more. In addition, in order to clearly describe the technical solutions of the embodiments of the present disclosure, words such as "first" and "second" are used to distinguish identical or similar items with substantially the same functions and effects. Those skilled in the art can understand that words such as "first" and "second" do not limit quantity or execution order, and do not necessarily imply a difference.

In describing representative exemplary embodiments, the specification may have presented methods and/or processes as a particular sequence of steps. However, to the extent that the method or process does not depend on the particular order of the steps described herein, the method or process should not be limited to the steps in the particular order described. As those of ordinary skill in the art will understand, other step orders are also possible. Therefore, the particular order of steps set forth in the specification should not be construed as a limitation on the claims. Furthermore, claims directed to the method and/or process should not be limited to performing their steps in the order written; those skilled in the art can readily appreciate that these orders can vary while remaining within the spirit and scope of the embodiments of the present disclosure.

The neural-network-based loop filtering method and the video encoding and decoding methods of the embodiments of the present disclosure can be applied to various video coding standards, for example: H.264/Advanced Video Coding (AVC), H.265/High Efficiency Video Coding (HEVC), H.266/Versatile Video Coding (VVC), AVS (Audio Video coding Standard), other standards formulated by MPEG (Moving Picture Experts Group), AOM (Alliance for Open Media), and JVET (Joint Video Experts Team) and extensions of these standards, or any other custom standards.
FIG. 1A is a block diagram of a video encoding and decoding system usable in embodiments of the present disclosure. As shown, the system is divided into an encoding end 1 and a decoding end 2; the encoding end 1 generates a bitstream, and the decoding end 2 can decode the bitstream. The decoding end 2 can receive the bitstream from the encoding end 1 via a link 3. The link 3 includes one or more media or apparatuses capable of moving the bitstream from the encoding end 1 to the decoding end 2. In one example, the link 3 includes one or more communication media that enable the encoding end 1 to send the bitstream directly to the decoding end 2. The encoding end 1 modulates the bitstream according to a communication standard and sends the modulated bitstream to the decoding end 2. The one or more communication media can include wireless and/or wired communication media and can form part of a packet network. In another example, the bitstream can also be output from an output interface 15 to a storage apparatus, and the decoding end 2 can read the stored data from the storage apparatus via streaming or download.

As shown, the encoding end 1 includes a data source 11, a video encoding apparatus 13, and the output interface 15. The data source 11 includes a video capture apparatus (e.g., a camera), an archive containing previously captured data, a feed interface for receiving data from a content provider, a computer graphics system for generating data, or a combination of these sources. The video encoding apparatus 13, which can also be called a video encoder, encodes the data from the data source 11 and outputs it to the output interface 15; the output interface 15 can include at least one of a regulator, a modem, and a transmitter. The decoding end 2 includes an input interface 21, a video decoding apparatus 23, and a display apparatus 25. The input interface 21 includes at least one of a receiver and a modem, and can receive the bitstream via the link 3 or from the storage apparatus. The video decoding apparatus 23, also called a video decoder, decodes the received bitstream. The display apparatus 25 displays the decoded data; it can be integrated with the other apparatuses of the decoding end 2 or set separately, and is optional for the decoding end. In other examples, the decoding end can include other apparatuses or devices that apply the decoded data.
FIG. 1B is a block diagram of an exemplary video encoding apparatus usable in embodiments of the present disclosure. As shown, the video encoding apparatus 10 includes:

a partition unit 101, configured to cooperate with a prediction unit 100 to partition received video data into slices, coding tree units (CTU: Coding Tree Unit), or other larger units. The received video data can be a video sequence including video frames such as I frames, P frames, or B frames.

The prediction unit 100 is configured to partition a CTU into coding units (CU: Coding Unit) and perform intra-prediction coding or inter-prediction coding on the CU. When performing intra prediction and inter prediction on a CU, the CU can be partitioned into one or more prediction units (PU: prediction unit).

The prediction unit 100 includes an inter prediction unit 121 and an intra prediction unit 126.

The inter prediction unit 121 is configured to perform inter prediction on a PU and generate prediction data of the PU, the prediction data including a prediction block of the PU, motion information of the PU, and various syntax elements. The inter prediction unit 121 can include a motion estimation (ME: motion estimation) unit and a motion compensation (MC: motion compensation) unit. The motion estimation unit can be used for motion estimation to generate motion vectors, and the motion compensation unit can be used to obtain or generate prediction blocks according to the motion vectors.

The intra prediction unit 126 is configured to perform intra prediction on a PU and generate prediction data of the PU. The prediction data of the PU can include a prediction block of the PU and various syntax elements.

A residual generation unit 102 (represented in the figure by the circle with a plus sign after the partition unit 101) is configured to generate a residual block of a CU by subtracting the prediction blocks of the PUs into which the CU is partitioned from the original block of the CU.

A transform processing unit 104 is configured to partition a CU into one or more transform units (TU: Transform Unit); the partitioning of prediction units and transform units can differ. A residual block associated with a TU is a sub-block obtained by partitioning the residual block of the CU. A coefficient block associated with the TU is generated by applying one or more transforms to the residual block associated with the TU.

A quantization unit 106 is configured to quantize the coefficients in a coefficient block based on a quantization parameter; the degree of quantization of the coefficient block can be changed by adjusting the quantization parameter (QP: Quantizer Parameter).

An inverse quantization unit 108 and an inverse transform unit 110 are respectively configured to apply inverse quantization and inverse transform to the coefficient block to obtain a reconstructed residual block associated with the TU.

A reconstruction unit 112 (represented in the figure by the circle with a plus sign after the inverse transform processing unit 110) is configured to add the reconstructed residual block to the prediction block generated by the prediction unit 100 to generate a reconstructed image.

A filter unit 113 is configured to perform loop filtering on the reconstructed image.

A decoded picture buffer 114 is configured to store the loop-filtered reconstructed image. The intra prediction unit 126 can extract from the decoded picture buffer 114 reference images of blocks adjacent to the current block to perform intra prediction. The inter prediction unit 121 can use the reference image of the previous frame buffered in the decoded picture buffer 114 to perform inter prediction on PUs of the current frame.

An entropy encoding unit 115 is configured to perform entropy encoding operations on received data (such as syntax elements, quantized coefficient blocks, motion information, etc.) to generate the video bitstream.

In other examples, the video encoding apparatus 10 can include more, fewer, or different functional components than this example; for example, the transform processing unit 104, the inverse transform processing unit 110, etc. can be omitted.
图1C为可用于本公开实施例的一示例性的视频解码装置的框图。如图所示,视频解码装置15包括:
熵解码单元150,设置为对接收的已编码视频码流进行熵解码,提取语法元素、量化后的系数块和PU的运动信息等。预测单元152、反量化单元154、反变换处理单元156、重建单元158以及滤波器单元159均可基于从码流提取的语法元素来执行相应的操作。
反量化单元154,设置为对量化后的TU关联的系数块进行反量化。
反变换处理单元156,设置为将一种或多种反变换应用于反量化后的系数块以便产生TU的重建残差块。
预测单元152包含帧间预测单元162和帧内预测单元164。如果当前块使用帧内预测编码,帧内预测单元164基于从码流解码出的语法元素确定PU的帧内预测模式,结合从解码图像缓冲器160获取的当前块邻近的已重建参考信息执行帧内预测。如果当前块使用帧间预测编码,帧间预测单元162基于当前块的运动信息和相应的语法元素确定当前块的参考块,并基于从解码图像缓冲器160获取的所述参考块执行帧间预测。
重建单元158(图中用反变换处理单元156后的带加号的圆圈表示),设置为基于TU关联的重建残差块和预测单元152执行帧内预测或帧间预测产生的当前块的预测块,得到重建图像。
滤波器单元159,设置为对重建图像执行环路滤波。
解码图像缓冲器160,设置为存储环路滤波后的重建图像,作为参考图像用于后续运动补偿、帧内预测、帧间预测等,也可将滤波后的重建图像作为已解码视频数据输出,以在显示装置上呈现。
在其它实施例中,视频解码装置15可以包含更多、更少或不同的功能组件,如在某些情况下可以取消反变换处理单元156等。
基于上述视频编码装置和视频解码装置,可以执行以下基本的编解码流程:在编码端,将一帧图像划分成块,对当前块进行帧内预测或帧间预测或其他算法产生当前块的预测块,使用当前块的原始块减去预测块得到残差块,对残差块进行变换和量化得到量化系数,对量化系数进行熵编码生成码流。在解码端,对当前块进行帧内预测或帧间预测产生当前块的预测块,另一方面对解码码流得到的量化系数进行反量化、反变换得到残差块,将预测块和残差块相加得到重建块,重建块组成重建图像,基于图像或基于块对重建图像进行环路滤波得到解码图像。编码端同样通过和解码端类似的操作以获得解码图像,也可称为环路滤波后的重建图像。环路滤波后的重建图像可以作为对后续帧进行帧间预测的参考帧。编码端确定的块划分信息,预测、变换、量化、熵编码、环路滤波等模式信息和参数信息可以写入码流。解码端通过解码码流或根据设定信息进行分析,确定编码端使用的块划分信息,预测、变换、量化、熵编码、环路滤波等模式信息和参数信息,从而保证编码端获得的解码图像和解码端获得的解码图像相同。
以上虽然是以基于块的混合编码框架为示例,但本公开实施例并不局限于此,随着技术的发展,该框架中的一个或多个模块,及该流程中的一个或多个步骤可以被替换或优化。
本公开实施例涉及但不限于上述编码端和解码端中的滤波器单元(该滤波器单元也可称为环路滤波模块)及相应的环路滤波方法。
在一实施例中,编码端和解码端的滤波器单元包含去块滤波器(DBF:DeBlocking Filter)20、样值自适应补偿滤波器(SAO:Sample adaptive Offset)22和自适应修正滤波器(ALF:Adaptive loop filter)26等工具,在SAO和ALF之间,还包括基于神经网络的环路滤波器(NNLF:Neural Network based Loop Filter)24,如图2所示。滤波器单元对重建图像执行环路滤波,可以弥补失真信息,为后续编码像素提供更好的参考。
在一示例性的实施例中提供了一种基于神经网络的环路滤波NNLF方案,使用的模型(也可称为网络模型)使用如图3A所示的滤波网络,文中将该NNLF记为NNLF1,将执行NNLF1的滤波器称为NNLF1滤波器。如图所示,该滤波网络的骨干网络(backbone)包括多个依次连接的残差块(ResBlock),还包括卷积层(图中用Conv表示)、激活函数层(如图中的ReLU)、合并(concat)层(图中用Cat表示),及像素重组层(图中用PixelShuffle表示)。每个残差块的结构如图3B所示,包括依次连接的卷积核大小为1×1的卷积层、ReLU层、卷积核大小为1×1的卷积层和卷积核大小为3×3的卷积层。
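作为理解上述网络结构的辅助,图3B所示残差块的数据流可以用如下NumPy草图示意(仅为结构性示意:假设残差块带有到其输入的跳连接,1×1卷积按逐像素线性变换实现,权重为随机示例值,并非实际训练所得的滤波网络):

```python
import numpy as np

def conv1x1(x, w):
    # x: (C_in, H, W),w: (C_out, C_in) —— 1×1卷积等价于逐像素线性变换
    return np.einsum('oc,chw->ohw', w, x)

def conv3x3(x, w):
    # x: (C_in, H, W),w: (C_out, C_in, 3, 3),零填充保持空间分辨率不变
    c_in, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((w.shape[0], h, wd))
    for i in range(3):
        for j in range(3):
            out += np.einsum('oc,chw->ohw', w[:, :, i, j],
                             xp[:, i:i + h, j:j + wd])
    return out

def res_block(x, w1, w2, w3):
    # 依次:1×1卷积 → ReLU → 1×1卷积 → 3×3卷积,最后与输入相加(残差连接)
    y = conv1x1(x, w1)
    y = np.maximum(y, 0.0)   # ReLU激活
    y = conv1x1(y, w2)
    y = conv3x3(y, w3)
    return x + y
```

该草图仅用于说明“1×1卷积→ReLU→1×1卷积→3×3卷积”的数据流,输入与输出的空间分辨率保持不变。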
如图3A所示,NNLF1滤波器的输入包括重建图像(rec_YUV)的亮度信息(即Y分量)和色度信息(即U分量和V分量),以及多种辅助信息,例如预测图像(pred_YUV)的亮度信息和色度信息、QP信息、帧类型信息。QP信息包括编码配置文件中默认的基线量化参数(BaseQP:Base Quantization Parameter)和当前切片的切片量化参数(SliceQP:Slice Quantization Parameter),帧类型信息包括切片类型(SliceType),即当前slice所属的帧的类型。模型的输出为经NNLF1滤波后的滤波图像(output_YUV),NNLF1滤波器输出的滤波图像也可以作为输入到后续滤波器的重建图像。
NNLF1使用一个模型对重建图像的YUV分量(rec_YUV)进行滤波,输出滤波图像的YUV分量(out_YUV),如图3C所示,图中略去了预测图像的YUV分量等辅助输入信息。该模型的滤波网络在输入的重建图像到输出的滤波图像之间存在一条跳连接支路,如图3A所示。
另一示例性的实施例提供了另一种NNLF方案,记为NNLF2。NNLF2需要分别训练两个模型,一个模型用于对重建图像的亮度分量进行滤波,另一个模型用于对重建图像的两个色度分量进行滤波,该两个模型可以使用相同的滤波网络,输入NNLF2滤波器的重建图像到NNLF2滤波器输出的滤波图像之间也存在一条跳连接支路。如图4A所示,该滤波网络的骨干网络包括依次连接的多个带注意力机制的残差块(AttRes Block)、用于实现特征映射的卷积层(Conv 3×3)以及重组层(Shuffle)。每个带注意力机制的残差块的结构如图4B所示,包括依次连接的卷积层(Conv 3×3)、激活层(PReLU)、卷积层(Conv 3×3)和注意力层(Attention),M表示特征图的数量,N代表一维中的样本数。
NNLF2用于对重建图像的亮度分量滤波的模型一如图4C所示,其输入信息包括重建图像的亮度分量(rec_Y),输出为滤波图像的亮度分量(out_Y),图中略去了预测图像的亮度分量等辅助输入信息。NNLF2用于对重建图像的两个色度分量滤波的模型二如图4D所示,其输入信息包括重建图像的两个色度分量(rec_UV),及作为辅助输入信息的重建图像的亮度分量(rec_Y),模型二的输出是滤波图像的两个色度分量(out_UV)。模型一和模型二还可以包括其他的辅助输入信息,如QP信息、块划分图像、去块滤波边界强度信息等。
上述NNLF1方案和NNLF2方案在基于神经网络的视频编码(NNVC:Neural Network based Video Coding)中可以用基于神经网络的通用软件(NCS:Neural Network based Common Software)实现,作为NNVC的参考软件测试平台中的基线工具即基线NNLF。
如上所述,在NNLF1和NNLF2中,对于输入神经网络的重建图像的亮度分量和色度分量,NNLF1采取了联合输入的方式,如图3C所示,仅需训练1个网络模型;NNLF2中,重建图像的亮度分量和色度分量是分开输入的,如图4C所示,需要训练2个模型。对于两个色度分量即U分量和V分量,NNLF1和NNLF2都采取联合输入的方式即存在绑定关系,如图5所示,将堆叠好的特征图像送入模型进行预测和输出时,每帧图像中的3个分量均是按照Y、U、V的顺序排列,U分量在前,V分量在后。目前尚缺少研究U分量和V分量输入顺序调整对神经网络影响的方案。
本公开实施例提出了一种对色度信息进行调整的方法,通过对输入NNLF滤波器的色度信息进行调整,如将U分量和V分量的顺序互换,进一步优化NNLF滤波器的编码性能。
本公开一实施例提供了一种基于神经网络的环路滤波NNLF方法,应用于视频编码装置,如图6所示,所述方法包括:
S110,计算使用第一模式对输入的重建图像进行NNLF的率失真代价,及使用第二模式对所述重建图像进行NNLF的率失真代价;
S120,确定使用所述第一模式和所述第二模式中率失真代价最小的一种模式对所述重建图像进行NNLF;
其中,所述第一模式和第二模式均为设定的NNLF模式,所述第二模式包括色度信息调整模式,所述色度信息调整模式相对所述第一模式增加了在滤波前对输入的色度信息进行指定调整的处理。
经测试证明,对输入的色度信息进行调整,可以影响编码性能。本实施例根据率失真代价可以从对色度信息调整的第二模式和不对色度信息调整的第一模式选择一种最佳的模式,从而提升编码性能。
在本公开一示例性的实施例中,所述对色度信息进行指定调整,包括以下调整方式中的任意一种或多种:
将重建图像的两个色度分量的顺序互换,例如,将U分量在前、V分量在后的排序顺序,调整为V分量在前、U分量在后的排序顺序;
计算重建图像的两个色度分量的加权平均值和平方误差值,将所述加权平均值和平方误差值作为NNLF模式下输入的色度信息。
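上述两种调整方式可用如下NumPy草图示意(其中将“平方误差值”理解为两个色度分量差值的逐像素平方,权重取0.5,均为示例性假设):

```python
import numpy as np

def swap_uv(u, v):
    # 方式一:将两个色度分量的输入顺序互换(u、v为同尺寸的二维数组)
    return v, u

def fuse_uv(u, v, wu=0.5, wv=0.5):
    # 方式二:计算两个色度分量的加权平均值与平方误差值,
    # 将二者作为NNLF模式下输入的色度信息(权重wu、wv为示例取值)
    avg = wu * u + wv * v
    sqerr = (u - v) ** 2
    return avg, sqerr
```

例如对u=[[1,3]]、v=[[3,1]],fuse_uv给出加权平均[[2,2]]和平方误差[[4,4]]。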
在本公开一示例性的实施例中,所述重建图像为当前帧或当前切片(slice)或当前块的重建图像,但也可以是其他编码单位的重建图像。本文中输入NNLF滤波器进行滤波的重建图像可以属于图像级(包括帧、切片)、块级等不同级别的编码单位。
在本公开一示例性的实施例中,所述第一模式和所述第二模式下使用的模型的网络结构相同或不同。
在本公开一示例性的实施例中,所述第二模式还包括色度信息融合模式,所述色度信息融合模式下使用的模型在训练时使用的训练数据包括对原始数据中重建图像的色度信息进行所述指定调整后得到的扩充数据,或者包括所述原始数据和扩充数据;
所述第一模式使用的网络模型使用所述原始数据训练得到(即不使用经色度信息调整后得到的扩充数据作为训练数据)。
在本公开一示例性的实施例中,所述第二模式还包括色度信息调整及融合模式;
所述色度信息调整及融合模式相对所述第一模式增加了在滤波前对输入的色度信息进行指定调整的处理,且所述色度信息调整及融合模式下使用的模型在训练时使用的训练数据包括对原始数据中重建图像的色度信息进行所述指定调整后得到的扩充数据,或者包括所述原始数据和扩充数据;
所述第一模式使用的网络模型使用所述原始数据训练得到。
在本公开一示例性的实施例中,所述第一模式有一种,所述第二模式有一种或多种;
所述计算使用第一模式对输入的重建图像进行NNLF的率失真代价cost1,包括:获取使用第一模式对所述重建图像进行NNLF后输出的第一滤波图像,根据所述第一滤波图像与相应原始图像的差异计算cost1;
所述计算使用第二模式对所述重建图像进行NNLF的率失真代价cost2,包括:对每一种所述第二模式,获取使用该第二模式对所述重建图像进行NNLF后输出的一第二滤波图像,根据该第二滤波图像与所述原始图像的差异计算该第二模式的cost2。
在本实施例的一示例中,所述第二模式有多种;所述第一模式和第二模式中率失真代价最小的一种模式是计算得到的cost1和多个cost2中的最小值对应的一种模式。
在本实施例的一示例中,所述差异用平方误差和(SSD)表示。在其他示例中,所述差异也可以用均方误差(MSE:Mean Squared Error)、平均绝对误差(MAE:Mean Absolute Error)等其他的指标来表示,本公开对此不做限制。本公开的其他实施例同此。
在本公开一示例性的实施例中,所述方法还包括:将使用率失真代价最小的该模式对所述重建图像进行NNLF时得到的滤波图像,作为对所述重建图像进行NNLF后输出的滤波图像。
在本公开一示例性的实施例中,编码端和/或解码端用于执行NNLF处理的NNLF滤波器设置在去块滤波器或样值自适应补偿滤波器之后,及在自适应修正滤波器之前。在本实施例的一示例中,滤波器单元(或称环路滤波模块,参见图1B和图1C)的结构如图7所示,图中的DBF表示去块滤波器,SAO表示样值自适应补偿滤波器,ALF表示自适应修正滤波器。NNLF-A表示使用第一模式的NNLF滤波器,NNLF-B表示使用第二模式如色度信息调整模式的NNLF滤波器,也可以称为色度信息调整模块(Chroma Adjustment,CA)。其中NNLF-A滤波器可以与前述NNLF1、NNLF2等不进行色度信息调整的NNLF滤波器相同。
虽然图7中表示为NNLF-A滤波器和NNLF-B滤波器,但是在实际实现时可以使用一个模型,NNLF-B滤波器可视为增加了对色度信息调整的NNLF-A滤波器。基于该模型,在不对输入的色度信息进行调整(如互换UV顺序)的情况下使用该模型进行环路滤波(即使用第一模式对重建图像进行NNLF,也可说是使用NNLF-A滤波器对重建图像进行NNLF),并根据输出的第一滤波图像与原始图像的差异如平方误差和(SSD)计算得到一率失真代价cost1;在对输入的色度信息进行调整的情况下,使用该模型进行一次环路滤波(即使用第二模式对重建图像进行NNLF,也可说是使用NNLF-B滤波器对重建图像进行NNLF),并根据输出的第二滤波图像与原始图像的差异计算得到一率失真代价cost2,就可以根据cost1和cost2的大小确定使用的NNLF模式。而使用的NNLF模式(即选中的NNLF模式)的信息用第一标志表示,编入码流供解码器读取。在解码端,通过解码第一标志确定编码端实际使用的NNLF模式,并使用确定的NNLF模式对输入的重建图像进行NNLF。
例如,在cost1<cost2的情况下,使用所述第一模式对所述重建图像进行NNLF;在cost2<cost1的情况下,使用所述第二模式对所述重建图像进行NNLF;在cost1=cost2的情况下,使用所述第一模式或第二模式对所述重建图像进行NNLF。
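上述基于率失真代价的模式决策逻辑可概括为如下Python草图(函数名与返回形式为示意性约定,两代价相等时本草图选择第一模式):

```python
def select_nnlf_mode(cost1, cost2):
    """根据率失真代价在第一模式与第二模式之间决策。

    返回 (选中模式, 第一标志的取值):
    cost2 < cost1 时选第二模式(进行色度信息调整,标志置1),
    否则选第一模式(不进行色度信息调整,标志置0)。
    """
    if cost2 < cost1:
        return 'mode2', 1
    return 'mode1', 0
```

例如select_nnlf_mode(10, 5)返回('mode2', 1),表示选用进行色度信息调整的第二模式并编码相应的第一标志。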
在环路滤波时,DBF、SAO、ALF中的部分或全部可以不启用。此外,NNLF滤波器的部署位置并不局限于图中所示的位置,容易理解,本公开NNLF方法的实现并不受限于其部署位置。此外,滤波器单元中的滤波器也不局限于图7所示的滤波器,可以有更多、更少的滤波器,或者使用其他类型的滤波器。
本公开一实施例还提供了一种视频编码方法,应用于视频编码装置,包括:对重建图像进行基于神经网络的环路滤波NNLF时,如图8所示,执行以下处理:
步骤S210,在NNLF允许色度信息调整的情况下,按照本公开应用于编码端的任一NNLF方法对所述重建图像进行NNLF;
步骤S220,编码所述重建图像的第一标志,所述第一标志包含对所述重建图像进行NNLF时使用的NNLF模式的信息。
其中,所述第一模式和第二模式均为设定的NNLF模式,所述第二模式包括色度信息调整模式,所述色度信息调整模式相对所述第一模式增加了在滤波前对输入的色度信息进行指定调整的处理。
本实施例根据率失真代价可以从对色度信息调整的第二模式和不对色度信息调整的第一模式选择一种最佳的模式,并将选择的模式信息编入码流,从而提升编码性能。
在本公开一示例性的实施例中,所述第一标志为图像级语法元素或者块级语法元素。
在本公开一示例性的实施例中,在满足以下一种或多种条件的情况下,确定NNLF允许色度信息调整:
解码序列级的色度信息调整允许标志,根据该色度信息调整允许标志确定NNLF允许色度信息调整;
解码图像级的色度信息调整允许标志,根据该色度信息调整允许标志确定NNLF允许色度信息调整。
在本公开一示例性的实施例中,所述方法还包括:确定NNLF不允许色度信息调整的情况下,使用所述第一模式对所述重建图像进行NNLF,并跳过对所述色度信息调整允许标志的编码。
在本公开一示例性的实施例中,所述第二模式为一种色度信息调整模式;所述第一标志用于指示对所述重建图像进行NNLF时是否进行色度信息调整;
所述编码所述重建图像的第一标志,包括:在确定使用所述第一模式对所述重建图像进行NNLF的情况下,将所述第一标志置为指示不进行色度信息调整的值;在确定使用所述第二模式对所述重建图像进行NNLF的情况下,将所述第一标志置为指示进行色度信息调整的值。
在本公开一示例性的实施例中,所述第二模式有多种;所述第一标志用于指示对所述重建图像进行NNLF时是否使用所述第二模式;
所述编码所述重建图像的第一标志,包括:
在确定使用所述第一模式对所述重建图像进行NNLF的情况下,将所述第一标志置为指示不使用所述第二模式的值;
在确定使用所述第二模式对所述重建图像进行NNLF的情况下,将所述第一标志置为指示使用所述第二模式的值,并继续编码第二标志,所述第二标志包含率失真代价最小的一种第二模式的索引信息。
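在第二模式有多种的情况下,编码端对第一标志、第二标志的决策与写出过程可用如下Python草图示意(语法元素以(名称,取值)列表表示,仅为示意性接口,并非实际码流语法):

```python
def encode_mode_flags(cost1, cost2_list):
    """cost1为第一模式的率失真代价,cost2_list为各第二模式的率失真代价。

    返回需编码的语法元素列表:
    若某一第二模式的代价最小,则第一标志指示使用第二模式,
    并继续编码携带该第二模式索引的第二标志;否则仅编码第一标志。
    """
    best_idx = min(range(len(cost2_list)), key=lambda i: cost2_list[i])
    if cost2_list[best_idx] < cost1:
        return [('first_flag', 1), ('second_flag', best_idx)]
    return [('first_flag', 0)]   # 使用第一模式,跳过第二标志的编码
```

例如encode_mode_flags(10, [12, 7, 9])会选出索引为1、代价为7的第二模式,并写出第一标志与第二标志。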
本公开一实施例提供了一种应用于编码端的视频编码方法,主要涉及NNLF的处理,本实施例的第二模式为一种色度信息调整模式。
对重建图像进行环路滤波时,如滤波器单元有多个滤波器,则按照规定的滤波器顺序进行处理,当得到输入NNLF滤波器的数据如重建图像(可以是其他滤波器输出的滤波图像)时,执行以下处理:
步骤a),根据序列级的色度信息调整允许标志ca_enable_flag判断当前序列下NNLF是否允许色度信息调整;若ca_enable_flag为“1”,则对当前序列尝试进行色度信息调整处理,跳至步骤b);若ca_enable_flag为“0”,则当前序列下NNLF不允许色度信息调整,此时使用第一模式对重建图像进行NNLF,跳过对第一标志的编码,结束;
步骤b),对于当前序列的当前帧的重建图像,先使用第一模式进行NNLF,即将原始的输入信息输入NNLF的模型进行滤波,从模型的输出得到第一滤波图像;
步骤c),再使用第二模式(本实施例为色度信息调整模式)进行NNLF,即将输入的重建图像的U分量和V分量的顺序互换,然后输入NNLF的模型进行滤波,从模型的输出得到第二滤波图像;
步骤d),根据第一滤波图像与原始图像的差异计算率失真代价C_NNLF,根据第二滤波图像与原始图像的差异计算率失真代价C_CA;比较两个率失真代价,如果C_CA<C_NNLF,则确定使用第二模式对重建图像进行NNLF,将第二滤波图像作为对重建图像进行NNLF滤波后最终输出的滤波图像;如果C_CA≥C_NNLF,则确定使用第一模式对重建图像进行NNLF,将第一滤波图像作为对重建图像进行NNLF滤波后最终输出的滤波图像。
本实施例的率失真代价的计算公式为:
cost=Wy*SSD(Y)+Wu*SSD(U)+Wv*SSD(V)
其中SSD(*)表示对于某颜色分量求SSD;Wy,Wu,Wv分别表示Y分量、U分量和V分量的SSD的权重值,如可以取10:1:1或8:1:1等。
其中,SSD的计算公式如下:
SSD=∑_{x=0}^{M-1}∑_{y=0}^{N-1}(rec(x,y)-org(x,y))^2
M表示当前帧重建图像的长度、N表示当前帧重建图像的宽度,rec(x,y)和org(x,y)分别表示重建图像和原始图像在像素点(x,y)处的像素值。
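上述加权SSD形式的率失真代价可按如下NumPy草图计算(权重比取文中示例的10:1:1,函数接口为示意性约定):

```python
import numpy as np

def ssd(rec, org):
    # 平方误差和:对应像素差值的平方在整幅图像上求和
    return float(np.sum((rec.astype(np.float64) - org.astype(np.float64)) ** 2))

def rd_cost(rec_yuv, org_yuv, weights=(10.0, 1.0, 1.0)):
    # rec_yuv/org_yuv为(Y, U, V)三个二维数组组成的元组,
    # cost = Wy*SSD(Y) + Wu*SSD(U) + Wv*SSD(V)
    wy, wu, wv = weights
    return (wy * ssd(rec_yuv[0], org_yuv[0]) +
            wu * ssd(rec_yuv[1], org_yuv[1]) +
            wv * ssd(rec_yuv[2], org_yuv[2]))
```

对C_NNLF与C_CA分别调用rd_cost,即可按步骤d)进行比较决策。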
步骤e)根据当前帧使用的模式(即选中的模式)编码第一标志以表示是否进行色度信息调整,此时该第一标志也可以称为色度信息调整使用标志picture_ca_enable_flag,用于表示对重建图像进行NNLF时是否需要进行色度信息调整;
若当前帧已完成处理,则加载下一帧重建图像按相同的方式进行处理。
本实施例是以当前帧的重建图像为单位进行NNLF处理,在其他实施例中,也可以基于当前帧中的块(如CTU)、切片等其他编码单位进行NNLF处理。
如图12A所示,对于某一NNLF滤波器(即图中的NNLF模型),使用第一模式时,其输入信息的排列顺序可以为{recY,recU,recV,predY,predU,predV,baseQP,sliceQP,slicetype,…},其输出信息的排列顺序可以为{cnnY,cnnU,cnnV},其中rec表示重建图像,pred表示预测图像,cnn表示输出的滤波图像。
当使用进行色度信息调整的第二模式时,滤波器输入信息的排列顺序调整为{recY,recV,recU,predY,predV,predU,baseQP,sliceQP,slicetype,…},其输出信息的排列顺序调整为{cnnY,cnnV,cnnU},如图12B所示。
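两种模式下模型输入、输出信息的排列顺序可用如下Python草图示意(通道名与文中图12A、图12B的记号一致):

```python
def nnlf_io_order(chroma_adjust=False):
    """返回(输入信息排列, 输出信息排列)。

    chroma_adjust为True时对应进行色度信息调整的第二模式:
    U、V相关通道在输入端和输出端均互换顺序。
    """
    if not chroma_adjust:
        inputs = ['recY', 'recU', 'recV', 'predY', 'predU', 'predV',
                  'baseQP', 'sliceQP', 'slicetype']
        outputs = ['cnnY', 'cnnU', 'cnnV']
    else:
        inputs = ['recY', 'recV', 'recU', 'predY', 'predV', 'predU',
                  'baseQP', 'sliceQP', 'slicetype']
        outputs = ['cnnY', 'cnnV', 'cnnU']
    return inputs, outputs
```

由此可见,两种模式共用同一网络,仅输入输出通道的堆叠顺序不同。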
本实施例通过调整色度信息中U分量和V分量的输入顺序,对NNLF的输入的泛化性进行了探索,可进一步提升神经网络的滤波性能。同时仅需编码很少比特的标志作为控制开关,对解码复杂度基本没有影响。
本实施例对于图像中的YUV三个分量,是通过联合的率失真代价来决策,在其他实施例中,也可尝试更精细化的处理,对每一个分量,分别计算在不同模式下的率失真代价,选择使用率失真代价最小的模式。
本公开一实施例还提供了一种基于神经网络的环路滤波NNLF方法,应用于视频解码装置,如图9所示,所述方法包括:
步骤S310,解码重建图像的第一标志,所述第一标志包含对所述重建图像进行NNLF时使用的NNLF模式的信息;
步骤S320,根据所述第一标志确定对所述重建图像进行NNLF时使用的NNLF模式,根据确定的NNLF模式对所述重建图像进行NNLF;
其中,所述NNLF模式包括第一模式和第二模式,所述第二模式包括色度信息调整模式,所述色度信息调整模式相对所述第一模式增加了在滤波前对输入的色度信息进行指定调整的处理。
本实施例基于神经网络的环路滤波方法,通过第一标志,从进行色度信息调整和不进行色度信息调整的两种模式中选择一种较优的模式,可以增强NNLF的滤波效果,提升解码的图像的质量。
在本公开一示例性的实施例中,所述对色度信息进行指定调整,包括以下调整方式中的任意一种或多种:
将重建图像的两个色度分量的顺序互换;
计算重建图像的两个色度分量的加权平均值和平方误差值,将所述加权平均值和平方误差值作为NNLF模式下输入的色度信息。
在本公开一示例性的实施例中,所述重建图像为当前帧或当前切片或当前块的重建图像。
在本公开一示例性的实施例中,所述第一标志为图像级语法元素或者块级语法元素。
在本公开一示例性的实施例中,所述第二模式还包括色度信息融合模式,所述色度信息融合模式下使用的模型在训练时使用的训练数据包括对原始数据中重建图像的色度信息进行所述指定调整后得到的扩充数据,或者包括所述原始数据和扩充数据;
所述第一模式使用的模型使用所述原始数据训练得到。
在本公开一示例性的实施例中,所述第二模式还包括色度信息调整及融合模式;
所述色度信息调整及融合模式相对所述第一模式增加了在滤波前对输入的色度信息进行指定调整的处理,且所述色度信息调整及融合模式下使用的模型在训练时使用的训练数据包括对原始数据中重建图像的色度信息进行所述指定调整后得到的扩充数据,或者包括所述原始数据和扩充数据;
所述第一模式使用的模型使用所述原始数据训练得到。
在本公开一示例性的实施例中,第一模式有一种,第二模式有一种或多种;第一模式和第二模式下使用的模型的网络结构相同或不同。
在本公开一示例性的实施例中,所述第二模式为一种色度信息调整模式;所述第一标志用于指示对所述重建图像进行NNLF时是否进行色度信息调整;
根据所述第一标志确定对所述重建图像进行NNLF时使用的NNLF模式,包括:在所述第一标志指示进行色度信息调整时,确定对所述重建图像进行NNLF时使用所述第二模式,在所述第一标志指示不进行色度信息调整时,确定对所述重建图像进行NNLF时使用所述第一模式。
在本实施例的一示例中,图像头定义如下:
(图像头语法表以图片形式给出,其中在ca_enable_flag为1时包含图像级语法元素picture_ca_enable_flag)
表中的ca_enable_flag为序列级色度信息调整允许标志,ca_enable_flag为1时,定义以下语义:picture_ca_enable_flag,表示图像级的色度信息调整使用标志,也即上述的第一标志,picture_ca_enable_flag为1时,指示对重建图像进行NNLF时进行色度信息调整,也即使用第二模式(本实施例为色度信息调整模式);picture_ca_enable_flag为0时,指示对重建图像进行NNLF时不进行色度信息调整,也即使用第一模式。在ca_enable_flag为0时,跳过对picture_ca_enable_flag的解码和编码。
在本公开一示例性的实施例中,所述第二模式有多种;所述第一标志用于指示对所述重建图像进行NNLF时是否使用所述第二模式;
所述根据所述第一标志确定对所述重建图像进行NNLF时使用的NNLF模式,包括:
在所述第一标志指示不使用所述第二模式时,确定对所述重建图像进行NNLF时使用所述第一模式;
在所述第一标志指示使用所述第二模式时,继续解码第二标志,所述第二标志包含应使用的一种第二模式的索引信息;及,根据所述第二标志确定对所述重建图像进行NNLF时使用该第二模式。
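解码端对第一标志、第二标志的解析流程可概括为如下Python草图(read_flag、read_index为示意性的码流读取接口,并非任何实际解码器的API):

```python
def parse_nnlf_mode(ca_enable_flag, read_flag, read_index):
    """ca_enable_flag为序列级色度信息调整允许标志;
    read_flag()/read_index()模拟从码流读取语法元素。

    返回 (NNLF模式, 第二模式索引或None)。
    """
    if ca_enable_flag == 0:
        # 不允许色度信息调整:跳过第一标志的解码,直接使用第一模式
        return ('mode1', None)
    first_flag = read_flag()
    if first_flag == 0:
        return ('mode1', None)
    idx = read_index()   # 第二标志:应使用的第二模式的索引信息
    return ('mode2', idx)
```

该草图与编码端的标志写出逻辑一一对应,保证两端确定相同的NNLF模式。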
本公开一实施例还提供了一种视频解码方法,应用于视频解码装置,包括:对重建图像进行基于神经网络的环路滤波NNLF时,如图10所示,执行以下处理:
步骤S410,确定NNLF是否允许色度信息调整;
步骤S420,在NNLF允许色度信息调整的情况下,按照本公开应用于解码端的任一实施例所述的NNLF方法对所述重建图像进行NNLF。
本实施例视频解码方法,通过第一标志,从进行色度信息调整和不进行色度信息调整的两种模式中选择一种较优的模式,可以增强NNLF的滤波效果,提升解码的图像的质量。
在本公开一示例性的实施例中,在满足以下一种或多种条件的情况下,确定NNLF允许色度信息调整:
解码序列级的色度信息调整允许标志,根据该色度信息调整允许标志确定NNLF允许色度信息调整;
解码图像级的色度信息调整允许标志,根据该色度信息调整允许标志确定NNLF允许色度信息调整。
在使用序列级的色度信息调整允许标志的一示例中,视频序列的序列头如下表所示:
(序列头语法表以图片形式给出,其中包含序列级语法元素ca_enable_flag)
表中的ca_enable_flag即序列级的色度信息调整允许标志。
在本公开一示例性的实施例中,所述方法还包括:在NNLF不允许色度信息调整的情况下,跳过对所述第一标志的解码,使用所述第一模式对所述重建图像进行NNLF。
本实施例选择NNLF1基线工具作为对比,在NNLF1的基础上,对帧间编码帧(即非I帧)进行包含色度信息调整模式的模式选择处理,在通用测试条件随机接入(Random Access)和低延迟(Low Delay)B配置下,对联合视频专家组(JVET:Joint Video Experts Team)规定的通用序列进行测试,对比的锚(anchor)为NNLF1,结果如表1和表2所示。
表1:Random Access配置下本实施例对比基线NNLF1的性能
(表1测试数据以图片形式给出)
表2 Low Delay B配置下本实施例对比基线NNLF1的性能
(表2测试数据以图片形式给出)
表中的参数含义如下:
EncT:Encoding Time,编码时间,10X%代表集成本实施例技术后,与未集成前相比,编码时间为10X%,这意味着编码时间增加了X%。
DecT:Decoding Time,解码时间,10X%代表集成本实施例技术后,与未集成前相比,解码时间为10X%,这意味着解码时间增加了X%。
Class A1和Class A2是分辨率为3840x2160的测试视频序列,Class B为1920x1080分辨率的测试序列,Class C为832x480,Class D为416x240,Class E为1280x720;Class F为若干个不同分辨率的屏幕内容序列(Screen content)。
Y,U,V是颜色三分量,Y,U,V所在列表示测试结果在Y,U,V上的BD-rate(Bjøntegaard Delta rate)指标,值越小表示编码性能越好。
分析两表的数据可以看到,通过引入色度信息调整的优化方法,能够在NNLF1的基础上,进一步提升编码性能,尤其是在色度分量上。
对帧内编码帧(I帧)也可以使用本实施例方法进行NNLF模式选择。
本公开一实施例还提供了一种码流,其中,所述码流通过本公开任一实施例所述的视频编码方法生成。
本公开一实施例还提供了一种基于神经网络的环路滤波器,如图11所示,包括处理器以及存储有计算机程序的存储器,其中,所述处理器执行所述计算机程序时能够实现本公开任一实施例所述的基于神经网络的环路滤波方法。如图所示,处理器和存储器通过系统总线相连,该环路滤波器还可以包括内存、网络接口等其他部件。
本公开一实施例还提供了一种视频解码装置,参见图11,包括处理器以及存储有计算机程序的存储器,其中,所述处理器执行所述计算机程序时能实现如本公开任一实施例所述的视频解码方法。
本公开一实施例还提供了一种视频编码装置,参见图11,包括处理器以及存储有计算机程序的存储器,其中,所述处理器执行所述计算机程序时能够实现如本公开任一实施例所述的视频编码方法。
本公开上述实施例的处理器可以是通用处理器,包括中央处理器(CPU)、网络处理器(Network Processor,简称NP)、微处理器等等,也可以是其他常规的处理器等;所述处理器还可以是数字信号处理器(DSP)、专用集成电路(ASIC)、现场可编程门阵列(FPGA)、离散逻辑或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件,或其它等效集成或离散的逻辑电路,也可以是上述器件的组合。即上述实施例的处理器可以是实现本公开实施例中公开的各方法、步骤及逻辑框图的任何处理器件或器件组合。如果部分地以软件来实施本公开实施例,那么可将用于软件的指令存储在合适的非易失性计算机可读存储媒体中,且可使用一个或多个处理器在硬件中执行所述指令从而实施本公开实施例的方法。本文中所使用的术语“处理器”可指上述结构或适合于实施本文中所描述的技术的任意其它结构。
本公开一实施例还提供了一种视频编解码系统,参见图1A,包括本公开任一实施例所述的视频编码装置和本公开任一实施例所述的视频解码装置。
本公开一实施例还提供了一种非瞬态计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,其中,所述计算机程序被处理器执行时能够实现如本公开任一实施例所述的基于神经网络的环路滤波方法,或实现如本公开任一实施例所述的视频解码方法,或实现如本公开任一实施例所述的视频编码方法。
在以上一个或多个示例性实施例中,所描述的功能可以硬件、软件、固件或其任一组合来实施。如果以软件实施,那么功能可作为一个或多个指令或代码存储在计算机可读介质上或经由计算机可读介质传输,且由基于硬件的处理单元执行。计算机可读介质可包含对应于例如数据存储介质等有形介质的计算机可读存储介质,或包含促进计算机程序例如根据通信协议从一处传送到另一处的任何介质的通信介质。以此方式,计算机可读介质通常可对应于非暂时性的有形计算机可读存储介质或例如信号或载波等通信介质。数据存储介质可为可由一个或多个计算机或者一个或多个处理器存取以检索用于实施本公开中描述的技术的指令、代码和/或数据结构的任何可用介质。计算机程序产品可包含计算机可读介质。
举例来说且并非限制,此类计算机可读存储介质可包括RAM、ROM、EEPROM、CD-ROM或其它光盘存储装置、磁盘存储装置或其它磁性存储装置、快闪存储器或可用来以指令或数据结构的形式存储所要程序代码且可由计算机存取的任何其它介质。而且,还可以将任何连接称作计算机可读介质。举例来说,如果使用同轴电缆、光纤电缆、双绞线、数字订户线(DSL)或例如红外线、无线电及微波等无线技术从网站、服务器或其它远程源传输指令,则同轴电缆、光纤电缆、双绞线、DSL或例如红外线、无线电及微波等无线技术包含于介质的定义中。然而应了解,计算机可读存储介质和数据存储介质不包含连接、载波、信号或其它瞬时(瞬态)介质,而是针对非瞬时有形存储介质。如本文中所使用,磁盘及光盘包含压缩光盘(CD)、激光光盘、光学光盘、数字多功能光盘(DVD)、软磁盘或蓝光光盘等,其中磁盘通常以磁性方式再生数据,而光盘使用激光以光学方式再生数据。上文的组合也应包含在计算机可读介质的范围内。

Claims (30)

  1. 一种基于神经网络的环路滤波NNLF方法,应用于视频解码装置,所述方法包括:
    解码重建图像的第一标志,所述第一标志包含对所述重建图像进行NNLF时使用的NNLF模式的信息;
    根据所述第一标志确定对所述重建图像进行NNLF时使用的NNLF模式,根据确定的NNLF模式对所述重建图像进行NNLF;
    其中,所述NNLF模式包括第一模式和第二模式,所述第二模式包括色度信息调整模式,所述色度信息调整模式相对所述第一模式增加了在滤波前对输入的色度信息进行指定调整的处理。
  2. 如权利要求1所述的方法,其特征在于:
    所述对色度信息进行指定调整,包括以下调整方式中的任意一种或多种:
    将重建图像的两个色度分量的顺序互换;
    计算重建图像的两个色度分量的加权平均值和平方误差值,将所述加权平均值和平方误差值作为NNLF模式下输入的色度信息。
  3. 如权利要求1所述的方法,其特征在于:
    所述重建图像为当前帧或当前切片或当前块的重建图像;所述第一标志为图像级语法元素或者块级语法元素。
  4. 如权利要求1或2所述的方法,其特征在于:
    所述第二模式还包括色度信息融合模式,所述色度信息融合模式下使用的模型在训练时使用的训练数据包括对原始数据中重建图像的色度信息进行所述指定调整后得到的扩充数据,或者包括所述原始数据和扩充数据;
    所述第一模式使用的模型使用所述原始数据训练得到。
  5. 如权利要求1或2所述的方法,其特征在于:
    所述第二模式还包括色度信息调整及融合模式;
    所述色度信息调整及融合模式相对所述第一模式增加了在滤波前对输入的色度信息进行指定调整的处理,且所述色度信息调整及融合模式下使用的模型在训练时使用的训练数据包括对原始数据中重建图像的色度信息进行所述指定调整后得到的扩充数据,或者包括所述原始数据和扩充数据;
    所述第一模式使用的模型使用所述原始数据训练得到。
  6. 如权利要求1所述的方法,其特征在于:
    所述第一模式有一种,所述第二模式有一种或多种;所述第一模式和所述第二模式下使用的模型的网络结构相同或不同。
  7. 如权利要求1所述的方法,其特征在于:
    所述第二模式为一种色度信息调整模式;所述第一标志用于指示对所述重建图像进行NNLF时是否进行色度信息调整;
    根据所述第一标志确定对所述重建图像进行NNLF时使用的NNLF模式,包括:在所述第一标志指示进行色度信息调整时,确定对所述重建图像进行NNLF时使用所述第二模式,在所述第一标志指示不进行色度信息调整时,确定对所述重建图像进行NNLF时使用所述第一模式。
  8. 如权利要求1所述的方法,其特征在于:
    所述第二模式有多种;所述第一标志用于指示对所述重建图像进行NNLF时是否使用所述第二模式;
    所述根据所述第一标志确定对所述重建图像进行NNLF时使用的NNLF模式,包括:
    在所述第一标志指示不使用所述第二模式时,确定对所述重建图像进行NNLF时使用所述第一模式;
    在所述第一标志指示使用所述第二模式时,继续解码第二标志,所述第二标志包含应使用的一种第二模式的索引信息;及,根据所述第二标志确定对所述重建图像进行NNLF时使用该第二模式。
  9. 一种视频解码方法,应用于视频解码装置,包括:对重建图像进行基于神经网络的环路滤波NNLF时,执行以下处理:
    在NNLF允许色度信息调整的情况下,按照如权利要求1至8中任一所述的方法对所述重建图像进行NNLF。
  10. 如权利要求9所述的方法,其特征在于:
    在满足以下一种或多种条件的情况下,确定NNLF允许色度信息调整:
    解码序列级的色度信息调整允许标志,根据该色度信息调整允许标志确定NNLF允许色度信息调整;
    解码图像级的色度信息调整允许标志,根据该色度信息调整允许标志确定NNLF允许色度信息调整。
  11. 如权利要求9所述的方法,其特征在于:
    所述方法还包括:在NNLF不允许色度信息调整的情况下,跳过对所述第一标志的解码,使用所述第一模式对所述重建图像进行NNLF。
  12. 一种基于神经网络的环路滤波NNLF方法,应用于视频编码装置,所述方法包括:
    计算使用第一模式对输入的重建图像进行NNLF的率失真代价,及使用第二模式对所述重建图像进行NNLF的率失真代价;
    确定使用所述第一模式和所述第二模式中率失真代价最小的一种模式对所述重建图像进行NNLF;
    其中,所述第一模式和第二模式均为设定的NNLF模式,所述第二模式包括色度信息调整模式,所述色度信息调整模式相对所述第一模式增加了在滤波前对输入的色度信息进行指定调整的处理。
  13. 如权利要求12所述的方法,其特征在于:
    所述对色度信息进行指定调整,包括以下调整方式中的任意一种或多种:
    将重建图像的两个色度分量的顺序互换;
    计算重建图像的两个色度分量的加权平均值和平方误差值,将所述加权平均值和平方误差值作为NNLF模式下输入的色度信息。
  14. 如权利要求12所述的方法,其特征在于:
    所述重建图像为当前帧或当前切片或当前块的重建图像;所述第一模式和所述第二模式下使用的模型的网络结构相同或不同。
  15. 如权利要求12或13所述的方法,其特征在于:
    所述第二模式还包括色度信息融合模式,所述色度信息融合模式下使用的模型在训练时使用的训练数据包括对原始数据中重建图像的色度信息进行所述指定调整后得到的扩充数据,或者包括所述原始数据和扩充数据;
    所述第一模式使用的模型使用所述原始数据训练得到。
  16. 如权利要求12或13所述的方法,其特征在于:
    所述第二模式还包括色度信息调整及融合模式;
    所述色度信息调整及融合模式相对所述第一模式增加了在滤波前对输入的色度信息进行指定调整的处理,且所述色度信息调整及融合模式下使用的模型在训练时使用的训练数据包括对原始数据中重建图像的色度信息进行所述指定调整后得到的扩充数据,或者包括所述原始数据和扩充数据;
    所述第一模式使用的模型使用所述原始数据训练得到。
  17. 如权利要求12所述的方法,其特征在于:
    所述第一模式有一种,所述第二模式有一种或多种;
    所述计算使用第一模式对输入的重建图像进行NNLF的率失真代价cost1,包括:获取使用第一模式对所述重建图像进行NNLF后输出的第一滤波图像,根据所述第一滤波图像与相应原始图像的差异计算cost1;
    所述计算使用第二模式对所述重建图像进行NNLF的率失真代价cost2,包括:对每一种所述第二模式,获取使用该第二模式对所述重建图像进行NNLF后输出的一第二滤波图像,根据该第二滤波图像与所述原始图像的差异计算该第二模式的cost2。
  18. 如权利要求17所述的方法,其特征在于:
    所述方法还包括:将使用率失真代价最小的该模式对所述重建图像进行NNLF得到的滤波图像,作为对所述重建图像进行NNLF后输出的滤波图像。
  19. 一种视频编码方法,应用于视频编码装置,包括:对重建图像进行基于神经网络的环路滤波NNLF时,执行以下处理:
    在NNLF允许色度信息调整的情况下,按照如权利要求12至18中任一所述的方法对所述重建图像进行NNLF;
    编码所述重建图像的第一标志,所述第一标志包含对所述重建图像进行NNLF时使用的NNLF模式的信息。
  20. 如权利要求19所述的方法,其特征在于:
    所述第一标志为图像级语法元素或者块级语法元素。
  21. 如权利要求19所述的方法,其特征在于:
    在满足以下一种或多种条件的情况下,确定NNLF允许色度信息调整:
    解码序列级的色度信息调整允许标志,根据该色度信息调整允许标志确定NNLF允许色度信息调整;
    解码图像级的色度信息调整允许标志,根据该色度信息调整允许标志确定NNLF允许色度信息调整。
  22. 如权利要求19所述的方法,其特征在于:
    所述方法还包括:确定NNLF不允许色度信息调整的情况下,使用所述第一模式对所述重建图像进行NNLF,并跳过对所述色度信息调整允许标志的编码。
  23. 如权利要求19所述的方法,其特征在于:
    所述第二模式为一种色度信息调整模式;所述第一标志用于指示对所述重建图像进行NNLF时是否进行色度信息调整;
    所述编码所述重建图像的第一标志,包括:在确定使用所述第一模式对所述重建图像进行NNLF的情况下,将所述第一标志置为指示不进行色度信息调整的值;在确定使用所述第二模式对所述重建图像进行NNLF的情况下,将所述第一标志置为指示进行色度信息调整的值。
  24. 如权利要求19所述的方法,其特征在于:
    所述第二模式有多种;所述第一标志用于指示对所述重建图像进行NNLF时是否使用所述第二模式;
    所述编码所述重建图像的第一标志,包括:
    在确定使用所述第一模式对所述重建图像进行NNLF的情况下,将所述第一标志置为指示不使用所述第二模式的值;
    在确定使用所述第二模式对所述重建图像进行NNLF的情况下,将所述第一标志置为指示使用所述第二模式的值,并继续编码第二标志,所述第二标志包含率失真代价最小的一种第二模式的索引信息。
  25. 一种码流,其中,所述码流通过如权利要求19至24中任一所述的视频编码方法生成。
  26. 一种基于神经网络的环路滤波器,包括处理器以及存储有计算机程序的存储器,其中,所述处理器执行所述计算机程序时能够实现如权利要求1至8、12至18中任一所述的基于神经网络的环路滤波方法。
  27. 一种视频解码装置,包括处理器以及存储有计算机程序的存储器,其中,所述处理器执行所述计算机程序时能够实现如权利要求9至11中任一所述的视频解码方法。
  28. 一种视频编码装置,包括处理器以及存储有计算机程序的存储器,其中,所述处理器执行所述计算机程序时能够实现如权利要求19至24中任一所述的视频编码方法。
  29. 一种视频编解码系统,其中,包括如权利要求28所述的视频编码装置和如权利要求27所述的视频解码装置。
  30. 一种非瞬态计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,其中,所述计算机程序被处理器执行时能够实现如权利要求1至8、12至18中任一所述的基于神经网络的环路滤波方法,或实现如权利要求9至11中任一所述的视频解码方法,或实现如权利要求19至24中任一所述的视频编码方法。
PCT/CN2022/125229 2022-10-13 2022-10-13 基于神经网络的环路滤波、视频编解码方法、装置和系统 WO2024077574A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/125229 WO2024077574A1 (zh) 2022-10-13 2022-10-13 基于神经网络的环路滤波、视频编解码方法、装置和系统


Publications (1)

Publication Number Publication Date
WO2024077574A1 true WO2024077574A1 (zh) 2024-04-18

Family

ID=90668429

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/125229 WO2024077574A1 (zh) 2022-10-13 2022-10-13 基于神经网络的环路滤波、视频编解码方法、装置和系统

Country Status (1)

Country Link
WO (1) WO2024077574A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190273948A1 (en) * 2019-01-08 2019-09-05 Intel Corporation Method and system of neural network loop filtering for video coding
CN113489977A (zh) * 2021-07-02 2021-10-08 浙江大华技术股份有限公司 环路滤波方法、视频/图像编解码方法及相关装置
CN114208203A (zh) * 2019-09-20 2022-03-18 英特尔公司 基于分类器的卷积神经网络环路滤波器
US20220286695A1 (en) * 2021-03-04 2022-09-08 Lemon Inc. Neural Network-Based In-Loop Filter With Residual Scaling For Video Coding


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
L. WANG (TENCENT), S. LIN (TENCENT), X. XU (TENCENT), S. LIU (TENCENT), F. GALPIN (INTERDIGITAL): "EE1-1.4: Neural network based in-loop filter with 2 models", 27. JVET MEETING; 20220713 - 20220722; TELECONFERENCE; (THE JOINT VIDEO EXPLORATION TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ), 15 July 2022 (2022-07-15), XP030302839 *
L. WANG (TENCENT), S. LIN (TENCENT), X. XU (TENCENT), S. LIU (TENCENT), F. GALPIN (INTERDIGITAL): "EE1-1.5: Neural network based in-loop filter with a single model", 27. JVET MEETING; 20220713 - 20220722; TELECONFERENCE; (THE JOINT VIDEO EXPLORATION TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ), 15 July 2022 (2022-07-15), XP030302850 *

Similar Documents

Publication Publication Date Title
KR102288109B1 (ko) 비디오 압축에서의 양방향 예측
TWI782904B (zh) 合併用於視訊寫碼之用於多類別區塊之濾波器
US8995523B2 (en) Memory efficient context modeling
US9955186B2 (en) Block size decision for video coding
JP6266605B2 (ja) 映像コーディングにおけるロスレスコーディングモード及びパルスコード変調(pcm)モードのシグナリングの統一
TW201841503A (zh) 視頻寫碼中之內濾波旗標
TW202005399A (zh) 基於區塊之自適應迴路濾波器(alf)之設計及發信令
KR20170123632A (ko) 비디오 인코딩을 위한 적응적 모드 체킹 순서
TW201517599A (zh) 內部運動補償延伸
WO2016130318A1 (en) Near visually lossless video recompression
CN107258081B (zh) 对使用非正方形分割编码视频数据的优化
TW201313031A (zh) 用於大色度區塊的可變長度寫碼係數寫碼
US20210006839A1 (en) Picture filtering method and apparatus, and video codec
KR20220036982A (ko) 비디오 인코딩 및 디코딩을 위한 이차 변환
WO2019086033A1 (zh) 视频数据解码方法及装置
JP2014207536A (ja) 画像処理装置および方法
EP3114837A1 (en) Flicker detection and mitigation in video coding
US20210037247A1 (en) Encoding and decoding with refinement of the reconstructed picture
WO2024077574A1 (zh) 基于神经网络的环路滤波、视频编解码方法、装置和系统
WO2024077575A1 (zh) 基于神经网络的环路滤波、视频编解码方法、装置和系统
WO2024077576A1 (zh) 基于神经网络的环路滤波、视频编解码方法、装置和系统
WO2021263251A1 (en) State transition for dependent quantization in video coding
RU2772813C1 (ru) Видеокодер, видеодекодер и соответствующие способы кодирования и декодирования
WO2023004590A1 (zh) 一种视频解码、编码方法及设备、存储介质
WO2024007157A1 (zh) 多参考行索引列表排序方法、视频编解码方法、装置和系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22961763

Country of ref document: EP

Kind code of ref document: A1