WO2024149395A1 - Method, apparatus, and medium for visual data processing - Google Patents

Method, apparatus, and medium for visual data processing

Info

Publication number
WO2024149395A1
WO2024149395A1 (PCT/CN2024/072165)
Authority
WO
WIPO (PCT)
Prior art keywords
convolutional layer
visual data
coding
bitstream
syntax element
Prior art date
Application number
PCT/CN2024/072165
Other languages
French (fr)
Inventor
Zhaobin Zhang
Semih Esenlik
Yaojun Wu
Meng Wang
Yanchen ZUO
Kai Zhang
Li Zhang
Original Assignee
Douyin Vision Co., Ltd.
Bytedance Inc.
Priority date
Filing date
Publication date
Application filed by Douyin Vision Co., Ltd. and Bytedance Inc.
Publication of WO2024149395A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component

Definitions

  • Embodiments of the present disclosure relate generally to visual data processing techniques, and more particularly, to neural network-based visual data coding.
  • Neural networks were originally invented through interdisciplinary research between neuroscience and mathematics. They have shown strong capabilities in the context of non-linear transform and classification. Neural network-based image/video compression technology has made significant progress during the past half decade. It is reported that the latest neural network-based image compression algorithms achieve rate-distortion (R-D) performance comparable to that of Versatile Video Coding (VVC). With the performance of neural image compression continually being improved, neural network-based video compression has become an actively developing research area. However, the coding quality and coding efficiency of neural network-based image/video coding are generally expected to be further improved.
  • Embodiments of the present disclosure provide a solution for visual data processing.
  • a method for visual data processing comprises: performing a conversion between visual data and a bitstream of the visual data with a neural network (NN) -based model, wherein the NN-based model comprises a prediction fusion module comprising at least one convolutional layer, and the number of the at least one convolutional layer is smaller than 6.
  • NN neural network
  • the prediction fusion module in the NN-based model comprises less than 6 convolutional layers.
  • the proposed method can advantageously simplify the prediction fusion module and thus reduce the time consumed at this stage. Thereby, the coding efficiency can be improved.
  • an apparatus for visual data processing comprises a processor and a non-transitory memory with instructions thereon.
  • a non-transitory computer-readable storage medium stores instructions that cause a processor to perform a method in accordance with the first aspect of the present disclosure.
  • non-transitory computer-readable recording medium stores a bitstream of visual data which is generated by a method performed by an apparatus for visual data processing.
  • the method comprises: performing a conversion between the visual data and the bitstream with a neural network (NN) -based model, wherein the NN-based model comprises a prediction fusion module comprising at least one convolutional layer, and the number of the at least one convolutional layer is smaller than 6.
  • NN neural network
  • a method for storing a bitstream of visual data comprises: performing a conversion between the visual data and the bitstream with a neural network (NN) -based model, wherein the NN-based model comprises a prediction fusion module comprising at least one convolutional layer, and the number of the at least one convolutional layer is smaller than 6; and storing the bitstream in a non-transitory computer-readable recording medium.
  • NN neural network
  • Fig. 1 illustrates a block diagram of an example visual data coding system, in accordance with some embodiments of the present disclosure
  • Fig. 2 illustrates a typical transform coding scheme
  • Fig. 3 illustrates an image from the Kodak dataset and different representations of the image
  • Fig. 4 illustrates a network architecture of an autoencoder implementing the hyperprior model
  • Fig. 5 illustrates a block diagram of a combined model
  • Fig. 6 illustrates an encoding process of the combined model
  • Fig. 7 illustrates a decoding process of the combined model
  • Fig. 8 illustrates a decoder architecture with decoupled processing
  • Fig. 9 illustrates examples of a simplified decoupled architecture in accordance with embodiments of the present disclosure.
  • Fig. 10 illustrates an example simplified decoupled architecture in accordance with embodiments of the present disclosure
  • Fig. 11 illustrates an example simplified decoupled architecture in accordance with embodiments of the present disclosure
  • Fig. 12 illustrates exemplary variants of the simplified prediction fusion net in accordance with embodiments of the present disclosure
  • Fig. 13 illustrates an example simplified decoupled architecture in accordance with embodiments of the present disclosure
  • Fig. 14 illustrates an example simplified decoupled architecture in accordance with embodiments of the present disclosure
  • Fig. 15 illustrates exemplary variants of the enhanced hyper decoder net in accordance with embodiments of the present disclosure
  • Fig. 16 illustrates a flowchart of a method for visual data processing in accordance with embodiments of the present disclosure.
  • Fig. 17 illustrates a block diagram of a computing device in which various embodiments of the present disclosure can be implemented.
  • references in the present disclosure to “one embodiment,” “an embodiment,” “an example embodiment,” and the like indicate that the embodiment described may include a particular feature, structure, or characteristic, but it is not necessary that every embodiment includes the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an example embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
  • the terms first and second etc. may be used herein to describe various elements, but these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and similarly, a second element could be termed a first element, without departing from the scope of example embodiments.
  • the term “and/or” includes any and all combinations of one or more of the listed terms.
  • Fig. 1 is a block diagram that illustrates an example visual data coding system 100 that may utilize the techniques of this disclosure.
  • the visual data coding system 100 may include a source device 110 and a destination device 120.
  • the source device 110 can be also referred to as a visual data encoding device, and the destination device 120 can be also referred to as a visual data decoding device.
  • the source device 110 can be configured to generate encoded visual data and the destination device 120 can be configured to decode the encoded visual data generated by the source device 110.
  • the source device 110 may include a visual data source 112, a visual data encoder 114, and an input/output (I/O) interface 116.
  • I/O input/output
  • the visual data source 112 may include a source such as a visual data capture device.
  • Examples of the visual data capture device include, but are not limited to, an interface to receive visual data from a visual data provider, a computer graphics system for generating visual data, and/or a combination thereof.
  • the visual data may comprise one or more pictures of a video or one or more images.
  • the visual data encoder 114 encodes the visual data from the visual data source 112 to generate a bitstream.
  • the bitstream may include a sequence of bits that form a coded representation of the visual data.
  • the bitstream may include coded pictures and associated visual data.
  • the coded picture is a coded representation of a picture.
  • the associated visual data may include sequence parameter sets, picture parameter sets, and other syntax structures.
  • the I/O interface 116 may include a modulator/demodulator and/or a transmitter.
  • the encoded visual data may be transmitted directly to destination device 120 via the I/O interface 116 through the network 130A.
  • the encoded visual data may also be stored onto a storage medium/server 130B for access by destination device 120.
  • the destination device 120 may include an I/O interface 126, a visual data decoder 124, and a display device 122.
  • the I/O interface 126 may include a receiver and/or a modem.
  • the I/O interface 126 may acquire encoded visual data from the source device 110 or the storage medium/server 130B.
  • the visual data decoder 124 may decode the encoded visual data.
  • the display device 122 may display the decoded visual data to a user.
  • the display device 122 may be integrated with the destination device 120, or may be external to the destination device 120 which is configured to interface with an external display device.
  • the visual data encoder 114 and the visual data decoder 124 may operate according to a visual data coding standard, such as video coding standard or still picture coding standard and other current and/or further standards.
  • a neural network-based image and video compression method comprising an auto-regressive subnetwork and an entropy coding engine, wherein entropy coding is performed independently of the auto-regressive subnetwork, namely the decoupled architecture.
  • the decoupled architecture is simplified to reduce the decoding time complexity.
  • Neural networks were originally invented through interdisciplinary research between neuroscience and mathematics. They have shown strong capabilities in the context of non-linear transform and classification. Neural network-based image/video compression technology has made significant progress during the past half decade. It is reported that the latest neural network-based image compression algorithms achieve R-D performance comparable to that of Versatile Video Coding (VVC), the latest video coding standard developed by the Joint Video Experts Team (JVET) with experts from MPEG and VCEG. With the performance of neural image compression continually being improved, neural network-based video compression has become an actively developing research area. However, neural network-based video coding still remains in its infancy due to the inherent difficulty of the problem.
  • VVC Versatile Video Coding
  • Image/video compression usually refers to the computing technology that compresses image/video into binary code to facilitate storage and transmission.
  • the binary codes may or may not support losslessly reconstructing the original image/video, termed lossless compression and lossy compression, respectively.
  • Most of the efforts are devoted to lossy compression since lossless reconstruction is not necessary in most scenarios.
  • the performance of image/video compression algorithms is evaluated from two aspects, i.e., compression ratio and reconstruction quality. The compression ratio is directly related to the number of binary codes, the fewer the better; the reconstruction quality is measured by comparing the reconstructed image/video with the original image/video, the higher the better.
  • Image/video compression techniques can be divided into two branches, the classical video coding methods and the neural-network-based video compression methods.
  • Classical video coding schemes adopt transform-based solutions, in which researchers have exploited statistical dependency in the latent variables (e.g., DCT or wavelet coefficients) by carefully hand-engineering entropy codes modeling the dependencies in the quantized regime.
  • Neural network-based video compression comes in two flavors, neural network-based coding tools and end-to-end neural network-based video compression. The former are embedded into existing classical video codecs as coding tools and only serve as part of the framework, while the latter is a separate framework developed based on neural networks without depending on classical video codecs.
  • Neural network-based image/video compression is not a new invention, since a number of researchers have worked on neural network-based image coding. But the network architectures were relatively shallow, and the performance was not satisfactory. Benefiting from the abundance of data and the support of powerful computing resources, neural network-based methods are now better exploited in a variety of applications. At present, neural network-based image/video compression has shown promising improvements and confirmed its feasibility. Nevertheless, this technology is still far from mature and a lot of challenges need to be addressed.
  • Neural networks also known as artificial neural networks (ANN)
  • ANN artificial neural networks
  • One benefit of such deep networks is believed to be the capacity for processing data with multiple levels of abstraction and converting data into different kinds of representations. Note that these representations are not manually designed; instead, the deep network including the processing layers is learned from massive data using a general machine learning procedure. Deep learning eliminates the necessity of handcrafted representations, and is thus regarded as useful especially for processing natively unstructured data, such as acoustic and visual signals, whilst processing such data has been a longstanding difficulty in the artificial intelligence field.
  • the optimal method for lossless coding can reach the minimal coding rate -log2 p(x), where p(x) is the probability of symbol x.
  • p (x) is the probability of symbol x.
  • a number of lossless coding methods were developed in literature and among them arithmetic coding is believed to be among the optimal ones.
  • arithmetic coding ensures that the coding rate is as close as possible to its theoretical limit -log2 p(x), without considering the rounding error. Therefore, the remaining problem is how to determine the probability, which is however very challenging for natural images/videos due to the curse of dimensionality.
  • one way to model p (x) is to predict pixel probabilities one by one in a raster scan order based on previous observations, where x is an image.
  • p(x) = p(x 1) p(x 2 | x 1) … p(x i | x 1, …, x i-1) … p(x m×n | x 1, …, x m×n-1), where m and n are the image height and width.
  • k is a pre-defined constant controlling the range of the context.
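  • As an illustration only (not part of the disclosed method), the sketch below accumulates the total coding rate -log2 p(x) of an image under such a raster-scan autoregressive model with context range k; the helper conditional_prob is a hypothetical stand-in for any learned probability estimator.

```python
import numpy as np

def conditional_prob(context: np.ndarray, value: int) -> float:
    """Hypothetical stand-in for a learned model p(x_i | context).
    Here it simply returns a uniform distribution over 256 pixel values."""
    return 1.0 / 256.0

def total_rate_bits(image: np.ndarray, k: int = 2) -> float:
    """Accumulate -log2 p(x) over pixels in raster-scan order, conditioning
    each pixel on a causal neighbourhood of range k."""
    h, w = image.shape
    bits = 0.0
    for i in range(h):
        for j in range(w):
            # context block ends at the current pixel; drop the current pixel itself
            ctx = image[max(0, i - k):i + 1, max(0, j - k):j + 1].flatten()[:-1]
            p = conditional_prob(ctx, int(image[i, j]))
            bits += -np.log2(p)
    return bits

img = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)
bits = total_rate_bits(img)
print(f"estimated rate: {bits:.1f} bits ({bits / img.size:.2f} bpp)")
```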
  • condition may also take the sample values of other color components into consideration.
  • R sample is dependent on previously coded pixels (including R/G/B samples)
  • the current G sample may be coded according to previously coded pixels and the current R sample
  • the previously coded pixels and the current R and G samples may also be taken into consideration.
  • Neural networks were originally introduced for computer vision tasks and have been proven to be effective in regression and classification problems. Therefore, it has been proposed to use neural networks to estimate the probability p(x i) given its context x 1, x 2, ..., x i-1.
  • the pixel probability is proposed for binary images, i.e., x i ∈ {-1, +1}.
  • the neural autoregressive distribution estimator (NADE) is designed for pixel probability modeling, where the conditional distribution is modeled with a feed-forward network with a single hidden layer. A similar work is presented, where the feed-forward network also has connections skipping the hidden layer, and the parameters are also shared. Experiments have been performed on the binarized MNIST dataset.
  • NADE is extended to a real-valued model RNADE, where the probability p(x i | x 1, …, x i-1) is modeled with a mixture of Gaussians.
  • Their feed-forward network also has a single hidden layer, but the hidden layer is with rescaling to avoid saturation and uses rectified linear unit (ReLU) instead of sigmoid.
  • ReLU rectified linear unit
  • Multi-dimensional long short-term memory (LSTM) is proposed, which works together with mixtures of conditional Gaussian scale mixtures for probability modeling.
  • LSTM is a special kind of recurrent neural networks (RNNs) and is proven to be good at modeling sequential data.
  • RNNs recurrent neural networks
  • the spatial variant of LSTM is used for images later.
  • Several different neural networks are studied, including RNNs and CNNs namely PixelRNN and PixelCNN, respectively.
  • In PixelRNN, two variants of LSTM, called row LSTM and diagonal BiLSTM, are proposed, where the latter is specifically designed for images.
  • PixelRNN incorporates residual connections to help train deep neural networks with up to 12 layers.
  • In PixelCNN, masked convolutions are used to suit the shape of the context. Compared with previous works, PixelRNN and PixelCNN are more dedicated to natural images: they consider pixels as discrete values (e.g., 0, 1, ..., 255) and predict a multinomial distribution over the discrete values; they deal with color images in RGB color space; and they work well on the large-scale image dataset ImageNet. Gated PixelCNN is proposed to improve PixelCNN and achieves comparable performance with PixelRNN but with much lower complexity.
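  • For readers unfamiliar with masked convolutions, the following PyTorch sketch (an illustration, not the architecture of this disclosure) zeroes out the kernel weights at and after the current position so that each output depends only on previously scanned pixels; mask type "A" excludes the centre pixel, type "B" includes it.

```python
import torch
import torch.nn as nn

class MaskedConv2d(nn.Conv2d):
    """Causal (masked) convolution in the PixelCNN style."""
    def __init__(self, mask_type: str, *args, **kwargs):
        super().__init__(*args, **kwargs)
        assert mask_type in ("A", "B")
        kh, kw = self.kernel_size
        mask = torch.ones_like(self.weight)
        # zero out weights to the right of the centre in the centre row
        mask[:, :, kh // 2, kw // 2 + (mask_type == "B"):] = 0
        # zero out all rows below the centre
        mask[:, :, kh // 2 + 1:, :] = 0
        self.register_buffer("mask", mask)

    def forward(self, x):
        self.weight.data *= self.mask  # enforce causality before each call
        return super().forward(x)

conv = MaskedConv2d("A", in_channels=3, out_channels=64, kernel_size=5, padding=2)
print(conv(torch.randn(1, 3, 16, 16)).shape)  # torch.Size([1, 64, 16, 16])
```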
  • PixelCNN++ is proposed with the following improvements upon PixelCNN: a discretized logistic mixture likelihood is used rather than a 256-way multinomial distribution; down-sampling is used to capture structures at multiple resolutions; additional short-cut connections are introduced to speed up training; dropout is adopted for regularization; RGB is combined for one pixel.
  • PixelSNAIL is proposed, in which causal convolutions are combined with self-attention.
  • the additional condition can be image label information or high-level representations.
  • Auto-encoder originates from the well-known work proposed by Hinton and Salakhutdinov.
  • the method is trained for dimensionality reduction and consists of two parts: encoding and decoding.
  • the encoding part converts the high-dimension input signal to low-dimension representations, typically with reduced spatial size but a greater number of channels.
  • the decoding part attempts to recover the high-dimension input from the low-dimension representation.
  • Auto-encoder enables automated learning of representations and eliminates the need of hand-crafted features, which is also believed to be one of the most important advantages of neural networks.
  • Fig. 2 is an illustration of a typical transform coding scheme.
  • the original image x is transformed by the analysis network g a to achieve the latent representation y.
  • the latent representation y is quantized and compressed into bits.
  • the number of bits R is used to measure the coding rate.
  • the quantized latent representation is then inversely transformed by a synthesis network g s to obtain the reconstructed image
  • the distortion is calculated in a perceptual space by transforming x and with the function g p .
  • the prototype auto-encoder for image compression is in Fig. 2, which can be regarded as a transform coding strategy.
  • the synthesis network will inversely transform the quantized latent representation back to obtain the reconstructed image
  • the framework is trained with a rate-distortion loss function that combines the distortion D between x and the reconstruction with the rate R calculated or estimated from the quantized representation, weighted by the Lagrange multiplier λ. It should be noted that D can be calculated in either the pixel domain or the perceptual domain. All existing research works follow this prototype, and the difference might only be the network structure or the loss function.
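  • The trade-off above can be written in a few lines of code; the sketch below assumes the common form D + λ·R with MSE as the pixel-domain distortion, which is one possible instantiation rather than the exact loss of any specific work.

```python
import torch

def rate_distortion_loss(x, x_hat, rate_bits, lam=0.01):
    """Joint R-D cost: MSE distortion plus lambda-weighted rate in bits per pixel."""
    num_pixels = x.shape[-1] * x.shape[-2]
    distortion = torch.mean((x - x_hat) ** 2)
    bpp = rate_bits / num_pixels
    return distortion + lam * bpp

x = torch.rand(1, 3, 64, 64)
x_hat = x + 0.05 * torch.randn_like(x)
print(rate_distortion_loss(x, x_hat, rate_bits=torch.tensor(4096.0)))
```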
  • RNNs and CNNs are the most widely used architectures.
  • a general framework was proposed for variable rate image compression using RNN. They use binary quantization to generate codes and do not consider rate during training.
  • the framework indeed provides a scalable coding functionality, where RNN with convolutional and deconvolution layers is reported to perform decently.
  • an improved version was proposed by upgrading the encoder with a neural network similar to PixelRNN to compress the binary codes. The performance is reportedly better than JPEG on Kodak image dataset using MS-SSIM evaluation metric.
  • the RNN-based solution was further improved by introducing hidden-state priming.
  • an SSIM-weighted loss function is also designed, and a spatially adaptive bitrate mechanism is enabled. They achieve better results than BPG on the Kodak image dataset using MS-SSIM as the evaluation metric.
  • a general framework was designed for rate-distortion optimized image compression. They use multiary quantization to generate integer codes and consider the rate during training, i.e. the loss is the joint rate-distortion cost, where the distortion can be MSE or others. They add random uniform noise to simulate the quantization during training and use the differential entropy of the noisy codes as a proxy for the rate. They use generalized divisive normalization (GDN) as the network structure, which consists of a linear mapping followed by a nonlinear parametric normalization. The effectiveness of GDN on image coding is verified. An improved version was proposed, where they use 3 convolutional layers each followed by a down-sampling layer and a GDN layer as the forward transform.
  • GDN generalized divisive normalization
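  • The additive-noise quantization proxy mentioned above can be sketched as follows (an illustrative convention: uniform noise in [-0.5, 0.5) during training, hard rounding at inference).

```python
import torch

def quantize(y: torch.Tensor, training: bool) -> torch.Tensor:
    """Training: simulate quantization with additive uniform noise.
    Inference: hard rounding to the nearest integer."""
    if training:
        noise = torch.empty_like(y).uniform_(-0.5, 0.5)
        return y + noise
    return torch.round(y)

y = torch.randn(2, 192, 8, 8) * 3.0
y_train = quantize(y, training=True)   # still differentiable with respect to y
y_test = quantize(y, training=False)   # integer-valued
print((y_train - y).abs().max().item(), y_test.frac().abs().max().item())
```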
  • the inverse transform is implemented with a subnet h s attempting to decode from the quantized side information to the standard deviation of the quantized latent, which will be further used during the arithmetic coding of the quantized latent.
  • their method is slightly worse than BPG in terms of PSNR.
  • the structures were further explored in the residue space by introducing an autoregressive model to estimate both the standard deviation and the mean. In the latest work, a Gaussian mixture model was used to further remove redundancy in the residue. The reported performance is on par with VVC on the Kodak image set using PSNR as the evaluation metric.
  • the encoder subnetwork (section 2.3.2) transforms the image vector x using a parametric analysis transform into a latent representation y, which is then quantized to form the quantized latent. Because the quantized latent is discrete-valued, it can be losslessly compressed using entropy coding techniques such as arithmetic coding and transmitted as a sequence of bits.
  • the left-hand side of the model is the encoder g a and the decoder g s (explained in section 2.3.2).
  • the right-hand side is the additional hyper encoder h a and hyper decoder h s networks that are used to obtain the side information.
  • the encoder subjects the input image x to g a , yielding the responses y with spatially varying standard deviations.
  • the responses y are fed into h a , summarizing the distribution of standard deviations in z.
  • z is then quantized, compressed, and transmitted as side information.
  • the encoder uses the quantized vector to estimate ⁇ , the spatial distribution of standard deviations, and uses it to compress and transmit the quantized image representation
  • the decoder first recovers the quantized side information from the compressed signal. It then uses h s to obtain σ, which provides it with the correct probability estimates to successfully recover the quantized latent as well. It then feeds the quantized latent into g s to obtain the reconstructed image.
  • the spatial redundancies of the quantized latent are reduced.
  • the rightmost image in Fig. 3 corresponds to the quantized latent when the hyper encoder/decoder are used. Compared to the middle right image, the spatial redundancies are significantly reduced, as the samples of the quantized latent are less correlated.
  • Fig. 3 illustrates an image from the Kodak dataset and different representations of the image.
  • the leftmost image in Fig. 3 shows an image from the Kodak dataset.
  • the middle left image in Fig. 3 shows visualization of a latent representation y of that image.
  • the middle right image in Fig. 3 shows standard deviations ⁇ of the latent.
  • the rightmost image in Fig. 3 shows latents y after the hyper prior (hyper encoder and decoder) network is introduced.
  • Fig. 4 illustrates a network architecture of an autoencoder implementing the hyperprior model.
  • the left side shows an image autoencoder network, the right side corresponds to the hyperprior subnetwork.
  • the analysis and synthesis transforms are denoted as g a and g s , respectively.
  • Q represents quantization
  • AE, AD represent arithmetic encoder and arithmetic decoder, respectively.
  • the hyperprior model consists of two subnetworks, hyper encoder (denoted with h a ) and hyper decoder (denoted with h s ) .
  • the hyper prior model generates a quantized hyper latent, which comprises information about the probability distribution of the samples of the quantized latent. The quantized hyper latent is included in the bitstream and transmitted to the receiver (decoder) along with the quantized latent.
  • hyperprior model improves the modelling of the probability distribution of the quantized latent
  • additional improvement can be obtained by utilizing an autoregressive model that predicts quantized latents from their causal context (Context Model) .
  • auto-regressive means that the output of a process is later used as input to it.
  • the context model subnetwork generates one sample of a latent, which is later used as input to obtain the next sample.
  • a joint architecture was used where both hyperprior model subnetwork (hyper encoder and hyper decoder) and a context model subnetwork are utilized.
  • the hyperprior and the context model are combined to learn a probabilistic model over quantized latents which is then used for entropy coding.
  • the outputs of context subnetwork and hyper decoder subnetwork are combined by the subnetwork called Entropy Parameters, which generates the mean ⁇ and scale (or variance) ⁇ parameters for a Gaussian probability model.
  • the gaussian probability model is then used to encode the samples of the quantized latents into bitstream with the help of the arithmetic encoder (AE) module.
  • AE arithmetic encoder
  • the gaussian probability model is utilized to obtain the quantized latents from the bitstream by arithmetic decoder (AD) module.
  • Fig. 5 illustrates a combined model that jointly optimizes an autoregressive component that estimates the probability distributions of latents from their causal context (Context Model) along with a hyperprior and the underlying autoencoder.
  • Real-valued latent representations are quantized (Q) to create quantized latents and quantized hyper-latents which are compressed into a bitstream using an arithmetic encoder (AE) and decompressed by an arithmetic decoder (AD) .
  • AE arithmetic encoder
  • AD arithmetic decoder
  • the highlighted region corresponds to the components that are executed by the receiver (i.e. a decoder) to recover an image from a compressed bitstream.
  • the latent samples are modeled with a gaussian distribution or gaussian mixture models (but not limited to these).
  • the context model and hyper prior are jointly used to estimate the probability distribution of the latent samples. Since a gaussian distribution can be defined by a mean and a variance (aka sigma or scale) , the joint model is used to estimate the mean and variance (denoted as ⁇ and ⁇ ) .
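  • As a rough sketch of how the two information sources could be fused (channel counts and kernel sizes here are illustrative assumptions, not the disclosed configuration), an entropy parameters module may concatenate the context-model and hyper-decoder features and map them to per-sample mean and scale with 1×1 convolutions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EntropyParameters(nn.Module):
    """Fuse context-model and hyper-decoder features into (mu, sigma)."""
    def __init__(self, ctx_ch=384, hyper_ch=384, latent_ch=192):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ctx_ch + hyper_ch, 640, kernel_size=1), nn.LeakyReLU(inplace=True),
            nn.Conv2d(640, 512, kernel_size=1), nn.LeakyReLU(inplace=True),
            nn.Conv2d(512, 2 * latent_ch, kernel_size=1),
        )

    def forward(self, ctx_feat, hyper_feat):
        params = self.net(torch.cat([ctx_feat, hyper_feat], dim=1))
        mu, sigma = params.chunk(2, dim=1)
        return mu, F.softplus(sigma)  # keep the scale strictly positive

ep = EntropyParameters()
mu, sigma = ep(torch.randn(1, 384, 16, 16), torch.randn(1, 384, 16, 16))
print(mu.shape, sigma.shape)  # both torch.Size([1, 192, 16, 16])
```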
  • Fig. 5 corresponds to the state-of-the-art compression method. In this section and the next, the encoding and decoding processes will be described separately.
  • Fig. 6 depicts the encoding process.
  • the input image is first processed with an encoder subnetwork.
  • the encoder transforms the input image into a transformed representation called latent, denoted by y.
  • y is then input to a quantizer block, denoted by Q, to obtain the quantized latent, which is then converted to a bitstream (bits1) using an arithmetic encoding module (denoted AE).
  • the arithmetic encoding block converts each sample of the quantized latent into the bitstream (bits1) one by one, in a sequential order.
  • the modules hyper encoder, context, hyper decoder, and entropy parameters subnetworks are used to estimate the probability distributions of the samples of the quantized latent
  • the latent y is input to hyper encoder, which outputs the hyper latent (denoted by z) .
  • the hyper latent is then quantized and a second bitstream (bits2) is generated using arithmetic encoding (AE) module.
  • AE arithmetic encoding
  • the factorized entropy module generates the probability distribution, that is used to encode the quantized hyper latent into bitstream.
  • the quantized hyper latent includes information about the probability distribution of the quantized latent
  • the Entropy Parameters subnetwork generates the probability distribution estimations, that are used to encode the quantized latent
  • the information that is generated by the Entropy Parameters typically includes a mean μ and a scale (or variance) σ parameter, which are together used to obtain a gaussian probability distribution.
  • a gaussian distribution of a random variable x is defined as f(x) = (1 / (σ√(2π))) · exp(-(x - μ)² / (2σ²)), wherein the parameter μ is the mean or expectation of the distribution (and also its median and mode), while the parameter σ is its standard deviation (or variance, or scale).
  • the mean and the variance need to be determined.
  • the entropy parameters module is used to estimate the mean and the variance values.
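  • Since arithmetic coding needs a probability per discrete quantized value rather than a density, the probability of an integer sample is typically taken as the Gaussian mass on the interval [value - 0.5, value + 0.5]; the snippet below shows this computation (an illustrative convention, not a normative definition).

```python
import math

def gaussian_cdf(x: float, mu: float, sigma: float) -> float:
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def discrete_prob(y_hat: int, mu: float, sigma: float) -> float:
    """P(Y = y_hat) under a Gaussian discretized to integer bins."""
    return gaussian_cdf(y_hat + 0.5, mu, sigma) - gaussian_cdf(y_hat - 0.5, mu, sigma)

p = discrete_prob(2, mu=1.7, sigma=0.8)
print(f"p = {p:.4f}, ideal code length = {-math.log2(p):.2f} bits")
```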
  • the subnetwork hyper decoder generates part of the information that is used by the entropy parameters subnetwork; the other part of the information is generated by the autoregressive module called the context module.
  • the context module generates information about the probability distribution of a sample of the quantized latent, using the samples that are already encoded by the arithmetic encoding (AE) module.
  • the quantized latent is typically a matrix composed of many samples. The samples can be indicated using two-dimensional or three-dimensional indices, depending on the dimensions of the matrix.
  • the samples are encoded by AE one by one, typically using a raster scan order. In a raster scan order the rows of a matrix are processed from top to bottom, wherein the samples in a row are processed from left to right.
  • in such a scenario (wherein the raster scan order is used by the AE to encode the samples into the bitstream), the context module generates the information pertaining to a sample using the samples encoded before it, in raster scan order.
  • the information generated by the context module and the hyper decoder are combined by the entropy parameters module to generate the probability distributions that are used to encode the quantized latent into bitstream (bits1) .
  • the first and the second bitstream are transmitted to the decoder as a result of the encoding process.
  • encoder The analysis transform that converts the input image into latent representation is also called an encoder (or auto-encoder) .
  • Fig. 7 depicts the state-of-the-art decoding process.
  • the decoder first receives the first bitstream (bits1) and the second bitstream (bits2) that are generated by a corresponding encoder.
  • the bits2 is first decoded by the arithmetic decoding (AD) module by utilizing the probability distributions generated by the factorized entropy subnetwork.
  • the factorized entropy module typically generates the probability distributions using a predetermined template, for example using predetermined mean and variance values in the case of gaussian distribution.
  • the output of the arithmetic decoding process of bits2 is the quantized hyper latent.
  • the AD process is the inverse of the AE process that was applied in the encoder.
  • the processes of AE and AD are lossless, meaning that the quantized hyper latent that was generated by the encoder can be reconstructed at the decoder without any change.
  • after the quantized hyper latent is obtained, it is processed by the hyper decoder, whose output is fed to the entropy parameters module.
  • the three subnetworks, context, hyper decoder and entropy parameters that are employed in the decoder are identical to the ones in the encoder. Therefore, the exact same probability distributions can be obtained in the decoder (as in encoder) , which is essential for reconstructing the quantized latent without any loss. As a result, the identical version of the quantized latent that was obtained in the encoder can be obtained in the decoder.
  • the arithmetic decoding module decodes the samples of the quantized latent one by one from the bitstream bits1.
  • autoregressive model the context model
  • the fully reconstructed quantized latent is input to the synthesis transform (denoted as decoder in Fig. 7) module to obtain the reconstructed image.
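  • In pseudocode form, the sample-by-sample decoding loop described above can be sketched as follows; context_model, entropy_params, hyper_feat and arithmetic_decode are placeholder names for the corresponding subnetworks and the entropy decoder, not actual APIs.

```python
import numpy as np

H, W = 4, 4
hyper_feat = np.full((H, W), 1.5)      # placeholder output of the hyper decoder

def context_model(latent, i, j):
    """Placeholder: sum over a causal neighbourhood (decoded samples only)."""
    ctx = latent[max(0, i - 1):i + 1, max(0, j - 1):j + 1].copy()
    ctx[-1, -1] = 0.0                  # exclude the current, not-yet-decoded sample
    return ctx.sum()

def entropy_params(ctx, hyp):
    """Placeholder: map context + hyper information to (mu, sigma)."""
    return 0.1 * ctx + hyp, 1.0

def arithmetic_decode(bits, mu, sigma):
    """Placeholder for the AD module: here it simply returns the rounded mean."""
    return round(mu)

latent = np.zeros((H, W))
for i in range(H):                      # raster scan: rows from top to bottom
    for j in range(W):                  # samples from left to right within a row
        ctx = context_model(latent, i, j)
        mu, sigma = entropy_params(ctx, hyper_feat[i, j])
        latent[i, j] = arithmetic_decode(None, mu, sigma)
print(latent)
```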
  • decoder The synthesis transform that converts the quantized latent into reconstructed image is also called a decoder (or auto-decoder) .
  • neural image compression serves as the foundation of intra compression in neural network-based video compression, thus the development of neural network-based video compression technology came later than that of neural network-based image compression, but it needs far more effort to solve the challenges due to its complexity.
  • since 2017, a few researchers have been working on neural network-based video compression schemes.
  • video compression needs efficient methods to remove inter-picture redundancy.
  • Inter-picture prediction is then a crucial step in these works.
  • Motion estimation and compensation are widely adopted but were not implemented with trained neural networks until recently.
  • Random access requires that decoding can be started from any point of the sequence; it typically divides the entire sequence into multiple individual segments, and each segment can be decoded independently.
  • the low-latency case aims at reducing decoding time, so usually only temporally previous frames can be used as reference frames to decode subsequent frames.
  • the early work first splits the video sequence frames into blocks, and each block will choose one of two available modes, either intra coding or inter coding. If intra coding is selected, there is an associated auto-encoder to compress the block. If inter coding is selected, motion estimation and compensation are performed with traditional methods and a trained neural network will be used for residue compression. The outputs of the auto-encoders are directly quantized and coded by the Huffman method.
  • Another end-to-end neural network-based video compression framework is then proposed, in which all the modules are implemented with neural networks.
  • the scheme accepts the current frame and the prior reconstructed frame as inputs, and optical flow is derived with a pre-trained neural network as the motion information.
  • the reference frame is warped with the motion information, followed by a neural network generating the motion compensated frame.
  • the residues and the motion information are compressed with two separate neural auto-encoders.
  • the whole framework is trained with a single rate-distortion loss function. It achieves better performance than H.264.
  • An advanced neural network-based video compression scheme is proposed. It inherits and extends traditional video coding schemes with neural networks with the following major features: 1) using only one auto-encoder to compress motion information and residues; 2) motion compensation with multiple frames and multiple optical flows; 3) an on-line state is learned and propagated through the following frames over time. This scheme achieves better performance in MS-SSIM than HEVC reference software.
  • An extended end-to-end neural network-based video compression framework is proposed afterwards.
  • multiple frames are used as references. It is thereby able to provide more accurate prediction of current frame by using multiple reference frames and associated motion information.
  • motion field prediction is deployed to remove motion redundancy along temporal channel.
  • Postprocessing networks are also introduced in this work to remove reconstruction artifacts from previous processes. The performance is better than that of H.265 by a noticeable margin in terms of both PSNR and MS-SSIM.
  • the scale-space flow is then proposed to replace the commonly used optical flow by adding a scale parameter. It reportedly achieves better performance than H.264.
  • a multi-resolution representation for optical flows is proposed. Concretely, the motion estimation network produces multiple optical flows with different resolutions and lets the network learn which one to choose under the loss function. The performance is better than that of H.265.
  • a frame interpolation based method was initially designed.
  • the key frames are first compressed with a neural image compressor and the remaining frames are compressed in a hierarchical order. They perform motion compensation in the perceptual domain, i.e. deriving the feature maps at multiple spatial scales of the original frame and using motion to warp the feature maps, which will be used for the image compressor.
  • the method is reportedly on par with H.264.
  • interpolation-based video compression is then proposed, wherein the interpolation model combines motion information compression and image synthesis, and the same auto-encoder is used for image and residual.
  • a neural network-based video compression method based on variational auto-encoders with a deterministic encoder is proposed.
  • the model consists of an auto-encoder and an auto-regressive prior. Different from previous methods, this method accepts a group of pictures (GOP) as input and incorporates a 3D autoregressive prior by taking into account the temporal correlation while coding the latent representations. It provides performance comparable to H.265.
  • GOP group of pictures
  • a grayscale digital image can be represented by x ∈ D^(m×n), where D is the set of possible values of a pixel, m is the image height and n is the image width. For example, D = {0, 1, …, 255} is a common setting, and in this case |D| = 256 = 2^8, thus a pixel can be represented by an 8-bit integer.
  • An uncompressed grayscale digital image has 8 bits-per-pixel (bpp) , while compressed bits are definitely less.
  • a color image is typically represented in multiple channels to record the color information.
  • an image can be denoted by x ∈ D^(m×n×3) with three separate channels storing Red, Green and Blue information. Similar to the 8-bit grayscale image, an uncompressed 8-bit RGB image has 24 bpp.
  • Digital images/videos can be represented in different color spaces.
  • the neural network-based video compression schemes are mostly developed in RGB color space while the traditional codecs typically use YUV color space to represent the video sequences.
  • in the YUV color space, an image is decomposed into three channels, namely Y, Cb and Cr, where Y is the luminance component and Cb/Cr are the chroma components.
  • the benefit comes from the fact that Cb and Cr are typically downsampled to achieve pre-compression, since the human visual system is less sensitive to the chroma components.
  • a color video sequence is composed of multiple color images, called frames, to record scenes at different timestamps.
  • MSE mean-squared-error
  • the quality of the reconstructed image compared with the original image can be measured by the peak signal-to-noise ratio (PSNR): PSNR = 10 · log10( (max(D))² / MSE ), where max(D) is the maximal value in D (e.g., 255 for 8-bit images) and MSE is the mean-squared-error between the original and reconstructed images.
  • SSIM structural similarity
  • MS-SSIM multi-scale SSIM
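  • A short example of how MSE and PSNR would be computed for an 8-bit image (maximum value 255) is given below.

```python
import numpy as np

def psnr(original: np.ndarray, reconstructed: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two images of the same shape."""
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

orig = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
recon = np.clip(orig.astype(int) + np.random.randint(-3, 4, size=orig.shape), 0, 255).astype(np.uint8)
print(f"PSNR = {psnr(orig, recon):.2f} dB")
```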
  • Fig. 8 illustrates a decoder architecture with decoupled processing.
  • the decoder with decoupled processing decouples the arithmetic decoding process (the process of receiving bits and generating the quantized latent) from the autoregressive loop (the combined module comprising the hyper decoder, Mask Conv and Prediction Fusion network).
  • the autoregressive loop consumes a significant amount of time since it repeats the process until all the elements are decoded.
  • simplifying this part is of significant importance.
  • the Prediction Fusion network consumes most of the time in the autoregressive process. For example, in decoding the luma component, the Prediction Fusion network consumes 72.58% of the time of the whole autoregressive process.
  • Existing methods have not tried to simplify the decoupled module (including the hyper decoder, Mask Conv network and the Prediction fusion network) to reduce the decoder complexity.
  • Fig. 9 illustrates examples of a simplified decoupled architecture in accordance with embodiments of the present disclosure.
  • the original design of the autoregressive loop involves a Hyper Decoder Net, a Context Model Net and a Prediction Fusion Net. Since the Prediction Fusion Net consumes the most decoding time, it should be simplified. In addition, in case the performance drops, modifications to the Hyper Decoder Net can be applied to compensate for the performance drop. Since the Prediction Fusion Net is used repetitively in the autoregressive loop and the Hyper Decoder Net is only used once, simplifying the Prediction Fusion Net while enhancing the Hyper Decoder Net would still lead to decreased decoding time.
  • the target of the solution is to design a simplified decoupled processing module (including the Prediction Fusion Net, the Hyper Decoder Net and the Context Model Net) with reduced decoding complexity while maintaining or improving the coding efficiency.
  • the solution modifies the Prediction Fusion Net architecture by reducing the number of convolutional layers, adjusting the number of channels or any combinations of these means.
  • the Hyper Decoder Net could be modified accordingly to compensate for the coding efficiency drop caused by simplifying the Prediction Fusion Net.
  • the following examples as shown in Fig. 9 illustrate the possible ways to simplify the decoupled architecture.
  • Example 1: remove one convolutional layer from the Prediction Fusion Net. The last convolutional layer is removed as illustrated, followed by changing the number of output channels of the new last convolutional layer to C.
  • Example 2: remove multiple convolutional layers from the Prediction Fusion Net.
  • Example 3: remove multiple convolutional layers from the Prediction Fusion Net and change the number of channels. It is possible to increase the number of channels of each convolutional layer without impacting the decoding time. In this example, adjusting the number of channels includes both increasing and decreasing the number of channels.
  • Example 4: modifications can be applied to the Hyper Decoder Net together with any one of Examples 1-3 or any combination of them. Modifying the Hyper Decoder Net could be adding one or more convolutional layers, adjusting the number of channels, modifying the activation layers, or replacing one or more convolutional layer (s) with one or more new convolutional layer (s). The objective is to compensate for the possible coding efficiency drop resulting from simplifying the Prediction Fusion Net.
  • the Prediction Fusion Net can be modified to simplify the decoupled architecture.
  • one or more layers could be removed from the Prediction Fusion Net.
  • the number of channels can be adjusted accordingly.
  • one or more of the layers can be replaced with one or more new layers.
  • multiple Prediction Fusion Net architectures could exist in the decoder and a syntax element (SE) such as a flag may be signaled in the bitstreams to indicate which one is used.
  • SE syntax element
  • an SE is used to indicate how many Prediction Fusion Net architectures are involved in the decoder.
  • one flag may be used to indicate whether the simplified Prediction Fusion Net or the original design is used.
  • the modified Prediction Fusion Net can be applied to either luma or chroma or both.
  • the Hyper Decoder Net can be modified.
  • one or more convolutional layers can be added to the Hyper Decoder Net to increase its decoding capability.
  • the number of channels can be adjusted accordingly.
  • one convolutional layer can be replaced by more than one convolutional layer.
  • a flag may be signaled in the bitstreams to indicate whether the modified Hyper Decoder Net or the original Hyper Decoder Net is used.
  • multiple modified Hyper Decoder Nets can be included in the decoder and a syntax may be used to indicate which one is used.
  • the modified Hyper Decoder Net can be applied to either luma or chroma or both.
  • Whether to and/or how to apply the disclosed methods above may be signalled at block level/sequence level/group of pictures level/picture level/slice level/tile group level, such as in coding structures of CTU/CU/TU/PU/CTB/CB/TB/PB, or sequence header/picture header/SPS/VPS/DPS/DCI/PPS/APS/slice header/tile group header.
  • Whether to and/or how to apply the disclosed methods above may depend on coded information, such as block size, colour format, single/dual tree partitioning, colour component, and slice/picture type.
  • a syntax element disclosed above may be binarized as a flag, a fixed length code, an EG (x) code, a unary code, a truncated unary code, a truncated binary code, etc. It can be signed or unsigned.
  • a syntax element disclosed above may be coded with at least one context model. Or it may be bypass coded.
  • a syntax element disclosed above may be signaled in a conditional way.
  • the SE is signaled only if the corresponding function is applicable.
  • the SE is signaled only if the dimensions (width and/or height) of the block satisfy a condition.
  • a syntax element disclosed above may be signaled at block level/sequence level/group of pictures level/picture level/slice level/tile group level, such as in coding structures of CTU/CU/TU/PU/CTB/CB/TB/PB, or sequence header/picture header/SPS/VPS/DPS/DCI/PPS/APS/slice header/tile group header.
  • the simplified Prediction Fusion Net consumes less time, and its time reduction contribution is nontrivial since it is used repeatedly in the autoregressive loop.
  • the modified Hyper Decoder Net can compensate for the possible coding efficiency drop from the simplified Prediction Fusion Net.
  • the joint efforts provide a decoupled decoder with reduced decoding complexity yet maintaining or outperforming the original design in terms of coding efficiency.
  • the simplified decoupled architecture can be approached from two aspects: 1) simplifying the Prediction Fusion Net; and 2) enhancing the Hyper Decoder Net.
  • the Prediction Fusion Net is involved in the autoregressive loop and executed in a repetitive manner, and thus incurs the most decoding time. Therefore, the disclosure proposes to simplify the Prediction Fusion Net first. However, the coding efficiency could possibly be reduced due to the simplified Prediction Fusion Net. If that happens, the Hyper Decoder Net can be enhanced accordingly to compensate for the loss. Alternatively, if there is no loss in coding efficiency after simplifying the Prediction Fusion Net, the Hyper Decoder Net can optionally be enhanced to improve the coding efficiency.
  • Fig. 10 illustrates the simplified decoupled architecture obtained by removing one convolutional layer from the Prediction Fusion Net.
  • the Hyper Decoder Net is the original design.
  • Fig. 10 the simplified decoupled architecture is implemented by removing one convolutional layer from the Prediction Net.
  • the original design of the Prediction Fusion Net has the following configurations as shown in Table 1, where // is the floor division operator. Each convolutional layer is followed by a Leaky ReLU activation layer except for the final convolutional layer.
  • the simplified Prediction Fusion Net is obtained by removing the Conv #6 layer and changing the number of output channels of Conv #5 to C.
  • the configurations of the simplified Prediction Fusion Net are tabulated in Table 2.
  • any other single convolutional layer may be removed, such as Conv #5, Conv #4, Conv #3, Conv #2 or Conv #1, with corresponding channel number adjustments of the preceding or following convolutional layer.
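  • A PyTorch sketch of the five-layer simplified Prediction Fusion Net of Fig. 10 is given below. Since Tables 1 and 2 are not reproduced here, the input width (4C, assuming concatenated context and hyper-decoder features), the intermediate widths and the 1×1 kernel size are placeholders; only the structure (five convolutions, each but the last followed by LeakyReLU, the last outputting C channels) is meant to be illustrative.

```python
import torch
import torch.nn as nn

def simplified_prediction_fusion(C: int, mid: int = 128) -> nn.Sequential:
    """Five-conv variant: the original Conv #6 is removed and the (new) last
    convolution outputs C channels. Widths and kernel size are assumptions."""
    widths = [4 * C, mid, mid, mid, mid, C]        # in/out channel chain of the 5 convs
    layers = []
    for k in range(5):
        layers.append(nn.Conv2d(widths[k], widths[k + 1], kernel_size=1))
        if k < 4:                                  # no activation after the last conv
            layers.append(nn.LeakyReLU(inplace=True))
    return nn.Sequential(*layers)

net = simplified_prediction_fusion(C=192)
fused = net(torch.randn(1, 4 * 192, 16, 16))       # e.g. concatenated input features
print(fused.shape)                                 # torch.Size([1, 192, 16, 16])
```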
  • Fig. 11 illustrates the simplified decoupled architecture obtained by removing multiple convolutional layers from the Prediction Fusion Net.
  • the Hyper Decoder is the original design.
  • In Fig. 11, an implementation is illustrated in which Conv #2, #3 and #5 are removed from the Prediction Fusion Net, followed by corresponding channel number adjustments of the preceding or following convolutional layers.
  • the number of convolutional layers removed can be 2, 3, 4 or 5, i.e., at least one convolutional layer should remain, and preferably 2 or more.
  • Three convolutional layers are included, each followed by a LeakyReLU activation layer except for the last convolutional layer, with channel patterns of 4C-4C-C, 4C-2C-C or 2C-2C-C.
  • the LeakyReLU layer can also be removed, i.e., removing the LeakyReLU layer between the concatenation operation and the single convolutional layer in Fig. 12.
  • Fig. 13 illustrates an integration of the 1-convolutional layer Prediction Fusion Net (as shown in Fig. 12) with Hyper Decoder Net.
  • the simplified Prediction Fusion Net has only one convolutional layer.
  • a LeakyReLU layer is optionally inserted between this convolutional layer and the concatenation operation.
  • a LeakyReLU layer can also be added to all other implementations, including those in Fig. 10, Fig. 11, and Fig. 12.
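  • The lighter variants described above (Fig. 12 and Fig. 13) could be sketched in PyTorch as follows; the 4C input width, the 1×1 kernels and the 4C-2C-C channel pattern are assumptions chosen for illustration.

```python
import torch
import torch.nn as nn

def fusion_3conv(C: int) -> nn.Sequential:
    """Three convolutions with output widths 4C-2C-C (one of the listed
    patterns); LeakyReLU after all but the last convolution."""
    return nn.Sequential(
        nn.Conv2d(4 * C, 4 * C, kernel_size=1), nn.LeakyReLU(inplace=True),
        nn.Conv2d(4 * C, 2 * C, kernel_size=1), nn.LeakyReLU(inplace=True),
        nn.Conv2d(2 * C, C, kernel_size=1),
    )

def fusion_1conv(C: int, use_leaky_relu: bool = True) -> nn.Sequential:
    """Single convolution right after the concatenation, with the optional
    LeakyReLU that may be placed between them."""
    layers = [nn.LeakyReLU(inplace=True)] if use_leaky_relu else []
    layers.append(nn.Conv2d(4 * C, C, kernel_size=1))
    return nn.Sequential(*layers)

x = torch.randn(1, 4 * 192, 16, 16)
print(fusion_3conv(192)(x).shape, fusion_1conv(192)(x).shape)
```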
  • Fig. 14 illustrates the simplified decoupled architecture with the enhanced Hyper Decoder Net.
  • the simplified prediction fusion net is obtained by removing multiple convolutional layers and changing the number of channels.
  • the hyper decoder net is enhanced by replacing the 4th convolutional layer with two convolutional layers.
  • Fig. 14 shows Variation 3 of the simplified decoupled architecture. It includes a simplified Prediction Fusion Net obtained by removing 3 convolutional layers and an enhanced Hyper Decoder Net.
  • the original Conv #4 is replaced by two new convolutional layers. In the original design, Conv #4 is responsible for upscaling the spatial size and increasing the number of channels. In the new design, the first layer only increases the number of channels and the second layer only upscales the spatial size.
  • the Hyper Decoder Net exemplified in the Variation 3 is shown in Table 5.
  • the original Conv #4 is replaced with two new convolutional layers Conv#4-1 and Conv #4-2.
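  • One way to realise this split is sketched below; the choice of a 1×1 convolution for the channel increase and a stride-2 transposed convolution for the spatial upscaling, as well as the channel counts, are illustrative assumptions since Table 5 is not reproduced here.

```python
import torch
import torch.nn as nn

class SplitConv4(nn.Module):
    """Replacement for the original Conv #4: channel growth and spatial
    upscaling are performed by two separate convolutional layers."""
    def __init__(self, in_ch: int = 192, out_ch: int = 288):
        super().__init__()
        self.conv4_1 = nn.Conv2d(in_ch, out_ch, kernel_size=1)            # channels only
        self.conv4_2 = nn.ConvTranspose2d(out_ch, out_ch, kernel_size=4,  # 2x spatial only
                                          stride=2, padding=1)

    def forward(self, x):
        return self.conv4_2(self.conv4_1(x))

x = torch.randn(1, 192, 8, 8)
print(SplitConv4()(x).shape)  # torch.Size([1, 288, 16, 16]): more channels, doubled size
```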
  • Variation 3 can also be implemented in the following manner.
  • the Hyper Decoder Net can be implemented as shown in Fig. 15.
  • visual data may refer to a video, an image, a picture in a video, or any other visual data suitable to be coded.
  • an autoregressive loop in a neural network (NN) -based model comprises a context model net, prediction fusion net, and a hyper decoder net.
  • the prediction fusion net may consume a large amount of time during the autoregressive process. This results in an increase of the time needed for the whole coding process, and thus the coding efficiency deteriorates.
  • Fig. 16 illustrates a flowchart of a method 1600 for visual data processing in accordance with some embodiments of the present disclosure.
  • a conversion between visual data and a bitstream of the visual data is performed with a neural network (NN) -based model.
  • the conversion may include encoding the visual data into the bitstream.
  • the conversion may include decoding the visual data from the bitstream.
  • the decoding model shown in Fig. 8 may be employed for decoding the visual data from the bitstream.
  • an NN-based model comprises a prediction fusion module comprising at least one convolutional layer.
  • the number of the at least one convolutional layer is smaller than 6.
  • an NN-based model may be a model based on neural network technologies.
  • an NN-based model may specify a sequence of neural network modules (also called an architecture) and model parameters.
  • a neural network module may comprise a set of neural network layers. Each neural network layer specifies a tensor operation which receives and outputs tensors, and each layer has trainable parameters.
  • a convolutional layer may also be referred to as convolution layer.
  • a prediction fusion module may also be referred to as prediction fusion network or prediction fusion net or prediction fusion for short.
  • a hyper decoder module may also be referred to as a hyper decoder network or hyper decoder net, or hyper decoder for short. It should be understood that the possible implementations of the NN-based model described here are merely illustrative and therefore should not be construed as limiting the present disclosure in any way.
  • the number of the at least one convolutional layer may be 5. In other words, the prediction fusion module may comprise 5 convolutional layers. An example for this case is shown in Fig. 10. Alternatively, the number of the at least one convolutional layer may be 3. In other words, the prediction fusion module may comprise 3 convolutional layers. An example for this case is shown in Fig. 11. In some embodiments, the number of the at least one convolutional layer may be 1. In other words, the prediction fusion module may comprise only one convolutional layer. An example for this case is shown in Fig. 13. It should be understood that the number of the at least one convolutional layer may be any other suitable number that is smaller than 6, such as 2 or 4. The scope of the present disclosure is not limited in this respect.
  • the prediction fusion module in the NN-based model comprises less than 6 convolutional layers.
  • the proposed method can advantageously simplify the prediction fusion module and thus reduce the time consumed at this stage. Thereby, the coding efficiency can be improved.
  • the at least one convolutional layer may share the same kernel size.
  • all 3 convolutional layers may share the same kernel size, such as 1 ⁇ 1, 3 ⁇ 3, or the like.
  • the at least one convolutional layer may have different kernel sizes.
  • the first 2 convolutional layers may share the same kernel size (such as 1 ⁇ 1 or the like)
  • the last convolutional layer may have a different kernel size (such as 3 ⁇ 3 or the like) .
  • the prediction fusion module may further comprise at least one non-linear activation unit.
  • a non-linear activation unit may be a rectified linear unit.
  • the rectified linear unit may be denoted as ReLU(), and its element-wise function may be as follows: ReLU(x) = max(0, x).
  • a non-linear activation unit may be a leaky rectified linear unit.
  • the leaky rectified linear unit may be denoted as LeakyReLU(), and its element-wise function may be as follows: LeakyReLU(x) = max(0, x) + negative_slope × min(0, x), i.e., LeakyReLU(x) = x for x ≥ 0 and LeakyReLU(x) = negative_slope × x otherwise.
  • variable negative_slope may be equal to 0.01.
  • each of the at least one convolutional layer, except for the last convolutional layer in the at least one convolutional layer, may be followed by a non-linear activation unit.
  • the prediction fusion module may further comprise 4 non-linear activation units, as shown in Fig. 10.
  • the prediction fusion module may further comprise 2 non-linear activation units, as shown in Fig. 11. With the aid of the non-linear activation units, non-linearity is introduced and thus the coding quality can be improved.
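  • as a sketch of this layout (again with illustrative channel widths rather than the disclosure's exact configuration), a builder that inserts a LeakyReLU after every convolutional layer except the last one might look as follows; a 5-layer net then contains 4 activation units and a 3-layer net contains 2.

```python
import torch.nn as nn

def make_fusion_net(channels, kernel_sizes, negative_slope=0.01):
    """Build a prediction fusion net in which every convolutional layer except
    the last one is followed by a LeakyReLU. `channels` lists the width at each
    layer boundary, so len(channels) - 1 convolutions are created."""
    layers = []
    num_convs = len(channels) - 1
    for i in range(num_convs):
        k = kernel_sizes[i]
        layers.append(nn.Conv2d(channels[i], channels[i + 1], k, padding=k // 2))
        if i < num_convs - 1:               # no activation after the last convolution
            layers.append(nn.LeakyReLU(negative_slope))
    return nn.Sequential(*layers)

net_3conv = make_fusion_net([128, 64, 64, 3], [1, 1, 3])        # 3 convs, 2 LeakyReLUs
net_5conv = make_fusion_net([128, 96, 96, 64, 64, 3], [3] * 5)  # 5 convs, 4 LeakyReLUs
```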
  • the at least one convolutional layer may comprise a first convolutional layer and a second convolutional layer following the first convolutional layer.
  • the number of input channels of the second convolutional layer may be the same as the number of output channels of the first convolutional layer. That is, the number of input channels of a convolutional layer is equal to the number of output channels of a further convolutional layer immediately preceding the convolutional layer.
  • the number of input channels of each of the at least one convolutional layer may be a multiple of a first predetermined number, and the number of output channels of each of the at least one convolutional layer may be a multiple of the first predetermined number.
  • the first predetermined number may be an integer, such as 32 or the like.
  • the number of output channels of each of the at least one convolutional layer may be a power of 2.
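  • the channel-count constraints above can be checked mechanically; the helper below is only an illustrative consistency check (the base of 32 is one possible choice of the predetermined number), not part of the codec itself.

```python
import torch.nn as nn

def check_channels(seq, base=32):
    """For a Sequential of Conv2d layers: (1) each layer's input channel count
    equals the previous layer's output channel count, (2) interior output widths
    are multiples of `base`, and (3) interior output widths are powers of 2."""
    convs = [m for m in seq if isinstance(m, nn.Conv2d)]
    chained = all(a.out_channels == b.in_channels for a, b in zip(convs, convs[1:]))
    multiples = all(c.out_channels % base == 0 for c in convs[:-1])
    powers_of_two = all((c.out_channels & (c.out_channels - 1)) == 0 for c in convs[:-1])
    return chained, multiples, powers_of_two

demo = nn.Sequential(
    nn.Conv2d(128, 64, 1), nn.Conv2d(64, 64, 1), nn.Conv2d(64, 3, 3, padding=1)
)
print(check_channels(demo))     # (True, True, True)
```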
  • the prediction fusion module may be applied to a luma component of the visual data and/or a chroma component of the visual data.
  • the NN-based model may comprise a plurality of prediction fusion modules comprising the above-described simplified prediction fusion module.
  • the bitstream may comprise a first syntax element indicating one of the plurality of prediction fusion modules that is used for the conversion.
  • the first syntax element may comprise a flag.
  • the bitstream may further comprise a second syntax element indicating the number of the plurality of prediction fusion modules.
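  • purely as an illustration of how such signalling could be carried (the syntax element names, ordering, and fixed-length binarization below are assumptions, not the disclosure's syntax), a selector among several prediction fusion modules might be written as:

```python
def write_fusion_selection(num_modules, selected_idx):
    """Hypothetical syntax: a second syntax element carrying the number of
    prediction fusion modules, followed by a first syntax element selecting
    one of them (a 1-bit flag when only two candidates exist)."""
    bits = [int(b) for b in format(num_modules, "04b")]     # illustrative 4-bit count
    if num_modules == 2:
        bits.append(selected_idx)                           # flag
    else:
        width = max(1, (num_modules - 1).bit_length())
        bits += [int(b) for b in format(selected_idx, f"0{width}b")]
    return bits

print(write_fusion_selection(num_modules=2, selected_idx=1))    # [0, 0, 1, 0, 1]
```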
  • the NN-based model may further comprise a hyper decoder module comprising a plurality of convolutional layers.
  • the number of the plurality of convolutional layers may be larger than 5.
  • the proposed method can advantageously enhance the coding efficiency.
  • the plurality of convolutional layers comprise a third convolutional layer and a fourth convolutional layer following the third convolutional layer.
  • the number of input channels of the fourth convolutional layer may be the same as the number of output channels of the third convolutional layer. That is, the number of input channels of a convolutional layer is equal to the number of output channels of a further convolutional layer immediately preceding the convolutional layer.
  • the number of input channels of at least one of the plurality of convolutional layers may be a multiple of a second predetermined number, and the number of output channels of at least one of the plurality of convolutional layers may be a multiple of the second predetermined number.
  • the second predetermined number may be an integer, such as 32 or the like.
  • the number of input channels of at least one of the plurality of convolutional layers may be a power of 2
  • the number of output channels of at least one of the plurality of convolutional layers may be a power of 2.
  • the plurality of convolutional layers may comprise a single convolutional layer for upsampling a spatial size of an input tensor of the single convolutional layer, and a further single convolutional layer for increasing the number of channels of an input tensor of the further single convolutional layer.
  • alternatively, a single convolutional layer may be used to implement these two functions, i.e., upsampling the spatial size and increasing the number of channels.
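  • a minimal sketch of such a hyper decoder with more than 5 convolutional layers is given below; the widths, kernel sizes, and the use of transposed convolutions for upsampling are illustrative assumptions. Note how one layer only upsamples the spatial size (channels unchanged) while a separate 1×1 layer only increases the number of channels.

```python
import torch
import torch.nn as nn

hyper_decoder = nn.Sequential(
    # upsampling only: spatial size x2, channel count unchanged
    nn.ConvTranspose2d(192, 192, kernel_size=5, stride=2, padding=2, output_padding=1),
    nn.LeakyReLU(0.01),
    # channel increase only: 1x1 kernel, spatial size unchanged
    nn.Conv2d(192, 256, kernel_size=1),
    nn.LeakyReLU(0.01),
    nn.ConvTranspose2d(256, 256, kernel_size=5, stride=2, padding=2, output_padding=1),
    nn.LeakyReLU(0.01),
    nn.Conv2d(256, 320, kernel_size=3, padding=1),
    nn.LeakyReLU(0.01),
    nn.Conv2d(320, 320, kernel_size=3, padding=1),
    nn.Conv2d(320, 320, kernel_size=3, padding=1),
)   # 6 convolutional layers in total, i.e. more than 5

z_hat = torch.randn(1, 192, 8, 8)        # quantized hyper latent of illustrative size
print(hyper_decoder(z_hat).shape)        # torch.Size([1, 320, 32, 32])
```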
  • the hyper decoder module may be applied to a luma component of the visual data and/or a chroma component of the visual data.
  • the NN-based model may comprise a plurality of hyper decoder modules comprising the above-described enhanced hyper decoder module.
  • the bitstream may comprise a third syntax element indicating one of the plurality of hyper decoder modules that is used for the conversion.
  • the third syntax element may comprise a flag.
  • whether to and/or how to apply the method may be indicated at a block level, a sequence level, a group of pictures level, a picture level, a slice level, or a tile group level. Additionally or alternatively, whether to and/or how to apply the method may be indicated in one of the following: a coding structure of a coding tree unit (CTU) , a coding structure of a coding unit (CU) , a coding structure of a transform unit (TU) , a coding structure of a prediction unit (PU) , a coding structure of a coding tree block (CTB) , a coding structure of a coding block (CB) , a coding structure of a transform block (TB) , a coding structure of a prediction block (PB) , a sequence header, a picture header, a sequence parameter set (SPS) , a video parameter set (VPS) , a dependency parameter set (DPS) , a decoding capability information (DCI) , a picture parameter set (PPS) , an adaptation parameter set (APS) , a slice header, or a tile group header.
  • whether to and/or how to apply the method may be dependent on the coded information.
  • the coded information may comprise a block size, a color format, a single tree partitioning, a dual tree partitioning, a color component, a slice type, a picture type, and/or the like.
  • the method may also be applicable to a coding tool requiring chroma fusion.
  • a syntax element may be binarized as a flag, a fixed length code, an exponential Golomb (EG) code, a unary code, a truncated unary code, a truncated binary code, or the like. Additionally or alternatively, the syntax element may be signed or unsigned. In some embodiments, a syntax element may be coded with at least one context model. Alternatively, the syntax element may be bypass coded.
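  • for reference, the listed binarizations are standard entropy-coding primitives; the short sketch below shows what a unary, truncated unary, and 0-th order exponential Golomb code of a value look like, without implying that any particular syntax element of the disclosure uses them.

```python
def unary(v):
    """Unary code: v ones followed by a terminating zero."""
    return [1] * v + [0]

def truncated_unary(v, v_max):
    """Truncated unary: the terminating zero is omitted for the largest value."""
    return [1] * v if v == v_max else [1] * v + [0]

def exp_golomb(v):
    """0-th order exponential Golomb code of an unsigned value."""
    code = format(v + 1, "b")
    return [0] * (len(code) - 1) + [int(b) for b in code]

print(unary(3))                  # [1, 1, 1, 0]
print(truncated_unary(3, 3))     # [1, 1, 1]
print(exp_golomb(4))             # [0, 0, 1, 0, 1]
```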
  • whether a syntax element is indicated in the bitstream may be dependent on a condition.
  • for example, the condition may be that a width or a height of a block is larger than a threshold.
  • a syntax element may be indicated at a block level, a sequence level, a group of pictures level, a picture level, a slice level, or a tile group level. Additionally or alternatively, the syntax element may be indicated in one of the following: a coding structure of CTU, a coding structure of CU, a coding structure of TU, a coding structure of PU, a coding structure of CTB, a coding structure of CB, a coding structure of TB, a coding structure of PB, a sequence header, a picture header, an SPS, a VPS, a DPS, a DCI, a PPS, an APS, a slice header, or a tile group header.
  • the solutions in accordance with some embodiments of the present disclosure can advantageously improve coding efficiency and coding quality.
  • a non-transitory computer-readable recording medium stores a bitstream of visual data which is generated by a method performed by an apparatus for visual data processing.
  • a conversion between the visual data and the bitstream with a neural network (NN) -based model is performed.
  • the NN-based model comprises a prediction fusion module comprising at least one convolutional layer, and the number of the at least one convolutional layer is smaller than 6.
  • a method for storing a bitstream of a video is provided.
  • a conversion between the visual data and the bitstream with a neural network (NN) -based model is performed.
  • the NN-based model comprises a prediction fusion module comprising at least one convolutional layer, and the number of the at least one convolutional layer is smaller than 6.
  • the bitstream is stored in a non-transitory computer-readable recording medium.
  • a method for visual data processing comprising: performing a conversion between visual data and a bitstream of the visual data with a neural network (NN) -based model, wherein the NN-based model comprises a prediction fusion module comprising at least one convolutional layer, and the number of the at least one convolutional layer is smaller than 6.
  • Clause 2 The method of clause 1, wherein the number of the at least one convolutional layer is one of 1, 2, 3, 4, or 5.
  • Clause 5 The method of any of clauses 1-4, wherein each of the at least one convolutional layer, except for the last convolutional layer in the at least one convolutional layer, is followed by a non-linear activation unit.
  • Clause 7 The method of any of clauses 1-6, wherein the at least one convolutional layer comprises a first convolutional layer and a second convolutional layer following the first convolutional layer, the number of input channels of the second convolutional layer is the same as the number of output channels of the first convolutional layer.
  • Clause 8 The method of any of clauses 1-7, wherein the number of input channels of each of the at least one convolutional layer is a multiple of a first predetermined number, the number of output channels of each of the at least one convolutional layer is a multiple of the first predetermined number, and the first predetermined number is an integer.
  • Clause 10 The method of any of clauses 1-7, wherein the number of input channels of each of the at least one convolutional layer is a power of 2, and the number of output channels of each of the at least one convolutional layer is a power of 2.
  • Clause 13 The method of clause 12, wherein the bitstream further comprises a second syntax element indicating the number of the plurality of prediction fusion modules.
  • Clause 14 The method of any of clauses 12-13, wherein the first syntax element comprises a flag.
  • Clause 15 The method of any of clauses 1-14, wherein the NN-based model further comprises a hyper decoder module comprising a plurality of convolutional layers, and the number of the plurality of convolutional layers is larger than 5.
  • Clause 16 The method of clause 15, wherein the plurality of convolutional layers comprise a third convolutional layer and a fourth convolutional layer following the third convolutional layer, the number of input channels of the fourth convolutional layer is the same as the number of output channels of the third convolutional layer.
  • Clause 17 The method of any of clauses 15-16, wherein the number of input channels of at least one of the plurality of convolutional layers is a multiple of a second predetermined number, the number of output channels of at least one of the plurality of convolutional layers is a multiple of the second predetermined number, and the second predetermined number is an integer.
  • Clause 18 The method of clause 17, wherein the second predetermined number is 32.
  • Clause 19 The method of any of clauses 15-16, wherein the number of input channels of at least one of the plurality of convolutional layers is a power of 2, and the number of output channels of at least one of the plurality of convolutional layers is a power of 2.
  • Clause 20 The method of any of clauses 15-19, wherein the plurality of convolutional layers comprises a single convolutional layer for upsampling a spatial size of an input tensor of the single convolutional layer, and a further single convolutional layer for increasing the number of channels of an input tensor of the further single convolutional layer.
  • Clause 21 The method of any of clauses 15-20, wherein the hyper decoder module is applied to at least one of the following: a luma component of the visual data, or a chroma component of the visual data.
  • Clause 23 The method of clause 22, wherein the third syntax element comprises a flag.
  • Clause 24 The method of any of clauses 1-23, wherein whether to and/or how to apply the method is indicated at one of the following: a block level, a sequence level, a group of pictures level, a picture level, a slice level, or a tile group level.
  • Clause 25 The method of any of clauses 1-23, wherein whether to and/or how to apply the method is indicated in one of the following: a coding structure of a coding tree unit (CTU) , a coding structure of a coding unit (CU) , a coding structure of a transform unit (TU) , a coding structure of a prediction unit (PU) , a coding structure of a coding tree block (CTB) , a coding structure of a coding block (CB) , a coding structure of a transform block (TB) , a coding structure of a prediction block (PB) , a sequence header, a picture header, a sequence parameter set (SPS) , a video parameter set (VPS) , a dependency parameter set (DPS) , a decoding capability information (DCI) , a picture parameter set (PPS) , an adaptation parameter set (APS) , a slice header, or a tile group header.
  • Clause 26 The method of any of clauses 1-25, wherein whether to and/or how to apply the method is dependent on the coded information.
  • Clause 27 The method of clause 26, wherein the coded information comprises at least one of the following: a block size, a color format, a single tree partitioning, a dual tree partitioning, a color component, a slice type, or a picture type.
  • Clause 28 The method of any of clauses 1-27, wherein the method is applicable to a coding tool requiring chroma fusion.
  • Clause 29 The method of any of clauses 12-14 and 22-23, wherein a syntax element is binarized as one of the following: a flag, a fixed length code, an exponential Golomb (EG) code, a unary code, a truncated unary code, or a truncated binary code.
  • Clause 30 The method of any of clauses 12-14 and 22-23, wherein a syntax element is signed or unsigned.
  • Clause 31 The method of any of clauses 12-14 and 22-23, wherein a syntax element is coded with at least one context model or bypass coded.
  • Clause 32 The method of any of clauses 12-14 and 22-23, wherein whether a syntax element is indicated in the bitstream is dependent on a condition.
  • Clause 33 The method of clause 32, wherein if a function corresponding to the syntax element is applicable, the syntax element is indicated in the bitstream.
  • Clause 34 The method of clause 32, wherein if a dimension of a block satisfies a condition, the syntax element is indicated in the bitstream.
  • Clause 35 The method of any of clauses 12-14 and 22-23, wherein a syntax element is indicated at one of the following: a block level, a sequence level, a group of pictures level, a picture level, a slice level, or a tile group level.
  • Clause 36 The method of any of clauses 12-14 and 22-23, wherein a syntax element is indicated in one of the following: a coding structure of CTU, a coding structure of CU, a coding structure of TU, a coding structure of PU, a coding structure of CTB, a coding structure of CB, a coding structure of TB, a coding structure of PB, a sequence header, a picture header, an SPS, a VPS, a DPS, a DCI, a PPS, an APS, a slice header, or a tile group header.
  • Clause 37 The method of any of clauses 1-36, wherein the visual data comprise a picture of a video or an image.
  • Clause 38 The method of any of clauses 1-37, wherein the conversion includes encoding the visual data into the bitstream.
  • Clause 39 The method of any of clauses 1-37, wherein the conversion includes decoding the visual data from the bitstream.
  • Clause 40 An apparatus for visual data processing comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to perform a method in accordance with any of clauses 1-39.
  • Clause 41 A non-transitory computer-readable storage medium storing instructions that cause a processor to perform a method in accordance with any of clauses 1-39.
  • a non-transitory computer-readable recording medium storing a bitstream of visual data which is generated by a method performed by an apparatus for visual data processing, wherein the method comprises: performing a conversion between the visual data and the bitstream with a neural network (NN) -based model, wherein the NN-based model comprises a prediction fusion module comprising at least one convolutional layer, and the number of the at least one convolutional layer is smaller than 6.
  • a method for storing a bitstream of visual data comprising: performing a conversion between the visual data and the bitstream with a neural network (NN) -based model, wherein the NN-based model comprises a prediction fusion module comprising at least one convolutional layer, and the number of the at least one convolutional layer is smaller than 6; and storing the bitstream in a non-transitory computer-readable recording medium.
  • Fig. 17 illustrates a block diagram of a computing device 1700 in which various embodiments of the present disclosure can be implemented.
  • the computing device 1700 may be implemented as or included in the source device 110 (or the visual data encoder 114) or the destination device 120 (or the visual data decoder 124) .
  • computing device 1700 shown in Fig. 17 is merely for purpose of illustration, without suggesting any limitation to the functions and scopes of the embodiments of the present disclosure in any manner.
  • the computing device 1700 may be implemented as a general-purpose computing device.
  • the computing device 1700 may at least comprise one or more processors or processing units 1710, a memory 1720, a storage unit 1730, one or more communication units 1740, one or more input devices 1750, and one or more output devices 1760.
  • the computing device 1700 may be implemented as any user terminal or server terminal having the computing capability.
  • the server terminal may be a server, a large-scale computing device or the like that is provided by a service provider.
  • the user terminal may for example be any type of mobile terminal, fixed terminal, or portable terminal, including a mobile phone, station, unit, device, multimedia computer, multimedia tablet, Internet node, communicator, desktop computer, laptop computer, notebook computer, netbook computer, tablet computer, personal communication system (PCS) device, personal navigation device, personal digital assistant (PDA) , audio/video player, digital camera/video camera, positioning device, television receiver, radio broadcast receiver, E-book device, gaming device, or any combination thereof, including the accessories and peripherals of these devices, or any combination thereof.
  • the computing device 1700 can support any type of interface to a user (such as “wearable” circuitry and the like) .
  • the processing unit 1710 may be a physical or virtual processor and can implement various processes based on programs stored in the memory 1720. In a multi-processor system, multiple processing units execute computer executable instructions in parallel so as to improve the parallel processing capability of the computing device 1700.
  • the processing unit 1710 may also be referred to as a central processing unit (CPU) , a microprocessor, a controller or a microcontroller.
  • the computing device 1700 typically includes various computer storage media. Such media can be any media accessible by the computing device 1700, including, but not limited to, volatile and non-volatile media, or detachable and non-detachable media.
  • the memory 1720 can be a volatile memory (for example, a register, cache, Random Access Memory (RAM) ) , a non-volatile memory (such as a Read-Only Memory (ROM) , Electrically Erasable Programmable Read-Only Memory (EEPROM) , or a flash memory) , or any combination thereof.
  • the storage unit 1730 may be any detachable or non-detachable medium and may include a machine-readable medium such as a memory, a flash memory drive, a magnetic disk, or any other medium, which can be used for storing information and/or visual data and can be accessed in the computing device 1700.
  • the computing device 1700 may further include additional detachable/non-detachable, volatile/non-volatile memory medium.
  • a magnetic disk drive for reading from and/or writing into a detachable and non-volatile magnetic disk
  • an optical disk drive for reading from and/or writing into a detachable non-volatile optical disk.
  • each drive may be connected to a bus (not shown) via one or more visual data medium interfaces.
  • the communication unit 1740 communicates with a further computing device via the communication medium.
  • the functions of the components in the computing device 1700 can be implemented by a single computing cluster or multiple computing machines that can communicate via communication connections. Therefore, the computing device 1700 can operate in a networked environment using a logical connection with one or more other servers, networked personal computers (PCs) or further general network nodes.
  • the input device 1750 may be one or more of a variety of input devices, such as a mouse, keyboard, tracking ball, voice-input device, and the like.
  • the output device 1760 may be one or more of a variety of output devices, such as a display, loudspeaker, printer, and the like.
  • the computing device 1700 can further communicate with one or more external devices (not shown) such as the storage devices and display device, with one or more devices enabling the user to interact with the computing device 1700, or any devices (such as a network card, a modem and the like) enabling the computing device 1700 to communicate with one or more other computing devices, if required.
  • Such communication can be performed via input/output (I/O) interfaces (not shown) .
  • some or all components of the computing device 1700 may also be arranged in cloud computing architecture.
  • the components may be provided remotely and work together to implement the functionalities described in the present disclosure.
  • cloud computing provides computing, software, visual data access and storage service, which will not require end users to be aware of the physical locations or configurations of the systems or hardware providing these services.
  • the cloud computing provides the services via a wide area network (such as Internet) using suitable protocols.
  • a cloud computing provider provides applications over the wide area network, which can be accessed through a web browser or any other computing components.
  • the software or components of the cloud computing architecture and corresponding visual data may be stored on a server at a remote position.
  • the computing resources in the cloud computing environment may be merged or distributed at locations in a remote visual data center.
  • Cloud computing infrastructures may provide the services through a shared visual data center, though they behave as a single access point for the users. Therefore, the cloud computing architectures may be used to provide the components and functionalities described herein from a service provider at a remote location. Alternatively, they may be provided from a conventional server or installed directly or otherwise on a client device.
  • the computing device 1700 may be used to implement visual data encoding/decoding in embodiments of the present disclosure.
  • the memory 1720 may include one or more visual data coding modules 1725 having one or more program instructions. These modules are accessible and executable by the processing unit 1710 to perform the functionalities of the various embodiments described herein.
  • the input device 1750 may receive visual data as an input 1770 to be encoded.
  • the visual data may be processed, for example, by the visual data coding module 1725, to generate an encoded bitstream.
  • the encoded bitstream may be provided via the output device 1760 as an output 1780.
  • the input device 1750 may receive an encoded bitstream as the input 1770.
  • the encoded bitstream may be processed, for example, by the visual data coding module 1725, to generate decoded visual data.
  • the decoded visual data may be provided via the output device 1760 as the output 1780.


Abstract

Embodiments of the present disclosure provide a solution for visual data processing. A method for visual data processing is proposed. The method comprises: performing a conversion between visual data and a bitstream of the visual data with a neural network (NN) -based model, wherein the NN-based model comprises a prediction fusion module comprising at least one convolutional layer, and the number of the at least one convolutional layer is smaller than 6.

Description

METHOD, APPARATUS, AND MEDIUM FOR VISUAL DATA PROCESSING
FIELDS
Embodiments of the present disclosure relate generally to visual data processing techniques, and more particularly, to neural network-based visual data coding.
BACKGROUND
The past decade has witnessed the rapid development of deep learning in a variety of areas, especially in computer vision and image processing. Neural network was invented originally with the interdisciplinary research of neuroscience and mathematics. It has shown strong capabilities in the context of non-linear transform and classification. Neural network-based image/video compression technology has gained significant progress during the past half decade. It is reported that the latest neural network-based image compression algorithm achieves comparable rate-distortion (R-D) performance with Versatile Video Coding (VVC) . With the performance of neural image compression continually being improved, neural network-based video compression has become an actively developing research area. However, coding quality and coding efficiency of neural network-based image/video coding is generally expected to be further improved.
SUMMARY
Embodiments of the present disclosure provide a solution for visual data processing.
In a first aspect, a method for visual data processing is proposed. The method comprises: performing a conversion between visual data and a bitstream of the visual data with a neural network (NN) -based model, wherein the NN-based model comprises a prediction fusion module comprising at least one convolutional layer, and the number of the at least one convolutional layer is smaller than 6.
According to the method in accordance with the first aspect of the present disclosure, the prediction fusion module in the NN-based model comprises less than 6 convolutional layers. Compared with the conventional solution where the prediction fusion module comprises at least 6 convolutional layers, the proposed method can advantageously simplify the prediction fusion module and thus reduce the time consumed at this stage. Thereby, the coding efficiency can be improved.
In a second aspect, an apparatus for visual data processing is proposed. The apparatus comprises a processor and a non-transitory memory with instructions thereon. The instructions upon execution by the processor, cause the processor to perform a method in accordance with the first aspect of the present disclosure.
In a third aspect, a non-transitory computer-readable storage medium is proposed. The non-transitory computer-readable storage medium stores instructions that cause a processor to perform a method in accordance with the first aspect of the present disclosure.
In a fourth aspect, another non-transitory computer-readable recording medium is proposed. The non-transitory computer-readable recording medium stores a bitstream of visual data which is generated by a method performed by an apparatus for visual data processing. The method comprises: performing a conversion between the visual data and the bitstream with a neural network (NN) -based model, wherein the NN-based model comprises a prediction fusion module comprising at least one convolutional layer, and the number of the at least one convolutional layer is smaller than 6.
In a fifth aspect, a method for storing a bitstream of visual data is proposed. The method comprises: performing a conversion between the visual data and the bitstream with a neural network (NN) -based model, wherein the NN-based model comprises a prediction fusion module comprising at least one convolutional layer, and the number of the at least one convolutional layer is smaller than 6; and storing the bitstream in a non-transitory computer-readable recording medium.
This Summary is provided to introduce a selection of concepts in a simplified  form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
Through the following detailed description with reference to the accompanying drawings, the above and other objectives, features, and advantages of example embodiments of the present disclosure will become more apparent. In the example embodiments of the present disclosure, the same reference numerals usually refer to the same components.
Fig. 1 illustrates a block diagram that illustrates an example visual data coding system, in accordance with some embodiments of the present disclosure;
Fig. 2 illustrates a typical transform coding scheme;
Fig. 3 illustrates an image from the Kodak dataset and different representations of the image;
Fig. 4 illustrates a network architecture of an autoencoder implementing the hyperprior model;
Fig. 5 illustrates a block diagram of a combined model;
Fig. 6 illustrates an encoding process of the combined model;
Fig. 7 illustrates a decoding process of the combined model;
Fig. 8 illustrates a decoder architecture with decoupled processing;
Fig. 9 illustrates examples of a simplified decoupled architecture in accordance with embodiments of the present disclosure;
Fig. 10 illustrates an example simplified decoupled architecture in accordance with embodiments of the present disclosure;
Fig. 11 illustrates an example simplified decoupled architecture in accordance with embodiments of the present disclosure;
Fig. 12 illustrates exemplary variants of the simplified prediction fusion net in accordance with embodiments of the present disclosure;
Fig. 13 illustrates an example simplified decoupled architecture in accordance with embodiments of the present disclosure;
Fig. 14 illustrates an example simplified decoupled architecture in accordance with embodiments of the present disclosure;
Fig. 15 illustrates exemplary variants of the enhanced hyper decoder net in accordance with embodiments of the present disclosure;
Fig. 16 illustrates a flowchart of a method for visual data processing in accordance with embodiments of the present disclosure; and
Fig. 17 illustrates a block diagram of a computing device in which various embodiments of the present disclosure can be implemented.
Throughout the drawings, the same or similar reference numerals usually refer to the same or similar elements.
DETAILED DESCRIPTION
Principle of the present disclosure will now be described with reference to some embodiments. It is to be understood that these embodiments are described only for the purpose of illustration and help those skilled in the art to understand and implement the present disclosure, without suggesting any limitation as to the scope of the disclosure. The disclosure described herein can be implemented in various manners other than the ones described below.
In the following description and claims, unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
References in the present disclosure to “one embodiment, ” “an embodiment, ” “an example embodiment, ” and the like indicate that the embodiment described may include a particular feature, structure, or characteristic, but it is not necessary that every embodiment includes the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an example embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
It shall be understood that although the terms “first” and “second” etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and similarly, a second element could be termed a first element, without departing from the scope of example embodiments. As used herein, the term “and/or” includes any and all combinations of one or more of the listed terms.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms “a” , “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” , “comprising” , “has” , “having” , “includes” and/or “including” , when used herein, specify the presence of stated features, elements, and/or components etc., but do not preclude the presence or addition of one or more other features, elements, components and/or combinations thereof.
Example Environment
Fig. 1 is a block diagram that illustrates an example visual data coding system 100 that may utilize the techniques of this disclosure. As shown, the visual data coding system 100 may include a source device 110 and a destination device 120. The source device 110 can be also referred to as a visual data encoding device, and the destination device 120 can be also referred to as a visual data decoding device. In operation, the source device 110 can be configured to generate encoded visual data and the destination device 120 can be configured to decode the encoded visual data generated by the source device 110. The source device 110 may include a visual data source 112, a visual data encoder 114, and an input/output (I/O) interface 116.
The visual data source 112 may include a source such as a visual data capture device. Examples of the visual data capture device include, but are not limited to, an interface to receive visual data from a visual data provider, a computer graphics system for generating visual data, and/or a combination thereof.
The visual data may comprise one or more pictures of a video or one or more images. The visual data encoder 114 encodes the visual data from the visual data source 112 to generate a bitstream. The bitstream may include a sequence of bits that form a coded representation of the visual data. The bitstream may include coded pictures and associated visual data. The coded picture is a coded representation of a picture. The associated visual data may include sequence parameter sets, picture parameter sets, and other syntax structures. The I/O interface 116 may include a modulator/demodulator and/or a transmitter. The encoded visual data may be transmitted directly to destination device 120 via the I/O interface 116 through the network 130A. The encoded visual data may also be stored onto a storage medium/server 130B for access by destination device 120.
The destination device 120 may include an I/O interface 126, a visual data decoder 124, and a display device 122. The I/O interface 126 may include a receiver and/or a modem. The I/O interface 126 may acquire encoded visual data from the source  device 110 or the storage medium/server 130B. The visual data decoder 124 may decode the encoded visual data. The display device 122 may display the decoded visual data to a user. The display device 122 may be integrated with the destination device 120, or may be external to the destination device 120 which is configured to interface with an external display device.
The visual data encoder 114 and the visual data decoder 124 may operate according to a visual data coding standard, such as video coding standard or still picture coding standard and other current and/or further standards.
Some exemplary embodiments of the present disclosure will be described in detail hereinafter. It should be understood that section headings are used in the present document to facilitate ease of understanding and do not limit the embodiments disclosed in a section to only that section. Furthermore, while certain embodiments are described with reference to Versatile Video Coding or other specific visual data codecs, the disclosed techniques are applicable to other coding technologies also. Furthermore, while some embodiments describe coding steps in detail, it will be understood that corresponding decoding steps that undo the coding will be implemented by a decoder. Furthermore, the term visual data processing encompasses visual data coding or compression, visual data decoding or decompression, and visual data transcoding in which visual data are represented from one compressed format into another compressed format or at a different compressed bitrate.
1. Brief Summary
A neural network-based image and video compression method comprises an auto-regressive subnetwork and an entropy coding engine, wherein entropy coding is performed independently of the auto-regressive subnetwork, namely the decoupled architecture. In this disclosure, the decoupled architecture is simplified to reduce the decoding time complexity.
2. Introduction
The past decade has witnessed the rapid development of deep learning in a variety of areas, especially in computer vision and image processing. Inspired from the great success of deep learning technology to computer vision areas, many researchers have shifted their attention from conventional image/video compression techniques to neural image/video compression technologies. Neural network was invented originally with the interdisciplinary research of neuroscience and mathematics. It has shown strong capabilities in the context of non-linear transform and classification. Neural network-based image/video compression technology has gained significant progress during the past half decade. It is reported that the latest neural network-based image compression algorithm achieves comparable R-D performance with Versatile Video Coding (VVC) , the latest video coding standard developed by Joint Video Experts Team (JVET) with experts from MPEG and VCEG. With the performance of neural image compression continually being improved, neural network-based video compression has become an actively developing research area. However, neural network-based video coding still remains in its infancy due to the inherent difficulty of the problem.
2.1. Image/Video compression
Image/video compression usually refers to the computing technology that compresses image/video into binary code to facilitate storage and transmission. The binary codes may or may not support losslessly reconstructing the original image/video, which is termed lossless compression and lossy compression, respectively. Most of the efforts are devoted to lossy compression since lossless reconstruction is not necessary in most scenarios. Usually the performance of image/video compression algorithms is evaluated from two aspects, i.e., compression ratio and reconstruction quality. Compression ratio is directly related to the number of binary codes, the less the better; reconstruction quality is measured by comparing the reconstructed image/video with the original image/video, the higher the better.
Image/video compression techniques can be divided into two branches, the classical video coding methods and the neural-network-based video compression methods. Classical video coding schemes adopt transform-based solutions, in which researchers have exploited statistical dependency in the latent variables (e.g., DCT or wavelet coefficients) by carefully hand-engineering entropy codes modeling the dependencies in the quantized regime. Neural network-based video compression comes in two flavors, neural network-based coding tools and end-to-end neural network-based video compression. The former is embedded into existing classical video codecs as coding tools and only serves as part of the framework, while the latter is a separate framework developed based on neural networks without depending on classical video codecs. In the last three decades, a series of classical video coding standards have been developed to accommodate the increasing visual content. The international standardization organization ISO/IEC has two expert groups, namely the Joint Photographic Experts Group (JPEG) and the Moving Picture Experts Group (MPEG) , and ITU-T also has its own Video Coding Experts Group (VCEG) for standardization of image/video coding technology. The influential video coding standards published by these organizations include JPEG, JPEG 2000, H.262, H.264/AVC and H.265/HEVC. After H.265/HEVC, the Joint Video Experts Team (JVET) formed by MPEG and VCEG has been working on a new video coding standard, Versatile Video Coding (VVC) . The first version of VVC was released in July 2020. An average of 50% bitrate reduction is reported by VVC under the same visual quality compared with HEVC.
Neural network-based image/video compression is not a new invention since a number of researchers have worked on neural network-based image coding. But the network architectures were relatively shallow, and the performance was not satisfactory. Benefiting from the abundance of data and the support of powerful computing resources, neural network-based methods are better exploited in a variety of applications. At present, neural network-based image/video compression has shown promising improvements and confirmed its feasibility. Nevertheless, this technology is still far from mature and a lot of challenges need to be addressed.
2.2. Neural networks
Neural networks, also known as artificial neural networks (ANN) , are the computational models used in machine learning technology which are usually composed of multiple processing layers and each layer is composed of multiple simple but non-linear basic computational units. One benefit of such deep networks is believed to be the capacity for processing data with multiple levels of abstraction and converting data into different kinds of representations. Note that these representations are not manually designed; instead, the deep network including the processing layers is learned from massive data using a general machine learning procedure. Deep learning eliminates the necessity of handcrafted representations, and thus is regarded useful especially for processing natively unstructured data, such as acoustic and visual signal, whilst processing such data has been a longstanding difficulty in the artificial intelligence field.
2.3. Neural networks for image compression
Existing neural networks for image compression methods can be classified into two categories, i.e., pixel probability modeling and auto-encoder. The former belongs to the predictive coding strategy, while the latter is the transform-based solution. Sometimes, these two methods are combined together in literature.
2.3.1 Pixel Probability Modeling
According to Shannon’s information theory, the optimal method for lossless coding can reach the minimal coding rate $-\log_2 p(x)$ where $p(x)$ is the probability of symbol $x$. A number of lossless coding methods were developed in literature and among them arithmetic coding is believed to be among the optimal ones. Given a probability distribution $p(x)$, arithmetic coding ensures that the coding rate is as close as possible to its theoretical limit $-\log_2 p(x)$ without considering the rounding error. Therefore, the remaining problem is how to determine the probability, which is however very challenging for natural image/video due to the curse of dimensionality.
Following the predictive coding strategy, one way to model p (x) is to predict pixel probabilities one by one in a raster scan order based on previous observations, where x is an image.
$p(\mathbf{x}) = p(x_1)\,p(x_2 \mid x_1)\cdots p(x_i \mid x_1, \ldots, x_{i-1})\cdots p(x_{m \times n} \mid x_1, \ldots, x_{m \times n - 1})$     (1)
where m and n are the height and width of the image, respectively. The previous observations are also known as the context of the current pixel. When the image is large, it can be difficult to estimate the conditional probability; therefore a simplified method is to limit the range of its context.
$p(\mathbf{x}) = p(x_1)\,p(x_2 \mid x_1)\cdots p(x_i \mid x_{i-k}, \ldots, x_{i-1})\cdots p(x_{m \times n} \mid x_{m \times n - k}, \ldots, x_{m \times n - 1})$    (2)
where k is a pre-defined constant controlling the range of the context.
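The factorizations in equations (1) and (2) translate directly into a code length: the number of bits needed is the sum of -log2 of the per-pixel conditional probabilities. The toy sketch below (a hypothetical repeat-the-previous-pixel model, not any model from this disclosure) illustrates the limited-context case of equation (2) .

```python
import math

def code_length_bits(pixels, cond_prob, k=2):
    """Total code length -sum(log2 p(x_i | context)) over a raster-scan order,
    with the context limited to the k previously coded pixels as in Eq. (2).
    `cond_prob(x, ctx)` stands in for any learned conditional model."""
    bits = 0.0
    for i, x in enumerate(pixels):
        ctx = tuple(pixels[max(0, i - k):i])
        bits += -math.log2(cond_prob(x, ctx))
    return bits

# Toy binary model: probability 0.9 of repeating the previously coded pixel.
toy_model = lambda x, ctx: 0.5 if not ctx else (0.9 if x == ctx[-1] else 0.1)
print(code_length_bits([0, 0, 0, 1, 1, 1, 0], toy_model))   # approx. 8.25 bits
```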
It should be noted that the condition may also take the sample values of other color components into consideration. For example, when coding RGB color components, the R sample is dependent on previously coded pixels (including R/G/B samples) , the current G sample may be coded according to the previously coded pixels and the current R sample, while for coding the current B sample, the previously coded pixels and the current R and G samples may also be taken into consideration.
Neural networks were originally introduced for computer vision tasks and have been proven to be effective in regression and classification problems. Therefore, it has been proposed to use neural networks to estimate the probability $p(x_i)$ given its context $x_1, x_2, \ldots, x_{i-1}$. The pixel probability is proposed for binary images, i.e., $x_i \in \{-1, +1\}$. The neural autoregressive distribution estimator (NADE) is designed for pixel probability modeling, which is a feed-forward network with a single hidden layer. A similar work is presented, where the feed-forward network also has connections skipping the hidden layer, and the parameters are also shared. Experiments have been performed on the binarized MNIST dataset. NADE is extended to a real-valued model RNADE, where the probability $p(x_i \mid x_1, \ldots, x_{i-1})$ is derived with a mixture of Gaussians. Their feed-forward network also has a single hidden layer, but the hidden layer is with rescaling to avoid saturation and uses the rectified linear unit (ReLU) instead of sigmoid. NADE and RNADE are further improved by reorganizing the order of the pixels and by using deeper neural networks.
Designing advanced neural networks plays an important role in improving pixel probability modeling. Multi-dimensional long short-term memory (LSTM) is proposed, which works together with mixtures of conditional Gaussian scale mixtures for probability modeling. LSTM is a special kind of recurrent neural network (RNN) and is proven to be good at modeling sequential data. The spatial variant of LSTM is used for images later. Several different neural networks are studied, including RNNs and CNNs, namely PixelRNN and PixelCNN, respectively. In PixelRNN, two variants of LSTM, called row LSTM and diagonal BiLSTM, are proposed, where the latter is specifically designed for images. PixelRNN incorporates residual connections to help train deep neural networks with up to 12 layers. In PixelCNN, masked convolutions are used to suit the shape of the context. Compared with previous works, PixelRNN and PixelCNN are more dedicated to natural images: they consider pixels as discrete values (e.g., 0, 1, …, 255) and predict a multinomial distribution over the discrete values; they deal with color images in RGB color space; they work well on the large-scale image dataset ImageNet. Gated PixelCNN is proposed to improve the PixelCNN and achieves comparable performance with PixelRNN but with much less complexity. PixelCNN++ is proposed with the following improvements upon PixelCNN: a discretized logistic mixture likelihood is used rather than a 256-way multinomial distribution; down-sampling is used to capture structures at multiple resolutions; additional short-cut connections are introduced to speed up training; dropout is adopted for regularization; RGB is combined for one pixel. PixelSNAIL is proposed, in which causal convolutions are combined with self-attention.
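To make the notion of a masked convolution concrete, the sketch below implements the basic PixelCNN-style "type A" mask, which zeroes the weights at the current position and at all not-yet-coded positions so that each output depends only on previously scanned pixels. It is a generic illustration of the technique discussed above, not a component of the disclosed codec.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedConv2d(nn.Conv2d):
    """PixelCNN-style masked convolution (mask type 'A')."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        kh, kw = self.kernel_size
        mask = torch.ones_like(self.weight)
        mask[:, :, kh // 2, kw // 2:] = 0    # current pixel and pixels to its right
        mask[:, :, kh // 2 + 1:, :] = 0      # all rows below the current one
        self.register_buffer("mask", mask)

    def forward(self, x):
        return F.conv2d(x, self.weight * self.mask, self.bias,
                        self.stride, self.padding, self.dilation, self.groups)

layer = MaskedConv2d(1, 16, kernel_size=5, padding=2)
print(layer(torch.randn(1, 1, 32, 32)).shape)    # torch.Size([1, 16, 32, 32])
```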
Most of the above methods directly model the probability distribution in the pixel domain. Some researchers also attempt to model the probability distribution as a conditional one upon explicit or latent representations. That being said, it may be estimated as
$p(\mathbf{x} \mid \mathbf{h}) = \prod_{i=1}^{m \times n} p(x_i \mid x_1, \ldots, x_{i-1}, \mathbf{h})$     (3)
where $\mathbf{h}$ is the additional condition and $p(\mathbf{x}) = p(\mathbf{h})\,p(\mathbf{x} \mid \mathbf{h})$, meaning the modeling is split into an unconditional one and a conditional one. The additional condition can be image label information or high-level representations.
2.3.2 Auto-encoder
Auto-encoder originates from the well-known work proposed by Hinton and Salakhutdinov. The method is trained for dimensionality reduction and consists of two parts: encoding and decoding. The encoding part converts the high-dimension input signal to low-dimension representations, typically with reduced spatial size but a greater number of channels. The decoding part attempts to recover the high-dimension input from the low-dimension representation. Auto-encoder enables automated learning of representations and eliminates the need of hand-crafted features, which is also believed to be one of the most important advantages of neural networks.
Fig. 2 is an illustration of a typical transform coding scheme. The original image $x$ is transformed by the analysis network $g_a$ to achieve the latent representation $y$. The latent representation $y$ is quantized and compressed into bits. The number of bits $R$ is used to measure the coding rate. The quantized latent representation $\hat{y}$ is then inversely transformed by a synthesis network $g_s$ to obtain the reconstructed image $\hat{x}$. The distortion is calculated in a perceptual space by transforming $x$ and $\hat{x}$ with the function $g_p$.
It is intuitive to apply an auto-encoder network to lossy image compression. It only needs to encode the learned latent representation from the well-trained neural networks. However, it is not trivial to adapt an auto-encoder to image compression since the original auto-encoder is not optimized for compression, and thus directly using a trained auto-encoder is not efficient. In addition, there exist other major challenges: First, the low-dimension representation should be quantized before being encoded, but the quantization is not differentiable, which is required in backpropagation while training the neural networks. Second, the objective under the compression scenario is different since both the distortion and the rate need to be taken into consideration. Estimating the rate is challenging. Third, a practical image coding scheme needs to support variable rate, scalability, encoding/decoding speed, and interoperability. In response to these challenges, a number of researchers have been actively contributing to this area.
The prototype auto-encoder for image compression is in Fig. 2, which can be regarded as a transform coding strategy. The original image $x$ is transformed with the analysis network $y = g_a(x)$, where $y$ is the latent representation which will be quantized and coded. The synthesis network will inversely transform the quantized latent representation $\hat{y}$ back to obtain the reconstructed image $\hat{x}$. The framework is trained with the rate-distortion loss function, i.e., $L = D + \lambda R$, where $D$ is the distortion between $x$ and $\hat{x}$, $R$ is the rate calculated or estimated from the quantized representation $\hat{y}$, and $\lambda$ is the Lagrange multiplier. It should be noted that $D$ can be calculated in either pixel domain or perceptual domain. All existing research works follow this prototype and the difference might only be the network structure or loss function.
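A sketch of this training objective is given below; the use of MSE as the distortion, the particular λ, and the assumption that `rate_bits` comes from some entropy model of the quantized latent are illustrative choices, not the disclosure's exact setup.

```python
import torch

def rate_distortion_loss(x, x_hat, rate_bits, lmbda=0.01):
    """Joint objective L = D + lambda * R: D is the per-pixel MSE between the
    original and reconstructed images, R the estimated rate in bits per pixel."""
    num_pixels = x.shape[0] * x.shape[2] * x.shape[3]
    distortion = torch.mean((x - x_hat) ** 2)
    rate_bpp = rate_bits / num_pixels
    return distortion + lmbda * rate_bpp

x = torch.rand(1, 3, 64, 64)
x_hat = x + 0.05 * torch.randn_like(x)            # stand-in reconstruction
print(rate_distortion_loss(x, x_hat, rate_bits=torch.tensor(4096.0)))
```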
In terms of network structure, RNNs and CNNs are the most widely used architectures. In the RNNs relevant category, a general framework was proposed for variable rate image compression using RNN. They use binary quantization to generate codes and do not consider rate during training. The framework indeed provides a scalable coding functionality, where RNN with convolutional and deconvolution layers is reported to perform decently. Then an improved version was proposed by upgrading the encoder with a neural network similar to PixelRNN to compress the binary codes. The performance is reportedly better than JPEG on Kodak image dataset using MS-SSIM evaluation metric. The RNN-based solution was further improved by introducing hidden-state priming. In addition, an SSIM-weighted loss function is also designed, and spatially adaptive bitrates mechanism is enabled. They achieve better results than BPG on Kodak image dataset using MS-SSIM as evaluation metric.
A general framework was designed for rate-distortion optimized image compression. They use multiary quantization to generate integer codes and consider the rate during training, i.e., the loss is the joint rate-distortion cost, where the distortion can be MSE or others. They add random uniform noise to simulate the quantization during training and use the differential entropy of the noisy codes as a proxy for the rate. They use generalized divisive normalization (GDN) as the network structure, which consists of a linear mapping followed by a nonlinear parametric normalization. The effectiveness of GDN on image coding is verified. An improved version was proposed, where they use 3 convolutional layers each followed by a down-sampling layer and a GDN layer as the forward transform. Accordingly, they use 3 layers of inverse GDN each followed by an up-sampling layer and a convolution layer to implement the inverse transform. In addition, an arithmetic coding method is devised to compress the integer codes. The performance is reportedly better than JPEG and JPEG 2000 on the Kodak dataset in terms of MSE. Furthermore, it was further improved by devising a scale hyper-prior into the auto-encoder. They transform the latent representation $y$ with a subnet $h_a$ to $z = h_a(y)$ and $z$ will be quantized and transmitted as side information. Accordingly, the inverse transform is implemented with a subnet $h_s$ attempting to decode from the quantized side information $\hat{z}$ to the standard deviation of the quantized $\hat{y}$, which will be further used during the arithmetic coding of $\hat{y}$. On the Kodak image set, their method is slightly worse than BPG in terms of PSNR. The structures were further explored in the residue space by introducing an autoregressive model to estimate both the standard deviation and the mean. In the latest work, a Gaussian mixture model was used to further remove redundancy in the residue. The reported performance is on par with VVC on the Kodak image set using PSNR as the evaluation metric.
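The additive-noise trick mentioned above is commonly implemented as follows; this is a generic sketch of that training-time approximation, not code from the disclosure.

```python
import torch

def quantize(y, training):
    """During training, rounding is approximated by additive uniform noise in
    [-0.5, 0.5) so that gradients can propagate; at inference time the latent
    is actually rounded before entropy coding."""
    if training:
        return y + torch.empty_like(y).uniform_(-0.5, 0.5)
    return torch.round(y)

y = 3.0 * torch.randn(1, 192, 16, 16)
y_noisy = quantize(y, training=True)     # differentiable proxy used for rate estimation
y_hat = quantize(y, training=False)      # integer-valued latent that is entropy coded
```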
2.3.3 Hyper Prior Model
In the transform coding approach to image compression, the encoder subnetwork (section 2.3.2) transforms the image vector $x$ using a parametric analysis transform $g_a$ into a latent representation $y$, which is then quantized to form $\hat{y}$. Because $\hat{y}$ is discrete-valued, it can be losslessly compressed using entropy coding techniques such as arithmetic coding and transmitted as a sequence of bits.
As evident from the middle left and middle right image of Fig. 3, there are significant spatial dependencies among the elements of $\hat{y}$. Notably, their scales (middle right image) appear to be coupled spatially. An additional set of random variables $\hat{z}$ are introduced to capture the spatial dependencies and to further reduce the redundancies. In this case the image compression network is depicted in Fig. 4.
In Fig. 4, the left-hand side of the model is the encoder ga and decoder gs (explained in section 2.3.2). The right-hand side is the additional hyper encoder ha and hyper decoder hs networks that are used to obtain ẑ. In this architecture the encoder subjects the input image x to ga, yielding the responses y with spatially varying standard deviations. The responses y are fed into ha, summarizing the distribution of standard deviations in z. z is then quantized, compressed, and transmitted as side information. The encoder then uses the quantized vector ẑ to estimate σ, the spatial distribution of standard deviations, and uses it to compress and transmit the quantized image representation ŷ. The decoder first recovers ẑ from the compressed signal. It then uses hs to obtain σ, which provides it with the correct probability estimates to successfully recover ŷ as well. It then feeds ŷ into gs to obtain the reconstructed image.
When the hyper encoder and hyper decoder are added to the image compression network, the spatial redundancies of the quantized latent ŷ are reduced. The rightmost image in Fig. 3 corresponds to the quantized latent when the hyper encoder/decoder are used. Compared to the middle right image, the spatial redundancies are significantly reduced, as the samples of the quantized latent are less correlated.
Fig. 3 illustrates an image from the Kodak dataset and different representations of the image. The leftmost image in Fig. 3 shows an image from the Kodak dataset. The middle left image in Fig. 3 shows visualization of a latent representation y of that image. The middle right image in Fig. 3 shows standard deviations σ of the latent. The rightmost image in Fig. 3 shows latents y after the hyper prior (hyper encoder and decoder) network is introduced.
Fig. 4 illustrates a network architecture of an autoencoder implementing the hyperprior model. The left side shows an image autoencoder network, and the right side corresponds to the hyperprior subnetwork. The analysis and synthesis transforms are denoted as ga and gs, respectively. Q represents quantization, and AE, AD represent the arithmetic encoder and arithmetic decoder, respectively. The hyperprior model consists of two subnetworks, the hyper encoder (denoted with ha) and the hyper decoder (denoted with hs). The hyperprior model generates a quantized hyper latent ẑ, which comprises information about the probability distribution of the samples of the quantized latent ŷ. ẑ is included in the bitstream and transmitted to the receiver (decoder) along with ŷ.
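For illustration only, the data flow of Fig. 4 can be sketched in PyTorch as follows; the layer counts, kernel sizes, channel widths and the use of torch.round in place of the learned quantization and arithmetic coding are assumptions and do not reproduce the cited architecture.

```python
import torch
import torch.nn as nn

class HyperpriorCodec(nn.Module):
    """Sketch of the g_a / g_s / h_a / h_s data flow of the hyperprior model."""

    def __init__(self, C=128):
        super().__init__()
        # Analysis/synthesis transforms (g_a, g_s): placeholder conv stacks.
        self.g_a = nn.Sequential(
            nn.Conv2d(3, C, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(C, C, 5, stride=2, padding=2))
        self.g_s = nn.Sequential(
            nn.ConvTranspose2d(C, C, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(C, 3, 5, stride=2, padding=2, output_padding=1))
        # Hyper encoder/decoder (h_a, h_s): summarize and recover the scales sigma.
        self.h_a = nn.Sequential(
            nn.Conv2d(C, C, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(C, C, 3, stride=2, padding=1))
        self.h_s = nn.Sequential(
            nn.ConvTranspose2d(C, C, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(C, C, 3, stride=2, padding=1, output_padding=1))

    def forward(self, x):
        y = self.g_a(x)                      # latent representation
        z = self.h_a(y)                      # hyper latent (side information)
        z_hat = torch.round(z)               # quantization (AE/AD omitted: lossless)
        sigma = torch.exp(self.h_s(z_hat))   # scales used to entropy-code y_hat
        y_hat = torch.round(y)               # quantized latent
        x_hat = self.g_s(y_hat)              # reconstructed image
        return x_hat, y_hat, z_hat, sigma
```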
2.3.4 Context Model
Although the hyperprior model improves the modelling of the probability distribution of the quantized latent ŷ, additional improvement can be obtained by utilizing an autoregressive model that predicts quantized latents from their causal context (Context Model).
The term auto-regressive means that the output of a process is later used as input to it. For example, the context model subnetwork generates one sample of a latent, which is later used as input to obtain the next sample.
A joint architecture was used where both a hyperprior model subnetwork (hyper encoder and hyper decoder) and a context model subnetwork are utilized. The hyperprior and the context model are combined to learn a probabilistic model over the quantized latents ŷ, which is then used for entropy coding. As depicted in Fig. 5, the outputs of the context subnetwork and the hyper decoder subnetwork are combined by the subnetwork called Entropy Parameters, which generates the mean μ and scale (or variance) σ parameters for a Gaussian probability model. The Gaussian probability model is then used to encode the samples of the quantized latents into the bitstream with the help of the arithmetic encoder (AE) module. In the decoder the Gaussian probability model is utilized to obtain the quantized latents ŷ from the bitstream by the arithmetic decoder (AD) module.
Fig. 5 illustrates a combined model that jointly optimizes an autoregressive component estimating the probability distributions of latents from their causal context (Context Model) along with a hyperprior and the underlying autoencoder. Real-valued latent representations are quantized (Q) to create quantized latents ŷ and quantized hyper-latents ẑ, which are compressed into a bitstream using an arithmetic encoder (AE) and decompressed by an arithmetic decoder (AD). The highlighted region corresponds to the components that are executed by the receiver (i.e. a decoder) to recover an image from a compressed bitstream.
Typically, the latent samples are modeled with a Gaussian distribution or Gaussian mixture models (but not limited to these). According to Fig. 5, the context model and hyper prior are jointly used to estimate the probability distribution of the latent samples. Since a Gaussian distribution can be defined by a mean and a variance (also referred to as sigma or scale), the joint model is used to estimate the mean and variance (denoted as μ and σ).
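A sketch of such an Entropy Parameters subnetwork is given below; the 1×1 kernels and channel widths are illustrative assumptions, and the split of the output into μ and σ follows the description above.

```python
import torch
import torch.nn as nn

class EntropyParameters(nn.Module):
    """Maps concatenated hyper-decoder and context features to (mu, sigma)."""

    def __init__(self, C=128):
        super().__init__()
        # Assumes both input feature maps carry 2*C channels each (illustrative).
        self.net = nn.Sequential(
            nn.Conv2d(4 * C, 2 * C, kernel_size=1), nn.LeakyReLU(),
            nn.Conv2d(2 * C, 2 * C, kernel_size=1),
        )

    def forward(self, hyper_feat, ctx_feat):
        params = self.net(torch.cat([hyper_feat, ctx_feat], dim=1))
        mu, log_sigma = params.chunk(2, dim=1)   # C channels each, matching y
        return mu, torch.exp(log_sigma)          # keep the scale positive
```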
2.3.5 The encoding process using joint auto-regressive hyper prior model
Fig. 5 corresponds to the state-of-the-art compression method. In this section and the next, the encoding and decoding processes will be described separately.
Fig. 6 depicts the encoding process. The input image is first processed with an encoder subnetwork. The encoder transforms the input image into a transformed representation called the latent, denoted by y. y is then input to a quantizer block, denoted by Q, to obtain the quantized latent ŷ. ŷ is then converted to a bitstream (bits1) using an arithmetic encoding module (denoted AE). The arithmetic encoding block converts each sample of ŷ into the bitstream (bits1) one by one, in a sequential order.
The hyper encoder, context, hyper decoder, and entropy parameters subnetworks are used to estimate the probability distributions of the samples of the quantized latent ŷ. The latent y is input to the hyper encoder, which outputs the hyper latent (denoted by z). The hyper latent is then quantized to ẑ, and a second bitstream (bits2) is generated using the arithmetic encoding (AE) module. The factorized entropy module generates the probability distribution that is used to encode the quantized hyper latent into the bitstream. The quantized hyper latent includes information about the probability distribution of the quantized latent ŷ.
The Entropy Parameters subnetwork generates the probability distribution estimations that are used to encode the quantized latent ŷ. The information generated by the Entropy Parameters typically includes a mean μ and a scale (or variance) σ parameter, which are together used to obtain a Gaussian probability distribution. A Gaussian distribution of a random variable x is defined as $f(x) = \frac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$, wherein the parameter μ is the mean or expectation of the distribution (and also its median and mode), while the parameter σ is its standard deviation (or scale; σ² is the variance). In order to define a Gaussian distribution, the mean and the variance need to be determined. The entropy parameters module is used to estimate the mean and the variance values.
The hyper decoder subnetwork generates part of the information that is used by the entropy parameters subnetwork; the other part of the information is generated by the autoregressive module called the context module. The context module generates information about the probability distribution of a sample of the quantized latent, using the samples that are already encoded by the arithmetic encoding (AE) module. The quantized latent ŷ is typically a matrix composed of many samples. The samples can be indicated using indices, such as ŷ[i, j] or ŷ[i, j, k], depending on the dimensions of the matrix ŷ. The samples ŷ[i, j] are encoded by the AE one by one, typically using a raster scan order. In a raster scan order the rows of a matrix are processed from top to bottom, wherein the samples in a row are processed from left to right. In such a scenario (wherein the raster scan order is used by the AE to encode the samples into the bitstream), the context module generates the information pertaining to a sample ŷ[i, j] using the samples encoded before it, in raster scan order. The information generated by the context module and the hyper decoder are combined by the entropy parameters module to generate the probability distributions that are used to encode the quantized latent ŷ into the bitstream (bits1).
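The causal context described above can be realized, for example, with a masked convolution, as in the following sketch; the kernel size and channel counts are illustrative, and the actual context module may differ.

```python
import torch
import torch.nn as nn

class MaskedConv2d(nn.Conv2d):
    """Type-A masked convolution: each output position only sees samples that
    precede it in raster-scan order (the causal context of the context module)."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        _, _, kh, kw = self.weight.shape
        mask = torch.ones_like(self.weight)
        mask[:, :, kh // 2, kw // 2:] = 0      # current position and to its right
        mask[:, :, kh // 2 + 1:, :] = 0        # all rows below
        self.register_buffer("mask", mask)

    def forward(self, x):
        self.weight.data *= self.mask          # zero out non-causal taps
        return super().forward(x)

# Example (assumed sizes): context features for y_hat, computed in one pass at
# the encoder, where all samples are already available:
# ctx = MaskedConv2d(128, 256, kernel_size=5, padding=2)(y_hat)
```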
Finally, the first and the second bitstream are transmitted to the decoder as result of the encoding process.
It is noted that other names can be used for the modules described above.
In the above description, all of the elements in Fig. 6 are collectively called encoder. The analysis transform that converts the input image into latent representation is also called an encoder (or auto-encoder) .
2.3.6 The decoding process using joint auto-regressive hyper prior model
Fig. 7 depicts the state-of-the-art decoding process. In the decoding process, the decoder first receives the first bitstream (bits1) and the second bitstream (bits2) that are generated by a corresponding encoder. The bits2 is first decoded by the arithmetic decoding (AD) module by utilizing the probability distributions generated by the factorized entropy subnetwork. The factorized entropy module typically generates the probability distributions using a predetermined template, for example using predetermined mean and variance values in the case of a Gaussian distribution. The output of the arithmetic decoding process of the bits2 is ẑ, which is the quantized hyper latent. The AD process reverses the AE process that was applied in the encoder. The processes of AE and AD are lossless, meaning that the quantized hyper latent ẑ that was generated by the encoder can be reconstructed at the decoder without any change.
After ẑ is obtained, it is processed by the hyper decoder, whose output is fed to the entropy parameters module. The three subnetworks, context, hyper decoder and entropy parameters, that are employed in the decoder are identical to the ones in the encoder. Therefore, the exact same probability distributions can be obtained in the decoder (as in the encoder), which is essential for reconstructing the quantized latent ŷ without any loss. As a result, the identical version of the quantized latent ŷ that was obtained in the encoder can be obtained in the decoder.
After the probability distributions (e.g. the mean and variance parameters) are obtained by the entropy parameters subnetwork, the arithmetic decoding module decodes the samples of the quantized latent one by one from the bitstream bits1. From a practical standpoint, the autoregressive model (the context model) is inherently serial, and therefore cannot be sped up using techniques such as parallelization.
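The serial nature of this loop can be seen in the following schematic; the arithmetic-decoder call ad_decode is a hypothetical placeholder, and a real decoder interleaves this loop with bitstream parsing.

```python
import torch

def decode_latent(bits1, hyper_feat, context_model, entropy_parameters, shape, ad_decode):
    """Schematic sample-by-sample decoding of the quantized latent y_hat."""
    C, H, W = shape
    y_hat = torch.zeros(1, C, H, W)
    for i in range(H):                         # raster-scan order: rows top to bottom
        for j in range(W):                     # samples in a row, left to right
            ctx = context_model(y_hat)         # uses only already-decoded samples
            mu, sigma = entropy_parameters(hyper_feat, ctx)
            # Decode the samples at position (i, j) from bits1 with the Gaussian
            # model N(mu, sigma); each step depends on the previous ones, so the
            # loop is inherently sequential and cannot be parallelized.
            y_hat[:, :, i, j] = ad_decode(bits1, mu[:, :, i, j], sigma[:, :, i, j])
    return y_hat
```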
Finally, the fully reconstructed quantized latent ŷ is input to the synthesis transform module (denoted as decoder in Fig. 7) to obtain the reconstructed image.
In the above description, all of the elements in Fig. 7 are collectively called decoder. The synthesis transform that converts the quantized latent into reconstructed image is also called a decoder (or auto-decoder) .
2.4. Neural networks for video compression
Similar to conventional video coding technologies, neural image compression serves as the foundation of intra compression in neural network-based video compression. Thus, the development of neural network-based video compression technology started later than that of neural network-based image compression, but it requires far more effort to solve the challenges due to its complexity. Starting from 2017, a few researchers have been working on neural network-based video compression schemes. Compared with image compression, video compression needs efficient methods to remove inter-picture redundancy. Inter-picture prediction is therefore a crucial step in these works. Motion estimation and compensation are widely adopted but were not implemented by trained neural networks until recently.
Studies on neural network-based video compression can be divided into two categories according to the targeted scenarios: random access and low latency. The random access case requires that decoding can be started from any point of the sequence; it typically divides the entire sequence into multiple individual segments, and each segment can be decoded independently. The low-latency case aims at reducing decoding time, so usually only temporally previous frames can be used as reference frames to decode subsequent frames.
2.4.1 Low-latency
The early work first splits the video sequence frames into blocks, and each block chooses one of two available modes, either intra coding or inter coding. If intra coding is selected, there is an associated auto-encoder to compress the block. If inter coding is selected, motion estimation and compensation are performed with traditional methods, and a trained neural network is used for residue compression. The outputs of the auto-encoders are directly quantized and coded by the Huffman method.
Another neural network-based video coding scheme with PixelMotionCNN was proposed. The frames are compressed in temporal order, and each frame is split into blocks which are compressed in raster scan order. Each frame is first extrapolated from the preceding two reconstructed frames. When a block is to be compressed, the extrapolated frame along with the context of the current block is fed into the PixelMotionCNN to derive a latent representation. Then the residues are compressed by the variable rate image scheme. This scheme performs on par with H.264.
Another end-to-end neural network-based video compression framework was then proposed, in which all the modules are implemented with neural networks. The scheme accepts the current frame and the prior reconstructed frame as inputs, and the optical flow is derived with a pre-trained neural network as the motion information. The reference frame is warped with the motion information, followed by a neural network generating the motion-compensated frame. The residues and the motion information are compressed with two separate neural auto-encoders. The whole framework is trained with a single rate-distortion loss function. It achieves better performance than H.264.
An advanced neural network-based video compression scheme was proposed. It inherits and extends traditional video coding schemes with neural networks, with the following major features: 1) only one auto-encoder is used to compress motion information and residues; 2) motion compensation is performed with multiple frames and multiple optical flows; 3) an on-line state is learned and propagated through the following frames over time. This scheme achieves better performance in MS-SSIM than the HEVC reference software.
An extended end-to-end neural network-based video compression framework was proposed afterwards. In this solution, multiple frames are used as references. It is thereby able to provide more accurate prediction of the current frame by using multiple reference frames and associated motion information. In addition, motion field prediction is deployed to remove motion redundancy along the temporal channel. Post-processing networks are also introduced in this work to remove reconstruction artifacts from previous processes. The performance is better than H.265 by a noticeable margin in terms of both PSNR and MS-SSIM.
The scale-space flow was then proposed to replace the commonly used optical flow by adding a scale parameter. It reportedly achieves better performance than H.264.
A multi-resolution representation for optical flows was proposed. Concretely, the motion estimation network produces multiple optical flows with different resolutions and lets the network learn which one to choose under the loss function. The performance is better than H.265.
2.4.2 Random access
A frame interpolation based method was initially designed. The key frames are first compressed with a neural image compressor, and the remaining frames are compressed in a hierarchical order. Motion compensation is performed in the perceptual domain, i.e. the feature maps are derived at multiple spatial scales of the original frame and motion is used to warp the feature maps, which are then used by the image compressor. The method is reportedly on par with H.264.
Another interpolation-based video compression scheme was then proposed, wherein the interpolation model combines motion information compression and image synthesis, and the same auto-encoder is used for the image and the residual.
Afterwards, a neural network-based video compression method based on variational auto-encoders with a deterministic encoder was proposed. Concretely, the model consists of an auto-encoder and an auto-regressive prior. Different from previous methods, this method accepts a group of pictures (GOP) as input and incorporates a 3D autoregressive prior by taking into account the temporal correlation while coding the latent representations. It provides performance comparable to H.265.
2.5. Preliminaries
Almost all natural images/videos are in digital format. A grayscale digital image can be represented by $x \in \mathbb{D}^{m \times n}$, where $\mathbb{D}$ is the set of values of a pixel, m is the image height and n is the image width. For example, $\mathbb{D} = \{0, 1, \ldots, 255\}$ is a common setting, and in this case $|\mathbb{D}| = 256 = 2^8$, thus a pixel can be represented by an 8-bit integer. An uncompressed grayscale digital image has 8 bits-per-pixel (bpp), while compressed bits are definitely fewer.
A color image is typically represented in multiple channels to record the color information. For example, in the RGB color space an image can be denoted by $x \in \mathbb{D}^{m \times n \times 3}$, with three separate channels storing Red, Green and Blue information. Similar to the 8-bit grayscale image, an uncompressed 8-bit RGB image has 24 bpp. Digital images/videos can be represented in different color spaces. The neural network-based video compression schemes are mostly developed in the RGB color space, while the traditional codecs typically use the YUV color space to represent the video sequences. In the YUV color space, an image is decomposed into three channels, namely Y, Cb and Cr, where Y is the luminance component and Cb/Cr are the chroma components. The benefit comes from the fact that Cb and Cr are typically down-sampled to achieve pre-compression, since the human visual system is less sensitive to the chroma components.
A color video sequence is composed of multiple color images, called frames, to record scenes at different timestamps. For example, in the RGB color space, a color video can be denoted by X = {x0, x1, …, xt, …, xT-1}, where T is the number of frames in this video sequence and $x_t \in \mathbb{D}^{m \times n \times 3}$. If m=1080, n=1920, $|\mathbb{D}| = 2^8$, and the video has 50 frames-per-second (fps), then the data rate of this uncompressed video is 1920×1080×8×3×50 = 2,488,320,000 bits-per-second (bps), about 2.32 Gbps, which requires a lot of storage and therefore definitely needs to be compressed before transmission over the internet.
Usually the lossless methods can achieve a compression ratio of about 1.5 to 3 for natural images, which is clearly below the requirement. Therefore, lossy compression is developed to achieve a higher compression ratio, but at the cost of incurred distortion. The distortion can be measured by calculating the average squared difference between the original image and the reconstructed image, i.e., the mean-squared-error (MSE). For a grayscale image, MSE can be calculated with the following equation:
$$\text{MSE} = \frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}\left(x_{i,j} - \hat{x}_{i,j}\right)^2.$$
Accordingly, the quality of the reconstructed image compared with the original image can be measured by the peak signal-to-noise ratio (PSNR):
$$\text{PSNR} = 10 \cdot \log_{10}\frac{\left(\max(\mathbb{D})\right)^2}{\text{MSE}},$$
where $\max(\mathbb{D})$ is the maximal value in $\mathbb{D}$, e.g., 255 for 8-bit grayscale images. There are other quality evaluation metrics such as structural similarity (SSIM) and multi-scale SSIM (MS-SSIM).
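For reference, a minimal sketch computing the two metrics above for 8-bit images:

```python
import numpy as np

def mse(x, x_hat):
    """Mean squared error between the original and reconstructed images."""
    x = x.astype(np.float64)
    x_hat = x_hat.astype(np.float64)
    return np.mean((x - x_hat) ** 2)

def psnr(x, x_hat, max_val=255.0):
    """Peak signal-to-noise ratio in dB; max_val is max(D), e.g. 255 for 8 bits."""
    err = mse(x, x_hat)
    return float("inf") if err == 0 else 10.0 * np.log10(max_val ** 2 / err)
```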
To compare different lossless compression schemes, it is sufficient to compare the compression ratio or, equivalently, the resulting rate. However, to compare different lossy compression methods, both the rate and the reconstructed quality have to be taken into account. For example, a commonly adopted method is to calculate the relative rates at several different quality levels and then to average the rates; the average relative rate is known as the Bjontegaard delta-rate (BD-rate). There are other important aspects to evaluate image/video coding schemes, including encoding/decoding complexity, scalability, robustness, and so on.
3. Problems
3.1. The core problem
Fig. 8 illustrates a decoder architecture with decoupled processing. As illustrated in Fig. 8, the decoder with decoupled processing decouples the arithmetic decoding process (the process of receiving bits and generating ŷ) from the autoregressive loop (the combined module comprising the hyper decoder, Mask Conv and Prediction Fusion network). In this architecture, the autoregressive loop consumes a significant amount of time since it repeats the process until all the elements are decoded. To reduce the decoding time complexity, simplifying this part is of significant importance. Based on our time profiling results, the Prediction Fusion network consumes most of the time in the autoregressive process. For example, when decoding the luma component, the Prediction Fusion network consumes 72.58% of the time of the whole autoregressive process. Existing methods have not tried to simplify the decoupled module (including the hyper decoder, the Mask Conv network and the Prediction Fusion network) to reduce the decoder complexity.
3.2. Background and details of the problem
Fig. 9 illustrates examples of a simplified decoupled architecture in accordance with embodiments of the present disclosure. As illustrated in Fig. 9, the original design of the autoregressive loop involves a Hyper Decoder Net, a Context Model Net and a Prediction Net. Since the Prediction Fusion Net consumes the most decoding time, it should be simplified. In addition, in case the performance drops, modifications to the Hyper Decoder Net can be applied to compensate for the performance drop. Since the Prediction Fusion Net is used repetitively in the autoregressive loop while the Hyper Decoder Net is only used once, simplifying the Prediction Fusion Net while enlarging the Hyper Decoder Net would still lead to decreased decoding time.
4. Detailed solutions
The detailed solutions below should be considered as examples to explain general concepts. These solutions should not be interpreted in a narrow way. Furthermore, these solutions can be combined in any manner.
4.1. Core of the solution
The target of the solution is to design a simplified decoupled processing module (including the Prediction Fusion Net, the Hyper Decoder Net and the Context Model Net) with reduced decoding complexity while maintaining or improving the coding efficiency. Specifically, the solution modifies the Prediction Fusion Net architecture by reducing the number of convolutional layers, adjusting the number of channels, or any combination of these means. Furthermore, the Hyper Decoder Net could be modified accordingly to compensate for the coding efficiency drop from simplifying the Prediction Fusion Net. The following examples, as shown in Fig. 9, illustrate possible ways to simplify the decoupled architecture.
Example 1: remove one convolutional layer from the Prediction Fusion Net. The last convolutional layer is removed as illustrated, followed by changing the number of output channels of the (new) last convolutional layer to C.
Example 2: remove multiple convolutional layers from the Prediction Fusion Net.
Example 3: remove multiple convolutional layers from the Prediction Fusion Net and change the number of channels. It is possible to increase the number of channels of each convolutional layer without impacting the decoding time. In this example, adjusting the number of channels includes both increasing and decreasing the number of channels.
Example 4: modifications can be applied to the Hyper Decoder Net together with any one of Examples 1-3 or any combination of them. Modifying the Hyper Decoder Net could be adding one or more convolutional layers, adjusting the number of channels, modifying the activation layers, or replacing one or more convolutional layer(s) with one or more new convolutional layer(s). The objective is to compensate for the possible coding efficiency drop resulting from simplifying the Prediction Fusion Net.
The abovementioned examples can be combined in any manner. Other modifications can also be made, such as changing the Context Model Net, as long as the decoding complexity can be reduced without impacting the coding efficiency.
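To make the structure of Examples 1-3 concrete, the following sketch builds a Prediction Fusion Net with a configurable number of convolutional layers; the 1×1 kernels, the 4C-wide input (the concatenated hyper-decoder and context features) and the channel widths are assumptions for illustration, since the exact configuration of Table 1 is not reproduced here.

```python
import torch.nn as nn

def make_prediction_fusion_net(out_channels=(512, 256), C=128, k=1):
    """Prediction Fusion Net sketch with a configurable number of conv layers.

    Every layer except the last is followed by a LeakyReLU; the last layer
    always outputs C channels, so shortening `out_channels` removes layers
    (Examples 1 and 2) and editing its entries adjusts channels (Example 3).
    """
    layers, in_ch = [], 4 * C
    for ch in out_channels:
        layers += [nn.Conv2d(in_ch, ch, k, padding=k // 2), nn.LeakyReLU()]
        in_ch = ch
    layers.append(nn.Conv2d(in_ch, C, k, padding=k // 2))   # final layer: C channels
    return nn.Sequential(*layers)

# Original-style depth (six conv layers) vs. a simplified three-layer variant:
original = make_prediction_fusion_net(out_channels=(512, 512, 256, 256, 128))
simplified = make_prediction_fusion_net(out_channels=(512, 256))
```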
4.2. Details of the solutions
1. The Prediction Fusion Net can be modified to simplify the decoupled architecture.
a) In one example, one or more layers could be removed from the Prediction Fusion Net.
b) In one example, after removing some layers, the number of channels can be adjusted accordingly.
c) In one example, one or more of the layers can be replaced with one or more new layers.
d) In one example, multiple Prediction Fusion Net architectures could exist in the decoder and a syntax element (SE) such as a flag may be signaled in the bitstreams to indicate which one is used.
e) In one example, an SE is used to indicate how many Prediction Fusion Net architectures are involved in the decoder.
f) In one example, one flag may be used to indicate whether the simplified Prediction Fusion Net or the original design is used.
g) In one example, the modified Prediction Fusion Net can be applied to either luma or chroma or both.
2. The Hyper Decoder Net can be modified.
a) In one example, one or more convolutional layers can be added to the Hyper Decoder Net to increase its decoding capability.
b) In one example, the number of channels can be adjusted accordingly.
i. The number of channels of all or some of the convolutional layers can be adjusted to be a multiple of M, e.g. M=32.
ii. The number of channels of all or some of the convolutional layers can be adjusted to be a power of 2, i.e., 2^n where n=1, 2, 3, ….
c) In one example, one convolutional layer can be replaced by two or more convolutional layers.
i. For example, replace the 4th convolutional layer, which both upsamples the spatial size and increases the number of channels, with two convolutional layers, wherein each layer performs only one task.
d) In one example, a flag may be signaled in the bitstreams to indicate whether the modified Hyper Decoder Net or the original Hyper Decoder Net is used.
e) In one example, multiple modified Hyper Decoder Nets can be included in the decoder and a syntax may be used to indicate which one is used.
f) In one example, the modified Hyper Decoder Net can be applied to either luma or chroma or both.
General aspects
1. Whether to and/or how to apply the disclosed methods above may be signalled at block level/sequence level/group of pictures level/picture level/slice level/tile group level, such as in coding structures of CTU/CU/TU/PU/CTB/CB/TB/PB, or sequence header/picture header/SPS/VPS/DPS/DCI/PPS/APS/slice header/tile group header.
2. Whether to and/or how to apply the disclosed methods above may be dependent on coded information, such as block size, colour format, single/dual tree partitioning, colour component, slice/picture type.
3. The proposed methods disclosed in this document may be used in other coding tools which require chroma fusion.
4. A syntax element disclosed above may be binarized as a flag, a fixed length code, an EG (x) code, a unary code, a truncated unary code, a truncated binary code, etc. It can be signed or unsigned.
5. A syntax element disclosed above may be coded with at least one context model. Or it may be bypass coded.
6. A syntax element disclosed above may be signaled in a conditional way.
a. The SE is signaled only if the corresponding function is applicable.
b. The SE is signaled only if the dimensions (width and/or height) of the block satisfy a condition.
7. A syntax element disclosed above may be signaled at block level/sequence level/group of pictures level/picture level/slice level/tile group level, such as in coding structures of CTU/CU/TU/PU/CTB/CB/TB/PB, or sequence header/picture header/SPS/VPS/DPS/DCI/PPS/APS/slice header/tile group header.
4.3. Benefit of the solutions
According to the solutions, the simplified Prediction Fusion Net consumes less time, and its contribution to time reduction is nontrivial since it is used repeatedly in the autoregressive loop. In addition, the modified Hyper Decoder Net can compensate for the possible coding efficiency drop from the simplified Prediction Fusion Net. The joint efforts provide a decoupled decoder with reduced decoding complexity while maintaining or outperforming the original design in terms of coding efficiency.
5. Embodiments
The simplified decoupled architecture can be approached from two aspects: 1) simplifying the Prediction Fusion Net; 2) enhancing the Hyper Decoder Net.
The Prediction Fusion Net is involved in the autoregressive loop and executed in a repetitive manner, and thus incurs the most decoding time. Therefore, the disclosure proposes to simplify the Prediction Fusion Net first. However, the coding efficiency could possibly be reduced due to the simplified Prediction Fusion Net. If that happens, the Hyper Decoder Net can be enhanced accordingly to compensate for the loss. Alternatively, if there is no loss in coding efficiency after simplifying the Prediction Fusion Net, the Hyper Decoder Net can optionally be enhanced to improve the coding efficiency.
5.1. Simplified Prediction Fusion Net
5.1.1 Variation 1
Fig. 10 illustrates the simplified decoupled architecture obtained by removing one  convolutional layer from the Prediction Fusion Net. The Hyper Decoder Net is the original design.
In Fig. 10, the simplified decoupled architecture is implemented by removing one convolutional layer from the Prediction Fusion Net. The original design of the Prediction Fusion Net has the configurations shown in Table 1, where // is the floor division operator. Each convolutional layer is followed by a Leaky ReLu activation layer except for the final convolutional layer.
Table 1. The original design of Prediction Fusion Net.
The simplified Prediction Fusion Net is obtained by removing the Conv #6 layer, followed by changing the number of output channels of Conv #5 to C. The configurations of the simplified Prediction Fusion Net are tabulated in Table 2.
Table 2. The simplified Prediction Fusion Net (Variation 1) .

In Variation 1, the following alternatives can also be implemented.
a) Instead of removing Conv #6, any other single convolutional layer may be removed, such as Conv #5, Conv #4, Conv #3, Conv #2 or Conv #1, with corresponding channel number adjustments of the preceding or following convolutional layer.
5.1.2 Variation 2
Alternatively, multiple convolutional layers can be removed from the Prediction Fusion Net. Fig. 11 illustrates the simplified decoupled architecture obtained by removing multiple convolutional layers from the Prediction Fusion Net. The Hyper Decoder is the original design. In Fig. 11, an implementation is illustrated after Conv #2, #3 and #5 are removed from the Prediction Fusion Net, followed by corresponding channel number adjustments of the preceding or following convolutional layers.
The configurations of Variation 2 of the Prediction Fusion Net are tabulated in Table 3.
Table 3. The simplified Prediction Fusion Net (Variation 2) .
In Variation 2, the following alternatives can also be implemented.
a) The number of convolutional layers removed can be 2, 3, 4 or 5, i.e., at least one convolutional layer should remain, and preferably 2 or more.
b) The number of channels can be adjusted to a multiple of M, where M is an integer, for instance, M=32.
c) The number of channels can be adjusted to a power of 2, i.e., 2^N (N=1, 2, 3, …).
5.1.3 Variation 3
In addition to Variation 1 and Variation 2, the following alternatives can be implemented as simplified Prediction Fusion Net (as shown in Fig. 12, which illustrates exemplary variants of the simplified prediction fusion net) , including:
● Four convolutional layers are included, each followed by a LeakyRelu activation layer except for the last convolutional layer, with the number of channels of 4C-2C-2C-C, 4C-4C-2C-C, 4C-4C-4C-C or 4C-C-C-C.
● Three convolutional layers are included, each followed by a LeakyRelu activation layer except for the last convolutional layer, with the number of channels of 4C-4C-C, 4C-2C-C or 2C-2C-C.
● Two convolutional layers are included, each followed by a LeakyRelu activation layer except for the last convolutional layer, with the number of channels of 4C-C or 2C-C.
● Only one convolutional layer is included with the number of channels C. An example after integration with the Hyper Decoder Net is illustrated in Fig. 12. The LeakyRelu layer is right after the concatenation operation and before the convolutional layer.
● In the one-convolutional layer scheme, the LeakyRelu layer can also be removed, i.e., removing the LeakyRelu layer between the concatenation operation and the single convolutional layer in Fig. 12.
Fig. 13 illustrates an integration of the 1-convolutional layer Prediction Fusion Net (as shown in Fig. 12) with Hyper Decoder Net. As shown in Fig. 13, the simplified Prediction Fusion Net has only one convolutional layer. A LeakyRelu layer is optionally inserted between this convolutional layer and the concatenation operation.
In addition to Fig. 13, where the LeakyRelu is added after the concatenation point for the one-convolutional-layer case, a LeakyRelu can also be added to all other implementations, including those of Fig. 10, Fig. 11, and Fig. 12.
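A sketch of the one-convolutional-layer variant with the optional LeakyRelu after the concatenation point is given below; the channel widths and the 1×1 kernel are assumptions.

```python
import torch
import torch.nn as nn

class OneLayerPredictionFusion(nn.Module):
    """Single-convolution Prediction Fusion Net sketch.

    The LeakyReLU between the concatenation and the convolution is optional,
    as described above; channel counts and the 1x1 kernel are assumptions.
    """

    def __init__(self, C=128, use_leaky_relu=True):
        super().__init__()
        self.act = nn.LeakyReLU() if use_leaky_relu else nn.Identity()
        self.conv = nn.Conv2d(4 * C, C, kernel_size=1)

    def forward(self, hyper_feat, ctx_feat):
        fused = torch.cat([hyper_feat, ctx_feat], dim=1)   # concatenation point
        return self.conv(self.act(fused))                  # optional activation, one conv
```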
5.2. Enhanced Hyper Decoder Net
Fig. 14 illustrates the simplified decoupled architecture with the enhanced Hyper Decoder Net. The simplified Prediction Fusion Net is obtained by removing multiple convolutional layers and changing the number of channels. The Hyper Decoder Net is enhanced by replacing the 4th convolutional layer with two convolutional layers.
Although simplifying the Prediction Fusion Net can bring decoding time savings, it could also possibly lead to a coding efficiency drop. If that happens, enhancing the Hyper Decoder Net is an option to compensate for the incurred loss. Fig. 14 shows Variation 3 of the simplified decoupled architecture. It includes a simplified Prediction Fusion Net after removing 3 convolutional layers and an enhanced Hyper Decoder Net. The original Conv #4 is replaced by two new convolutional layers. In the original design, Conv #4 is responsible for upscaling the spatial size and increasing the number of channels. In the new design, the first layer only increases the number of channels and the second layer only upscales the spatial size.
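The replacement of Conv #4 can be sketched as follows; the kernel sizes and channel counts are assumptions, and only the separation of the two tasks is the point.

```python
import torch.nn as nn

# Original-style layer: one transposed convolution that both doubles the spatial
# size and increases the channel count (kernel size and widths are assumptions).
conv4_original = nn.ConvTranspose2d(128, 256, kernel_size=5, stride=2,
                                    padding=2, output_padding=1)

# Enhanced design (Variation 3): split the two tasks across two layers, where
# the first only increases the number of channels and the second only upsamples.
conv4_1 = nn.Conv2d(128, 256, kernel_size=3, padding=1)               # channels only
conv4_2 = nn.ConvTranspose2d(256, 256, kernel_size=5, stride=2,
                             padding=2, output_padding=1)             # spatial size only
```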
The original design of the Hyper Decoder Net is tabulated in Table 4, where C is the number of input channels and // is the floor division operator.
Table 4. The original design of Hyper Decoder Net.

The Hyper Decoder Net exemplified in Variation 3 is shown in Table 5. The original Conv #4 is replaced with two new convolutional layers, Conv #4-1 and Conv #4-2.
Table 5. The enhanced Hyper Decoder Net (Variation 3) .
Alternatively, Variation 3 can also be implemented in the following manner.
a) Add one or more convolutional layer(s).
b) Adjusting the number of channels to be a multiple of M, for instance, M=32.
c) Adjusting the number of channels to be a power of 2, i.e., 2^N (N=1, 2, 3, 4, …).
d) Adjusting the number of channels to any other number (s) .
More Variants of Hyper Decoder Net
In addition, more variants of the Hyper Decoder Net can be implemented as shown in Fig. 15, including:
● Four convolutional layers with number of channels Cx5x5up2-Cx5x5up2-2Cx3x3-2Cx3x3 or Cx5x5up2-2Cx5x5up2-2Cx3x3-2Cx3x3. The first and second convolutional layers are followed by a cropping and LeakyRelu layer, the third convolutional layer is followed by a LeakyRelu layer, wherein C is a predetermined number, such as 64, 128 or the like.
● Four convolutional layers with number of channels Cx4x4up2-Cx4x4up2-2Cx3x3-2Cx3x3 or Cx4x4up2-2Cx4x4up2-2Cx3x3-2Cx3x3. The first and second convolutional layers are followed by a cropping and LeakyRelu layer, the third convolutional layer is followed by a LeakyRelu layer.
More details of the embodiments of the present disclosure will be described below which are related to neural network-based visual data coding. As used herein, the term “visual data” may refer to a video, an image, a picture in a video, or any other visual data suitable to be coded.
As discussed above, in the existing design, an autoregressive loop in a neural network (NN) -based model comprises a context model net, a prediction fusion net, and a hyper decoder net. The prediction fusion net may consume a large amount of time during the autoregressive process. This results in an increase of the time needed for the whole coding process, and thus the coding efficiency deteriorates.
To solve the above problems and some other problems not mentioned, visual data processing solutions as described below are disclosed. The embodiments of the present disclosure should be considered as examples to explain the general concepts and should not be interpreted in a narrow way. Furthermore, these embodiments can be applied individually or combined in any manner.
Fig. 16 illustrates a flowchart of a method 1600 for visual data processing in accordance with some embodiments of the present disclosure. As shown in Fig. 16, at  1602, a conversion between visual data and a bitstream of the visual data is performed with a neural network (NN) -based model. In some embodiments, the conversion may include encoding the visual data into the bitstream. Additionally or alternatively, the conversion may include decoding the visual data from the bitstream. For example, the decoding model shown in Fig. 8 may be employed for decoding the visual data from the bitstream.
Moreover, the NN-based model comprises a prediction fusion module comprising at least one convolutional layer. The number of the at least one convolutional layer is smaller than 6. In some embodiments, an NN-based model may be a model based on neural network technologies. For example, an NN-based model may specify a sequence of neural network modules (also called an architecture) and model parameters. A neural network module may comprise a set of neural network layers. Each neural network layer specifies a tensor operation which receives and outputs tensors, and each layer has trainable parameters. As used herein, a convolutional layer may also be referred to as a convolution layer. A prediction fusion module may also be referred to as a prediction fusion network or prediction fusion net, or prediction fusion for short. A hyper decoder module may also be referred to as a hyper decoder network or hyper decoder net, or hyper decoder for short. It should be understood that the possible implementations of the NN-based model described here are merely illustrative and therefore should not be construed as limiting the present disclosure in any way.
In some embodiments, the number of the at least one convolutional layer may be 5. In other words, the prediction fusion module may comprise 5 convolutional layers. An example for this case is shown in Fig. 10. Alternatively, the number of the at least one convolutional layer may be 3. In other words, the prediction fusion module may comprise 3 convolutional layers. An example for this case is shown in Fig. 11. In some embodiments, the number of the at least one convolutional layer may be 1. In other words, the prediction fusion module may comprise only one convolutional layer. An example for this case is shown in Fig. 13. It should be understood that the number of the at least one convolutional layer may be any other suitable number that is smaller than 6, such as 2 or 4. The scope of the present disclosure is not limited in this respect.
In view of the above, the prediction fusion module in the NN-based model comprises less than 6 convolutional layers. Compared with the conventional solution where the prediction fusion module comprises at least 6 convolutional layers, the proposed method can advantageously simplify the prediction fusion module and thus reduce the time consumed at this stage. Thereby, the coding efficiency can be improved.
In some embodiments, the at least one convolutional layer may share the same kernel size. By way of example rather than limitation, in a case where the prediction fusion module comprises 3 convolutional layers, all 3 convolutional layers may share the same kernel size, such as 1×1, 3×3, or the like. In some alternative embodiments, the at least one convolutional layer may have different kernel sizes. By way of example rather than limitation, in a case where the prediction fusion module comprises 3 convolutional layers, the first 2 convolutional layers may share the same kernel size (such as 1×1 or the like) , while the last convolutional layer may have a different kernel size (such as 3×3 or the like) .
In some embodiments, the prediction fusion module may further comprise at least one non-linear activation unit. In one example, a non-linear activation unit may be a rectified linear unit. The rectified linear unit may be denoted as ReLU(), and its element-wise function may be as follows: $\mathrm{ReLU}(x) = \max(0, x)$.
In another example, a non-linear activation unit may be a leaky rectified linear unit. The leaky rectified linear unit may be denoted as LeakyReLU(), and its element-wise function may be as follows: $\mathrm{LeakyReLU}(x) = \max(0, x) + \mathrm{negative\_slope} \times \min(0, x)$,
where the variable negative_slope may be equal to 0.01. It should be understood that the possible implementations of the non-linear activation unit described here are merely illustrative and therefore should not be construed as limiting the present disclosure in any way.
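As a quick check of the element-wise definition above (PyTorch's leaky_relu takes the same negative_slope parameter):

```python
import torch
import torch.nn.functional as F

x = torch.tensor([-2.0, -0.5, 0.0, 1.5])
manual = torch.where(x >= 0, x, 0.01 * x)        # element-wise definition above
assert torch.allclose(F.leaky_relu(x, negative_slope=0.01), manual)
```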
In some embodiments, each of the at least one convolutional layer, except for the last convolutional layer in the at least one convolutional layer, may be followed by a non-linear activation unit. By way of example, in a case where the prediction fusion module comprises 5 convolutional layers, the prediction fusion module may further comprise 4 non-linear activation units, as shown in Fig. 10. In a case where the prediction fusion module comprises 3 convolutional layers, the prediction fusion module may further comprise 2 non-linear activation units, as shown in Fig. 11. With the aid of the non-linear activation unit, non-linearity is introduced, and the coding quality can be improved.
In some embodiments, the at least one convolutional layer may comprise a first convolutional layer and a second convolutional layer following the first convolutional layer. The number of input channels of the second convolutional layer may be the same as the number of output channels of the first convolutional layer. That is, the number of input channels of a convolutional layer is equal to the number of output channels of a further convolutional layer immediately preceding the convolutional layer.
In some embodiments, the number of input channels of each of the at least one convolutional layer may be a multiple of a first predetermined number, and the number of output channels of each of the at least one convolutional layer may be a multiple of the first predetermined number. The first predetermined number may be an integer, such as 32 or the like. Thereby, the coding efficiency may be further improved.
In some alternative embodiments, the number of input channels of each of the at least one convolutional layer may be a power of 2, i.e., 2^n where n=1, 2, 3, …. Moreover, the number of output channels of each of the at least one convolutional layer may be a power of 2.
In some embodiments, the prediction fusion module may be applied to a luma component of the visual data and/or a chroma component of the visual data.
In some embodiments, the NN-based model may comprise a plurality of prediction fusion modules comprising the above-described simplified prediction fusion module. In addition, the bitstream may comprise a first syntax element indicating one of the plurality of prediction fusion modules that is used for the conversion. By way of example rather than limitation, the first syntax element may comprise a flag. In some additional embodiments, the bitstream may further comprise a second syntax element indicating the number of the plurality of prediction fusion modules.
In some embodiments, the NN-based model may further comprise a hyper decoder module comprising a plurality of convolutional layers. The number of the plurality of convolutional layers may be larger than 5. Compared with the conventional solution where the hyper decoder module comprises 5 convolutional layers, the proposed method can advantageously enhance the coding efficiency.
In some embodiments, the plurality of convolutional layers comprise a third convolutional layer and a fourth convolutional layer following the third convolutional layer. The number of input channels of the fourth convolutional layer may be the same as the number of output channels of the third convolutional layer. That is, the number of input channels of a convolutional layer is equal to the number of output channels of a further convolutional layer immediately preceding the convolutional layer.
In some embodiments, the number of input channels of at least one of the plurality of convolutional layers may be a multiple of a second predetermined number, and the number of output channels of at least one of the plurality of convolutional layers may be a multiple of the second predetermined number. The second predetermined number may be an integer, such as 32 or the like. Alternatively, the number of input channels of at least one of the plurality of convolutional layers may be a power of 2, and the number of output channels of at least one of the plurality of convolutional layers may be a power of  2.
In some embodiments, the plurality of convolutional layers may comprise a single convolutional layer for upsampling a spatial size of an input tensor of the single convolutional layer, and a further single convolutional layer for increasing the number of channels of an input tensor of the further single convolutional layer. In contrast, in the conventional solution, a single convolutional layer is used to implement these two functions, i.e., spatial size upsampling and channel number increasing. By using two convolutional layers to implement these two functions separately, the proposed method can advantageously improve coding efficiency.
In some embodiments, the hyper decoder module may be applied to a luma component of the visual data and/or a chroma component of the visual data.
In some embodiments, the NN-based model may comprise a plurality of hyper decoder modules comprising the above-described enhanced hyper decoder module. In addition, the bitstream may comprise a third syntax element indicating one of the plurality of hyper decoder modules that is used for the conversion. By way of example rather than limitation, the third syntax element may comprise a flag.
In some embodiments, whether to and/or how to apply the method may be indicated at a block level, a sequence level, a group of pictures level, a picture level, a slice level, or a tile group level. Additionally or alternatively, whether to and/or how to apply the method may be indicated in one of the following: a coding structure of a coding tree unit (CTU) , a coding structure of a coding unit (CU) , a coding structure of a transform unit (TU) , a coding structure of a prediction unit (PU) , a coding structure of a coding tree block (CTB) , a coding structure of a coding block (CB) , a coding structure of a transform block (TB) , a coding structure of a prediction block (PB) , a sequence header, a picture header, a sequence parameter set (SPS) , a video parameter set (VPS) , a dependency parameter set (DPS) , a decoding capability information (DCI) , a picture parameter set (PPS) , an adaptation parameter sets (APS) , a slice header, or a tile group header.
In some embodiments, whether to and/or how to apply the method may be dependent on the coded information. By way of example rather than limitation, the coded information may comprise a block size, a color format, a single tree partitioning, a dual tree partitioning, a color component, a slice type, a picture type, and/or the like. In some embodiments, the method may also be applicable to a coding tool requiring chroma fusion.
In some embodiments, a syntax element may be binarized as a flag, a fixed length code, an exponential Golomb (EG) code, a unary code, a truncated unary code, a truncated binary code, or the like. Additionally or alternatively, the syntax element may be signed or unsigned. In some embodiments, a syntax element may be coded with at least one context model. Alternatively, the syntax element may be bypass coded.
In some embodiments, whether a syntax element is indicated in the bitstream may be dependent on a condition. By way of example, if a function corresponding to the syntax element is applicable, the syntax element may be indicated in the bitstream. In another example, if a dimension of a block satisfies a condition (such as a width or a height is larger than a threshold) , the syntax element may be indicated in the bitstream.
In some embodiments, a syntax element may be indicated at a block level, a sequence level, a group of pictures level, a picture level, a slice level, or a tile group level. Additionally or alternatively, the syntax element may be indicated in one of the following: a coding structure of CTU, a coding structure of CU, a coding structure of TU, a coding structure of PU, a coding structure of CTB, a coding structure of CB, a coding structure of TB, a coding structure of PB, a sequence header, a picture header, an SPS, a VPS, a DPS, a DCI, a PPS, an APS, a slice header, or a tile group header.
In view of the above, the solutions in accordance with some embodiments of the present disclosure can advantageously improve coding efficiency and coding quality.
According to further embodiments of the present disclosure, a non-transitory computer-readable recording medium is provided. The non-transitory computer-readable  recording medium stores a bitstream of visual data which is generated by a method performed by an apparatus for visual data processing. In the method, a conversion between the visual data and the bitstream with a neural network (NN) -based model is performed. The NN-based model comprises a prediction fusion module comprising at least one convolutional layer, and the number of the at least one convolutional layer is smaller than 6.
According to still further embodiments of the present disclosure, a method for storing bitstream of a video is provided. In the method, a conversion between the visual data and the bitstream with a neural network (NN) -based model is performed. The NN-based model comprises a prediction fusion module comprising at least one convolutional layer, and the number of the at least one convolutional layer is smaller than 6. Moreover, the bitstream is stored in a non-transitory computer-readable recording medium.
Implementations of the present disclosure can be described in view of the following clauses, the features of which can be combined in any reasonable manner.
Clause 1. A method for visual data processing, comprising: performing a conversion between visual data and a bitstream of the visual data with a neural network (NN) -based model, wherein the NN-based model comprises a prediction fusion module comprising at least one convolutional layer, and the number of the at least one convolutional layer is smaller than 6.
Clause 2. The method of clause 1, wherein the number of the at least one convolutional layer is one of 1, 2, 3, 4, or 5.
Clause 3. The method of any of clauses 1-2, wherein the prediction fusion module further comprises at least one non-linear activation unit.
Clause 4. The method of clause 3, wherein the number of the at least one non-linear activation unit is 2.
Clause 5. The method of any of clauses 1-4, wherein each of the at least one  convolutional layer, except for the last convolutional layer in the at least one convolutional layer, is followed by a non-linear activation unit.
Clause 6. The method of any of clauses 3-5, wherein a non-linear activation unit comprises a rectified linear unit or a leaky rectified linear unit.
Clause 7. The method of any of clauses 1-6, wherein the at least one convolutional layer comprises a first convolutional layer and a second convolutional layer following the first convolutional layer, the number of input channels of the second convolutional layer is the same as the number of output channels of the first convolutional layer.
Clause 8. The method of any of clauses 1-7, wherein the number of input channels of each of the at least one convolutional layer is a multiple of a first predetermined number, the number of output channels of each of the at least one convolutional layer is a multiple of the first predetermined number, and the first predetermined number is an integer.
Clause 9. The method of clause 8, wherein the first predetermined number is 32.
Clause 10. The method of any of clauses 1-7, wherein the number of input channels of each of the at least one convolutional layer is a power of 2, and the number of output channels of each of the at least one convolutional layer is a power of 2.
Clause 11. The method of any of clauses 1-10, wherein the prediction fusion module is applied to at least one of the following: a luma component of the visual data, or a chroma component of the visual data.
Clause 12. The method of any of clauses 1-11, wherein the NN-based model comprises a plurality of prediction fusion modules comprising the prediction fusion module, and the bitstream comprises a first syntax element indicating one of the plurality of prediction fusion modules that is used for the conversion.
Clause 13. The method of clause 12, wherein the bitstream further comprises a second syntax element indicating the number of the plurality of prediction fusion modules.
Clause 14. The method of any of clauses 12-13, wherein the first syntax element comprises a flag.
Clause 15. The method of any of clauses 1-14, wherein the NN-based model further comprises a hyper decoder module comprising a plurality of convolutional layers, and the number of the plurality of convolutional layers is larger than 5.
Clause 16. The method of clause 15, wherein the plurality of convolutional layers comprise a third convolutional layer and a fourth convolutional layer following the third convolutional layer, the number of input channels of the fourth convolutional layer is the same as the number of output channels of the third convolutional layer.
Clause 17. The method of any of clauses 15-16, wherein the number of input channels of at least one of the plurality of convolutional layers is a multiple of a second predetermined number, the number of output channels of at least one of the plurality of convolutional layers is a multiple of the second predetermined number, and the second predetermined number is an integer.
Clause 18. The method of clause 17, wherein the second predetermined number is 32.
Clause 19. The method of any of clauses 15-16, wherein the number of input channels of at least one of the plurality of convolutional layers is a power of 2, and the number of output channels of at least one of the plurality of convolutional layers is a power of 2.
Clause 20. The method of any of clauses 15-19, wherein the plurality of convolutional layers comprises a single convolutional layer for upsampling a spatial size of an input tensor of the single convolutional layer, and a further single convolutional layer for increasing the number of channels of an input tensor of the further single convolutional layer.
Clause 21. The method of any of clauses 15-20, wherein the hyper decoder module is applied to at least one of the following: a luma component of the visual data, or a chroma component of the visual data.
Clause 22. The method of any of clauses 15-21, wherein the NN-based model comprises a plurality of hyper decoder modules comprising the hyper decoder module, and the bitstream comprises a third syntax element indicating one of the plurality of hyper decoder modules that is used for the conversion.
Clause 23. The method of clause 22, wherein the third syntax element comprises a flag.
Clause 24. The method of any of clauses 1-23, wherein whether to and/or how to apply the method is indicated at one of the following: a block level, a sequence level, a group of pictures level, a picture level, a slice level, or a tile group level.
Clause 25. The method of any of clauses 1-23, wherein whether to and/or how to apply the method is indicated in one of the following: a coding structure of a coding tree unit (CTU) , a coding structure of a coding unit (CU) , a coding structure of a transform unit (TU) , a coding structure of a prediction unit (PU) , a coding structure of a coding tree block (CTB) , a coding structure of a coding block (CB) , a coding structure of a transform block (TB) , a coding structure of a prediction block (PB) , a sequence header, a picture header, a sequence parameter set (SPS) , a video parameter set (VPS) , a dependency parameter set (DPS) , a decoding capability information (DCI) , a picture parameter set (PPS) , an adaptation parameter sets (APS) , a slice header, or a tile group header.
Clause 26. The method of any of clauses 1-25, wherein whether to and/or how to apply the method is dependent on the coded information.
Clause 27. The method of clause 26, wherein the coded information comprises at least one of the following: a block size, a color format, a single tree partitioning, a dual tree partitioning, a color component, a slice type, or a picture type.
Clause 28. The method of any of clauses 1-27, wherein the method is applicable to a coding tool requiring chroma fusion.
Clause 29. The method of any of clauses 12-14 and 22-23, wherein a syntax element is binarized as one of the following: a flag, a fixed length code, an exponential Golomb (EG) code, a unary code, a truncated unary code, or a truncated binary code.
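As a non-limiting illustration of clause 29, the helper functions below sketch two of the listed binarizations, a truncated unary code and a zeroth-order exponential Golomb (EG) code. The function names and interfaces are assumptions for illustration; no particular binarization is mandated.

```python
# Illustrative sketch of two binarizations listed in clause 29; the function
# names and interfaces are assumptions for illustration only.

def truncated_unary(value: int, max_value: int) -> str:
    """Truncated unary code: `value` ones followed by a terminating zero,
    which is omitted when value equals max_value."""
    bins = "1" * value
    if value < max_value:
        bins += "0"
    return bins

def exp_golomb_0(value: int) -> str:
    """Zeroth-order exponential Golomb (EG) code for an unsigned value."""
    info = bin(value + 1)[2:]          # binary of value + 1, without '0b'
    prefix = "0" * (len(info) - 1)     # leading zeros, one fewer than len(info)
    return prefix + info

# Example: a syntax element value of 2.
assert truncated_unary(2, max_value=3) == "110"
assert exp_golomb_0(2) == "011"
```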
Clause 30. The method of any of clauses 12-14 and 22-23, wherein a syntax element is signed or unsigned.
Clause 31. The method of any of clauses 12-14 and 22-23, wherein a syntax element is coded with at least one context model or bypass coded.
Clause 32. The method of any of clauses 12-14 and 22-23, wherein whether a syntax element is indicated in the bitstream is dependent on a condition.
Clause 33. The method of clause 32, wherein if a function corresponding to the syntax element is applicable, the syntax element is indicated in the bitstream.
Clause 34. The method of clause 32, wherein if a dimension of a block satisfies a condition, the syntax element is indicated in the bitstream.
Clause 35. The method of any of clauses 12-14 and 22-23, wherein a syntax element is indicated at one of the following: a block level, a sequence level, a group of pictures level, a picture level, a slice level, or a tile group level.
Clause 36. The method of any of clauses 12-14 and 22-23, wherein a syntax element is indicated in one of the following: a coding structure of CTU, a coding structure of CU, a coding structure of TU, a coding structure of PU, a coding structure of CTB, a coding structure of CB, a coding structure of TB, a coding structure of PB, a sequence header, a picture header, an SPS, a VPS, a DPS, a DCI, a PPS, an APS, a slice header, or a tile group header.
Clause 37. The method of any of clauses 1-36, wherein the visual data comprise a picture of a video or an image.
Clause 38. The method of any of clauses 1-37, wherein the conversion includes encoding the visual data into the bitstream.
Clause 39. The method of any of clauses 1-37, wherein the conversion includes decoding the visual data from the bitstream.
Clause 40. An apparatus for visual data processing comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to perform a method in accordance with any of clauses 1-39.
Clause 41. A non-transitory computer-readable storage medium storing instructions that cause a processor to perform a method in accordance with any of clauses 1-39.
Clause 42. A non-transitory computer-readable recording medium storing a bitstream of visual data which is generated by a method performed by an apparatus for visual data processing, wherein the method comprises: performing a conversion between the visual data and the bitstream with a neural network (NN) -based model, wherein the NN-based model comprises a prediction fusion module comprising at least one convolutional layer, and the number of the at least one convolutional layer is smaller than 6.
Clause 43. A method for storing a bitstream of visual data, comprising: performing a conversion between the visual data and the bitstream with a neural network (NN) -based model, wherein the NN-based model comprises a prediction fusion module comprising at least one convolutional layer, and the number of the at least one convolutional layer is smaller than 6; and storing the bitstream in a non-transitory computer-readable recording medium.
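As a non-limiting sketch of the bitstream-storing method of clauses 42-43, the snippet below strings together a hypothetical encoding call and a file-writing step. The nn_model object and its encode() method are placeholders rather than an API defined by this disclosure.

```python
# Hypothetical sketch of clause 43: perform the conversion between visual data
# and a bitstream with an NN-based model, then store the bitstream in a
# computer-readable recording medium. The model object and its encode() method
# are placeholders, not an API defined by the present disclosure.
from pathlib import Path

def store_bitstream(visual_data, nn_model, path: str) -> None:
    # The NN-based model (comprising, among others, a prediction fusion module
    # with fewer than six convolutional layers) converts the visual data into
    # a bitstream.
    bitstream: bytes = nn_model.encode(visual_data)    # placeholder call
    # Store the bitstream on the recording medium.
    Path(path).write_bytes(bitstream)
```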
Example Device
Fig. 17 illustrates a block diagram of a computing device 1700 in which various  embodiments of the present disclosure can be implemented. The computing device 1700 may be implemented as or included in the source device 110 (or the visual data encoder 114) or the destination device 120 (or the visual data decoder 124) .
It would be appreciated that the computing device 1700 shown in Fig. 17 is merely for purpose of illustration, without suggesting any limitation to the functions and scopes of the embodiments of the present disclosure in any manner.
As shown in Fig. 17, the computing device 1700 is in the form of a general-purpose computing device. The computing device 1700 may at least comprise one or more processors or processing units 1710, a memory 1720, a storage unit 1730, one or more communication units 1740, one or more input devices 1750, and one or more output devices 1760.
In some embodiments, the computing device 1700 may be implemented as any user terminal or server terminal having the computing capability. The server terminal may be a server, a large-scale computing device or the like that is provided by a service provider. The user terminal may for example be any type of mobile terminal, fixed terminal, or portable terminal, including a mobile phone, station, unit, device, multimedia computer, multimedia tablet, Internet node, communicator, desktop computer, laptop computer, notebook computer, netbook computer, tablet computer, personal communication system (PCS) device, personal navigation device, personal digital assistant (PDA) , audio/video player, digital camera/video camera, positioning device, television receiver, radio broadcast receiver, E-book device, gaming device, or any combination thereof, including the accessories and peripherals of these devices, or any combination thereof. It would be contemplated that the computing device 1700 can support any type of interface to a user (such as “wearable” circuitry and the like) .
The processing unit 1710 may be a physical or virtual processor and can implement various processes based on programs stored in the memory 1720. In a multi-processor system, multiple processing units execute computer executable instructions in  parallel so as to improve the parallel processing capability of the computing device 1700. The processing unit 1710 may also be referred to as a central processing unit (CPU) , a microprocessor, a controller or a microcontroller.
The computing device 1700 typically includes various computer storage media. Such media can be any media accessible by the computing device 1700, including, but not limited to, volatile and non-volatile media, or detachable and non-detachable media. The memory 1720 can be a volatile memory (for example, a register, cache, Random Access Memory (RAM) ) , a non-volatile memory (such as a Read-Only Memory (ROM) , Electrically Erasable Programmable Read-Only Memory (EEPROM) , or a flash memory) , or any combination thereof. The storage unit 1730 may be any detachable or non-detachable medium and may include a machine-readable medium such as a memory, flash memory drive, magnetic disk, or any other medium, which can be used for storing information and/or visual data and can be accessed by the computing device 1700.
The computing device 1700 may further include additional detachable/non-detachable, volatile/non-volatile memory media. Although not shown in Fig. 17, it is possible to provide a magnetic disk drive for reading from and/or writing into a detachable and non-volatile magnetic disk and an optical disk drive for reading from and/or writing into a detachable non-volatile optical disk. In such cases, each drive may be connected to a bus (not shown) via one or more visual data medium interfaces.
The communication unit 1740 communicates with a further computing device via the communication medium. In addition, the functions of the components in the computing device 1700 can be implemented by a single computing cluster or multiple computing machines that can communicate via communication connections. Therefore, the computing device 1700 can operate in a networked environment using a logical connection with one or more other servers, networked personal computers (PCs) or further general network nodes.
The input device 1750 may be one or more of a variety of input devices, such as  a mouse, keyboard, tracking ball, voice-input device, and the like. The output device 1760 may be one or more of a variety of output devices, such as a display, loudspeaker, printer, and the like. By means of the communication unit 1740, the computing device 1700 can further communicate with one or more external devices (not shown) such as the storage devices and display device, with one or more devices enabling the user to interact with the computing device 1700, or any devices (such as a network card, a modem and the like) enabling the computing device 1700 to communicate with one or more other computing devices, if required. Such communication can be performed via input/output (I/O) interfaces (not shown) .
In some embodiments, instead of being integrated in a single device, some or all components of the computing device 1700 may also be arranged in a cloud computing architecture. In the cloud computing architecture, the components may be provided remotely and work together to implement the functionalities described in the present disclosure. In some embodiments, cloud computing provides computing, software, visual data access, and storage services, which do not require end users to be aware of the physical locations or configurations of the systems or hardware providing these services. In various embodiments, the cloud computing provides the services via a wide area network (such as the Internet) using suitable protocols. For example, a cloud computing provider provides applications over the wide area network, which can be accessed through a web browser or any other computing components. The software or components of the cloud computing architecture and corresponding visual data may be stored on a server at a remote position. The computing resources in the cloud computing environment may be merged or distributed at locations in a remote visual data center. Cloud computing infrastructures may provide the services through a shared visual data center, though they behave as a single access point for the users. Therefore, the cloud computing architectures may be used to provide the components and functionalities described herein from a service provider at a remote location. Alternatively, they may be provided from a conventional server or installed directly or otherwise on a client device.
The computing device 1700 may be used to implement visual data encoding/decoding in embodiments of the present disclosure. The memory 1720 may include one or more visual data coding modules 1725 having one or more program instructions. These modules are accessible and executable by the processing unit 1710 to perform the functionalities of the various embodiments described herein.
In the example embodiments of performing visual data encoding, the input device 1750 may receive visual data as an input 1770 to be encoded. The visual data may be processed, for example, by the visual data coding module 1725, to generate an encoded bitstream. The encoded bitstream may be provided via the output device 1760 as an output 1780.
In the example embodiments of performing visual data decoding, the input device 1750 may receive an encoded bitstream as the input 1770. The encoded bitstream may be processed, for example, by the visual data coding module 1725, to generate decoded visual data. The decoded visual data may be provided via the output device 1760 as the output 1780.
While this disclosure has been particularly shown and described with reference to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present application as defined by the appended claims. Such variations are intended to be covered by the scope of this present application. As such, the foregoing description of embodiments of the present application is not intended to be limiting.

Claims (43)

  1. A method for visual data processing, comprising:
    performing a conversion between visual data and a bitstream of the visual data with a neural network (NN) -based model, wherein the NN-based model comprises a prediction fusion module comprising at least one convolutional layer, and the number of the at least one convolutional layer is smaller than 6.
  2. The method of claim 1, wherein the number of the at least one convolutional layer is one of 1, 2, 3, 4, or 5.
  3. The method of any of claims 1-2, wherein the prediction fusion module further comprises at least one non-linear activation unit.
  4. The method of claim 3, wherein the number of the at least one non-linear activation unit is 2.
  5. The method of any of claims 1-4, wherein each of the at least one convolutional layer, except for the last convolutional layer in the at least one convolutional layer, is followed by a non-linear activation unit.
  6. The method of any of claims 3-5, wherein a non-linear activation unit comprises a rectified linear unit or a leaky rectified linear unit.
  7. The method of any of claims 1-6, wherein the at least one convolutional layer comprises a first convolutional layer and a second convolutional layer following the first convolutional layer, and the number of input channels of the second convolutional layer is the same as the number of output channels of the first convolutional layer.
  8. The method of any of claims 1-7, wherein the number of input channels of each of the at least one convolutional layer is a multiple of a first predetermined number, the number  of output channels of each of the at least one convolutional layer is a multiple of the first predetermined number, and the first predetermined number is an integer.
  9. The method of claim 8, wherein the first predetermined number is 32.
  10. The method of any of claims 1-7, wherein the number of input channels of each of the at least one convolutional layer is a power of 2, and the number of output channels of each of the at least one convolutional layer is a power of 2.
  11. The method of any of claims 1-10, wherein the prediction fusion module is applied to at least one of the following:
    a luma component of the visual data, or
    a chroma component of the visual data.
  12. The method of any of claims 1-11, wherein the NN-based model comprises a plurality of prediction fusion modules comprising the prediction fusion module, and the bitstream comprises a first syntax element indicating one of the plurality of prediction fusion modules that is used for the conversion.
  13. The method of claim 12, wherein the bitstream further comprises a second syntax element indicating the number of the plurality of prediction fusion modules.
  14. The method of any of claims 12-13, wherein the first syntax element comprises a flag.
  15. The method of any of claims 1-14, wherein the NN-based model further comprises a hyper decoder module comprising a plurality of convolutional layers, and the number of the plurality of convolutional layers is larger than 5.
  16. The method of claim 15, wherein the plurality of convolutional layers comprise a third convolutional layer and a fourth convolutional layer following the third convolutional layer, and the number of input channels of the fourth convolutional layer is the same as the number of output channels of the third convolutional layer.
  17. The method of any of claims 15-16, wherein the number of input channels of at least one of the plurality of convolutional layers is a multiple of a second predetermined number, the number of output channels of at least one of the plurality of convolutional layers is a multiple of the second predetermined number, and the second predetermined number is an integer.
  18. The method of claim 17, wherein the second predetermined number is 32.
  19. The method of any of claims 15-16, wherein the number of input channels of at least one of the plurality of convolutional layers is a power of 2, and the number of output channels of at least one of the plurality of convolutional layers is a power of 2.
  20. The method of any of claims 15-19, wherein the plurality of convolutional layers comprises a single convolutional layer for upsampling a spatial size of an input tensor of the single convolutional layer, and a further single convolutional layer for increasing the number of channels of an input tensor of the further single convolutional layer.
  21. The method of any of claims 15-20, wherein the hyper decoder module is applied to at least one of the following:
    a luma component of the visual data, or
    a chroma component of the visual data.
  22. The method of any of claims 15-21, wherein the NN-based model comprises a plurality of hyper decoder modules comprising the hyper decoder module, and the bitstream comprises a third syntax element indicating one of the plurality of hyper decoder modules that is used for the conversion.
  23. The method of claim 22, wherein the third syntax element comprises a flag.
  24. The method of any of claims 1-23, wherein whether to and/or how to apply the method is indicated at one of the following:
    a block level,
    a sequence level,
    a group of pictures level,
    a picture level,
    a slice level, or
    a tile group level.
  25. The method of any of claims 1-23, wherein whether to and/or how to apply the method is indicated in one of the following:
    a coding structure of a coding tree unit (CTU) ,
    a coding structure of a coding unit (CU) ,
    a coding structure of a transform unit (TU) ,
    a coding structure of a prediction unit (PU) ,
    a coding structure of a coding tree block (CTB) ,
    a coding structure of a coding block (CB) ,
    a coding structure of a transform block (TB) ,
    a coding structure of a prediction block (PB) ,
    a sequence header,
    a picture header,
    a sequence parameter set (SPS) ,
    a video parameter set (VPS) ,
    a dependency parameter set (DPS) ,
    a decoding capability information (DCI) ,
    a picture parameter set (PPS) ,
    an adaptation parameter set (APS) ,
    a slice header, or
    a tile group header.
  26. The method of any of claims 1-25, wherein whether to and/or how to apply the method is dependent on the coded information.
  27. The method of claim 26, wherein the coded information comprises at least one of the following:
    a block size,
    a color format,
    a single tree partitioning,
    a dual tree partitioning,
    a color component,
    a slice type, or
    a picture type.
  28. The method of any of claims 1-27, wherein the method is applicable to a coding tool requiring chroma fusion.
  29. The method of any of claims 12-14 and 22-23, wherein a syntax element is binarized as one of the following:
    a flag,
    a fixed length code,
    an exponential Golomb (EG) code,
    a unary code,
    a truncated unary code, or
    a truncated binary code.
  30. The method of any of claims 12-14 and 22-23, wherein a syntax element is signed or unsigned.
  31. The method of any of claims 12-14 and 22-23, wherein a syntax element is coded with at least one context model or bypass coded.
  32. The method of any of claims 12-14 and 22-23, wherein whether a syntax element is indicated in the bitstream is dependent on a condition.
  33. The method of claim 32, wherein if a function corresponding to the syntax element is applicable, the syntax element is indicated in the bitstream.
  34. The method of claim 32, wherein if a dimension of a block satisfies a condition, the syntax element is indicated in the bitstream.
  35. The method of any of claims 12-14 and 22-23, wherein a syntax element is indicated at one of the following:
    a block level,
    a sequence level,
    a group of pictures level,
    a picture level,
    a slice level, or
    a tile group level.
  36. The method of any of claims 12-14 and 22-23, wherein a syntax element is indicated in one of the following:
    a coding structure of CTU,
    a coding structure of CU,
    a coding structure of TU,
    a coding structure of PU,
    a coding structure of CTB,
    a coding structure of CB,
    a coding structure of TB,
    a coding structure of PB,
    a sequence header,
    a picture header,
    an SPS,
    a VPS,
    a DPS,
    a DCI,
    a PPS,
    an APS,
    a slice header, or
    a tile group header.
  37. The method of any of claims 1-36, wherein the visual data comprise a picture of a video or an image.
  38. The method of any of claims 1-37, wherein the conversion includes encoding the visual data into the bitstream.
  39. The method of any of claims 1-37, wherein the conversion includes decoding the visual data from the bitstream.
  40. An apparatus for visual data processing comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to perform a method in accordance with any of claims 1-39.
  41. A non-transitory computer-readable storage medium storing instructions that cause a processor to perform a method in accordance with any of claims 1-39.
  42. A non-transitory computer-readable recording medium storing a bitstream of visual data which is generated by a method performed by an apparatus for visual data processing, wherein the method comprises:
    performing a conversion between the visual data and the bitstream with a neural network (NN) -based model, wherein the NN-based model comprises a prediction fusion module comprising at least one convolutional layer, and the number of the at least one convolutional layer is smaller than 6.
  43. A method for storing a bitstream of visual data, comprising:
    performing a conversion between the visual data and the bitstream with a neural network (NN) -based model, wherein the NN-based model comprises a prediction fusion module comprising at least one convolutional layer, and the number of the at least one convolutional layer is smaller than 6; and
    storing the bitstream in a non-transitory computer-readable recording medium.
PCT/CN2024/072165 2023-01-13 2024-01-12 Method, apparatus, and medium for visual data processing WO2024149395A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN2023072104 2023-01-13
CNPCT/CN2023/072104 2023-01-13
US202363480444P 2023-01-18 2023-01-18
US63/480,444 2023-01-18

Publications (1)

Publication Number Publication Date
WO2024149395A1

Family

ID=91897888

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2024/072165 WO2024149395A1 (en) 2023-01-13 2024-01-12 Method, apparatus, and medium for visual data processing

Country Status (1)

Country Link
WO (1) WO2024149395A1 (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107925762A (en) * 2015-09-03 2018-04-17 联发科技股份有限公司 Video coding-decoding processing method and device based on neutral net
EP3451293A1 (en) * 2017-08-28 2019-03-06 Thomson Licensing Method and apparatus for filtering with multi-branch deep learning
CN108184129A (en) * 2017-12-11 2018-06-19 北京大学 A kind of video coding-decoding method, device and the neural network for image filtering
WO2020062074A1 (en) * 2018-09-28 2020-04-02 Hangzhou Hikvision Digital Technology Co., Ltd. Reconstructing distorted images using convolutional neural network
US20190273948A1 (en) * 2019-01-08 2019-09-05 Intel Corporation Method and system of neural network loop filtering for video coding
CN114339221A (en) * 2020-09-30 2022-04-12 脸萌有限公司 Convolutional neural network based filter for video coding and decoding
CN115036011A (en) * 2022-08-10 2022-09-09 梅傲科技(广州)有限公司 System for solid tumor prognosis evaluation based on digital pathological image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
M. ABDOLI (ATEME), E. MORA (ATEME), T. GUIONNET (ATEME), M. RAULET (ATEME): "Non-CE3: Decoder-side Intra Mode Derivation with Prediction Fusion", 14. JVET MEETING; 20190319 - 20190327; GENEVA; (THE JOINT VIDEO EXPLORATION TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ), 18 March 2019 (2019-03-18), XP030203490 *

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24741389

Country of ref document: EP

Kind code of ref document: A1