WO2023031503A1 - A method, an apparatus and a computer program product for video encoding and video decoding - Google Patents

A method, an apparatus and a computer program product for video encoding and video decoding Download PDF

Info

Publication number
WO2023031503A1
Authority
WO
WIPO (PCT)
Prior art keywords
neural networks
task
output
data
video
Prior art date
Application number
PCT/FI2022/050444
Other languages
French (fr)
Inventor
Nam Hai LE
Ramin GHAZNAVI YOUVALARI
Honglei Zhang
Francesco Cricrì
Hamed REZAZADEGAN TAVAKOLI
Miska Matias Hannuksela
Original Assignee
Nokia Technologies Oy
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nokia Technologies Oy filed Critical Nokia Technologies Oy
Publication of WO2023031503A1 publication Critical patent/WO2023031503A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • G06N3/0455Auto-encoder networks; Encoder-decoder networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items

Definitions

  • the project leading to this application has received funding from the ECSEL Joint Undertaking (JU) under grant agreement No 876019.
  • JU Joint Undertaking
  • the JU receives support from the European Union’s Horizon 2020 research and innovation programme and Germany, Netherlands, Austria, Romania, France, Sweden, Cyprus, Greece, Lithuania, Portugal, Italy, Finland, Turkey.
  • the present solution generally relates to video coding, and in particular to video coding for machines.
  • Video Coding for Machines VCM
  • an apparatus comprising means for receiving decoded data from a decoder; means for performing an intermediate analysis of the decoded data at one or more intermediate task neural networks; means for providing the output from said one or more intermediate task neural networks as a first input to one or more intermediate result processing neural networks and means for providing the decoded data as a second input to said one or more intermediate result processing neural networks; means for providing an output from said one or more intermediate result processing neural networks to one or more task neural networks, the output representing combined features of the decoded data and the output of said one or more intermediate task neural networks; and means for analysing the data at said one or more task neural networks.
  • a method comprising receiving decoded data from a decoder; performing an intermediate analysis of the decoded data at one or more intermediate task neural networks; providing an output from said one or more intermediate task neural networks as a first input to one or more intermediate result processing neural networks and providing the decoded data as a second input to said one or more intermediate result processing neural networks; providing an output from said one or more intermediate result processing neural networks to one or more task neural networks, the output representing combined features of the decoded data and the output of said one or more intermediate task neural networks; and analysing the received data at said one or more task neural networks.
  • an apparatus comprising at least one processor, memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: receive decoded data from a decoder; perform an intermediate analysis of the decoded data at one or more intermediate task neural networks; provide the output from said one or more intermediate task neural networks as a first input to one or more intermediate result processing neural networks and provide the decoded data as a second input to said one or more intermediate result processing neural networks; provide an output from said one or more intermediate result processing neural networks to one or more task neural networks, the output representing combined features of the decoded data and the output of said one or more intermediate task neural networks; and analyse the data at said one or more task neural networks.
  • the intermediate result processing neural network comprises means for mapping the first and the second inputs to two sets of feature maps; means for combining the two sets of feature maps; means for mapping the combined sets of feature maps to one or more output tensors thus generating the output of the intermediate result processing neural networks.
  • the outputs from said one or more intermediate task neural networks are mapped to a common representation at an intermediate task results mapping neural network.
  • decoded data is a video frame
  • the intermediate output data is one of the following: a predicted frame, a frame obtained by adding a prediction error to a predicted frame; a frame obtained by adding a decompressed prediction error to a predicted frame; or a frame which is output by an in-loop filter.
  • the decoded data is audio data or a text data.
  • the task neural network performs one or more of the following: image classification; video classification; image segmentation; video segmentation; object tracking; anomaly detection; action detection; action classification; event detection; filtering; captioning; or visual question answering.
  • At least the intermediate result processing neural network is trained with data comprising a dataset of intermediate decoded videos and ground-truth data for the intermediate decoded videos and for one or more task neural networks.
  • the encoder and decoder neural networks are trained jointly with the intermediate result processing neural network.
  • Fig. 1 shows an example of a codec with neural network (NN) components
  • Fig. 2 shows another example of a video coding system with neural network components
  • Fig. 3 shows an example of a neural auto-encoder architecture
  • Fig. 4 shows an example of a neural network-based end-to-end learned video coding system
  • Fig. 5 shows an example of a video coding for machines
  • Fig. 6 shows an example of a pipeline for end-to-end learned approach to video coding for machines
  • Fig. 7 shows an example of training an end-to-end learned system for video coding for machines
  • Fig. 8 shows an example of a baseline system comprising an encoder, a decoder and at least one task-NN;
  • Fig. 9 shows an example of a baseline system comprising an encoder, a decoder, a post-processing neural network, a task-NN;
  • Fig. 10 shows an example of a system according to some embodiments comprising an Intermediate Result Processing neural network (IRP-NN);
  • IRP- NN Intermediate Result Processing neural network
  • Fig. 11 shows an example of internal components of IRP-NN
  • Fig. 12 shows an example of Intermediate Task Results Mapping neural network (ITRM-NN);
  • Fig. 13 shows an example of training only IRP-NN
  • Fig. 14 shows an example of training an IRP-NN and one or more neural networks of the encoder and of the decoder
  • Fig. 15 shows an example of training one or more neural networks based on at least a task loss and a rate loss
  • Fig. 16 shows an example of training encoder NN and/or decoder NN jointly with IRP-NN, based on at least a task loss and an intermediate task loss;
  • Fig. 17 is a flowchart illustrating a method according to an embodiment
  • Fig. 18 is a flowchart illustrating a method for training according to an embodiment.
  • Fig. 19 shows an apparatus according to an embodiment.
  • the present embodiments are targeted at using intermediate machine vision tasks at the decoder side in video coding for machines.
  • a neural network is a computation graph consisting of several layers of computation. Each layer consists of one or more units, where each unit performs an elementary computation. A unit is connected to one or more other units, and the connection may have an associated weight. The weight may be used for scaling the signal passing through the associated connection. Weights are learnable parameters, i.e., values which can be learned from training data. There may be other learnable parameters, such as those of batch-normalization layers.
  • Feed-forward neural networks are such that there is no feedback loop: each layer takes input from one or more of the layers before and provides its output as the input for one or more of the subsequent layers. Also, units inside a certain layer take input from units in one or more of preceding layers and provide output to one or more of following layers.
  • Initial layers extract semantically low-level features such as edges and textures in images, and intermediate and final layers extract more high-level features.
  • After the feature extraction layers there may be one or more layers performing a certain task, such as classification, semantic segmentation, object detection, denoising, style transfer, super-resolution, etc.
  • In recurrent neural nets there is a feedback loop, so that the network becomes stateful, i.e., it is able to memorize information or a state.
  • Neural networks are being utilized in an ever-increasing number of applications for many different types of devices, such as mobile phones. Examples include image and video analysis and processing, social media data analysis, device usage data analysis, etc.
  • Neural nets and other machine learning tools learn properties from input data, either in a supervised way or in an unsupervised way.
  • Such learning is a result of a training algorithm, or of a metalevel neural network providing the training signal.
  • the training algorithm consists of changing some properties of the neural network so that its output is as close as possible to a desired output.
  • the output of the neural network can be used to derive a class or category index which indicates the class or category that the object in the input image belongs to.
  • Training usually happens by minimizing or decreasing the output’s error, also referred to as the loss. Examples of losses are mean squared error, cross-entropy, etc.
  • training is an iterative process, where at each iteration the algorithm modifies the weights of the neural net to make a gradual improvement of the network’s output, i.e., to gradually decrease the loss.
  • The terms "neural network" and "model" are used interchangeably, and the weights of neural networks are sometimes referred to as learnable parameters or simply as parameters.
  • Training a neural network is an optimization process, but the final goal is different from the typical goal of optimization.
  • In typical optimization, the only goal is to minimize a function.
  • the goal of the optimization or training process is to make the model learn the properties of the data distribution from a limited training dataset.
  • the goal is to learn to use a limited training dataset in order to learn to generalize to previously unseen data, i.e., data which was not used for training the model. This is usually referred to as generalization.
  • data may be split into at least two sets, the training set and the validation set.
  • the training set is used for training the network, i.e., to modify its learnable parameters in order to minimize the loss.
  • the validation set is used for checking the performance of the network on data, which was not used to minimize the loss, as an indication of the final performance of the model.
  • the errors on the training set and on the validation set are monitored during the training process to understand the following things:
  • the validation set error needs to decrease and to be not too much higher than the training set error. If the training set error is low, but the validation set error is much higher than the training set error, or it does not decrease, or it even increases, the model is in the regime of overfitting. This means that the model has just memorized the training set’s properties and performs well only on that set but performs poorly on a set not used for tuning its parameters.
  • neural networks have been used for compressing and de-compressing data such as images, i.e., in an image codec.
  • the most widely used architecture for realizing one component of an image codec is the auto-encoder, which is a neural network consisting of two parts: a neural encoder and a neural decoder.
  • the neural encoder takes as input an image and produces a code which requires less bits than the input image. This code may be obtained by applying a binarization or quantization process to the output of the encoder.
  • the neural decoder takes in this code and reconstructs the image which was input to the neural encoder.
  • Such neural encoder and neural decoder may be trained to minimize a combination of bitrate and distortion, where the distortion may be based on one or more of the following metrics: Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), or similar.
  • MSE Mean Squared Error
  • PSNR Peak Signal-to-Noise Ratio
  • SSIM Structural Similarity Index Measure
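  • The following is a minimal, illustrative PyTorch-style sketch of the auto-encoder structure described above: a neural encoder producing a compact code, a simple rounding-based quantization of that code, and a neural decoder reconstructing the input. The layer sizes and the quantization scheme are assumptions made only for illustration, not the networks of any particular codec.

```python
import torch
import torch.nn as nn

class NeuralEncoder(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        # Two strided convolutions map the image to a smaller latent "code".
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=5, stride=2, padding=2),
        )

    def forward(self, x):
        return self.net(x)

class NeuralDecoder(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        # Transposed convolutions map the (de)quantized code back to image space.
        self.net = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(channels, 3, 5, stride=2, padding=2, output_padding=1),
        )

    def forward(self, code):
        return self.net(code)

encoder, decoder = NeuralEncoder(), NeuralDecoder()
image = torch.randn(1, 3, 128, 128)          # dummy input image
code = torch.round(encoder(image))           # crude quantization of the code
reconstruction = decoder(code)
distortion = nn.functional.mse_loss(reconstruction, image)  # MSE distortion term
```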
  • A video codec comprises an encoder that transforms the input video into a compressed representation suited for storage/transmission and a decoder that can decompress the compressed video representation back into a viewable form.
  • An encoder may discard some information in the original video sequence in order to represent the video in a more compact form (that is, at lower bitrate).
  • Hybrid video codecs may encode the video information in two phases. Firstly, pixel values in a certain picture area (or “block”) are predicted for example by motion compensation means (finding and indicating an area in one of the previously coded video frames that corresponds closely to the block being coded) or by spatial means (using the pixel values around the block to be coded in a specified manner). Secondly the prediction error, i.e., the difference between the predicted block of pixels and the original block of pixels, is coded.
  • encoder can control the balance between the accuracy of the pixel representation (picture quality) and size of the resulting coded video representation (file size or transmission bitrate).
  • The prediction error may be coded using a specified transform, e.g., Discrete Cosine Transform (DCT) or a variant of it.
  • DCT Discrete Cosine Transform
  • Inter prediction which may also be referred to as temporal prediction, motion compensation, or motion-compensated prediction, exploits temporal redundancy.
  • inter prediction the sources of prediction are previously decoded pictures.
  • Intra prediction utilizes the fact that adjacent pixels within the same picture are likely to be correlated. Intra prediction can be performed in spatial or transform domain, i.e., either sample values or transform coefficients can be predicted. Intra prediction is typically exploited in intra coding, where no inter prediction is applied.
  • One outcome of the coding procedure is a set of coding parameters, such as motion vectors and quantized transform coefficients.
  • Many parameters can be entropy-coded more efficiently if they are predicted first from spatially or temporally neighboring parameters.
  • a motion vector may be predicted from spatially adjacent motion vectors and only the difference relative to the motion vector predictor may be coded.
  • Prediction of coding parameters and intra prediction may be collectively referred to as in-picture prediction.
  • the decoder reconstructs the output video by applying prediction means similar to the encoder to form a predicted representation of the pixel blocks (using the motion or spatial information created by the encoder and stored in the compressed representation) and prediction error decoding (inverse operation of the prediction error coding recovering the quantized prediction error signal in spatial pixel domain). After applying prediction and prediction error decoding means, the decoder sums up the prediction and prediction error signals (pixel values) to form the output video frame.
  • the decoder (and encoder) can also apply additional filtering means to improve the quality of the output video before passing it for display and/or storing it as prediction reference for the forthcoming frames in the video sequence.
  • the motion information may be indicated with motion vectors associated with each motion compensated image block.
  • Each of these motion vectors represents the displacement between the image block in the picture to be coded (at the encoder side) or decoded (at the decoder side) and the prediction source block in one of the previously coded or decoded pictures.
  • The motion vectors may be coded differentially with respect to block-specific predicted motion vectors.
  • the predicted motion vectors may be created in a predefined way, for example calculating the median of the encoded or decoded motion vectors of the adjacent blocks.
  • Another way to create motion vector predictions is to generate a list of candidate predictions from adjacent blocks and/or co-located blocks in temporal reference pictures and signaling the chosen candidate as the motion vector predictor.
  • the reference index of previously coded/decoded picture can be predicted.
  • the reference index is typically predicted from adjacent blocks and/or co-located blocks in the temporal reference picture.
  • high efficiency video codecs can employ an additional motion information coding/decoding mechanism, often called merging/merge mode, where all the motion field information, which includes motion vector and corresponding reference picture index for each available reference picture list, is predicted and used without any modification/correction.
  • predicting the motion field information may be carried out using the motion field information of adjacent blocks and/or co-located blocks in temporal reference pictures, and the used motion field information is signaled among a motion field candidate list filled with motion field information of available adjacent/co-located blocks.
  • the prediction residual after motion compensation may be first transformed with a transform kernel (like DCT) and then coded.
  • Video encoders may utilize Lagrangian cost functions to find optimal coding modes, e.g., the desired Macroblock mode and associated motion vectors.
  • This kind of cost function uses a weighting factor to tie together the (exact or estimated) image distortion due to lossy coding methods and the (exact or estimated) amount of information that is required to represent the pixel values in an image area:
  • C = D + λR, where C is the Lagrangian cost to be minimized, D is the image distortion (e.g., Mean Squared Error) with the mode and motion vectors considered, R is the number of bits needed to represent the required data to reconstruct the image block in the decoder (including the amount of data to represent the candidate motion vectors), and λ is the weighting factor.
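  • A small, purely illustrative computation of the cost C = D + λR for a few hypothetical coding modes; the distortion, rate and λ values below are made up and not taken from any codec:

```python
# Mode decision sketch: pick the candidate mode with the smallest Lagrangian cost.
candidate_modes = {
    # mode: (distortion D, rate R in bits) -- illustrative numbers only
    "intra":       (120.0, 300),
    "inter_merge": (150.0, 150),
    "inter_amvp":  (135.0, 220),
}
lam = 0.25  # Lagrange multiplier (weighting factor) tying distortion and rate together

costs = {mode: d + lam * r for mode, (d, r) in candidate_modes.items()}
best_mode = min(costs, key=costs.get)
print(costs)      # {'intra': 195.0, 'inter_merge': 187.5, 'inter_amvp': 190.0}
print(best_mode)  # 'inter_merge'
```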
  • Video coding specifications may enable the use of supplemental enhancement information (SEI) messages or alike.
  • SEI Supplemental enhancement information
  • Some video coding specifications include SEI NAL units, and some video coding specifications contain both prefix SEI NAL units and suffix SEI NAL units, where the former type can start a picture unit or alike and the latter type can end a picture unit or alike.
  • An SEI NAL unit contains one or more SEI messages, which are not required for the decoding of output pictures but may assist in related processes, such as picture output timing, post-processing of decoded pictures, rendering, error detection, error concealment, and resource reservation.
  • SEI messages are specified in H.264/AVC, H.265/HEVC, H.266/VVC, and H.274/VSEI standards, and the user data SEI messages enable organizations and companies to specify SEI messages for their own use.
  • the standards may contain the syntax and semantics for the specified SEI messages but a process for handling the messages in the recipient might not be defined. Consequently, encoders may be required to follow the standard specifying a SEI message when they create SEI message(s), and decoders might not be required to process SEI messages for output order conformance.
  • One of the reasons to include the syntax and semantics of SEI messages in standards is to allow different system specifications to interpret the supplemental information identically and hence interoperate. It is intended that system specifications can require the use of particular SEI messages both in the encoding end and in the decoding end, and additionally the process for handling particular SEI messages in the recipient can be specified.
  • A design principle has been followed for SEI message specifications: the SEI messages are generally not extended in future amendments or versions of the standard.
  • Image and video codecs may use a set of filters to enhance the visual quality of the predicted visual content; these filters can be applied either in-loop or out-of-loop, or both.
  • In the case of in-loop filters, the filter applied on one block in the currently-encoded frame will affect the encoding of another block in the same frame and/or in another frame which is predicted from the current frame.
  • An in-loop filter can affect the bitrate and/or the visual quality. In fact, an enhanced block will cause a smaller residual (difference between original block and predicted-and-filtered block), thus requiring less bits to be encoded.
  • An out-of-loop filter is applied on a frame after it has been reconstructed; the filtered visual content is not used as a source for prediction, and thus it may only impact the visual quality of the frames that are output by the decoder.
  • NNs neural networks
  • NNs are used to replace one or more of the components of a traditional codec such as VVC/H.266.
  • By "traditional" we mean those codecs whose components and their parameters are typically not learned from data. Examples of such components are:
  • Additional in-loop filter for example by having the NN as an additional in-loop filter with respect to the traditional loop filters.
  • Figure 1 illustrates examples of functioning of NNs as components of a traditional codec's pipeline, in accordance with an embodiment.
  • Figure 1 illustrates an encoder, which also includes a decoding loop.
  • Figure 1 is shown to include components described below:
  • a luma intra pred block or circuit 101 This block or circuit performs intra prediction in the luma domain, for example, by using already reconstructed data from the same frame.
  • the operation of the luma intra pred block or circuit 101 may be performed by a deep neural network such as a convolutional auto-encoder.
  • a chroma intra pred block or circuit 102 This block or circuit performs intra prediction in the chroma domain, for example, by using already reconstructed data from the same frame.
  • the chroma intra pred block or circuit 102 may perform crosscomponent prediction, for example, predicting chroma from luma.
  • the operation of the chroma intra pred block or circuit 102 may be performed by a deep neural network such as a convolutional auto-encoder.
  • An intra pred block or circuit 103 and inter-pred block or circuit 104 These blocks or circuit perform intra prediction and inter-prediction, respectively.
  • the intra pred block or circuit 103 and the inter-pred block or circuit 104 may perform the prediction on all components, for example, luma and chroma.
  • the operations of the intra pred block or circuit 103 and inter-pred block or circuit 104 may be performed by two or more deep neural networks such as convolutional auto-encoders.
  • A probability estimation block or circuit 105. This block or circuit performs prediction of the probability for the next symbol to encode or decode, which is then provided to the entropy coding module 112, such as the arithmetic coding module, to encode or decode the next symbol.
  • the operation of the probability estimation block or circuit 105 may be performed by a neural network.
  • T/Q transform and quantization
  • the transform and quantization block or circuit 106 may perform a transform of input data to a different domain, for example, the FFT transform would transform the data to frequency domain.
  • the transform and quantization block or circuit 106 may quantize its input values to a smaller set of possible values.
  • One or both transform block or circuit, and quantization block or circuit may be replaced by one or two or more neural networks.
  • One or both inverse transform block or circuit and inverse quantization block or circuit 113 may be replaced by one or two or more neural networks.
  • An in-loop filter block or circuit 107. Operations of the in-loop filter block or circuit 107 are performed in the decoding loop, and it performs filtering on the output of the inverse transform block or circuit, or in any case on the reconstructed data, in order to enhance the reconstructed data with respect to one or more predetermined quality metrics. This filter may affect both the quality of the decoded data and the bitrate of the bitstream output by the encoder.
  • the operation of the in-loop filter may be performed by multiple steps or filters, where the one or more steps may be performed by neural networks.
  • A postprocessing filter block or circuit 108.
  • the postprocessing filter block or circuit 108 may be performed only at decoder side, as it may not affect the encoding process.
  • the postprocessing filter block or circuit 108 filters the reconstructed data output by the in-loop filter block or circuit 107, in order to enhance the reconstructed data.
  • the postprocessing filter block or circuit 108 may be replaced by a neural network, such as a convolutional auto-encoder.
  • a resolution adaptation block or circuit 109 this block or circuit may downsample the input video frames, prior to encoding. Then, in the decoding loop, the reconstructed data may be upsampled, by the upsampling block or circuit 110, to the original resolution.
  • the operation of the resolution adaptation block or circuit 109 may be performed by a neural network such as a convolutional autoencoder.
  • An encoder control block or circuit 111 This block or circuit performs optimization of encoder's parameters, such as what transform to use, what quantization parameters (QP) to use, what intra-prediction mode (out of N intra-prediction modes) to use, and the like.
  • the operation of the encoder control block or circuit 111 may be performed by a neural network, such as a classifier convolutional network, or such as a regression convolutional network.
  • An ME/MC block or circuit 114 performs motion estimation and/or motion compensation, which are two key operations to be performed when performing interframe prediction.
  • ME/MC stands for motion estimation / motion compensation.
  • NNs are used as the main components of the image/video codecs.
  • In end-to-end learned compression, there are two main options:
  • Option 1 re-use the video coding pipeline but replace most or all the components with NNs.
  • Figure 2 illustrates an example of a modified video coding pipeline based on a neural network, in accordance with an embodiment.
  • An example of neural network may include, but is not limited to, a compressed representation of a neural network.
  • Figure 2 is shown to include following components:
  • a neural transform block or circuit 202 this block or circuit transforms the output of a summation/subtraction operation 203 to a new representation of that data, which may have lower entropy and thus be more compressible.
  • a quantization block or circuit 204 this block or circuit quantizes an input data 201 to a smaller set of possible values.
  • An inverse transform and inverse quantization blocks or circuits 206 perform the inverse or approximately inverse operation of the transform and the quantization, respectively.
  • An encoder parameter control block or circuit 208 This block or circuit may control and optimize some or all the parameters of the encoding process, such as parameters of one or more of the encoding blocks or circuits.
  • An entropy coding block or circuit 210 This block or circuit may perform lossless coding, for example based on entropy.
  • One popular entropy coding technique is arithmetic coding.
  • a neural intra-codec block or circuit 212 This block or circuit may be an image compression and decompression block or circuit, which may be used to encode and decode an intra frame.
  • An encoder 214 may be an encoder block or circuit, such as the neural encoder part of an auto-encoder neural network.
  • a decoder 216 may be a decoder block or circuit, such as the neural decoder part of an auto-encoder neural network.
  • An intra-coding block or circuit 218 may be a block or circuit performing some intermediate steps between encoder and decoder, such as quantization, entropy encoding, entropy decoding, and/or inverse quantization.
  • a deep loop filter block or circuit 220 This block or circuit performs filtering of reconstructed data, in order to enhance it.
  • A decoded picture buffer block or circuit 222 is a memory buffer keeping decoded frames, for example, reconstructed frames 224 and enhanced reference frames 226 to be used for inter-prediction.
  • An inter-prediction block or circuit 228 This block or circuit performs interframe prediction, for example, predicts from frames, for example, frames 232, which are temporally nearby.
  • An ME/MC 230 performs motion estimation and/or motion compensation, which are two key operations to be performed when performing inter-frame prediction.
  • ME/MC stands for motion estimation / motion compensation.
  • Option 2 re-design the whole pipeline, as follows. An example of option 2 is described in detail in Figure 3:
  • - Decoder NN performs a non-linear inverse transform.
  • Figure 3 depicts an encoder and a decoder NNs being parts of a neural autoencoder architecture, in accordance with an example.
  • the Analysis Network 301 is an Encoder NN
  • the Synthesis Network 302 is the Decoder NN, which may together be referred to as spatial correlation tools 303, or as neural autoencoder.
  • the input data 304 is analyzed by the Encoder NN, Analysis Network 301 , which outputs a new representation of that input data.
  • the new representation may be more compressible.
  • This new representation may then be quantized, by a quantizer 305, to a discrete number of values.
  • the quantized data may be then lossless encoded, for example, by an arithmetic encoder 306, thus obtaining a bitstream 307.
  • the example shown in Figure 3 includes an arithmetic decoder 308 and an arithmetic encoder 306.
  • the arithmetic encoder 306, or the arithmetic decoder 308, or the combination of the arithmetic encoder 306 and arithmetic decoder 308 may be referred to as arithmetic codec in some embodiments.
  • the bitstream is first lossless decoded, for example, by using the arithmetic codec decoder 308.
  • the lossless decoded data is dequantized and then input to the Decoder NN, Synthesis Network 302.
  • the output is the reconstructed or decoded data 309.
  • the lossy steps may comprise the Encoder NN and/or the quantization.
  • a training objective function (also called “training loss”) may be utilized, which may comprise one or more terms, or loss terms, or simply losses.
  • the training loss comprises a reconstruction loss term and a rate loss term.
  • the reconstruction loss encourages the system to decode data that is similar to the input data, according to some similarity metric. Examples of reconstruction losses are:
  • MS-SSIM Multi-scale structural similarity
  • error(f1, f2), where f1 and f2 are the features extracted by a pretrained neural network for the input data and the decoded data, respectively, and error() is an error or distance function, such as L1 norm or L2 norm (a small sketch of this kind of loss is given below).
  • GANs Generative Adversarial Networks
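  • As a minimal sketch of such a feature-based reconstruction loss, the stand-in feature extractor below merely plays the role of "a pretrained neural network"; in practice it could be, for example, a pretrained classification backbone.

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained feature extractor (illustrative only).
pretrained_features = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
).eval()

def feature_reconstruction_loss(input_data, decoded_data):
    f1 = pretrained_features(input_data)     # features of the original data
    f2 = pretrained_features(decoded_data)   # features of the decoded data
    return torch.mean(torch.abs(f1 - f2))    # L1-type error(f1, f2); an L2 norm is also possible
```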
  • the rate loss encourages the system to compress the output of the encoding stage, such as the output of the arithmetic encoder. By “compressing”, we mean reducing the number of bits output by the encoding stage.
  • rate loss typically encourages the output of the Encoder NN to have low entropy.
  • rate losses are the following:
  • a sparsification loss i.e., a loss that encourages the output of the Encoder NN or the output of the quantization to have many zeros. Examples are L0 norm, L1 norm, L1 norm divided by L2 norm.
  • One or more of reconstruction losses may be used, and one or more of the rate losses may be used, as a weighted sum.
  • the different loss terms may be weighted using different weights, and these weights determine how the final system performs in terms of rate-distortion loss. For example, if more weight is given to the reconstruction losses with respect to the rate losses, the system may learn to compress less but to reconstruct with higher accuracy (as measured by a metric that correlates with the reconstruction losses).
  • These weights may be hyper-parameters of the training session and may be set manually by the person designing the training session, or automatically for example by grid search or by using additional neural networks.
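  • A minimal sketch of such a weighted sum, assuming one MSE reconstruction loss and one L1 sparsification rate loss, with the weights as manually chosen hyper-parameters (the values are illustrative, not those of the described system):

```python
import torch

def training_loss(original, decoded, encoder_output, w_rec=1.0, w_rate=0.05):
    # Reconstruction term: how close the decoded data is to the input data.
    reconstruction = torch.mean((original - decoded) ** 2)
    # Rate term: an L1 sparsification loss on the encoder output.
    rate = torch.mean(torch.abs(encoder_output))
    # The weights trade off reconstruction quality against compressibility.
    return w_rec * reconstruction + w_rate * rate
```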
  • a neural network-based end-to-end learned video coding system may contain an encoder 401 , a quantizer 402, a probability model 403, an entropy codec 404 (for example arithmetic encoder 405 and/or arithmetic decoder 406), a dequantizer 407, and a decoder 408.
  • the encoder 401 and decoder 408 may be two neural networks, or mainly comprise neural network components.
  • the probability model 403 may also comprise mainly neural network components.
  • Quantizer 402, dequantizer 407 and entropy codec 404 may not be based on neural network components, but they may also comprise neural network components, potentially.
  • the encoder component 401 takes a video as input and converts the video from its original signal space into a latent representation that may comprise a more compressible representation of the input.
  • the latent representation may be a 3-dimensional tensor, where two dimensions represent the vertical and horizontal spatial dimensions, and the third dimension represent the “channels” which contain information at that specific location.
  • the latent representation is a tensor of dimensions (or “shape”) 64x64x32 (i.e., with horizontal size of 64 elements, vertical size of 64 elements, and 32 channels).
  • the channel dimension may be the first dimension, so for the above example, the shape of the input tensor may be represented as 3x128x128, instead of 128x128x3.
  • another dimension in the input tensor may be used to represent temporal information.
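  • The shapes mentioned above can be illustrated with dummy tensors; the sizes are the examples given in the text and the variable names are only illustrative:

```python
import torch

latent_hwc = torch.zeros(64, 64, 32)      # H x W x channels latent, as in the 64x64x32 example
image_chw  = torch.zeros(3, 128, 128)     # channel-first layout of a 128x128 RGB input (3x128x128)
video_tchw = torch.zeros(8, 3, 128, 128)  # an extra leading dimension representing time (8 frames)

print(latent_hwc.shape, image_chw.shape, video_tchw.shape)
```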
  • the quantizer component 402 quantizes the latent representation into discrete values given a predefined set of quantization levels.
  • Probability model 403 and arithmetic codec component 404 work together to perform lossless compression for the quantized latent representation and generate bitstreams to be sent to the decoder side.
  • For each symbol to be encoded or decoded, the probability model 403 estimates the probability distribution of all possible values for that symbol based on a context that is constructed from available information at the current encoding/decoding state, such as the data that has already been encoded/decoded. Then, the arithmetic encoder 405 encodes the input symbols to the bitstream using the estimated probability distributions.
  • At the decoding side, the arithmetic decoder 406 and the probability model 403 first decode symbols from the bitstream to recover the quantized latent representation. Then the dequantizer 407 reconstructs the latent representation in continuous values and passes it to the decoder 408 to recover the input video/image.
  • the probability model 403 in this system is shared between the encoding and decoding systems. In practice, this means that a copy of the probability model is used at encoder side, and another exact copy is used at decoder side.
  • the encoder 401, probability model 403, and decoder 408 may be based on deep neural networks. The system is trained in an end-to-end manner by minimizing a rate-distortion loss function that combines a distortion loss term D and a rate loss term R (weighted, e.g., as D + λR):
  • the distortion loss term may be the mean square error (MSE), structure similarity (SSIM) or other metrics that evaluate the quality of the reconstructed video. Multiple distortion losses may be used and integrated into D, such as a weighted sum of MSE and SSIM.
  • the rate loss term is normally the estimated entropy of the quantized latent representation, which indicates the number of bits necessary to represent the encoded symbols, for example, bits-per-pixel (bpp).
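  • A hedged sketch of such a rate term, assuming the probability model outputs a probability for each quantized latent symbol; the estimated rate is then the total negative log2-probability, normalized to bits-per-pixel:

```python
import torch

def rate_loss_bpp(symbol_probabilities, num_pixels):
    # Estimated number of bits: -log2 of the probability assigned to each symbol.
    bits = -torch.log2(symbol_probabilities + 1e-9).sum()
    # Normalize by the number of pixels to obtain bits-per-pixel (bpp).
    return bits / num_pixels
```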
  • For lossless video/image compression, the system contains only the probability model and the arithmetic encoder/decoder.
  • the system loss function contains only the rate loss, since the distortion loss is always zero (i.e., no loss of information).
  • Reducing the distortion in image and video compression is often intended to increase human perceptual quality, as humans are considered to be the end users, i.e., consuming/watching the decoded image.
  • However, the decoded data may also be consumed and analyzed by machines, i.e., autonomous agents.
  • Examples of such analysis are object detection, scene classification, semantic segmentation, video event detection, anomaly detection, pedestrian tracking, etc.
  • Example use cases and applications are self-driving cars, video surveillance cameras and public safety, smart sensor networks, smart TV and smart advertisement, person re-identification, smart traffic monitoring, drones, etc.
  • the receiver-side device has multiple “machines” or task neural networks (Task-NNs). These multiple machines may be used in a certain combination which is for example determined by an orchestrator sub-system. The multiple machines may be used for example in succession, based on the output of the previously used machine, and/or in parallel. For example, a video which was compressed and then decompressed may be analyzed by one machine (NN) for detecting pedestrians, by another machine (another NN) for detecting cars, and by another machine (another NN) for estimating the depth of all the pixels in the frames.
  • The terms machine and task neural network are used interchangeably, and for such referral any process or algorithm (learned or not from data) which analyzes or processes data for a certain task is meant.
  • other assumptions made regarding the machines considered in this disclosure may be specified in further detail.
  • The terms "receiver-side" or "decoder-side" are used to refer to the physical or abstract entity or device which contains one or more machines, and runs these one or more machines on some encoded and eventually decoded video representation which is encoded by another physical or abstract entity or device, the "encoder-side device".
  • the encoded video data may be stored into a memory device, for example as a file.
  • the stored file may later be provided to another device.
  • the encoded video data may be streamed from one device to another.
  • FIG. 5 is a general illustration of the pipeline of Video Coding for Machines.
  • a VCM encoder 502 encodes the input video into a bitstream 504.
  • a bitrate 506 may be computed 508 from the bitstream 504 in order to evaluate the size of the bitstream.
  • a VCM decoder 510 decodes the bitstream output by the VCM encoder 502.
  • the output of the VCM decoder 510 is referred to as “Decoded data for machines” 512. This data may be considered as the decoded or reconstructed video. However, in some implementations of this pipeline, this data may not have same or similar characteristics as the original video which was input to the VCM encoder 502.
  • this data may not be easily understandable by a human by simply rendering the data onto a screen.
  • the output of VCM decoder is then input to one or more task neural networks 514.
  • In Figure 5, there are three example task-NNs 514, and a non-specified one (Task-NN X).
  • the goal of VCM is to obtain a low bitrate while guaranteeing that the task-NNs still perform well in terms of the evaluation metric 516 associated to each task.
  • FIG. 6 illustrates an example of a pipeline for the end- to-end learned approach.
  • the video is input to a neural network encoder 601 .
  • the output of the neural network encoder 601 is input to a lossless encoder 602, such as an arithmetic encoder, which outputs a bitstream 604.
  • A component of the lossless codec may be a probability model 603, used both in the lossless encoder and in the lossless decoder, which predicts the probability of the next symbol to be encoded and decoded.
  • the probability model 603 may also be learned, for example it may be a neural network.
  • the bitstream 604 is input to a lossless decoder 605, such as an arithmetic decoder, whose output is input to a neural network decoder 606.
  • the output of the neural network decoder 606 is the decoded data for machines 607, that may be input to one or more task-NNs 608.
  • Figure 7 illustrates an example of how the end-to-end learned system may be trained. For the sake of simplicity, only one task-NN 707 is illustrated.
  • a rate loss 705 may be computed from the output of the probability model 703. The rate loss 705 provides an approximation of the bitrate required to encode the input video data.
  • a task loss 710 may be computed 709 from the output 708 of the task-NN 707.
  • the rate loss 705 and the task loss 710 may then be used to train 711 the neural networks used in the system, such as the neural network encoder 701 , the probability model 703, the neural network decoder 706. Training may be performed by first computing gradients of each loss with respect to the neural networks that are contributing or affecting the computation of that loss. The gradients are then used by an optimization method, such as Adam, for updating the trainable parameters of the neural networks.
  • an optimization method such as Adam
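  • A minimal sketch of one such training iteration, with hypothetical stand-ins for the encoder, probability model, decoder and task-NN of Figure 7; quantization is omitted for simplicity, and the loss functions, their weighting and all names are assumptions made for illustration.

```python
import torch

def training_step(video, ground_truth, encoder_nn, probability_model,
                  decoder_nn, task_nn, rate_loss_fn, task_loss_fn, optimizer,
                  rate_weight=0.01):
    latent = encoder_nn(video)                         # neural network encoder
    prob_out = probability_model(latent)               # probability model output
    decoded = decoder_nn(latent)                       # neural network decoder
    task_out = task_nn(decoded)                        # task-NN run on decoded data

    rate_loss = rate_loss_fn(prob_out)                 # approximation of the bitrate
    task_loss = task_loss_fn(task_out, ground_truth)   # e.g. cross-entropy for the task
    loss = task_loss + rate_weight * rate_loss         # combined training loss

    optimizer.zero_grad()
    loss.backward()    # gradients w.r.t. all networks contributing to each loss
    optimizer.step()   # e.g. a torch.optim.Adam optimizer over their parameters
    return loss.item()
```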
  • the machine tasks may be performed at decoder side (instead of at encoder side) for multiple reasons, for example because the encoder-side device does not have the capabilities (computational, power, memory) for running the neural networks that perform these tasks, or because some aspects or the performance of the task neural networks may have changed or improved by the time that the decoder-side device needs the tasks results (e.g., different or additional semantic classes, better neural network architecture). Also, there could be a customization need, where different clients would run different neural networks for performing these machine learning tasks.
  • a video codec for machines can be realized by using a traditional codec such as H.266/VVC.
  • another possible design may comprise using a traditional codec such as H.266/VVC, which includes one or more neural networks.
  • the one or more neural networks may replace one of the components of the traditional codec, such as:
  • the one or more neural networks may function as an additional component, such as:
  • another possible design may comprise using any codec architecture (such as a traditional codec, or a traditional codec which includes one or more neural networks, or an end-to-end learned codec), and having a post-processing neural network which adapts the output of the decoder so that it can be analysed more effectively by one or more machines or task neural networks.
  • the encoder and decoder may be conformant to the H.266/VVC standard, a postprocessing neural network takes the output of the decoder, and the output of the post-processing neural network is then input to an object detection neural network.
  • the object detection neural network is the machine or task neural network.
  • the present embodiments are targeted at the problem of how to increase the rate-distortion performance of a video codec for machines.
  • the present embodiments propose to incorporate one or more intermediate task neural networks at the decoder-side and input an intermediate output frame or video to the one or more intermediate task neural networks.
  • The output of the one or more intermediate task neural networks, referred to as the intermediate output, is given as input to another neural network that we refer to as the Intermediate Result Processing Neural Network (IRP-NN).
  • IRP-NN Intermediate Result Processing Neural Network
  • the output of IRP-NN is then provided as input to one or more task NNs.
  • the intermediate output frame or video may be the output of a traditional video decoder, or the output of the video decoder of an end-to-end learned video codec.
  • the IRP-NN is a post-processing NN.
  • the intermediate output frame or video may be the non-final output of a traditional video decoder, or the non-final output of the video decoder of an end- to-end learned video codec.
  • the IRP-NN is a post-processing NN which is part of the video decoder, instead of being external to the video decoder.
  • the intermediate output frame or video may be one of the following:
  • the IRP-NN is an in-loop filter.
  • the present embodiments also cover several methodologies to train the IRP-NN.
  • the following detailed description is based on compressing and decompressing data which is consumed by machines.
  • the decompressed data may also be consumed by humans, either at the same time or at different times with respect to when the machines consume the decompressed data.
  • the codec may consist of multiple parts, where some parts are used for compressing and/or decompressing data for machine consumption, and some other parts are used for compressing and/or decompressing data for human consumption.
  • An encoder-side device is configured to perform a compression or encoding operation by using an encoder.
  • a decoder-side device is configured to perform decompression or decoding operation by using a decoder.
  • the encoder-side device may also use at least some decoding operations, for example in a coding loop.
  • the encoder-side device and the decoder-side device may be the same physical device, or different physical devices.
  • the present embodiments are not restricted to any specific type of data.
  • example of the data is video data.
  • By "video", one or more video frames are meant, unless specified otherwise. It is to be noticed that a video frame is also considered as an image, whereupon the embodiments are directly applicable with image data as well.
  • Other example types of data are audio, speech, text.
  • The term "machine" also refers to a task neural network, or task-NN.
  • An example of task-NN for image or video data is an object detection neural network, performing object detection task.
  • Another example of task-NN for image or video data is a semantic segmentation neural network, performing semantic segmentation.
  • the task-NNs can be NNs for any possible task, for example image classification; video classification; anomaly detection; action detection; action classification; event detection; filtering; captioning; visual question answering, etc.
  • the input to a task-NN may be one or more video frames (or image, audio, speech, or text).
  • the output of the task-NN may be a task result, or task output.
  • An example of a task result, for the case of an object detection task-NN, is a set of coordinates of one or more bounding boxes, representing the location and spatial extent of detected objects.
  • an object detection task-NN may output other data, such as the category or class of the detected objects, and a confidence value indicating an estimate of the probability that the bounding box and/or its class for a detected object is correct.
  • a task can be an audio event detection, whereupon the corresponding task-NN that performs the task of audio event detection may output a binary flag indicating whether a certain event has been detected or not.
  • Another example of the task is audio event classification, where the corresponding task-NN performing this task may output a list of numerical values, where each value indicates an estimate of the probability that a certain event class was detected in the input signal, or may output directly an indication of which event class was detected in the input signal.
  • the task can be speech recognition, i.e., recognizing which letter or syllable or word or sentence (for example) was spelled in the input signal.
  • the task can be text parsing, where the output may be a semantic structure of the text.
  • Another example task is translation from one language to another.
  • An example of task result for the case of a semantic segmentation task-NN, is a tensor of shape (K, H, W), where K may be the total number of semantic classes considered by the task-NN, H and W may be the height and width of the input video frame that was input to the task-NN.
  • Each of the K matrices of size HxW may represent the segmentation of one of the K classes, i.e., it may indicate whether each pixel of the input video frame belongs to that class or not.
  • When the input is a video of T frames, the output of the task-NN may be a tensor of shape (T, K, H, W).
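  • As an illustration of these shapes, assuming K = 21 classes (the sizes below are only examples), a per-pixel class map can be derived by taking the arg-max over the class dimension:

```python
import torch

K, H, W, T = 21, 256, 256, 8                 # illustrative sizes
frame_output = torch.randn(K, H, W)          # per-class scores for one frame: shape (K, H, W)
video_output = torch.randn(T, K, H, W)       # the same for T frames: shape (T, K, H, W)

pixel_classes = frame_output.argmax(dim=0)   # (H, W) map of the winning class per pixel
video_classes = video_output.argmax(dim=1)   # (T, H, W) map, one class map per frame
```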
  • the task-NNs are models, such as neural networks, for which it is possible to compute gradients of their output with respect to their input. For example, if they are parametric models, this may be possible by computing the gradients of their output first with respect to their internal parameters and then with respect to their input, by using the chain rule for differentiation in mathematics. In the case of neural networks, backpropagation may be used to obtain the gradients of the output of a NN with respect to its input.
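  • A minimal sketch of obtaining such gradients with automatic differentiation, using a tiny stand-in network in place of a real task-NN:

```python
import torch
import torch.nn as nn

# Tiny stand-in task-NN; any differentiable network would do.
task_nn = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(8, 1, 3, padding=1))

frame = torch.randn(1, 3, 64, 64, requires_grad=True)
output = task_nn(frame)

# Backpropagation yields the gradient of (a scalarized) output w.r.t. the input;
# the gradient tensor has the same shape as the input frame.
grad_wrt_input, = torch.autograd.grad(output.sum(), frame)
print(grad_wrt_input.shape)   # torch.Size([1, 3, 64, 64])
```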
  • a baseline system is considered which comprises at least one encoder, at least one decoder, at least one task-NN. The present embodiments build on top of this baseline system. See an example illustration of a baseline system in Figure 8.
  • the encoder 801 may be any video encoder, such as a traditional encoder which is conformant with the H.266/VVC standard, or an encoder which combines a traditional encoder with one or more neural networks, or an end-to-end learned encoder (i.e., comprising mainly neural networks).
  • the decoder 803 may be any video decoder, such a traditional decoder which is conformant with the H.266/VVC standard, or a decoder which combines a traditional decoder with one or more neural networks, or an end-to-end learned decoder (i.e., comprising mainly neural networks).
  • the Task-NN 805 may be any task neural network performing an analysis task or a processing task.
  • semantic segmentation is considered as an example task.
  • the input video is encoded into a bitstream 802 by an Encoder 801 .
  • a Decoder 803 decodes the bitstream 802 into a decoded video 804.
  • the decoded video 804 is given as input to a task-NN 805.
  • the task-NN 805 outputs some analysis or processing results.
  • the output of a task-NN 805 is referred to either as “output” or as “result” interchangeably.
  • the baseline system may comprise a post-processing neural network which may be part of the decoder or may be external with respect to the decoder.
  • the postprocessing neural network may post-process the decoded video.
  • the postprocessed decoded video may then be input to one or more task-NNs.
  • Figure 9 illustrates an example baseline system which comprises a post-processing neural network 905 that is external with respect to the decoder 903, where PP decoded video 906 refers to the post-processed decoded video.
  • According to the present embodiments, the decoder side of the baseline system is modified as follows: an intermediate decoded video is input to one or more Intermediate Task-NNs; one or more Intermediate Task-NN results or outputs are obtained from the one or more Intermediate Task-NNs; the one or more Intermediate Task-NN results are input to one or more Intermediate Result Processing Neural Networks (IRP-NNs), and the intermediate decoded video is also input to the one or more IRP-NNs; one or more IRP decoded videos are obtained; the one or more IRP decoded videos are input to one or more Task-NNs; and results are obtained from the one or more Task-NNs.
  • IRP-NNs Intermediate Result Processing Neural Networks
  • An example of the workflow according to the present embodiments is illustrated in Figure 10, where the intermediate decoded video 1004 is the output of a video decoder 1003, a single IRP-NN 1005 is used (where the single IRP-NN 1005 is a post-processing NN that is external to the decoder 1003), and a single task-NN 1009 is used.
  • IRP video 1008 refers to the IRP decoded video.
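  • The dataflow of Figure 10 can be sketched as follows, with hypothetical callables standing in for the decoder 1003, the Intermediate Task-NN 1006, the IRP-NN 1005 and the Task-NN 1009; their internals are not specified here.

```python
def decoder_side_pipeline(bitstream, decoder, intermediate_task_nn, irp_nn, task_nn):
    intermediate_decoded_video = decoder(bitstream)                          # decoder 1003 -> video 1004
    intermediate_result = intermediate_task_nn(intermediate_decoded_video)   # Intermediate Task-NN 1006
    irp_video = irp_nn(intermediate_decoded_video, intermediate_result)      # IRP-NN 1005 -> IRP video 1008
    return task_nn(irp_video)                                                # Task-NN 1009 -> final result
```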
  • the intermediate decoded video 1004 may be one or more frames which are output by a traditional video decoder 1003, or output by a decoder 1003 that combines a traditional video decoder with one or more neural networks, or output by a video decoder that is part of an end-to-end learned video codec.
  • the IRP-NN 1005 is a post-processing neural network which is external to the decoder.
  • the IRP-NN may be a post-processing neural network that is part of the decoder.
  • the intermediate decoded video may be one or more frames which are the non-final output of a traditional video decoder, or the non-final output of a video decoder that combines one or more neural networks with a traditional video decoder, or the non-final output of the video decoder that is part of an end-to-end learned video codec.
  • the IRP-NN is a post-processing NN which is part of the video decoder, instead of being external to the video decoder.
  • the IRP-NN may be an in-loop filter.
  • the intermediate decoded video may be one or more frames, where each frame may be one of the following: a predicted frame; a frame obtained by adding a prediction error to a predicted frame; a frame obtained by adding a decompressed prediction error to a predicted frame; or a frame which is output by an in-loop filter.
  • An example of a lossy compression algorithm is quantization.
  • An example of a lossless compression algorithm is arithmetic coding.
  • an IRP-NN is used as an example of a post-processing neural network external to the decoder.
  • the one or more Intermediate Task-NNs 1006 may be the same task-NNs as the Task-NNs 1009 that are used for analysing or processing the IRP decoded video 1008.
  • the Intermediate Task-NNs 1006 and the Task-NNs 1009 may be two instantiations of the same neural networks, where the architecture and the weights or parameters of the Intermediate Task-NNs 1006 and of the Task-NNs 1009 are substantially the same.
  • the one or more Intermediate Task-NNs 1006 may perform similar tasks or functions as the Task-NNs 1009, but the architecture and/or the weights or parameters of the Intermediate Task-NNs 1006 and of the Task-NNs 1009 may differ. For example, assuming that there is only one Intermediate Task-NN 1006 and one Task-NN 1009, both may perform semantic segmentation task, but they may use a different architecture and/or weights.
  • the one or more Intermediate Task-NNs 1006 may perform different tasks or functions with respect to the tasks or functions performed by the Task-NNs 1009.
  • the Intermediate Task-NNs 1006 may extract features from the intermediate decoded videos 1004, therefore each Intermediate Task-NN 1006 may be a feature extractor.
  • each Intermediate Task-NN 1006 performing feature extraction may be specific to a certain Task-NN 1009. This means that the extracted features may be used for obtaining an IRP decoded video 1008 that may be input only to that Task-NN 1009.
  • each Intermediate Task-NN 1006 performing feature extraction may not be specific to a single Task-NN 1009, i.e., the extracted features may be used for obtaining an IRP decoded video 1008 that may be input to more than one Task-NN 1009.
  • Figure 11 illustrates an example of the IRP-NN internal components.
  • the Intermediate Result Processing Neural Network may be a neural network which takes two types of inputs:
  • a first type of input is the intermediate decoded video 1101.
  • a second type of input may be the outputs of one or more Intermediate Task- NNs 1105.
  • the second type of input may be the output of one or more Intermediate Task Results Mapping Neural Network (ITRM-NN) - this is discussed with respect to the second embodiment.
  • Each of the two types of input may first be mapped to two sets of feature maps, then the two sets of feature maps may be combined (for example by summation or by concatenation) 1104, and then the combined features 1108 may be mapped 1109 to one or more output tensors (a minimal code sketch of this combination is given after this list).
  • Each of the one or more output tensors from IRP-NN may have the same shape as the intermediate decoded video.
  • each of the one or more output tensors from IRP-NN may have a shape which is compatible with one or more Task-NNs.
  • intermediate task results are mapped to common representation.
  • Figure 12 illustrates an example of the present embodiments.
  • the input to the ITRM-NN 1209 may be the output 1208 of two or more Intermediate Task-NNs 1207. It is to be noticed that the Intermediate Task-NNs and their respective outputs are referred to with similar reference numbers for simplicity; however, the various Task-NNs are not necessarily the same, and therefore neither are their respective outputs.
  • the ITRM-NN 1209 may output a single representation or set of features for all its inputs. The output of the ITRM-NN 1209 may then be input to the IRP-NN 1205.
  • the input to the ITRM-NN may be the output of a single Intermediate Task-NN, and there may be more than one ITRM-NN, for example there may be as many ITRM-NNs as there are Intermediate Task-NNs.
  • the output of each ITRM-NN may then be input to the IRP-NN.
  • the data needed for training the IRP-NN 1307 may comprise a dataset of intermediate decoded videos 1304 and ground-truth data for the intermediate decoded videos and the one or more task neural networks.
  • the corresponding intermediate task-NN’s outputs 1306 may be obtained from one or more Intermediate Task-NNs 1305.
  • the training process may be an iterative process.
  • Each training iteration may comprise obtaining a set of intermediate decoded videos 1304 and a set of corresponding intermediate task-NNs’ outputs 1306, inputting the set of intermediate decoded videos 1304 and the set of corresponding intermediate task-NNs’ outputs 1306 to one or more IRP-NNs 1307, using the one or more IRP-NNs 1307 for obtaining one or more IRP videos 1308, inputting the one or more IRP videos 1308 to one or more Task-NNs 1309, obtaining one or more outputs 1314 from the one or more Task-NNs 1309, using the one or more outputs from the one or more Task-NNs for computing a loss 1311, computing gradients of the loss with respect to the one or more learnable parameters present in the one or more IRP-NNs, and updating the one or more learnable parameters present in the one or more IRP-NNs according to the computed gradients and by using an optimization routine such as Stochastic Gradient Descent (SGD); a minimal code sketch of such an iteration is given after this list.
  • the iterative training process may continue until a stopping condition is satisfied.
  • a stopping condition may be based on one or more of the following: a predefined number of iterations is reached, a predefined value for the loss is obtained, etc.
  • the optimization may stop when the loss does not decrease more than a predetermined amount, during a predetermined temporal span.
  • the loss 1312, or task loss, may be dependent on the specific task that is considered. If there is more than one Task-NN 1309, more than one loss term may be computed, and then the loss terms may be combined, for example by means of a linear combination with predetermined weighting coefficients.
  • the IRP-NN is trained jointly with one or more neural networks which may be part of the encoder and/or of the decoder.
  • Figure 14 illustrates an example of such embodiment.
  • the encoder 1401 and/or the decoder 1403 may be a combination of one or more neural networks with a traditional encoder and/or a traditional decoder, such as a decoder which includes an in-loop neural network filter.
  • the encoder and/or the decoder may be part of an end-to-end learned codec.
  • the data needed for training the IRP-NN 1405 jointly with one or more neural networks of the Encoder 1401 and/or the Decoder 1403 may comprise, for example, a dataset of input videos and corresponding ground-truth data for the one or more tasks.
  • the training process may be an iterative process.
  • Each training iteration may comprise obtaining a set of input videos, inputting one or more input videos to the encoder 1401, obtaining a bitstream 1402 from the encoder 1401 for each of the one or more input videos, inputting the bitstreams 1402 to the decoder 1403, obtaining one or more intermediate decoded videos 1404 from the decoder 1403, inputting the one or more intermediate decoded videos 1404 to one or more intermediate Task-NNs 1406 to obtain one or more intermediate Task-NNs’ outputs 1407, inputting the one or more intermediate Task-NNs’ outputs 1407 and the one or more intermediate decoded videos 1404 to one or more IRP-NNs 1405, using the one or more IRP-NNs 1405 for obtaining one or more IRP videos 1408, inputting the one or more IRP videos 1408 to one or more Task-NNs 1409, obtaining one or more outputs 1410 from the one or more Task-NNs 1409, using the one or more outputs 1410 from the one or more Task-NNs 1409 for computing a loss, computing gradients of the loss with respect to the learnable parameters present in the one or more IRP-NNs 1405 and in the one or more neural networks of the encoder 1401 and/or the decoder 1403, and updating those learnable parameters according to the computed gradients by using an optimization routine such as Stochastic Gradient Descent (SGD).
  • the iterative training process may continue until a stopping condition is satisfied.
  • a stopping condition may be based on one or more of the following: a predefined number of iterations is reached, a predefined value for the loss is obtained, etc.
  • the optimization may stop when the loss does not decrease more than a predetermined amount, during a predetermined temporal span.
  • the loss, or task loss, may be dependent on the specific task that is considered. If there is more than one Task-NN, more than one loss term may be computed, and then the loss terms may be combined, for example by means of a linear combination with predetermined weighting coefficients.
  • a rate loss may be computed based on the bitstream or based on an intermediate output of the encoder.
  • the rate loss may be an estimate of the number of bits needed to represent the bitstream output by the encoder.
  • the rate loss may be computed based on the output of a probability model, where the probability model may be a neural network that provides an estimate of a probability for one or more elements to an entropy codec such as an arithmetic codec.
  • the rate loss may then be used to train one or more neural networks that are used within the encoder, by combining the rate loss with the other loss terms for example by a linear combination.
  • Figure 15 illustrates an example where the rate loss is computed and used to train one or more neural networks of the encoder.
  • the IRP-NN 1610 is trained jointly with one or more neural networks which may be part of the Encoder 1601 and/or of the Decoder 1603, by using an intermediate task loss 1609 that may be computed by using the Intermediate Task-NNs’ outputs 1607 and the task ground-truth data 1606.
  • the Encoder 1601 and/or the Decoder 1603 may be a combination of one or more neural networks with a traditional encoder and/or a traditional decoder, such as a decoder which includes an in-loop neural network filter.
  • the Encoder and/or the Decoder may be part of an end-to-end learned codec.
  • the data needed for training the IRP-NN 1610 jointly with one or more neural networks of the Encoder 1601 and/or the Decoder 1603 may comprise, for example, a dataset of input videos and corresponding ground-truth data for the one or more tasks.
  • the training process may be an iterative process.
  • Each training iteration may comprise obtaining a set of input videos, inputting one or more input videos to the Encoder 1601, obtaining a bitstream 1602 from the Encoder 1601 for each of the one or more input videos, inputting the bitstreams 1602 to the Decoder 1603, obtaining one or more intermediate decoded videos 1604 from the Decoder 1603, inputting the one or more intermediate decoded videos 1604 to one or more Intermediate Task-NNs 1605 to obtain one or more Intermediate Task-NNs’ outputs 1607, inputting the one or more Intermediate Task-NNs’ outputs 1607 and the one or more intermediate decoded videos 1604 to one or more IRP-NNs 1610, using the one or more IRP-NNs 1610 for obtaining one or more IRP videos 1611, inputting the one or more IRP videos 1611 to one or more Task-NNs 1612, obtaining one or more outputs 1613 from the one or more Task-NNs 1612, using the one or more outputs 1613 from the one or more Task-NNs 1612 for computing a task loss, computing an intermediate task loss 1609 based on the Intermediate Task-NNs’ outputs 1607 and the task ground-truth data 1606, computing gradients of the losses with respect to the learnable parameters present in the one or more IRP-NNs 1610 and in the one or more neural networks of the Encoder 1601 and/or the Decoder 1603, and updating those learnable parameters according to the computed gradients by using an optimization routine such as Stochastic Gradient Descent (SGD).
  • the iterative training process may continue until a stopping condition is satisfied.
  • a stopping condition may be based on one or more of the following: a predefined number of iterations is reached, a predefined value for the loss is obtained, etc.
  • the optimization may stop when the loss does not decrease more than a predetermined amount, during a predetermined temporal span.
  • the loss, or task loss, may be dependent on the specific task that is considered. If there is more than one Task-NN, more than one loss term may be computed, and then the loss terms may be combined, for example by means of a linear combination with predetermined weighting coefficients.
  • a rate loss may be used to train one or more neural networks in the encoder.
  • the ground-truth data for a certain input video and for one or more tasks may be determined by manually annotating the data. For example, in the case of an object detection task, humans may annotate the bounding boxes for objects in a set of images and/or videos.
  • the ground-truth data for a certain input video and for one or more tasks may be determined by running one or more task-NNs on the input video. The output results from the one or more task- NNs may then be used as the ground-truth data for computing a task loss and/or an intermediate task loss for the considered input video.
  • Other possible ways to determine ground-truth data may be considered, and the present embodiments are not limited to any specific way by which ground-truth is obtained.
  • Ground-truth data may be determined during an offline process with respect to the process when training is performed, or may be determined at substantially the same time when training is performed. After the ground-truth data has been determined, it may be stored for example in a database hosted at a local or at a remote location or device with respect to the location or device where training is performed. If the database is hosted at a remote location, the device performing training may first retrieve the ground-truth data or the database, and then use it for performing training.
  • a loss term may be computed as a distortion metric between features extracted by a task NN when the input is the input video and features extracted by a task NN when the input is the IRP video.
  • Task-features may be one or more feature maps extracted by one or more layers of a task NN.
  • An example distortion metric may be the mean-squared error (MSE).
  • a loss term may be computed as a distortion metric between features extracted by an intermediate task NN when the input is the input video and features extracted by an intermediate task NN when the input is the intermediate decoded video.
  • These features may be one or more feature maps extracted by one or more layers of an intermediate task NN.
  • An example distortion metric may be the mean-squared error (MSE).
  • the method generally comprises steps for receiving 1710 decoded data from a decoder; for performing 1720 an intermediate analysis of the decoded data at one or more intermediate task neural networks; for providing 1730 the output from said one or more intermediate task neural networks as a first input to one or more intermediate result processing neural networks and for providing the decoded data as a second input to said one or more intermediate result processing neural networks; for providing 1740 an output from said one or more intermediate result processing neural network to one or more task neural networks, the output representing combined features of the decoded data and the output of said one or more intermediate task neural networks, and for analysing 1750 the data at said one or more task neural networks.
  • the method comprises steps for mapping the first and the second inputs to two sets of feature maps; for combining the two sets of feature maps; and for mapping the combined sets of feature maps to one or more output tensors thus generating the output of the intermediate result processing neural networks.
  • the method may also comprise mapping the outputs from said one or more intermediate task neural networks to a common representation at an intermediate task results mapping neural network.
  • Each of the previous steps can be implemented by a respective module of a computer system.
  • the method for training generally comprises steps for training 1810 an intermediate result processing neural network, by obtaining a set of intermediate decoded data and a set of corresponding intermediate task neural network outputs; using 1820 one or more intermediate result processing networks to obtain one or more IRP data; inputting 1830 the one or more IRP data to one or more task neural networks; obtaining 1840 one or more outputs from the one or more task neural networks; computing 1850 a loss based on the output from the one or more task neural networks; computing 1860 gradients of the loss with respect to one or more weights present in one or more intermediate result processing neural networks; and updating 1870 the weights according to the computed gradients.
  • An apparatus comprises means for receiving decoded data from a decoder; for performing an intermediate analysis of the decoded data at one or more intermediate task neural networks; for providing the output from said one or more intermediate task neural networks as a first input to one or more intermediate result processing neural networks and means for providing the decoded data as a second input to said one or more intermediate result processing neural networks; for providing an output from said one or more intermediate result processing neural network to one or more task neural networks, the output representing combined features of the decoded data and the output of said one or more intermediate task neural networks, and for analysing the data at said one or more task neural networks.
  • the means comprises at least one processor, and a memory including a computer program code, wherein the processor may further comprise processor circuitry.
  • the memory and the computer program code are configured to, with the at least one processor, cause the apparatus to perform the method of Figure 17 according to various embodiments.
  • An apparatus for training an intermediate result processing neural network comprises means for obtaining a set of intermediate decoded data and a set of corresponding intermediate task neural network outputs; for using one or more intermediate result processing network to obtain one or more IRP data; for inputting the one or more IRP data to one or more task neural networks; for obtaining one or more output from the one or more task neural networks; for computing a loss based on the output from the one or more task neural networks; for computing gradients of the loss with respect to one or more weights present in one or more intermediate results processing neural network; and for updating the weights according to the computed gradients.
  • the means comprises at least one processor, and a memory including a computer program code, wherein the processor may further comprise processor circuitry.
  • the memory and the computer program code are configured to, with the at least one processor, cause the apparatus to perform the method of Figure 18 according to various embodiments.
  • FIG 19 illustrates an example of an apparatus.
  • the apparatus is a user equipment for the purposes of the present embodiments.
  • the apparatus 90 comprises a main processing unit 91, a memory 92, a user interface 94, and a communication interface 93.
  • the apparatus may also comprise a camera module 95.
  • the apparatus may be configured to receive image and/or video data from an external camera device over a communication network.
  • the memory 92 stores data including computer program code in the apparatus 90.
  • the computer program code is configured to implement the method according to various embodiments by means of various computer modules.
  • the camera module 95 or the communication interface 93 receives data, in the form of images or a video stream, to be processed by the processor 91.
  • the communication interface 93 forwards processed data, i.e., the image file, for example to a display of another device, such as a virtual reality headset.
  • the apparatus 90 is a video source comprising the camera module 95.
  • user inputs may be received from the user interface.
  • a device may comprise circuitry and electronics for handling, receiving, and transmitting data, computer program code in a memory, and a processor that, when running the computer program code, causes the device to carry out the features of an embodiment.
  • a network device like a server may comprise circuitry and electronics for handling, receiving, and transmitting data, computer program code in a memory, and a processor that, when running the computer program code, causes the network device to carry out the features of embodiments.
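The bullets above describe the IRP-NN as a network that maps its two types of input to feature maps, combines them, and maps the combined features back to a video-shaped tensor, and they describe a training iteration driven by a task loss and SGD. The following is a minimal sketch of how such a module and one training iteration could look in PyTorch; the class name SimpleIRPNN, the layer sizes, the 21-channel segmentation-style task output, and the stand-in frozen Task-NN are all illustrative assumptions, not the implementation shown in the figures.

```python
# Hedged sketch of an IRP-NN and one training iteration, assuming PyTorch.
# All names, layer sizes and loss choices are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleIRPNN(nn.Module):
    """Toy Intermediate Result Processing NN: two input branches, a fusion step,
    and a mapping back to a tensor with the shape of the decoded video."""
    def __init__(self, video_channels=3, task_channels=21, feat=32):
        super().__init__()
        # First type of input: the intermediate decoded video.
        self.video_branch = nn.Conv2d(video_channels, feat, kernel_size=3, padding=1)
        # Second type of input: the intermediate task-NN output (e.g. segmentation logits).
        self.task_branch = nn.Conv2d(task_channels, feat, kernel_size=3, padding=1)
        # Map the combined features to an output with the same shape as the video.
        self.out = nn.Conv2d(feat, video_channels, kernel_size=3, padding=1)

    def forward(self, decoded_video, intermediate_task_output):
        v = F.relu(self.video_branch(decoded_video))
        t = F.relu(self.task_branch(intermediate_task_output))
        combined = v + t              # combination by summation (concatenation is an alternative)
        return self.out(combined)     # IRP decoded video

irp_nn = SimpleIRPNN()
task_nn = nn.Conv2d(3, 21, kernel_size=3, padding=1)   # stand-in for a pretrained, frozen Task-NN
for p in task_nn.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.SGD(irp_nn.parameters(), lr=1e-3)

decoded_video = torch.rand(1, 3, 64, 64)           # stand-in for an intermediate decoded frame
intermediate_out = torch.rand(1, 21, 64, 64)        # stand-in for an intermediate task-NN output
ground_truth = torch.randint(0, 21, (1, 64, 64))    # stand-in for task ground-truth labels

# --- one illustrative training iteration: only the IRP-NN parameters are updated ---
irp_video = irp_nn(decoded_video, intermediate_out)
task_logits = task_nn(irp_video)
loss = F.cross_entropy(task_logits, ground_truth)   # task loss for a segmentation-like task
loss.backward()                                     # gradients w.r.t. the IRP-NN parameters
optimizer.step()
optimizer.zero_grad()
```

In a real system the Task-NN would be a pretrained analysis network (for example a semantic segmentation model) whose parameters stay frozen while only the IRP-NN parameters are updated, as described in the training bullets above.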

Abstract

The embodiments relate to a method and an apparatus for implementing the method. The apparatus comprises means for receiving decoded data from a decoder (1003); means for performing an intermediate analysis of the decoded data at one or more intermediate task neural networks (1006); means for providing the output from said one or more intermediate task neural networks as a first input to one or more intermediate result processing neural networks (1005) and means for providing the decoded data (1004) as a second input to said one or more intermediate result processing neural networks (1005); means for providing an output from said one or more intermediate result processing neural networks (1008) to one or more task neural networks (1009), the output representing combined features of the decoded data and the output of said one or more intermediate task neural networks; and means for analysing the data at said one or more task neural networks.

Description

A METHOD, AN APPARATUS AND A COMPUTER PROGRAM PRODUCT FOR VIDEO ENCODING AND VIDEO DECODING
The project leading to this application has received funding from the ECSEL Joint Undertaking (JU) under grant agreement No 876019. The JU receives support from the European Union’s Horizon 2020 research and innovation programme and Germany, Netherlands, Austria, Romania, France, Sweden, Cyprus, Greece, Lithuania, Portugal, Italy, Finland, Turkey.
Technical Field
The present solution generally relates to video coding, and in particular to video coding for machines.
Background
One of the elements in image and video compression is to compress data while maintaining a quality that satisfies human perceptual ability. However, with recent developments in machine learning, machines can replace humans when analyzing data, for example in order to detect events and/or objects in video/image. Thus, when decoded image data is consumed by machines, the quality targeted by the compression may be different from the quality perceived by humans. Therefore, the concept of Video Coding for Machines (VCM) has been introduced.
Summary
The scope of protection sought for various embodiments of the invention is set out by the independent claims. The embodiments and features, if any, described in this specification that do not fall under the scope of the independent claims are to be interpreted as examples useful for understanding various embodiments of the invention.
Various aspects include a method, an apparatus and a computer readable medium comprising a computer program stored therein, which are characterized by what is stated in the independent claims. Various embodiments are disclosed in the dependent claims. According to a first aspect, there is provided an apparatus comprising means for receiving decoded data from a decoder; means for performing an intermediate analysis of the decoded data at one or more intermediate task neural networks; means for providing the output from said one or more intermediate task neural networks as a first input to one or more intermediate result processing neural networks and means for providing the decoded data as a second input to said one or more intermediate result processing neural networks; means for providing an output from said one or more intermediate result processing neural networks to one or more task neural networks, the output representing combined features of the decoded data and the output of said one or more intermediate task neural networks; and means for analysing the data at said one or more task neural networks.
According to a second aspect, there is provided a method comprising receiving decoded data from a decoder; performing an intermediate analysis of the decoded data at one or more intermediate task neural networks; providing an output from said one or more intermediate task neural networks as a first input to one or more intermediate result processing neural networks and providing the decoded data as a second input to said one or more intermediate result processing neural networks; providing an output from said one or more intermediate result processing neural networks to one or more task neural networks, the output representing combined features of the decoded data and the output of said one or more intermediate task neural networks; and analysing the received data at said one or more task neural networks.
According to a third aspect, there is provided an apparatus comprising at least one processor, memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: receive decoded data from a decoder; perform an intermediate analysis of the decoded data at one or more intermediate task neural networks; provide the output from said one or more intermediate task neural networks as a first input to one or more intermediate result processing neural networks and provide the decoded data as a second input to said one or more intermediate result processing neural networks; provide an output from said one or more intermediate result processing neural networks to one or more task neural networks, the output representing combined features of the decoded data and the output of said one or more intermediate task neural networks; and analyse the data at said one or more task neural networks.

According to an embodiment, the intermediate result processing neural network comprises means for mapping the first and the second inputs to two sets of feature maps; means for combining the two sets of feature maps; and means for mapping the combined sets of feature maps to one or more output tensors, thus generating the output of the intermediate result processing neural networks.
According to an embodiment, the outputs from said one or more intermediate task neural networks are mapped to a common representation at an intermediate task results mapping neural network.
According to an embodiment, decoded data is a video frame, wherein the intermediate output data is one of the following: a predicted frame, a frame obtained by adding a prediction error to a predicted frame; a frame obtained by adding a decompressed prediction error to a predicted frame; or a frame which is output by an in-loop filter.
According to an embodiment, the decoded data is audio data or text data.
According to an embodiment, the task neural network performs one or more of the following: image classification; video classification; image segmentation; video segmentation; object tracking; anomaly detection; action detection; action classification; event detection; filtering; captioning; or visual question answering.
According to an embodiment, at least the intermediate result processing neural network is trained with data comprising a dataset of intermediate decoded videos and ground-truth data for the intermediate decoded videos and one or more task neural networks.
According to an embodiment, encoder and/or decoder neural networks are trained jointly with the intermediate result processing neural network.
Description of the Drawings
In the following, various embodiments will be described in more detail with reference to the appended drawings, in which Fig. 1 shows an example of a codec with neural network (NN) components;
Fig. 2 shows another example of a video coding system with neural network components;
Fig. 3 shows an example of a neural auto-encoder architecture;
Fig. 4 shows an example of a neural network-based end-to-end learned video coding system;
Fig. 5 shows an example of video coding for machines;
Fig. 6 shows an example of a pipeline for end-to-end learned approach to video coding for machines;
Fig. 7 shows an example of training an end-to-end learned system for video coding for machines;
Fig. 8 shows an example of a baseline system comprising an encoder, a decoder and at least one task-NN;
Fig. 9 shows an example of a baseline system comprising an encoder, a decoder, a post-processing neural network, a task-NN;
Fig. 10 shows an example of a system according to some embodiments comprising an Intermediate Result Processing neural network (IRP-NN);
Fig. 11 shows an example of internal components of IRP-NN;
Fig. 12 shows an example of Intermediate Task Results Mapping neural network (ITRM-NN);
Fig. 13 shows an example of training only IRP-NN;
Fig. 14 shows an example of training one IRP-NN and one or more neural networks of encoder and of decoder;
Fig. 15 shows an example of training one or more neural networks based on at least a task loss and a rate loss;
Fig. 16 shows an example of training encoder NN and/or decoder NN jointly with IRP-NN, based on at least a task loss and an intermediate task loss;
Fig. 17 is a flowchart illustrating a method according to an embodiment;
Fig. 18 is a flowchart illustrating a method for training according to an embodiment; and
Fig. 19 shows an apparatus according to an embodiment.
Description of Example Embodiments
With respect to the VCM, the present embodiments are targeted to using intermediate machine vision tasks at decoder side in video coding for machines.
The following description and drawings are illustrative and are not to be construed as unnecessarily limiting. The specific details are provided for a thorough understanding of the disclosure. However, in certain instances, well-known or conventional details are not described in order to avoid obscuring the description. References to one or an embodiment in the present disclosure can be, but are not necessarily, references to the same embodiment, and such references mean at least one of the embodiments.
Reference in this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure.
A neural network (NN) is a computation graph consisting of several layers of computation. Each layer consists of one or more units, where each unit performs an elementary computation. A unit is connected to one or more other units, and the connection may have a weight associated with it. The weight may be used for scaling the signal passing through the associated connection. Weights are learnable parameters, i.e., values which can be learned from training data. There may be other learnable parameters, such as those of batch-normalization layers.
Two of the most widely used architectures for neural networks are feed-forward and recurrent architectures. Feed-forward neural networks are such that there is no feedback loop: each layer takes input from one or more of the layers before and provides its output as the input for one or more of the subsequent layers. Also, units inside a certain layer take input from units in one or more of preceding layers and provide output to one or more of following layers.
Initial layers (those close to the input data) extract semantically low-level features such as edges and textures in images, and intermediate and final layers extract more high-level features. After the feature extraction layers there may be one or more layers performing a certain task, such as classification, semantic segmentation, object detection, denoising, style transfer, super-resolution, etc. In recurrent neural nets, there is a feedback loop, so that the network becomes stateful, i.e., it is able to memorize information or a state.
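As a concrete illustration of the layer-and-unit structure described above, the following is a minimal sketch of a small feed-forward network in PyTorch; the layer sizes are arbitrary assumptions chosen only for illustration.

```python
# A minimal feed-forward network: each layer feeds only subsequent layers (no feedback loop).
# Layer sizes are arbitrary, for illustration only.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 32),   # each unit computes a weighted sum of its inputs (learnable weights)
    nn.ReLU(),           # elementary non-linear computation per unit
    nn.Linear(32, 8),    # output layer, e.g. 8 class scores for a classification task
)

x = torch.rand(4, 16)    # a batch of 4 input vectors
scores = model(x)        # forward pass through the layers in order
print(scores.shape)      # torch.Size([4, 8])
```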
Neural networks are being utilized in an ever-increasing number of applications for many different types of devices, such as mobile phones. Examples include image and video analysis and processing, social media data analysis, device usage data analysis, etc.
One of the important properties of neural nets (and other machine learning tools) is that they are able to learn properties from input data, either in a supervised way or in an unsupervised way. Such learning is a result of a training algorithm, or of a meta-level neural network providing the training signal.
In general, the training algorithm consists of changing some properties of the neural network so that its output is as close as possible to a desired output. For example, in the case of classification of objects in images, the output of the neural network can be used to derive a class or category index which indicates the class or category that the object in the input image belongs to. Training usually happens by minimizing or decreasing the output’s error, also referred to as the loss. Examples of losses are mean squared error, cross-entropy, etc. In recent deep learning techniques, training is an iterative process, where at each iteration the algorithm modifies the weights of the neural net to make a gradual improvement of the network’s output, i.e., to gradually decrease the loss.
In this description, terms “model”, “neural network”, “neural net” and “network” are used interchangeably, and the weights of neural networks are sometimes referred to as learnable parameters or simply as parameters.
Training a neural network is an optimization process, but the final goal is different from the typical goal of optimization. In optimization, the only goal is to minimize a function. In machine learning, the goal of the optimization or training process is to make the model learn the properties of the data distribution from a limited training dataset. In other words, the goal is to learn to use a limited training dataset in order to learn to generalize to previously unseen data, i.e., data which was not used for training the model. This is usually referred to as generalization. In practice, data may be split into at least two sets, the training set and the validation set. The training set is used for training the network, i.e., to modify its learnable parameters in order to minimize the loss. The validation set is used for checking the performance of the network on data, which was not used to minimize the loss, as an indication of the final performance of the model. In particular, the errors on the training set and on the validation set are monitored during the training process to understand the following things:
- If the network is learning at all - in this case, the training set error should decrease, otherwise the model is in the regime of underfitting.
- If the network is learning to generalize - in this case, also the validation set error needs to decrease and to be not too much higher than the training set error. If the training set error is low, but the validation set error is much higher than the training set error, or it does not decrease, or it even increases, the model is in the regime of overfitting. This means that the model has just memorized the training set’s properties and performs well only on that set but performs poorly on a set not used for tuning its parameters.
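As a rough illustration of this monitoring, the sketch below tracks the training and validation losses over iterations and flags possible overfitting when the validation loss stops improving; the helper functions train_step and eval_loss, as well as the patience threshold, are hypothetical placeholders.

```python
# Hedged sketch: monitoring training and validation losses to detect overfitting.
# The helpers train_step() and eval_loss() are hypothetical placeholders.
def monitor_training(train_step, eval_loss, num_iterations=1000, patience=50):
    best_val = float("inf")
    since_best = 0
    for it in range(num_iterations):
        train_loss = train_step()   # one optimization step on the training set
        val_loss = eval_loss()      # loss on data not used for updating the weights
        if val_loss < best_val:
            best_val, since_best = val_loss, 0
        else:
            since_best += 1
        if since_best > patience:
            print(f"iteration {it}: validation loss has not improved for {patience} steps "
                  f"(train {train_loss:.4f} vs val {val_loss:.4f}) -> possible overfitting")
            break
```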
Lately, neural networks have been used for compressing and de-compressing data such as images, i.e., in an image codec. The most widely used architecture for realizing one component of an image codec is the auto-encoder, which is a neural network consisting of two parts: a neural encoder and a neural decoder. The neural encoder takes as input an image and produces a code which requires less bits than the input image. This code may be obtained by applying a binarization or quantization process to the output of the encoder. The neural decoder takes in this code and reconstructs the image which was input to the neural encoder.
Such neural encoder and neural decoder may be trained to minimize a combination of bitrate and distortion, where the distortion may be based on one or more of the following metrics: Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), or similar. These distortion metrics are meant to be correlated to the human visual perception quality, so that minimizing or maximizing one or more of these distortion metrics results into improving the visual quality of the decoded image as perceived by humans.
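As a small illustration of the distortion metrics mentioned above, the following sketch computes MSE and the corresponding PSNR for a pair of 8-bit images; it is only an example and not the exact metric implementation of any particular codec.

```python
# Mean Squared Error and Peak Signal-to-Noise Ratio for 8-bit images (illustrative sketch).
import numpy as np

def mse(original: np.ndarray, decoded: np.ndarray) -> float:
    return float(np.mean((original.astype(np.float64) - decoded.astype(np.float64)) ** 2))

def psnr(original: np.ndarray, decoded: np.ndarray, max_value: float = 255.0) -> float:
    err = mse(original, decoded)
    if err == 0:
        return float("inf")   # identical images
    return 10.0 * np.log10((max_value ** 2) / err)

a = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)                       # "original"
b = np.clip(a.astype(np.int16) + np.random.randint(-3, 4, a.shape), 0, 255).astype(np.uint8)  # "decoded"
print(mse(a, b), psnr(a, b))
```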
Video codec comprises an encoder that transforms the input video into a compressed representation suited for storage/transmission and a decoder that can decompress the compressed video representation back into a viewable form. An encoder may discard some information in the original video sequence in order to represent the video in a more compact form (that is, at lower bitrate).
Hybrid video codecs, for example ITU-T H.263 and H.264, may encode the video information in two phases. Firstly, pixel values in a certain picture area (or “block”) are predicted for example by motion compensation means (finding and indicating an area in one of the previously coded video frames that corresponds closely to the block being coded) or by spatial means (using the pixel values around the block to be coded in a specified manner). Secondly the prediction error, i.e., the difference between the predicted block of pixels and the original block of pixels, is coded. This may be done by transforming the difference in pixel values using a specified transform (e.g., Discrete Cosine Transform (DCT) or a variant of it), quantizing the coefficients and entropy coding the quantized coefficients. By varying the fidelity of the quantization process, encoder can control the balance between the accuracy of the pixel representation (picture quality) and size of the resulting coded video representation (file size or transmission bitrate).
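The two-phase structure described above (prediction followed by coding of the prediction error) can be illustrated with the toy sketch below, in which a block residual is transformed with a DCT, quantized with a step size that plays the role of the quantization fidelity, and reconstructed; the block size and step size are arbitrary assumptions and entropy coding is omitted.

```python
# Toy illustration of hybrid coding: residual -> DCT -> quantization -> reconstruction.
# Block size and quantization step are arbitrary; entropy coding is omitted.
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):  return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")
def idct2(block): return idct(idct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

original = np.random.randint(0, 256, (8, 8)).astype(np.float64)   # block to be coded
prediction = np.full((8, 8), original.mean())                     # crude stand-in for intra/inter prediction

residual = original - prediction                     # phase 2 input: the prediction error
coeffs = dct2(residual)                              # transform (DCT)
step = 16.0                                          # larger step -> fewer bits, lower quality
quantized = np.round(coeffs / step)                  # lossy step: quantization
reconstructed = prediction + idct2(quantized * step) # decoder-side reconstruction
print(np.mean((original - reconstructed) ** 2))      # distortion introduced by quantization
```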
Inter prediction, which may also be referred to as temporal prediction, motion compensation, or motion-compensated prediction, exploits temporal redundancy. In inter prediction the sources of prediction are previously decoded pictures.
Intra prediction utilizes the fact that adjacent pixels within the same picture are likely to be correlated. Intra prediction can be performed in spatial or transform domain, i.e., either sample values or transform coefficients can be predicted. Intra prediction is typically exploited in intra coding, where no inter prediction is applied.
One outcome of the coding procedure is a set of coding parameters, such as motion vectors and quantized transform coefficients. Many parameters can be entropy- coded more efficiently if they are predicted first from spatially or temporally neighboring parameters. For example, a motion vector may be predicted from spatially adjacent motion vectors and only the difference relative to the motion vector predictor may be coded. Prediction of coding parameters and intra prediction may be collectively referred to as in-picture prediction.
The decoder reconstructs the output video by applying prediction means similar to the encoder to form a predicted representation of the pixel blocks (using the motion or spatial information created by the encoder and stored in the compressed representation) and prediction error decoding (inverse operation of the prediction error coding recovering the quantized prediction error signal in spatial pixel domain). After applying prediction and prediction error decoding means, the decoder sums up the prediction and prediction error signals (pixel values) to form the output video frame. The decoder (and encoder) can also apply additional filtering means to improve the quality of the output video before passing it for display and/or storing it as prediction reference for the forthcoming frames in the video sequence.
In video codecs, the motion information may be indicated with motion vectors associated with each motion compensated image block. Each of these motion vectors represents the displacement of the image block in the picture to be coded (in the encoder side) or decoded (in the decoder side) and the prediction source block in one of the previously coded or decoded pictures. In order to represent motion vectors efficiently, those may be coded differentially with respect to block specific predicted motion vectors. In video codecs, the predicted motion vectors may be created in a predefined way, for example calculating the median of the encoded or decoded motion vectors of the adjacent blocks. Another way to create motion vector predictions is to generate a list of candidate predictions from adjacent blocks and/or co-located blocks in temporal reference pictures and signaling the chosen candidate as the motion vector predictor. In addition to predicting the motion vector values, the reference index of a previously coded/decoded picture can be predicted. The reference index is typically predicted from adjacent blocks and/or co-located blocks in a temporal reference picture. Moreover, high efficiency video codecs can employ an additional motion information coding/decoding mechanism, often called merging/merge mode, where all the motion field information, which includes motion vector and corresponding reference picture index for each available reference picture list, is predicted and used without any modification/correction. Similarly, predicting the motion field information may be carried out using the motion field information of adjacent blocks and/or co-located blocks in temporal reference pictures, and the used motion field information is signaled by means of an index into a motion field candidate list filled with motion field information of available adjacent/co-located blocks.
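As a small illustration of the differential motion vector coding described above, the sketch below forms a predicted motion vector as the component-wise median of the motion vectors of adjacent blocks and codes only the difference; the choice of three spatial neighbours and the example values are simplifying assumptions, not the exact rule of any standard.

```python
# Illustrative sketch: median motion-vector prediction and differential coding.
# The choice of three spatial neighbours and the example values are simplifications.
from statistics import median

def predict_mv(neighbour_mvs):
    """Component-wise median of neighbouring motion vectors (list of (dx, dy) tuples)."""
    xs = [mv[0] for mv in neighbour_mvs]
    ys = [mv[1] for mv in neighbour_mvs]
    return (median(xs), median(ys))

neighbours = [(4, -2), (3, -1), (5, -2)]   # MVs of left, top and top-right blocks (example values)
current_mv = (4, -1)                       # MV found by motion estimation for the current block

predictor = predict_mv(neighbours)         # (4, -2)
mv_difference = (current_mv[0] - predictor[0], current_mv[1] - predictor[1])
print(predictor, mv_difference)            # only mv_difference would be entropy coded

# Decoder side: reconstruct the MV from the same predictor and the coded difference.
decoded_mv = (predictor[0] + mv_difference[0], predictor[1] + mv_difference[1])
assert decoded_mv == current_mv
```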
In video codecs the prediction residual after motion compensation may be first transformed with a transform kernel (like DCT) and then coded. The reason for this is that often there still exists some correlation within the residual, and the transform can in many cases help reduce this correlation and provide more efficient coding.
Video encoders may utilize Lagrangian cost functions to find optimal coding modes, e.g., the desired Macroblock mode and associated motion vectors. This kind of cost function uses a weighting factor to tie together the (exact or estimated) image distortion due to lossy coding methods and the (exact or estimated) amount of information that is required to represent the pixel values in an image area:
C = D + λR, where C is the Lagrangian cost to be minimized, D is the image distortion (e.g., Mean Squared Error) with the mode and motion vectors considered, and R is the number of bits needed to represent the required data to reconstruct the image block in the decoder (including the amount of data to represent the candidate motion vectors).
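This cost can be used for mode decision by evaluating C = D + λR for each candidate coding mode and selecting the minimum, as in the sketch below; the candidate modes, their distortion and rate values, and the value of λ are made-up examples.

```python
# Rate-distortion optimised mode selection with a Lagrangian cost C = D + lambda * R.
# The candidate modes, their distortion/rate values and lambda are made-up examples.
candidates = {
    "intra_dc":      {"D": 120.0, "R": 35},   # distortion (e.g. SSD) and rate in bits
    "intra_angular": {"D": 95.0,  "R": 48},
    "inter_merge":   {"D": 80.0,  "R": 60},
}
lam = 0.85  # Lagrange multiplier; in practice typically derived from the quantization parameter

def lagrangian_cost(D, R, lam):
    return D + lam * R

best_mode = min(candidates,
                key=lambda m: lagrangian_cost(candidates[m]["D"], candidates[m]["R"], lam))
print(best_mode, lagrangian_cost(**candidates[best_mode], lam=lam))
```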
Video coding specifications may enable the use of supplemental enhancement information (SEI) messages or alike. Some video coding specifications include SEI NAL units, and some video coding specifications contain both prefix SEI NAL units and suffix SEI NAL units, where the former type can start a picture unit or alike and the latter type can end a picture unit or alike. An SEI NAL unit contains one or more SEI messages, which are not required for the decoding of output pictures but may assist in related processes, such as picture output timing, post-processing of decoded pictures, rendering, error detection, error concealment, and resource reservation. Several SEI messages are specified in H.264/AVC, H.265/HEVC, H.266/VVC, and H.274/VSEI standards, and the user data SEI messages enable organizations and companies to specify SEI messages for their own use. The standards may contain the syntax and semantics for the specified SEI messages but a process for handling the messages in the recipient might not be defined. Consequently, encoders may be required to follow the standard specifying a SEI message when they create SEI message(s), and decoders might not be required to process SEI messages for output order conformance. One of the reasons to include the syntax and semantics of SEI messages in standards is to allow different system specifications to interpret the supplemental information identically and hence interoperate. It is intended that system specifications can require the use of particular SEI messages both in the encoding end and in the decoding end, and additionally the process for handling particular SEI messages in the recipient can be specified.
A design principle has been followed for SEI message specifications: the SEI messages are generally not extended in future amendments or versions of the standard.
Image and video codecs may use a set of filters to enhance the visual quality of the predicted visual content and can be applied either in-loop or out-of-loop, or both. In the case of in-loop filters, the filter applied on one block in the currently-encoded frame will affect the encoding of another block in the same frame and/or in another frame which is predicted from the current frame. An in-loop filter can affect the bitrate and/or the visual quality. In fact, an enhanced block will cause a smaller residual (difference between original block and predicted-and-filtered block), thus requiring fewer bits to be encoded. An out-of-loop filter will be applied on a frame after it has been reconstructed; the filtered visual content won't be used as a source for prediction, and thus it may only impact the visual quality of the frames that are output by the decoder.
Recently, neural networks (NNs) have been used in the context of image and video compression, by following mainly two approaches.
In one approach, NNs are used to replace one or more of the components of a traditional codec such as VVC/H.266. Here, by “traditional” we mean those codecs whose components and their parameters are typically not learned from data. Examples of such components are:
- Additional in-loop filter, for example by having the NN as an additional in-loop filter with respect to the traditional loop filters.
- Single in-loop filter, for example by having the NN replacing all traditional in-loop filters.
- Intra-frame prediction.
- Inter-frame prediction.
- Transform and/or inverse transform.
- Probability model for the arithmetic codec.
- Etc.
Figure 1 illustrates examples of functioning of NNs as components of a traditional codec's pipeline, in accordance with an embodiment. In particular, Figure 1 illustrates an encoder, which also includes a decoding loop. Figure 1 is shown to include components described below:
- A luma intra pred block or circuit 101. This block or circuit performs intra prediction in the luma domain, for example, by using already reconstructed data from the same frame. The operation of the luma intra pred block or circuit 101 may be performed by a deep neural network such as a convolutional auto-encoder.
- A chroma intra pred block or circuit 102. This block or circuit performs intra prediction in the chroma domain, for example, by using already reconstructed data from the same frame. The chroma intra pred block or circuit 102 may perform crosscomponent prediction, for example, predicting chroma from luma. The operation of the chroma intra pred block or circuit 102 may be performed by a deep neural network such as a convolutional auto-encoder.
- An intra pred block or circuit 103 and inter-pred block or circuit 104. These blocks or circuits perform intra prediction and inter-prediction, respectively. The intra pred block or circuit 103 and the inter-pred block or circuit 104 may perform the prediction on all components, for example, luma and chroma. The operations of the intra pred block or circuit 103 and inter-pred block or circuit 104 may be performed by two or more deep neural networks such as convolutional auto-encoders.
- A probability estimation block or circuit 105 for entropy coding. This block or circuit performs prediction of probability for the next symbol to encode or decode, which is then provided to the entropy coding module 112, such as the arithmetic coding module, to encode or decode the next symbol. The operation of the probability estimation block or circuit 105 may be performed by a neural network.
- A transform and quantization (T/Q) block or circuit 106. These are two blocks or circuits. The transform and quantization block or circuit 106 may perform a transform of input data to a different domain, for example, the FFT transform would transform the data to frequency domain. The transform and quantization block or circuit 106 may quantize its input values to a smaller set of possible values. In the decoding loop, there may be inverse quantization block or circuit and inverse transform block or circuit 113. One or both transform block or circuit, and quantization block or circuit may be replaced by one or two or more neural networks. One or both inverse transform block or circuit and inverse quantization block or circuit 113 may be replaced by one or two or more neural networks.
- An in-loop filter block or circuit 107. The operation of the in-loop filter block or circuit 107 is performed in the decoding loop, and it performs filtering on the output of the inverse transform block or circuit, or anyway on the reconstructed data, in order to enhance the reconstructed data with respect to one or more predetermined quality metrics. This filter may affect both the quality of the decoded data and the bitrate of the bitstream output by the encoder. The operation of the in-loop filter block or circuit 107 may be performed by a neural network, such as a convolutional auto-encoder. In examples, the operation of the in-loop filter may be performed by multiple steps or filters, where the one or more steps may be performed by neural networks.
- A postprocessing filter block or circuit 108. The postprocessing filter block or circuit 108 may be performed only at decoder side, as it may not affect the encoding process. The postprocessing filter block or circuit 108 filters the reconstructed data output by the in-loop filter block or circuit 107, in order to enhance the reconstructed data. The postprocessing filter block or circuit 108 may be replaced by a neural network, such as a convolutional auto-encoder.
- A resolution adaptation block or circuit 109: this block or circuit may downsample the input video frames, prior to encoding. Then, in the decoding loop, the reconstructed data may be upsampled, by the upsampling block or circuit 110, to the original resolution. The operation of the resolution adaptation block or circuit 109 may be performed by a neural network such as a convolutional auto-encoder.
- An encoder control block or circuit 111. This block or circuit performs optimization of encoder's parameters, such as what transform to use, what quantization parameters (QP) to use, what intra-prediction mode (out of N intra-prediction modes) to use, and the like. The operation of the encoder control block or circuit 111 may be performed by a neural network, such as a classifier convolutional network, or such as a regression convolutional network.
- An ME/MC block or circuit 114 performs motion estimation and/or motion compensation, which are two key operations to be performed when performing interframe prediction. ME/MC stands for motion estimation / motion compensation.
In another approach, commonly referred to as “end-to-end learned compression”, NNs are used as the main components of the image/video codecs. In this second approach, there are two main options:
Option 1 : re-use the video coding pipeline but replace most or all the components with NNs. Referring to Figure 2, it illustrates an example of modified video coding pipeline based on a neural network, in accordance with an embodiment. An example of neural network may include, but is not limited to, a compressed representation of a neural network. Figure 2 is shown to include following components:
- A neural transform block or circuit 202: this block or circuit transforms the output of a summation/subtraction operation 203 to a new representation of that data, which may have lower entropy and thus be more compressible.
- A quantization block or circuit 204: this block or circuit quantizes an input data 201 to a smaller set of possible values.
- An inverse transform and inverse quantization blocks or circuits 206. These blocks or circuits perform the inverse or approximately inverse operation of the transform and the quantization, respectively.
- An encoder parameter control block or circuit 208. This block or circuit may control and optimize some or all the parameters of the encoding process, such as parameters of one or more of the encoding blocks or circuits.
- An entropy coding block or circuit 210. This block or circuit may perform lossless coding, for example based on entropy. One popular entropy coding technique is arithmetic coding.
- A neural intra-codec block or circuit 212. This block or circuit may be an image compression and decompression block or circuit, which may be used to encode and decode an intra frame. An encoder 214 may be an encoder block or circuit, such as the neural encoder part of an auto-encoder neural network. A decoder 216 may be a decoder block or circuit, such as the neural decoder part of an auto-encoder neural network. An intra-coding block or circuit 218 may be a block or circuit performing some intermediate steps between encoder and decoder, such as quantization, entropy encoding, entropy decoding, and/or inverse quantization.
- A deep loop filter block or circuit 220. This block or circuit performs filtering of reconstructed data, in order to enhance it.
- A decode picture buffer block or circuit 222. This block or circuit is a memory buffer, keeping the decoded frame, for example, reconstructed frames 224 and enhanced reference frames 226 to be used for inter- prediction.
- An inter-prediction block or circuit 228. This block or circuit performs interframe prediction, for example, predicts from frames, for example, frames 232, which are temporally nearby. An ME/MC 230 performs motion estimation and/or motion compensation, which are two key operations to be performed when performing inter-frame prediction. ME/MC stands for motion estimation / motion compensation.
Option 2: re-design the whole pipeline, as follows. An example of option 2 is described in detail in Figure 3:
- Encoder NN: performs a non-linear transform
- Quantization and lossless encoding of the encoder NN's output.
- Lossless decoding and dequantization.
- Decoder NN: performs a non-linear inverse transform.
Figure 3 depicts an encoder and a decoder NNs being parts of a neural autoencoder architecture, in accordance with an example. In Figure 3, the Analysis Network 301 is an Encoder NN, and the Synthesis Network 302 is the Decoder NN, which may together be referred to as spatial correlation tools 303, or as neural autoencoder. In Option 2, the input data 304 is analyzed by the Encoder NN, Analysis Network 301 , which outputs a new representation of that input data. The new representation may be more compressible. This new representation may then be quantized, by a quantizer 305, to a discrete number of values. The quantized data may be then lossless encoded, for example, by an arithmetic encoder 306, thus obtaining a bitstream 307. The example shown in Figure 3 includes an arithmetic decoder 308 and an arithmetic encoder 306. The arithmetic encoder 306, or the arithmetic decoder 308, or the combination of the arithmetic encoder 306 and arithmetic decoder 308 may be referred to as arithmetic codec in some embodiments. On the decoding side, the bitstream is first lossless decoded, for example, by using the arithmetic codec decoder 308. The lossless decoded data is dequantized and then input to the Decoder NN, Synthesis Network 302. The output is the reconstructed or decoded data 309.
In case of lossy compression, the lossy steps may comprise the Encoder NN and/or the quantization.
In order to train this system, a training objective function (also called “training loss”) may be utilized, which may comprise one or more terms, or loss terms, or simply losses. In one example, the training loss comprises a reconstruction loss term and a rate loss term. The reconstruction loss encourages the system to decode data that is similar to the input data, according to some similarity metric. Examples of reconstruction losses are:
- Mean squared error (MSE).
- Multi-scale structural similarity (MS-SSIM)
- Losses derived from the use of a pretrained neural network. For example, error(f1 , f2), where f1 and f2 are the features extracted by a pretrained neural network for the input data and the decoded data, respectively, and error() is an error or distance function, such as L1 norm or L2 norm.
- Losses derived from the use of a neural network that is trained simultaneously with the end-to-end learned codec. For example, adversarial loss can be used, which is the loss provided by a discriminator neural network that is trained adversarially with respect to the codec, following the settings proposed in the context of Generative Adversarial Networks (GANs) and their variants.
The rate loss encourages the system to compress the output of the encoding stage, such as the output of the arithmetic encoder. By “compressing”, we mean reducing the number of bits output by the encoding stage.
When an entropy-based lossless encoder is used, such as an arithmetic encoder, the rate loss typically encourages the output of the Encoder NN to have low entropy. Examples of rate losses are the following:
- A differentiable estimate of the entropy.
- A sparsification loss, i.e., a loss that encourages the output of the Encoder NN or the output of the quantization to have many zeros. Examples are L0 norm, L1 norm, L1 norm divided by L2 norm.
- A cross-entropy loss applied to the output of a probability model, where the probability model may be a NN used to estimate the probability of the next symbol to be encoded by an arithmetic encoder.
One or more of the reconstruction losses may be used, and one or more of the rate losses may be used, as a weighted sum. The different loss terms may be weighted using different weights, and these weights determine how the final system performs in terms of rate-distortion loss. For example, if more weight is given to the reconstruction losses with respect to the rate losses, the system may learn to compress less but to reconstruct with higher accuracy (as measured by a metric that correlates with the reconstruction losses). These weights may be hyper-parameters of the training session and may be set manually by the person designing the training session, or automatically, for example by grid search or by using additional neural networks.
As shown in Figure 4, a neural network-based end-to-end learned video coding system may contain an encoder 401, a quantizer 402, a probability model 403, an entropy codec 404 (for example arithmetic encoder 405 and/or arithmetic decoder 406), a dequantizer 407, and a decoder 408. The encoder 401 and decoder 408 may be two neural networks, or mainly comprise neural network components. The probability model 403 may also comprise mainly neural network components. The quantizer 402, the dequantizer 407 and the entropy codec 404 (for example, arithmetic encoder 405 and/or arithmetic decoder 406) may not be based on neural network components, but they may potentially also comprise neural network components. On the encoder side, the encoder component 401 takes a video as input and converts the video from its original signal space into a latent representation that may comprise a more compressible representation of the input. In the case of an input image, the latent representation may be a 3-dimensional tensor, where two dimensions represent the vertical and horizontal spatial dimensions, and the third dimension represents the “channels” which contain information at that specific location. If the input image is a 128x128x3 RGB image (with horizontal size of 128 pixels, vertical size of 128 pixels, and 3 channels for the Red, Green, Blue color components), and if the encoder downsamples the input tensor by 2 and expands the channel dimension to 32 channels, then the latent representation is a tensor of dimensions (or “shape”) 64x64x32 (i.e., with horizontal size of 64 elements, vertical size of 64 elements, and 32 channels). Please note that the order of the different dimensions may differ depending on the convention which is used; in some cases, for the input image, the channel dimension may be the first dimension, so for the above example, the shape of the input tensor may be represented as 3x128x128, instead of 128x128x3. In the case of an input video (instead of just an input image), another dimension in the input tensor may be used to represent temporal information. The quantizer component 402 quantizes the latent representation into discrete values given a predefined set of quantization levels. The probability model 403 and the arithmetic codec component 404 (for example, arithmetic encoder 405 and/or arithmetic decoder 406) work together to perform lossless compression for the quantized latent representation and generate bitstreams to be sent to the decoder side. Given a symbol to be encoded into the bitstream, the probability model 403 estimates the probability distribution of all possible values for that symbol based on a context that is constructed from available information at the current encoding/decoding state, such as the data that has already been encoded/decoded. Then, the arithmetic encoder 405 encodes the input symbols to the bitstream using the estimated probability distributions.
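As a concrete illustration of the shape example above, the following minimal sketch (using PyTorch and the channels-first convention) shows an encoder that downsamples the spatial dimensions by 2 and expands the channel dimension to 32; the single strided convolution and the layer sizes are illustrative assumptions rather than an implementation prescribed by this disclosure.

```python
import torch
import torch.nn as nn

# Minimal encoder sketch: downsample by 2, expand to 32 channels.
# A practical encoder NN would typically stack several such layers
# with non-linearities in between.
encoder = nn.Sequential(
    nn.Conv2d(in_channels=3, out_channels=32, kernel_size=5, stride=2, padding=2),
    nn.GELU(),
)

x = torch.randn(1, 3, 128, 128)   # one 128x128 RGB image, channels-first
latent = encoder(x)
print(latent.shape)               # torch.Size([1, 32, 64, 64])
```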
On the decoder side, opposite operations are performed. The arithmetic decoder 406 and the probability model 403 first decode symbols from the bitstream to recover the quantized latent representation. Then the dequantizer 407 reconstructs the latent representation in continuous values and passes it to the decoder 408 to recover the input video/image. Note that the probability model 403 in this system is shared between the encoding and decoding systems. In practice, this means that a copy of the probability model is used at the encoder side, and another exact copy is used at the decoder side. In this system, the encoder 401, probability model 403, and decoder 408 may be based on deep neural networks. The system is trained in an end-to-end manner by minimizing the following rate-distortion loss function:
L = D + λR, where D is the distortion loss term, R is the rate loss term, and λ is the weight that controls the balance between the two losses. The distortion loss term may be the mean square error (MSE), structure similarity (SSIM) or other metrics that evaluate the quality of the reconstructed video. Multiple distortion losses may be used and integrated into D, such as a weighted sum of MSE and SSIM. The rate loss term is normally the estimated entropy of the quantized latent representation, which indicates the number of bits necessary to represent the encoded symbols, for example, bits-per-pixel (bpp).
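A minimal sketch of this rate-distortion objective is shown below; the function name, the MSE distortion and the per-element normalisation of the rate term are assumptions made for illustration only.

```python
import torch.nn.functional as F

def rate_distortion_loss(x, x_hat, estimated_bits, lam=0.01):
    """Sketch of L = D + lambda * R.

    x              : original image/video tensor
    x_hat          : reconstruction produced by the decoder NN
    estimated_bits : differentiable estimate of the bitstream size,
                     e.g. the entropy estimated by the probability model
    lam            : weight balancing distortion against rate (hyper-parameter)
    """
    distortion = F.mse_loss(x_hat, x)      # D; SSIM or a weighted sum could also be used
    rate = estimated_bits / x.numel()      # R, here normalised per element (roughly bpp)
    return distortion + lam * rate
```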
For lossless video/image compression, the system contains only the probability model and arithmetic encoder/decoder. The system loss function contains only the rate loss, since the distortion loss is always zero (i.e., no loss of information).
Reducing the distortion in image and video compression is often intended to increase human perceptual quality, as humans are considered to be the end users, i.e., consuming/watching the decoded image. Recently, with the advent of machine learning, especially deep learning, there is a rising number of machines (i.e., autonomous agents) that analyze data independently of humans and that may even take decisions based on the analysis results without human intervention. Examples of such analysis are object detection, scene classification, semantic segmentation, video event detection, anomaly detection, pedestrian tracking, etc. Example use cases and applications are self-driving cars, video surveillance cameras and public safety, smart sensor networks, smart TV and smart advertisement, person re-identification, smart traffic monitoring, drones, etc. This may raise the following question: when decoded data is consumed by machines, shouldn’t we aim at a different quality metric, other than human perceptual quality, when considering media compression in inter-machine communications? Also, dedicated algorithms for compressing and decompressing data for machine consumption are likely to be different than those for compressing and decompressing data for human consumption. The set of tools and concepts for compressing and decompressing data for machine consumption is referred to here as Video Coding for Machines.
It is likely that the receiver-side device has multiple “machines” or task neural networks (Task-NNs). These multiple machines may be used in a certain combination which is for example determined by an orchestrator sub-system. The multiple machines may be used for example in succession, based on the output of the previously used machine, and/or in parallel. For example, a video which was compressed and then decompressed may be analyzed by one machine (NN) for detecting pedestrians, by another machine (another NN) for detecting cars, and by another machine (another NN) for estimating the depth of all the pixels in the frames.
In this description, the terms machine and task neural network are used interchangeably, and by such terms any process or algorithm (learned or not from data) which analyzes or processes data for a certain task is meant. In the rest of the description, other assumptions made regarding the machines considered in this disclosure may be specified in further detail.
Also, it is to be noticed that terms “receiver-side” or “decoder-side” are used to refer to the physical or abstract entity or device which contains one or more machines and runs these one or more machines on some encoded and eventually decoded video representation which is encoded by another physical or abstract entity or device, the “encoder-side device”.
The encoded video data may be stored into a memory device, for example as a file. The stored file may later be provided to another device. Alternatively, the encoded video data may be streamed from one device to another.
Figure 5 is a general illustration of the pipeline of Video Coding for Machines. A VCM encoder 502 encodes the input video into a bitstream 504. A bitrate 506 may be computed 508 from the bitstream 504 in order to evaluate the size of the bitstream. A VCM decoder 510 decodes the bitstream output by the VCM encoder 502. In Figure 5, the output of the VCM decoder 510 is referred to as “Decoded data for machines” 512. This data may be considered as the decoded or reconstructed video. However, in some implementations of this pipeline, this data may not have same or similar characteristics as the original video which was input to the VCM encoder 502. For example, this data may not be easily understandable by a human by simply rendering the data onto a screen. The output of VCM decoder is then input to one or more task neural networks 514. In the figure, for the sake of illustrating that there may be any number of task-NNs 514, there are three example task-NNs, and a non-specified one (Task-NN X). The goal of VCM is to obtain a low bitrate while guaranteeing that the task-NNs still perform well in terms of the evaluation metric 516 associated to each task.
One of the possible approaches to realize video coding for machines is an end-to-end learned approach. In this approach, the VCM encoder and VCM decoder mainly consist of neural networks. Figure 6 illustrates an example of a pipeline for the end-to-end learned approach. The video is input to a neural network encoder 601. The output of the neural network encoder 601 is input to a lossless encoder 602, such as an arithmetic encoder, which outputs a bitstream 604. The lossless codec may comprise a probability model 603, both in the lossless encoder and in the lossless decoder, which predicts the probability of the next symbol to be encoded and decoded. The probability model 603 may also be learned, for example it may be a neural network. At the decoder side, the bitstream 604 is input to a lossless decoder 605, such as an arithmetic decoder, whose output is input to a neural network decoder 606. The output of the neural network decoder 606 is the decoded data for machines 607, which may be input to one or more task-NNs 608.
Figure 7 illustrates an example of how the end-to-end learned system may be trained. For the sake of simplicity, only one task-NN 707 is illustrated. A rate loss 705 may be computed from the output of the probability model 703. The rate loss 705 provides an approximation of the bitrate required to encode the input video data. A task loss 710 may be computed 709 from the output 708 of the task-NN 707.
The rate loss 705 and the task loss 710 may then be used to train 711 the neural networks used in the system, such as the neural network encoder 701, the probability model 703, and the neural network decoder 706. Training may be performed by first computing gradients of each loss with respect to the neural networks that are contributing or affecting the computation of that loss. The gradients are then used by an optimization method, such as Adam, for updating the trainable parameters of the neural networks.
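The following sketch illustrates one such training step for the system of Figure 7; the module and helper names (encoder_nn, prob_model, decoder_nn, task_nn, rate_loss_fn, task_loss_fn, ground_truth_for, dataloader) are assumptions for illustration, and quantization and entropy coding details are omitted.

```python
import torch

# The task-NN is used only to compute the task loss; here only the
# encoder, probability model and decoder are updated, as in Figure 7.
params = (list(encoder_nn.parameters())
          + list(prob_model.parameters())
          + list(decoder_nn.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-4)

for video in dataloader:                      # batches of input videos
    latent = encoder_nn(video)
    rate = rate_loss_fn(prob_model(latent))   # approximation of the bitrate
    decoded = decoder_nn(latent)              # decoded data for machines
    task_out = task_nn(decoded)
    task = task_loss_fn(task_out, ground_truth_for(video))

    loss = task + 0.01 * rate                 # weighted sum of the two loss terms
    optimizer.zero_grad()
    loss.backward()                           # gradients w.r.t. the contributing NNs
    optimizer.step()                          # Adam update of the trainable parameters
```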
The machine tasks may be performed at decoder side (instead of at encoder side) for multiple reasons, for example because the encoder-side device does not have the capabilities (computational, power, memory) for running the neural networks that perform these tasks, or because some aspects or the performance of the task neural networks may have changed or improved by the time that the decoder-side device needs the tasks results (e.g., different or additional semantic classes, better neural network architecture). Also, there could be a customization need, where different clients would run different neural networks for performing these machine learning tasks.
As an alternative to an end-to-end trained codec, a video codec for machines can be realized by using a traditional codec such as H.266/VVC.
Alternatively, as described already above for the case of video coding for humans, another possible design may comprise using a traditional codec such as H.266/VVC, which includes one or more neural networks. In one possible implementation, the one or more neural networks may replace one of the components of the traditional codec, such as:
- One or more in-loop filters.
- One or more intra-prediction modes.
- One or more inter-prediction modes.
- One or more transforms.
- One or more inverse transforms.
- One or more probability models, for lossless coding.
- One or more post-processing filters.
In another possible implementation, the one or more neural networks may function as an additional component, such as:
- One or more additional in-loop filters.
- One or more additional intra-prediction modes.
- One or more additional inter-prediction modes.
- One or more additional transforms.
- One or more additional inverse transforms.
- One or more additional probability models, for lossless coding.
- One or more additional post-processing filters.
Alternatively, another possible design may comprise using any codec architecture (such as a traditional codec, or a traditional codec which includes one or more neural networks, or an end-to-end learned codec), and having a post-processing neural network which adapts the output of the decoder so that it can be analysed more effectively by one or more machines or task neural networks. For example, the encoder and decoder may be conformant to the H.266/VVC standard, a post-processing neural network takes the output of the decoder, and the output of the post-processing neural network is then input to an object detection neural network. In this example, the object detection neural network is the machine or task neural network.
The present embodiments are targeted to the problem of how to increase the rate-distortion performance of a video codec for machines.
The present embodiments propose to incorporate one or more intermediate task neural networks at the decoder-side and input an intermediate output frame or video to the one or more intermediate task neural networks. The output of the one or more intermediate task neural networks, referred to as intermediate output, is given as input to another neural network that we refer to as Intermediate Result Processing Neural Network (IRP-NN). The output of IRP-NN is then provided as input to one or more task NNs.
The intermediate output frame or video may be the output of a traditional video decoder, or the output of the video decoder of an end-to-end learned video codec. In this case, the IRP-NN is a post-processing NN.
Alternatively, the intermediate output frame or video may be the non-final output of a traditional video decoder, or the non-final output of the video decoder of an end-to-end learned video codec. In other words, the IRP-NN is a post-processing NN which is part of the video decoder, instead of being external to the video decoder.
Alternatively, the intermediate output frame or video may be one of the following:
- A predicted frame
- A frame obtained by adding a prediction error to a predicted frame.
- A frame obtained by adding a decompressed prediction error to a predicted frame, where the decompressed prediction error is obtained by decompressing a lossy or non-lossy (i.e., lossless) compressed prediction error.
- A frame which is output by an in-loop filter.
In these cases, the IRP-NN is an in-loop filter. The present embodiments also cover several methodologies to train the IRP-NN.
The following detailed description is based on compressing and decompressing data which is consumed by machines. The decompressed data may also be consumed by humans, either at the same time or at different times with respect to when the machines consume the decompressed data. The codec may consist of multiple parts, where some parts are used for compressing and/or decompressing data for machine consumption, and some other parts are used for compressing and/or decompressing data for human consumption.
An encoder-side device is configured to perform a compression or encoding operation by using an encoder. A decoder-side device is configured to perform decompression or decoding operation by using a decoder. The encoder-side device may also use at least some decoding operations, for example in a coding loop. The encoder-side device and the decoder-side device may be the same physical device, or different physical devices.
The present embodiments are not restricted to any specific type of data. However, for the sake of simplicity, example of the data is video data. By “video” one or more video frames are meant, unless specified otherwise. It is to be noticed that a video frame is also considered as an image, whereupon the embodiments are directly applicable with image data as well. Other example types of data are audio, speech, text.
In this description, the term “machine” also refers to a task neural network, or task-NN. An example of a task-NN for image or video data is an object detection neural network, performing an object detection task. Another example of a task-NN for image or video data is a semantic segmentation neural network, performing semantic segmentation. It is appreciated that the task-NNs can be NNs for any possible task, for example image classification; video classification; anomaly detection; action detection; action classification; event detection; filtering; captioning; visual question answering, etc.
The input to a task-NN may be one or more video frames (or image, audio, speech, or text). The output of the task-NN may be a task result, or task output. An example of a task result, for the case of an object detection task-NN, is a set of coordinates of one or more bounding boxes, representing the location and spatial extent of detected objects. Also, an object detection task-NN may output other data, such as the category or class of the detected objects, and a confidence value indicating an estimate of the probability that the bounding box and/or its class for a detected object is correct.
With respect to other data, such as audio, speech, and/or text, the following examples are given. For audio, a task can be audio event detection, whereupon the corresponding task-NN that performs the task of audio event detection may output a binary flag indicating whether a certain event has been detected or not. Another example of a task is audio event classification, where the corresponding task-NN performing this task may output a list of numerical values, where each value indicates an estimate of the probability that a certain event class was detected in the input signal, or may output directly an indication of which event class was detected in the input signal.
For speech, the task can be a speech recognition, i.e., recognizing which letter or syllable or word or sentence (for example) was spelled in the input signal. For text, the task can be text parsing, where the output may be a semantic structure of the text. Another example task is translation from one language to another.
An example of task result, for the case of a semantic segmentation task-NN, is a tensor of shape (K, H, W), where K may be the total number of semantic classes considered by the task-NN, H and W may be the height and width of the input video frame that was input to the task-NN. Each of the K matrices of size HxW may represent the segmentation of the K-th class, i.e., it may indicate whether each pixel of the input video frame belongs to the K-th class or not. In case the number of video frames that are input to the task-NN is T, the output of the task-NN may be a tensor of shape (T, K, H, W).
It is assumed that at least some of the task-NNs (machines) are models, such as neural networks, for which it is possible to compute gradients of their output with respect to their input. For example, if they are parametric models, this may be possible by computing the gradients of their output first with respect to their internal parameters and then with respect to their input, by using the chain rule for differentiation in mathematics. In the case of neural networks, backpropagation may be used to obtain the gradients of the output of a NN with respect to its input. A baseline system is considered which comprises at least one encoder, at least one decoder, at least one task-NN. The present embodiments build on top of this baseline system. See an example illustration of a baseline system in Figure 8.
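Before turning to Figure 8, the following minimal sketch shows how such gradients of a task-NN's output with respect to its input may be obtained via backpropagation; task_nn is an assumed placeholder, and its output is assumed to be a tensor that can be reduced to a scalar.

```python
import torch

decoded_video = torch.randn(1, 3, 64, 64, requires_grad=True)
task_output = task_nn(decoded_video)

# Reduce the output to a scalar (e.g. a loss) and backpropagate to the input.
scalar = task_output.sum()
grad_wrt_input, = torch.autograd.grad(scalar, decoded_video)
print(grad_wrt_input.shape)   # same shape as decoded_video
```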
In Figure 8, the encoder 801 may be any video encoder, such as a traditional encoder which is conformant with the H.266/VVC standard, or an encoder which combines a traditional encoder with one or more neural networks, or an end-to-end learned encoder (i.e., comprising mainly neural networks).
The decoder 803 may be any video decoder, such as a traditional decoder which is conformant with the H.266/VVC standard, or a decoder which combines a traditional decoder with one or more neural networks, or an end-to-end learned decoder (i.e., comprising mainly neural networks).
The Task-NN 805 may be any task neural network performing an analysis task or a processing task. In this disclosure, semantic segmentation is considered as an example task.
In this baseline system, the input video is encoded into a bitstream 802 by an Encoder 801 . A Decoder 803 decodes the bitstream 802 into a decoded video 804. The decoded video 804 is given as input to a task-NN 805. The task-NN 805 outputs some analysis or processing results. The output of a task-NN 805 is referred to either as “output” or as “result” interchangeably.
The baseline system may comprise a post-processing neural network which may be part of the decoder or may be external with respect to the decoder. The post-processing neural network may post-process the decoded video. The post-processed decoded video may then be input to one or more task-NNs. Figure 9 illustrates an example baseline system which comprises a post-processing neural network 905 that is external with respect to the decoder 903, where PP decoded video 906 refers to the post-processed decoded video.
FIRST EMBODIMENT
According to a first embodiment, a modification of a decoder-side of the baseline system is proposed. The proposed modification comprises inputting an intermediate decoded video to one or more Intermediate Task-NNs, obtaining one or more Intermediate Task-NNs’ results or outputs from the one or more Intermediate Task-NNs, inputting the one or more Intermediate Task-NNs’ results to one or more Intermediate Result Processing Neural Networks (IRP-NNs), inputting also an intermediate decoded video to the one or more IRP-NNs, obtaining one or more IRP decoded videos, inputting the one or more IRP decoded videos to one or more Task-NNs, and obtaining results from the one or more Task-NNs.
An example of the workflow according to present embodiments is illustrated in Figure 10, where the intermediate decoded video 1004 is the output of a video decoder 1003, a single IRP-NN 1005 is used (where the single IRP-NN 1005 is a post-processing NN that is external to the decoder 1003), and a single task-NN 1009 is used. In Figure 10, IRP video 1008 refers to the IRP decoded video.
The intermediate decoded video 1004 may be one or more frames which are output by a traditional video decoder 1003, or output by a decoder 1003 that combines a traditional video decoder with one or more neural networks, or output by a video decoder that is part of an end-to-end learned video codec. In such case, the IRP-NN 1005 is a post-processing neural network which is external to the decoder.
Alternatively, the IRP-NN may be a post-processing neural network that is part of the decoder. In such case, the intermediate decoded video may be one or more frames which are the non-final output of a traditional video decoder, or the non-final output of a video decoder that combines one or more neural networks with a traditional video decoder, or the non-final output of the video decoder that is part of an end-to-end learned video codec. In other words, the IRP-NN is a post-processing NN which is part of the video decoder, instead of being external to the video decoder.
Alternatively, the IRP-NN may be an in-loop filter. In such case, the intermediate decoded video may be one or more frames, where each frame may be one of the following:
- A predicted frame.
- A frame obtained by adding a prediction error to a predicted frame.
- A frame obtained by adding a decompressed prediction error to a predicted frame, where the decompressed prediction error is obtained by decompressing a compressed prediction error, where the compressed prediction error may be obtained by compressing a prediction error using a lossy and/or non-lossy (i.e., lossless) compression algorithm. An example of lossy compression algorithm is quantization. An example of lossless compression algorithm is arithmetic coding.
- A frame which is output by an in-loop filter.
In this description, for the sake of simplicity (especially in the figures), an IRP-NN is used as an example of a post-processing neural network external to the decoder.
The one or more Intermediate Task-NNs 1006 may be the same task-NNs as the Task-NNs 1009 that are used for analysing or processing the IRP decoded video 1008. For example, the Intermediate Task-NNs 1006 and the Task-NNs 1009 may be two instantiations of the same neural networks, where the architecture and the weights or parameters of the Intermediate Task-NNs 1006 and of the Task-NNs 1009 are substantially the same.
Alternatively, the one or more Intermediate Task-NNs 1006 may perform similar tasks or functions as the Task-NNs 1009, but the architecture and/or the weights or parameters of the Intermediate Task-NNs 1006 and of the Task-NNs 1009 may differ. For example, assuming that there is only one Intermediate Task-NN 1006 and one Task-NN 1009, both may perform semantic segmentation task, but they may use a different architecture and/or weights.
Alternatively, the one or more Intermediate Task-NNs 1006 may perform different tasks or functions with respect to the tasks or functions performed by the Task-NNs 1009. The Intermediate Task-NNs 1006 may extract features from the intermediate decoded videos 1004, therefore each Intermediate Task-NN 1006 may be a feature extractor.
According to an example implementation, each Intermediate Task-NN 1006 performing feature extraction may be specific to a certain Task-NN 1009. This means that the extracted features may be used for obtaining an IRP decoded video 1008 that may be input only to that Task-NN 1009.
According to another example implementation, each Intermediate Task-NN 1006 performing feature extraction may not be specific to a single Task-NN 1009, i.e., the extracted features may be used for obtaining an IRP decoded video 1008 that may be input to more than one Task-NN 1009.
Figure 11 illustrates an example of the IRP-NN internal components. The Intermediate Result Processing Neural Network (IRP-NN) may be a neural network which takes two types of inputs:
- A first type of input is the intermediate decoded video 1101.
- A second type of input may be the outputs of one or more Intermediate Task-NNs 1105.
- Alternatively, the second type of input may be the output of one or more Intermediate Task Results Mapping Neural Networks (ITRM-NN); this is discussed with respect to the second embodiment.
Each of the two types of input may first be mapped to two sets of feature maps, then the two sets of feature maps may be combined (for example by summation or by concatenation) 1104, and then the combined features 1108 may be mapped 1109 to one or more output tensors. Each of the one or more output tensors from IRP-NN may have the same shape as the intermediate decoded video. Alternatively, each of the one or more output tensors from IRP-NN may have a shape which is compatible with one or more Task-NNs.
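A minimal sketch of such an IRP-NN is given below, assuming the intermediate task result has already been brought to the spatial resolution of the intermediate decoded video; the layer sizes, channel counts and the concatenation-based combination are illustrative assumptions only.

```python
import torch
import torch.nn as nn

class IRPNN(nn.Module):
    def __init__(self, video_channels=3, task_channels=21, hidden=64):
        super().__init__()
        # Map each type of input to a set of feature maps.
        self.video_branch = nn.Sequential(
            nn.Conv2d(video_channels, hidden, 3, padding=1), nn.ReLU())
        self.task_branch = nn.Sequential(
            nn.Conv2d(task_channels, hidden, 3, padding=1), nn.ReLU())
        # Map the combined features to an output tensor shaped like the video.
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, video_channels, 3, padding=1))

    def forward(self, intermediate_video, intermediate_task_result):
        f_video = self.video_branch(intermediate_video)
        f_task = self.task_branch(intermediate_task_result)
        combined = torch.cat([f_video, f_task], dim=1)   # combination by concatenation
        return self.fuse(combined)                       # IRP decoded video
```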
SECOND EMBODIMENT
According to a second embodiment, intermediate task results are mapped to a common representation. This means that the output of one or more Intermediate Task-NNs may be mapped to a common representation for all Intermediate Task-NNs, by using an Intermediate Task Results Mapping neural network (ITRM-NN).
Figure 12 illustrates an example of the present embodiments. In the example of Figure 12, the input to the ITRM-NN 1209 may be the output 1208 of two or more Intermediate Task-NNs 1207. It is to be noticed that the Intermediate Task-NNs and their respective outputs are referred to with similar reference numbers for simplicity; however, the various Task-NNs are not necessarily the same, and therefore neither are their respective outputs. The ITRM-NN 1209 may output a single representation or set of features for all its inputs. The output of the ITRM-NN 1209 may then be input to the IRP-NN 1205.
According to another example implementation, the input to the ITRM-NN may be the output of a single Intermediate Task-NN, and there may be more than one ITRM-NN, for example there may be as many ITRM-NNs as there are Intermediate Task-NNs. The output of each ITRM-NN may then be input to the IRP-NN.
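A minimal sketch of an ITRM-NN handling two intermediate task outputs is given below; the channel counts, the 1x1 projections and the summation-based fusion are assumptions, and the task outputs are assumed to share the same spatial resolution.

```python
import torch
import torch.nn as nn

class ITRMNN(nn.Module):
    def __init__(self, channels_per_task=(21, 6), common_channels=32):
        super().__init__()
        # One projection per intermediate task-NN output.
        self.projections = nn.ModuleList(
            [nn.Conv2d(c, common_channels, kernel_size=1) for c in channels_per_task])

    def forward(self, task_outputs):
        # Project each output to the common representation and fuse by summation;
        # the fused features may then be given as input to the IRP-NN.
        projected = [proj(out) for proj, out in zip(self.projections, task_outputs)]
        return torch.stack(projected, dim=0).sum(dim=0)
```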
In the following paragraphs, embodiments relating to training of one or more components of the proposed system are described.
Training only IRP-NN
In one possible implementation, only the IRP-NN is trained as shown in Figure 13. The data needed for training the IRP-NN 1307 may comprise:
- a dataset of intermediate decoded videos 1304; and
- ground-truth data 1310 for each intermediate decoded video 1304 and each Task-NN 1309.
By using the intermediate decoded videos 1304, the corresponding intermediate task-NN’s outputs 1306 may be obtained from one or more Intermediate Task-NNs 1305.
The training process may be an iterative process. Each training iteration may comprise obtaining a set of intermediate decoded videos 1304 and a set of corresponding intermediate task-NNs’ outputs 1306, inputting the set of intermediate decoded videos 1304 and the set of corresponding intermediate task- NNs’ outputs 1306 to one or more IRP-NNs 1307, using the one or more IRP-NNs 1307 for obtaining one or more IRP videos 1308, inputting the one or more IRP videos 1308 to one or more Task-NNs 1309, obtaining one or more output 1314 from the one or more Task-NNs 1309, using the one or more output from the one or more Task-NNs for computing a loss 1311 , computing gradients of the loss with respect to the one or more learnable parameters present in one or more IRP-NNs, updating the one or more learnable parameters present in one or more IRP-NNs according to the computed gradients and by using an optimization routine such as Stochastic Gradient Descent (SGD). The iterative training process may continue until a stopping condition is satisfied. A stopping condition may be based on one or more of the following: a predefined number of iterations is reached, a predefined value for the loss is obtained, etc. For example, the optimization may stop when the loss does not decrease more than a predetermined amount, during a predetermined temporal span. The loss 1312, or task loss, may be dependent on the specific task that is considered. If there are more than one Task-NNs 1309, more than one loss term may be computed, and then the loss terms may be combined for example by means of a linear combination with predetermined weighting coefficients.
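A sketch of one iteration of this scheme is shown below; irp_nn, intermediate_task_nn, task_nn, task_loss_fn and dataset are assumed placeholders, and only the IRP-NN's parameters are placed in the optimizer, so the other networks remain fixed.

```python
import torch

optimizer = torch.optim.SGD(irp_nn.parameters(), lr=1e-3)

for intermediate_video, ground_truth in dataset:           # decoded videos + ground truth
    with torch.no_grad():                                  # the intermediate task-NN is not trained
        intermediate_result = intermediate_task_nn(intermediate_video)

    irp_video = irp_nn(intermediate_video, intermediate_result)
    task_output = task_nn(irp_video)                       # task-NN weights stay fixed
    loss = task_loss_fn(task_output, ground_truth)

    optimizer.zero_grad()
    loss.backward()        # gradients w.r.t. the IRP-NN's learnable parameters
    optimizer.step()
```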
Training Encoder’s NNs and/or Decoder’s NNs jointly with IRP-NN
According to another possible implementation, the IRP-NN is trained jointly with one or more neural networks which may be part of the encoder and/or of the decoder. Figure 14 illustrates an example of such embodiment.
In one example of this implementation, the encoder 1401 and/or the decoder 1403 may be a combination of one or more neural networks with a traditional encoder and/or a traditional decoder, such as a decoder which includes an in-loop neural network filter.
In another example of this possible implementation, the encoder and/or the decoder may be part of an end-to-end learned codec.
The data needed for training the IRP-NN 1405 jointly with one or more neural networks of the Encoder 1401 and/or the Decoder 1403 may comprise:
- a dataset of input videos
- ground-truth data 1411 for each intermediate decoded video and each Task-NN.
The training process may be an iterative process. Each training iteration may comprise obtaining a set of input videos, inputting one or more input videos to the encoder 1401, obtaining a bitstream 1402 from the encoder 1401 for each of the one or more input videos, inputting the bitstreams 1402 to the decoder 1403, obtaining one or more intermediate decoded videos 1404 from the decoder 1403, inputting the one or more intermediate decoded videos 1404 to one or more intermediate Task-NNs 1406 to obtain one or more intermediate Task-NNs’ outputs 1407, inputting the one or more intermediate Task-NNs’ outputs 1407 and the one or more intermediate decoded videos 1404 to one or more IRP-NNs 1405, using one or more IRP-NNs 1405 for obtaining one or more IRP videos 1408, inputting the one or more IRP videos 1408 to one or more Task-NNs 1409, obtaining one or more outputs 1410 from the one or more Task-NNs 1409, using the one or more outputs 1410 from the one or more Task-NNs 1409 for computing 1412 a loss, computing gradients of the loss with respect to the one or more learnable parameters present in one or more IRP-NNs and also with respect to one or more learnable parameters in one or more neural networks of the encoder and/or of the decoder, and updating the one or more learnable parameters present in one or more IRP-NNs and the one or more learnable parameters in one or more neural networks of the encoder and/or of the decoder according to the computed gradients and by using an optimization routine such as Stochastic Gradient Descent (SGD). The iterative training process may continue until a stopping condition is satisfied. A stopping condition may be based on one or more of the following: a predefined number of iterations is reached, a predefined value for the loss is obtained, etc. For example, the optimization may stop when the loss does not decrease more than a predetermined amount, during a predetermined temporal span.
The loss, or task loss, may be dependent on the specific task that is considered. If there are more than one Task-NNs, more than one loss term may be computed, and then the loss terms may be combined, for example by means of a linear combination with predetermined weighting coefficients.
In some cases, a rate loss may be computed based on the bitstream or based on an intermediate output of the encoder. The rate loss may be an estimate of the number of bits needed to represent the bitstream output by the encoder. For example, the rate loss may be computed based on the output of a probability model, where the probability model may be a neural network that provides an estimate of a probability for one or more elements to an entropy codec such as an arithmetic codec. The rate loss may then be used to train one or more neural networks that are used within the encoder, by combining the rate loss with the other loss terms, for example by a linear combination.
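The sketch below shows one way such a rate loss could be estimated from a probability model's output as a cross-entropy, i.e., an estimate of the number of bits an ideal arithmetic codec would need; the tensor shapes and names are assumptions for illustration.

```python
import torch

def rate_loss_from_probability_model(probabilities, symbols):
    # probabilities: (N, S) estimated distribution over S values for each of N symbols
    # symbols:       (N,)   quantized symbols to be encoded
    p = probabilities.gather(1, symbols.unsqueeze(1)).squeeze(1)
    return -(torch.log2(p + 1e-9)).sum()   # estimated bits needed by the entropy codec
```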
Figure 15 illustrates an example where the rate loss is computed and used to train one or more neural networks of the encoder.
Training Encoder’s NNs and/or Decoder’s NNs jointly with IRP-NN, by using intermediate task loss for Encoder’s NNs and/or Decoder’s NNs
In another possible implementation, shown in Figure 16, the IRP-NN 1610 is trained jointly with one or more neural networks which may be part of the Encoder 1601 and/or of the Decoder 1603, by using an intermediate task loss 1609 that may be computed by using the Intermediate Task-NNs’ outputs 1607 and the task ground-truth data 1606.
In one example of this possible implementation, the Encoder 1601 and/or the Decoder 1603 may be a combination of one or more neural networks with a traditional encoder and/or a traditional decoder, such as a decoder which includes an in-loop neural network filter.
In another example of this possible implementation, the Encoder and/or the Decoder may be part of an end-to-end learned codec.
The data needed for training the IRP-NN 1610 jointly with one or more neural networks of the Encoder 1601 and/or the Decoder 1603 may comprise:
- a dataset of input videos;
- ground-truth data 1606 for each intermediate decoded video 1604 and each Task-NN 1612.
The training process may be an iterative process. Each training iteration may comprise obtaining a set of input videos, inputting one or more input videos to the Encoder 1601, obtaining a bitstream 1602 from the Encoder 1601 for each of the one or more input videos, inputting the bitstreams 1602 to the Decoder 1603, obtaining one or more intermediate decoded videos 1604 from the Decoder 1603, inputting the one or more intermediate decoded videos 1604 to one or more Intermediate Task-NNs 1605 to obtain one or more Intermediate Task-NNs’ outputs 1607, inputting the one or more Intermediate Task-NNs’ outputs 1607 and the one or more intermediate decoded videos 1604 to one or more IRP-NNs 1610, using one or more IRP-NNs 1610 for obtaining one or more IRP videos 1611, inputting the one or more IRP videos 1611 to one or more Task-NNs 1612, obtaining one or more outputs 1613 from the one or more Task-NNs 1612, using the one or more outputs 1613 from the one or more Task-NNs 1612 for computing 1615 a first loss (task loss) 1616, computing gradients of the first loss with respect to the one or more learnable parameters present in one or more IRP-NNs and also with respect to one or more learnable parameters in one or more neural networks of the Encoder 1601 and/or of the Decoder 1603, using the one or more Intermediate Task-NNs’ outputs 1607 for computing a second loss (intermediate task loss) 1608, computing gradients of the second loss with respect to one or more learnable parameters in one or more neural networks of the Encoder and/or of the Decoder, and updating the one or more learnable parameters present in one or more IRP-NNs and the one or more learnable parameters in one or more neural networks of the Encoder and/or of the Decoder according to the computed gradients and by using an optimization routine such as Stochastic Gradient Descent (SGD). The iterative training process may continue until a stopping condition is satisfied. A stopping condition may be based on one or more of the following: a predefined number of iterations is reached, a predefined value for the loss is obtained, etc. For example, the optimization may stop when the loss does not decrease more than a predetermined amount, during a predetermined temporal span.
The loss, or task loss, may be dependent on the specific task that is considered. If there are more than one Task-NNs, more than one loss term may be computed, and then the loss terms may be combined, for example by means of a linear combination with predetermined weighting coefficients.
Also in this case, a rate loss may be used to train one or more neural networks in the encoder.
With respect to the embodiments relating to training, there are several further embodiments. For example, other losses may be used in combination with the already mentioned losses. For example, Mean-Squared Error (MSE) loss may be computed based on the IRP video and the input video, and/or based on the intermediate decoded video and the input video.
In one example, the ground-truth data for a certain input video and for one or more tasks may be determined by manually annotating the data. For example, in the case of an object detection task, humans may annotate the bounding boxes for objects in a set of images and/or videos. In another example, the ground-truth data for a certain input video and for one or more tasks may be determined by running one or more task-NNs on the input video. The output results from the one or more task-NNs may then be used as the ground-truth data for computing a task loss and/or an intermediate task loss for the considered input video. Other possible ways to determine ground-truth data may be considered, and the present embodiments are not limited to any specific way by which ground-truth is obtained. Ground-truth data may be determined during an offline process with respect to the process when training is performed, or may be determined at substantially the same time when training is performed. After the ground-truth data has been determined, it may be stored for example in a database hosted at a local or at a remote location or device with respect to the location or device where training is performed. If the database is hosted at a remote location, the device performing training may first retrieve the ground-truth data or the database, and then use it for performing training.
In one example, in addition or alternatively to any other training loss, a loss term may be computed as a distortion metric between features extracted by a task NN when the input is the input video and features extracted by a task NN when the input is the IRP video. We refer to this loss term as task-feature loss. Task-features may be one or more feature maps extracted by one or more layers of a task NN. An example distortion metric may be the mean-squared error (MSE).
In one example, in addition or alternatively to any other training loss, a loss term may be computed as a distortion metric between features extracted by an intermediate task NN when the input is the input video and features extracted by an intermediate task NN when the input is the intermediate decoded video. We refer to this loss term as intermediate task-feature loss. Intermediate task-features may be one or more feature maps extracted by one or more layers of an intermediate task NN. An example distortion metric may be the mean-squared error (MSE).
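A sketch of such a task-feature or intermediate task-feature loss is given below; feature_extractor is an assumed callable returning the feature maps of one or more layers of the (intermediate) task NN.

```python
import torch
import torch.nn.functional as F

def task_feature_loss(feature_extractor, input_video, processed_video):
    # processed_video may be the IRP video or the intermediate decoded video.
    with torch.no_grad():
        target_features = feature_extractor(input_video)    # reference features, no gradients
    features = feature_extractor(processed_video)           # gradients flow back to the codec/IRP-NN
    return F.mse_loss(features, target_features)
```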
The method according to an embodiment is shown in Figure 17. The method generally comprises steps for receiving 1710 decoded data from a decoder; for performing 1720 an intermediate analysis of the decoded data at one or more intermediate task neural networks; for providing 1730 the output from said one or more intermediate task neural networks as a first input to one or more intermediate result processing neural networks and for providing the decoded data as a second input to said one or more intermediate result processing neural networks; for providing 1740 an output from said one or more intermediate result processing neural network to one or more task neural networks, the output representing combined features of the decoded data and the output of said one or more intermediate task neural networks, and for analysing 1750 the data at said one or more task neural networks.
At the intermediate result processing neural network, the method comprises steps for mapping the first and the second inputs to two sets of feature maps; for combining the two sets of feature maps; and for mapping the combined sets of feature maps to one or more output tensors thus generating the output of the intermediate result processing neural networks.
The method may also comprise mapping the outputs from said one or more intermediate task neural networks to a common representation at an intermediate task results mapping neural network.
Each of the previous steps can be implemented by a respective module of a computer system.
The method for training according to an embodiment is shown in Figure 18. The method generally comprises steps for training 1810 an intermediate result processing neural network, by obtaining a set of intermediate decoded data and a set of corresponding intermediate task neural network outputs; using 1820 one or more intermediate result processing network to obtain one or more IRP data; inputting 1830 the one or more IRP data to one or more task neural networks; obtaining 1840 one or more output from the one or more task neural networks; computing 1850 a loss based on the output from the one or more task neural networks; computing 1860 gradients of the loss with respect to one or more weights present in one or more intermediate results processing neural network; and updating 1870 the weights according to the computed gradients.
An apparatus according to an embodiment comprises means for receiving decoded data from a decoder; for performing an intermediate analysis of the decoded data at one or more intermediate task neural networks; for providing the output from said one or more intermediate task neural networks as a first input to one or more intermediate result processing neural networks and means for providing the decoded data as a second input to said one or more intermediate result processing neural networks; for providing an output from said one or more intermediate result processing neural network to one or more task neural networks, the output representing combined features of the decoded data and the output of said one or more intermediate task neural networks, and for analysing the data at said one or more task neural networks.
The means comprises at least one processor, and a memory including a computer program code, wherein the processor may further comprise processor circuitry. The memory and the computer program code are configured to, with the at least one processor, cause the apparatus to perform the method of Figure 17 according to various embodiments.
An apparatus for training an intermediate result processing neural network according to an embodiment comprises means for obtaining a set of intermediate decoded data and a set of corresponding intermediate task neural network outputs; for using one or more intermediate result processing network to obtain one or more IRP data; for inputting the one or more IRP data to one or more task neural networks; for obtaining one or more output from the one or more task neural networks; for computing a loss based on the output from the one or more task neural networks; for computing gradients of the loss with respect to one or more weights present in one or more intermediate results processing neural network; and for updating the weights according to the computed gradients. The means comprises at least one processor, and a memory including a computer program code, wherein the processor may further comprise processor circuitry. The memory and the computer program code are configured to, with the at least one processor, cause the apparatus to perform the method of Figure 18 according to various embodiments.
Figure 19 illustrates an example of an apparatus. The apparatus is a user equipment for the purposes of the present embodiments. The apparatus 90 comprises a main processing unit 91, a memory 92, a user interface 94, and a communication interface 93. The apparatus according to an embodiment, shown in Figure 19, may also comprise a camera module 95. Alternatively, the apparatus may be configured to receive image and/or video data from an external camera device over a communication network. The memory 92 stores data including computer program code in the apparatus 90. The computer program code is configured to implement the method according to various embodiments by means of various computer modules. The camera module 95 or the communication interface 93 receives data, in the form of images or a video stream, to be processed by the processor 91. The communication interface 93 forwards processed data, i.e., the image file, for example to a display of another device, such as a virtual reality headset. When the apparatus 90 is a video source comprising the camera module 95, user inputs may be received from the user interface.
The various embodiments can be implemented with the help of computer program code that resides in a memory and causes the relevant apparatuses to carry out the method. For example, a device may comprise circuitry and electronics for handling, receiving, and transmitting data, computer program code in a memory, and a processor that, when running the computer program code, causes the device to carry out the features of an embodiment. Yet further, a network device like a server may comprise circuitry and electronics for handling, receiving, and transmitting data, computer program code in a memory, and a processor that, when running the computer program code, causes the network device to carry out the features of embodiments.
If desired, the different functions discussed herein may be performed in a different order and/or concurrently with each other. Furthermore, if desired, one or more of the above-described functions and embodiments may be optional or may be combined.
Although various aspects of the embodiments are set out in the independent claims, other aspects comprise other combinations of features from the described embodiments and/or the dependent claims with the features of the independent claims, and not solely the combinations explicitly set out in the claims.
It is also noted herein that while the above describes example embodiments, these descriptions should not be viewed in a limiting sense. Rather, there are several variations and modifications which may be made without departing from the scope of the present disclosure as defined in the appended claims.

Claims

1. An apparatus comprising
- means for receiving decoded data from a decoder;
- means for performing an intermediate analysis of the decoded data at one or more intermediate task neural networks;
- means for providing the output from said one or more intermediate task neural networks as a first input to one or more intermediate result processing neural networks and means for providing the decoded data as a second input to said one or more intermediate result processing neural networks;
- means for providing an output from said one or more intermediate result processing neural networks to one or more task neural networks, the output representing combined features of the decoded data and the output of said one or more intermediate task neural networks; and
- means for analysing the data at said one or more task neural networks.
2. The apparatus according to claim 1 , wherein the intermediate result processing neural network comprises:
- means for mapping the first and the second inputs to two sets of feature maps;
- means for combining the two sets of feature maps; and
- means for mapping the combined sets of feature maps to one or more output tensors thus generating the output of the intermediate result processing neural networks.
3. The apparatus according to claim 1 or 2, further comprising means for mapping the outputs from said one or more intermediate task neural networks to a common representation at an intermediate task results mapping neural network.
4. The apparatus according to any of the claims 1 to 3, wherein decoded data is a video frame, wherein the intermediate output data is one of the following: a predicted frame, a frame obtained by adding a prediction error to a predicted frame; a frame obtained by adding a decompressed prediction error to a predicted frame; or a frame which is output by an in-loop filter.
5. The apparatus according to any of the claims 1 to 3, wherein the decoded data is audio data or text data.
6. The apparatus according to any of the claims 1 to 5, wherein the task neural network is for any of the following: image classification; video classification; image segmentation; video segmentation; object tracking; anomaly detection; action detection; action classification; event detection; filtering; captioning; or visual question answering.
7. The apparatus according to any of the claims 1 to 6, further comprising means for training at least one of the intermediate result processing neural network and intermediate results mapping neural network with data comprising a dataset of intermediate decoded videos and ground-truth data for intermediate decoded video and one or more task neural networks.
8. The apparatus according to claim 7, further comprising means for training the encoder and decoder jointly with at least one of the intermediate result processing neural network and intermediate results mapping neural network.
9. A method comprising
- receiving decoded data from a decoder;
- performing an intermediate analysis of the decoded data at one or more intermediate task neural networks;
- providing an output from said one or more intermediate task neural networks as a first input to one or more intermediate result processing neural networks and providing the decoded data as a second input to said one or more intermediate result processing neural networks;
- providing an output from said one or more intermediate result processing neural networks to one or more task neural networks, the output representing combined features of the decoded data and the output of said one or more intermediate task neural networks; and
- analysing the received data at said one or more task neural networks.
10. The method according to claim 9, wherein the method further comprises
- mapping the first and the second inputs to two sets of feature maps;
- combining the two sets of feature maps; and
- mapping the combined sets of feature maps to one or more output tensors thus generating the output of the intermediate result processing neural networks.
11. The method according to claim 9 or 10, further comprising mapping the outputs from said one or more intermediate task neural networks to a common representation at an intermediate task results mapping neural network.
12. The method according to any of the claims 9 to 11 , wherein decoded data is a video frame, wherein the intermediate output data is one of the following: a predicted frame, a frame obtained by adding a prediction error to a predicted frame; a frame obtained by adding a decompressed prediction error to a predicted frame; or a frame which is output by an in-loop filter.
13. The method according to any of the claims 9 to 12, wherein the decoded data is audio data or text data.
14. The method according to any of the claims 9 to 13, wherein the task neural network is for any of the following: image classification; video classification; image segmentation; video segmentation; object tracking; anomaly detection; action detection; action classification; event detection; filtering; captioning; or visual question answering.
15. An apparatus comprising at least one processor, memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following:
- receive decoded data from a decoder;
- perform an intermediate analysis of the decoded data at one or more intermediate task neural networks;
- provide the output from said one or more intermediate task neural networks as a first input to one or more intermediate result processing neural networks and provide the decoded data as a second input to said one or more intermediate result processing neural networks;
- provide an output from said one or more intermediate result processing neural networks to one or more task neural networks, the output representing combined features of the decoded data and the output of said one or more intermediate task neural networks; and
- analyse the data at said one or more task neural networks.
Applications Claiming Priority (2)

FI20215927, priority date 2021-09-03
PCT/FI2022/050444, filed 2022-06-22: A method, an apparatus and a computer program product for video encoding and video decoding

Publications (1)

WO2023031503A1


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
CHIEH-CHI KAO; WEIRAN WANG; MING SUN; CHAO WANG: "R-CRNN: Region-based Convolutional Recurrent Neural Network for Audio Event Detection", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 20 August 2018 (2018-08-20), 201 Olin Library Cornell University Ithaca, NY 14853 , XP080898452 *
FISCHER KRISTIAN; BLUM CHRISTIAN; HERGLOTZ CHRISTIAN; KAUP ANDRE: "Robust Deep Neural Object Detection and Segmentation for Automotive Driving Scenario with Compressed Image Data", 2021 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS), IEEE, 22 May 2021 (2021-05-22), pages 1 - 5, XP033932614, ISSN: 2158-1525, ISBN: 978-1-7281-3320-1, DOI: 10.1109/ISCAS51556.2021.9401621 *
ROSS GIRSHICK: "Fast R-CNN", ARXIV, EPRINT ARXIV:1412.7122, 7 December 2015 (2015-12-07) - 13 December 2015 (2015-12-13), pages 1440 - 1448, XP055646790, ISBN: 978-1-4673-8391-2, DOI: 10.1109/ICCV.2015.169 *
SHAOQING REN, HE KAIMING, GIRSHICK ROSS, SUN JIAN: "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks", ARXIV: COMPUTER SCIENCE - COMPUTER VISION AND PATTERN RECOGNITION, 6 January 2016 (2016-01-06), pages 1 - 14, XP055480920, Retrieved from the Internet <URL:https://arxiv.org/pdf/1506.01497.pdf> [retrieved on 20180604] *
ZHANG ZHICONG; WANG MENGYANG; MA MENGYAO; LI JIAHUI; FAN XIAOPENG: "MSFC: Deep Feature Compression in Multi-Task Network", 2021 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), IEEE, 5 July 2021 (2021-07-05), pages 1 - 6, XP034125237, DOI: 10.1109/ICME51207.2021.9428258 *

Similar Documents

Publication Publication Date Title
US11375204B2 (en) Feature-domain residual for video coding for machines
US11575938B2 (en) Cascaded prediction-transform approach for mixed machine-human targeted video coding
AU2013212013A1 (en) Object detection informed encoding
WO2023280558A1 (en) Performance improvements of machine vision tasks via learned neural network based filter
WO2022238967A1 (en) Method, apparatus and computer program product for providing finetuned neural network
WO2022269415A1 (en) Method, apparatus and computer program product for providng an attention block for neural network-based image and video compression
US20220303568A1 (en) Multi-scale optical flow for learned video compression
US20230110503A1 (en) Method, an apparatus and a computer program product for video encoding and video decoding
EP4142289A1 (en) A method, an apparatus and a computer program product for video encoding and video decoding
EP4156691A2 (en) A method, an apparatus and a computer program product for video encoding and video decoding
WO2023135518A1 (en) High-level syntax of predictive residual encoding in neural network compression
WO2022224113A1 (en) Method, apparatus and computer program product for providing finetuned neural network filter
WO2023031503A1 (en) A method, an apparatus and a computer program product for video encoding and video decoding
EP4260242A1 (en) A caching and clearing mechanism for deep convolutional neural networks
WO2024068081A1 (en) A method, an apparatus and a computer program product for image and video processing
WO2023089231A1 (en) A method, an apparatus and a computer program product for video encoding and video decoding
WO2023073281A1 (en) A method, an apparatus and a computer program product for video coding
WO2024074231A1 (en) A method, an apparatus and a computer program product for image and video processing using neural network branches with different receptive fields
WO2024068190A1 (en) A method, an apparatus and a computer program product for image and video processing
WO2022269441A1 (en) Learned adaptive motion estimation for neural video coding
WO2023194650A1 (en) A method, an apparatus and a computer program product for video coding
US20240146938A1 (en) Method, apparatus and computer program product for end-to-end learned predictive coding of media frames
WO2023151903A1 (en) A method, an apparatus and a computer program product for video coding
WO2023208638A1 (en) Post processing filters suitable for neural-network-based codecs
WO2022229495A1 (en) A method, an apparatus and a computer program product for video encoding and video decoding

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22863699

Country of ref document: EP

Kind code of ref document: A1