WO2023089231A1 - A method, an apparatus and a computer program product for video encoding and video decoding - Google Patents


Info

Publication number
WO2023089231A1
Authority
WO
WIPO (PCT)
Prior art keywords: interest, regions, features, bitstream, server
Prior art date
Application number
PCT/FI2022/050739
Other languages
French (fr)
Inventor
Hamed REZAZADEGAN TAVAKOLI
Honglei Zhang
Francesco Cricrì
Miska Matias Hannuksela
Emre Baris Aksu
Original Assignee
Nokia Technologies Oy
Priority date
Filing date
Publication date
Application filed by Nokia Technologies Oy
Publication of WO2023089231A1


Classifications

    • H04N 21/2343 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs, involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • G06N 3/0455 Auto-encoder networks; Encoder-decoder networks
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G06N 3/0495 Quantised networks; Sparse networks; Compressed networks
    • G06N 3/088 Non-supervised learning, e.g. competitive learning
    • G06N 3/094 Adversarial learning
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using neural networks
    • G06V 20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • H04N 19/103 Selection of coding mode or of prediction mode
    • H04N 19/164 Feedback from the receiver or from the transmission channel
    • H04N 19/167 Position within a video image, e.g. region of interest [ROI]
    • H04N 19/17 Adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/4728 End-user interface for selecting a Region Of Interest [ROI], e.g. for requesting a higher resolution version of a selected region
    • H04N 21/6379 Control signals issued by the client directed to the server or network components, directed to the encoder, e.g. for requesting a lower encoding rate
    • H04N 21/845 Structuring of content, e.g. decomposing content into time segments

Definitions

  • the present solution generally relates to video coding.
  • the solution relates to video coding for machines (VCM).
  • VCM video coding for machines
  • Video Coding for Machines VCM
  • an apparatus comprising at least means for accessing a list of regions of interest, which regions of interest are pre-determined from one or more video frames; means for determining features contained in the regions of interest; means for encoding the determined features into a bitstream; and means for sending the encoded features to a client with information on regions of interest.
  • an apparatus comprising at least means for receiving a bitstream of encoded features from a server with an information on regions of interest; and means for decoding the bitstream to determine relevant features for a certain region of interest.
  • a method comprising accessing a list of regions of interest, which regions of interest are pre-determined from one or more video frames; determining features contained in the regions of interest; encoding the determined features into a bitstream; and sending the encoded features to a client with information on regions of interest.
  • a method comprising receiving a bitstream of encoded features from a server with an information on regions of interest; and decoding the bitstream to determine relevant features for a certain region of interest.
  • an apparatus comprising at least one processor, memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: access a list of regions of interest, which regions of interest are pre-determined from one or more video frames; determine features contained in the regions of interest; encode the determined features into a bitstream; and send the encoded features to a client with information on regions of interest.
  • an apparatus comprising at least one processor, memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: receive a bitstream of encoded features from a server with an information on regions of interest; and decode the bitstream to determine relevant features for a certain region of interest.
  • computer program product comprising computer program code configured to, when executed on at least one processor, cause an apparatus or a system to access a list of regions of interest, which regions of interest are pre-determined from one or more video frames; determine features contained in the regions of interest; encode the determined features into a bitstream; and send the encoded features to a client with information on regions of interest.
  • computer program product comprising computer program code configured to, when executed on at least one processor, cause an apparatus or a system to receive a bitstream of encoded features from a server with an information on regions of interest; and decode the bitstream to determine relevant features for a certain region of interest.
  • the list of regions of interest is received from a client.
  • the list of regions is generated by a region of interest estimator component.
  • the features indicated in the list of regions of interest are encoded with a quality higher than other parts of a frame.
  • the list of regions of interest being received from a client is an encoded list of regions of interest
  • the server comprises means for decoding the encoded list of regions of interest
  • the video frames are encoded and transmitted with synchronization information to a client.
  • a task result from a task output encoder is encoded.
  • the computer program product is embodied on a non-transitory computer readable medium.
  • Fig. 1 shows an example of a codec with neural network (NN) components
  • Fig. 2 shows another example of a video coding system with neural network components
  • Fig. 3 shows an example of a neural auto-encoder architecture
  • Fig. 4 shows an example of a neural network-based end-to-end learned video coding system
  • Fig. 5 shows an example of a video coding for machines
  • Fig. 6 shows an example of a pipeline for end-to-end learned system
  • Fig. 7 shows an example of training an end-to-end learned system
  • Fig. 8 shows an example of an on-demand VCM encoding
  • Fig. 9 shows an example of co-operative VCM encoding
  • Fig. 10 shows an example of a client agnostic task-based VCM encoding
  • Fig. 11 shows an example of a client agnostic task independent VCM encoding
  • Fig. 12a is a flowchart illustrating a method according to an embodiment
  • Fig. 12b is a flowchart illustrating a method according to another embodiment
  • Fig. 13 illustrates an apparatus according to an embodiment.
  • the term “computer-readable storage medium”, which refers to a physical storage medium (e.g., a volatile or non-volatile memory device), may be differentiated from a “computer-readable transmission medium,” which refers to an electromagnetic signal.
  • the present embodiments provide a system and semantics for region of interest (ROI) based video coding for machines (VCM).
  • ROI region of interest
  • VCM video coding for machines
  • a neural network is a computation graph consisting of several layers of computation. Each layer consists of one or more units, where each unit performs an elementary computation. A unit is connected to one or more other units, and the connection may have a weight associated with it. The weight may be used for scaling the signal passing through the associated connection. Weights are learnable parameters, i.e., values which can be learned from training data. There may be other learnable parameters, such as those of batch-normalization layers.
  • Feed-forward neural networks are such that there is no feedback loop: each layer takes input from one or more of the previous layers and provides its output as the input for one or more of the subsequent layers. Also, units inside a certain layer take input from units in one or more of the preceding layers and provide output to one or more of the following layers.
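  • For illustration, a minimal sketch in Python of such a feed-forward network as a stack of layers with learnable weights (layer sizes and the ReLU nonlinearity are illustrative choices, not taken from the text):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

class DenseLayer:
    """One layer of units; each connection has a learnable weight used to scale its signal."""
    def __init__(self, n_in, n_out, rng):
        self.W = rng.standard_normal((n_in, n_out)) * 0.1  # learnable weights
        self.b = np.zeros(n_out)                            # learnable biases

    def forward(self, x):
        return relu(x @ self.W + self.b)

rng = np.random.default_rng(0)
# Feed-forward: each layer takes input from the previous layer, no feedback loop.
layers = [DenseLayer(8, 16, rng), DenseLayer(16, 4, rng)]
x = rng.standard_normal(8)
for layer in layers:
    x = layer.forward(x)
print(x.shape)  # (4,)
```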
  • Initial layers extract semantically low-level features such as edges and textures in images, and intermediate and final layers extract more high-level features.
  • semantically low-level features such as edges and textures in images
  • intermediate and final layers extract more high-level features.
  • After the feature extraction layers there may be one or more layers performing a certain task, such as classification, semantic segmentation, object detection, denoising, style transfer, superresolution, etc.
  • In recurrent neural networks there is a feedback loop, so that the network becomes stateful, i.e., it is able to memorize information or a state.
  • Neural networks are being utilized in an ever-increasing number of applications for many different types of devices, such as mobile phones. Examples include image and video analysis and processing, social media data analysis, device usage data analysis, etc.
  • neural networks are able to learn properties from input data, either in a supervised way or in an unsupervised way. Such learning is a result of a training algorithm, or of a meta-level neural network providing the training signal.
  • the training algorithm consists of changing some properties of the neural network so that its output is as close as possible to a desired output. For example, in the case of classification of objects in images, the output of the neural network can be used to derive a class or category index which indicates the class or category that the object in the input image belongs to. Training usually happens by minimizing or decreasing the output’s error, also referred to as the loss. Examples of losses are mean squared error, cross-entropy, etc.
  • training is an iterative process, where at each iteration the algorithm modifies the weights of the neural net to make a gradual improvement of the network’s output, i.e., to gradually decrease the loss.
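  • For illustration, a minimal sketch of this iterative weight-update loop, using mean squared error as the loss and plain gradient descent (model, data and learning rate are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))            # training inputs
y = X @ np.array([1.0, -2.0, 0.5])           # desired outputs

w = np.zeros(3)                              # learnable parameters (weights)
lr = 0.05                                    # learning rate
for step in range(200):
    pred = X @ w
    loss = np.mean((pred - y) ** 2)          # mean squared error loss
    grad = 2.0 * X.T @ (pred - y) / len(y)   # gradient of the loss w.r.t. the weights
    w -= lr * grad                           # small step that gradually decreases the loss
print(round(loss, 6), w.round(2))            # loss close to 0, w close to [1, -2, 0.5]
```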
  • model and “neural network” are used interchangeably, and also the weights of neural networks are sometimes referred to as learnable parameters or simply as parameters.
  • Training a neural network is an optimization process.
  • the goal of the optimization or training process is to make the model learn the properties of the data distribution from a limited training dataset.
  • the goal is to learn to use a limited training dataset in order to learn to generalize to previously unseen data, i.e., data which was not used for training the model. This is usually referred to as generalization.
  • data may be split into at least two sets, the training set and the validation set.
  • the training set is used for training the network, i.e., to modify its learnable parameters in order to minimize the loss.
  • the validation set is used for checking the performance of the network on data which was not used to minimize the loss, as an indication of the final performance of the model.
  • the errors on the training set and on the validation set are monitored during the training process to understand the following things:
  • the validation set error should decrease and should not be much higher than the training set error. If the training set error is low, but the validation set error is much higher than the training set error, or it does not decrease, or it even increases, the model is in the regime of overfitting. This means that the model has merely memorized the training set’s properties and performs well only on that set, but performs poorly on a set not used for tuning its parameters.
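  • For illustration, a sketch of monitoring the two errors to flag the overfitting regime (the stopping rule and patience value are illustrative choices):

```python
def check_overfitting(train_errors, val_errors, patience=3):
    """Flag the overfitting regime: validation error stops decreasing (or increases)
    while the training error is much lower."""
    best_val = float("inf")
    stalled = 0
    for epoch, (tr, va) in enumerate(zip(train_errors, val_errors)):
        if va < best_val:
            best_val, stalled = va, 0
        else:
            stalled += 1
        if stalled >= patience and tr < va:
            return f"possible overfitting around epoch {epoch - patience + 1}"
    return "no overfitting detected"

print(check_overfitting([0.9, 0.5, 0.3, 0.2, 0.1, 0.05],
                        [0.95, 0.6, 0.5, 0.55, 0.6, 0.7]))
```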
  • neural networks have been used for compressing and de-compressing data such as images, i.e., in an image codec.
  • the most widely used architecture for realizing one component of an image codec is the autoencoder, which is a neural network consisting of two parts: a neural encoder and a neural decoder.
  • the neural encoder takes as input an image and produces a code which requires less bits than the input image. This code may be obtained by applying a binarization or quantization process to the output of the encoder.
  • the neural decoder takes in this code and reconstructs the image which was input to the neural encoder.
  • Such neural encoder and neural decoder may be trained to minimize a combination of bitrate and distortion, where the distortion may be based on one or more of the following metrics: Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), or similar.
  • MSE Mean Squared Error
  • PSNR Peak Signal-to-Noise Ratio
  • SSIM Structural Similarity Index Measure
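  • For illustration, a minimal sketch of such a neural encoder/decoder pair with a quantization step and a combined bitrate/distortion objective (the architecture and the rate proxy are illustrative simplifications):

```python
import numpy as np

rng = np.random.default_rng(0)
We = rng.standard_normal((64, 16)) * 0.1   # neural encoder weights (learnable)
Wd = rng.standard_normal((16, 64)) * 0.1   # neural decoder weights (learnable)

def neural_encode(image):
    return image @ We                      # code with fewer elements than the input

def quantize(code, step=0.5):
    return np.round(code / step) * step    # quantization of the encoder output

def neural_decode(code):
    return code @ Wd                       # reconstruction of the input image

image = rng.standard_normal(64)            # flattened toy "image"
code = quantize(neural_encode(image))
recon = neural_decode(code)

distortion = np.mean((image - recon) ** 2)   # e.g. MSE distortion
rate_proxy = np.count_nonzero(code)          # crude stand-in for the bitrate of the code
loss = distortion + 0.01 * rate_proxy        # combination of bitrate and distortion
print(round(distortion, 3), rate_proxy, round(loss, 3))
```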
  • A video codec comprises an encoder that transforms the input video into a compressed representation suited for storage/transmission and a decoder that can decompress the compressed video representation back into a viewable form.
  • An encoder may discard some information in the original video sequence in order to represent the video in a more compact form (that is, at lower bitrate).
  • Hybrid video codecs may encode the video information in two phases. Firstly, pixel values in a certain picture area (or “block”) are predicted, for example by motion compensation means (finding and indicating an area in one of the previously coded video frames that corresponds closely to the block being coded) or by spatial means (using the pixel values around the block to be coded in a specified manner). Secondly, the prediction error, i.e. the difference between the predicted block of pixels and the original block of pixels, is coded. This may be done by transforming the difference in pixel values using a specified transform (e.g. the Discrete Cosine Transform (DCT)), quantizing the coefficients, and entropy coding the quantized coefficients.
  • DCT Discrete Cosine Transform
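  • For illustration, a sketch of the two phases for one block, coding the prediction error with a DCT followed by quantization (the block size, quantization step and scipy DCT helpers are illustrative choices):

```python
import numpy as np
from scipy.fft import dctn, idctn

def encode_block(block, prediction, qstep=8.0):
    residual = block - prediction                   # phase 2: the prediction error
    coeffs = dctn(residual, norm="ortho")           # specified transform (here a DCT)
    return np.round(coeffs / qstep)                 # quantized transform coefficients

def decode_block(levels, prediction, qstep=8.0):
    residual = idctn(levels * qstep, norm="ortho")  # dequantize and inverse transform
    return prediction + residual                    # reconstructed block

rng = np.random.default_rng(0)
block = rng.integers(0, 256, (8, 8)).astype(float)  # block of pixel values to be coded
prediction = block + rng.normal(0, 3, (8, 8))       # phase 1: e.g. motion-compensated prediction
recon = decode_block(encode_block(block, prediction), prediction)
print(np.abs(block - recon).mean())                 # error introduced by quantization
```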
  • Inter prediction which may also be referred to as temporal prediction, motion compensation, or motion-compensated prediction, exploits temporal redundancy.
  • inter prediction the sources of prediction are previously decoded pictures.
  • Intra prediction utilizes the fact that adjacent pixels within the same picture are likely to be correlated. Intra prediction can be performed in spatial or transform domain, i.e., either sample values or transform coefficients can be predicted. Intra prediction is typically exploited in intra coding, where no inter prediction is applied.
  • One outcome of the coding procedure is a set of coding parameters, such as motion vectors and quantized transform coefficients.
  • Many parameters can be entropy-coded more efficiently if they are predicted first from spatially or temporally neighboring parameters.
  • a motion vector may be predicted from spatially adjacent motion vectors and only the difference relative to the motion vector predictor may be coded.
  • Prediction of coding parameters and intra prediction may be collectively referred to as in-picture prediction.
  • the decoder reconstructs the output video by applying prediction means similar to the encoder to form a predicted representation of the pixel blocks (using the motion or spatial information created by the encoder and stored in the compressed representation) and prediction error decoding (inverse operation of the prediction error coding recovering the quantized prediction error signal in spatial pixel domain). After applying prediction and prediction error decoding means, the decoder sums up the prediction and prediction error signals (pixel values) to form the output video frame.
  • the decoder (and encoder) can also apply additional filtering means to improve the quality of the output video before passing it for display and/or storing it as prediction reference for the forthcoming frames in the video sequence.
  • the motion information may be indicated with motion vectors associated with each motion compensated image block.
  • Each of these motion vectors represents the displacement between the image block in the picture to be coded (on the encoder side) or decoded (on the decoder side) and the prediction source block in one of the previously coded or decoded pictures.
  • those may be coded differentially with respect to block specific predicted motion vectors.
  • the predicted motion vectors may be created in a predefined way, for example calculating the median of the encoded or decoded motion vectors of the adjacent blocks.
  • Another way to create motion vector predictions is to generate a list of candidate predictions from adjacent blocks and/or co-located blocks in temporal reference pictures and signaling the chosen candidate as the motion vector predictor.
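  • For illustration, a sketch of both predictor styles described above: the median of adjacent motion vectors, and a candidate list with a signalled index (neighbour positions and values are hypothetical):

```python
import numpy as np

def median_mv_predictor(neighbour_mvs):
    """Predefined predictor: component-wise median of adjacent blocks' motion vectors."""
    return np.median(np.array(neighbour_mvs), axis=0)

def choose_candidate(candidates, actual_mv):
    """Candidate-list predictor: pick a candidate and signal its index; only the
    difference (motion vector difference) then needs to be coded."""
    costs = [np.sum(np.abs(np.subtract(actual_mv, c))) for c in candidates]
    idx = int(np.argmin(costs))
    return idx, np.subtract(actual_mv, candidates[idx])

left, above, above_right = (2, -1), (3, 0), (4, -2)           # hypothetical neighbour MVs
print(median_mv_predictor([left, above, above_right]))        # [ 3. -1.]
print(choose_candidate([left, above, above_right], (3, -1)))  # (0, array([1, 0]))
```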
  • the reference index of previously coded/decoded picture can be predicted.
  • the reference index is typically predicted from adjacent blocks and/or co-located blocks in the temporal reference picture.
  • high efficiency video codecs can employ an additional motion information coding/decoding mechanism, often called merging/merge mode, where all the motion field information, which includes motion vector and corresponding reference picture index for each available reference picture list, is predicted and used without any modification/correction.
  • predicting the motion field information may be carried out using the motion field information of adjacent blocks and/or co-located blocks in temporal reference pictures, and the used motion field information is signaled as an index into a candidate list filled with the motion field information of available adjacent/co-located blocks.
  • Video encoders may utilize Lagrangian cost functions to find optimal coding modes, e.g. the desired Macroblock mode and associated motion vectors. This kind of cost function uses a weighting factor to tie together the (exact or estimated) image distortion due to lossy coding methods and the (exact or estimated) amount of information that is required to represent the pixel values in an image area:
  • C = D + λR, where C is the Lagrangian cost to be minimized, D is the image distortion (e.g. Mean Squared Error) with the mode and motion vectors considered, λ is the Lagrange multiplier, and R is the number of bits needed to represent the required data to reconstruct the image block in the decoder (including the amount of data to represent the candidate motion vectors).
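  • For illustration, a sketch of using this Lagrangian cost for mode selection (the candidate modes, distortion and rate figures are hypothetical):

```python
def lagrangian_cost(distortion, rate_bits, lam):
    """C = D + lambda * R."""
    return distortion + lam * rate_bits

# Hypothetical candidate coding modes: (name, distortion D, rate R in bits).
modes = [("intra", 120.0, 300), ("inter_16x16", 90.0, 420), ("skip", 200.0, 20)]
lam = 0.4
best = min(modes, key=lambda m: lagrangian_cost(m[1], m[2], lam))
print(best[0], lagrangian_cost(best[1], best[2], lam))   # the mode with the lowest cost
```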
  • Video coding specifications may enable the use of supplemental enhancement information (SEI) messages or alike.
  • SEI supplemental enhancement information
  • Some video coding specifications include SEI network abstraction layer (NAL) units, and some video coding specifications contain both prefix SEI NAL units and suffix SEI NAL units, where the former type can start a picture unit or alike and the latter type can end a picture unit or alike.
  • An SEI NAL unit contains one or more SEI messages, which are not required for the decoding of output pictures but may assist in related processes, such as picture output timing, post-processing of decoded pictures, rendering, error detection, error concealment, and resource reservation.
  • SEI messages are specified in the H.264/AVC, H.265/HEVC, H.266/VVC, and H.274/VSEI standards, and the user data SEI messages enable organizations and companies to specify SEI messages for their own use.
  • the standards may contain the syntax and semantics for the specified SEI messages but a process for handling the messages in the recipient might not be defined. Consequently, encoders may be required to follow the standard specifying a SEI message when they create SEI message(s), and decoders might not be required to process SEI messages for output order conformance.
  • One of the reasons to include the syntax and semantics of SEI messages in standards is to allow different system specifications to interpret the supplemental information identically and hence interoperate. It is intended that system specifications can require the use of particular SEI messages both in the encoding end and in the decoding end, and additionally the process for handling particular SEI messages in the recipient can be specified.
  • the phrase along the bitstream (e.g. indicating along the bitstream) or along a coded unit of a bitstream (e.g. indicating along a coded tile) may be used in claims and described embodiments to refer to transmission, signaling, or storage in a manner that the "out-of-band" data is associated with but not included within the bitstream or the coded unit, respectively.
  • the phrase decoding along the bitstream or along a coded unit of a bitstream or alike may refer to decoding the referred out-of-band data (which may be obtained from out-of-band transmission, signaling, or storage) that is associated with the bitstream or the coded unit, respectively.
  • the phrase along the bitstream may be used when the bitstream is contained in a container file, such as a file conforming to the ISO Base Media File Format, and certain file metadata is stored in the file in a manner that associates the metadata to the bitstream, such as boxes in the sample entry for a track containing the bitstream, a sample group for the track containing the bitstream, or a timed metadata track associated with the track containing the bitstream.
  • a container file such as a file conforming to the ISO Base Media File Format
  • certain file metadata is stored in the file in a manner that associates the metadata to the bitstream, such as boxes in the sample entry for a track containing the bitstream, a sample group for the track containing the bitstream, or a timed metadata track associated with the track containing the bitstream.
  • Image and video codecs may use a set of filters to enhance the visual quality of the predicted visual content and can be applied either in-loop or out-of-loop, or both.
  • in-loop filters the filter applied on one block in the currently-encoded frame will affect the encoding of another block in the same frame and/or in another frame which is predicted from the current frame.
  • An in-loop filter can affect the bitrate and/or the visual quality. In fact, an enhanced block will cause a smaller residual (difference between original block and predicted-and-filtered block), thus requiring less bits to be encoded.
  • An out-of-the-loop filter is applied on a frame after it has been reconstructed; the filtered visual content is not used as a source for prediction, and thus it may only impact the visual quality of the frames that are output by the decoder.
  • NNs neural networks
  • NNs are used to replace one or more of the components of a traditional codec such as VVC/H.266.
  • a traditional codec such as VVC/H.266.
  • traditional refers to those codecs whose components and their parameters may not be learned from data. Examples of such components are:
  • Additional in-loop filter for example by having the NN as an additional in-loop filter with respect to the traditional loop filters.
  • Figure 1 illustrates examples of functioning of NNs as components of a traditional codec's pipeline, in accordance with an embodiment.
  • Figure 1 illustrates an encoder, which also includes a decoding loop.
  • Figure 1 is shown to include components described below:
  • A luma intra pred block or circuit 101: this block or circuit performs intra prediction in the luma domain, for example, by using already reconstructed data from the same frame.
  • the operation of the luma intra pred block or circuit 101 may be performed by a deep neural network such as a convolutional autoencoder.
  • A chroma intra pred block or circuit 102: this block or circuit performs intra prediction in the chroma domain, for example, by using already reconstructed data from the same frame.
  • the chroma intra pred block or circuit 102 may perform cross-component prediction, for example, predicting chroma from luma.
  • the operation of the chroma intra pred block or circuit 102 may be performed by a deep neural network such as a convolutional auto-encoder.
  • An intra pred block or circuit 103 and an inter-pred block or circuit 104: these blocks or circuits perform intra prediction and inter prediction, respectively.
  • the intra pred block or circuit 103 and the inter-pred block or circuit 104 may perform the prediction on all components, for example, luma and chroma.
  • the operations of the intra pred block or circuit 103 and inter-pred block or circuit 104 may be performed by two or more deep neural networks such as convolutional auto-encoders.
  • A probability estimation block or circuit 105 for entropy coding: this block or circuit predicts the probability of the next symbol to encode or decode, which is then provided to the entropy coding module 112, such as the arithmetic coding module, to encode or decode the next symbol.
  • the operation of the probability estimation block or circuit 105 may be performed by a neural network.
  • transform and quantization block or circuit 106 may perform a transform of input data to a different domain, for example, the FFT transform would transform the data to frequency domain.
  • the transform and quantization block or circuit 106 may quantize its input values to a smaller set of possible values.
  • there may be inverse quantization block or circuit and inverse transform block or circuit 113.
  • One or both of the transform block or circuit and quantization block or circuit may be replaced by one or two or more neural networks.
  • One or both of the inverse transform block or circuit and inverse quantization block or circuit 113 may be replaced by one or two or more neural networks.
  • An in-loop filter block or circuit 107: the in-loop filter operates in the decoding loop and filters the output of the inverse transform block or circuit, or more generally the reconstructed data, in order to enhance the reconstructed data with respect to one or more predetermined quality metrics. This filter may affect both the quality of the decoded data and the bitrate of the bitstream output by the encoder.
  • the operation of the in-loop filter block or circuit 107 may be performed by a neural network, such as a convolutional auto-encoder. In examples, the operation of the in-loop filter may be performed by multiple steps or filters, where the one or more steps may be performed by neural networks.
  • the postprocessing filter block or circuit 108 may be performed only at decoder side, as it may not affect the encoding process.
  • the postprocessing filter block or circuit 108 filters the reconstructed data output by the in-loop filter block or circuit 107, in order to enhance the reconstructed data.
  • the postprocessing filter block or circuit 108 may be replaced by a neural network, such as a convolutional auto-encoder.
  • A resolution adaptation block or circuit 109: this block or circuit may downsample the input video frames prior to encoding. Then, in the decoding loop, the reconstructed data may be upsampled, by the upsampling block or circuit 110, to the original resolution.
  • the operation of the resolution adaptation block or circuit 109 may be performed by a neural network such as a convolutional auto-encoder.
  • An encoder control block or circuit 111: this block or circuit optimizes the encoder's parameters, such as which transform to use, which quantization parameters (QP) to use, which intra-prediction mode (out of N intra-prediction modes) to use, and the like.
  • the operation of the encoder control block or circuit 111 may be performed by a neural network, such as a classifier convolutional network, or such as a regression convolutional network.
  • An ME/MC block or circuit 114 performs motion estimation and/or motion compensation, which are two key operations to be performed when performing inter-frame prediction.
  • ME/MC stands for motion estimation / motion compensation.
  • NNs are used as the main components of the image/video codecs.
  • In end-to-end learned compression, there are two main options:
  • Option 1: re-use the video coding pipeline but replace most or all of the components with NNs.
  • Figure 2 illustrates an example of a modified video coding pipeline based on a neural network, in accordance with an embodiment.
  • An example of neural network may include, but is not limited to, a compressed representation of a neural network.
  • Figure 2 is shown to include following components:
  • A neural transform block or circuit 202: this block or circuit transforms the output of a summation/subtraction operation 203 to a new representation of that data, which may have lower entropy and thus be more compressible.
  • A quantization block or circuit 204: this block or circuit quantizes the input data 201 to a smaller set of possible values.
  • An inverse transform and inverse quantization blocks or circuits 206 perform the inverse or approximately inverse operation of the transform and the quantization, respectively.
  • An encoder parameter control block or circuit 208: this block or circuit may control and optimize some or all the parameters of the encoding process, such as parameters of one or more of the encoding blocks or circuits.
  • An entropy coding block or circuit 210: this block or circuit may perform lossless coding, for example based on entropy.
  • One popular entropy coding technique is arithmetic coding.
  • A neural intra-codec block or circuit 212: this block or circuit may be an image compression and decompression block or circuit, which may be used to encode and decode an intra frame.
  • An encoder 214 may be an encoder block or circuit, such as the neural encoder part of an autoencoder neural network.
  • a decoder 216 may be a decoder block or circuit, such as the neural decoder part of an auto-encoder neural network.
  • An intra-coding block or circuit 218 may be a block or circuit performing some intermediate steps between encoder and decoder, such as quantization, entropy encoding, entropy decoding, and/or inverse quantization.
  • A deep loop filter block or circuit 220: this block or circuit performs filtering of reconstructed data, in order to enhance it.
  • A decode picture buffer block or circuit 222 is a memory buffer keeping decoded frames, for example reconstructed frames 224 and enhanced reference frames 226, to be used for inter prediction.
  • An inter-prediction block or circuit 228: this block or circuit performs inter-frame prediction, for example predicting from temporally nearby frames, such as frames 232.
  • An ME/MC 230 performs motion estimation and/or motion compensation, which are two key operations to be performed when performing inter-frame prediction.
  • ME/MC stands for motion estimation / motion compensation.
  • Option 2: re-design the whole pipeline, as follows.
  • - Encoder NN is configured to perform a non-linear transform
  • - Decoder NN is configured to perform a non-linear inverse transform.
  • FIG. 3 shows an encoder NN and a decoder NN being parts of a neural auto-encoder architecture, in accordance with an example.
  • the Analysis Network 301 is an Encoder NN
  • the Synthesis Network 302 is the Decoder NN, which may together be referred to as spatial correlation tools 303, or as neural auto-encoder.
  • the input data 304 is analyzed by the Encoder NN (Analysis Network 301), which outputs a new representation of that input data.
  • the new representation may be more compressible.
  • This new representation may then be quantized, by a quantizer 305, to a discrete number of values.
  • the quantized data is then lossless encoded, for example by an arithmetic encoder 306, thus obtaining a bitstream 307.
  • the example shown in Figure 3 includes an arithmetic decoder 308 and an arithmetic encoder 306.
  • the arithmetic encoder 306, or the arithmetic decoder 308, or the combination of the arithmetic encoder 306 and arithmetic decoder 308 may be referred to as arithmetic codec in some embodiments.
  • the bitstream is first lossless decoded, for example, by using the arithmetic codec decoder
  • the lossless decoded data is dequantized and then input to the Decoder NN, Synthesis Network 302.
  • the output is the reconstructed or decoded data
  • the lossy steps may comprise the Encoder NN and/or the quantization.
  • a training objective function (also called “training loss”) may be utilized, which may comprise one or more terms, or loss terms, or simply losses.
  • the training loss comprises a reconstruction loss term and a rate loss term.
  • the reconstruction loss encourages the system to decode data that is similar to the input data, according to some similarity metric. Examples of reconstruction losses are:
  • MS-SSIM Multi-scale structural similarity
  • error(f1, f2), where f1 and f2 are the features extracted by a pretrained neural network for the input data and the decoded data, respectively, and error() is an error or distance function, such as L1 norm or L2 norm;
  • GANs Generative Adversarial Networks
  • the rate loss encourages the system to compress the output of the encoding stage, such as the output of the arithmetic encoder.
  • By compressing, we mean reducing the number of bits output by the encoding stage.
  • rate loss typically encourages the output of the Encoder NN to have low entropy.
  • Examples of rate losses are the following:
  • a sparsification loss, i.e., a loss that encourages the output of the Encoder NN or the output of the quantization to have many zeros. Examples are L0 norm, L1 norm, L1 norm divided by L2 norm;
  • One or more of reconstruction losses may be used, and one or more of the rate losses may be used, as a weighted sum.
  • the different loss terms may be weighted using different weights, and these weights determine how the final system performs in terms of rate-distortion loss. For example, if more weight is given to the reconstruction losses with respect to the rate losses, the system may learn to compress less but to reconstruct with higher accuracy (as measured by a metric that correlates with the reconstruction losses).
  • These weights may be considered to be hyper-parameters of the training session, and may be set manually by the person designing the training session, or automatically for example by grid search or by using additional neural networks.
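  • For illustration, a minimal sketch of such a weighted sum of a reconstruction loss and a rate loss (the particular losses and weights are illustrative hyper-parameters):

```python
import numpy as np

def training_loss(x, x_hat, code, w_rec=1.0, w_rate=0.05):
    reconstruction = np.mean((x - x_hat) ** 2)   # e.g. MSE reconstruction loss
    rate = np.sum(np.abs(code))                  # e.g. a sparsification (L1) rate loss
    return w_rec * reconstruction + w_rate * rate

x = np.ones(10)                                  # input data
x_hat = x + 0.1                                  # decoded data
code = np.array([0.0, 0.0, 1.5, -0.5])           # output of the encoding/quantization stage
print(round(training_loss(x, x_hat, code), 3))   # 0.11
```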
  • a neural network-based end-to-end learned video coding system may contain an encoder 401, a quantizer 402, a probability model 403, an entropy codec 420 (for example arithmetic encoder 405 / arithmetic decoder 406), a dequantizer 407, and a decoder 408.
  • the encoder 401 and decoder 408 may be two neural networks, or mainly comprise neural network components.
  • the probability model 403 may also comprise mainly neural network components.
  • Quantizer 402, dequantizer 407 and entropy codec 420 may not be based on neural network components, but they may also comprise neural network components, potentially.
  • the encoder component 401 takes a video x 409 as input and converts the video from its original signal space into a latent representation that may comprise a more compressible representation of the input.
  • the latent representation may be a 3-dimensional tensor, where two dimensions represent the vertical and horizontal spatial dimensions, and the third dimension represents the “channels” which contain information at that specific location.
  • the latent representation is a tensor of dimensions (or “shape”) 64x64x32 (i.e., with horizontal size of 64 elements, vertical size of 64 elements, and 32 channels).
  • the channel dimension may also be the first dimension; for example, an input image of 128x128 pixels with 3 color channels may then be represented as a tensor of shape 3x128x128 instead of 128x128x3.
  • another dimension in the input tensor may be used to represent temporal information.
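  • For illustration, the tensor layouts mentioned above expressed with numpy (the shapes follow the examples given here):

```python
import numpy as np

frame_hwc = np.zeros((128, 128, 3))              # height x width x channels
frame_chw = np.transpose(frame_hwc, (2, 0, 1))   # channel-first layout
latent = np.zeros((64, 64, 32))                  # latent: 64x64 spatial, 32 channels
video_latent = np.zeros((16, 64, 64, 32))        # extra leading dimension for time
print(frame_chw.shape, latent.shape, video_latent.shape)
```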
  • the quantizer component 402 quantizes the latent representation into discrete values given a predefined set of quantization levels.
  • Probability model 403 and arithmetic codec component 420 work together to perform lossless compression for the quantized latent representation and generate bitstreams to be sent to the decoder side.
  • For each symbol to be encoded, the probability model 403 estimates the probability distribution of all possible values for that symbol based on a context that is constructed from available information at the current encoding/decoding state, such as the data that has already been encoded/decoded. Then, the arithmetic encoder 405 encodes the input symbols to the bitstream using the estimated probability distributions.
  • the arithmetic decoder 406 and the probability model 403 first decode symbols from the bitstream to recover the quantized latent representation. Then the dequantizer 407 reconstructs the latent representation in continuous values and passes it to the decoder 408 to recover the input video/image. Note that the probability model 403 in this system is shared between the encoding and decoding systems. In practice, this means that a copy of the probability model 403 is used at the encoder side, and another exact copy is used at the decoder side.
  • the encoder 401 , probability model 403, and decoder 408 may be based on deep neural networks.
  • the system may be trained in an end-to-end manner by minimizing a rate-distortion loss function of the form L = D + λR, where D is the distortion loss term, R is the rate loss term, and λ is a weighting factor balancing the two.
  • the distortion loss term may be the mean square error (MSE), structural similarity (SSIM) or other metrics that evaluate the quality of the reconstructed video. Multiple distortion losses may be used and integrated into D, such as a weighted sum of MSE and SSIM.
  • the rate loss term is normally the estimated entropy of the quantized latent representation, which indicates the number of bits necessary to represent the encoded symbols, for example, bits-per-pixel (bpp).
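  • For illustration, a highly simplified sketch of the encode/decode path and the rate-distortion objective described above, with all components 401-408 reduced to toy stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
We = rng.standard_normal((48, 12)) * 0.1    # encoder 401 (stand-in for a neural network)
Wd = rng.standard_normal((12, 48)) * 0.1    # decoder 408 (stand-in for a neural network)
levels = np.linspace(-2.0, 2.0, 17)         # predefined quantization levels (quantizer 402)

def quantize(y):
    return levels[np.argmin(np.abs(y[:, None] - levels[None, :]), axis=1)]

def estimated_rate(y_hat):
    # Stand-in for the probability model 403: empirical entropy of the quantized symbols.
    _, counts = np.unique(y_hat, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p)) * y_hat.size   # rough number of bits for the symbols

x = rng.standard_normal(48)                 # input frame (flattened toy signal)
y_hat = quantize(x @ We)                    # quantized latent representation
x_hat = y_hat @ Wd                          # dequantized and decoded output
D = np.mean((x - x_hat) ** 2)               # distortion loss term
R = estimated_rate(y_hat)                   # rate loss term
lam = 0.01
print(round(D + lam * R, 3))                # rate-distortion loss L = D + lambda * R
```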
  • the system may contain only the probability model 403 and arithmetic encoder/decoder 405, 406.
  • the system loss function contains only the rate loss, since the distortion loss is always zero (i.e., no loss of information).
  • Reducing the distortion in image and video compression is often intended to increase human perceptual quality, as humans are considered to be the end users, i.e. consuming/watching the decoded image.
  • machines i.e., autonomous agents
  • Examples of such analysis are object detection, scene classification, semantic segmentation, video event detection, anomaly detection, pedestrian tracking, etc.
  • Example use cases and applications are self-driving cars, video surveillance cameras and public safety, smart sensor networks, smart TV and smart advertisement, person re-identification, smart traffic monitoring, drones, etc.
  • VCM Video Coding for Machines
  • VCM concerns the encoding of video streams to allow consumption by machines.
  • The term machine refers to any device other than a human.
  • Examples of machines are a mobile phone, an autonomous vehicle, a robot, and other intelligent devices which may have a degree of autonomy or run an intelligent algorithm to process the decoded stream beyond reconstructing the original input stream.
  • a machine may perform one or multiple tasks on the decoded stream.
  • Examples of tasks are classification, object detection and tracking, captioning, action recognition, and similar objectives.
  • the receiver-side device has multiple “machines” or task neural networks (Task-NNs). These multiple machines may be used in a certain combination which is for example determined by an orchestrator sub-system.
  • the multiple machines may be used for example in succession, based on the output of the previously used machine, and/or in parallel. For example, a video which was compressed and then decompressed may be analyzed by one machine (NN) for detecting pedestrians, by another machine (another NN) for detecting cars, and by another machine (another NN) for estimating the depth of all the pixels in the frames.
  • FIG. 5 is a general illustration of the pipeline of Video Coding for Machines.
  • a VCM encoder 502 encodes the input video into a bitstream 504.
  • a bitrate 506 may be computed 508 from the bitstream 504 in order to evaluate the size of the bitstream.
  • a VCM decoder 510 decodes the bitstream output by the VCM encoder 502.
  • the output of the VCM decoder 510 is referred to as “Decoded data for machines” 512. This data may be considered as the decoded or reconstructed video. However, in some implementations of this pipeline, this data may not have the same or similar characteristics as the original video which was input to the VCM encoder 502.
  • this data may not be easily understandable by a human by simply rendering the data onto a screen.
  • the output of VCM decoder is then input to one or more task neural networks 514.
  • Among the task-NNs 514 there are three example task-NNs and a non-specified one (Task-NN X).
  • the goal of VCM is to obtain a low bitrate while guaranteeing that the task-NNs still perform well in terms of the evaluation metric 516 associated to each task.
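  • For illustration, a toy sketch of this evaluation: the bitrate is computed from the bitstream and each task-NN is scored with its associated metric (the task-NNs, metrics and numbers are hypothetical stand-ins):

```python
def evaluate_vcm(bitstream, decoded, task_nns, metrics):
    """Lower bitrate is better, as long as each task metric stays high."""
    bitrate = len(bitstream) * 8                                      # bits in the bitstream
    scores = {name: metrics[name](task(decoded)) for name, task in task_nns.items()}
    return bitrate, scores

# Hypothetical stand-ins for decoded data, task-NNs and their evaluation metrics.
decoded = [0.2, 0.8, 0.5]
task_nns = {"detection": lambda d: [v > 0.5 for v in d],
            "classification": lambda d: int(max(d) > 0.7)}
metrics = {"detection": lambda out: sum(out) / len(out),              # e.g. detection rate
           "classification": lambda out: float(out)}                  # e.g. accuracy proxy
print(evaluate_vcm(b"\x01\x02\x03", decoded, task_nns, metrics))
```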
  • FIG. 6 illustrates an example of a pipeline for the end-to-end learned approach.
  • the video is input to a neural network encoder 601.
  • the output of the neural network encoder 601 is input to a lossless encoder 602, such as an arithmetic encoder, which outputs a bitstream 604.
  • the lossless codec may include a probability model 603, used both in the lossless encoder and in the lossless decoder, which predicts the probability of the next symbol to be encoded and decoded.
  • the probability model 603 may also be learned, for example it may be a neural network.
  • the bitstream 604 is input to a lossless decoder 605, such as an arithmetic decoder, whose output is input to a neural network decoder 606.
  • the output of the neural network decoder 606 is the decoded data for machines 607, that may be input to one or more task-NNs 608.
  • Figure 7 illustrates an example of how the end-to-end learned system may be trained. For the sake of simplicity, only one task-NN 707 is illustrated.
  • a rate loss 705 may be computed from the output of the probability model 703. The rate loss 705 provides an approximation of the bitrate required to encode the input video data.
  • a task loss 710 may be computed 709 from the output 708 of the task-NN 707.
  • the rate loss 705 and the task loss 710 may then be used to train 711 the neural networks used in the system, such as the neural network encoder 701, the probability model 703, and the neural network decoder 706. Training may be performed by first computing gradients of each loss with respect to the neural networks that are contributing to or affecting the computation of that loss. The gradients are then used by an optimization method, such as Adam, for updating the trainable parameters of the neural networks.
  • an optimization method such as Adam
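  • For illustration, a minimal training-step sketch combining a rate loss and a task loss and updating the trainable parameters with Adam (the modules are toy stand-ins for blocks 701, 703, 706 and 707; shapes and the rate proxy are illustrative):

```python
import torch

encoder = torch.nn.Linear(16, 8)            # neural network encoder 701 (toy stand-in)
decoder = torch.nn.Linear(8, 16)            # neural network decoder 706 (toy stand-in)
task_nn = torch.nn.Linear(16, 4)            # task-NN 707 (kept fixed; only its loss is used)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

video = torch.randn(32, 16)                 # batch of flattened toy frames
labels = torch.randint(0, 4, (32,))         # hypothetical task ground truth

latent = encoder(video)
rate_loss = latent.abs().mean()             # proxy for the rate loss 705
decoded = decoder(latent)
task_loss = torch.nn.functional.cross_entropy(task_nn(decoded), labels)   # task loss 710

loss = task_loss + 0.1 * rate_loss          # weighted combination of the two losses
opt.zero_grad()
loss.backward()                             # gradients w.r.t. the contributing networks
opt.step()                                  # Adam update of the trainable parameters
print(float(loss))
```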
  • the machine tasks may be performed at decoder side (instead of at encoder side) for multiple reasons, for example because the encoder-side device does not have the capabilities (computational, power, memory) for running the neural networks that perform these tasks, or because some aspects or the performance of the task neural networks may have changed or improved by the time that the decoder-side device needs the tasks results (e.g., different or additional semantic classes, better neural network architecture). Also, there could be a customization need, where different clients would run different neural networks for performing these machine learning tasks.
  • a video codec for machines can be realized by using a traditional codec such as H.266/VVC.
  • another possible design may comprise using a traditional "base" codec, such as H.266/VVC, which additionally comprises one or more neural networks.
  • the one or more neural networks may replace or be an alternative of one of the components of the traditional codec, such as:
  • the one or more neural networks may function as an additional component, such as:
  • VCM can be considered a task agnostic encoding mechanism. It is, however, possible to employ the VCM encoder and decoder in a more intelligent manner, for example, for intelligent region of interest (ROI) based encoding. To achieve such a more intelligent region of interest based approach, one may have to sacrifice the task agnostic nature of the VCM codec in some cases.
  • ROI region of interest
  • Region of interest based encoding refers to a coding process where only some region of an image or a frame is encoded with high quality, while the rest of the image or the frame is encoded with lower quality.
  • Regions of interest in the image or the frame may be determined by various means, comprising e.g. feature-based algorithms, object-based algorithms, saliency-based algorithms, or their combination.
  • the designs are: 1) On-demand VCM encoding; 2) Co-operative VCM encoding; 3) Client agnostic task-based VCM encoding; and 4) Client agnostic task independent VCM encoding.
  • the machines are referred to in this disclosure as a client and a server, where the server is the machine that encodes the image or video and/or the features extracted from the image or video, and the client is the machine that decodes the video, image, and/or features and performs analysis tasks.
  • any machine can act as a server or a client or both.
  • the systems discussed in the present disclosure enable, for example, intelligent region of interest (ROI) -based encoding.
  • ROI region of interest
  • Example use cases may comprise, for example, enhanced streaming of a selection of recorded content from a security camera, action- and object-recognition-based processing systems, and similar cases.
  • FIG. 8 shows an example design of such a system.
  • the system of this embodiment comprises, for example, a traditional video encoder 810 and a decoder 815; low-fidelity task processor 820, ROI estimator 825, ROI encoder 830 and ROI decoder 840, VCM encoder 850, wherein the VCM encoder may have ROI-based capabilities, a buffer 855, and VCM decoder 860.
  • the VCM encoder could be ROI-aware and have ROI encoding capabilities, or alternatively the ROI capability could be achieved by running a simple VCM encoder on top of each ROI.
  • the server may comprise the following components:
  • a traditional video encoder 810 for encoding the video can follow any of the conventional video coding tools;
  • ROI decoder 840 for decoding the ROIs that have been encoded by any client (device) and sent to the server as part of a request;
  • Buffer 855 which is a cache storage that may include the high-quality videos or the previously extracted features.
  • the buffer may have access methods via hash mechanisms or alike;
  • VCM encoder 850 may have the capability of encoding ROIs or may operate independently of ROI information.
  • the client may comprise the following components:
  • a video decoder 815 for allowing reconstruction of the videos at the client.
  • the decoded video can be of low resolution;
  • the low fidelity task network 820 being configured to operate on the decoded image sequences
  • the ROI estimator 825 being configured to produce a list of ROIs.
  • Such an estimator may operate on the output of a task network, on the decoded sequences of images, or on a combination of both the task network output and the decoded sequences;
  • the ROI encoder 830 being configured to encode the list of ROIs for efficient transport.
  • Such ROIs may consist of a list of locations, a density map, or a list of density maps.
  • the list is not limited to these examples, and could include other information that is independent of or dependent on the use case;
  • VCM decoder 860 being configured to decode the VCM bitstream.
  • the server may transmit a video stream, which may be of low resolution, from the video encoder 810 to the client.
  • the video stream may be complemented with information that enables synchronization, such as time stamps and similar information.
  • the server may buffer a high-quality stream for a given period. The server may retrieve any sequence or frame using the synchronization information.
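A minimal sketch of how such a synchronization-keyed buffer could be organized; the class, its hash-based keys and the capacity value are illustrative assumptions rather than part of the described system.

```python
import hashlib
from collections import OrderedDict

class HighQualityBuffer:
    """Cache of high-quality frames (or extracted features), keyed by
    synchronization information such as a timestamp."""

    def __init__(self, capacity=600):
        self.capacity = capacity          # e.g. frames kept for a given period
        self.entries = OrderedDict()      # sync_key -> frame or feature data

    @staticmethod
    def sync_key(stream_id, timestamp):
        # Hash-based access method, as suggested for the buffer.
        return hashlib.sha1(f"{stream_id}:{timestamp}".encode()).hexdigest()

    def put(self, stream_id, timestamp, data):
        key = self.sync_key(stream_id, timestamp)
        self.entries[key] = data
        if len(self.entries) > self.capacity:     # drop the oldest entry
            self.entries.popitem(last=False)

    def get(self, stream_id, timestamp):
        # Retrieve any frame or sequence using the synchronization information.
        return self.entries.get(self.sync_key(stream_id, timestamp))
```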
  • a ROI request can be received from the client or another task in the process workflow on server side.
  • the ROI request may consist of a list of ROIs and the synchronization information transmitted by the server.
  • the ROI request may contain a mode of operation that determines whether features are encoded per ROI or as one feature representing all the ROIs.
  • the list of ROIs may be compressed at the client side.
  • the ROIs may be pre-determined and fixed, in which case an index is communicated for each ROI; alternatively, the list of ROIs can contain the exact or approximate locations of the ROIs.
  • the list of ROIs may need to be remapped to the domain of the high-quality sequence.
  • the server may output features per ROI or a combined feature representing all the ROIs. If the VCM encoder is capable of ROI encoding, such a list of features may further be compressed, for example by considering residual encoding. In residual encoding, for example, a reference feature may be transmitted at fine granularity and the subsequent features in the list may be encoded in terms of their differences with respect to the reference feature. The calculated features and ROI information may also be kept for a period of time in the buffer 855.
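The residual feature encoding mentioned above can be illustrated with a small numpy sketch: the first feature acts as the reference transmitted at fine granularity, and the remaining per-ROI features are carried as coarsely quantized differences. The quantization step sizes are assumptions.

```python
import numpy as np

def encode_roi_features(features, ref_step=0.01, diff_step=0.05):
    """features: list of equally-shaped per-ROI feature arrays.
    The first feature serves as the reference; the rest are residuals."""
    reference = features[0]
    ref_q = np.round(reference / ref_step).astype(np.int32)        # fine granularity
    residuals_q = [np.round((f - reference) / diff_step).astype(np.int32)
                   for f in features[1:]]                           # coarse residuals
    return ref_q, residuals_q

def decode_roi_features(ref_q, residuals_q, ref_step=0.01, diff_step=0.05):
    reference = ref_q.astype(np.float32) * ref_step
    return [reference] + [reference + r.astype(np.float32) * diff_step
                          for r in residuals_q]
```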
  • the server may use the VCM encoder 850 to encode features only in the ROI regions and send the bitstream to the client.
  • the encoding may be performed such that a high-quality reconstruction can be generated from the bitstream.
  • the VCM encoder may encode the features in the ROIs of the video with high quality and features in the non-ROI regions of the video with low quality and send the bitstream to the client.
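One assumed way to realize such mixed-quality encoding is to derive a per-block quantization-parameter map from the ROI list, so that blocks overlapping an ROI get a lower QP (higher quality) than background blocks. The block size and QP values below are placeholders.

```python
import numpy as np

def qp_map_from_rois(frame_h, frame_w, rois, block=16, qp_roi=22, qp_bg=37):
    """rois: list of (x, y, w, h) boxes in pixel coordinates.
    Returns a (frame_h // block, frame_w // block) array of QP values."""
    rows, cols = frame_h // block, frame_w // block
    qp = np.full((rows, cols), qp_bg, dtype=np.int32)
    for x, y, w, h in rois:
        r0, r1 = y // block, min(rows, -(-(y + h) // block))   # ceiling division
        c0, c1 = x // block, min(cols, -(-(x + w) // block))
        qp[r0:r1, c0:c1] = qp_roi      # ROI blocks get higher quality (lower QP)
    return qp
```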
  • a bitstream and synchronization information may be received from a server.
  • a video decoder 815 may decode the received video streams from the bitstream.
  • a task network 820 may be applied to the decoded videos.
  • An example of the task network is an action recognition network.
  • the ROI estimator 825 may be applied to the output of the decoder and/or the task network to generate a list of ROIs or a density map.
  • the list of ROIs may be encoded by a ROI encoder 830 and sent to the server as a ROI request.
  • the ROI request may contain synchronization information to indicate which frames are associated with the ROIs.
  • VCM features relevant to the requested ROI are received in a bitstream from the server and decoded from the bitstream by the VCM decoder 860.
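Taken together, the client-side steps above could be organized roughly as follows; the helper objects stand in for blocks 815, 820, 825, 830 and 860 of Figure 8, and their interfaces are assumptions made for this sketch.

```python
def client_step(video_decoder, task_net, roi_estimator, roi_encoder,
                vcm_decoder, channel, bitstream, sync_info):
    # 1. Reconstruct the (possibly low-resolution) video from the bitstream.
    frames = video_decoder.decode(bitstream)

    # 2. Run the low-fidelity task network, e.g. action recognition.
    task_output = task_net(frames)

    # 3. Estimate ROIs from the decoder output and/or the task output.
    rois = roi_estimator(frames, task_output)      # list of boxes or a density map

    # 4. Encode the ROI list and send it as a request, with synchronization
    #    information indicating which frames the ROIs refer to.
    request = roi_encoder.encode(rois, sync_info, mode="per_roi")
    channel.send(request)

    # 5. Receive and decode the VCM features relevant to the requested ROIs.
    vcm_bitstream = channel.receive()
    return vcm_decoder.decode(vcm_bitstream)
```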
  • the following messages may be communicated to and from the server and the client (the arrows in the table indicate the direction of the communication):
  • Co-operative VCM encoding applies when both machines perform some tasks, and the client has access to the output of the server's tasks and can request ROI-based feature encoding, or has access to certain task-agnostic VCM-encoded features provided by the server upon the client's request.
  • the tasks at client and server may be different or the same.
  • Figure 9 illustrates an example in which the server runs a task and the client has access to the server's task output and can request ROI-based feature encoding.
  • the server may comprise the following components: one or more task networks;
  • VCM encoder 940: the design of the VCM encoder 940, buffer 930 and ROI decoder 920 may follow the same principles as in design 1.
  • the server itself runs one or more task networks; examples of tasks include (but are not limited to) classification, detection, segmentation, etc.
  • the task network output encoder 910 is responsible for encoding the output of the task network for transport in an efficient manner.
  • a client may comprise the following components:
  • Task output decoder 950 being configured to decode the bitstream of the results from the task network at the server side so that they may be used at the client;
  • ROI estimator 960 being configured to produce a list of ROIs. Such an estimator may operate on the task results provided by the server;
  • ROI encoder 960 configured to encode the list of ROIs for efficient transport.
  • ROIs may consist of a list of locations, a density map or a list of density maps, or some results ID that is provided by the server, e.g. an ID indicating a specific ROI.
  • VCM decoder 970 being configured to decode the VCM features.
  • the server may run one or more task networks and produce one or more results.
  • the one or more results may be encoded by the task output encoder 910 and streamed to the one or more clients along with synchronization information.
  • a ROI request may be received by the ROI decoder 920 from the client.
  • the ROI request may consist of a list of ROIs and the synchronization information broadcast by the server.
  • the ROI request may also comprise a mode of operation that determines whether feature encoding is performed per ROI or as one feature representing all the ROIs.
  • the list of ROIs may have been compressed at the client.
  • the ROIs may be predetermined and fixed, in which case an index is communicated for each ROI; alternatively, the list of ROIs can contain the exact or approximate locations of the ROIs.
  • the list of ROIs may be remapped to the domain of the high-quality sequence.
  • the server may output the list of features per ROI or a combined feature representing all the ROIs. If the VCM encoder 940 is enabled for ROI encoding, such a list of features may further be compressed, for example by considering residual encoding. The calculated features and ROI information may be kept for a period of time in the buffer 930.
  • the server may use the VCM encoder 940 to encode features only in the ROI regions and send the bitstream to the client.
  • the encoding may be performed such that a high-quality reconstruction can be generated from the bitstream.
  • the VCM encoder may encode the features in the ROIs of the video with high quality and features in the non-ROI regions of the video with low quality and send the bitstream to the client.
  • a video decoder 950 may decode the received video streams.
  • a task network 960 may be applied to the decoded videos.
  • An example of the task network may be an action recognition task network.
  • the ROI estimator may be applied to the output of the decoder and/or the task network to generate a list of ROIs or a density map.
  • the ROIs may be encoded by the ROI encoder 960 and sent to the server as a ROI request.
  • Such request may contain synchronization information to indicate which frames are associated with the ROIs.
  • the VCM features relevant to the requested ROIs may be received from the server, and decoded by the VCM decoder 970.
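In this co-operative design the client's ROI estimate can be driven directly by the task results decoded from the server's bitstream, for instance detection boxes. The sketch below assumes such box-shaped results; the score threshold and margin are illustrative.

```python
def rois_from_server_results(detections, score_threshold=0.5, margin=8):
    """detections: decoded server task results, assumed to be a list of
    dicts like {"box": (x, y, w, h), "score": float, "label": str}.
    Returns a list of slightly enlarged ROI boxes to request from the server."""
    rois = []
    for det in detections:
        if det["score"] < score_threshold:
            continue
        x, y, w, h = det["box"]
        rois.append((max(0, x - margin), max(0, y - margin),
                     w + 2 * margin, h + 2 * margin))
    return rois
```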
  • the following messages may be communicated between the server and the client (the arrows in the table indicate the direction of the communication):
  • Design 3 Client agnostic task-based VCM encoding
  • the server encodes ROIs using the output of its task network and is agnostic to the client.
  • a client may decide to register and receive the information, given the communicated information such as task information from the server.
  • the design may be used when one task in a machine (server) can help the other machine (client) without any assumption about the client's task.
  • One example use case may be autonomous systems such as drones, vehicles, etc., where the features relevant to one machine’s task can be the extra input to the other machine’s task network.
  • Figure 10 illustrates the design of client agnostic task-based VCM encoding.
  • the server may consist of one or more task networks 1010, an ROI estimator, and a VCM encoder 1020.
  • the client may comprise a VCM decoder 1030, its own task network(s) and any other components that facilitate its operations.
  • the server may receive some configuration information from a user that adapts the server's operations with regard to its one or more task networks 1010.
  • the server may perform one or more tasks and generate one or more results.
  • the one or more results may be used in the server’s ROI estimator 1020 to generate a list of useful ROIs.
  • the list of ROIs may be used to generate the features to be encoded by the VCM encoder 1020.
  • the server may send the encoded ROIs and features to the client as a list of features or a combined feature, in conjunction with some metadata such as the task id of the task used to generate the features and the locations of the ROIs.
  • the server may use the VCM encoder 1020 to encode only the ROIs and send the bitstream to the client.
  • the encoding may be performed such that a high-quality reconstruction can be generated from the bitstream.
  • the VCM encoder may encode the ROI regions of the video with high quality and non-ROI regions of the video with low quality and send the bitstream to the client.
  • the client may receive the VCM bitstream from the server and decode it at the VCM decoder 1030 in conjunction with the metadata that is provided.
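A rough server-side sketch of this client-agnostic, task-based design: the server runs its own task network(s), derives ROIs from the results, encodes features for those ROIs and attaches metadata such as the task id. All interfaces are assumed for illustration.

```python
def server_step(task_nets, roi_estimator, vcm_encoder, channel, frames):
    payloads = []
    for task_id, task_net in task_nets.items():
        results = task_net(frames)              # e.g. detections or segments
        rois = roi_estimator(results)           # ROIs judged useful for other machines

        # Encode features only in the ROI regions (or ROIs at high quality and
        # the rest at low quality, depending on encoder capability).
        bitstream = vcm_encoder.encode(frames, rois)

        payloads.append({
            "task_id": task_id,                 # which task produced the ROIs
            "roi_list": rois,                   # ROI locations
            "num_rois": len(rois),
            "features": bitstream,
        })
    channel.send(payloads)                      # a client may register to receive these
```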
  • the following messages may be communicated to and from the server and client (the arrows in the table indicate the direction of the communication):
  • Design 4 Client agnostic task independent VCM encoding
  • a client agnostic task-independent VCM encoding scheme is one that provides a sequence of features for highly relevant ROIs in a video sequence. It does not make any assumption about the client.
  • An example of such a system is illustrated in Figure 11.
  • the server may consist of, or have an interface to, a ROI estimator 1110 and a VCM feature extraction and encoding component.
  • the server may stream the compressed features, ROI information (including location), and number of ROIs.
  • the ROI estimator estimates the ROIs independently of any task; for example, this may be done based on salient location detection.
  • the client may consist of the VCM decoder 1120, and any task network of preference.
  • the client may receive from the server a stream consisting of compressed features and ROI information, e.g. the number of ROIs and their locations. This information may be decoded by the VCM decoder 1120 and provided to the internal working components of the client.
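Because the task-independent design estimates ROIs from the content alone, a simple stand-in for the ROI estimator 1110 is saliency-style detection of high-activity regions. The gradient-energy criterion below is only one possible, assumed realization.

```python
import numpy as np

def salient_rois(gray_frame, block=32, keep=4):
    """gray_frame: 2-D luma array. Returns up to `keep` block-aligned ROIs
    around the most 'salient' (highest gradient-energy) blocks."""
    gy, gx = np.gradient(gray_frame.astype(np.float32))
    energy = gx ** 2 + gy ** 2
    h, w = gray_frame.shape
    scores = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            scores.append((energy[y:y + block, x:x + block].sum(), x, y))
    scores.sort(reverse=True)
    return [(x, y, block, block) for _, x, y in scores[:keep]]
```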
  • the communicated messages may be (the arrows in the table indicate the direction of the communication):
  • the server additionally includes a traditional video encoder 810 that outputs a video bitstream, which may for example have low resolution. Additionally, the traditional video encoder 810 may output synchronization information, such as time stamps, in or along the video bitstream.
  • the client additionally includes a traditional video decoder 815.
  • the VCM encoder encodes features of the ROI(s) using residual encoding.
  • the encoder may use a reference feature.
  • the reference feature may be representative of a shot or several frames, may be a global generic feature, or may be one of the ROI features that has already been obtained.
  • the difference of the features with respect to the reference feature will be encoded.
  • When the VCM encoder has encoded features of the ROI(s) using residual encoding, the ROI(s) are decoded at the client using residual decoding. In that case, the VCM decoder decodes the reference feature and the differences of the features with respect to the reference feature.
  • One or more of the following semantic information items may be produced by the VCM encoder.
  • Task_id: an id that identifies one task or multiple tasks that could have been used for generating the features at the encoder side.
  • the task_id could also be a list of task_ids in the case of multiple tasks, or a combination of tasks could have a unique id.
  • ROI_enabled_flag: a flag to indicate that ROI-based encoding is used.
  • Global_ROI_feature0_flag: a flag to indicate that the feature set is a global feature but calculated from multiple ROIs.
  • ROI_list: the information about the list of ROIs.
  • Number_of_ROIs: the number of ROIs used in the calculations of ROI-based encoding.
  • Synchronization_id: an id to indicate how the features relate to the video frames in a video bitstream.
  • Residual_encode: a flag to indicate whether residual encoding is used.
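These semantic items could be carried as a small header in or along the VCM bitstream; the sketch below groups them into a dataclass and serializes it as JSON purely for illustration, since the actual syntax and binary encoding are not specified here.

```python
import json
from dataclasses import dataclass, field, asdict
from typing import List, Tuple

@dataclass
class VCMHeader:
    task_id: List[int]                       # one or more task ids
    roi_enabled_flag: bool = True            # ROI-based encoding used
    global_roi_feature_flag: bool = False    # one global feature from multiple ROIs
    roi_list: List[Tuple[int, int, int, int]] = field(default_factory=list)
    number_of_rois: int = 0
    synchronization_id: int = 0              # relates features to video frames
    residual_encode: bool = False            # residual encoding used

def pack_header(header: VCMHeader) -> bytes:
    return json.dumps(asdict(header)).encode("utf-8")

# Example: features for two ROIs of task 3, synchronized to frame 120.
hdr = VCMHeader(task_id=[3], roi_list=[(0, 0, 64, 64), (100, 40, 64, 64)],
                number_of_rois=2, synchronization_id=120, residual_encode=True)
payload = pack_header(hdr)
```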
  • the method according to an embodiment is shown in Figure 12a.
  • the method generally comprises accessing 1210 a list of regions of interest, which regions of interest are pre-determined from one or more video frames; determining 1215 features contained in the regions of interest; encoding 1220 the determined features into a bitstream; and sending 1225 the encoded features to a client with information on regions of interest.
  • Each of the steps can be implemented by a respective module of a computer system.
  • An apparatus comprises means for accessing a list of regions of interest, which regions of interest are pre-determined from one or more video frames; means for determining features contained in the regions of interest; means for encoding the determined features into a bitstream; and means for sending the encoded features to a client with information on regions of interest.
  • the means comprises at least one processor, and a memory including a computer program code, wherein the processor may further comprise processor circuitry.
  • the memory and the computer program code are configured to, with the at least one processor, cause the apparatus to perform the method of Figure 12a according to various embodiments.
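The four steps of Figure 12a map naturally onto a small function; the component interfaces below are assumptions, not the claimed apparatus itself.

```python
def encode_roi_features_for_client(roi_list, frames, feature_extractor,
                                   vcm_encoder, channel):
    # 1210: access a list of regions of interest, pre-determined from the frames
    # 1215: determine the features contained in those regions
    features = [feature_extractor(frames, roi) for roi in roi_list]
    # 1220: encode the determined features into a bitstream
    bitstream = vcm_encoder.encode(features)
    # 1225: send the encoded features to the client with information on the ROIs
    channel.send({"roi_list": roi_list, "features": bitstream})
```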
  • the method according to another embodiment is shown in Figure 12b.
  • the method generally comprises receiving 1230 a bitstream of encoded features from a server with an information on regions of interest; and decoding 1235 the bitstream to determine relevant features for a certain region of interest.
  • Each of the steps can be implemented by a respective module of a computer system.
  • An apparatus comprises means for receiving a bitstream of encoded features from a server with an information on regions of interest; and means for decoding the bitstream to determine relevant features for a certain region of interest.
  • the means comprises at least one processor, and a memory including a computer program code, wherein the processor may further comprise processor circuitry.
  • the memory and the computer program code are configured to, with the at least one processor, cause the apparatus to perform the method of Figure 12b according to various embodiments.
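Correspondingly, a minimal sketch of the client-side method of Figure 12b, again with assumed interfaces:

```python
def decode_features_for_roi(channel, vcm_decoder, wanted_roi):
    # 1230: receive a bitstream of encoded features with ROI information
    message = channel.receive()
    # 1235: decode the bitstream and pick out the features relevant to the
    #       requested region of interest
    features = vcm_decoder.decode(message["features"])
    return {roi: f for roi, f in zip(message["roi_list"], features)
            if roi == wanted_roi}
```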
  • the apparatus is a user equipment for the purposes of the present embodiments.
  • the apparatus 90 comprises a main processing unit 91, a memory 92, a user interface 94, and a communication interface 93.
  • the apparatus may also comprise a camera module 95.
  • the apparatus may be configured to receive image and/or video data from an external camera device over a communication network.
  • the memory 92 stores data including computer program code in the apparatus 90.
  • the computer program code is configured to implement the method according to various embodiments by means of various computer modules.
  • the camera module 95 or the communication interface 93 receives data, in the form of images or a video stream, to be processed by the processor 91.
  • the communication interface 93 forwards processed data, i.e. the image file, for example to a display of another device, such as a virtual reality headset.
  • when the apparatus 90 is a video source comprising the camera module 95, user inputs may be received from the user interface.
  • a device may comprise circuitry and electronics for handling, receiving and transmitting data, computer program code in a memory, and a processor that, when running the computer program code, causes the device to carry out the features of an embodiment.
  • a network device like a server may comprise circuitry and electronics for handling, receiving and transmitting data, computer program code in a memory, and a processor that, when running the computer program code, causes the network device to carry out the features of various embodiments.


Abstract

The embodiments relate to a server apparatus and a client apparatus, where the server apparatus comprises means for accessing a list of regions of interest, which regions of interest are pre-determined from one or more video frames; means for determining features contained in the regions of interest; means for encoding the determined features into a bitstream; and means for sending the encoded features to a client with information on regions of interest. The client apparatus comprises at least means for receiving a bitstream of encoded features from a server with an information on regions of interest; and means for decoding the bitstream to determine relevant features for a certain region of interest. The embodiments also relate to corresponding methods.

Description

A METHOD, AN APPARATUS AND A COMPUTER PROGRAM PRODUCT FOR VIDEO ENCODING AND VIDEO DECODING
Technical Field
The present solution generally relates to video coding. In particular, the solution relates to video coding for machines (VCM).
Background
One of the elements in image and video compression is to compress data while maintaining the quality to satisfy human perceptual ability. However, in recent development of machine learning, machines can replace humans when analyzing data for example in order to detect events and/or objects in video/image. Thus, when decoded image data is consumed by machines, the quality of the compression can be different from the human approved quality. Therefore a concept Video Coding for Machines (VCM) has been provided.
Summary
The scope of protection sought for various embodiments of the invention is set out by the independent claims. The embodiments and features, if any, described in this specification that do not fall under the scope of the independent claims are to be interpreted as examples useful for understanding various embodiments of the invention.
Various aspects include a method, an apparatus and a computer readable medium comprising a computer program stored therein, which are characterized by what is stated in the independent claims. Various embodiments are disclosed in the dependent claims.
According to a first aspect, there is provided an apparatus comprising at least means for accessing a list of regions of interest, which regions of interest are pre-determined from one or more video frames; means for determining features contained in the regions of interest; means for encoding the determined features into a bitstream; and means for sending the encoded features to a client with information on regions of interest.
According to a second aspect, there is provided an apparatus comprising at least means for receiving a bitstream of encoded features from a server with an information on regions of interest; and means for decoding the bitstream to determine relevant features for a certain region of interest.
According to a third aspect, there is provided a method, comprising accessing a list of regions of interest, which regions of interest are pre-determined from one or more video frames; determining features contained in the regions of interest; encoding the determined features into a bitstream; and sending the encoded features to a client with information on regions of interest.
According to a fourth aspect, there is provided a method, comprising receiving a bitstream of encoded features from a server with an information on regions of interest; and decoding the bitstream to determine relevant features for a certain region of interest.
According to a fifth aspect, there is provided an apparatus comprising at least one processor, memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: access a list of regions of interest, which regions of interest are pre-determined from one or more video frames; determine features contained in the regions of interest; encode the determined features into a bitstream; and send the encoded features to a client with information on regions of interest.
According to a sixth aspect, there is provided an apparatus comprising at least one processor, memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: receive a bitstream of encoded features from a server with an information on regions of interest; and decode the bitstream to determine relevant features for a certain region of interest.

According to a seventh aspect, there is provided a computer program product comprising computer program code configured to, when executed on at least one processor, cause an apparatus or a system to access a list of regions of interest, which regions of interest are pre-determined from one or more video frames; determine features contained in the regions of interest; encode the determined features into a bitstream; and send the encoded features to a client with information on regions of interest.

According to an eighth aspect, there is provided a computer program product comprising computer program code configured to, when executed on at least one processor, cause an apparatus or a system to receive a bitstream of encoded features from a server with an information on regions of interest; and decode the bitstream to determine relevant features for a certain region of interest.
According to an embodiment, the list of regions of interest is received from a client.
According to an embodiment, the list of regions is generated by a region of interest estimator component.
According to an embodiment, the features indicated in the list of regions of interest are encoded with a quality higher than other parts of a frame.
According to an embodiment, the list of regions of interest being received from a client is an encoded list of regions of interest, wherein the server comprises means for decoding the encoded list of regions of interest.
According to an embodiment, the video frames are encoded and transmitted with synchronization information to a client.
According to an embodiment, a task result from a task output encoder is encoded.
According to an embodiment, the computer program product is embodied on a non-transitory computer readable medium.

Description of the Drawings
In the following, various embodiments will be described in more detail with reference to the appended drawings, in which
Fig. 1 shows an example of a codec with neural network (NN) components;
Fig. 2 shows another example of a video coding system with neural network components;
Fig. 3 shows an example of a neural auto-encoder architecture;
Fig. 4 shows an example of a neural network-based end-to-end learned video coding system;
Fig. 5 shows an example of a video coding for machines;
Fig. 6 shows an example of a pipeline for end-to-end learned system;
Fig. 7 shows an example of training an end-to-end learned system;
Fig. 8 shows an example of an on-demand VCM encoding;
Fig. 9 shows an example of co-operative VCM encoding;
Fig. 10 shows an example of a client agnostic task-based VCM encoding;
Fig. 11 shows an example of a client agnostic task independent VCM encoding;
Fig. 12a is a flowchart illustrating a method according to an embodiment;
Fig. 12b is a flowchart illustrating a method according to another embodiment; and
Fig. 13 illustrates an apparatus according to an embodiment.
Description of Example Embodiments
The following description and drawings are illustrative and are not to be construed as unnecessarily limiting. The specific details are provided for a thorough understanding of the disclosure. However, in certain instances, well-known or conventional details are not described in order to avoid obscuring the description. References to one or an embodiment in the present disclosure can be, but are not necessarily, references to the same embodiment, and such references mean at least one of the embodiments.
Reference in this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure.
In the present disclosure, terms “data,” “content,” “information,” and similar terms may be used interchangeably to refer to data capable of being transmitted, received and/or stored in accordance with embodiments of the present invention.
In the present disclosure, the term “computer-readable storage medium”, which refers to a physical storage medium (e.g., a volatile or non-volatile memory device), may be differentiated from a “computer-readable transmission medium,” which refers to an electromagnetic signal.
The present embodiments provide a system and semantics for region of interest (ROI) based video coding for machines (VCM).
Before discussing the present embodiments in more detail, a short reference to related technology is given. A neural network (NN) is a computation graph consisting of several layers of computation. Each layer consists of one or more units, where each unit performs an elementary computation. A unit is connected to one or more other units, and a weight may be associated with the connection. The weight may be used for scaling the signal passing through the associated connection. Weights are learnable parameters, i.e., values which can be learned from training data. There may be other learnable parameters, such as those of batch-normalization layers.
Two of the most widely used architectures for neural networks are feed-forward and recurrent architectures. Feed-forward neural networks are such that there is no feedback loop: each layer takes input from one or more of the layers before and provides its output as the input for one or more of the subsequent layers. Also, units inside a certain layer take input from units in one or more of preceding layers, and provide output to one or more of following layers.
Initial layers (those close to the input data) extract semantically low-level features such as edges and textures in images, and intermediate and final layers extract more high-level features. After the feature extraction layers there may be one or more layers performing a certain task, such as classification, semantic segmentation, object detection, denoising, style transfer, super-resolution, etc. In recurrent neural networks, there is a feedback loop, so that the network becomes stateful, i.e., it is able to memorize information or a state.
Neural networks are being utilized in an ever-increasing number of applications for many different types of device, such as mobile phones. Examples include image and video analysis and processing, social media data analysis, device usage data analysis, etc.
One of the important properties of neural networks (and other machine learning tools) is that they are able to learn properties from input data, either in a supervised way or in an unsupervised way. Such learning is a result of a training algorithm, or of a meta-level neural network providing the training signal. In general, the training algorithm consists of changing some properties of the neural network so that its output is as close as possible to a desired output. For example, in the case of classification of objects in images, the output of the neural network can be used to derive a class or category index which indicates the class or category that the object in the input image belongs to. Training usually happens by minimizing or decreasing the output's error, also referred to as the loss. Examples of losses are mean squared error, cross-entropy, etc. In recent deep learning techniques, training is an iterative process, where at each iteration the algorithm modifies the weights of the neural network to make a gradual improvement of the network's output, i.e., to gradually decrease the loss.
In this description, terms “model” and “neural network” are used interchangeably, and also the weights of neural networks are sometimes referred to as learnable parameters or simply as parameters.
Training a neural network is an optimization process. The goal of the optimization or training process is to make the model learn the properties of the data distribution from a limited training dataset. In other words, the goal is to learn to use a limited training dataset in order to learn to generalize to previously unseen data, i.e., data which was not used for training the model. This is usually referred to as generalization. In practice, data may be split into at least two sets, the training set and the validation set. The training set is used for training the network, i.e., to modify its learnable parameters in order to minimize the loss. The validation set is used for checking the performance of the network on data which was not used to minimize the loss, as an indication of the final performance of the model. In particular, the errors on the training set and on the validation set are monitored during the training process to understand the following things:
- If the network is learning at all - in this case, the training set error should decrease, otherwise the model is in the regime of underfitting.
- If the network is learning to generalize - in this case, also the validation set error needs to decrease and to be not too much higher than the training set error. If the training set error is low, but the validation set error is much higher than the training set error, or it does not decrease, or it even increases, the model is in the regime of overfitting. This means that the model has just memorized the training set’s properties and performs well only on that set, but performs poorly on a set not used for tuning its parameters.
Lately, neural networks have been used for compressing and de-compressing data such as images, i.e., in an image codec. The most widely used architecture for realizing one component of an image codec is the auto-encoder, which is a neural network consisting of two parts: a neural encoder and a neural decoder. The neural encoder takes as input an image and produces a code which requires fewer bits than the input image. This code may be obtained by applying a binarization or quantization process to the output of the encoder. The neural decoder takes in this code and reconstructs the image which was input to the neural encoder.
Such a neural encoder and neural decoder may be trained to minimize a combination of bitrate and distortion, where the distortion may be based on one or more of the following metrics: Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), or similar. These distortion metrics are meant to be correlated to the human visual perception quality, so that minimizing or maximizing one or more of these distortion metrics results in improving the visual quality of the decoded image as perceived by humans.
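A compact PyTorch-style sketch of such a neural image codec: the neural encoder maps the image to a code, the code is quantized, and the neural decoder reconstructs the image; training would then minimize a weighted combination of a rate proxy and a distortion such as MSE. The layer sizes and the rounding-based quantizer are illustrative assumptions.

```python
import torch
import torch.nn as nn

class NeuralImageCodec(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.encoder = nn.Sequential(              # image -> compact code
            nn.Conv2d(3, channels, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(channels, channels, 5, stride=2, padding=2))
        self.decoder = nn.Sequential(              # code -> reconstructed image
            nn.ConvTranspose2d(channels, channels, 5, stride=2,
                               padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(channels, 3, 5, stride=2,
                               padding=2, output_padding=1))

    def forward(self, x):
        code = self.encoder(x)
        # straight-through rounding as a stand-in for quantization
        code_q = code + (torch.round(code) - code).detach()
        return self.decoder(code_q), code_q

model = NeuralImageCodec()
x = torch.rand(1, 3, 128, 128)
x_hat, code = model(x)
distortion = nn.functional.mse_loss(x_hat, x)      # MSE distortion term
```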
A video codec comprises an encoder that transforms the input video into a compressed representation suited for storage/transmission and a decoder that can decompress the compressed video representation back into a viewable form. An encoder may discard some information in the original video sequence in order to represent the video in a more compact form (that is, at a lower bitrate).
Hybrid video codecs, for example ITU-T H.263 and H.264, may encode the video information in two phases. Firstly pixel values in a certain picture area (or “block”) are predicted for example by motion compensation means (finding and indicating an area in one of the previously coded video frames that corresponds closely to the block being coded) or by spatial means (using the pixel values around the block to be coded in a specified manner). Secondly the prediction error, i.e. the difference between the predicted block of pixels and the original block of pixels, is coded. This may be done by transforming the difference in pixel values using a specified transform (e.g. Discrete Cosine Transform (DCT) or a variant of it), quantizing the coefficients and entropy coding the quantized coefficients. By varying the fidelity of the quantization process, encoder can control the balance between the accuracy of the pixel representation (picture quality) and size of the resulting coded video representation (file size or transmission bitrate).
Inter prediction, which may also be referred to as temporal prediction, motion compensation, or motion-compensated prediction, exploits temporal redundancy. In inter prediction the sources of prediction are previously decoded pictures.
Intra prediction utilizes the fact that adjacent pixels within the same picture are likely to be correlated. Intra prediction can be performed in spatial or transform domain, i.e., either sample values or transform coefficients can be predicted. Intra prediction is typically exploited in intra coding, where no inter prediction is applied.
One outcome of the coding procedure is a set of coding parameters, such as motion vectors and quantized transform coefficients. Many parameters can be entropy-coded more efficiently if they are predicted first from spatially or temporally neighboring parameters. For example, a motion vector may be predicted from spatially adjacent motion vectors and only the difference relative to the motion vector predictor may be coded. Prediction of coding parameters and intra prediction may be collectively referred to as in-picture prediction.
The decoder reconstructs the output video by applying prediction means similar to the encoder to form a predicted representation of the pixel blocks (using the motion or spatial information created by the encoder and stored in the compressed representation) and prediction error decoding (inverse operation of the prediction error coding recovering the quantized prediction error signal in spatial pixel domain). After applying prediction and prediction error decoding means, the decoder sums up the prediction and prediction error signals (pixel values) to form the output video frame. The decoder (and encoder) can also apply additional filtering means to improve the quality of the output video before passing it for display and/or storing it as prediction reference for the forthcoming frames in the video sequence.
In video codecs, the motion information may be indicated with motion vectors associated with each motion compensated image block. Each of these motion vectors represents the displacement between the image block in the picture to be coded (in the encoder side) or decoded (in the decoder side) and the prediction source block in one of the previously coded or decoded pictures. In order to represent motion vectors efficiently, those may be coded differentially with respect to block specific predicted motion vectors. In video codecs, the predicted motion vectors may be created in a predefined way, for example by calculating the median of the encoded or decoded motion vectors of the adjacent blocks. Another way to create motion vector predictions is to generate a list of candidate predictions from adjacent blocks and/or co-located blocks in temporal reference pictures and signaling the chosen candidate as the motion vector predictor. In addition to predicting the motion vector values, the reference index of a previously coded/decoded picture can be predicted. The reference index is typically predicted from adjacent blocks and/or co-located blocks in the temporal reference picture. Moreover, high efficiency video codecs can employ an additional motion information coding/decoding mechanism, often called merging/merge mode, where all the motion field information, which includes a motion vector and corresponding reference picture index for each available reference picture list, is predicted and used without any modification/correction. Similarly, predicting the motion field information may be carried out using the motion field information of adjacent blocks and/or co-located blocks in temporal reference pictures, and the used motion field information is signaled by means of an index into a motion field candidate list filled with motion field information of available adjacent/co-located blocks.
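For instance, the median-based motion vector prediction mentioned above can be sketched as follows; the three-neighbour choice (left, above, above-right) is a typical but assumed configuration.

```python
def median_mv_predictor(mv_left, mv_above, mv_above_right):
    """Each argument is an (mvx, mvy) tuple from an adjacent, already coded block.
    The predictor is the component-wise median of the three candidates."""
    def median3(a, b, c):
        return sorted((a, b, c))[1]
    return (median3(mv_left[0], mv_above[0], mv_above_right[0]),
            median3(mv_left[1], mv_above[1], mv_above_right[1]))

# Only the difference to the predictor would then be entropy coded:
mv = (5, -2)
pred = median_mv_predictor((4, -1), (6, -3), (5, 0))   # -> (5, -1)
mvd = (mv[0] - pred[0], mv[1] - pred[1])               # -> (0, -1)
```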
In video codecs the prediction residual after motion compensation may be first transformed with a transform kernel (like DCT) and then coded. The reason for this is that often there still exists some correlation among the residual and transform can in many cases help reduce this correlation and provide more efficient coding. Video encoders may utilize Lagrangian cost functions to find optimal coding modes, e.g. the desired Macroblock mode and associated motion vectors. This kind of cost function uses a weighting factor to tie together the (exact or estimated) image distortion due to lossy coding methods and the (exact or estimated) amount of information that is required to represent the pixel values in an image area:
C = D + λR, where C is the Lagrangian cost to be minimized, D is the image distortion (e.g. Mean Squared Error) with the mode and motion vectors considered, λ is the Lagrangian multiplier (the weighting factor), and R is the number of bits needed to represent the required data to reconstruct the image block in the decoder (including the amount of data to represent the candidate motion vectors).
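In code, the mode decision then simply picks the candidate with the smallest Lagrangian cost; the candidate list and lambda value below are made-up placeholders.

```python
def best_mode(candidates, lam):
    """candidates: iterable of (mode, distortion_D, rate_R_bits) tuples.
    Returns the mode minimizing C = D + lambda * R."""
    return min(candidates, key=lambda c: c[1] + lam * c[2])[0]

# Example with made-up numbers: one intra and two inter candidates.
modes = [("intra", 1200.0, 96), ("inter_mv0", 900.0, 180), ("inter_merge", 950.0, 120)]
print(best_mode(modes, lam=2.0))      # -> 'inter_merge' (950 + 2.0 * 120 = 1190)
```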
Video coding specifications may enable the use of supplemental enhancement information (SEI) messages or the like. Some video coding specifications include SEI network abstraction layer (NAL) units, and some video coding specifications contain both prefix SEI NAL units and suffix SEI NAL units, where the former type can start a picture unit or the like and the latter type can end a picture unit or the like. An SEI NAL unit contains one or more SEI messages, which are not required for the decoding of output pictures but may assist in related processes, such as picture output timing, post-processing of decoded pictures, rendering, error detection, error concealment, and resource reservation. Several SEI messages are specified in the H.264/AVC, H.265/HEVC, H.266/VVC, and H.274/VSEI standards, and the user data SEI messages enable organizations and companies to specify SEI messages for their own use. The standards may contain the syntax and semantics for the specified SEI messages but a process for handling the messages in the recipient might not be defined. Consequently, encoders may be required to follow the standard specifying a SEI message when they create SEI message(s), and decoders might not be required to process SEI messages for output order conformance. One of the reasons to include the syntax and semantics of SEI messages in standards is to allow different system specifications to interpret the supplemental information identically and hence interoperate. It is intended that system specifications can require the use of particular SEI messages both in the encoding end and in the decoding end, and additionally the process for handling particular SEI messages in the recipient can be specified.
The phrase along the bitstream (e.g. indicating along the bitstream) or along a coded unit of a bitstream (e.g. indicating along a coded tile) may be used in claims and described embodiments to refer to transmission, signaling, or storage in a manner that the "out-of-band" data is associated with but not included within the bitstream or the coded unit, respectively. The phrase decoding along the bitstream or along a coded unit of a bitstream or alike may refer to decoding the referred out-of-band data (which may be obtained from out-of-band transmission, signaling, or storage) that is associated with the bitstream or the coded unit, respectively. For example, the phrase along the bitstream may be used when the bitstream is contained in a container file, such as a file conforming to the ISO Base Media File Format, and certain file metadata is stored in the file in a manner that associates the metadata to the bitstream, such as boxes in the sample entry for a track containing the bitstream, a sample group for the track containing the bitstream, or a timed metadata track associated with the track containing the bitstream.
Image and video codecs may use a set of filters to enhance the visual quality of the predicted visual content; these filters can be applied either in-loop or out-of-loop, or both. In the case of in-loop filters, the filter applied on one block in the currently-encoded frame will affect the encoding of another block in the same frame and/or in another frame which is predicted from the current frame. An in-loop filter can affect the bitrate and/or the visual quality. In fact, an enhanced block will cause a smaller residual (difference between the original block and the predicted-and-filtered block), thus requiring fewer bits to be encoded. An out-of-loop filter will be applied on a frame after it has been reconstructed; the filtered visual content will not be used as a source for prediction, and thus it may only impact the visual quality of the frames that are output by the decoder.
Recently, neural networks (NNs) have been used in the context of image and video compression, by following mainly two approaches.
In one approach, NNs are used to replace one or more of the components of a traditional codec such as VVC/H.266. Here, the term “traditional” refers to those codecs whose components and their parameters are not learned from data. Examples of such components are:
- Additional in-loop filter, for example by having the NN as an additional in-loop filter with respect to the traditional loop filters.
- Single in-loop filter, for example by having the NN replacing all traditional in-loop filters.
- Intra-frame prediction.
- Inter-frame prediction.
- Transform and/or inverse transform.
- Probability model for the arithmetic codec.
- Etc.
Figure 1 illustrates examples of functioning of NNs as components of a traditional codec's pipeline, in accordance with an embodiment. In particular, Figure 1 illustrates an encoder, which also includes a decoding loop. Figure 1 is shown to include components described below:
- A luma intra pred block or circuit 101. This block or circuit performs intra prediction in the luma domain, for example, by using already reconstructed data from the same frame. The operation of the luma intra pred block or circuit 101 may be performed by a deep neural network such as a convolutional autoencoder.
- A chroma intra pred block or circuit 102. This block or circuit performs intra prediction in the chroma domain, for example, by using already reconstructed data from the same frame. The chroma intra pred block or circuit 102 may perform cross-component prediction, for example, predicting chroma from luma. The operation of the chroma intra pred block or circuit 102 may be performed by a deep neural network such as a convolutional auto-encoder.
- An intra pred block or circuit 103 and inter-pred block or circuit 104. These blocks or circuits perform intra prediction and inter-prediction, respectively. The intra pred block or circuit 103 and the inter-pred block or circuit 104 may perform the prediction on all components, for example, luma and chroma. The operations of the intra pred block or circuit 103 and inter-pred block or circuit 104 may be performed by two or more deep neural networks such as convolutional auto-encoders.
- A probability estimation block or circuit 105 for entropy coding. This block or circuit performs prediction of probability for the next symbol to encode or decode, which is then provided to the entropy coding module 112, such as the arithmetic coding module, to encode or decode the next symbol. The operation of the probability estimation block or circuit 105 may be performed by a neural network.
- A transform and quantization (T/Q) block or circuit 106. These are actually two blocks or circuits. The transform and quantization block or circuit 106 may perform a transform of input data to a different domain, for example, the FFT transform would transform the data to frequency domain. The transform and quantization block or circuit 106 may quantize its input values to a smaller set of possible values. In the decoding loop, there may be inverse quantization block or circuit and inverse transform block or circuit 113. One or both of the transform block or circuit and quantization block or circuit may be replaced by one or two or more neural networks. One or both of the inverse transform block or circuit and inverse quantization block or circuit 113 may be replaced by one or two or more neural networks.
- An in-loop filter block or circuit 107. The operation of the in-loop filter block or circuit 107 is performed in the decoding loop, and it performs filtering on the output of the inverse transform block or circuit, or in general on the reconstructed data, in order to enhance the reconstructed data with respect to one or more predetermined quality metrics. This filter may affect both the quality of the decoded data and the bitrate of the bitstream output by the encoder. The operation of the in-loop filter block or circuit 107 may be performed by a neural network, such as a convolutional auto-encoder. In examples, the operation of the in-loop filter may be performed by multiple steps or filters, where the one or more steps may be performed by neural networks.
- A postprocessing filter block or circuit 108. The postprocessing filter block or circuit 108 may be performed only at decoder side, as it may not affect the encoding process. The postprocessing filter block or circuit 108 filters the reconstructed data output by the in-loop filter block or circuit 107, in order to enhance the reconstructed data. The postprocessing filter block or circuit 108 may be replaced by a neural network, such as a convolutional auto-encoder.
- A resolution adaptation block or circuit 109: this block or circuit may downsample the input video frames, prior to encoding. Then, in the decoding loop, the reconstructed data may be upsampled, by the upsampling block or circuit 110, to the original resolution. The operation of the resolution adaptation block or circuit 109 may be performed by a neural network such as a convolutional auto-encoder.
- An encoder control block or circuit 111. This block or circuit performs optimization of encoder's parameters, such as what transform to use, what quantization parameters (QP) to use, what intra-prediction mode (out of N intra-prediction modes) to use, and the like. The operation of the encoder control block or circuit 111 may be performed by a neural network, such as a classifier convolutional network, or such as a regression convolutional network.
- An ME/MC block or circuit 114 performs motion estimation and/or motion compensation, which are two key operations to be performed when performing inter-frame prediction. ME/MC stands for motion estimation / motion compensation.
In another approach, commonly referred to as “end-to-end learned compression”, NNs are used as the main components of the image/video codecs. In this second approach, there are two main options:
Option 1 : re-use the video coding pipeline but replace most or all the components with NNs. Referring to Figure 2, it illustrates an example of modified video coding pipeline based on a neural network, in accordance with an embodiment. An example of neural network may include, but is not limited to, a compressed representation of a neural network. Figure 2 is shown to include following components:
- A neural transform block or circuit 202: this block or circuit transforms the output of a summation/subtraction operation 203 to a new representation of that data, which may have lower entropy and thus be more compressible.
- A quantization block or circuit 204: this block or circuit quantizes an input data 201 to a smaller set of possible values.
- An inverse transform and inverse quantization blocks or circuits 206. These blocks or circuits perform the inverse or approximately inverse operation of the transform and the quantization, respectively.
- An encoder parameter control block or circuit 208. This block or circuit may control and optimize some or all the parameters of the encoding process, such as parameters of one or more of the encoding blocks or circuits.
- An entropy coding block or circuit 210. This block or circuit may perform lossless coding, for example based on entropy. One popular entropy coding technique is arithmetic coding.
- A neural intra-codec block or circuit 212. This block or circuit may be an image compression and decompression block or circuit, which may be used to encode and decode an intra frame. An encoder 214 may be an encoder block or circuit, such as the neural encoder part of an autoencoder neural network. A decoder 216 may be a decoder block or circuit, such as the neural decoder part of an auto-encoder neural network. An intra-coding block or circuit 218 may be a block or circuit performing some intermediate steps between encoder and decoder, such as quantization, entropy encoding, entropy decoding, and/or inverse quantization.
- A deep loop filter block or circuit 220. This block or circuit performs filtering of reconstructed data, in order to enhance it.
- A decode picture buffer block or circuit 222. This block or circuit is a memory buffer, keeping the decoded frame, for example, reconstructed frames 224 and enhanced reference frames 226 to be used for inter prediction.
- An inter-prediction block or circuit 228. This block or circuit performs inter-frame prediction, for example, predicts from frames, for example, frames 232, which are temporally nearby. An ME/MC 230 performs motion estimation and/or motion compensation, which are two key operations to be performed when performing inter-frame prediction. ME/MC stands for motion estimation / motion compensation.

Option 2: re-design the whole pipeline, as follows.
- Encoder NN is configured to perform a non-linear transform;
- Quantization and lossless encoding of the encoder NN's output;
- Lossless decoding and dequantization;
- Decoder NN is configured to perform a non-linear inverse transform.
An example of option 2 is described in detail in Figure 3 which shows an encoder NN and a decoder NN being parts of a neural auto-encoder architecture, in accordance with an example. In Figure 3, the Analysis Network 301 is an Encoder NN, and the Synthesis Network 302 is the Decoder NN, which may together be referred to as spatial correlation tools 303, or as neural auto-encoder.
As shown in Figure 3, the input data 304 is analyzed by the Encoder NN (Analysis Network 301), which outputs a new representation of that input data. The new representation may be more compressible. This new representation may then be quantized, by a quantizer 305, to a discrete number of values. The quantized data is then lossless encoded, for example by an arithmetic encoder 306, thus obtaining a bitstream 307. The example shown in Figure 3 includes an arithmetic decoder 308 and an arithmetic encoder 306. The arithmetic encoder 306, or the arithmetic decoder 308, or the combination of the arithmetic encoder 306 and arithmetic decoder 308 may be referred to as arithmetic codec in some embodiments. On the decoding side, the bitstream is first lossless decoded, for example, by using the arithmetic codec decoder 308. The lossless decoded data is dequantized and then input to the Decoder NN, the Synthesis Network 302. The output is the reconstructed or decoded data 309.
In case of lossy compression, the lossy steps may comprise the Encoder NN and/or the quantization.
In order to train this system, a training objective function (also called “training loss”) may be utilized, which may comprise one or more terms, or loss terms, or simply losses. In one example, the training loss comprises a reconstruction loss term and a rate loss term. The reconstruction loss encourages the system to decode data that is similar to the input data, according to some similarity metric. Examples of reconstruction losses are:
- Mean squared error (MSE);
- Multi-scale structural similarity (MS-SSIM);
- Losses derived from the use of a pretrained neural network. For example, error(f1 , f2), where f1 and f2 are the features extracted by a pretrained neural network for the input data and the decoded data, respectively, and error() is an error or distance function, such as L1 norm or L2 norm;
- Losses derived from the use of a neural network that is trained simultaneously with the end-to-end learned codec. For example, adversarial loss can be used, which is the loss provided by a discriminator neural network that is trained adversarially with respect to the codec, following the settings proposed in the context of Generative Adversarial Networks (GANs) and their variants.
The rate loss encourages the system to compress the output of the encoding stage, such as the output of the arithmetic encoder. By “compressing”, we mean reducing the number of bits output by the encoding stage.
When an entropy-based lossless encoder is used, such as an arithmetic encoder, the rate loss typically encourages the output of the Encoder NN to have low entropy. Examples of rate losses are the following:
- A differentiable estimate of the entropy;
- A sparsification loss, i.e., a loss that encourages the output of the Encoder NN or the output of the quantization to have many zeros. Examples are L0 norm, L1 norm, L1 norm divided by L2 norm;
- A cross-entropy loss applied to the output of a probability model, where the probability model may be a NN used to estimate the probability of the next symbol to be encoded by an arithmetic encoder.
One or more of reconstruction losses may be used, and one or more of the rate losses may be used, as a weighted sum. The different loss terms may be weighted using different weights, and these weights determine how the final system performs in terms of rate-distortion loss. For example, if more weight is given to the reconstruction losses with respect to the rate losses, the system may learn to compress less but to reconstruct with higher accuracy (as measured by a metric that correlates with the reconstruction losses). These weights may be considered to be hyper-parameters of the training session, and may be set manually by the person designing the training session, or automatically for example by grid search or by using additional neural networks.
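A sketch of such a weighted training objective, combining an MSE reconstruction loss with a simple L1 sparsification term as the rate proxy; the weights are hyper-parameter assumptions.

```python
import torch
import torch.nn.functional as F

def rd_training_loss(x, x_hat, code, w_rec=1.0, w_rate=0.01):
    reconstruction = F.mse_loss(x_hat, x)     # distortion / reconstruction term
    rate = code.abs().mean()                  # L1 sparsification as a rate proxy
    return w_rec * reconstruction + w_rate * rate

# Typical use during training: loss = rd_training_loss(x, x_hat, code); loss.backward()
```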
As shown in Figure 4, a neural network-based end-to-end learned video coding system may contain an encoder 401 , a quantizer 402, a probability model 403, an entropy codec 420 (for example arithmetic encoder 405 / arithmetic decoder 406), a dequantizer 407, and a decoder 408. The encoder 401 and decoder 408 may be two neural networks, or mainly comprise neural network components. The probability model 403 may also comprise mainly neural network components. Quantizer 402, dequantizer 407 and entropy codec 420 may not be based on neural network components, but they may also comprise neural network components, potentially.
On the encoder side, the encoder component 401 takes a video x 409 as input and converts the video from its original signal space into a latent representation that may comprise a more compressible representation of the input. In the case of an input image, the latent representation may be a 3-dimensional tensor, where two dimensions represent the vertical and horizontal spatial dimensions, and the third dimension represent the “channels” which contain information at that specific location. If the input image is a 128x128x3 RGB image (with horizontal size of 128 pixels, vertical size of 128 pixels, and 3 channels for the Red, Green, Blue color components), and if the encoder downsamples the input tensor by 2 and expands the channel dimension to 32 channels, then the latent representation is a tensor of dimensions (or “shape”) 64x64x32 (i.e., with horizontal size of 64 elements, vertical size of 64 elements, and 32 channels). Please note that the order of the different dimensions may differ depending on the convention which is used; in some cases, for the input image, the channel dimension may be the first dimension, so for the above example, the shape of the input tensor may be represented as 3x128x128, instead of 128x128x3. In the case of an input video (instead of just an input image), another dimension in the input tensor may be used to represent temporal information. The quantizer component 402 quantizes the latent representation into discrete values given a predefined set of quantization levels. Probability model 403 and arithmetic codec component 420 work together to perform lossless compression for the quantized latent representation and generate bitstreams to be sent to the decoder side. Given a symbol to be encoded into the bitstream, the probability model 403 estimates the probability distribution of all possible values for that symbol based on a context that is constructed from available information at the current encoding/decoding state, such as the data that has already been encoded/decoded. Then, the arithmetic encoder 405 encodes the input symbols to bitstream using the estimated probability distributions.
On the decoder side, opposite operations are performed. The arithmetic decoder 406 and the probability model 403 first decode symbols from the bitstream to recover the quantized latent representation. Then the dequantizer 407 reconstructs the latent representation in continuous values and passes it to the decoder 408 to recover the input video/image. Note that the probability model 403 in this system is shared between the encoding and decoding systems. In practice, this means that a copy of the probability model 403 is used at the encoder side, and another exact copy is used at the decoder side.
In this system, the encoder 401, probability model 403, and decoder 408 may be based on deep neural networks. The system may be trained in an end-to-end manner by minimizing the following rate-distortion loss function:
L = D + λR, where D is the distortion loss term, R is the rate loss term, and λ is the weight that controls the balance between the two losses. The distortion loss term may be the mean square error (MSE), structural similarity (SSIM) or other metrics that evaluate the quality of the reconstructed video. Multiple distortion losses may be used and integrated into D, such as a weighted sum of MSE and SSIM. The rate loss term is normally the estimated entropy of the quantized latent representation, which indicates the number of bits necessary to represent the encoded symbols, for example, bits-per-pixel (bpp). For lossless video/image compression, the system may contain only the probability model 403 and the arithmetic encoder/decoder 405, 406. The system loss function contains only the rate loss, since the distortion loss is always zero (i.e., no loss of information).
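For example, the rate-distortion loss L = D + λR may be computed as follows; the MSE distortion, the bits-per-pixel normalization and the default λ are illustrative assumptions rather than part of the disclosure:

```python
import torch
import torch.nn.functional as F

def rate_distortion_loss(x, x_hat, bits_estimate, lmbda=0.01):
    """L = D + lambda * R with an MSE distortion term and a bits-per-pixel rate term."""
    D = F.mse_loss(x_hat, x)                              # distortion loss term
    num_pixels = x.shape[0] * x.shape[-2] * x.shape[-1]   # batch * height * width
    R = bits_estimate / num_pixels                        # rate loss term in bpp
    return D + lmbda * R

x = torch.rand(2, 3, 64, 64)
x_hat = x + 0.05 * torch.randn_like(x)    # a noisy stand-in for the reconstruction
bits = torch.tensor(20000.0)              # e.g. an entropy estimate from the probability model
loss = rate_distortion_loss(x, x_hat, bits)
```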
Reducing the distortion in image and video compression is often intended to increase human perceptual quality, as humans are considered to be the end users, i.e. consuming/watching the decoded image. Recently, with the advent of machine learning, especially deep learning, there is a rising number of machines (i.e., autonomous agents) that analyze data independently from humans and that may even take decisions based on the analysis results without human intervention. Examples of such analysis are object detection, scene classification, semantic segmentation, video event detection, anomaly detection, pedestrian tracking, etc. Example use cases and applications are self-driving cars, video surveillance cameras and public safety, smart sensor networks, smart TV and smart advertisement, person re-identification, smart traffic monitoring, drones, etc. When the decoded data is consumed by machines, a different quality metric shall be used instead of human perceptual quality. Also, dedicated algorithms for compressing and decompressing data for machine consumption are likely to be different than those for compressing and decompressing data for human consumption. The set of tools and concepts for compressing and decompressing data for machine consumption is referred to here as Video Coding for Machines (VCM).
VCM concerns the encoding of video streams to allow consumption by machines. The term machine refers to any device other than a human. Examples of machines are a mobile phone, an autonomous vehicle, a robot, and similar intelligent devices which may have a degree of autonomy or run an intelligent algorithm to process the decoded stream beyond reconstructing the original input stream.
A machine may perform one or multiple tasks on the decoded stream. Examples of tasks are classification, object detection and tracking, captioning, action recognition and similar objectives. It is likely that the receiver-side device has multiple “machines” or task neural networks (Task-NNs). These multiple machines may be used in a certain combination which is for example determined by an orchestrator sub-system. The multiple machines may be used for example in succession, based on the output of the previously used machine, and/or in parallel. For example, a video which was compressed and then decompressed may be analyzed by one machine (NN) for detecting pedestrians, by another machine (another NN) for detecting cars, and by another machine (another NN) for estimating the depth of all the pixels in the frames.
In this description, the terms “task machine”, “machine” and “task neural network” are used interchangeably, and refer to any process or algorithm (whether or not learned from data) which analyzes or processes data for a certain task. In the rest of the description, other assumptions made regarding the machines considered in this disclosure may be specified in further detail.
Figure 5 is a general illustration of the pipeline of Video Coding for Machines. A VCM encoder 502 encodes the input video into a bitstream 504. A bitrate 506 may be computed 508 from the bitstream 504 in order to evaluate the size of the bitstream. A VCM decoder 510 decodes the bitstream output by the VCM encoder 502. In Figure 5, the output of the VCM decoder 510 is referred to as “Decoded data for machines” 512. This data may be considered as the decoded or reconstructed video. However, in some implementations of this pipeline, this data may not have the same or similar characteristics as the original video which was input to the VCM encoder 502. For example, this data may not be easily understandable by a human by simply rendering the data onto a screen. The output of the VCM decoder is then input to one or more task neural networks 514. In the figure, for the sake of illustrating that there may be any number of task-NNs 514, there are three example task-NNs and a non-specified one (Task-NN X). The goal of VCM is to obtain a low bitrate while guaranteeing that the task-NNs still perform well in terms of the evaluation metric 516 associated with each task.
One of the possible approaches to realize video coding for machines is an end-to-end learned approach. In this approach, the VCM encoder and VCM decoder mainly consist of neural networks. Figure 6 illustrates an example of a pipeline for the end-to-end learned approach. The video is input to a neural network encoder 601. The output of the neural network encoder 601 is input to a lossless encoder 602, such as an arithmetic encoder, which outputs a bitstream 604. The lossless codec may utilize a probability model 603, present both in the lossless encoder and in the lossless decoder, which predicts the probability of the next symbol to be encoded and decoded. The probability model 603 may also be learned, for example it may be a neural network. At decoder-side, the bitstream 604 is input to a lossless decoder 605, such as an arithmetic decoder, whose output is input to a neural network decoder 606. The output of the neural network decoder 606 is the decoded data for machines 607, that may be input to one or more task-NNs 608.
Figure 7 illustrates an example of how the end-to-end learned system may be trained. For the sake of simplicity, only one task-NN 707 is illustrated. A rate loss 705 may be computed from the output of the probability model 703. The rate loss 705 provides an approximation of the bitrate required to encode the input video data. A task loss 710 may be computed 709 from the output 708 of the task-NN 707.
The rate loss 705 and the task loss 710 may then be used to train 711 the neural networks used in the system, such as the neural network encoder 701, the probability model 703, and the neural network decoder 706. Training may be performed by first computing gradients of each loss with respect to the neural networks that are contributing to or affecting the computation of that loss. The gradients are then used by an optimization method, such as Adam, for updating the trainable parameters of the neural networks.
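As a minimal, illustrative training step (the toy modules and the rate-loss proxy are assumptions; only the loss, gradient and Adam mechanics follow the description above):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins for the learned blocks of Figure 7; names and sizes are illustrative only.
nn_encoder = nn.Conv2d(3, 8, kernel_size=3, stride=2, padding=1)   # 701: frames -> latent
prob_model = nn.Conv2d(8, 8, kernel_size=3, padding=1)             # 703: predicts the latent
nn_decoder = nn.Conv2d(8, 3, kernel_size=3, padding=1)             # 706: latent -> decoded data
task_nn = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(3, 10))  # 707 (kept fixed)

params = (list(nn_encoder.parameters()) + list(prob_model.parameters())
          + list(nn_decoder.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-4)

frames = torch.rand(4, 3, 64, 64)            # a small batch of frames
labels = torch.randint(0, 10, (4,))          # ground truth for a classification task-NN

latent = nn_encoder(frames)
# Rate loss 705: here a crude proxy (prediction error of the probability model) stands in
# for the entropy-based bitrate estimate described in the text.
rate_loss = F.mse_loss(prob_model(latent), latent.detach())
decoded = nn_decoder(latent)
task_loss = F.cross_entropy(task_nn(decoded), labels)   # task loss 710 from the task-NN output

loss = rate_loss + task_loss                 # weighted sum (both weights set to 1 here)
optimizer.zero_grad()
loss.backward()                              # gradients w.r.t. the contributing networks
optimizer.step()                             # Adam updates the trainable parameters
```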
The machine tasks may be performed at decoder side (instead of at encoder side) for multiple reasons, for example because the encoder-side device does not have the capabilities (computational, power, memory) for running the neural networks that perform these tasks, or because some aspects or the performance of the task neural networks may have changed or improved by the time that the decoder-side device needs the task results (e.g., different or additional semantic classes, better neural network architecture). Also, there could be a customization need, where different clients would run different neural networks for performing these machine learning tasks. As an alternative to an end-to-end trained codec, a video codec for machines can be realized by using a traditional codec such as H.266/VVC.
Alternatively, as described already above for the case of video coding for humans, another possible design may comprise using a traditional "base" codec, such as H.266/VVC, which additionally comprises one or more neural networks. In one possible implementation, the one or more neural networks may replace or be an alternative of one of the components of the traditional codec, such as:
- one or more in-loop filters;
- one or more intra-prediction modes;
- one or more inter-prediction modes;
- one or more transforms;
- one or more inverse transforms;
- one or more probability models, for lossless coding;
- one or more post-processing filters.
In another possible implementation, the one or more neural networks may function as an additional component, such as:
- one or more additional in-loop filters;
- one or more additional intra-prediction modes;
- one or more additional inter-prediction modes;
- one or more additional transforms;
- one or more additional inverse transforms;
- one or more additional probability models, for lossless coding;
- one or more additional post-processing filters.
VCM can be considered a task agnostic encoding mechanism. It is, however, possible to employ the VCM encoder and decoder in a more intelligent manner, for example in an intelligent region of interest (ROI) based encoding. To achieve such a more intelligent region of interest-based approach, one may have to sacrifice the task agnostic nature of the VCM codec in some cases.
To preserve a task agnostic VCM codec while enabling more intelligent encoding, e.g. a region of interest-based approach, a system shall be designed to facilitate utilization of a task agnostic VCM codec. This may require defining elements of communication and potential communication patterns between entities of the system. The present embodiments are targeted to such a need.
Region of interest based encoding refers to a coding process, where only some region of an image or a frame is encoded with high quality, while the rest of the image or the frame is encoded with lower quality. There are alternative ways to detect regions of interest in the image or the frame, comprising e.g. feature-based algorithms, object-based algorithms, saliency-based algorithms, or their combination.
In this disclosure, four possible designs of a system utilizing a VCM encoding are provided. The designs are: 1) On demand VCM encoding; 2) Co-operative VCM encoding; 3) Client agnostic task-based VCM encoding; and 4) Client agnostic task independent VCM encoding. The machines are referred to in this disclosure as a client and a server, where the server is the machine that encodes the image or video and/or the features extracted from the image or video, and the client is the machine that decodes the video, image, and/or features and performs analysis tasks. In a VCM system, any machine can act as a server, a client, or both. The systems discussed in the present disclosure enable, for example, intelligent region of interest (ROI) based encoding.
Design 1: On demand VCM encoding
In this embodiment, the VCM codec can be activated on demand by a client. Example use cases comprise, for example, enhanced streaming of a selection of recorded content in a security camera, action and object recognition-based processing systems, and similar cases.
Figure 8 shows an example design of such a system. The system of this embodiment comprises, for example, a traditional video encoder 810 and a decoder 815; a low-fidelity task processor 820, a ROI estimator 825, a ROI encoder 830 and a ROI decoder 840; a VCM encoder 850, wherein the VCM encoder may have ROI-based capabilities; a buffer 855; and a VCM decoder 860. The VCM encoder could be ROI-aware and have ROI encoding capabilities, or alternatively the ROI capability could be achieved by running a simple VCM encoder on top of each ROI.
The server may comprise the following components:
- A traditional video encoder 810 for encoding the video. Such an encoder can follow any of the conventional video coding tools;
- A region of interest (ROI) decoder 840 for decoding the ROIs that have been encoded by any client (device) and sent to the server as part of a request;
- Buffer 855, which is a cache storage that may include the high-quality videos or the previously extracted features. The buffer may have access methods via hash mechanisms or the like;
- VCM encoder 850 that may have the capability of encoding ROIs or may operate independently of ROI information.
The client may comprise the following components:
- A video decoder 815 for allowing reconstruction of the videos at the client. The decoded video can be of low resolution;
- The low-fidelity task network 820 being configured to operate on the decoded image sequences;
- The ROI estimator 825 being configured to produce a list of ROIs. Such an estimator may operate on the output of a task network; it may also use the decoded sequences of images, or a combination of both the output of the task network and the decoded sequences (see the sketch after this component list);
- The ROI encoder 830 being configured to encode the list of ROIs for efficient transport. Such ROIs may consist of a list of locations, a density map, or a list of density maps. The list is not limited to these examples, and could include other information independent of or dependent on the use case;
- A VCM decoder 860 being configured to decode the VCM bitstream.
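As an illustrative sketch (the data structure, threshold and scoring below are assumptions, not part of the disclosure), the ROI estimator mentioned in the component list above could, for example, turn detections from the low-fidelity task network into a list of ROIs as follows:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ROI:
    frame_index: int
    box: Tuple[int, int, int, int]   # (x, y, width, height) in the decoded low-resolution video
    score: float

def estimate_rois(detections, score_threshold: float = 0.5) -> List[ROI]:
    """detections: iterable of (frame_index, (x, y, w, h), confidence) tuples."""
    rois = [ROI(f, box, s) for f, box, s in detections if s >= score_threshold]
    # Keep the most confident regions first so the server can prioritise them.
    return sorted(rois, key=lambda r: r.score, reverse=True)

# Example: two detections in frame 12, one below the threshold in frame 13.
rois = estimate_rois([(12, (10, 20, 64, 48), 0.9),
                      (12, (80, 40, 32, 32), 0.7),
                      (13, (0, 0, 16, 16), 0.2)])
```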
Server-side operations:
In operation, the server may transmit a video stream, which may be of low resolution, from a video encoder 810 to the client. The video stream may be complemented with information that enables synchronization, such as time stamps and other information of such a nature. In addition, the server may buffer a high-quality stream for a given period. The server may retrieve any sequence or frame using the synchronization information.
At the server side, a ROI request can be received from the client or from another task in the process workflow on the server side. The ROI request may consist of a list of ROIs and the synchronization information transmitted by the server. In addition, the ROI request may contain a mode of operation that determines whether features are encoded per ROI or whether one feature represents all the ROIs. The list of ROIs may be compressed at the client side. The ROIs may be pre-determined and fixed, where an index to the ROI is communicated for each ROI, or the list of ROIs can contain the exact or approximate location of the ROIs. The list of ROIs may need to be remapped to the domain of the high-quality sequence.
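For illustration only, the content of such a ROI request could be organized, for example, as follows; the JSON container, field names and scale factor are assumptions rather than a defined message format:

```python
import json

roi_request = {
    "sync": {"timestamp": "2022-11-09T10:15:32.040Z"},   # synchronization info sent by the server
    "mode": "per_roi",            # or "combined": one feature representing all ROIs
    "rois": [
        {"frame": 12, "box": [10, 20, 64, 48]},          # coordinates in the low-resolution video
        {"frame": 12, "box": [80, 40, 32, 32]},
    ],
}

def remap_to_high_quality(rois, scale=4):
    """Remap ROI boxes from the low-resolution domain to the buffered high-quality domain."""
    return [{"frame": r["frame"], "box": [c * scale for c in r["box"]]} for r in rois]

payload = json.dumps(roi_request)          # in practice the list of ROIs may be compressed
remapped = remap_to_high_quality(roi_request["rois"])
```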
The server may output features per ROI or a combined feature representing all the ROIs. If the VCM encoder is capable of ROI encoding, such a list of features may further be compressed, for example, by considering residual encoding. In residual encoding, for example, a reference feature may be transmitted at fine granularity and the subsequent features in the list may be encoded in terms of differences with the reference feature. The calculated features and ROI information may also be kept for a period of time in the buffer 855.
The server may use the VCM encoder 850 to encode features only in the ROI regions and send the bitstream to the client. The encoding may be performed such that a high-quality reconstruction can be generated from the bitstream. In another option, the VCM encoder may encode the features in the ROIs of the video with high quality and the features in the non-ROI regions of the video with low quality, and send the bitstream to the client.
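As a hedged sketch of ROI-dependent quality (the uniform quantizer and step sizes are assumptions, not the VCM encoder 850 itself), features inside the ROIs may be quantized more finely than features outside them, for example:

```python
import numpy as np

def quantize_with_roi(features: np.ndarray, roi_mask: np.ndarray,
                      fine_step: float = 0.05, coarse_step: float = 0.5) -> np.ndarray:
    """features: (channels, height, width); roi_mask: (height, width) boolean map of ROI locations."""
    step = np.where(roi_mask, fine_step, coarse_step)   # per-location quantization step
    return np.round(features / step) * step             # broadcast over the channel dimension

features = np.random.randn(32, 64, 64).astype(np.float32)
roi_mask = np.zeros((64, 64), dtype=bool)
roi_mask[10:30, 20:50] = True                           # one rectangular ROI
quantized = quantize_with_roi(features, roi_mask)       # fine inside the ROI, coarse elsewhere
```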
Client-side operations:
At the client side, a bitstream and synchronization information may be received from a server. A video decoder 815 may decode the received video streams from the bitstream. A task network 820 may be applied to the decoded videos. An example of the task network can be action recognition. The ROI estimator 825 may be applied to the output of the decoder and/or the task network to generate a list of ROIs or a density map.
The list of ROIs may be encoded by a ROI encoder 830 and sent to the server as a ROI request. The ROI request may contain synchronization information to indicate which frames are associated with the ROIs.
The VCM features relevant to the requested ROI are received in a bitstream from the server and decoded from the bitstream by the VCM decoder 860.
The following messages may be communicated to and from the server and the client (the arrows in the table indicate the direction of the communication):
[Table: messages communicated to and from the server and the client]
Design 2: Co-operative VCM encoding
Co-operative VCM encoding applies when both machines are performing some tasks, and the client either has access to the output of the tasks run in the server and can request ROI-based feature encoding, or has access to certain task agnostic VCM encoded features provided by the server upon the client's request.
In this embodiment, the tasks at client and server may be different or the same.
Figure 9 illustrates an example where the server runs a task, and the client has access to the server's task output and can request ROI-based feature encoding.
The server may comprise the following components:
- one or more task networks;
- a task network output encoder 910;
- ROI decoder 920;
- VCM encoder 940;
- a buffer 930.
The design of the VCM encoder 940, the buffer 930 and the ROI decoder 920 may follow the same principles as in design 1. The server itself runs one or more task networks; examples of tasks may be (but are not limited to) classification, detection, segmentation, etc.
The task network output encoder 910 is responsible for encoding the output of the task network for transport in an efficient manner.
A client may comprise the following components:
- Task output decoder 950 being configured to decode the bitstream of the results from the task network at the server side so that they may be used in the client;
- ROI estimator 960 being configured to produce a list of ROIs. Such an estimator may operate on the task results provided by the server;
- ROI encoder 960 configured to encode the list of ROIs for efficient transport. Such ROIs may consist of a list of locations, a density map or a list of density maps, or some results ID that is provided by the server, e.g. an ID indicating a specific ROI.
- VCM decoder 970 being configured to decode the VCM features.
Server-side operations:
At server side, the server may run one or more task networks and produce one or more results. The one or more results may be encoded by the task output encoder 910 and streamed to the one or more clients along with synchronization information.
At the server side, a ROI request may be received by the ROI decoder 920 from the client. The ROI request may consist of a list of ROIs and the synchronization information broadcast by the server. The ROI request may also comprise a mode of operation that determines whether feature encoding is performed per ROI or whether one feature represents all the ROIs. The list of ROIs may have been compressed at the client. The ROIs may be predetermined and fixed, where an index is communicated for each ROI, or the list of ROIs can contain the exact or approximate location of the ROIs. The list of ROIs may be remapped to the domain of the high-quality sequence.
The server may output the list of features per ROI or a combined feature representing all the ROIs. If the VCM encoder 940 is capable of ROI encoding, such a list of features may further be compressed, for example, by considering residual encoding. The calculated features and ROI information may be kept for a period of time in the buffer 930.
The server may use the VCM encoder 940 to encode features only in the ROI regions and send the bitstream to the client. The encoding may be performed such that a high-quality reconstruction can be generated from the bitstream. In another option, the VCM encoder may encode the features in the ROIs of the video with high quality and the features in the non-ROI regions of the video with low quality, and send the bitstream to the client.
Client-side operations:
At the client side, a bitstream and some synchronization information may be received. A video decoder 950 may decode the received video streams.
A task network 960 may be applied to the decoded videos. An example of the task network may be an action recognition task network. ROI estimator may be applied to the output of the decoder and/or the task network to generate a list of ROIs or a density map.
The ROI may be encoded by ROI encoder 960 and sent to the server as a ROI request. Such request may contain synchronization information to indicate which frames are associated with the ROIs.
The VCM features relevant to the requested ROIs may be received from the server, and decoded by the VCM decoder 970. The following messages may be communicated between the server and the client (the arrows in the table indicate the direction of the communication):
[Table: messages communicated between the server and the client]
Design 3: Client agnostic task-based VCM encoding
In this design, the server encodes ROIs using the output of its task network and is agnostic to the client. A client may decide to register and receive the information, given the communicated information such as task information from the server. The design may be used when one task in a machine (the server) can help the other machine (the client) without any assumption of the client's task. One example use case may be autonomous systems such as drones, vehicles, etc., where the features relevant to one machine's task can be extra input to the other machine's task network.
Figure 10 illustrates the design of client agnostic task-based VCM encoding.
The server may consist of one or more task networks 1010, a ROI estimator and a VCM encoder 1020.
The client may comprise a VCM decoder 1030, its own task network(s) and any other components that facilitate its operations.
Server-side operations:
The server may receive some configuration information from a user, which adapts the server's operations with regard to its one or more task networks 1010. The server may perform one or more tasks and generate one or more results. The one or more results may be used in the server's ROI estimator 1020 to generate a list of useful ROIs. The list of ROIs may be used to generate the features to be encoded by the VCM encoder 1020.
The server may send the encoded ROIs and features to the client as a list of features or a combined feature, in conjunction with some metadata such as the task id of the task used to generate the list of ROIs and the location of the ROIs.
The server may use the VCM encoder 1020 to encode only the ROIs and send the bitstream to the client. The encoding may be performed such that a high-quality reconstruction can be generated from the bitstream. In another option, the VCM encoder may encode the ROI regions of the video with high quality and the non-ROI regions of the video with low quality, and send the bitstream to the client.
Client-side operations:
The client may receive the VCM bitstream from the server and decode it at the VCM decoder 1030 in conjunction with the metadata that is provided.
The following messages may be communicated to and from the server and client (the arrows in the table indicate the direction of the communication):
[Table: messages communicated to and from the server and the client]
Design 4: Client agnostic task independent VCM encoding
A client agnostic task-independent VCM encoding scheme is one that provides a sequence of features of the highly relevant ROIs in a video sequence. It does not make any assumption about the client. An example of such a system is illustrated in Figure 11. The server may consist of, or have an interface to, a ROI estimator 1110 and a VCM feature extraction and encoding component. The server may stream the compressed features, the ROI information (including location), and the number of ROIs.
In this case, the ROI estimator estimates the ROI independent of any task. For example, it may be done based on salient location detection.
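As an illustrative, task-independent example (the local-variance saliency measure, block size and top-k selection are assumptions, not the ROI estimator 1110 itself), salient locations could be turned into ROIs, for example, as follows:

```python
import numpy as np

def salient_rois(frame: np.ndarray, block: int = 16, top_k: int = 4):
    """frame: (H, W) grayscale image; returns the top_k blocks with the highest local variance."""
    h, w = frame.shape
    scored = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = frame[y:y + block, x:x + block]
            scored.append(((x, y, block, block), float(patch.var())))  # (box, saliency score)
    scored.sort(key=lambda item: item[1], reverse=True)
    return [box for box, _ in scored[:top_k]]

frame = np.random.rand(128, 128)
rois = salient_rois(frame)   # e.g. [(x, y, w, h), ...] of the most "salient" blocks
```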
The client may consist of the VCM decoder 1120, and any task network of preference.
The client may receive from the server the stream consisting of the compressed features and the ROI information, e.g. the number of ROIs and their locations. This information may be decoded by the VCM decoder 1120 and provided to the internal working components of the client.
The communicated messages may be (the arrows in the table indicate the direction of the communication):
[Table: messages communicated between the server and the client]
In an embodiment, the server additionally includes a traditional video encoder 810 that outputs a video bitstream, which may for example have low resolution. Additionally, the traditional video encoder 810 may output synchronization information, such as time stamps, in or along the video bitstream. In a respective embodiment, the client additionally includes a traditional video decoder 815.
In an embodiment, the VCM encoder encodes features of the ROI(s) using residual encoding. In such a case, the encoder may use a reference feature. The reference feature may be representative of a shot or of several frames, may be a global generic feature, or may be one of the ROI features that has already been obtained. To encode the subsequent features of the ROI(s), the difference of the features with respect to the reference feature is encoded.
When the VCM encoder has encoded features of the ROI(s) using residual encoding, the ROI(s) are decoded at the client using residual decoding. In that case, the VCM decoder decodes the reference feature and the differences of the features with respect to the reference feature.
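The residual encoding and decoding of ROI features described above may be sketched, for example, as follows (the uniform quantization steps and array shapes are illustrative assumptions):

```python
import numpy as np

def residual_encode(features, ref_step=0.05, diff_step=0.1):
    """features: list of ROI feature tensors; the first one is used as the reference."""
    ref = features[0]
    ref_q = np.round(ref / ref_step) * ref_step               # reference feature, fine granularity
    diffs_q = [np.round((f - ref_q) / diff_step) * diff_step  # differences w.r.t. the reference
               for f in features[1:]]
    return ref_q, diffs_q

def residual_decode(ref_q, diffs_q):
    """Reconstruct each ROI feature from the reference and the coded differences."""
    return [ref_q] + [ref_q + d for d in diffs_q]

features = [np.random.randn(32, 8, 8).astype(np.float32) for _ in range(3)]
ref_q, diffs_q = residual_encode(features)
decoded = residual_decode(ref_q, diffs_q)
```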
Semantics of a ROI-aware VCM encoder
One or more of the following semantical information items may be produced by the VCM encoder.
Task_id: an id that identifies one task or multiple tasks that could have been used for generating the features at the encoder side. The task_id could also be a list of task_ids in case of multiple tasks, or a combination of tasks could have a unique id.
ROI_enabled_flag: a flag to indicate that ROI-based encoding is used.
Global_ROI_feature_flag: a flag to indicate that the feature set is a global feature but calculated from multiple ROIs.
ROI_list: the information about the list of ROIs.
Number_of_ROIs: the number of ROIs used in the calculations of a ROI-based encoding.
Synchronization_id: an id to indicate how the features relate to the video frames in a video bitstream.
Residual_encode: a flag to indicate if residual encoding is used.
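For illustration only, the semantical information items listed above could be carried as a small header alongside the VCM feature bitstream; the dataclass, byte layout and field widths below are assumptions, not a normative syntax:

```python
import struct
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class VcmRoiHeader:
    task_id: int
    roi_enabled_flag: bool
    global_roi_feature_flag: bool
    residual_encode: bool
    synchronization_id: int
    roi_list: List[Tuple[int, int, int, int]] = field(default_factory=list)  # (x, y, w, h) per ROI

    def pack(self) -> bytes:
        # Pack the three flags into one byte; number_of_rois is derived from roi_list.
        flags = (int(self.roi_enabled_flag)
                 | (int(self.global_roi_feature_flag) << 1)
                 | (int(self.residual_encode) << 2))
        out = struct.pack("<HBIH", self.task_id, flags,
                          self.synchronization_id, len(self.roi_list))
        for x, y, w, h in self.roi_list:
            out += struct.pack("<HHHH", x, y, w, h)
        return out

header = VcmRoiHeader(task_id=3, roi_enabled_flag=True, global_roi_feature_flag=False,
                      residual_encode=True, synchronization_id=1200,
                      roi_list=[(10, 20, 64, 48)])
blob = header.pack()   # bytes to be sent in or along the VCM bitstream
```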
The method according to an embodiment is shown in Figure 12a. The method generally comprises accessing 1210 a list of regions of interest, which regions of interest are pre-determined from one or more video frames; determining 1215 features contained in the regions of interest; encoding 1220 the determined features into a bitstream; and sending 1225 the encoded features to a client with information on regions of interest. Each of the steps can be implemented by a respective module of a computer system. An apparatus according to an embodiment comprises means for accessing a list of regions of interest, which regions of interest are pre-determined from one or more video frames; means for determining features contained in the regions of interest; means for encoding the determined features into a bitstream; and means for sending the encoded features to a client with information on regions of interest. The means comprises at least one processor, and a memory including a computer program code, wherein the processor may further comprise processor circuitry. The memory and the computer program code are configured to, with the at least one processor, cause the apparatus to perform the method of Figure 12a according to various embodiments.
The method according to another embodiment is shown in Figure 12b. The method generally comprises receiving 1230 a bitstream of encoded features from a server with an information on regions of interest; and decoding 1235 the bitstream to determine relevant features for a certain region of interest. Each of the steps can be implemented by a respective module of a computer system.
An apparatus according to an embodiment comprises means for receiving a bitstream of encoded features from a server with an information on regions of interest; and means for decoding the bitstream to determine relevant features for a certain region of interest. The means comprises at least one processor, and a memory including a computer program code, wherein the processor may further comprise processor circuitry. The memory and the computer program code are configured to, with the at least one processor, cause the apparatus to perform the method of Figure 12b according to various embodiments.
An example of an apparatus is shown in Figure 13. The apparatus is a user equipment for the purposes of the present embodiments. The apparatus 90 comprises a main processing unit 91, a memory 92, a user interface 94, and a communication interface 93. The apparatus according to an embodiment, shown in Figure 13, may also comprise a camera module 95. Alternatively, the apparatus may be configured to receive image and/or video data from an external camera device over a communication network. The memory 92 stores data including computer program code in the apparatus 90. The computer program code is configured to implement the method according to various embodiments by means of various computer modules. The camera module 95 or the communication interface 93 receives data, in the form of images or a video stream, to be processed by the processor 91. The communication interface 93 forwards processed data, i.e. the image file, for example to a display of another device, such as a virtual reality headset. When the apparatus 90 is a video source comprising the camera module 95, user inputs may be received from the user interface.
The various embodiments can be implemented with the help of computer program code that resides in a memory and causes the relevant apparatuses to carry out the method. For example, a device may comprise circuitry and electronics for handling, receiving and transmitting data, computer program code in a memory, and a processor that, when running the computer program code, causes the device to carry out the features of an embodiment. Yet further, a network device like a server may comprise circuitry and electronics for handling, receiving and transmitting data, computer program code in a memory, and a processor that, when running the computer program code, causes the network device to carry out the features of various embodiments.
If desired, the different functions discussed herein may be performed in a different order and/or concurrently with each other. Furthermore, if desired, one or more of the above-described functions and embodiments may be optional or may be combined.
Although various aspects of the embodiments are set out in the independent claims, other aspects comprise other combinations of features from the described embodiments and/or the dependent claims with the features of the independent claims, and not solely the combinations explicitly set out in the claims.
It is also noted herein that while the above describes example embodiments, these descriptions should not be viewed in a limiting sense. Rather, there are several variations and modifications, which may be made without departing from the scope of the present disclosure as defined in the appended claims.

Claims

1. Server apparatus comprising at least
- means for accessing a list of regions of interest, which regions of interest are pre-determined from one or more video frames;
- means for determining features contained in the regions of interest;
- means for encoding the determined features into a bitstream; and
- means for sending the encoded features to a client with information on regions of interest.
2. The server according to claim 1, wherein the list of regions of interest is received from a client.
3. The server according to claim 1 , wherein the list of regions is generated by a region of interest estimator component.
4. The server according to claim 2 or 3, wherein the features indicated in the list of regions of interest are encoded with a quality higher than other parts of a frame.
5. The server according to claim 2, wherein the list of regions of interest being received from a client is an encoded list of regions of interest, wherein the server comprises means for decoding the encoded list of regions of interest.
6. The server according to claim 2 or 3, further comprising means for encoding the video frames and transmitting the encoded video frames with synchronization information to a client.
7. The server according to claim 2 or 3, further comprising means for encoding a task result from a task output encoder.
8. A client apparatus comprising at least
- means for receiving a bitstream of encoded features from a server with an information on regions of interest; and
- means for decoding the bitstream to determine relevant features for a certain region of interest.
9. The client apparatus according to claim 8, further comprising means for estimating regions of interest from one or more video frames, and generating a list of regions of interest to be delivered to a server.
10. The client apparatus according to claim 9, further comprising a task network for performing a pre-determined task on the decoded video frames to be used for estimating the regions of interest.
11. The client apparatus according to any of the claims 8 to 10, further comprising receiving a bitstream and synchronization information from a server, the bitstream comprising one or more video frames from which regions of interest are estimated.
12. A method, comprising:
- accessing a list of regions of interest, which regions of interest are predetermined from one or more video frames;
- determining features contained in the regions of interest;
- encoding the determined features into a bitstream; and
- sending the encoded features to a client with information on regions of interest.
13. A method, comprising:
- receiving a bitstream of encoded features from a server with an information on regions of interest; and
- decoding the bitstream to determine relevant features for a certain region of interest.
14. An apparatus comprising at least one processor, memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following:
- access a list of regions of interest, which regions of interest are predetermined from one or more video frames;
- determine features contained in the regions of interest;
- encode the determined features into a bitstream; and
- send the encoded features to a client with information on regions of interest.
15. An apparatus comprising at least one processor, memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following:
- receive a bitstream of encoded features from a server with an information on regions of interest; and
- decode the bitstream to determine relevant features for a certain region of interest.
PCT/FI2022/050739 2021-11-17 2022-11-09 A method, an apparatus and a computer program product for video encoding and video decoding WO2023089231A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FI20216178 2021-11-17
FI20216178 2021-11-17

Publications (1)

Publication Number Publication Date
WO2023089231A1 true WO2023089231A1 (en) 2023-05-25

Family

ID=86396309

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/FI2022/050739 WO2023089231A1 (en) 2021-11-17 2022-11-09 A method, an apparatus and a computer program product for video encoding and video decoding

Country Status (1)

Country Link
WO (1) WO2023089231A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080117295A1 (en) * 2004-12-27 2008-05-22 Touradj Ebrahimi Efficient Scrambling Of Regions Of Interest In An Image Or Video To Preserve Privacy
US20130114849A1 (en) * 2011-11-04 2013-05-09 Microsoft Corporation Server-assisted object recognition and tracking for mobile devices
EP3349453A1 (en) * 2017-01-13 2018-07-18 Nokia Technologies Oy Video encoding

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080117295A1 (en) * 2004-12-27 2008-05-22 Touradj Ebrahimi Efficient Scrambling Of Regions Of Interest In An Image Or Video To Preserve Privacy
US20130114849A1 (en) * 2011-11-04 2013-05-09 Microsoft Corporation Server-assisted object recognition and tracking for mobile devices
EP3349453A1 (en) * 2017-01-13 2018-07-18 Nokia Technologies Oy Video encoding

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Use cases and requirements for Video Coding for Machines", 135. MPEG MEETING; 20210712 - 20210716; ONLINE; (MOTION PICTURE EXPERT GROUP OR ISO/IEC JTC1/SC29/WG11), 19 August 2021 (2021-08-19), XP030297558 *
ZARE, A. ET AL.: "HEVC-compliant Tile-based Streaming of Panoramic Video for Virtual Reality Applications", PROCEEDINGS OF THE 24TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, 1 October 2016 (2016-10-01), XP058631148, Retrieved from the Internet <URL:https://dl.acm.org/doi/10.1145/2964284.2967292> [retrieved on 20230330], DOI: 10.1145/2964284.2967292 *

Similar Documents

Publication Publication Date Title
US11375204B2 (en) Feature-domain residual for video coding for machines
US9602819B2 (en) Display quality in a variable resolution video coder/decoder system
US11575938B2 (en) Cascaded prediction-transform approach for mixed machine-human targeted video coding
US8396127B1 (en) Segmentation for video coding using predictive benefit
WO2023280558A1 (en) Performance improvements of machine vision tasks via learned neural network based filter
WO2023135518A1 (en) High-level syntax of predictive residual encoding in neural network compression
WO2022238967A1 (en) Method, apparatus and computer program product for providing finetuned neural network
EP4142289A1 (en) A method, an apparatus and a computer program product for video encoding and video decoding
US20230325639A1 (en) Apparatus and method for joint training of multiple neural networks
WO2022224113A1 (en) Method, apparatus and computer program product for providing finetuned neural network filter
WO2022269415A1 (en) Method, apparatus and computer program product for providng an attention block for neural network-based image and video compression
WO2022084762A1 (en) Apparatus, method and computer program product for learned video coding for machine
WO2023089231A1 (en) A method, an apparatus and a computer program product for video encoding and video decoding
WO2023073281A1 (en) A method, an apparatus and a computer program product for video coding
WO2024068081A1 (en) A method, an apparatus and a computer program product for image and video processing
WO2024074231A1 (en) A method, an apparatus and a computer program product for image and video processing using neural network branches with different receptive fields
WO2023031503A1 (en) A method, an apparatus and a computer program product for video encoding and video decoding
WO2024068190A1 (en) A method, an apparatus and a computer program product for image and video processing
WO2023111384A1 (en) A method, an apparatus and a computer program product for video encoding and video decoding
WO2023151903A1 (en) A method, an apparatus and a computer program product for video coding
WO2023194650A1 (en) A method, an apparatus and a computer program product for video coding
WO2024002579A1 (en) A method, an apparatus and a computer program product for video coding
US20240146938A1 (en) Method, apparatus and computer program product for end-to-end learned predictive coding of media frames
US20240121387A1 (en) Apparatus and method for blending extra output pixels of a filter and decoder-side selection of filtering modes
US20230169372A1 (en) Appratus, method and computer program product for probability model overfitting

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22895040

Country of ref document: EP

Kind code of ref document: A1