EP4593391A1 - Image encoding/decoding method and apparatus based on high-level syntax for profile definition, and recording medium storing a bitstream
- Publication number
- EP4593391A1 (application EP23868611.7A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- profile
- information
- image
- feature
- syntax element
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
- G06T9/002—Image coding using neural networks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/156—Availability of hardware or computational resources, e.g. encoding based on power-saving criteria
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
Definitions
- the present disclosure relates to an image encoding/decoding method and apparatus based on a high-level syntax defining a profile, and a recording medium storing a bitstream, and more particularly, to an image encoding/decoding technology based on high-level syntax signaling for extension and change of a profile tier level.
- An object of the present disclosure is to provide an image encoding/decoding method and apparatus with improved encoding/decoding efficiency.
- Another object of the present disclosure is to provide an image encoding/decoding method and apparatus based on a high-level syntax defining a profile, and a recording medium storing a bitstream.
- Another object of the present disclosure is to provide an image encoding/decoding method and apparatus based on a profile that is distinguished and defined according to a specific criterion.
- Another object of the present disclosure is to provide an image encoding/decoding method and apparatus based on a profile that is distinguished based on a purpose, a use, and the like.
- Another object of the present disclosure is to provide an image encoding/decoding method and apparatus based on a profile including a machine analysis-based profile and a neural network-based profile.
- Another object of the present disclosure is to provide an image encoding/decoding method and apparatus based on a profile including a coding tool.
- Another object of the present disclosure is to provide a method for transmitting a bitstream that is generated by an image encoding method or apparatus.
- Another object of the present disclosure is to provide a recording medium for storing a bitstream that is generated by an image encoding method or apparatus according to the present disclosure.
- Another object of the present disclosure is to provide a recording medium for storing a bitstream that is received and decoded by an image decoding apparatus according to the present disclosure and is used to reconstruct an image.
- An image decoding method performed by an image decoding apparatus may comprise obtaining profile tier level (PTL) information of an image and determining a profile and a tier of the image based on the PTL information, wherein the PTL information includes a profile indicator indicating a type of the profile for the image, wherein the profile type includes a basic profile or an expanded profile, and wherein the expanded profile includes a neural network (NN)-based profile and a machine analysis-based profile.
- An image encoding method performed by an image encoding apparatus may comprise determining a profile and a tier of an image and encoding the determined profile and tier as profile tier level (PTL) information of the image, wherein the PTL information includes a profile indicator indicating a type of the profile for the image, wherein the profile type includes a basic profile or an expanded profile, and wherein the expanded profile includes a neural network (NN)-based profile and a machine analysis-based profile.
- a method for transmitting a bitstream for an image may comprise determining a profile and a tier of the image and encoding the determined profile and tier as profile tier level (PTL) information of the image, wherein the PTL information includes a profile indicator indicating a type of the profile for the image, wherein the profile type includes a basic profile or an expanded profile, and wherein the expanded profile includes a neural network (NN)-based profile and a machine analysis-based profile.
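As an illustration of the PTL signaling summarized above, the following Python sketch packs and parses a hypothetical profile indicator. The enum names, code-point values, and byte layout are illustrative assumptions, not the normative VCM syntax.

```python
# Hypothetical PTL profile signaling sketch (names and values are
# illustrative, not taken from the VCM specification).
from enum import IntEnum

class Profile(IntEnum):
    BASIC = 0              # basic profile
    NN_BASED = 1           # expanded profile: neural network (NN)-based
    MACHINE_ANALYSIS = 2   # expanded profile: machine analysis-based

def encode_ptl(profile: Profile, tier: int, level: int) -> bytes:
    """Pack a profile indicator, tier, and level into three bytes."""
    return bytes([profile, tier, level])

def decode_ptl(payload: bytes) -> tuple:
    """Recover (profile, tier, level) from the packed PTL bytes."""
    return Profile(payload[0]), payload[1], payload[2]

ptl = encode_ptl(Profile.NN_BASED, tier=0, level=51)
assert decode_ptl(ptl) == (Profile.NN_BASED, 0, 51)
```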
- a recording medium may store a bitstream generated by the image encoding method or the image encoding apparatus of the present disclosure.
- a bitstream transmission method may transmit a bitstream generated by the image encoding method or the image encoding apparatus of the present disclosure to an image decoding apparatus.
- an image encoding/decoding method and apparatus based on a profile including a machine analysis-based profile and a neural network-based profile.
- an image encoding/decoding method and apparatus that define necessary resource information, which is changeable according to an applied technology, by defining a profile, a tier and a level in consideration of the feature and purpose of a technology applied to coding, thereby becoming suitable for the service purpose and the performance of a device.
- an image encoding/decoding method and apparatus that are capable of adaptively encoding/decoding information on a specific coding tool according to an applied profile.
- a recording medium storing a bitstream that is received and decoded by an image decoding apparatus according to the present disclosure and is used to reconstruct an image.
- when a component is "connected", "coupled" or "linked" to another component, it may include not only a direct connection relationship but also an indirect connection relationship in which an intervening component is present.
- when a component "includes" or "has" other components, it means that other components may be further included, rather than excluded, unless otherwise stated.
- first, second, etc. may be used only for the purpose of distinguishing one component from other components, and do not limit the order or importance of the components unless otherwise stated. Accordingly, within the scope of the present disclosure, a first component in one embodiment may be referred to as a second component in another embodiment, and similarly, a second component in one embodiment may be referred to as a first component in another embodiment.
- components that are distinguished from each other are intended to clearly describe each feature, and do not mean that the components are necessarily separated. That is, a plurality of components may be integrated and implemented in one hardware or software unit, or one component may be distributed and implemented in a plurality of hardware or software units. Therefore, even if not stated otherwise, such embodiments in which the components are integrated or the component is distributed are also included in the scope of the present disclosure.
- the components described in various embodiments do not necessarily mean essential components, and some components may be optional components. Accordingly, an embodiment consisting of a subset of components described in an embodiment is also included in the scope of the present disclosure. In addition, embodiments including other components in addition to components described in the various embodiments are included in the scope of the present disclosure.
- the present disclosure relates to encoding and decoding of an image, and terms used in the present disclosure may have a general meaning commonly used in the technical field, to which the present disclosure belongs, unless newly defined in the present disclosure.
- the present disclosure may be applied to a method disclosed in a Versatile Video Coding (VVC) standard and/or a Video Coding for Machines (VCM) standard.
- the present disclosure may also be applied to a method disclosed in the essential video coding (EVC) standard, the AOMedia Video 1 (AV1) standard, the 2nd generation of audio video coding standard (AVS2), or a next-generation video/image coding standard (e.g., H.267 or H.268, etc.).
- video refers to a set of a series of images according to the passage of time.
- An “image” may be information generated by artificial intelligence (AI).
- Input information used in the process of performing a series of tasks by AI, information generated during the information processing process, and the output information may be used as images.
- a "picture” generally refers to a unit representing one image in a specific time period, and a slice/tile is a coding unit constituting a part of a picture in encoding.
- One picture may be composed of one or more slices/tiles.
- a slice/tile may include one or more coding tree units (CTUs).
- the CTU may be partitioned into one or more CUs.
- a tile is a rectangular region present in a specific tile row and a specific tile column in a picture, and may be composed of a plurality of CTUs.
- a tile column may be defined as a rectangular region of CTUs, may have the same height as a picture, and may have a width specified by a syntax element signaled from a bitstream part such as a picture parameter set.
- a tile row may be defined as a rectangular region of CTUs, may have the same width as a picture, and may have a height specified by a syntax element signaled from a bitstream part such as a picture parameter set.
- a tile scan is a specific sequential ordering of CTUs partitioning a picture.
- CTUs may be sequentially ordered according to a CTU raster scan within a tile, and tiles in a picture may be sequentially ordered according to a raster scan order of tiles of the picture.
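The tile scan just described (CTUs in raster order within each tile, tiles in raster order across the picture) can be sketched as follows; the uniform tile grid and the function name are illustrative assumptions.

```python
# Sketch of a tile scan: CTUs in raster order inside each tile, and tiles
# in raster order across the picture. Assumes a uniform tile grid.
def tile_scan(pic_w, pic_h, tile_w, tile_h):
    """Return picture-raster CTU addresses in tile-scan order (sizes in CTUs)."""
    order = []
    for ty in range(0, pic_h, tile_h):
        for tx in range(0, pic_w, tile_w):
            for y in range(ty, min(ty + tile_h, pic_h)):
                for x in range(tx, min(tx + tile_w, pic_w)):
                    order.append(y * pic_w + x)
    return order

# A 4x4-CTU picture split into 2x2-CTU tiles: the top-left tile comes first.
print(tile_scan(4, 4, 2, 2))  # [0, 1, 4, 5, 2, 3, 6, 7, 8, 9, 12, 13, 10, 11, 14, 15]
```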
- a slice may contain an integer number of complete tiles, or may contain a continuous integer number of complete CTU rows within one tile of one picture.
- a slice may be exclusively included in a single NAL unit.
- One picture may be composed of one or more tile groups.
- One tile group may include one or more tiles.
- a brick may indicate a rectangular region of CTU rows within a tile in a picture.
- One tile may include one or more bricks.
- One tile may be split into a plurality of bricks, and each brick may include one or more CTU rows belonging to a tile.
- a tile which is not split into a plurality of bricks may also be treated as a brick.
- a "pixel” or a “pel” may mean a smallest unit constituting one picture (or image).
- “sample” may be used as a term corresponding to a pixel.
- a sample may generally represent a pixel or a value of a pixel, and may represent only a pixel/pixel value of a luma component or only a pixel/pixel value of a chroma component.
- a pixel/pixel value may represent a pixel/pixel value of a component generated through independent information of each component or through combination, synthesis, or analysis of the components. For example, only the pixel/pixel value of an R component may be represented, only the pixel/pixel value of a G component may be represented, or only the pixel/pixel value of a B component may be represented. Alternatively, only the pixel/pixel value of a luma component synthesized using the R, G, and B components may be represented, or only the pixel/pixel values of images and information extracted through analysis of the R, G, and B components may be represented.
- a "unit” may represent a basic unit of image processing.
- the unit may include at least one of a specific region of the picture and information related to the region.
- One unit may include one luma block and two chroma (e.g., Cb and Cr) blocks.
- the unit may be used interchangeably with terms such as "sample array", "block” or "area” in some cases.
- an M×N block may include samples (or sample arrays) or a set (or array) of transform coefficients of M columns and N rows.
- the unit may represent a basic unit containing information for performing a specific task.
- current block may mean one of “current coding block”, “current coding unit”, “coding target block”, “decoding target block” or “processing target block”.
- current block may mean “current prediction block” or “prediction target block”.
- when transform (inverse transform) or quantization (dequantization) is performed, "current block" may mean "current transform block" or "transform target block".
- when filtering is performed, "current block" may mean "filtering target block".
- a “current block” may mean “a luma block of a current block” unless explicitly stated as a chroma block.
- the "chroma block of the current block” may be expressed by including an explicit description of a chroma block, such as “chroma block” or "current chroma block”.
- the terms "/" and "," should be interpreted to indicate "and/or."
- the expression “A/B” and “A, B” may mean “A and/or B.”
- "A/B/C" and "A, B, C" may mean "at least one of A, B, and/or C."
- the term “or” should be interpreted to indicate “and/or.”
- the expression “A or B” may comprise 1) only “A”, 2) only “B”, and/or 3) both "A and B”.
- the term “or” should be interpreted to indicate "additionally or alternatively.”
- the present disclosure relates to video/image coding for machines (VCM).
- VCM refers to a compression technology that encodes/decodes part of a source image/video or information obtained from the source image/video for the purpose of machine vision.
- the encoding/decoding target may be referred to as a feature.
- the feature may refer to information extracted from the source image/video based on task purpose, requirements, surrounding environment, etc.
- the feature may have a different information form from the source image/video, and accordingly, the compression method and expression format of the feature may also be different from those of the video source.
- VCM may be applied to a variety of application fields. For example, in a surveillance system that recognizes and tracks objects or people, VCM may be used to store or transmit object recognition information. In addition, in intelligent transportation or smart traffic systems, VCM may be used to transmit vehicle location information collected from GPS, sensing information collected from LIDAR, radar, etc., and various vehicle control information to other vehicles or infrastructure. Additionally, in the smart city field, VCM may be used to perform individual tasks of interconnected sensor nodes or devices.
- present disclosure provides various embodiments of feature/feature map coding. Unless otherwise specified, embodiments of the present disclosure may be implemented individually, or may be implemented in combination of two or more.
- FIG. 1 is a diagram schematically showing a VCM system to which embodiments of the present disclosure are applicable.
- the VCM system may include an encoding apparatus 10 and a decoding apparatus 20.
- the encoding apparatus 10 may compress/encode a feature/feature map extracted from a source image/video to generate a bitstream, and transmit the generated bitstream to the decoding apparatus 20 through a storage medium or network.
- the encoding apparatus 10 may also be referred to as a feature encoding apparatus.
- the feature/feature map may be generated at each hidden layer of a neural network. The size and number of channels of the generated feature map may vary depending on the type of neural network or the location of the hidden layer.
- a feature map may be referred to as a feature set, and a feature or feature map may be referred to as 'feature information'.
- the encoding apparatus 10 may include a feature acquisition unit 11, an encoding unit 12, and a transmission unit 13.
- the feature acquisition unit 11 may acquire a feature/feature map for the source image/video.
- the feature acquisition unit 11 may acquire a feature/feature map from an external device, for example, a feature extraction network. In this case, the feature acquisition unit 11 performs a feature reception interface function.
- the feature acquisition unit 11 may acquire a feature/feature map by executing a neural network (e.g., CNN, DNN, etc.) using the source image/video as input. In this case, the feature acquisition unit 11 performs a feature extraction network function.
- the encoding apparatus 10 may further include a source image generator (not shown) for acquiring the source image/video.
- the source image generator may be implemented with an image sensor, a camera module, etc., and may acquire the source image/video through an image/video capture, synthesis, or generation process.
- the generated source image/video may be sent to the feature extraction network and used as input data for extracting the feature/feature map.
- the encoding unit 12 may encode the feature/feature map acquired by the feature acquisition unit 11.
- the encoding unit 12 may perform a series of procedures such as prediction, transform, and quantization to increase encoding efficiency.
- the encoded data (encoded feature/feature map information) may be output in the form of a bitstream.
- the bitstream containing the encoded feature/feature map information may be referred to as a VCM bitstream.
- the transmission unit 13 may obtain feature/feature map information or data output in the form of a bitstream and forward it to the decoding apparatus 20 or another external object through a digital storage medium or network in the form of a file or streaming.
- digital storage media may include various storage media such as USB, SD, CD, DVD, Blu-ray, HDD, and SSD.
- the transmission unit 13 may include elements for generating a media file with a predetermined file format or elements for transmitting data through a broadcasting/communication network.
- the transmission unit 13 may be provided as a separate transmission device from the encoding unit 12, and in this case, the transmission device may include at least one processor for obtaining feature/feature map information or data output in the form of a bitstream and a transmission unit for forwarding the feature/feature map information in the form of a file or streaming.
- the decoding apparatus 20 may acquire feature/feature map information from the encoding apparatus 10 and reconstruct the feature/feature map based on the acquired information.
- the decoding apparatus 20 may include a reception unit 21 and a decoding unit 22.
- the reception unit 21 may receive a bitstream from the encoding apparatus 10, acquire feature/feature map information from the received bitstream, and send it to the decoding unit 22.
- the decoding unit 22 may decode the feature/feature map based on the acquired feature/feature map information.
- the decoding unit 22 may perform a series of procedures such as dequantization, inverse transform, and prediction corresponding to the operation of the encoding unit 12 to increase decoding efficiency.
- the decoding apparatus 20 may further include a task analysis/rendering unit 23.
- the task analysis/rendering unit 23 may perform task analysis based on the decoded feature/feature map. Additionally, the task analysis/rendering unit 23 may render the decoded feature/feature map into a form suitable for task performance. Various machine (oriented) tasks may be performed based on task analysis results and the rendered features/feature map.
- the VCM system may encode/decode the feature extracted from the source image/video according to user and/or machine requests, task purpose, and surrounding environment, and perform various machine (oriented) tasks based on the decoded feature.
- the VCM system may be implemented by expanding/redesigning the video/image coding system and may perform various encoding/decoding methods defined in the VCM standard.
- FIG. 2 is a diagram schematically showing a VCM pipeline structure to which embodiments of the present disclosure are applicable.
- the VCM pipeline 200 may include a first pipeline 210 for encoding/decoding an image/video and a second pipeline 220 for encoding/decoding a feature/feature map.
- the first pipeline 210 may be referred to as a video codec pipeline
- the second pipeline 220 may be referred to as a feature codec pipeline.
- the first pipeline 210 may include a first stage 211 for encoding an input image/video and a second stage 212 for decoding the encoded image/video to generate a reconstructed image/video.
- the reconstructed image/video may be used for human viewing, that is, human vision.
- the second pipeline 220 may include a third stage 221 for extracting a feature/feature map from the input image/video, a fourth stage 222 for encoding the extracted feature/feature map, and a fifth stage 223 for decoding the encoded feature/feature map to generate a reconstructed feature/feature map.
- the reconstructed feature/feature map may be used for a machine (vision) task.
- the machine (vision) task may refer to a task in which images/videos are consumed by a machine.
- the machine (vision) task may be applied to service scenarios such as, for example, Surveillance, Intelligent Transportation, Smart City, Intelligent Industry, Intelligent Content, etc.
- the reconstructed feature/feature map may be used for human vision.
- the feature/feature map encoded in the fourth stage 222 may be transferred to the first stage 211 and used to encode the image/video.
- an additional bitstream may be generated based on the encoded feature/feature map, and the generated additional bitstream may be transferred to the second stage 212 and used to decode the image/video.
- FIG. 2 shows a case where the VCM pipeline 200 includes a first pipeline 210 and a second pipeline 220, but this is merely an example and embodiments of the present disclosure are not limited thereto.
- the VCM pipeline 200 may include only the second pipeline 220, or the second pipeline 220 may be expanded into multiple feature codec pipelines.
- the first stage 211 may be performed by an image/video encoder
- the second stage 212 may be performed by an image/video decoder
- the third stage 221 may be performed by a VCM encoder (or feature/feature map encoder)
- the fourth stage 222 may be performed by a VCM decoder (or feature/feature map decoder).
- FIG. 3 is a diagram schematically showing an image/video encoder to which embodiments of the present disclosure are applicable.
- the image/video encoder 300 may include an image partitioner 310, a predictor 320, a residual processor 330, an entropy encoder 340, an adder 350, a filter 360, and a memory 370.
- the predictor 320 may include an inter predictor 321 and an intra predictor 322.
- the residual processor 330 may include a transformer 332, a quantizer 333, a dequantizer 334, and an inverse transformer 335.
- the residual processor 330 may further include a subtractor 331.
- the adder 350 may be referred to as a reconstructor or a reconstructed block generator.
- the image partitioner 310, the predictor 320, the residual processor 330, the entropy encoder 340, the adder 350, and the filter 360 may be configured by one or more hardware components (e.g., encoder chipset or processor) depending on the embodiment.
- the memory 370 may include a decoded picture buffer (DPB) and may be configured by a digital storage medium.
- DPB decoded picture buffer
- the hardware components described above may further include a memory 370 as an internal/external component.
- the image partitioner 310 may partition an input image (or picture, frame) input to the image/video encoder 300 into one or more processing units.
- the processing unit may be referred to as a coding unit (CU).
- the coding unit may be recursively partitioned according to a quad-tree binary-tree ternary-tree (QTBTTT) structure from a coding tree unit (CTU) or largest coding unit (LCU).
- QTBTTT quad-tree binary-tree ternary-tree
- CTU coding tree unit
- LCU largest coding unit
- one coding unit may be partitioned into a plurality of coding units of deeper depth based on a quad tree structure, binary tree structure, and/or ternary structure.
- the quad tree structure may be applied first and the binary tree structure and/or ternary structure may be applied later.
- the binary tree structure may be applied first.
- the image/video coding procedure according to the present disclosure may be performed based on a final coding unit that is no longer partitioned.
- the largest coding unit may be used as the final coding unit based on coding efficiency according to image characteristics, or, if necessary, the coding unit may be recursively partitioned into coding units of deeper depth to use a coding unit with an optimal size as the final coding unit.
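A minimal sketch of this recursive partitioning, assuming a quad split only and a hypothetical variance test in place of the encoder's rate-distortion decision; real QTBTTT partitioning also allows binary and ternary splits.

```python
# Minimal recursive quad-tree partitioning sketch. The variance threshold is
# an illustrative stand-in for the encoder's rate-distortion-based decision.
import numpy as np

def partition(block, x, y, min_size=8, thresh=100.0):
    """Yield (x, y, size) leaf coding units for a square sample block."""
    size = block.shape[0]
    if size <= min_size or np.var(block) <= thresh:
        yield (x, y, size)          # final coding unit, not partitioned further
        return
    half = size // 2
    for dy in (0, half):
        for dx in (0, half):        # split into four deeper-depth coding units
            yield from partition(block[dy:dy + half, dx:dx + half],
                                 x + dx, y + dy, min_size, thresh)

ctu = np.random.default_rng(0).integers(0, 256, (64, 64))
leaves = list(partition(ctu, 0, 0))  # noisy content splits down to 8x8 leaves
```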
- the coding procedure may include procedures such as prediction, transform, and reconstruction, which will be described later.
- the processing unit may further include a prediction unit (PU) or a transform unit (TU).
- the prediction unit and the transform unit may each be divided or partitioned from the final coding unit described above.
- the prediction unit may be a unit of sample prediction
- the transform unit may be a unit for deriving a transform coefficient and/or a unit for deriving a residual signal from the transform coefficient.
- an MxN block may represent a set of samples or transform coefficients consisting of M columns and N rows.
- a sample may generally represent a pixel or a pixel value, and may represent only a pixel/pixel value of a luma component, or only a pixel/pixel value of a chroma component. The sample may be used as a term corresponding to pixel or pel.
- the image/video encoder 300 may generate a residual signal (residual block, residual sample array) by subtracting a prediction signal (predicted block, prediction sample array) output from the inter predictor 321 or the intra predictor 322 from the input image signal (original block, original sample array) and transmit the generated residual signal to the transformer 332.
- the unit that subtracts the prediction signal (prediction block, prediction sample array) from the input image signal (original block, original sample array) within the image/video encoder 300 may be referred to as the subtractor 331.
- the predictor may perform prediction on a processing target block (hereinafter referred to as a current block) and generate a predicted block including prediction samples for the current block.
- the predictor may determine whether intra prediction or inter prediction is applied in current block or CU units.
- the predictor may generate various information related to prediction, such as prediction mode information, and transfer it to the entropy encoder 340.
- Information about prediction may be encoded in the entropy encoder 340 and output in the form of a bitstream.
- the intra predictor 322 may predict the current block by referring to the samples in the current picture. At this time, the referenced samples may be located in the neighbor of the current block or may be located away from the current block, depending on the prediction mode.
- the prediction modes may include a plurality of non-directional modes and a plurality of directional modes.
- the non-directional mode may include, for example, a DC mode and a planar mode.
- the directional mode may include, for example, 33 directional prediction modes or 65 directional prediction modes according to the degree of detail of the prediction direction. However, this is merely an example, and more or fewer directional prediction modes may be used depending on settings.
- the intra predictor 322 may determine the prediction mode applied to the current block by using a prediction mode applied to a neighboring block.
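As a sketch of one non-directional mode, DC prediction fills the predicted block with the rounded mean of the reconstructed top and left reference samples; the exact rounding used here is an illustrative assumption.

```python
# Sketch of non-directional intra prediction in DC mode: the predicted block
# is filled with the mean of the top and left reference samples.
import numpy as np

def intra_dc_predict(top, left, size):
    dc = (int(top.sum()) + int(left.sum()) + size) // (2 * size)  # rounded mean
    return np.full((size, size), dc, dtype=top.dtype)

top = np.array([100, 102, 101, 99], dtype=np.int32)    # reconstructed row above
left = np.array([98, 100, 103, 101], dtype=np.int32)   # reconstructed column left
pred = intra_dc_predict(top, left, 4)                  # 4x4 block filled with 101
```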
- the inter predictor 321 may derive a predicted block for the current block based on a reference block (reference sample array) specified by a motion vector on a reference picture.
- the motion information may be predicted in block, subblock, or sample units based on correlation of motion information between the neighboring block and the current block.
- the motion information may include a motion vector and a reference picture index.
- the motion information may further include inter prediction direction (L0 prediction, L1 prediction, Bi prediction, etc.) information.
- the neighboring block may include a spatial neighboring block present in the current picture and a temporal neighboring block present in the reference picture.
- the reference picture including the reference block and the reference picture including the temporal neighboring block may be the same or different.
- the temporal neighboring block may be called a collocated reference block, a co-located CU (colCU), and the like, and the reference picture including the temporal neighboring block may be called a collocated picture (colPic).
- the inter predictor 321 may construct a motion information candidate list based on neighboring blocks and generate information indicating which candidate is used to derive a motion vector and/or reference picture index of the current block. Inter prediction may be performed based on various prediction modes, and, for example, in the case of a skip mode and a merge mode, the inter predictor 321 may use motion information of the neighboring block as motion information of the current block.
- the residual signal may not be transmitted.
- the motion vector of the neighboring block may be used as a motion vector predictor, and a motion vector difference may be signaled to indicate the motion vector of the current block.
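A minimal sketch of this motion vector difference (MVD) signaling; representing motion vectors as plain tuples is an illustrative simplification.

```python
# Motion vector prediction sketch: the encoder signals only the difference
# (MVD); the decoder adds it back to the predictor (MVP) from a neighbor.
def encode_mvd(mv, mvp):
    return (mv[0] - mvp[0], mv[1] - mvp[1])    # MVD written to the bitstream

def decode_mv(mvd, mvp):
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])  # reconstructed motion vector

mvp = (5, -2)                      # predictor taken from a neighboring block
mvd = encode_mvd((7, -3), mvp)     # (2, -1) is signaled
assert decode_mv(mvd, mvp) == (7, -3)
```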
- the predictor 320 may generate a prediction signal based on various prediction methods. For example, the predictor may not only apply intra prediction or inter prediction but also simultaneously apply both intra prediction and inter prediction, for prediction of one block. This may be called combined inter and intra prediction (CIIP).
- the predictor may be based on an intra block copy (IBC) prediction mode or a palette mode for prediction of the block.
- IBC intra block copy
- the IBC prediction mode or the palette mode may be used for content image/video coding of a game or the like, for example, screen content coding (SCC).
- SCC screen content coding
- IBC basically performs prediction within the current picture, but may be performed similarly to inter prediction in that a reference block is derived within the current picture. That is, IBC may use at least one of the inter prediction techniques described in the present disclosure.
- the palette mode may be regarded as an example of intra coding or intra prediction. When the palette mode is applied, the sample values within the picture may be signaled based on information about a palette table and a palette index.
- the prediction signal generated by the predictor 320 may be used to generate a reconstructed signal or to generate a residual signal.
- the transformer 332 may generate transform coefficients by applying a transform technique to the residual signal.
- the transform technique may include at least one of a discrete cosine transform (DCT), a discrete sine transform (DST), a karhunen-loève transform (KLT), a graph-based transform (GBT), or a conditionally non-linear transform (CNT).
- the GBT refers to transform obtained from a graph when relationship information between pixels is represented by the graph.
- the CNT refers to transform acquired based on a prediction signal generated using all previously reconstructed pixels.
- the transform process may be applied to square pixel blocks having the same size or may be applied to non-square blocks having a variable size.
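A sketch of the transform stage using an orthonormal 2D DCT-II, one of the techniques listed above; the matrix construction is a textbook formulation rather than the normative integer transform.

```python
# Separable 2D DCT-II sketch applied to a residual block.
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)
    return m

def transform2d(residual):
    d = dct_matrix(residual.shape[0])
    return d @ residual @ d.T       # transform rows and columns separably

residual = np.random.default_rng(0).integers(-16, 16, (8, 8)).astype(float)
coeffs = transform2d(residual)      # energy compacts into low frequencies
# Invertible: dct_matrix(8).T @ coeffs @ dct_matrix(8) recovers the residual.
```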
- the quantizer 333 may quantize the transform coefficients and transmit them to the entropy encoder 340.
- the entropy encoder 340 may encode the quantized signal (information on the quantized transform coefficients) and output a bitstream.
- the information on the quantized transform coefficients may be referred to as residual information.
- the quantizer 333 may reorder quantized transform coefficients in a block form into a one-dimensional vector form based on a coefficient scan order and generate information on the quantized transform coefficients based on the quantized transform coefficients in the one-dimensional vector form.
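A sketch of such a coefficient scan, assuming an up-right diagonal order (one of several scan orders used in practice).

```python
# Reorder a 2D block of quantized coefficients into a 1D vector along
# anti-diagonals, a stand-in for the codec's coefficient scan order.
import numpy as np

def diagonal_scan(block):
    n = block.shape[0]
    order = sorted(((x, y) for y in range(n) for x in range(n)),
                   key=lambda p: (p[0] + p[1], p[1]))  # group by anti-diagonal
    return np.array([block[y, x] for x, y in order])

q = np.arange(16).reshape(4, 4)   # stand-in quantized coefficients
vec = diagonal_scan(q)            # 1D vector handed to entropy coding
```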
- the entropy encoder 340 may perform various encoding methods such as, for example, exponential Golomb, context-adaptive variable length coding (CAVLC), context-adaptive binary arithmetic coding (CABAC), and the like.
- the entropy encoder 340 may encode information necessary for video/image reconstruction other than quantized transform coefficients (e.g., values of syntax elements, etc.) together or separately.
- Encoded information (e.g., encoded video/image information) may be transmitted or stored in units of network abstraction layers (NALs) in the form of a bitstream.
- the video/image information may further include information on various parameter sets such as an adaptation parameter set (APS), a picture parameter set (PPS), a sequence parameter set (SPS), or a video parameter set (VPS).
- the video/image information may further include general constraint information.
- the video/image information may further include a method of generating and using encoded information, a purpose, and the like.
- information and/or syntax elements transferred/signaled from the image/video encoder to the image/video decoder may be included in image/video information.
- the image/video information may be encoded through the above-described encoding procedure and included in the bitstream.
- the bitstream may be transmitted over a network or may be stored in a digital storage medium.
- the network may include a broadcasting network and/or a communication network
- the digital storage medium may include various storage media such as USB, SD, CD, DVD, Blu-ray, HDD, SSD, and the like.
- a transmitter (not shown) transmitting a signal output from the entropy encoder 340 and/or a storage unit (not shown) storing the signal may be configured as internal/external element of the image/video encoder 300, or the transmitter may be included in the entropy encoder 340.
- the quantized transform coefficients output from the quantizer 333 may be used to generate a prediction signal. For example, the residual signal (residual block or residual samples) may be reconstructed by applying dequantization and inverse transform to the quantized transform coefficients through the dequantizer 334 and the inverse transformer 335.
- the adder 350 adds the reconstructed residual signal to the prediction signal output from the inter predictor 321 or the intra predictor 322 to generate a reconstructed signal (reconstructed picture, reconstructed block, reconstructed sample array).
- the predicted block may be used as the reconstructed block.
- the adder 350 may be called a reconstructor or a reconstructed block generator.
- the generated reconstructed signal may be used for intra prediction of a next processing target block in the current picture and may be used for inter prediction of a next picture through filtering as described below.
- luma mapping with chroma scaling (LMCS) may be applied in the picture encoding and/or reconstruction process.
- the filter 360 may improve subjective/objective image quality by applying filtering to the reconstructed signal.
- the filter 360 may generate a modified reconstructed picture by applying various filtering methods to the reconstructed picture, and store the modified reconstructed picture in the memory 370, specifically, a DPB of the memory 370.
- the various filtering methods may include, for example, deblocking filtering, a sample adaptive offset, an adaptive loop filter, a bilateral filter, and the like.
- the filter 360 may generate various information related to filtering and transmit the generated information to the entropy encoder 340.
- the information related to filtering may be encoded by the entropy encoder 340 and output in the form of a bitstream.
- the modified reconstructed picture transmitted to the memory 370 may be used as the reference picture in the inter predictor 321. Through this, prediction mismatch between the encoder and the decoder may be avoided and encoding efficiency may be improved.
- the DPB of the memory 370 may store the modified reconstructed picture for use as a reference picture in the inter predictor 321.
- the memory 370 may store the motion information of the block from which the motion information in the current picture is derived (or encoded) and/or the motion information of the blocks in the already reconstructed picture.
- the stored motion information may be transferred to the inter predictor 321 for use as the motion information of the spatial neighboring block or the motion information of the temporal neighboring block.
- the memory 370 may store reconstructed samples of reconstructed blocks in the current picture and may transfer the stored reconstructed samples to the intra predictor 322.
- the VCM encoder (or feature/feature map encoder) basically performs a series of procedures such as prediction, transform, and quantization to encode the feature/feature map and thus may basically have the same/similar structure as the image/video encoder 300 described with reference to FIG. 3 .
- the VCM encoder is different from the image/video encoder 300 in that the feature/feature map is an encoding target, and thus may be different from the image/video encoder 300 in the name of each unit (or component) (e.g., image partitioner 310, etc.) and its specific operation content. The specific operation of the VCM encoder will be described in detail later.
- FIG. 4 is a diagram schematically showing an image/video decoder to which embodiments of the present disclosure are applicable.
- the image/video decoder 400 may include an entropy decoder 410, a residual processor 420, a predictor 430, an adder 440, a filter 450 and a memory 460.
- the predictor 430 may include an inter predictor 431 and an intra predictor 432.
- the residual processor 420 may include a dequantizer 421 and an inverse transformer 422.
- the entropy decoder 410, the residual processor 420, the predictor 430, the adder 440, and the filter 450 may be configured by one hardware component (e.g., a decoder chipset or processor) depending on the embodiment.
- the memory 460 may include a decoded picture buffer (DPB) and may be configured by a digital storage medium.
- the hardware component may further include the memory 460 as an internal/external component.
- the image/video decoder 400 may reconstruct an image/video in correspondence with the process in which the image/video information is processed in the image/video encoder 300 of FIG. 3 .
- the image/video decoder 400 may derive units/blocks based on block partition-related information acquired from the bitstream.
- the image/video decoder 400 may perform decoding using a processing unit applied in the image/video encoder.
- the processing unit of decoding may, for example, be a coding unit, and the coding unit may be partitioned from a coding tree unit or a largest coding unit according to a quad tree structure, a binary tree structure and/or a ternary tree structure.
- One or more transform units may be derived from the coding unit.
- the reconstructed image signal decoded and output through the image/video decoder 400 may be played through a playback device.
- the image/video decoder 400 may receive a signal output from the encoder of FIG. 3 in the form of a bitstream, and decode the received signal through the entropy decoder 410.
- the entropy decoder 410 may parse the bitstream to derive information (e.g., image/video information) necessary for image reconstruction (or picture reconstruction).
- the image/video information may further include information about various parameter sets, such as an adaptation parameter set (APS), picture parameter set (PPS), sequence parameter set (SPS), or video parameter set (VPS).
- image/video information may further include general constraint information.
- the image/video information may include a method of generating and using decoded information, a purpose, and the like.
- the image/video decoder 400 may decode the picture further based on information about the parameter set and/or general constraint information.
- the signaled/received information and/or syntax elements may be decoded and acquired from the bitstream through a decoding procedure.
- the entropy decoder 410 may decode information in the bitstream based on a coding method such as exponential Golomb coding, CAVLC, or CABAC, and output the values of syntax elements necessary for image reconstruction and quantized values of transform coefficients related to residuals.
- a bin corresponding to each syntax element may be received in the bitstream, a context model may be determined using decoding target syntax element information and decoding information of neighboring and decoding target blocks or information on the symbol/bin decoded in the previous step, the occurrence probability of the bin may be predicted according to the determined context model, and arithmetic decoding of the bin may be performed to generate a symbol corresponding to the value of each syntax element.
- the CABAC entropy decoding method may update the context model using the information on the decoded symbol/bin for the context model of the next symbol/bin after determining the context model.
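A toy sketch of the context-adaptive probability estimation described above; real CABAC uses finite-precision state tables and arithmetic coding, and the moving-average update here is only an illustrative stand-in.

```python
# Toy context model: keep an estimate of P(bin == 1) per context and update
# it after each bin, so frequent symbols become cheaper to code over time.
class Context:
    def __init__(self, p_one=0.5, rate=1 / 16):
        self.p_one = p_one   # current estimate of P(bin == 1)
        self.rate = rate     # adaptation speed

    def update(self, bin_val):
        # Exponential moving average toward the observed bin value.
        self.p_one += self.rate * (bin_val - self.p_one)

ctx = Context()
for b in (1, 1, 0, 1):   # bins observed under one syntax element's context
    ctx.update(b)
# ctx.p_one has drifted toward the empirical frequency of '1' bins.
```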
- Information about prediction among the information decoded in the entropy decoder 410 is provided to the predictor (inter predictor 432 and intra predictor 431), and a residual value obtained by performing entropy decoding in the entropy decoder 410, that is, quantized transform coefficients and related parameter information may be input to the residual processor 420.
- the residual processor 420 may derive a residual signal (residual block, residual samples, residual sample array). Additionally, information about filtering among the information decoded by the entropy decoder 410 may be provided to the filter 450.
- a receiver (not shown) that receives a signal output from the image/video encoder may be further configured as an internal/external element of the image/video decoder 400, or the receiver may be a component of the entropy decoder 410.
- the image/video decoder according to the present disclosure may be called an image/video decoding apparatus, and the image/video decoder may be divided into an information decoder (image/video information decoder) and a sample decoder (image/video sample decoder).
- the information decoder may include the entropy decoder 410, and the sample decoder may include at least one of the dequantizer 421, the inverse transformer 422, the adder 440, the filter 450, the memory 460, the inter predictor 432, or the intra predictor 431.
- the dequantizer 421 may dequantize the quantized transform coefficients and output transform coefficients.
- the dequantizer 421 may rearrange the quantized transform coefficients into a two-dimensional block form. In this case, rearranging may be performed based on the coefficient scan order performed in the image/video encoder.
- the dequantizer 421 may perform dequantization on quantized transform coefficients using quantization parameters (e.g., quantization step size information) and acquire transform coefficients.
- the inverse transformer 422 inversely transforms the transform coefficients to acquire a residual signal (residual block, residual sample array).
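A sketch of this decoder-side residual path, assuming a simplified step size (roughly doubling every 6 QP, as in common codec designs) and a small orthonormal transform; neither is the normative derivation.

```python
# Decoder residual path sketch: scalar dequantization followed by the
# inverse of a separable orthonormal transform.
import numpy as np

def dequantize(levels, qp):
    step = 2.0 ** (qp / 6.0)      # illustrative step size, not normative
    return levels * step

def inverse_transform(coeffs, basis):
    return basis.T @ coeffs @ basis   # inverse of coeffs = basis @ x @ basis.T

levels = np.array([[4.0, -1.0], [0.0, 2.0]])       # parsed coefficient levels
basis = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
residual = inverse_transform(dequantize(levels, qp=22), basis)
```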
- the predictor 430 may perform prediction on the current block and generate a predicted block including prediction samples for the current block.
- the predictor may determine whether intra prediction or inter prediction is applied to the current block based on information about prediction output from the entropy decoder 410, and may determine a specific intra/inter prediction mode.
- the predictor 430 may generate a prediction signal based on various prediction methods. For example, the predictor may not only apply intra prediction or inter prediction for prediction of one block, but also may apply intra prediction and inter prediction simultaneously. This may be called combined inter and intra prediction (CIIP). Additionally, the predictor may be based on an intra block copy (IBC) prediction mode or a palette mode for prediction of a block.
- IBC prediction mode or palette mode may be used, for example, for image/video coding of content such as games, such as screen content coding (SCC).
- SCC screen content coding
- IBC basically performs prediction within the current picture, but may be performed similarly to inter prediction in that a reference block is derived within the current picture. That is, IBC may use at least one of the inter prediction techniques described in this document.
- the palette mode may be viewed as an example of intra coding or intra prediction. When the palette mode is applied, information about the palette table and palette index may be included and signaled in the image/video information.
- the intra predictor 431 may predict the current block by referencing samples in the current picture.
- the referenced samples may be located in the neighbor of the current block, or may be located away from the current block, depending on the prediction mode.
- prediction modes may include a plurality of non-directional modes and a plurality of directional modes.
- the intra predictor 431 may determine the prediction mode applied to the current block using the prediction mode applied to the neighboring block.
- the inter predictor 432 may derive a predicted block for the current block based on a reference block (reference sample array) specified by a motion vector in the reference picture.
- motion information may be predicted in block, subblock, or sample units based on the correlation of motion information between the neighboring block and the current block.
- the motion information may include a motion vector and a reference picture index.
- the motion information may further include inter prediction direction (L0 prediction, L1 prediction, Bi prediction, etc.) information.
- neighboring blocks may include a spatial neighboring block present in the current picture and a temporal neighboring block present in the reference picture.
- the inter predictor 432 may construct a motion information candidate list based on neighboring blocks and derive a motion vector and/or reference picture index of the current block based on received candidate selection information. Inter prediction may be performed based on various prediction modes, and information about prediction may include information indicating the mode of inter prediction for the current block.
- the adder 440 may generate a reconstructed signal (reconstructed picture, reconstructed block, reconstructed sample array) by adding the acquired residual signal to a prediction signal (predicted block, prediction sample array) output from the predictor (including the inter predictor 432 and/or the intra predictor 431). If there is no residual for a processing target block, such as when skip mode is applied, the predicted block may be used as a reconstruction block.
- the adder 440 may be called a reconstructor or a reconstruction block generator.
- the generated reconstructed signal may be used for intra prediction of a next processing target block in the current picture, may be output after filtering as described later, or may be used for inter prediction of a next picture.
- luma mapping with chroma scaling (LMCS) may be applied in the picture decoding process.
- the filter 450 can improve subjective/objective image quality by applying filtering to the reconstructed signal.
- the filter 450 may generate a modified reconstructed picture by applying various filtering methods to the reconstructed picture, and store the modified reconstructed picture in the memory 460, specifically the DPB of the memory 460.
- Various filtering methods may include, for example, deblocking filtering, sample adaptive offset, adaptive loop filter, bilateral filter, etc.
- the (modified) reconstructed picture stored in the DPB of the memory 460 may be used as a reference picture in the inter predictor 432.
- the memory 460 may store motion information of a block from which motion information in the current picture is derived (or decoded) and/or motion information of blocks in an already reconstructed picture.
- the stored motion information may be transferred to the inter predictor 432 for use as motion information of spatial neighboring blocks or motion information of temporal neighboring blocks.
- the memory 460 may store reconstructed samples of reconstructed blocks in the current picture and transfer them to the intra predictor 431.
- the VCM decoder (or feature/feature map decoder) performs a series of procedures such as prediction, inverse transform, and dequantization to decode the feature/feature map, and may basically have the same/similar structure as the image/video decoder 400 described above with reference to FIG. 4 .
- the VCM decoder is different from the image/video decoder 400 in that the feature/feature map is a decoding target, and may be different from the image/video decoder 400 in the name (e.g., DPB, etc.) of each unit (or component) and its specific operation.
- the operation of the VCM decoder may correspond to the operation of the VCM encoder, and the specific operation will be described in detail later.
- FIG. 5 is a flowchart schematically illustrating a feature/feature map encoding procedure to which embodiments of the present disclosure are applicable.
- the feature/feature map encoding procedure may include a prediction procedure (S510), a residual processing procedure (S520), and an information encoding procedure (S530).
- the prediction procedure (S510) may be performed by the predictor 320 described above with reference to FIG. 3 .
- the intra predictor 322 may predict a current block (that is, a set of current encoding target feature elements) by referencing feature elements in a current feature/feature map. Intra prediction may be performed based on the spatial similarity of feature elements constituting the feature/feature map. For example, feature elements included in the same region of interest (RoI) within an image/video may be estimated to have similar data distribution characteristics. Accordingly, the intra predictor 322 may predict the current block by referencing the already reconstructed feature elements within the region of interest including the current block. At this time, the referenced feature elements may be located adjacent to the current block or may be located away from the current block depending on the prediction mode.
- Intra prediction modes for feature/feature map encoding may include a plurality of non-directional prediction modes and a plurality of directional prediction modes.
- the non-directional prediction modes may include, for example, prediction modes corresponding to the DC mode and planar mode of the image/video encoding procedure.
- the directional modes may include prediction modes corresponding to, for example, 33 directional modes or 65 directional modes of an image/video encoding procedure.
- this is an example, and the type and number of intra prediction modes may be set/changed in various ways depending on the embodiment.
- the inter predictor 321 may predict the current block based on a reference block (i.e., a set of referenced feature elements) specified by motion information on the reference feature/feature map. Inter prediction may be performed based on the temporal similarity of feature elements constituting the feature/feature map. For example, temporally consecutive features may have similar data distribution characteristics. Accordingly, the inter predictor 321 may predict the current block by referencing the already reconstructed feature elements of features temporally adjacent to the current feature.
- motion information for specifying the referenced feature elements may include a motion vector and a reference feature/feature map index.
- the motion information may further include information about an inter prediction direction (e.g., L0 prediction, L1 prediction, Bi prediction, etc.).
- neighboring blocks may include spatial neighboring blocks present within the current feature/feature map and temporal neighboring blocks present within the reference feature/feature map.
- a reference feature/feature map including a reference block and a reference feature/feature map including a temporal neighboring block may be the same or different.
- the temporal neighboring block may be referred to as a collocated reference block, etc.
- a reference feature/feature map including a temporal neighboring block may be referred to as a collocated feature/feature map.
- the inter predictor 321 may construct a motion information candidate list based on neighboring blocks and generate information indicating which candidate is used to derive the motion vector and/or reference feature/feature map index of the current block. Inter prediction may be performed based on various prediction modes.
- the inter predictor 321 may use motion information of the neighboring block as motion information of the current block.
- the residual signal may not be transmitted.
- the motion vector of the neighboring block is used as a motion vector predictor, and the motion vector of the current block may be indicated by signaling the motion vector difference.
- the predictor 320 may generate a prediction signal based on various prediction methods in addition to intra prediction and inter prediction described above.
- the prediction signal generated by the predictor 320 may be used to generate a residual signal (residual block, residual feature elements) (S520).
- the residual processing procedure (S520) may be performed by the residual processor 330 described above with reference to FIG. 3 .
- (quantized) transform coefficients may be generated through a transform and/or quantization procedure for the residual signal, and the entropy encoder 340 may encode information about the (quantized) transform coefficients in the bitstream as residual information (S530). Additionally, the entropy encoder 340 may encode information necessary for feature/feature map reconstruction, such as prediction information (e.g., prediction mode information, motion information, etc.), in the bitstream, in addition to the residual information.
- the feature/feature map encoding procedure may further include not only a procedure (S530) for encoding information for feature/feature map reconstruction (e.g., prediction information, residual information, partitioning information, etc.) and outputting it in the form of a bitstream, but also a procedure for generating a reconstructed feature/feature map for the current feature/feature map and a procedure (optional) for applying in-loop filtering to the reconstructed feature/feature map.
- the VCM encoder may derive (modified) residual feature(s) from the quantized transform coefficient(s) through dequantization and inverse transform, and generate a reconstructed feature/feature map based on the predicted feature(s) and (modified) residual feature(s) that are the output of step S510.
- the reconstructed feature/feature map generated in this way may be the same as the reconstructed feature/feature map generated in the VCM decoder.
- a modified reconstructed feature/feature map may be generated through the in-loop filtering procedure on the reconstructed feature/feature map.
- the modified reconstructed feature/feature map may be stored in a decoded feature buffer (DFB) or memory and used as a reference feature/feature map in the feature/feature map prediction procedure later.
- (in-loop) filtering-related information may be encoded and output in the form of a bitstream.
- noise that may occur during feature/feature map coding may be removed, and feature/feature map-based task performance may be improved.
- the identity of the prediction result can be guaranteed, the reliability of feature/feature map coding can be improved, and the amount of data transmission for feature/feature map coding can be reduced.
- FIG. 6 is a flowchart schematically illustrating a feature/feature map decoding procedure to which embodiments of the present disclosure are applicable.
- the feature/feature map decoding procedure may include an image/video information acquisition procedure (S610), a feature/feature map reconstruction procedure (S620 to S640), and an in-loop filtering procedure for a reconstructed feature/feature map (S650).
- the feature/feature map reconstruction procedure may be performed on the prediction signal and residual signal acquired through inter/intra prediction (S620) and residual processing (S630), dequantization and inverse transform process for quantized transform coefficients described in the present disclosure.
- a modified reconstructed feature/feature map may be generated through an in-loop filtering procedure for the reconstructed feature/feature map, and the modified reconstructed feature/feature map may be output as a decoded feature/feature map.
- the decoded feature/feature map may be stored in a decoded feature buffer (DFB) or memory and used as a reference feature/feature map in the inter prediction procedure when decoding the feature/feature map.
- the above-described in-loop filtering procedure may be omitted.
- the reconstructed feature/feature map may be output without change as a decoded feature/feature map, and stored in the decoded feature buffer (DFB) or memory, and then be used as a reference feature/feature map in the inter prediction procedure when decoding the feature/feature map.
- Embodiments of the present disclosure propose a prediction process necessary for compressing an activation (feature) map generated in a hidden layer of a deep neural network, and a method for generating a related bitstream.
- Input data which is input into a deep neural network, passes through many hidden layers to undergo an operation process, and an operation result of each hidden layer is output as a feature/feature map that has a variety of sizes and channels according to a type of the neural network in use and a position of a hidden layer within the neural network.
- FIG. 7 is a view showing an example of a feature extraction and reconstruction method to which embodiments of the present disclosure are applicable.
- a feature extraction network 710 may extract a middle-layer activation (feature) map of a neural network from a source image/video and output the extracted feature map.
- the feature extraction network 710 may be a set of consecutive hidden layers from an input of the neural network.
- An encoding apparatus 720 may compress the extracted feature map and output it in a bitstream form, and a decoding apparatus 730 may reconstruct the (compressed) feature map from the output bitstream.
- the encoding apparatus 720 may correspond to the encoding unit 12 of FIG. 1
- the decoding apparatus 730 may correspond to the decoding unit 22 of FIG. 1 .
- a task network 740 may perform a task based on the reconstructed feature map.
- the number of channels of a feature map to be compressed by VCM may vary according to a network used for feature extraction and an extraction position and may be greater than the number of channels of input data.
- FIG. 8 is a view showing an example of an image partitioning method to which embodiments of the present disclosure are applicable. As an example, it is a view showing a CTU, a slice and a tile within an image.
- a video/image coding method may be performed based on the following partitioning structure.
- Procedures such as prediction, residual processing ((inverse) transform, (inverse) quantization, etc.), syntax element coding and filtering may be performed based on a CTU and a CU (and/or TU, PU) that are derived based on the partitioning structure of FIG. 8 .
- a block partitioning procedure may be performed in an encoding apparatus, and partitioning-related information may be processed by encoding and delivered to a decoding apparatus in a bitstream form.
- the decoding apparatus may derive a block partitioning structure of a current picture, and based on this, may perform a series of procedures for image decoding (e.g., prediction, residual processing, block/picture reconstruction, in-loop filtering, etc.).
- a CU size may be equal to a TU size, and a plurality of TUs may be present within a CU area.
- a CU size may generally represent a luma component (sample) CB size.
- a TU size may generally represent a luma component (sample) TB size.
- a chroma component (sample) CB or TB size may be derived based on a luma component (sample) CB or TB size according to a composition ratio based on a color format (e.g., a chroma format such as 4:4:4, 4:2:2 or 4:2:0) of a picture/image, and a transform/inverse transform may be performed in a TU (TB) unit.
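- As a brief illustration of the composition-ratio derivation described above (a sketch with hypothetical helper names, assuming the common subsampling factors of each chroma format):

```python
# Chroma subsampling factors per color format: (horizontal, vertical).
CHROMA_SUBSAMPLING = {"4:4:4": (1, 1), "4:2:2": (2, 1), "4:2:0": (2, 2)}

def chroma_block_size(luma_width, luma_height, color_format="4:2:0"):
    """Derive a chroma CB/TB size from the luma CB/TB size and color format."""
    sub_w, sub_h = CHROMA_SUBSAMPLING[color_format]
    return luma_width // sub_w, luma_height // sub_h

print(chroma_block_size(64, 64, "4:2:0"))  # -> (32, 32)
```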
- an image processing unit may have a hierarchical structure.
- One picture is partitioned into one or more CUs, and one or more CUs may be grouped and distinguished into one or more tiles, bricks, slices and/or tile groups.
- One slice may include one or more bricks.
- One brick may include one or more CTU rows within a tile.
- a slice may include an integer number of bricks of a picture.
- One tile group may include one or more tiles.
- One tile may include one or more CTUs.
- One CTU may be partitioned into one or more CUs.
- a tile group may include an integer number of tiles according to tile raster scan in a picture.
- a slice header may carry information/parameter applicable to a corresponding slice (blocks in the slice).
- a picture header may carry information/parameter applicable to a corresponding picture (or block in the picture).
- When an encoding/decoding apparatus has a multi-core processor, an encoding/decoding procedure for the tile, slice, brick and/or tile group may be processed in parallel.
- a slice or a tile group may be interchangeably used. That is, a tile group header may be referred to as a slice header.
- a slice may have one of slice types including I slice, P slice and B slice.
- a tile/tile group, a brick, a slice, and minimum and maximum coding unit sizes may be determined according to a feature (e.g., resolution) of a video image or in consideration of coding efficiency or parallel processing, and information on it or information capable of deriving it may be included in a bitstream.
- In a decoding apparatus, information indicating whether a tile/tile group, a brick, a slice or an in-tile CTU of a current picture is partitioned into a plurality of coding units may be obtained.
- the slice header may include information/parameter that is commonly applicable to the slice.
- the APS (APS syntax) may include information/parameters that are commonly applicable to one or more pictures or slices.
- the PPS (PPS syntax) may include information/parameters that are commonly applicable to one or more pictures.
- the SPS (SPS syntax) may include information/parameters that are commonly applicable to one or more sequences.
- the VPS may include information/parameter that is commonly applicable to multiple layers.
- the DPS (DPS syntax) may include information/parameters that are commonly applicable to the entire video.
- the DPS may include information/parameter related to concatenation of a coded video sequence (CVS).
- information on the partitioning and construction of a tile/a tile group/a brick/a slice may be configured at an encoding stage through the higher-level syntax and be delivered to a decoding apparatus in a bitstream form.
- FIG. 9 is a view showing an example of a VCM image encoding/decoding system
- FIG. 10 is a view showing another example of a VCM image encoding/decoding system.
- an image coding system (e.g., FIG. 1 ) of FIG. 9 and FIG. 10 may be expanded/redesigned to use only a portion of a video source or to obtain and use a necessary portion/information of the image source according to the request, purpose and surrounding environment of a user or machine. That is, FIG. 9 and FIG. 10 may relate to video coding for machines (VCM).
- VCM may mean obtaining an entire image and/or a portion thereof and/or necessary information (feature) of the image according to the request, purpose and surrounding environment of a user or machine and encoding/decoding the entire image and/or the portion thereof and/or the necessary information.
- an encoding target may be an image itself or information referred to as a feature extracted from the image according to the request, purpose and surrounding environment of a user and/or a machine, and it may mean a set of a series of information over time.
- a VCM system may include an image encoder and an image decoder for VCM.
- a source device ( FIG. 1 ) may deliver encoded image information to a reception device through a storage medium or network.
- a user agent of each of the devices may be a human and/or a machine.
- a VCM system may include an image encoder and an image decoder for VCM.
- a source device may deliver encoded image information (feature) to a reception device through a storage medium or network.
- a user agent of each of the devices may be a human and/or a machine.
- a process of extracting information, that is, a feature, from an image may be referred to as feature extraction.
- Feature extraction may be performed both in a video/image capture device and/or a video/image generation device.
- a feature may be information extracted/processed in an image according to the request, purpose and surrounding environment of a user and/or a machine and mean a set of a series of information over time.
- An image encoder for VCM may perform a series of procedures including prediction, transform and quantization for compression and coding efficiency of an entire image and/or a portion of the image and/or a feature. Encoded data may be output in the form of a bitstream.
- An image decoder for VCM may decode a video/image by performing a series of procedures including inverse quantization, inverse transform and prediction corresponding to an operation of an encoding apparatus, that is, an encoder.
- a decoded image and/or feature may be rendered. Also, it may be used to perform a task of a user or a machine.
- An example of the task may include AI and computer vision tasks such as face recognition, action recognition and lane recognition.
- the present disclosure provides various embodiments about acquisition and coding of an entire image and/or a portion of the image, and unless otherwise stated, the embodiments may also be performed by being combined with each other.
- a method/embodiment of the present disclosure may be applied to a method disclosed in VCM (Video Coding for Machines) standards.
- VCM may be based on a hierarchical structure consisting of a feature coding layer, a neural network (feature) abstraction layer and a feature extraction layer.
- FIG. 11 is a view showing an example of a VCM hierarchical layer
- FIG. 12 is a view showing an example of a VCM bitstream consisting of an encoded abstraction feature and NNAL information.
- a VCM hierarchical structure may consist of a feature extraction layer 1110, a neural network (feature) abstraction layer 1120 and a feature coding layer 1130.
- the feature extraction layer 1110 means a layer for extracting a feature from an input source and may also include a result of extraction.
- the feature coding layer 1130 means a layer for compressing an extracted feature and may also include a result of compression.
- the neural network abstraction layer 1120 may abstract information (e.g., information on an extracted feature/feature map) generated from the feature extraction layer 1110 and deliver it to the feature coding layer 1130.
- the neural network abstraction layer 1120 may hide the inside of the feature extraction layer 1110 through abstraction of information and provide a consistent feature interface function. Accordingly, even if a compression target becomes different because of a change of tool (e.g., CNN, DNN, etc.), the feature coding layer 1130 may perform a consistent feature coding procedure.
- a neural network abstraction layer may also be referred to as a feature abstraction layer.
- An interface between the feature extraction layer 1110 and the neural network abstraction layer 1120 and an interface between the feature coding layer 1130 and the neural network abstraction layer 1120 may be defined in advance, and an operation in the neural network abstraction layer 1120 may be configured to be subject to change later.
- a bitstream configured as illustrated herein may be referred to as a neural network abstraction layer (NNAL) unit.
- An NNAL unit may be one independent feature reconstruction unit.
- Input features for one NNAL unit may be extracted from a same layer within a neural network. Accordingly, input features for one NNAL unit may be enforced to have a same characteristic. For example, a same feature extraction method may be applied to input features for one NNAL unit.
- An NNAL unit may include an NNAL unit header and an NNAL unit payload.
- An NNAL unit header may include any information necessary for utilizing an encoded feature for a task.
- An NNAL unit payload may include abstracted feature information.
- the NNAL unit payload may include a group header and group data.
- the group header may include configuration information of feature group data such as information on a temporal order of feature channels constituting a feature group, the number of the channels and their common attribute.
- a feature channel may mean an encoded feature unit.
- Group data may include a plurality of feature channels and a coding indicator, and each feature channel may include type information, prediction information, side information and residual information.
- the type information may indicate an encoding method
- the prediction information may indicate a prediction method.
- the side information may indicate additional information necessary for decoding (e.g., information related to entropy coding and quantization)
- the residual information may include information on encoded feature elements (e.g., a set of quantized transform coefficients).
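- The container described above may be pictured with the following sketch (hypothetical field names; no particular bitstream syntax is implied):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class FeatureChannel:            # one encoded feature unit
    type_info: int               # indicates the encoding method
    prediction_info: int         # indicates the prediction method
    side_info: bytes             # e.g., entropy coding/quantization parameters
    residual_info: bytes         # information on encoded feature elements

@dataclass
class GroupHeader:               # configuration information of feature group data
    channel_temporal_order: List[int]
    num_channels: int
    common_attribute: str

@dataclass
class NNALUnitPayload:           # abstracted feature information
    group_header: GroupHeader
    coding_indicator: int
    channels: List[FeatureChannel]

@dataclass
class NNALUnit:                  # one independent feature reconstruction unit
    header: dict                 # info necessary for utilizing the feature for a task
    payload: NNALUnitPayload
```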
- FIG. 13 is a view showing conventional tiers and levels
- FIG. 14 is a view for describing an encoding/decoding process that applies a neural network-based coding tool.
- the levels may define a maximum number of applicable pixels, a maximum number of per-second pixels, a maximum size of per-second encoded bitstream, and the like.
- the tiers may define a maximum size of per-second encoded bitstream. That is, a maximum required resource and a maximum required output may be defined by a level and a tier. By using them, a bitstream suitable for a decoder may be selected, and thus a decoder suitable for a service may be selected.
- a neural network-based technique may change a trained model that may be an algorithm.
- (a) of FIG. 14 is an example of applying a neural network-based technique for a coding tool unit in a conventional encoding/decoding framework
- (b) of FIG. 14 is an example of applying a neural network-based technique in an end-to-end manner.
- in the two examples, a trained model may be changed through neural network compression and representation (NNR), neural network exchange format (NNEF), open neural network exchange (ONNX) and the like; this is possible because hardware running a neural network-based technique may be reused.
- the level definition method using a fixed value has limitations in representing required resources for various profiles.
- Embodiments described in the present disclosure may solve the above-described problem and relate to the above-described coding hierarchy and structure of the VCM system.
- a method of improving the neural network abstraction layer 1120 is proposed.
- a video codec may define a profile, a tier and a level (PTL) for an appropriate design and application according to its purpose (that is, a service type) and an application environment (e.g., device performance and an available bandwidth).
- a profile may include information on a coding tool used for encoding/decoding of a video. Specifically, it may represent a constraint on the coding tool (algorithm), and a level may define a maximum number of applicable pixels, a maximum number of per-second pixels, and a maximum size of per-second encoded bitstream as constraints on an available memory size and an available buffer size.
- the level may be divided into a plurality of tiers. A maximum size of per-second encoded bitstream may be differently defined according to each tier.
- a decoder may check whether the bitstream is suitable for the capability of the decoder, and this is possible because an encoding algorithm is standardized such that there is no change in a resource necessary for decoding the bitstream. Meanwhile, as for a bitstream that is encoded not by the conventional method but by a neural network-based technique, it may be difficult to define a necessary resource for decoding only by the existing PTL. This is because, unlike the conventional method, the neural network-based technique has great flexibility in the algorithms used for encoding/decoding.
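- A hedged sketch of such a suitability check for the conventional, fixed-limit case may look as follows (the limit values and names are illustrative and are not taken from any standard):

```python
# Level -> (max picture size in luma samples, max bitrate in bits/s); toy values.
LEVEL_LIMITS = {4.0: (2_228_224, 12_000_000), 5.1: (8_912_896, 40_000_000)}

def is_decodable(profile, level, decoder_caps):
    """Check a bitstream's fixed PTL constraints against decoder capability."""
    max_samples, max_bitrate = LEVEL_LIMITS[level]
    return (profile in decoder_caps["profiles"]
            and max_samples <= decoder_caps["max_samples"]
            and max_bitrate <= decoder_caps["max_bitrate"])

caps = {"profiles": {"Main 10"}, "max_samples": 8_912_896, "max_bitrate": 60_000_000}
print(is_decodable("Main 10", 5.1, caps))  # -> True
```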
- the neural network-based technique may change a compression model (algorithm), and even when the algorithm is changed, hardware (HW) (e.g., mobile TPU, GPU) may be reused. That is, as the neural network-based technique may change an algorithm, it may face a problem when defining a level and a tier by fixed values.
- Embodiments described in the present disclosure relate to a technique for solving the above problem by defining a profile, a tier and a level for a video encoder/decoder to which a neural network-based technique is applied.
- Embodiments described in the present disclosure may be operated separately or in a combination thereof.
- any contents corresponding to "image" may also be applied to "feature" or "feature map".
- any contents corresponding to "feature" or "feature map" may also be applied to "image".
- FIG. 15 is a view showing an example of a conventional profile definition structure.
- each profile is independently defined and may be distinguished by an identifier (ID) indicating the profile.
- For example, the profile identifier (ID) of Main 10 may be defined as 10, and the profile identifier (ID) of Multilayer Main 10 may be defined as 17.
- FIG. 16 , FIG. 17 and FIG. 18 are views showing an example of a profile definition structure according to an embodiment of the present disclosure.
- FIG. 16 to FIG. 18 may show an example of a method for defining an expanded profile, including a neural network-based coding profile.
- FIG. 16 shows a method for additionally defining an expanded profile (e.g., a neural network-based coding profile) at the same level as, that is, in parallel with, an existing defined profile, according to an embodiment of the present disclosure.
- a general profile may include Main 10, Main 10 NN (Neural Network), Main 10 still picture, Main 10 still picture NN, ..., and Multilayer Main 10 4:4:4.
- a different coding tool may be applied according to each expanded profile (e.g., a profile to which a neural network-based coding tool is applied and a profile to which no neural network-based coding tool is applied), but a same service and a same purpose may be supported. For example, a bitdepth and a color format may be identical.
- for example, there may be a Main 10 neural network (NN) profile, an expanded profile that supports the same input and output as the existing Main 10 profile while applying a neural network-based coding tool.
- FIG. 17 and FIG. 18 are views showing a method for distinguishing layers of each profile according to an embodiment of the present disclosure. Referring to FIG. 17 , a profile may first be defined in a first layer, and then the profile may be defined in detail in a second layer.
- This is for responding to the increasingly diverse operating environments of encoding/decoding apparatuses. For example, the use of a GPU, an NPU and the like increases in image encoding/decoding and image processing, but such resources are not available in every device.
- Accordingly, corresponding information may have to be defined in the PTL.
- the PTL may be defined according to a type of a required resource through the profile definition method disclosed in FIG. 17 according to an embodiment of the present disclosure.
- a general profile and an expanded profile may be distinguished in a first layer, and detailed profiles may be applied to the profiles distinguished in the first layer.
- the general profile may correspond to conventional video codec techniques like HEVC and VVC and include Main 10, Main 10 still picture, and Main 10 4:4:4 Still Picture.
- Since the expanded profile is a neural network-based profile, it may include Main 10, Profile A, ..., Profile Z.
- PTL is divided to support various types of devices, and such division is advantageous in supporting efficient decoding.
- profiles may be distinguished in a first layer according to a purpose of use and a target, and then the profiles may be defined in detail in a second layer.
- Conventionally, the consumers of videos were mostly humans, but the current trend is that more and more videos are consumed by machines such as robots, self-driving cars and AIs, and a layer definition reflecting this trend may be required. This is because humans and machines may need different types of information, and thus a method of encoding/decoding information required by humans and a method of encoding/decoding information required by machines may differ from each other.
- a human profile and a machine profile may be distinguished in a first layer, and then a detailed profile of the profiles may be defined according to each consumption target, that is, a purpose.
- the human profile may include Profile A, B, ... Z
- the machine profile may include Profile A', B', ... Z'.
- Table 1 below is an example of a profile structure that may be defined according to FIG. 16 to FIG. 18 .
- a syntax structure profile_tier_level for the profile structure may be signaled by being included in a parameter set including a VPS.
- a syntax element profile_idc (e.g., a first syntax element) may be information on a profile indicator (or profile identifier) and indicate a major classification consisting of a coding tool and a resource which are required according to a purpose and a use of an encoded bitstream.
- the first syntax element may be signaled based on information (e.g., profileTierPresentFlag) indicating whether PTL information is present.
- the first syntax element may indicate GENERAL_PROFILE representing a general profile and NN_PROFILE representing a neural network-based profile.
- In case a first syntax element value indicates a general profile, it may mean that an encoded bitstream is encoded by using a conventional codec (e.g., VVC, HEVC, etc.) and a corresponding resource.
- In case a first syntax element value does not indicate any general profile, it may mean that an encoded bitstream is encoded by using a neural network-based coding tool and a resource such as a GPU and an NPU.
- a syntax element general_profile_idc (e.g., a second syntax element) is a general profile indicator and may be signaled based on the profile indicator (or profile identifier).
- For example, when the profile indicator (e.g., profile_idc, the first syntax element) indicates a general profile (e.g., GENERAL_PROFILE), the second syntax element may be signaled and indicate a detailed profile including a coding tool.
- a syntax element nn_profile_idc (e.g., a third syntax element) is a neural network-based profile indicator and may be signaled based on the profile indicator (or profile identifier).
- For example, when a value of the profile indicator (e.g., profile_idc, the first syntax element) indicates a neural network-based profile (e.g., NN_PROFILE), the third syntax element may be signaled and indicate a detailed profile including a coding tool. That is, a type of an applicable neural network-based coding tool may differ according to a value of the third syntax element.
- whether a level is expandable may also be defined according to a value of the third syntax element. For example, in case the third syntax element indicates a specific value, an input image may be subject to a predefined resource constraint, but in case the third syntax element indicates another specific value, it may mean that a resource constraint for an input image may be changed through expansion.
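- A minimal parsing sketch of the profile structure of Table 1 may look as follows (hypothetical identifier values and a toy reader interface; the actual syntax table governs):

```python
GENERAL_PROFILE, NN_PROFILE = 0, 1          # illustrative profile_idc values

def parse_ptl_table1(read, profile_tier_present_flag):
    """profile_idc (first syntax element) selects which detailed indicator follows."""
    ptl = {}
    if profile_tier_present_flag:
        ptl["profile_idc"] = read("profile_idc")
        if ptl["profile_idc"] == GENERAL_PROFILE:
            ptl["general_profile_idc"] = read("general_profile_idc")  # second
        else:
            ptl["nn_profile_idc"] = read("nn_profile_idc")            # third
    return ptl

# Toy reader: returns canned values by syntax element name.
bits = {"profile_idc": NN_PROFILE, "nn_profile_idc": 3}
print(parse_ptl_table1(bits.__getitem__, True))
```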
- Table 2 below is an example of a profile structure that may be defined according to FIG. 16 to FIG. 18 .
- a syntax element profile_idc may be information on a profile indicator (or profile identifier) and indicate a major classification consisting of a coding tool and a resource which are required according to a purpose and a use of an encoded bitstream.
- the first syntax element may be signaled based on information (e.g., profileTierPresentFlag) indicating whether PTL information is present.
- the first syntax element may indicate a human recognition-based profile, that is, a human analysis-based profile (e.g., HUMAN_PROFILE) or a machine recognition-based profile, that is, a machine analysis-based profile (e.g., MACHINE_PROFILE).
- In case a value of the first syntax element indicates the human analysis-based profile (e.g., HUMAN_PROFILE), it may mean that an encoded bitstream is encoded to be viewed by a human. That is, it may mean that a coding tool and an evaluation criterion applied to an input image are configured to be suitable for the human's viewing.
- In case a value of the first syntax element indicates the machine analysis-based profile (e.g., MACHINE_PROFILE), it may mean that an encoded bitstream is encoded to perform a machine task. That is, a coding tool and an evaluation criterion applied to an input image may be configured to be suitable for the machine task.
- a syntax element human_profile_idc (e.g., a fourth syntax element) is a human profile (that is, a human analysis-based profile) indicator and may be signaled, when a profile indicator (e.g., the first syntax element profile_idc) value is a human analysis-based profile (e.g., HUMAN_PROFILE), and represent a detailed classification of a human profile.
- a type of a coding tool applicable to an input image may be different according to a fourth syntax element value.
- For example, a specific fourth syntax element value may indicate that only coding tools with the same characteristics as a conventional coding method (e.g., VVC, HEVC, etc.) are configured.
- Another specific fourth syntax element value may represent the use of an expanded coding tool and may, for example, apply a neural network-based coding tool to an input image.
- whether a level (e.g., a required resource) is expandable may also be defined based on the fourth syntax element value.
- For example, one fourth syntax element value may indicate that an input image is subject to a predefined resource constraint, while another fourth syntax element value may indicate that a resource constraint of an input image may be changed through expansion.
- a syntax element machine_profile_idc (e.g., a fifth syntax element) is a machine profile (that is, a machine analysis-based profile) indicator and may represent a detailed classification of a machine profile, when a profile indicator (e.g., the first syntax element profile_idc) value is a machine analysis-based profile (e.g., MACHINE_PROFILE).
- a type of an applicable coding tool may be different according to a fifth syntax element value.
- For example, a specific fifth syntax element value may indicate that only coding tools with the same characteristics as a conventional coding method (e.g., VVC, HEVC, etc.) are configured. On the other hand, another specific fifth syntax element value may indicate that a neural network-based coding tool is applied.
- whether a level (e.g., a required resource) is expandable may also be defined based on the fifth syntax element value.
- For example, a specific fifth syntax element value may indicate that an input image is subject to a predefined resource constraint, while another specific fifth syntax element value may indicate that a resource constraint of an input image may be changed through expansion.
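- Analogously, a minimal parsing sketch of the profile structure of Table 2 may look as follows (hypothetical values; the detailed indicator is keyed to the consumption target):

```python
HUMAN_PROFILE, MACHINE_PROFILE = 0, 1       # illustrative profile_idc values

def parse_ptl_table2(read, profile_tier_present_flag):
    """The fourth or fifth syntax element is read depending on profile_idc."""
    ptl = {}
    if profile_tier_present_flag:
        ptl["profile_idc"] = read("profile_idc")
        if ptl["profile_idc"] == HUMAN_PROFILE:
            ptl["human_profile_idc"] = read("human_profile_idc")      # fourth
        else:
            ptl["machine_profile_idc"] = read("machine_profile_idc")  # fifth
    return ptl

bits = {"profile_idc": MACHINE_PROFILE, "machine_profile_idc": 7}
print(parse_ptl_table2(bits.__getitem__, True))
```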
- Table 3 below is an example of a profile structure that may be defined according to FIG. 16 to FIG. 18 .
- a syntax element profile_idc (e.g., a first syntax element) may be information on a profile indicator (or profile identifier) and indicate a major classification consisting of a coding tool and a resource which are required according to a purpose and a use of an encoded bitstream.
- the first syntax element may be signaled based on information (e.g., profileTierPresentFlag) indicating whether PTL information is present.
- the first syntax element may indicate a human recognition-based profile, that is, a human analysis-based profile (e.g., HUMAN_PROFILE), a machine recognition-based profile, that is, a machine analysis-based profile (e.g., MACHINE_PROFILE), a general profile (e.g., GENERAL_PROFILE), or a neural network-based profile (e.g., NN_PROFILE).
- the syntax element machine_profile_idc (e.g., the fifth syntax element) has already been described above and will not be redundantly described herein.
- the syntax element general_profile_idc (e.g., the second syntax element) has already been described above and will not be redundantly described herein.
- the syntax element nn_profile_idc (e.g., the third syntax element) has already been described above and will not be redundantly described herein.
- the syntax element human_profile_idc (e.g., the fourth syntax element) has already been described above and will not be redundantly described herein.
- the second syntax element, the third syntax element, the fourth syntax element and the fifth syntax element may be collectively referred to as a sixth syntax element sub_profile_idc.
- the sixth syntax element is a sub-profile indicator and may be signaled according to a profile indicator of a higher layer (e.g., the first syntax element profile_idc) and represent a corresponding detailed profile.
- the second syntax element, the third syntax element, the fourth syntax element and the fifth syntax element may all represent a detailed profile for a profile indicator determined in a higher layer, as described above. Although the meaning may differ according to the profile indicator of the higher layer, the indicator may be defined as one variable (e.g., the sixth syntax element sub_profile_idc). That is, the second syntax element to the fifth syntax element may be replaced by the sixth syntax element.
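- The replacement by one variable may be sketched as follows (an illustrative mapping; the interpretation of sub_profile_idc is keyed by the higher-layer profile indicator):

```python
# Which detailed indicator sub_profile_idc stands for, per higher-layer profile_idc.
DETAIL_BY_PROFILE = {0: "general_profile_idc", 1: "nn_profile_idc",
                     2: "human_profile_idc", 3: "machine_profile_idc"}

def interpret_sub_profile(profile_idc, sub_profile_idc):
    """Resolve the sixth syntax element against its higher-layer profile indicator."""
    return DETAIL_BY_PROFILE[profile_idc], sub_profile_idc

print(interpret_sub_profile(3, 7))  # -> ('machine_profile_idc', 7)
```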
- In the above description, profiles of each layer are distinguished based on a purpose or means; however, this is only an embodiment of the present disclosure, and it is obvious that the name and content of each profile may be different.
- an expanded profile, which is distinguished from a general profile, may include a neural network-based profile, a human analysis-based profile, and a machine analysis-based profile, and each of the profiles may be described in the same manner as above.
- FIG. 19 and FIG. 20 show a conventional profile structure and a profile structure according to an embodiment of the present disclosure respectively.
- the conventional PTL structure has a lower profile included in a higher profile, so that there may be a significant common part between profiles.
- However, for profiles with different purposes or required resources, such a structure of including a lower profile may not be appropriate.
- FIG. 19 shows a conventional profile structure of this kind, in which a lower profile is included in a higher profile.
- FIG. 20 shows a profile structure according to an embodiment of the present disclosure, visualizing a proposed hierarchical profile structure (e.g., the above-described profile structure of FIG. 17 and FIG. 18 ).
- an interface and coding tools which need to be commonly used may be shared in a higher layer, and a profile of the higher layer may be subdivided in a lower layer.
- a profile structure may be efficiently defined, and a required coding tool may be easily shared.
- a neural network-based in-loop filter model may be changed according to a type of a task to be performed in encoding/decoding, and a required resource may also be changed.
- Table 4 below is an example of a profile structure that may be defined according to FIG. 16 to FIG. 18 .
- the syntax element profile_idc (e.g., the first syntax element) may be a profile indicator. This will not be redundantly described herein.
- the syntax element general_profile_idc (e.g., the second syntax element) may be a general profile indicator. This will not be redundantly described herein.
- the syntax element nn_profile_idc (e.g., the third syntax element) may be a neural network profile indicator. This will not be redundantly described herein.
- a syntax element level_extension_flag (e.g., a seventh syntax element) may be level extension information indicating whether a level is extended. This may be signaled based on the first syntax element and/or the third syntax element. As an example, in case a value of the first syntax element indicates a neural network-based profile (e.g., NN_PROFILE) and a value of the third syntax element indicates an expandable neural network-based profile (e.g., expandable_nn_profile_idc), the seventh syntax element, which is information indicating whether a level is extended, may be signaled.
- In case the value of the seventh syntax element is a first value (e.g., 1), it may mean that expanded level information is defined apart from predefined level information.
- In this case, information on an expanded level (e.g., neural_network_decoding_capability_information(), that is, neural_network_decoding_capability_information()_rbsp) may be signaled.
- In case the value of the seventh syntax element is a second value (e.g., 0), it may mean that no content (such as a resource) is expanded beyond what is predefined.
- neural_network_decoding_capability_information() may be information on an expanded level, that is, information on neural network-based decoding capability, and may be signaled based on the level extension information (e.g., the seventh syntax element). This will not be redundantly described herein.
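- The conditional signaling of Table 4 may be sketched as follows (hypothetical values; EXPANDABLE_NN_PROFILE_IDC here stands for any expandable detailed value):

```python
NN_PROFILE = 1
EXPANDABLE_NN_PROFILE_IDC = 255             # illustrative "expandable" detailed value

def parse_ptl_table4(read):
    """level_extension_flag is signaled only for an expandable NN detailed profile."""
    ptl = {"profile_idc": read("profile_idc")}
    if ptl["profile_idc"] == NN_PROFILE:
        ptl["nn_profile_idc"] = read("nn_profile_idc")
        if ptl["nn_profile_idc"] == EXPANDABLE_NN_PROFILE_IDC:
            ptl["level_extension_flag"] = read("level_extension_flag")   # seventh
            if ptl["level_extension_flag"] == 1:
                ptl["nndci"] = read("neural_network_decoding_capability_information")
    return ptl

bits = {"profile_idc": NN_PROFILE, "nn_profile_idc": EXPANDABLE_NN_PROFILE_IDC,
        "level_extension_flag": 1,
        "neural_network_decoding_capability_information": {"max_flops": 2e12}}
print(parse_ptl_table4(bits.__getitem__))
```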
- Table 5 below is an example of a profile structure that may be defined according to FIG. 16 to FIG. 18 .
- the syntax element profile_idc (e.g., the first syntax element) may be a profile indicator. This will not be redundantly described herein.
- the syntax element human_profile_idc (e.g., the fourth syntax element) may be a human profile indicator (that is, a human analysis-based profile indicator). This will not be redundantly described herein.
- the syntax element machine_profile_idc (e.g., the fifth syntax element) may be a machine profile indicator (that is, a machine analysis-based profile indicator). This will not be redundantly described herein.
- the syntax element level_extension_flag (e.g., the seventh syntax element) may be level extension information indicating whether a level is extended. This may be signaled based on the first syntax element, the fourth syntax element and/or the fifth syntax element. Meanwhile, as an example, in case a value of the first syntax element indicates a machine analysis-based profile (e.g., MACHINE_PROFILE) and a value of the fifth syntax element indicates an expandable machine analysis-based profile (e.g., expandable_machine_profile_idc), the seventh syntax element, which is information indicating whether a level is extended, may be signaled.
- Here, a value such as expandable_machine_profile_idc may not refer to one specific detailed profile; as described below, it may collectively refer to any profile indicator whose level is expandable.
- In case the value of the seventh syntax element is a first value (e.g., 1), it may mean that expanded level information is defined apart from predefined level information. In this case, information on an expanded level (e.g., neural_network_decoding_capability_information(), that is, neural_network_decoding_capability_information()_rbsp) may be signaled.
- In case the value of the seventh syntax element is a second value (e.g., 0), it may mean that no content (such as a resource) is expanded beyond what is predefined.
- Table 6 below is an example of a profile structure that may be defined according to FIG. 16 to FIG. 18 .
- the syntax element profile_idc (e.g., the first syntax element) may be a profile indicator. This will not be redundantly described herein.
- the syntax element general_profile_idc (e.g., the second syntax element) may be a general profile indicator. This will not be redundantly described herein.
- the syntax element nn_profile_idc (e.g., the third syntax element) may be a neural network profile indicator. This will not be redundantly described herein.
- the syntax element human_profile_idc (e.g., the fourth syntax element) may be a human profile indicator (that is, a human analysis-based profile indicator). This will not be redundantly described herein.
- the syntax element machine_profile_idc (e.g., the fifth syntax element) may be a machine profile indicator (that is, a machine analysis-based profile indicator). This will not be redundantly described herein.
- expandable_machine_profile_idc and expandable_human_profile_idc do not each refer to one specific expandable machine or human profile value; rather, they may be used to collectively refer to any machine profile indicator and any human profile indicator, respectively, whose level is expandable.
- the syntax element level_extension_flag (e.g., the seventh syntax element) may be level extension information indicating whether a level is extended. This may be signaled based on the first syntax element, the second syntax element, the third syntax element, the fourth syntax element and/or the fifth syntax element. This will not be redundantly described herein.
- the syntax structure neural_network_decoding_capability_information(), that is, neural_network_decoding_capability_information_rbsp (e.g., the eighth syntax element (structure)), may be information on an expanded level and may be signaled based on the level extension information. This will not be redundantly described herein.
- Table 7 below is an example of an expanded level structure, that is, a detailed example of a syntax element that may be included in the eighth syntax element (structure).
- a syntax element max_flops (e.g., a ninth syntax element) may indicate a maximum required operation amount and may be expressed in FLOPS (floating point operations per second).
- Alternatively, a maximum operation amount may be defined by using a variable (e.g., multiply accumulate (MAC)) that represents a resource size for running a neural network (NN)-based technique.
- a syntax element max_memory (e.g., a tenth syntax element) may indicate a maximum required memory size.
- a size of a required peak memory may differ according to an applied neural network model; when a corresponding model is updated, the tenth syntax element may have to be updated as well.
- a syntax element nndci_extension_flag (e.g., an eleventh syntax element) is information indicating whether extension data information is present, and may be extension data presence information (e.g., additional resource presence information), that is, neural network decoding capability information (DCI) extension information.
- In case a value of the eleventh syntax element is a first value (e.g., 0), it may mean that no extension data information (e.g., a twelfth syntax element nndci_extension_data_flag) is signaled in the eighth syntax element (structure).
- the eleventh syntax element may be information for a case in which additional resource information is needed to run a neural network-based technique. As an example, there may be a change in decoding capability during a video service, that is, an available resource may increase or decrease because of, for example, a constraint on an encoder/decoder resource.
- the eleventh syntax element, which may be signaled, relates to decoding capability information (DCI) extension data; since the DCI may include PTL information (e.g., profile_tier_level) and the PTL information may include the eighth syntax element (structure) (e.g., neural_network_decoding_capability_information_rbsp), information on a resource may be updated through a DCI unit (e.g., DCI_NAL unit).
- the eighth syntax element (e.g., neural_network_decoding_capability_information_rbsp) may be redefined by signaling the eleventh syntax element, and thus video encoding/decoding may be performed according to the decoding capability of the apparatus.
- the syntax element nndci_extension_data_flag (e.g., the twelfth syntax element) may mean extension data information.
- the twelfth syntax element may be signaled based on the eleventh syntax element.
- the twelfth syntax element may be signaled further based on a specific syntax element (e.g., more_rbsp_data()) indicating whether residual data is present.
- In case a value of the twelfth syntax element is a first value (e.g., 1), it may indicate presence of information on an additional resource.
- In case the value of the twelfth syntax element is a second value (e.g., 0), it may mean that there is no information on an additional resource.
- an additionally required resource may be defined by the twelfth syntax element, apart from an operation amount and a memory.
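- The expanded level structure of Table 7 may be sketched as follows (a hypothetical reader interface; field names follow the description above):

```python
def parse_nndci(read, more_rbsp_data):
    """Sketch of neural_network_decoding_capability_information_rbsp (Table 7)."""
    nndci = {
        "max_flops": read("max_flops"),                        # ninth: max operation amount
        "max_memory": read("max_memory"),                      # tenth: max (peak) memory
        "nndci_extension_flag": read("nndci_extension_flag"),  # eleventh
    }
    if nndci["nndci_extension_flag"] == 1:
        nndci["extension_data"] = []
        while more_rbsp_data():                                # residual-data check
            nndci["extension_data"].append(read("nndci_extension_data_flag"))  # twelfth
    return nndci

# Toy reader over canned values: max_flops, max_memory, nndci_extension_flag.
vals = iter([2e12, 512 << 20, 0])
print(parse_nndci(lambda name: next(vals), lambda: False))
```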
- a flexible encoding/decoding method may be applied according to the performance of an apparatus and a service purpose, which may result in improved coding efficiency.
- FIG. 21 is a view showing an image decoding method according to an embodiment of the present disclosure.
- the image decoding method of FIG. 21 may be performed by an image decoding apparatus.
- FIG. 21 will not be redundantly described herein.
- PTL information of an image may be obtained (S2110).
- the PTL information may be encoded in a bitstream.
- the PTL information may include the above-described profile_tier_level.
- a profile and a tier of an image may be determined (S2120).
- the PTL information may include a profile indicator representing a type of a profile for an image, and the profile type may be classified according to a purpose, a viewing target, and the like.
- the profile type may include a basic profile (e.g., a first type) and an expanded profile (e.g., a second type).
- the basic profile may be a general profile
- the expanded profile may include a neural network (NN)-based profile and a machine analysis-based profile.
- the expanded profile may further include a human analysis-based profile.
- the PTL information may further include a detailed profile indicator (e.g., a second syntax element to a sixth syntax element) representing a detailed profile type for the profile type.
- the detailed profile indicator may include information on a detailed profile capable of level expansion.
- the PTL information may also include expanded level presence information (e.g., a seventh syntax element) indicating whether a level is extended.
- based on the expanded level presence information, information on a required resource (e.g., an eighth syntax element (structure)) may be obtained.
- the information on the required resource may include at least one or more of a maximum operation amount (e.g., a ninth syntax element), a maximum memory (e.g., a tenth syntax element), or additional resource presence information (e.g., an eleventh syntax element).
- based on the additional resource presence information, additional resource information (e.g., a twelfth syntax element) may be obtained.
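- As a usage illustration, a decoder might compare the obtained resource information with its own capability before committing to decode (toy values and hypothetical field names):

```python
def can_run_nn_bitstream(nndci, device):
    """Check the signaled expanded-level resources against the device's resources."""
    return (nndci["max_flops"] <= device["flops"]
            and nndci["max_memory"] <= device["memory_bytes"])

nndci = {"max_flops": 2e12, "max_memory": 512 << 20}   # from the bitstream's PTL
device = {"flops": 4e12, "memory_bytes": 1 << 30}      # decoder capability
print(can_run_nn_bitstream(nndci, device))             # -> True
```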
- the names of the first syntax element to the twelfth syntax element are arbitrarily designated for clarity of description and do not limit the naming of the corresponding syntax elements.
- the first syntax element to the twelfth syntax element may be referred to as first information to twelfth information respectively.
- the first syntax element to the twelfth syntax element may be obtained from a bitstream and also be derived by another syntax element, which may also be included in an embodiment of the present disclosure.
- FIG. 22 is a view showing an image encoding method according to an embodiment of the present disclosure.
- the image encoding method of FIG. 22 may be performed by an image encoding apparatus.
- FIG. 22 will not be redundantly described herein.
- PTL information of an image may be determined (S2210).
- a PTL may be determined, and then information related to the PTL may also be determined.
- the determined PTL may be encoded as PTL information of the image (S2220).
- the PTL information may include the above-described profile_tier_level.
- the PTL information may include a profile indicator (e.g., a first syntax element) representing a type of a profile for the image, and a profile type may be classified according to a purpose and a viewing target.
- the profile type may include a basic profile (e.g., a first type) and an expanded profile (e.g., a second type).
- the basic profile may be a general profile
- the expanded profile may include a neural network (NN)-based profile and a machine analysis-based profile.
- the expanded profile may further include a human analysis-based profile.
- a detailed profile type for the profile type may be determined.
- the detailed profile type may be encoded in a bitstream as a detailed profile indicator (e.g., a second syntax element to a sixth syntax element).
- the detailed profile indicator may include information on a detailed profile capable of level expansion.
- whether a level is expanded may further be determined based on the profile type. Information on whether a level is expanded, that is, expanded level presence information (e.g., a seventh syntax element), may be encoded in a bitstream.
- information on a required resource (e.g., an eighth syntax element (structure)) may further be determined.
- the information on the required resource may include at least one or more of a maximum operation amount (e.g., a ninth syntax element), a maximum memory (e.g., a tenth syntax element), or additional resource presence information (e.g., an eleventh syntax element).
- the information on the required resource may be encoded in a bitstream.
- additional resource presence information may be information regarding whether an additional resource is present. Based on whether an additional resource is present, additional resource information (e.g., a twelfth syntax element) may further be determined, and the additional resource information may be encoded in a bitstream.
- the names of the first syntax element to the twelfth syntax element are arbitrarily designated for clarity of description and do not limit the naming of the corresponding syntax elements.
- the first syntax element to the twelfth syntax element may be referred to as first information to twelfth information respectively.
- the first syntax element to the twelfth syntax element may be encoded in a bitstream or derived from another syntax element, which may also be included in an embodiment of the present disclosure.
- a bitstream generated by the image encoding method may be stored in a non-transitory computer-readable recording medium.
- a bitstream generated by the image encoding method may be transmitted to another apparatus (e.g., an image decoding apparatus).
- a method for transmitting the bitstream may include a process of transmitting the bitstream.
- the image encoding apparatus or the image decoding apparatus that performs a predetermined operation may perform an operation (step) of confirming an execution condition or situation of the corresponding operation (step). For example, if it is described that predetermined operation is performed when a predetermined condition is satisfied, the image encoding apparatus or the image decoding apparatus may perform the predetermined operation after determining whether the predetermined condition is satisfied.
- Embodiments described in the present disclosure may be implemented and performed on a processor, microprocessor, controller, or chip.
- the functional units shown in each drawing may be implemented and performed on a computer, processor, microprocessor, controller, or chip.
- information for implementation (e.g., information on instructions) or an algorithm may be stored in a digital storage medium.
- the decoder (decoding apparatus) and the encoder (encoding apparatus), to which the embodiment(s) of the present disclosure are applied, may be included in a multimedia broadcasting transmission and reception device, a mobile communication terminal, a home cinema video device, a digital cinema video device, a surveillance camera, a video chat device, a real time communication device such as video communication, a mobile streaming device, a storage medium, a camcorder, a video on demand (VoD) service providing device, an OTT video (over the top video) device, an Internet streaming service providing device, a three-dimensional (3D) video device, an augmented reality (AR) device, a video telephony video device, a transportation terminal (e.g., vehicle (including autonomous vehicle) terminal, robot terminal, airplane terminal, ship terminal, etc.), a medical video device, and the like, and may be used to process video signals or data signals.
- the OTT video devices may include a game console, a Blu-ray player, an Internet access TV, a home theater system, and the like.
- a processing method to which the embodiment(s) of the present disclosure is applied may be produced in the form of a program executed by a computer and stored in a computer-readable recording medium.
- Multimedia data having a data structure according to the embodiment(s) of this document may also be stored in a computer-readable recording medium.
- Computer-readable recording media include all types of storage devices and distributed storage devices that store computer-readable data.
- Computer-readable recording media include, for example, Blu-ray Disc (BD), Universal Serial Bus (USB), ROM, PROM, EPROM, EEPROM, RAM, CD-ROM, magnetic tape, floppy disk, and optical data storage device.
- computer-readable recording media include media implemented in the form of carrier waves (e.g., transmission via the Internet).
- the bitstream generated by the encoding method may be stored in a computer-readable recording medium or transmitted through a wired or wireless communication network.
- embodiment(s) of the present disclosure may be implemented as a computer program product by program code, and the program code may be executed on a computer by the embodiment(s) of the present disclosure.
- the program code may be stored on a carrier readable by a computer.
- FIG. 23 is a view illustrating an example of a content streaming system to which embodiments of the present disclosure are applicable.
- the content streaming system may largely include an encoding server, a streaming server, a web server, a media storage, a user device, and a multimedia input device.
- the encoding server compresses contents input from multimedia input devices such as a smartphone, a camera, a camcorder, etc. into digital data to generate a bitstream and transmits the bitstream to the streaming server.
- multimedia input devices such as smartphones, cameras, camcorders, etc. directly generate a bitstream
- the encoding server may be omitted.
- the bitstream may be generated by an image encoding method or an image encoding apparatus, to which the embodiment of the present disclosure is applied, and the streaming server may temporarily store the bitstream in the process of transmitting or receiving the bitstream.
- the streaming server transmits the multimedia data to the user device based on a user's request through the web server, and the web server serves as a medium for informing the user of a service.
- the web server may deliver it to a streaming server, and the streaming server may transmit multimedia data to the user.
- the contents streaming system may include a separate control server.
- the control server serves to control a command/response between devices in the contents streaming system.
- the streaming server may receive contents from a media storage and/or an encoding server. For example, when the contents are received from the encoding server, the contents may be received in real time. In this case, in order to provide a smooth streaming service, the streaming server may store the bitstream for a predetermined time.
- Examples of the user device may include a mobile phone, a smartphone, a laptop computer, a digital broadcasting terminal, a personal digital assistant (PDA), a portable multimedia player (PMP), a navigation device, a slate PC, a tablet PC, an ultrabook, a wearable device (e.g., a smartwatch, smart glasses, a head mounted display), a digital TV, a desktop computer, digital signage, and the like.
- Each server in the contents streaming system may be operated as a distributed server, in which case data received from each server may be distributed.
- FIG. 24 is a diagram illustrating another example of a content streaming system to which embodiments of the present disclosure are applicable.
- a task may be performed in a user terminal or a task may be performed in an external device (e.g., streaming server, analysis server, etc.) according to the performance of the device, the user's request, the characteristics of the task to be performed, etc.
- the user terminal may generate a bitstream including information necessary to perform the task (e.g., information such as task, neural network and/or usage) directly or through an encoding server.
- the analysis server may perform a task requested by the user terminal after decoding the encoded information received from the user terminal (or from the encoding server). At this time, the analysis server may transmit the result obtained through the task performance back to the user terminal or may transmit it to another linked service server (e.g., web server). For example, the analysis server may transmit a result obtained by performing a task of determining a fire to a fire-related server.
- the analysis server may include a separate control server.
- the control server may serve to control a command/response between each device associated with the analysis server and the server.
- the analysis server may request desired information from a web server based on a task to be performed by the user device and task information that can be performed.
- When the analysis server requests a desired service from the web server, the web server may transmit it to the analysis server, and the analysis server may transmit data to the user terminal.
- the control server of the content streaming system may serve to control a command/response between devices in the streaming system.
- the embodiments of the present disclosure may be used to encode or decode a feature/feature map.
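To make the feature/feature map notion concrete, the following NumPy sketch reduces feature coding to its most naive form, uniform scalar quantization of a feature tensor. It is illustrative only and does not reflect the coding scheme defined by the disclosure; all names are assumptions.

```python
import numpy as np

def encode_feature_map(feature: np.ndarray, num_bits: int = 8):
    """Uniform scalar quantization of one feature map (illustrative only)."""
    lo, hi = float(feature.min()), float(feature.max())
    scale = (hi - lo) / (2**num_bits - 1) or 1.0  # avoid zero for flat maps
    q = np.round((feature - lo) / scale).astype(np.uint8)
    return q, (lo, scale)  # quantized map plus side information

def decode_feature_map(q: np.ndarray, side_info) -> np.ndarray:
    """Reconstruct an approximation of the feature map from the quantized data."""
    lo, scale = side_info
    return q.astype(np.float32) * scale + lo

feat = np.random.randn(16, 16).astype(np.float32)  # e.g., one feature channel
q, info = encode_feature_map(feat)
rec = decode_feature_map(q, info)
print(np.abs(feat - rec).max())  # small quantization error
```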
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| KR20220120503 | 2022-09-23 | | |
| PCT/KR2023/014377 (WO2024063559A1, ko) | 2022-09-23 | 2023-09-21 | Image encoding/decoding method and apparatus based on high-level syntax defining a profile, and recording medium storing a bitstream |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| EP4593391A1 (de) | 2025-07-30 |
Family
ID=90455001
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| EP23868611.7A (Pending) EP4593391A1 (de) | Image encoding/decoding method and apparatus based on high-level syntax defining a profile, and recording medium storing a bitstream | 2022-09-23 | 2023-09-21 |
Country Status (3)
| Country | Link |
|---|---|
| EP (1) | EP4593391A1 (de) |
| CN (1) | CN120226361A (de) |
| WO (1) | WO2024063559A1 (de) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| GB2642269A (en) * | 2024-06-28 | 2026-01-07 | Sony Group Corp | Data encoding and decoding |
Family Cites Families (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109905703B (zh) * | 2013-10-11 | 2023-11-17 | VID Scale, Inc. | High-level syntax for HEVC extensions |
| US10999605B2 (en) * | 2017-01-10 | 2021-05-04 | Qualcomm Incorporated | Signaling of important video information in file formats |
| WO2020229734A1 (en) * | 2019-05-16 | 2020-11-19 | Nokia Technologies Oy | An apparatus, a method and a computer program for handling random access pictures in video coding |
| US11973955B2 (en) * | 2019-12-20 | 2024-04-30 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Video coding in relation to subpictures |
| US11683514B2 (en) * | 2020-12-22 | 2023-06-20 | Tencent America LLC | Method and apparatus for video coding for machine |
- 2023-09-21: CN application CN202380078783.2A, published as CN120226361A (active, Pending)
- 2023-09-21: WO application PCT/KR2023/014377, published as WO2024063559A1 (not active, Ceased)
- 2023-09-21: EP application EP23868611.7A, published as EP4593391A1 (active, Pending)
Also Published As
| Publication number | Publication date |
|---|---|
| WO2024063559A1 (ko) | 2024-03-28 |
| CN120226361A (zh) | 2025-06-27 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11838519B2 (en) | | Image encoding/decoding method and apparatus for signaling image feature information, and method for transmitting bitstream |
| EP4429241A1 (de) | | Feature encoding/decoding method and apparatus, and recording medium storing a bitstream |
| US12506888B2 (en) | | Image encoding/decoding method, device, and computer-readable recording medium for signaling purpose of VCM bitstream |
| EP4391540A1 (de) | | Feature encoding/decoding method and apparatus based on inter-channel correlation, and recording medium storing a bitstream |
| EP4593391A1 (de) | | Image encoding/decoding method and apparatus based on high-level syntax defining a profile, and recording medium storing a bitstream |
| US20250234005A1 (en) | | Feature encoding/decoding method, device, recording medium storing bitstream, and method for transmitting bitstream |
| US20250080746A1 (en) | | Feature encoding/decoding method and device, and recording medium in which bitstream is stored |
| US20250008110A1 (en) | | Feature encoding/decoding method and device, and recording medium storing bitstream |
| US20240406372A1 (en) | | Feature encoding/decoding method and device based on inter-channel reference of encoding structure, recording medium in which bitstream is stored, and bitstream transmission method |
| EP4492783A1 (de) | | Feature encoding/decoding method and apparatus, and recording medium storing a bitstream |
| EP4429240A1 (de) | | Feature encoding/decoding method and apparatus, and recording medium storing a bitstream |
| EP4604525A1 (de) | | Image encoding/decoding method and apparatus using image segmentation, and recording medium storing a bitstream |
| US12593026B2 (en) | | Feature encoding/decoding method and device, recording medium on which bitstream is stored, and method for transmitting bitstream |
| US20240414332A1 (en) | | Feature encoding/decoding method and apparatus, and recording medium storing bitstream |
| US12556702B2 (en) | | Feature encoding/decoding method and device, and recording medium storing bitstream |
| US20250203064A1 (en) | | Feature encoding/decoding method and apparatus, and recording medium in which bitstream is stored |
| US20250080717A1 (en) | | Feature encoding/decoding method and device, recording medium on which bitstream is stored, and method for transmitting bitstream |
| EP4604555A1 (de) | | Image data processing method and apparatus, recording medium storing a bitstream, and bitstream transmission method |
| US12593045B2 (en) | | Entropy coding-based feature encoding/decoding method and device, recording medium having bitstream stored therein, and method for transmitting bitstream |
| EP4589949A1 (de) | | Feature encoding/decoding method and apparatus based on prediction model training, and recording medium storing a bitstream |
| US20240414342A1 (en) | | Entropy coding-based feature encoding/decoding method and device, recording medium having bitstream stored therein, and method for transmitting bitstream |
| EP4686200A1 (de) | | Feature encoding/decoding method and apparatus, recording medium storing a bitstream, and method for transmitting a bitstream |
| EP4697725A1 (de) | | Encoding/decoding method and apparatus, and recording medium storing bitstreams |
| CN121241562A (zh) | | Encoding/decoding method and apparatus based on video optimization according to video usage, and method for transmitting bitstream |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | STAA | Information on the status of an EP patent application or granted EP patent | Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
| | PUAI | Public reference made under Article 153(3) EPC to a published international application that has entered the European phase | Free format text: ORIGINAL CODE: 0009012 |
| | STAA | Information on the status of an EP patent application or granted EP patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
| 2025-03-17 | 17P | Request for examination filed | Effective date: 20250317 |
| | AK | Designated contracting states | Kind code of ref document: A1. Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC ME MK MT NL NO PL PT RO RS SE SI SK SM TR |
| | DAV | Request for validation of the European patent (deleted) | |
| | DAX | Request for extension of the European patent (deleted) | |