WO2014031734A1 - Method and apparatus for efficient signaling of weighted prediction in advanced coding schemes - Google Patents
Method and apparatus for efficient signaling of weighted prediction in advanced coding schemes
- Publication number: WO2014031734A1 (PCT/US2013/055968)
- Authority: WIPO (PCT)
- Prior art keywords: slice, weighted prediction, parameter, type, slices
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/174—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a slice, e.g. a line of blocks or a group of blocks
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
Definitions
- the present invention relates to systems and methods for encoding data and in particular to a system and method for generating and processing slice headers with high efficiency video-coded data.
- the High Efficiency Video Coding (“HEVC”) coding standard (also called H.265) is the most recent coding standard promulgated by the ISO/IEC MPEG standardization organizations.
- the coding standards preceding HEVC include the H.262/MPEG-2 and the subsequent H.264/MPEG-4 Advanced Video Coding ("AVC") standard.
- H.264/MPEG-4 has substantially replaced H.262/MPEG-2 in many applications including high definition television.
- HEVC supports resolutions higher than "high definition,” even in stereo or multi-view embodiments, and is more suitable for mobile devices such as tablet personal computers.
- bitstream structure and syntax of HEVC compliant data are standardized, such that every decoder conforming to the standard will produce the same output when provided with the same input.
- Some of the features incorporated into the HEVC standard include the definition and processing of a slice, one or more of which may together compose one of the pictures in a video sequence.
- a video sequence comprises a plurality of pictures, and each picture may comprise one or more slices.
- Slices include non-dependent slices and dependent slices.
- a non-dependent slice (hereinafter simply referred to as a slice) is a data structure that can be decoded independently from other slices of the same picture in terms of entropy encoding, signal prediction, and residual signal construction.
- a "dependent slice” is a structure that permits information about the slice (such as those related with tiles within the slice or wavefront entries) to be carried to the network layer, thus making that data available to a system to more quickly process fragmented slices.
- Dependent slices are mostly useful for low-delay encoding.
- HEVC and legacy coding standards define a parameter set structure that offers improved flexibility for operation over a wide variety of applications and network environments and improved robustness to data losses.
- Parameter sets contain information that can be shared for decoding of different portions of the encoded video.
- the parameter set structure provides a secure mechanism for conveying data that is essential to the decoding process.
- H.264 defined both sequence parameter sets ("SPSs") that describe parameters for decoding a sequence of pictures and a picture parameter set (“PPS”) that describes parameters for decoding a picture of the sequence of pictures.
- HEVC introduces a new parameter set, the video parameter set ("VPS").
- the encoding and decoding of slices is performed according to information included in a slice header.
- the slice header includes syntax and logic for reading flags and data that are used in decoding the slice.
- HEVC supports both temporal and spatial encoding of picture slices.
- HEVC defines slices to include I-slices, which are spatially encoded, but not temporally encoded with reference to another slice. I-slices are alternatively described as "intra" slice encoded.
- HEVC also defines slices to include P (predictive) slices, which are spatially encoded and temporally encoded with reference to another slice. P-slices are alternatively described as "inter” slice encoded.
- HEVC also describes slices to include bi- predictive ("B")-slices. B-slices are spatially encoded and temporally encoded with reference to two or more other slices. Further, HEVC consolidates the notion of P and B slices into general B slices that can be used as reference slices.
- the PPS includes two syntax elements, weighted_pred_flag and weighted_bipred_flag.
- a weighted_pred_flag value of 0 specifies that weighted prediction shall not be applied to P slices, whereas a weighted_pred_flag value of 1 specifies that weighted prediction shall be applied to P slices.
- a weighted_bipred_flag value of 0 specifies that the default weighted prediction is applied to B slices, while a weighted_bipred_flag value of 1 specifies that weighted prediction is applied to B slices.
- the weighted prediction flags are coded in the PPS, while the specific weighted prediction parameters are coded in the slice header.
- the flags controlling weighted prediction and the weighted prediction parameters therefore sit on different hierarchical levels of coding (picture versus slice). This can create logical difficulties that unnecessarily make the slice header or PPS logic more complex or redundant, since not all slices within a picture require weighted prediction. In some scenarios, it may also waste bits. For example, if the weighted prediction enabling flag is coded as 1 in the PPS for a picture, then the weighted prediction parameters have to be coded in the slice header for each slice of the picture. This is even true for I slices, which do not perform weighted prediction.
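- as a concrete illustration of this mismatch, the following C-style sketch mirrors the baseline behavior as the passage characterizes it; the Pps structure and parse_pred_weight_table() are hypothetical placeholders, not the standard's actual syntax:

```c
/* Sketch of the baseline signaling: the enabling flags live in the PPS,
 * but the weight parameters are coded in every slice header of the
 * picture, even for slices that cannot use weighted prediction. */
typedef struct {
    int weighted_pred_flag;    /* PPS flag: weighted prediction, P slices */
    int weighted_bipred_flag;  /* PPS flag: weighted prediction, B slices */
} Pps;

void parse_pred_weight_table(void);  /* placeholder: reads weights/offsets */

void parse_slice_header_baseline(const Pps *pps)
{
    /* per the passage, once the PPS-level flag is 1, the weighted
     * prediction parameters are coded for each slice of the picture,
     * including I slices that never perform weighted prediction */
    if (pps->weighted_pred_flag || pps->weighted_bipred_flag)
        parse_pred_weight_table();
}
```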
- any given video stream typically includes thousands of pictures, and each picture may contain one or more slices, the syntax and logic used in the header can have a significant impact on the processing load performed to encode and later decode the video stream.
- this document discloses a method usable in a processing system for decoding a sequence comprising a plurality of pictures, each of the plurality of pictures partitionable into one or more slices, each of the pictures processed at least in part according to a picture parameter set, and each of the slices processed at least in part according to a slice header.
- the method comprises determining if a slice of the one or more slices is an inter-predicted slice according to slice-type data, and if the slice is determined to be an inter-predicted slice, determining if a first parameter is in the slice header, the first parameter associated with a value signaling enablement of a state of weighted prediction of image data associated with the slice. If the first parameter is in the slice header, then the first parameter is read and used to perform weighted prediction of the image data according to the read first parameter.
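- a minimal C-style sketch of this method follows, under the assumptions stated in the passage; read_flag(), weighted_prediction(), and default_prediction() are hypothetical placeholders:

```c
/* Sketch of the disclosed decoding method: the enabling parameter is
 * read from the slice header itself, and only for inter-predicted
 * slices, so I slices carry no weighted prediction overhead. */
typedef enum { SLICE_B, SLICE_P, SLICE_I } SliceType;

int  read_flag(void);            /* placeholder bitstream reader */
void weighted_prediction(void);  /* placeholder prediction stages */
void default_prediction(void);

void decode_slice(SliceType slice_type)
{
    if (slice_type == SLICE_P || slice_type == SLICE_B) {
        /* first parameter: signals enablement of weighted prediction */
        int wp_enabled = read_flag();
        if (wp_enabled)
            weighted_prediction();  /* per the read parameter */
        else
            default_prediction();
    } else {
        default_prediction();       /* I slice: no weighted prediction */
    }
}
```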
- Figure 1 is a diagram depicting an exemplary embodiment of a video coding/decoding system that can be used for transmission or storage and retrieval of audio or video information
- Figure 2A is a diagram of one embodiment of an encoding/decoding (“codec”) system in which encoded audio/visual (“AV”) information is transmitted to and received at another location;
- Figure 2B is a diagram depicting an exemplary embodiment of a codec system in which the encoded information is stored and later retrieved for presentation, hereinafter referred to as a codec storage system;
- Figure 2C is a diagram depicting an exemplary content-distribution system comprising an encoder and a decoder that can be used to transmit and receive HEVC data;
- Figure 3 is a block diagram illustrating one embodiment of a source encoder
- Figure 4 is a diagram depicting a picture of AV information, such as one of the pictures in the picture sequence;
- Figure 5 is a diagram showing an exemplary partition of a coding-tree block into coding units
- Figure 6 is a diagram illustrating a representative quadtree and data parameters for the coding-tree block partitioning shown in Figure 5;
- Figure 7 is a diagram illustrating the partition of a coding unit into one or more prediction units
- Figure 8 is a diagram showing a coding unit partitioned into four prediction units and an associated set of transform units
- FIG. 9 is a diagram showing a Residual QuadTree ("RQT") for the transform units associated with the coding unit in the example of Figure 8;
- Figure 10 is a diagram illustrating spatial prediction of prediction units;
- Figure 11 is a diagram illustrating temporal prediction;
- Figure 12 is a diagram illustrating the use of motion vector predictors
- Figure 13 is an example of the use of the reference picture lists
- Figure 14 is a diagram illustrating processes performed by the encoder according to the aforementioned standard
- Figure 15 depicts the use of the collocated_from_l0_flag by the decoder according to the emerging HEVC standard
- Figures 16A and 16B are diagrams presenting a baseline PPS syntax
- Figures 17A through 17C are diagrams presenting baseline slice header logic and syntax
- Figures 18A and 18B are diagrams illustrating one embodiment of an improved PPS syntax
- Figures 19A through 19C are syntax diagrams illustrating one embodiment of an improved slice header syntax for use with the improved PPS syntax
- Figures 20A and 20B are diagrams illustrating exemplary operations that can be performed in accordance with the slice header shown in Figures 19A through 19C;
- Figure 21 illustrates an exemplary processing system that could be used to implement embodiments of the invention.
- FIG. 1 is a diagram depicting an exemplary embodiment of a codec system 100 that can be used for transmission or storage and retrieval of audio or video information.
- the codec system 100 comprises an encoding system 104, which accepts AV information 102 and processes the AV information 102 to generate encoded (compressed) AV information 106.
- a decoding system 112 processes the encoded AV information 106 to produce recovered AV information 114. Since the encoding and decoding processes are not lossless, the recovered AV information 114 is not identical to the initial AV information 102, but with judicious selection of the encoding processes and parameters, the differences between the recovered AV information 114 and the unprocessed AV information 102 are acceptable to human perception.
- the encoded AV information 106 is typically transmitted or stored and retrieved before decoding and presenting, as performed by "transception" (transmission and reception) or storage/retrieval system 108. Transception losses may be significant, but storage/retrieval losses are typically minimal or non-existent; hence, the transcepted AV information 110 provided to the decoding system 112 is typically the same as or substantially the same as the encoded AV information 106.
- FIG. 2A is a diagram of one embodiment of a codec system 200A in which the encoded AV information 106 is transmitted to and received at another location.
- a transmission segment 230 converts input AV information 102 into a signal appropriate for transmission and transmits the converted signal over the transmission channel 212 to the reception segment 232.
- the reception segment 232 receives the transmitted signal and converts the received signal into the recovered AV information 114 for presentation.
- the recovered AV information 114 may be of lower quality than the AV information 102 that was provided to the transmission segment 230.
- error-correcting systems may be included to reduce or eliminate such errors.
- the encoded AV information 106 may be forward-error correction ("FEC") encoded by adding redundant information, and such redundant information can be used to identify and eliminate errors in the reception segment 232.
- the transmission segment 230 comprises one or more source encoders 202 to encode multiple sources of AV information 102.
- the source encoder 202 encodes the AV information 102 primarily for purposes of compression to produce the encoded AV information 106 and may include, for example, a processor and related memory storing instructions implementing a codec such as MPEG-1, MPEG-2, MPEG-4 AVC/H.264, HEVC, or a similar codec, as described further below.
- the codec system 200A may also include optional elements indicated by the dashed lines in Figure 2A. These optional elements include a video multiplex encoder 204, an encoding controller 208, and a video demultiplexing decoder 218.
- the optional video multiplex encoder 204 multiplexes encoded AV information 106 from an associated plurality of source encoders 202 according to one or more parameters supplied by the optional encoding controller 208. Such multiplexing is typically accomplished in the time domain and is data-packet based.
- the video multiplex encoder 204 comprises a statistical multiplexer, which combines the encoded AV information 106 from a plurality of source encoders 202 so as to minimize the bandwidth required for transmission. This is possible because the instantaneous bit rate of the coded AV information 106 from each source encoder 202 can vary greatly with time according to the content of the AV information 102. For example, scenes having a great deal of detail and motion (e.g., sporting events) are typically encoded at higher bitrates than scenes with little motion or detail (e.g., portrait dialog).
- each source encoder 202 may produce information with a high instantaneous bitrate while another source encoder 202 produces information with a low instantaneous bit rate, and since the encoding controller 208 can command the source encoders 202 to encode the AV information 106 according to certain performance parameters that affect the instantaneous bit rate, the signals from each of the source encoders 202 (each having a temporally varying instantaneous bit rate) can be combined together in an optimal way to minimize the instantaneous bit rate of the multiplexed stream 205.
- the source encoder 202 and the video multiplex coder 204 may optionally be controlled by a coding controller 208 to minimize the instantaneous bit rate of the combined video signal. In one embodiment, this is accomplished using information from a transmission buffer 206 which temporarily stores the coded video signal and can indicate the fullness of the buffer 206. This allows the coding performed at the source encoder 202 or at the video multiplex coder 204 to be a function of the storage remaining in the transmission buffer 206.
- the transmission segment 230 also may comprise a transmission encoder 210 which further encodes the video signal for transmission to the reception segment 232.
- Transmission encoding may include, for example, the aforementioned FEC coding or coding into a multiplexing scheme for the transmission medium of choice. For example, if the transmission is by satellite or terrestrial transmitters, then the transmission encoder 210 may encode the signal into a signal constellation before transmission via quadrature amplitude modulation or a similar modulation technique. Also, if the encoded video signal is to be streamed via an Internet protocol device and the Internet, then the transmission encoder 210 encodes the signal according to the appropriate protocol. Further, if the encoded signal is to be transmitted via mobile telephony, then the appropriate coding protocol is used, as further described below.
- the reception segment 232 comprises a transmission decoder 214 to receive the signal that was coded by the transmission encoder 210, using a decoding scheme complementary to the coding scheme used in the transmission encoder 210.
- the decoded received signal may be temporarily stored by an optional reception buffer 216, and if the received signal comprises multiple video signals, then the received signal is multiplex-decoded by the video multiplex decoder 218 to extract the video signal of interest from the video signals multiplexed by the video multiplex encoder 204.
- the video signal of interest is decoded by source decoder 220 using a decoding scheme or codec complementary to the codec used by the source encoder 202 to encode the AV information 102.
- the transmitted data comprise a packetized video stream transmitted from a server (representing the transmitting segment 230) to a client (representing the receiving segment 232).
- the transmission encoder 210 may packetize the data and embed Network Abstraction Layer ("NAL") units in network packets.
- NAL units define a data container that has header and coded elements and may correspond to a video frame or other slice of video data.
- the compressed data to be transmitted may be packetized and transmitted via transmission channel 212, which may include a Wide Area Network or a Local Area Network.
- a network may comprise, for example, a wireless network such as WiFi, an Ethernet network, an Internet network, or a mixed network composed of several different networks.
- Such communication may be effected via a communication protocol, for example Real-time Transport Protocol, User Datagram Protocol, or any other type of communication protocol.
- Different packetization methods may be used for each NAL unit of the bitstream. In one case, one NAL unit size is smaller than the maximum transport unit size corresponding to the largest packet size that can be transmitted over the network without being fragmented. In this case, the NAL unit is embedded into a single network packet.
- in another case, multiple entire NAL units may be included in a single network packet.
- one NAL unit may be too large to be transmitted in a single network packet and is thus split into several fragmented NAL units, with each fragmented NAL unit being transmitted in an individual network packet. Fragmented NAL units are typically sent consecutively for decoding purposes.
- the reception segment 232 receives the packetized data and reconstitutes the NAL units from the network packets. For fragmented NAL units, the client concatenates the data from the fragmented NAL units in order to reconstruct the original NAL unit. The client 232 decodes the received and reconstructed data stream and reproduces the video images on a display device and the audio data through a loudspeaker.
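- the fragmentation rule above can be summarized with a short C sketch; the MTU value is an assumption for illustration, not a figure from the document:

```c
/* Number of network packets needed for one NAL unit under the
 * packetization described above: one packet when the NAL unit fits,
 * otherwise one packet per fragment. */
#include <stddef.h>

#define MTU_SIZE 1400  /* assumed maximum payload per network packet */

size_t packets_needed(size_t nal_size)
{
    if (nal_size <= MTU_SIZE)
        return 1;                                /* single NAL unit packet */
    return (nal_size + MTU_SIZE - 1) / MTU_SIZE; /* fragmented NAL units */
}
```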
- FIG. 2B is a diagram depicting an exemplary embodiment of a codec system in which the encoded information is stored and later retrieved for presentation, hereinafter referred to as codec storage system 200B.
- This embodiment may be used, for example, to locally store information in a digital video recorder, a flash drive, hard drive, or similar device.
- the AV information 102 is source-encoded by source encoder 202 and optionally buffered by storage buffer 234 before storage in a storage device 236.
- the storage device 236 may store the video signal temporarily or for an extended period of time and may comprise a hard drive, flash drive, random-access memory (“RAM”), or read-only memory (“ROM").
- the stored AV information is then retrieved, optionally buffered by retrieve buffer 238, and decoded by the source decoder 220.
- FIG. 2C is another diagram depicting an exemplary content-distribution system 200C comprising a coding system 240 and a decoding system 258 that can be used to transmit and receive HEVC data.
- the coding system 240 can comprise an input interface 256, a controller 241, a counter 242, a frame memory 243, an encoding unit 244, a transmitter buffer 267, and an output interface 257.
- the decoding system 258 can comprise a receiver buffer 259, a decoding unit 260, a frame memory 261, and a controller 267.
- the coding system 240 and the decoding system 258 can be coupled with each other via a transmission path which can carry a compressed bit stream.
- the controller 241 of the coding system 240 can control the amount of data to be transmitted on the basis of the capacity of the transmitter buffer 267 or receiver buffer 259 and can include other parameters such as the amount of data per unit of time.
- the controller 241 can control the encoding unit 244 to prevent the occurrence of a failure of the decoding system 258.
- the controller 241 can be a processor or can include, by way of a non-limiting example, a microcomputer having a processor, RAM, and ROM.
- Source pictures 246 supplied from a content provider can include a video sequence of frames including source pictures in a video sequence. The source pictures 246 can be uncompressed or compressed. If the source pictures 246 are uncompressed, then the coding system 240 can have an encoding function. If the source pictures 246 are compressed, then the coding system 240 can have a transcoding function. Coding units can be derived from the source pictures utilizing the controller 241.
- the frame memory 243 can have a first area that can be used for storing the incoming frames from the source pictures 246 and a second area that can be used for reading out the frames and outputting them to the encoding unit 244.
- the controller 241 can output an area switching control signal 249 to the frame memory 243.
- the area switching control signal 249 can indicate whether the first area or the second area is to be utilized.
- the controller 241 can output an encoding control signal 250 to the encoding unit 244.
- the encoding control signal 250 can cause the encoding unit 244 to start an encoding operation, such as preparing the Coding Units based on a source picture.
- in response to the encoding control signal 250 from the controller 241, the encoding unit 244 can begin to read out the prepared Coding Units to a high-efficiency encoding process, such as a prediction coding process or a transform coding process, which processes the prepared Coding Units, generating video compression data based on the source pictures associated with the Coding Units.
- the encoding unit 244 can package the generated video compression data in a packetized elementary stream including video packets.
- the encoding unit 244 can map the video packets into an encoded video signal 122 using control information and a program time stamp, and the encoded video signal 122 can be transmitted to the transmitter buffer 267.
- the encoded video signal 122 can be stored in the transmitter buffer 267.
- the information amount counter 242 can be incremented to indicate the total amount of data in the transmitter buffer 267. As data are retrieved and removed from the buffer, the counter 242 can be decremented to reflect the amount of data in the transmitter buffer 267.
- the occupied area information signal 253 can be transmitted to the counter 242 to indicate whether data from the encoding unit 244 has been added to or removed from the transmitter buffer 267 so the counter 242 can be incremented or decremented.
- the controller 241 can control the production of video packets produced by the encoding unit 244 on the basis of the occupied area information 253, which can be communicated in order to anticipate, avoid, prevent, or detect an overflow or underflow from taking place in the transmitter buffer 267.
- the information amount counter 242 can be reset in response to a preset signal 254 generated by the controller 241. After the information amount counter 242 is reset, it can count data output by the encoding unit 244 and obtain the amount of video compression data or video packets which have been generated. The information amount counter 242 can supply the controller 241 with an information amount signal 255 representative of the obtained amount of information. The controller 241 can control the encoding unit 244 so that there is no overflow at the transmitter buffer 267.
- the decoding system 258 can comprise an input interface 266, a receiver buffer 259, a controller 267, a frame memory 261, a decoding unit 260, and an output interface 267.
- the receiver buffer 259 of the decoding system 258 can temporarily store the compressed bit stream including the received video compression data and video packets based on the source pictures from the source pictures 246.
- the decoding system 258 can read the control information and presentation time stamp information associated with video packets in the received data and output a frame number signal 263 which can be supplied to the controller 267.
- the controller 267 can supervise the counted number of frames at a predetermined interval. By way of a non-limiting example, the controller 267 can supervise the counted number of frames each time the decoding unit 260 completes a decoding operation.
- the controller 267 can output a decoding start signal 264 to the decoding unit 260.
- the controller 267 can wait for the occurrence of a situation in which the counted number of frames becomes equal to the predetermined amount.
- the controller 267 can output the decoding start signal 264 when the situation occurs.
- the controller 267 can output the decoding start signal 264 when the frame number signal 263 indicates that the receiver buffer 259 is at the predetermined capacity.
- the encoded video packets and video compression data can be decoded in a monotonic order (i.e., increasing or decreasing) based on presentation time stamps associated with the encoded video packets.
- the decoding unit 260 can decode data amounting to one picture associated with a frame and compressed video data associated with the picture associated with video packets from the receiver buffer 259.
- the decoding unit 260 can write a decoded video signal 162 into the frame memory 261.
- the frame memory 261 can have a first area into which the decoded video signal is written and a second area used for reading out decoded pictures 262 to the output interface 267.
- the coding system 240 can be incorporated or otherwise associated with a transcoder or an encoding apparatus at a headend, and the decoding system 258 can be incorporated or otherwise associated with a downstream device, such as a mobile device, a set-top box, or a transcoder.
- the encoders 202 employ compression algorithms to generate bit streams or files of smaller size than the original video sequences in the AV information 102. Such compression is made possible by reducing spatial and temporal redundancies in the original sequences.
- Figure 3 is a block diagram illustrating one embodiment of the source encoder 202.
- the source encoder 202 accepts AV information 102 and uses sampler 302 to sample the AV information 102 to produce a sequence 303 of successive digital images or pictures, each having a plurality of pixels.
- a picture can comprise a frame or a field, wherein a frame is a complete image captured during a known time interval, and a field is the set of odd-numbered or even-numbered scanning lines composing a partial image.
- the sampler 302 produces an uncompressed picture sequence 303.
- Each digital picture can be represented by one or more matrices having a plurality of coefficients that represent information about the pixels that together compose the picture.
- the value of a pixel can correspond to luminance or other information.
- each of these components may be separately processed.
- Images can be segmented into "slices,” which may comprise a portion of the picture or may comprise the entire picture.
- these slices are divided into coding entities called macroblocks (generally blocks of size 16x16 pixels), and each macroblock may in turn be divided into different sizes of data blocks, for example 4x4, 4x8, 8x4, 8x8, 8x16, or 16x8.
- HEVC expands and generalizes the notion of the coding entity beyond that of the macroblock.
- HEVC is a block-based hybrid spatial and temporal predictive coding scheme.
- HEVC introduces new coding entities that are not included in the H.264/AVC standard. These coding entities include coding tree units ("CTUs"), coding units ("CUs"), prediction units ("PUs"), and transform units ("TUs"), which are further described below.
- FIG. 4 is a diagram depicting a picture 400 of AV information 102, such as one of the pictures in the picture sequence 303.
- the picture 400 is spatially divided into non-overlapping square blocks known as CTUs 402.
- the CTU 402 is the basic coding unit of HEVC and can be as large as 128x128 pixels.
- the CTUs 402 are typically referenced within the picture 400 in an order analogous to a progressive scan.
- Each CTU 402 may in turn be iteratively divided into smaller variable size coding units described by a "quadtree" decomposition further described below. Coding units are regions formed in the image to which similar encoding parameters are applied and transmitted in the bitstream 314.
- FIG. 5 is a diagram showing an exemplary partition of a CTU 402 into CUs such as CUs 502A and 502B.
- a single CTU 402 can be divided into four CUs 502 such as CU 502A, each a quarter of the size of CTU 402.
- Each such divided CU 502A can be further divided into four smaller CUs 502B of a quarter of the size of the initial CU 502A.
- the partitioning of CTUs into CUs is described by quadtree data parameters (e.g., flags or bits) that are encoded together with the encoded data.
- FIG. 6 is a diagram illustrating a representative quadtree 600 and data parameters for the CTU 402 partitioning shown in Figure 5.
- the quadtree 600 comprises a plurality of nodes, including first node 602A at one hierarchical level and second node 602B at a lower hierarchical level (hereinafter, quadtree nodes may be alternatively referred to as "nodes" 602).
- a "split flag" or bit "1" is assigned if the node 602 is further split into sub-nodes; otherwise, a bit "0" is assigned.
- the CTU 402 partition illustrated in Figure 5 can be represented by the quadtree 600 presented in Figure 6, which includes a split flag of "1" associated with node 602A at the top CU 502 level (indicating there are 4 additional nodes at a lower hierarchical level).
- the illustrated quadtree 600 also includes a split flag of "1" associated with node 602B at the middle CU 502 level to indicate that this CU is also partitioned into four further CUs 502 at the next (bottom) CU level.
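- the following C sketch illustrates this recursive split-flag signaling; read_bit() and the minimum CU size bound are hypothetical placeholders, not the standard's actual parsing routine:

```c
/* Minimal sketch of recursive quadtree parsing as described above: a "1"
 * bit splits a node into four equal sub-CUs, a "0" bit marks a leaf CU. */
int read_bit(void);  /* placeholder bitstream reader */

void parse_cu_quadtree(int x, int y, int size, int min_cu_size)
{
    /* a node already at the minimum CU size cannot split further */
    if (size > min_cu_size && read_bit()) {
        int half = size / 2;
        parse_cu_quadtree(x,        y,        half, min_cu_size);
        parse_cu_quadtree(x + half, y,        half, min_cu_size);
        parse_cu_quadtree(x,        y + half, half, min_cu_size);
        parse_cu_quadtree(x + half, y + half, half, min_cu_size);
    } else {
        /* leaf CU of dimensions size x size at position (x, y) */
    }
}
```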
- the source encoder 202 may restrict the minimum and maximum CU 502 sizes, thus changing the maximum possible depth of CU 502 splitting.
- the encoder 202 generates encoded AV information 106 in the form of a bitstream 314 that includes a first portion having encoded data for the CUs 502 and a second portion that includes overhead known as syntax elements.
- the encoded data include data corresponding to the encoded CUs 502 (i.e., the encoded residuals together with their associated motion vectors, predictors, or related residuals as described further below).
- the second portion includes syntax elements that may represent encoding parameters which do not directly correspond to the encoded data of the blocks.
- the syntax elements may comprise an address and identification of the CU 502 in the image, a quantization parameter, an indication of the elected Inter/Intra coding mode, the quadtree 600, or other information.
- CUs 502 correspond to elementary coding elements and include two related sub-units: PUs and TUs, both of which have a maximum size equal to the size of the corresponding CU 502.
- Figure 7 is a diagram illustrating the partition of a CU 502 into one or more PUs 702.
- a PU 702 corresponds to a partitioned CU 502 and is used to predict pixels values for intra-picture or inter-picture types.
- a final (bottom level) CU 502 of 2Nx2N can possess one of four possible patterns of PUs: 2Nx2N (702A), Nx2N (702B), 2NxN (702C), and NxN (702D), as shown in Figure 7.
- a CU 502 can be either spatially or temporally predictively encoded. If a CU 502 is coded in "intra" mode, each PU 702 of the CU 502 can have its own spatial prediction direction and image information as further described below. Also, in the "intra” mode, the PU 702 of the CU 502 may depend on another CU 502 because it may use a spatial neighbor, which is in another CU. If a CU 502 is coded in "inter” mode, each PU 702 of the CU 502 can have its own motion vectors and associated reference pictures as further described below.
- FIG. 8 is a diagram showing a CU 502 partitioned into four PUs 702 and an associated set of TUs 802.
- TUs 802 are used to represent the elementary units that are spatially transformed by a Discrete Cosine Transform ("DCT").
- the size and location of each block transform TU 802 within a CU 502 is described by an RQT further illustrated below.
- FIG. 9 shows the RQT 900 for the TUs 802 for the CU 502 in the example of Figure 8. Note that the "1" at the first node 902A of the RQT 900 indicates that there are four branches and that the "1" at the second node 902B at the adjacent lower hierarchical level indicates that the indicated node further has four branches.
- the data describing the RQT 900 are also coded and transmitted as overhead in the bitstream 314.
- the coding parameters of a video sequence may be stored in dedicated NAL units called parameter sets.
- Two types of parameter sets may be employed.
- the first parameter set type is known as an SPS and comprises a NAL unit that includes parameters that are unchanged during the entire video sequence.
- SPS handles the coding profile, the size of the video frames, and other parameters.
- the second type of parameter set is known as a PPS and codes different values that may change from one image to another.
- One of the techniques used to compress a bitstream 314 is to forego the storage of pixel values themselves and instead predict the pixel values using a process that can be repeated at the decoder 220 and store or transmit the difference between the predicted pixel values and the actual pixel values (known as the residual). So long as the decoder 220 can compute the same predicted pixel values from the information provided, the actual picture values can be recovered by adding the residuals to the predicted values (a toy arithmetic example follows below). The same technique can be used to compress other data as well.
- Referring back to Figure 3, each PU 702 of the CU 502 being processed is provided to a predictor module 307.
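- the residual idea reduces to simple arithmetic, as the following toy C example shows (the values are arbitrary assumptions):

```c
/* The decoder reproduces the same prediction, so transmitting only the
 * residual suffices to recover the actual pixel value. */
#include <assert.h>

int main(void)
{
    int actual = 143, predicted = 140;   /* assumed example pixel values */
    int residual = actual - predicted;   /* transmitted in the bitstream */
    int recovered = predicted + residual;
    assert(recovered == actual);
    return 0;
}
```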
- the predictor module 307 predicts the values of the PUs 702 based on information in nearby PUs 702 in the same frame (intra-frame prediction, which is performed by the spatial predictor 324) and information of PUs 702 in temporally proximate frames (inter-frame prediction, which is performed by the temporal predictor 330).
- Temporal prediction may not always be based on a collocated PU, since collocated PUs are defined to be located at a reference/non-reference frame having the same x and y coordinates as the current PU 702.
- Encoded units can therefore be categorized to include two types: (1) non-temporally predicted units and (2) temporally predicted units.
- Non-temporally predicted units are predicted using the current frame, including adjacent or nearby PUs 702 within the frame (e.g., intra-frame prediction), and are generated by the spatial predictor 324.
- Temporally predicted units are predicted from one temporal picture (e.g., P-frames) or predicted from at least two reference pictures temporally ahead or behind (i.e., B-frames).
- Figure 10 is a diagram illustrating spatial prediction of PUs 702.
- a picture i may comprise a PU 702 and spatially proximate other PUs 1-4, including nearby PU 702N.
- the spatial predictor 324 predicts the current block (e.g., block C of Figure 10) by means of an "intra-frame" prediction which uses PUs 702 of already encoded other blocks of pixels of the current image.
- the spatial predictor 324 locates a nearby PU (e.g., PU 1, 2, 3, or 4 of Figure 10) that is appropriate for spatial coding and determines an angular prediction direction to that nearby PU.
- 35 directions can be considered, so each PU may have one of 35 directions associated with it, including horizontal, vertical, 45 degree diagonal, 135 degree diagonal, DC, etc.
- the spatial prediction direction of the PU is indicated in the syntax.
- this located nearby PU is used to compute a residual PU 704 (e) as the difference between the pixels of the nearby PU 702N and the current PU 702, using element 305.
- the result is an intra-predicted PU element 1006 that comprises a prediction direction 1002 and the intra-predicted residual PU 1004.
- the prediction direction 1002 may be coded by inferring the direction from spatially proximate PUs and the spatial dependencies of the picture, enabling the coding rate of the intra prediction direction mode to be reduced.
- Figure 11 is a diagram illustrating temporal prediction.
- Temporal prediction considers information from temporally neighboring pictures or frames, such as the previous picture, picture i-1.
- temporal prediction includes single-prediction (P-type), which predicts the PU 702 by referring to one reference area from only one reference picture, and multiple prediction (B-type), which predicts the PU by referring to two reference areas from one or two reference pictures.
- Reference images are images in the video sequence that have already been coded and then reconstructed (by decoding).
- the temporal predictor 330 identifies, in one or several of these reference areas (one for P-type or several for B-type), areas of pixels in a temporally nearby frame so that they can be used as predictors of this current PU 702. In the case where several area predictors are used (B-type), they may be merged to generate one single prediction, as the sketch below illustrates.
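- the default merge of two B-type predictors is a rounded average; explicit weighted prediction, the subject of this document, generalizes it with per-reference weights and an offset. The following C sketch shows both forms; the weight, offset, and shift values are caller-supplied assumptions rather than values taken from the standard:

```c
#include <stdint.h>

static uint8_t clip8(int v) { return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)v; }

/* default bi-prediction: rounded average of the two predictors */
uint8_t bipred_default(uint8_t p0, uint8_t p1)
{
    return (uint8_t)((p0 + p1 + 1) >> 1);
}

/* explicit weighted bi-prediction: per-reference weights plus offset
 * (shift must be >= 1 so the rounding term is well defined) */
uint8_t bipred_weighted(uint8_t p0, uint8_t p1,
                        int w0, int w1, int offset, int shift)
{
    int rounding = 1 << (shift - 1);
    return clip8(((w0 * p0 + w1 * p1 + rounding) >> shift) + offset);
}
```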
- the reference area 1102 is identified in the reference frame by a motion vector ("MV") 1104 that defines the displacement between the current PU 702 in the current frame (picture i) and the reference area 1102 ("refIdx") in the reference frame (picture i-1).
- a PU in a B-picture may have up to two MVs. Both MV and refIdx information are included in the syntax of the HEVC bitstream.
- a difference between the pixel values of the reference area 1102 and the current PU 702 may be computed by element 305 as selected by switch 306. This difference is referred to as the residual of the inter-predicted PU 1106.
- the current PU 1006 is composed of one MV 1104 and a residual 1106.
- one technique for compressing data is to generate predicted values for the data using means repeatable by the decoder 220, computing the difference between the predicted and actual values of the data (the residual) and transmitting the residual for decoding. So long as the decoder 220 can reproduce the predicted values, the residual values can be used to determine the actual values.
- This technique can be applied to the MVs 1104 used in temporal prediction by generating a prediction of the MV 1104, computing a difference between the actual MV 1104 and the predicted MV 1104 (a residual), and transmitting the MV residual in the bitstream 314. So long as the decoder 220 can reproduce the predicted MV 1104, the actual MV 1104 can be computed from the residual.
- HEVC computes a predicted MV for each PU 702 using the spatial correlation of movement between nearby PUs 702.
- FIG. 12 is a diagram illustrating the use of MVPs in HEVC.
- MVPs V1, V2, and V3 are taken from the MVs 1104 of a plurality of blocks 1, 2, and 3 situated nearby or adjacent to the block to encode (C).
- because these vectors refer to MVs of spatially neighboring blocks within the same temporal frame and can be used to predict the MV of the block to encode, they are known as spatial motion predictors.
- Figure 12 also illustrates a temporal MVP VT, which is the MV of the co-located block C in a previously decoded picture (in decoding order) of the sequence (e.g., a block of picture i-1 located at the same spatial position as the block currently being coded (block C of image i)).
- the components of the spatial MVPs V1, V2, and V3 and the temporal MVP VT can be used to generate a median MVP VM.
- the three spatial MVPs may be taken as shown in Figure 12, that is, from the block situated to the left of the block to encode (V1), the block situated above (V3), and from one of the blocks situated at the corners of the block to encode (V2), according to a predetermined rule of availability.
- This MV predictor selection technique is known as Advanced Motion Vector Prediction.
- a plurality of (typically five) MVP candidates having spatial predictors (e.g., V1, V2, and V3) and temporal predictors VT are therefore obtained.
- the set of MVPs may be reduced by eliminating data for duplicated MVs (for example, MVs which have the same value as other MVs may be eliminated from the candidates).
- the encoder 202 may select a "best" MVP from among the candidates, compute an MVP residual as a difference between the selected MVP and the actual MV, and transmit the MVP residual in the bitstream 314. To perform this operation, the actual MV must be stored for later use by the decoder 220 (although it is not transmitted in the bit stream 314). Signaling bits or flags are included in the bitstream 314 to specify which MV residual was computed from the normalized MVP and are later used by the decoder to recover the MV. These bits or flags are further described below.
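- an encoder-side C sketch of this selection follows; the cost measure and candidate count are assumptions (the standard fixes only the candidate derivation, not how an encoder picks among the candidates):

```c
#include <stdlib.h>

typedef struct { int x, y; } Mv;

/* Pick the MVP candidate closest to the actual MV (sum of absolute
 * differences as an assumed cost), and compute the MV residual that
 * would be transmitted along with the candidate index. */
int select_mvp(const Mv *cand, int n, Mv actual, Mv *residual)
{
    int best = 0, best_cost = -1;
    for (int i = 0; i < n; i++) {
        int cost = abs(actual.x - cand[i].x) + abs(actual.y - cand[i].y);
        if (best_cost < 0 || cost < best_cost) {
            best_cost = cost;
            best = i;
        }
    }
    residual->x = actual.x - cand[best].x;
    residual->y = actual.y - cand[best].y;
    return best;  /* index signaled in the bitstream with the residual */
}
```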
- the intra-predicted residuals 1004 and the inter-predicted residuals 1106 obtained from the spatial (intra) or temporal (inter) prediction process are then transformed by transform module 308 into the TUs 802 described above.
- a TU 802 can be further split into smaller TUs using the RQT decomposition described above with respect to Figure 9.
- in HEVC, generally two or three levels of decomposition are used, and the authorized transform sizes are 32x32, 16x16, 8x8, and 4x4.
- the transform is derived according to a DCT or discrete sine transform.
- the residual transformed coefficients are then quantized by quantizer 310.
- Quantization plays a very important role in data compression. In HEVC, quantization converts the high precision transform coefficients into a finite number of possible values. Although the quantization permits a great deal of compression, quantization is a lossy operation, and the loss by quantization cannot be recovered.
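- a toy scalar quantizer in C makes the lossiness concrete; the step size here is an assumption for illustration, not the HEVC QP-to-step mapping:

```c
/* Round a coefficient to the nearest multiple of the step size. */
int quantize(int coeff, int step)
{
    return (coeff >= 0 ? coeff + step / 2 : coeff - step / 2) / step;
}

int dequantize(int level, int step) { return level * step; }

/* e.g., with step = 10, coefficients 48 and 52 both quantize to level 5
 * and both reconstruct to 50; the original differences are lost. */
```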
- the coefficients of the quantized transformed residual are coded by means of an entropy coder 312 and then inserted into the compressed bit stream 314 as a part of the useful data coding the images of the AV information. Coding syntax elements may also be coded using spatial dependencies between syntax elements to increase the coding efficiency.
- HEVC offers context-adaptive binary arithmetic coding ("CABAC"). Other forms of entropy or arithmetic coding may also be used.
- CABAC context-adaptive binary arithmetic coding
- the encoder 202 decodes already encoded PUs 702 using the "decoding" loop 315, which includes elements 316, 318, 320, 322, and 328. This decoding loop 315 reconstructs the PUs and images from the quantized transformed residuals.
- the quantized transform residual coefficients E are provided to dequantizer 316, which applies the inverse operation to that of quantizer 310 to produce dequantized transform coefficients of the residual PU (E') 708.
- the dequantized data 708 are then provided to inverse transformer 318, which applies the inverse of the transform applied by the transform module 308 to generate reconstructed residual coefficients of the PU (e') 710.
- the reconstructed coefficients of the residual PU 710 are then added to the corresponding coefficients of the corresponding predicted PU (x') 702' selected from the intra-predicted PU 1004 and the inter-predicted PU 1106 by selector 306.
- the "intra" predictor (x') is added to this residual in order to recover a reconstructed PU (x'') 712 corresponding to the original PU 702 modified by the losses resulting from the transformation, for example, in this case, the quantization operations.
- the MV may be stored using the MV buffer 329 for use in temporally subsequent frames.
- a flag may be set and transferred in the syntax to indicate that the MV for the currently decoded frame should be used for at least the subsequently coded frame instead of replacing the contents of the MV buffer 329 with the MV for the current frame.
- a loop filter 322 is applied to the reconstructed signal (x'') 712 in order to reduce the effects created by heavy quantization of the residuals obtained and to improve the signal quality.
- the loop filter 322 may comprise, for example, a deblocking filter for smoothing borders between PUs to visually attenuate high frequencies created by the coding process and a linear filter that is applied after all of the PUs for an image have been decoded to minimize the sum of the square difference ("SSD") with the original image.
- the linear filtering process is performed on a frame-by-frame basis and uses several pixels around the pixel to be filtered and also uses spatial dependencies between pixels of the frame.
- the linear filter coefficients may be coded and transmitted in one header of the bitstream, typically a picture or slice header.
- the filtered images, also known as reconstructed images, are then stored as reference images in the reference image buffer 328 in order to allow subsequent "inter" predictions to take place during the compression of the subsequent images of the current video sequence.
- HEVC permits the use of several reference images for estimation and motion compensation of the current image.
- the collocated PU 1102 for a particular slice resides in an associated nearby reference or non-reference picture.
- the collocated PU 1102 for the current PU 702 in picture (i) resides in the associated nearby reference picture (i-1).
- the best "inter" or temporal predictors of the current PU 702 are selected from some of the multiple reference or non-reference images, which may be based on pictures temporally prior to or after the current picture in display order (backward and forward prediction, respectively).
- the index to reference pictures is defined by reference picture lists that are described in the slice syntax.
- forward prediction is defined by list 0 (RefPicList0), and backward prediction is defined by list 1 (RefPicList1).
- list 0 and list 1 can contain multiple reference pictures prior to or later than the current picture in the display order.
- Figure 13 illustrates an example of the use of the reference picture lists.
- the list 0 reference pictures with ascending reference picture indices and starting with index equal to zero are 4, 2, 0, 6, 8, and 10
- the list 1 reference pictures with ascending reference picture indices and starting with index equal to zero are 6, 8, 10, 4, 2, and 0.
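- expressed as data, the Figure 13 example lists might look like this C sketch (the picture numbers are those given above):

```c
/* Reference picture lists from the Figure 13 example: list 0 favors
 * temporally preceding pictures, list 1 favors following ones. */
int RefPicList0[] = { 4, 2, 0, 6, 8, 10 };
int RefPicList1[] = { 6, 8, 10, 4, 2, 0 };
```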
- a slice for which the motion-compensated prediction is restricted to list 0 prediction is called a predictive or P-slice.
- collocated pictures are indicated by using the collocated_ref_idx index in HEVC.
- a slice for which the motion-compensated prediction includes more than one reference picture is a bi-predictive or B-slice.
- the motion-compensated prediction may include reference pictures from list 1 prediction as well as list 0.
- a collocated PU 1102 is disposed in a reference picture specified in either list 0 or list 1.
- a flag (collocated_from_l0_flag) is used to specify whether the collocated partition should be derived from list 0 or list 1 for a particular slice type.
- Each of the reference pictures is also associated with an MV.
- when the slice type is equal to B and collocated_from_l0_flag is equal to 0, the collocated_ref_idx variable specifies the reference picture as the picture that contains the co-located partition as specified by RefPicList1. Otherwise (the slice type is equal to B and collocated_from_l0_flag is equal to 1, or the slice type is equal to P), the collocated_ref_idx variable specifies the reference picture as the picture that contains the collocated partition as specified by RefPicList0.
- Figure 14 is a diagram illustrating processes performed by the encoder 202 according to the aforementioned standard.
- Block 1402 determines whether the current picture is a reference picture for another picture. If not, there is no need to store the reference picture or MV information. If the current picture is a reference picture for another picture, then block 1404 determines whether the "another" picture is a P-type or a B-type picture. If the picture is a P-type picture, then processing is passed to block 1410, which sets the collocated_from_l0_flag to one and stores the reference picture and MV in list 0. If the "another" picture is a B-type picture, then block 1406 nonetheless directs processing to blocks 1408 and 1410 if the desired reference picture is to be stored in list 0.
- Figure 15 depicts the use of the collocated_from_l0_flag by the decoder 220 in decoding according to the previous HEVC standard.
- Block 1502 determines if the current slice type being computed is an intra or I-type slice. Such slices do not use temporally nearby slices in the encoding/decoding process, and hence there is no need to find a temporally nearby reference picture. If the slice type is not I-type, then block 1504 determines whether the slice is a B-slice. If the slice is not B-type, it is a P-type slice, and the reference picture that contains the collocated partition is found in list 0, according to the value of collocated_ref_idx. If the slice is B-type, then the collocated_from_l0_flag determines whether the reference picture is found in list 0 or list 1.
- the collocated picture is therefore defined as the reference picture having the indicated collocated_ref_idx in either list 0 or list 1, depending on the slice type (B-type or P-type) and the value of the collocated_from_l0_flag.
- the first reference picture (the reference picture having index [0] as shown in Figure 13) is selected as the collocated picture.
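- the derivation rule above can be sketched in C as follows (picture entries here are bare identifiers; the enum and the list layout are simplifying assumptions):

```c
typedef enum { SLICE_B, SLICE_P, SLICE_I } SliceType;

/* Choose the collocated picture: list 1 only for B slices with
 * collocated_from_l0_flag equal to 0; otherwise list 0. */
int collocated_picture(SliceType slice_type,
                       int collocated_from_l0_flag,
                       int collocated_ref_idx,
                       const int *RefPicList0, const int *RefPicList1)
{
    if (slice_type == SLICE_B && !collocated_from_l0_flag)
        return RefPicList1[collocated_ref_idx];
    return RefPicList0[collocated_ref_idx];  /* B with flag = 1, or P */
}
```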
- Figures 16A and 16B are diagrams presenting a baseline PPS syntax.
- HEVC implements a technique known as weighted prediction, which is used to encode chroma and luma data used in slices subject to temporal encoding.
- weighted prediction can consider one other reference slice (uni-weighted prediction) or two or more slices (bi-weighted prediction).
- the PPS syntax includes two flags related to weighted prediction operations: weighted _pred_flag 1602 and weighted bipred flag 1604.
- the weighted prediction flag 1602 specifies whether weighted prediction is to be applied to image data of P-slices.
- weighted_bipred_flag 1604 is set to logical 0 to specify that the default weighted prediction is applied to B slices, and set to logical 1 to specify that weighted prediction is applied to B slices.
- Figures 17A through 17C are diagrams presenting a baseline slice header logic and syntax.
- indentation of the text indicates the logical structure of the syntax: if a logical conditional statement (e.g., an "if" statement) is true, then the operations indented from the logical if statement (and enclosed in braces "{ }") are performed; otherwise, processing continues to the next logical statement.
- slice processing syntax differs depending upon whether the slice is the first of a plurality of slices in a picture, or if it is not the first slice in the picture.
- the slice header comprises a first slice in picture flag (first_slice_in_pic_flag) that is read. This is illustrated in syntax 1702. If a RapPicFlag is set, a no_output_of_prior_pics_flag is read, as shown in syntax 1703.
- the HEVC standard includes a plurality of NAL unit types that include a VPS, an SPS which presents parameters for a sequence of pictures, and a PPS which describes parameters for a particular picture. An identifier of the picture parameter set is also read. If the slice is not the first slice in the picture, then the slice address is read. This is illustrated in syntax 1706.
- slices may include non-dependent slices or dependent slices, and the slice header syntax permits the disabling or enabling of the use of dependent slices altogether.
- the next logic uses a previously read flag that signals that dependent slices are enabled and the first_slice_in_pic_flag to determine whether to read the dependent_slice_flag. Note that if the slice is the first slice in the picture, then the dependent_slice_flag for this slice is not read, as the slice cannot be a dependent slice under such circumstances. If the slice is not a dependent slice, then the logic that follows reads the slice type and other parameters that are used in later processing for all slice types (I, P, and B). Further processing shown in syntax 1712 is also performed.
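- this gating of the dependent_slice_flag may be sketched as follows (a minimal sketch, assuming a hypothetical read_bit callable in place of the actual slice-header bitstream reader):

```python
# Sketch of the dependent-slice gating described above.
def parse_dependent_slice_flag(dependent_slices_enabled,
                               first_slice_in_pic_flag, read_bit):
    if dependent_slices_enabled and not first_slice_in_pic_flag:
        return read_bit()   # dependent_slice_flag is present and read
    return 0                # first slice cannot be dependent; flag not read

print(parse_dependent_slice_flag(1, 1, lambda: 1))  # -> 0 (first slice)
print(parse_dependent_slice_flag(1, 0, lambda: 1))  # -> 1
```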
- syntax 1715 includes a conditional statement testing whether the slice_type data read earlier in the slice header indicate that the slice type is either P or B. If the slice type is neither P nor B, then processing is routed to determine whether the reference picture list modification syntax (ref_pic_list_modification) is read or not, as further discussed below with reference to syntax 1719. If the slice type is P or B, then the logic uses an sps_temporal_mvp_enable_flag that was read as a part of the SPS header syntax to determine if the slice may be decoded using temporal MVP. If the flag is set, indicating that temporal MVP may be enabled, then a flag describing whether temporal MVP is permitted for the picture containing the slice is read.
- the num_ref_idx_active_override_flag is read as shown in syntax 1717. This flag indicates whether a parameter (num_ref_idx_l0_active_minus1) describing the maximum reference picture list index for list 0 (P-type) or another parameter (num_ref_idx_l1_active_minus1) describing the maximum reference picture list index for list 1 (B-type) is present in the slice header.
- if the num_ref_idx_active_override_flag is set, then the num_ref_idx_l0_active_minus1 parameter is read, and if the slice is a B-type slice, then the num_ref_idx_l1_active_minus1 parameter is also read, as shown in syntax 1718.
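- a minimal sketch of this override logic, assuming hypothetical read_flag and read_ue callables standing in for the actual flag and Exp-Golomb bitstream readers:

```python
# Sketch of the num_ref_idx override logic described above.
def parse_ref_idx_override(slice_type, read_flag, read_ue, pps_defaults):
    num_l0, num_l1 = pps_defaults  # PPS-level defaults, assumed given
    if read_flag():                # num_ref_idx_active_override_flag
        num_l0 = read_ue()         # num_ref_idx_l0_active_minus1
        if slice_type == 'B':
            num_l1 = read_ue()     # num_ref_idx_l1_active_minus1
    return num_l0, num_l1

bits = iter([1, 2, 0])             # override=1, l0=2, l1=0
flag = lambda: next(bits)
ue = lambda: next(bits)
print(parse_ref_idx_override('B', flag, ue, (0, 0)))  # -> (2, 0)
```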
- HEVC permits the default reference picture lists to be modified in the encoding process.
- since the operations that follow are not within the conditional in syntax 1715 testing whether the slice is a P-type or a B-type, they are performed regardless of slice type. A previously read flag, obtained in one embodiment from the PPS, is then tested; if this flag tests as a logical 1, then a ref_pic_list_modification syntax is executed.
- This information is used by the ref_pic_list_modification syntax to read, based on the slice type, a flag identifying whether the slice was encoded according to an implicit reference picture list (if the flag is a logical zero or not provided) or whether the reference picture list associated with the slice is to be explicitly defined (if the flag is a logical 1), in which case list entries for the reference picture list are read.
- the baseline ref_pic_list_modification syntax includes logical conditional statements based on the slice type, which are simplified in the solutions described below.
- the slice header logic again determines whether the slice under consideration is a B-type slice, and if so, reads an mvd_l1_zero_flag.
- the mvd_l1_zero_flag is not applicable to P-type slices and indicates whether the MV difference coding syntax structure used with B-type slices is parsed or not. This is shown in syntax 1720.
- CABAC (context-adaptive binary arithmetic coding) is a form of entropy encoding that encodes binary symbols using probability models.
- a non-binary-valued symbol (such as a transform unit coefficient or MV) is binarized, or converted into a binary code, prior to arithmetic coding. These stages are repeated for each bit (or "bin") of the binarized symbol.
- a context model is a probability model for one or more bins of the binarized symbol. This model may be chosen from a plurality of available models depending on the statistics of recently coded data symbols.
- the context model stores the probability of each bin being "1" or "0".
- An arithmetic coder then encodes each bin according to the selected probability model.
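- the adaptation of a context model can be illustrated with the deliberately simplified sketch below, which assumes a basic counting estimator rather than HEVC's table-driven probability state machine; it shows only how a per-bin probability estimate evolves as bins are coded:

```python
# Simplified context model: tracks an estimate of P(bin == 1) and adapts
# as bins are observed. This is an illustration, not HEVC's actual tables.
class ContextModel:
    def __init__(self):
        self.ones, self.total = 1, 2       # Laplace-smoothed counts

    def p_one(self):
        return self.ones / self.total      # current estimate of P(bin == 1)

    def update(self, bin_value):
        self.ones += bin_value
        self.total += 1

ctx = ContextModel()
for b in (1, 1, 0, 1):
    print(round(ctx.p_one(), 2))           # probability used to code this bin
    ctx.update(b)
```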
- a context variable is a variable specified for the adaptive binary arithmetic decoding process of a bin by an equation containing recently decoded bins.
- a cabac_init_flag specifies the method for determining the initialization table used in the initialization process for context variables. The value of cabac_init_flag is from 0 to 1, inclusive. When cabac_init_flag is not present, it is inferred to be 0.
- the slice header logic checks a signaling flag indicating whether the cabac_init_flag is present in the slice header and should be read. If the signaling flag indicates that the context variable initialization flag is present in the slice header, then the context variable initialization flag is read.
- the context variable initialization flag specifies the method for determining the initialization table used in the context variable initialization process. This is shown in syntax 1722.
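- a minimal sketch of this present-or-infer pattern (read_bit is a hypothetical one-bit reader, not an element of the actual syntax):

```python
# Read cabac_init_flag when signaled as present; otherwise infer 0.
def parse_cabac_init_flag(cabac_init_present_flag, read_bit):
    if cabac_init_present_flag:
        return read_bit()   # cabac_init_flag read from the slice header
    return 0                # not present: inferred to be 0

print(parse_cabac_init_flag(0, lambda: 1))  # -> 0 (inferred)
print(parse_cabac_init_flag(1, lambda: 1))  # -> 1 (read)
```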
- the slice header logic performs operations related to determining the location of the collocated picture used for temporal MVP.
- the slice header first checks if temporal MVP is enabled on a slice/picture level by checking a flag as shown in syntax 1724. If the flag is not set, then processing is directed to the weighted prediction discussed further below. If the flag is set, then the slice header logic determines if the slice type is B, as shown in syntax 1730. If the slice type is B, then the slice header logic reads the collocated_from_l0_flag, as shown in syntax 1732.
- the logic determines if the slice type is not I-type and either (1) the collocated_from_l0_flag is set and num_ref_idx_l0_active_minus1 is greater than zero or (2) the collocated_from_l0_flag is not set and num_ref_idx_l1_active_minus1 is greater than zero. If either of these conditions tests true, then the collocated_ref_idx is read, as shown in syntax 1734.
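- this presence condition may be sketched as follows (the helper itself is illustrative; the parameter names follow the description above):

```python
# Sketch of the condition governing whether collocated_ref_idx is signaled.
def collocated_ref_idx_present(slice_type, collocated_from_l0_flag,
                               num_l0_minus1, num_l1_minus1):
    if slice_type == 'I':
        return False
    if collocated_from_l0_flag and num_l0_minus1 > 0:
        return True   # more than one list-0 picture: index must be signaled
    if not collocated_from_l0_flag and num_l1_minus1 > 0:
        return True   # more than one list-1 picture: index must be signaled
    return False      # single-entry list: index defaults to 0

print(collocated_ref_idx_present('B', 1, 2, 0))  # -> True
print(collocated_ref_idx_present('P', 1, 0, 0))  # -> False
```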
- HEVC and previous coding standards permit a scaling and offset operation that is applied to prediction signals in a manner known as weighted prediction.
- H.264/MPEG-4 AVC supported both temporally-implicit and explicit weighted prediction.
- in HEVC, only explicit weighted prediction is applied, by scaling and offsetting the prediction with values sent explicitly by the encoder.
- the bit depth of the prediction is then adjusted to the original bit depth of the reference samples.
- the interpolated (and possibly weighted) prediction value is rounded, right-shifted, and clipped to have the original bit depth.
- the interpolated (and possibly weighted) prediction values from two prediction blocks are added first, and then rounded, right-shifted and clipped.
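- a hedged, simplified sketch of this round/shift/clip arithmetic for 8-bit samples follows; the function names are illustrative, and the value of shift (here standing in for the log2 weight denominator, with 6 chosen for the example) is an assumption:

```python
# Simplified weighted-prediction arithmetic for 8-bit samples.
def clip8(x):
    return max(0, min(255, x))

def uni_weighted(pred, w, o, shift=6):
    rnd = 1 << (shift - 1)                        # rounding term
    return clip8(((pred * w + rnd) >> shift) + o)  # scale, round, shift, clip

def bi_weighted(pred0, w0, o0, pred1, w1, o1, shift=6):
    rnd = (o0 + o1 + 1) << shift                  # combined offset and rounding
    return clip8((pred0 * w0 + pred1 * w1 + rnd) >> (shift + 1))

print(uni_weighted(100, 64, 10))            # unit weight (64/2^6) plus offset -> 110
print(bi_weighted(100, 64, 0, 120, 64, 0))  # average of the two predictions -> 110
```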
- the slice header logic uses the slice type and the weighted prediction flags described above to determine if a table for weighted prediction is to be read and applied to the image data of the slice.
- the weighted_pred_flag is set equal to logical 0 to indicate that the weighted prediction is not applied to P slices, and set to logical 1 to indicate that weighted prediction is applied to P slices.
- the weighted_bipred_flag is set to logical 0 to specify that the default weighted prediction is applied to B slices, and set to logical 1 to specify that weighted prediction is applied to B slices.
- the slice header logic includes logic to read and apply the prediction weight table to slice image values if the weighted_pred_flag is set to a logical 1 and the slice type is P or if the weighted_bipred_flag is set to a logical 1 and the slice type is B, as shown in syntax 1736.
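- the baseline gating condition may be sketched as follows (an illustrative helper, not the normative syntax):

```python
# Baseline rule: parse the pred_weight_table only when the PPS-level flag
# matching the slice type is set.
def baseline_reads_pred_weight_table(slice_type,
                                     weighted_pred_flag,
                                     weighted_bipred_flag):
    return bool((weighted_pred_flag and slice_type == 'P') or
                (weighted_bipred_flag and slice_type == 'B'))

print(baseline_reads_pred_weight_table('P', 1, 0))  # -> True
print(baseline_reads_pred_weight_table('B', 1, 0))  # -> False
```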
- a maximum number of MV prediction candidates that are supported in the slice can be specified. In the slice header logic, this is specified as the difference between the number "5" and the maximum number and is referred to as five_minus_max_num_merge_cand. In the next slice header logic, if the slice type is a P type or a B type, then the five_minus_max_num_merge_cand is read, as shown in syntax 1738. Since the maximum number of candidates is typically five, the number read is typically zero.
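- a worked example of this signaling (the helper name is illustrative):

```python
# The header carries 5 - MaxNumMergeCand, so a read value of 0 yields the
# typical maximum of five merge candidates.
def max_num_merge_cand(five_minus_max_num_merge_cand):
    return 5 - five_minus_max_num_merge_cand

print(max_num_merge_cand(0))  # -> 5
print(max_num_merge_cand(2))  # -> 3
```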
- the slice header logic reads a variable describing the initial value for a quantization parameter to be used in coding blocks of data. This initial value is used until modified in the coding unit. This is illustrated by syntax 1740.
- the loop filter 322 of the encoder/decoder may comprise, for example, a deblocking filter for smoothing borders between PUs to visually attenuate high frequencies created by the coding process and a linear filter that is applied after all of the PUs for an image have been decoded to minimize the sum of squared differences (SSD) with the original image.
- the linear filtering process is performed on a frame-by-frame basis and uses several pixels around the pixel to be filtered and also uses spatial dependencies between pixels of the frame.
- the linear filter coefficients may be coded and transmitted in one header of the bitstream, typically a picture or slice header.
- the slice header logic performs deblocking filter logic, as illustrated with respect to syntax 1742.
- the slice header logic determines whether deblocking filter control is enabled by checking the status of a control flag in the PPS. If the flag tests true, then logic checks to determine whether the deblocking filter may be overridden by checking another flag, which indicates that the slice headers for pictures referring to the PPS may carry a deblocking filter override. If this override is enabled, then a flag is read that indicates whether the deblocking filter is to be overridden. Logic then determines whether the deblocking override flag is set, and if so, reads a slice-header-level flag that indicates whether the deblocking filter should be disabled. If this flag is not set, then the slice header logic reads the beta_offset_div2 and tc_offset_div2 data, which specify default deblocking parameter offsets.
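- a hedged sketch of this nested logic follows, with reader callables standing in for slice-header parsing; the flag names follow the description above rather than any one draft's exact syntax:

```python
# Sketch of the deblocking-override parsing flow described above.
def parse_deblocking(pps, read_flag, read_se):
    params = {'disable': False, 'beta_offset_div2': 0, 'tc_offset_div2': 0}
    if not pps['deblocking_filter_control_present_flag']:
        return params
    override = False
    if pps['deblocking_filter_override_enabled_flag']:
        override = bool(read_flag())      # slice-level override flag
    if override:
        params['disable'] = bool(read_flag())
        if not params['disable']:
            params['beta_offset_div2'] = read_se()
            params['tc_offset_div2'] = read_se()
    return params

pps = {'deblocking_filter_control_present_flag': 1,
       'deblocking_filter_override_enabled_flag': 1}
vals = iter([1, 0, -2, 3])                # override=1, disable=0, offsets
print(parse_deblocking(pps, lambda: next(vals), lambda: next(vals)))
```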
- HEVC permits in-loop filtering operations to be performed across left and upper boundaries of the current slice.
- Previous editions of the HEVC slice header included a flag that, when set equal to 1, specifies that these in-loop filtering operations (including the deblocking filter and the sample adaptive offset filter) are performed across the left and upper boundaries of the current slice; otherwise, the in-loop operations are not applied across the left and upper boundaries of the current slice.
- the logic of syntax 1742 reads this flag if the feature is enabled on a sequence level (e.g., the loop_filter_across_slices_enabled_flag is set and any one of the indicated flags is set, as shown in syntax 1742).
- the remaining slice header syntax logic 1744 relates to the use of tiles or slice header extensions.
- FIGS. 18A and 18B are diagrams illustrating one embodiment of an improved PPS syntax 1800. Importantly, the improved PPS syntax no longer includes the weighted_pred_flag 1602 and weighted_bipred_flag 1604 or any other flag related to weighted prediction processing on a picture level. Instead, flags controlling weighted prediction processing are disposed in the slice header, as described further below.
- the baseline slice header design includes logic that reads the weighted prediction table of data based upon syntax implementing a logical test to determine if the weighted_pred_flag is set and the slice in question is a P-type slice or if the weighted_bipred_flag is set and the slice type is B.
- FIGs 19A through 19C are diagrams illustrating one embodiment of the improved slice header syntax.
- the syntax shown in Figure 19A is identical to the syntax discussed with respect to Figure 17A.
- the improved slice header syntax depicted in Figure 19B is modified from the baseline slice header syntax depicted in Figure 17B.
- the baseline slice header syntax shown in Figure 17B includes a conditional statement that determines whether the weighted_pred_flag is set and the slice is a P slice or if the weighted_bipred_flag is set and the slice is a B slice. If either is true, then the predicted weight table is read.
- the modified slice header weighted biprediction syntax 1902 reads the weighted_prediction_flag that signals the application of weighted prediction for both P and B slices and, if this flag is set, reads the prediction weight table and applies the predicted weights.
- the remainder of the improved slice header syntax is unchanged from that of the baseline slice header.
- the improved weighted prediction syntax removes two flags from the PPS syntax (one for P slices and one for B slices) and substitutes a single flag for both P and B slices in the slice header syntax.
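- the contrast between the two schemes may be sketched as follows; both helpers are illustrative simplifications that elide the actual table parsing, and read_flag stands in for the slice-header bitstream reader:

```python
# Baseline: two PPS flags plus the slice type gate the table read.
def baseline(slice_type, pps):
    return bool((pps['weighted_pred_flag'] and slice_type == 'P') or
                (pps['weighted_bipred_flag'] and slice_type == 'B'))

# Improved: one slice-header flag serves both P- and B-slices.
def improved(slice_type, read_flag):
    if slice_type in ('P', 'B'):
        return bool(read_flag())   # single weighted_prediction_flag
    return False

print(baseline('B', {'weighted_pred_flag': 1, 'weighted_bipred_flag': 0}))  # False
print(improved('B', lambda: 1))                                             # True
```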
- because the flags controlling weighted prediction processing and the weighted prediction parameters are now on the same hierarchical level of coding (the slice rather than the picture), logical processing redundancies are reduced, and in most circumstances, bits are conserved.
- Figures 20A and 20B are diagrams further illustrating the slice header syntax logic. Turning first to Figure 20A, block 2002 reads slice-type data. In one embodiment, the slice-type data are read from the slice header using syntax 1710.
- Block 2004 determines whether the slice is an inter-predicted slice such as a P slice or a B slice. This may be implemented using slice header syntax 1715, for example. If the slice is not an inter-predicted slice, weighted prediction is not implemented, and in the exemplary syntax illustrated in Figure 19B, processing bypasses syntax 1716 through 1738 and is passed to syntax 1740. If the slice is an inter-predicted slice, processing is routed to block 2006, which determines whether the slice header includes a parameter signaling enablement of a state of weighted prediction of the related image of the slice. If such a parameter is present in the slice header, then it is read and used to perform weighted prediction according to the read parameter, as shown in blocks 2008 and 2010. If there is no parameter in the slice header, then processing is routed to block 2010, where default weighted prediction processing is performed for B slices but not P slices, as further described below.
- FIG. 20B is a diagram further illustrating the weighted prediction processing.
- Block 2022 determines whether the weighted_pred_enable_flag has a value of logical 1. If so, blocks 2030 and 2032 read the pred_weight_table and apply weighted prediction to the slice. If block 2022 does not determine that the weighted_pred_enable_flag has a value of logical 1 (either because it is set to a logical 0 or because it is not present and is inferred to be logical 0), then processing is passed to block 2024, which determines if the slice is a P-type slice. If the slice is a P-type slice, then weighted prediction is not applied to the image data of the slice, as shown in block 2026.
- if the slice is not a P-slice (and hence is a B-slice, by virtue of block 2004 having determined that the slice is either a P-slice or a B-slice), then default weighted prediction is applied to the slice, as shown in block 2028.
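- the Figure 20B decision flow may be sketched as follows (an illustrative helper; the mode labels are assumptions chosen for the example):

```python
# Sketch of the Figure 20B flow: an absent enable flag is inferred as 0;
# explicit weighted prediction applies when the flag is 1; otherwise
# P-slices get no weighting and B-slices fall back to the default.
def weighted_prediction_mode(slice_type, weighted_pred_enable_flag=None):
    flag = weighted_pred_enable_flag or 0   # absent flag inferred as 0
    if flag == 1:
        return 'explicit'                   # read pred_weight_table and apply
    if slice_type == 'P':
        return 'none'                       # block 2026
    return 'default'                        # B-slice, block 2028

print(weighted_prediction_mode('P'))        # -> 'none'
print(weighted_prediction_mode('B'))        # -> 'default'
print(weighted_prediction_mode('B', 1))     # -> 'explicit'
```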
- the weighted_pred_enable_flag may instead be read from other portions of the slice header.
- the weighted_pred_enable_flag may be read directly after the slice header syntax that determines if the slice is an inter-predicted slice (e.g., slice header syntax 1715).
- the foregoing operations are described with respect to a decoding process, which can take place in either the source decoder 220 or an encoder 202, as a part of the encoding process.
- the encoding process may also be expressed as comprising determining if a slice of the one or more slices is an inter-predicted slice according to slice-type data, and if the slice is an inter-predicted slice, then configuring a first parameter in the slice header associated with the slice to a value signaling enablement of a state of weighted prediction of image data associated with the slice.
- FIG. 21 illustrates an exemplary processing system 2100 that could be used to implement the embodiments of the invention.
- the computer 2102 comprises one or more processors 2104A, 2104B and a memory, such as memory 2106.
- the computer 2102 is operatively coupled to a display 2122, which presents images such as windows to a user on a graphical user interface ("GUI") 2118B.
- the computer 2102 may be coupled to other devices, such as a keyboard 2114, a mouse 2116, a printer, etc.
- the computer 2102 operates under control of an operating system 2108 stored in the memory 2106 and interfaces with the user to accept inputs and commands and to present results through the GUI module 2118A.
- although the GUI module 2118A is depicted as a separate module, the instructions performing the GUI functions can be resident or distributed in the operating system 2108, the computer program 2110, or implemented with special-purpose memory and processors.
- the computer 2102 also implements a compiler 2112 which allows an application program 2110 written in a programming language to be translated into processor-readable code. After completion, the application 2110 accesses and manipulates data stored in the memory 2106 of the computer 2102 using the relationships and logic that were generated using the compiler 2112.
- the computer 2102 also optionally comprises an external communication device such as a modem, satellite link, Ethernet card, or other device for communicating with other computers.
- instructions implementing the operating system 2108, the computer program 2110, and the compiler 2112 are tangibly embodied in a computer-readable medium, e.g., in data-storage device 2120, which could include one or more fixed or removable data storage devices, such as a zip drive, floppy disc drive 2124, hard drive, CD-ROM drive, tape drive, etc.
- the operating system 2108 and the computer program 2110 comprise instructions which, when read and executed by the computer 2102, cause the computer 2102 to perform the steps necessary to implement or use the invention.
- Computer program 2110 or operating instructions may also be tangibly embodied in memory 2106 or data communications devices 2130, thereby making a computer program product or article of manufacture.
- the processing system 2100 may also be embodied in a desktop, laptop, tablet, notebook computer, personal digital assistant, cellphone, smartphone, or any device with suitable processing and memory capability. Further, the processing system 2100 may utilize special purpose hardware to perform some or all of the foregoing functionality. For example the encoding and decoding processes described above may be performed by a special purpose processor and associated memory.
Abstract
The invention concerns a method for signaling weighted prediction processing in advanced coding schemes. The signaling is removed from the picture parameter set hierarchical level and is instead placed in the slice header, and a single flag is used to signal (2006) weighted prediction for both P-slices and B-slices, thereby simplifying operation and increasing bit efficiency.
Applications Claiming Priority (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201261691764P | 2012-08-21 | 2012-08-21 | |
US61/691,764 | 2012-08-21 | ||
US201261691800P | 2012-08-22 | 2012-08-22 | |
US61/691,800 | 2012-08-22 | ||
US201261711211P | 2012-10-09 | 2012-10-09 | |
US61/711,211 | 2012-10-09 | ||
US13/972,017 | 2013-08-21 | ||
US13/972,017 US20140056356A1 (en) | 2012-08-21 | 2013-08-21 | Method and apparatus for efficient signaling of weighted prediction in advanced coding schemes |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014031734A1 (fr) | 2014-02-27 |
Family
ID=50150360
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2013/055968 WO2014031734A1 (fr) | 2012-08-21 | 2013-08-21 | Procédé et appareil de signalisation efficiente de prédiction pondérée dans des schémas de codage avancés |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2014031734A1 (fr) |
- 2013-08-21: PCT/US2013/055968 filed; published as WO2014031734A1 (fr), status: active Application Filing
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011050641A1 (fr) * | 2009-10-28 | 2011-05-05 | Mediatek Singapore Pte. Ltd. | Procédés de codage d'images vidéo et encodeurs et décodeurs d'images vidéo dotés d'une fonction de prédiction pondérée localisée |
Non-Patent Citations (1)
Title |
---|
PHILIPPE BORDES ET AL: "AHG9: Simplification of weighted prediction signaling in PPS", 10th JCT-VC Meeting; 101st MPEG Meeting; 11-7-2012 to 20-7-2012; Stockholm; (Joint Collaborative Team on Video Coding of ISO/IEC JTC1/SC29/WG11 and ITU-T SG.16); URL: http://wftp3.itu.int/av-arch/jctvc-site/, no. JCTVC-J0503, 12 July 2012 (2012-07-12), XP030112865 * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13753989 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
- 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM XXXX DATED 03.07.2015) |
122 | Ep: pct application non-entry in european phase |
Ref document number: 13753989 Country of ref document: EP Kind code of ref document: A1 |