GB2606282A - Video coding and decoding
- Publication number
- GB2606282A (application GB2207449.6)
- Authority
- GB
- United Kingdom
- Prior art keywords
- motion vector
- vector predictor
- index
- list
- merge
- Prior art date
- Legal status
- Granted
Classifications
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals, in particular:
- H04N19/52—Processing of motion vectors by encoding by predictive encoding
- H04N19/13—Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
- H04N19/139—Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
- H04N19/159—Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
- H04N19/184—Adaptive coding characterised by the coding unit, the unit being bits, e.g. of the compressed video stream
- H04N19/54—Motion estimation other than block-based, using feature points or meshes
- H04N19/70—Coding characterised by syntax aspects related to video coding, e.g. related to compression standards
- H04N19/91—Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
Abstract
Encoding or decoding a motion vector predictor index in which a maximum number of motion vector predictor candidates includable in a list of motion vector predictor candidates is determined, and a list of motion vector predictor candidates is generated having the determined maximum number of candidates, including a candidate for subblock collocated temporal prediction (SbTMVP) and a candidate for subblock affine prediction. One of the candidates in the list is selected, and the motion vector predictor index is generated for the selected candidate and encoded using CABAC (context adaptive binary arithmetic coding) encoding, where all bits or bins except for the first bit of the motion vector predictor index are bypass coded, and the first bit of the motion vector predictor index is CABAC coded using a single context. In embodiments the motion vector predictor index is a merge index, indicating a merge vector from a list of candidate merge motion vectors. The SbTMVP candidate may also be referred to as an advanced temporal motion vector predictor or alternative temporal motion vector predictor (ATMVP) candidate.
Description
VIDEO CODING AND DECODING
Field of invention
The present invention relates to video coding and decoding.
Background
Recently, the Joint Video Experts Team (JVET), a collaborative team formed by MPEG and ITU-T Study Group 16's VCEG, commenced work on a new video coding standard referred to as Versatile Video Coding (VVC). The goal of VVC is to provide significant improvements in compression performance over the existing HEVC standard (i.e., typically twice as much as before) and to be completed in 2020. The main target applications and services include - but are not limited to - 360-degree and high-dynamic-range (HDR) videos. In total, JVET evaluated responses from 32 organizations using formal subjective tests conducted by independent test labs. Some proposals demonstrated compression efficiency gains of typically 40% or more when compared to using HEVC. Particular effectiveness was shown on ultra-high definition (UHD) video test material. Thus, we may expect compression efficiency gains well beyond the targeted 50% for the final standard.
The JVET exploration model (JEM) uses all the HEVC tools. A further tool not present in HEVC is the use of an 'affine motion mode' when applying motion compensation. Motion compensation in HEVC is limited to translations, but in reality there are many kinds of motion, e.g. zoom in/out, rotation, perspective motions and other irregular motions. When utilising affine motion mode, a more complex transform is applied to a block to attempt to more accurately predict such forms of motion.
Another tool not present in HEVC is the use of Alternative Temporal Motion Vector Prediction (ATMVP). Alternative temporal motion vector prediction is a particular form of motion compensation: instead of considering only one piece of motion information for the current block from a temporal reference frame, the motion information of each collocated block is considered. This temporal motion vector prediction therefore gives a segmentation of the current block, with related motion information for each sub-block. In the current VTM reference software, ATMVP is signalled as a merge candidate inserted in the list of Merge candidates. When ATMVP is enabled at SPS level, the maximum number of Merge candidates is increased by one, so 6 candidates are considered instead of the 5 used when this mode is disabled.
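By way of illustration, the following sketch shows how the maximum Merge list size might be derived from the SPS-level ATMVP flag. It is not taken from the VTM source; the function name and the default of 5 candidates are assumptions based on the description above.

```python
BASE_MAX_MERGE_CAND = 5  # assumed default when ATMVP is disabled

def max_merge_candidates(atmvp_enabled_in_sps: bool) -> int:
    """Maximum Merge list size: one extra slot when ATMVP is enabled."""
    return BASE_MAX_MERGE_CAND + (1 if atmvp_enabled_in_sps else 0)

assert max_merge_candidates(False) == 5
assert max_merge_candidates(True) == 6
```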
These, and other tools described later, raise problems relating to the coding efficiency and the complexity of coding the Merge index used to signal which Merge candidate is selected from the list of Merge candidates.
Accordingly, a solution to at least one of the aforementioned problems is desirable.

According to a first aspect of the present invention there is provided a method of encoding a motion vector predictor index, comprising: generating a list of motion vector predictor candidates including an ATMVP candidate; selecting one of the motion vector predictor candidates in the list; and generating a motion vector predictor index (Merge index) for the selected motion vector predictor candidate using CABAC coding, one or more bits of the motion vector predictor index being bypass CABAC coded.
In one embodiment, all bits except for a first bit of the motion vector predictor index are bypass CABAC coded.
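As a minimal sketch of this scheme, assume a truncated-unary binarization of the Merge index (as used for the HEVC Merge index) and a toy recording interface in place of a real CABAC engine; ToyCabacEncoder and its methods are hypothetical stand-ins.

```python
class ToyCabacEncoder:
    """Hypothetical stand-in for a CABAC engine: records each bin together
    with its coding mode (regular context mode vs. bypass)."""
    def __init__(self):
        self.trace = []

    def encode_bin(self, bin_value, context):
        self.trace.append((bin_value, f"ctx{context}"))  # regular mode

    def encode_bypass(self, bin_value):
        self.trace.append((bin_value, "bypass"))         # fixed 50/50 mode

def encode_merge_index(enc, merge_idx, max_candidates):
    """Truncated-unary code: one '1' per skipped candidate, then a '0'
    (omitted for the last possible index). Only the first bin is
    context-coded; all remaining bins are bypass coded."""
    for pos in range(max_candidates - 1):
        bin_value = 1 if merge_idx > pos else 0
        if pos == 0:
            enc.encode_bin(bin_value, context=0)  # single CABAC context
        else:
            enc.encode_bypass(bin_value)          # all other bins bypass
        if bin_value == 0:
            break

enc = ToyCabacEncoder()
encode_merge_index(enc, 3, 6)
print(enc.trace)  # [(1, 'ctx0'), (1, 'bypass'), (1, 'bypass'), (0, 'bypass')]
```

Bypass bins need no probability update, so the bins after the first can be decoded with much less logic than regular bins; that is the complexity saving these aspects target.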
According to a second aspect of the present invention there is provided a method of decoding a motion vector predictor index, comprising: generating a list of motion vector predictor candidates including an ATMVP candidate; decoding the motion vector predictor index using CABAC decoding, one or more bits of the motion vector predictor index being bypass CABAC decoded; and using the decoded motion vector predictor index to identify one of the motion vector predictor candidates in the list.
In one embodiment, all bits except for a first bit of the motion vector predictor index are bypass CABAC decoded.
According to a third aspect of the present invention there is provided a device for encoding a motion vector predictor index, comprising: means for generating a list of motion vector predictor candidates including an ATMVP candidate; means for selecting one of the motion vector predictor candidates in the list; and means for generating a motion vector predictor index (Merge index) for the selected motion vector predictor candidate using CABAC coding, one or more bits of the motion vector predictor index being bypass CABAC coded.
According to a fourth aspect of the present invention there is provided a device for decoding a motion vector predictor index, comprising: means for generating a list of motion vector predictor candidates including an ATMVP candidate; means for decoding the motion vector predictor index using CABAC decoding, one or more bits of the motion vector predictor index being bypass CABAC decoded; and means for using the decoded motion vector predictor index to identify one of the motion vector predictor candidates in the list.
According to a fifth aspect of the present invention there is provided a method of encoding a motion vector predictor index, comprising: generating a list of motion vector predictor candidates; selecting one of the motion vector predictor candidates in the list; and generating a motion vector predictor index for the selected motion vector predictor candidate using CABAC coding, two or more bits of the motion vector predictor index sharing the same context.
In one embodiment, all bits of the motion vector predictor index share the same context.

According to a sixth aspect of the present invention there is provided a method of decoding a motion vector predictor index, comprising: generating a list of motion vector predictor candidates; decoding the motion vector predictor index using CABAC decoding, two or more bits of the motion vector predictor index sharing the same context; and using the decoded motion vector predictor index to identify one of the motion vector predictor candidates in the list.

In one embodiment, all bits of the motion vector predictor index share the same context.
According to a seventh aspect of the present invention there is provided a device for encoding a motion vector predictor index, comprising: means for generating a list of motion vector predictor candidates; means for selecting one of the motion vector predictor candidates in the list; and means for generating a motion vector predictor index for the selected motion vector predictor candidate using CABAC coding, two or more bits of the motion vector predictor index sharing the same context.
According to an eighth aspect of the present invention there is provided a device for decoding a motion vector predictor index, comprising: means for generating a list of motion vector predictor candidates; means for decoding the motion vector predictor index using CABAC decoding, two or more bits of the motion vector predictor index sharing the same context; and means for using the decoded motion vector predictor index to identify one of the motion vector predictor candidates in the list.
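Under the same truncated-unary assumption, a sketch of the shared-context variant of the fifth to eighth aspects: every bin is regular CABAC-coded, but all bins are routed through one context. The returned pairs associate each bin value with its context index.

```python
def encode_merge_index_shared_ctx(merge_idx, max_candidates):
    """Return (bin_value, context) pairs: all bins of the truncated-unary
    code share the single context 0."""
    bins = []
    for pos in range(max_candidates - 1):
        bin_value = 1 if merge_idx > pos else 0
        bins.append((bin_value, 0))  # same context index for every bin
        if bin_value == 0:
            break
    return bins

print(encode_merge_index_shared_ctx(2, 6))  # [(1, 0), (1, 0), (0, 0)]
```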
According to a ninth aspect of the present invention there is provided a method of encoding a motion vector predictor index, comprising: generating a list of motion vector predictor candidates; selecting one of the motion vector predictor candidates in the list; and generating a motion vector predictor index for the selected motion vector predictor candidate using CABAC coding, wherein a context variable for at least one bit of the motion vector predictor index of a current block depends on a motion vector predictor index of at least one block neighbouring the current block.
In one embodiment the context variable for at least one bit of the motion vector predictor index depends on the respective motion vector predictor indexes of at least two neighbouring blocks.
In another embodiment the context variable for at least one bit of the motion vector predictor index depends on a motion vector predictor index of a left neighbouring block on the left of the current block and on a motion vector predictor index of an upper neighbouring block above the current block.
In another embodiment the left neighbouring block is A2 and the upper neighbouring block is B3.
In another embodiment the left neighbouring block is A1 and the upper neighbouring block is B1.
In another embodiment the context variable has 3 different possible values.
Another embodiment comprises comparing the motion vector predictor index of at least one neighbouring block with an index value of the motion vector predictor index of the current block and setting said context variable in dependence upon the comparison result.
Another embodiment comprises comparing the motion vector predictor index of at least one neighbouring block with a parameter representing a bit position of the or one said bit in the motion vector predictor index of the current block and setting said context variable in dependence upon the comparison result.
Yet another embodiment comprises: making a first comparison, comparing the motion vector predictor index of a first neighbouring block with a parameter representing a bit position of the or one said bit in the motion vector predictor index of the current block; making a second comparison, comparing the motion vector predictor index of a second neighbouring block with said parameter; and setting said context variable in dependence upon the results of the first and second comparisons.

According to a tenth aspect of the present invention there is provided a method of decoding a motion vector predictor index, comprising: generating a list of motion vector predictor candidates; decoding the motion vector predictor index using CABAC decoding, wherein a context variable for at least one bit of the motion vector predictor index of a current block depends on a motion vector predictor index of at least one block neighbouring the current block; and using the decoded motion vector predictor index to identify one of the motion vector predictor candidates in the list.
In one embodiment the context variable for at least one bit of the motion vector predictor index depends on the respective motion vector predictor indexes of at least two neighbouring blocks.
In another embodiment the context variable for at least one bit of the motion vector predictor index depends on a motion vector predictor index of a left neighbouring block on the left of the current block and on a motion vector predictor index of an upper neighbouring block above the current block.
In another embodiment the left neighbouring block is A2 and the upper neighbouring block is B3.
In another embodiment the left neighbouring block is A1 and the upper neighbouring block is B1.
In another embodiment the context variable has 3 different possible values.
Another embodiment comprises comparing the motion vector predictor index of at least one neighbouring block with an index value of the motion vector predictor index of the current block and setting said context variable in dependence upon the comparison result.
Another embodiment comprises comparing the motion vector predictor index of at least one neighbouring block with a parameter representing a bit position of the or one said bit in the motion vector predictor index of the current block and setting said context variable in dependence upon the comparison result.
Yet another embodiment comprises: making a first comparison, comparing the motion vector predictor index of a first neighbouring block with a parameter representing a bit position of the or one said bit in the motion vector predictor index of the current block; making a second comparison, comparing the motion vector predictor index of a second neighbouring block with said parameter; and setting said context variable in dependence upon the results of the first and second comparisons.

According to an eleventh aspect of the present invention there is provided a device for encoding a motion vector predictor index, comprising: means for generating a list of motion vector predictor candidates; means for selecting one of the motion vector predictor candidates in the list; and means for generating a motion vector predictor index for the selected motion vector predictor candidate using CABAC coding, wherein a context variable for at least one bit of the motion vector predictor index of a current block depends on a motion vector predictor index of at least one block neighbouring the current block.
According to a twelfth aspect of the present invention there is provided a device for decoding a motion vector predictor index, comprising: means for generating a list of motion vector predictor candidates; means for decoding the motion vector predictor index using CABAC decoding, wherein a context variable for at least one bit of the motion vector predictor index of a current block depends on a motion vector predictor index of at least one block neighbouring the current block; and means for using the decoded motion vector predictor index to identify one of the motion vector predictor candidates in the list.
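One plausible reading of the neighbour-based derivation above, with three possible context values, is to count how many neighbouring Merge indices exceed the bin position. The exact comparison rule is an assumption for illustration, and None marks a neighbour with no Merge index.

```python
def merge_idx_context(left_idx, above_idx, bin_pos):
    """Context variable in {0, 1, 2}: number of neighbouring blocks whose
    Merge index is larger than the position of the bin being coded."""
    ctx = 0
    if left_idx is not None and left_idx > bin_pos:
        ctx += 1
    if above_idx is not None and above_idx > bin_pos:
        ctx += 1
    return ctx

print(merge_idx_context(3, None, 0))  # 1: only the left neighbour counts
```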
According to a thirteenth aspect of the present invention there is provided a method of encoding a motion vector predictor index, comprising: generating a list of motion vector predictor candidates; selecting one of the motion vector predictor candidates in the list; and generating a motion vector predictor index for the selected motion vector predictor candidate using CABAC coding, wherein a context variable for at least one bit of the motion vector predictor index of a current block depends on a Skip flag of said current block.
According to a fourteenth aspect of the present invention there is provided a method of encoding a motion vector predictor index, comprising: generating a list of motion vector predictor candidates; selecting one of the motion vector predictor candidates in the list; and generating a motion vector predictor index for the selected motion vector predictor candidate using CABAC coding, wherein a context variable for at least one bit of the motion vector predictor index of a current block depends on another parameter or syntax element of said current block that is available prior to decoding of the motion vector predictor index.
According to a fifteenth aspect of the present invention there is provided a method of encoding a motion vector predictor index, comprising: generating a list of motion vector predictor candidates; selecting one of the motion vector predictor candidates in the list; and generating a motion vector predictor index for the selected motion vector predictor candidate using CABAC coding, wherein a context variable for at least one bit of the motion vector predictor index of a current block depends on another parameter or syntax element of said current block that is an indicator of a complexity of motion in the current block.
According to a sixteenth aspect of the present invention there is provided a method of decoding a motion vector predictor index, comprising: generating a list of motion vector predictor candidates; decoding the motion vector predictor index using CABAC decoding, wherein a context variable for at least one bit of the motion vector predictor index of a current block depends on a Skip flag of said current block; and using the decoded motion vector predictor index to identify one of the motion vector predictor candidates in the list.
According to a seventeenth aspect of the present invention there is provided a method of decoding a motion vector predictor index, comprising: generating a list of motion vector predictor candidates; decoding the motion vector predictor index using CABAC decoding, wherein a context variable for at least one bit of the motion vector predictor index of a current block depends on another parameter or syntax element of said current block that is available prior to decoding of the motion vector predictor index; and using the decoded motion vector predictor index to identify one of the motion vector predictor candidates in the list.
According to an eighteenth aspect of the present invention there is provided a method of decoding a motion vector predictor index, comprising: generating a list of motion vector predictor candidates; decoding the motion vector predictor index using CABAC decoding, wherein a context variable for at least one bit of the motion vector predictor index of a current block depends on another parameter or syntax element of said current block that is an indicator of a complexity of motion in the current block; and using the decoded motion vector predictor index to identify one of the motion vector predictor candidates in the list According to a nineteenth aspect of the present invention there is provided a device for encoding a motion vector predictor index, comprising: means for generating a list of motion vector predictor candidates; means for selecting one of the motion vector predictor candidates in the list; and means for generating a motion vector predictor index for the selected motion vector predictor candidate using CAB AC coding, wherein a context variable for at least one bit of the motion vector predictor index of a current block depends on a Skip flag of said current block According to a twentieth aspect of the present invention there is provided a device for encoding a motion vector predictor index, comprising: means for generating a list of motion vector predictor candidates; means for selecting one of the motion vector predictor candidates in the list; and means for generating a motion vector predictor index for the selected motion vector predictor candidate using CABAC coding, wherein a context variable for at least one bit of the motion vector predictor index of a current block depends on another parameter or syntax element of said current block that is available prior to decoding of the motion vector predictor index.
According to a twenty-first aspect of the present invention there is provided a device for encoding a motion vector predictor index, comprising: means for generating a list of motion vector predictor candidates; means for selecting one of the motion vector predictor candidates in the list; and means for generating a motion vector predictor index for the selected motion vector predictor candidate using CABAC coding, wherein a context variable for at least one bit of the motion vector predictor index of a current block depends on another parameter or syntax element of said current block that is an indicator of a complexity of motion in the current block.

According to a twenty-second aspect of the present invention there is provided a device for decoding a motion vector predictor index, comprising: means for generating a list of motion vector predictor candidates; means for decoding the motion vector predictor index using CABAC decoding, wherein a context variable for at least one bit of the motion vector predictor index of a current block depends on a Skip flag of said current block; and means for using the decoded motion vector predictor index to identify one of the motion vector predictor candidates in the list.
According to a twenty-third aspect of the present invention there is provided a device for decoding a motion vector predictor index, comprising: means for generating a list of motion vector predictor candidates; means for decoding the motion vector predictor index using CABAC decoding, wherein a context variable for at least one bit of the motion vector predictor index of a current block depends on another parameter or syntax element of said current block that is available prior to decoding of the motion vector predictor index; and means for using the decoded motion vector predictor index to identify one of the motion vector predictor candidates in the list.
According to a twenty-fourth aspect of the present invention there is provided a device for decoding a motion vector predictor index, comprising: means for generating a list of motion vector predictor candidates; means for decoding the motion vector predictor index using CABAC decoding, wherein a context variable for at least one bit of the motion vector predictor index of a current block depends on another parameter or syntax element of said current block that is an indicator of a complexity of motion in the current block; and means for using the decoded motion vector predictor index to identify one of the motion vector predictor candidates in the list.
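A sketch of the Skip-flag-dependent context of these aspects: because the Skip flag is decoded before the Merge index, encoder and decoder can make the same choice. The two-way mapping below is a hypothetical example, not taken from the text.

```python
def first_bin_context(skip_flag: bool) -> int:
    """Select one of two contexts for the first bin of the Merge index,
    based on the already-decoded Skip flag of the current block."""
    return 1 if skip_flag else 0

assert first_bin_context(True) == 1 and first_bin_context(False) == 0
```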
According to a twenty-fifth aspect of the present invention there is provided a method of encoding a motion vector predictor index, comprising: generating a list of motion vector predictor candidates; selecting one of the motion vector predictor candidates in the list; and generating a motion vector predictor index for the selected motion vector predictor candidate using CABAC coding, wherein a context variable for at least one bit of the motion vector predictor index of a current block depends on Affine Motion vector predictor candidates, if any, in the list.
In one embodiment the context variable depends on position in said list of a first Affine Motion vector predictor candidate.
According to a twenty-sixth aspect of the present invention there is provided a method of decoding a motion vector predictor index, comprising: generating a list of motion vector predictor candidates; decoding the motion vector predictor index using CABAC decoding wherein a context variable for at least one bit of the motion vector predictor index of a current block depends on Affine Motion vector predictor candidates, if any, in the list; and using the decoded motion vector predictor index to identify one of the motion vector predictor candidates in the list.
In one embodiment the context variable depends on position in said list of a first Affine Motion vector predictor candidate.
According to a twenty-seventh aspect of the present invention there is provided a device for encoding a motion vector predictor index, comprising: means for generating a list of motion vector predictor candidates; means for selecting one of the motion vector predictor candidates in the list; and means for generating a motion vector predictor index for the selected motion vector predictor candidate using CABAC coding, wherein a context variable for at least one bit of the motion vector predictor index of a current block depends on Affine Motion vector predictor candidates, if any, in the list.
According to a twenty-eighth aspect of the present invention there is provided a device for decoding a motion vector predictor index, comprising: means for generating a list of motion vector predictor candidates; means for decoding the motion vector predictor index using CABAC decoding wherein a context variable for at least one bit of the motion vector predictor index of a current block depends on Affine Motion vector predictor candidates, if any, in the list; and means for using the decoded motion vector predictor index to identify one of the motion vector predictor candidates in the list.
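The position-dependent derivation might look as follows; the thresholding rule is illustrative only, not taken from the text, and the dictionary-based candidate representation is an assumption.

```python
def context_from_affine_candidates(candidates, bin_pos):
    """Derive a context from the position in the list of the first affine
    Merge candidate, if any."""
    first_affine = next(
        (i for i, cand in enumerate(candidates) if cand.get("affine")), None)
    if first_affine is None:
        return 0                        # no affine candidate in the list
    return 1 if bin_pos < first_affine else 2

merge_list = [{"affine": False}, {"affine": False}, {"affine": True}]
print(context_from_affine_candidates(merge_list, 1))  # 1: bin before affine
```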
According to a twenty-ninth aspect of the present invention there is provided a method of encoding a motion vector predictor index, comprising: generating a list of motion vector predictor candidates including an Affine Motion vector predictor candidate; selecting one of the motion vector predictor candidates in the list; and generating a motion vector predictor index for the selected motion vector predictor candidate using CABAC coding, wherein a context variable for at least one bit of the motion vector predictor index of a current block depends on an affine flag of the current block and/or of at least one block neighbouring the current block.
According to a thirtieth aspect of the present invention there is provided a method of decoding a motion vector predictor index, comprising: generating a list of motion vector predictor candidates including an Affine Motion vector predictor candidate; decoding the motion vector predictor index using CABAC decoding, wherein a context variable for at least one bit of the motion vector predictor index of a current block depends on an affine flag of the current block and/or of at least one block neighbouring the current block; and using the decoded motion vector predictor index to identify one of the motion vector predictor candidates in the list.
According to a thirty-first aspect of the present invention there is provided a device for encoding a motion vector predictor index, comprising: means for generating a list of motion vector predictor candidates including an Affine Motion vector predictor candidate; means for selecting one of the motion vector predictor candidates in the list; and means for generating a motion vector predictor index for the selected motion vector predictor candidate using CABAC coding, wherein a context variable for at least one bit of the motion vector predictor index of a current block depends on an affine flag of the current block and/or of at least one block neighbouring the current block.
According to a thirty-second aspect of the present invention there is provided a device for decoding a motion vector predictor index, comprising: means for generating a list of motion vector predictor candidates including an Affine Motion vector predictor candidate; means for decoding the motion vector predictor index using CABAC decoding, wherein a context variable for at least one bit of the motion vector predictor index of a current block depends on an affine flag of the current block and/or of at least one block neighbouring the current block; and means for using the decoded motion vector predictor index to identify one of the motion vector predictor candidates in the list.
According to a thirty-third aspect of the present invention there is provided a method of encoding a motion vector predictor index, comprising: generating a list of motion vector predictor candidates; selecting one of the motion vector predictor candidates in the list; and generating a motion vector predictor index for the selected motion vector predictor candidate using CABAC coding, wherein a context variable for at least one bit of the motion vector predictor index of a current block is derived from a context variable of at least one of a Skip flag and an affine flag of the current block.
According to a thirty-fourth aspect of the present invention there is provided a method of decoding a motion vector predictor index, comprising: generating a list of motion vector predictor candidates; decoding the motion vector predictor index using CABAC decoding, wherein a context variable for at least one bit of the motion vector predictor index of a current block is derived from a context variable of at least one of a Skip flag and an affine flag of the current block; and using the decoded motion vector predictor index to identify one of the motion vector predictor candidates in the list.
According to a thirty-fifth aspect of the present invention there is provided a device for encoding a motion vector predictor index, comprising: means for generating a list of motion vector predictor candidates; means for selecting one of the motion vector predictor candidates in the list; and means for generating a motion vector predictor index for the selected motion vector predictor candidate using CABAC coding, wherein a context variable for at least one bit of the motion vector predictor index of a current block is derived from a context variable of at least one of a Skip flag and an affine flag of the current block.
According to a thirty-sixth aspect of the present invention there is provided a device for decoding a motion vector predictor index, comprising: means for generating a list of motion vector predictor candidates; means for decoding the motion vector predictor index using CABAC decoding, wherein a context variable for at least one bit of the motion vector predictor index of a current block is derived from a context variable of at least one of a Skip flag and an affine flag of the current block; and means for using the decoded motion vector predictor index to identify one of the motion vector predictor candidates in the list.
According to a thirty-seventh aspect of the present invention there is provided a method of encoding a motion vector predictor index, comprising: generating a list of motion vector predictor candidates; selecting one of the motion vector predictor candidates in the list; and generating a motion vector predictor index for the selected motion vector predictor candidate using CABAC coding, wherein a context variable for at least one bit of the motion vector predictor index of a current block has only two different possible values.
According to a thirty-eighth aspect of the present invention there is provided a method of decoding a motion vector predictor index, comprising: generating a list of motion vector predictor candidates; decoding the motion vector predictor index using CABAC decoding, wherein a context variable for at least one bit of the motion vector predictor index of a current block has only two different possible values; and using the decoded motion vector predictor index to identify one of the motion vector predictor candidates in the list.
According to a thirty-ninth aspect of the present invention there is provided a device for encoding a motion vector predictor index, comprising: means for generating a list of motion vector predictor candidates; means for selecting one of the motion vector predictor candidates in the list; and means for generating a motion vector predictor index for the selected motion vector predictor candidate using CABAC coding, wherein a context variable for at least one bit of the motion vector predictor index of a current block has only two different possible values.

According to a fortieth aspect of the present invention there is provided a device for decoding a motion vector predictor index, comprising: means for generating a list of motion vector predictor candidates; means for decoding the motion vector predictor index using CABAC decoding, wherein a context variable for at least one bit of the motion vector predictor index of a current block has only two different possible values; and means for using the decoded motion vector predictor index to identify one of the motion vector predictor candidates in the list.
Yet further aspects of the present invention relate to programs which when executed by a computer or processor cause the computer or processor to carry out any of the methods of the aforementioned aspects. The program may be provided on its own or may be carried on, by or in a carrier medium. The carrier medium may be non-transitory, for example a storage medium, in particular a computer-readable storage medium. The carrier medium may also be transitory, for example a signal or other transmission medium. The signal may be transmitted via any suitable network, including the Internet.
Yet further aspects of the present invention relate to a camera comprising a device according to any of the aforementioned device aspects. In one embodiment the camera further comprises zooming means.
In one embodiment the camera is adapted to indicate when said zooming means is operational and signal affine mode in dependence on said indication that the zooming means is operational.
In another embodiment the camera further comprises panning means.
In another embodiment the camera is adapted to indicate when said panning means is operational and signal affine mode in dependence on said indication that the panning means is operational.
According to yet another aspect of the present invention there is provided a mobile device comprising a camera embodying any of the camera aspects above.

In one embodiment the mobile device further comprises at least one positional sensor adapted to sense a change in orientation of the mobile device.
In one embodiment the mobile device is adapted to signal affine mode in dependence on said sensing a change in orientation of the mobile device.
Further features of the invention are characterised by the other independent and dependent claims. Any feature in one aspect of the invention may be applied to other aspects of the invention, in any appropriate combination. In particular, method aspects may be applied to apparatus aspects, and vice versa.
Furthermore, features implemented in hardware may be implemented in software, and vice versa. Any reference to software and hardware features herein should be construed accordingly. Any apparatus feature as described herein may also be provided as a method feature, and vice versa. As used herein, means plus function features may be expressed alternatively in terms of their corresponding structure, such as a suitably programmed processor and associated memory.
It should also be appreciated that particular combinations of the various features described and defined in any aspects of the invention can be implemented and/or supplied and/or used independently.
Reference will now be made, by way of example, to the accompanying drawings, in which:
Figure 1 is a diagram for use in explaining a coding structure used in HEVC;
Figure 2 is a block diagram schematically illustrating a data communication system in which one or more embodiments of the invention may be implemented;
Figure 3 is a block diagram illustrating components of a processing device in which one or more embodiments of the invention may be implemented;
Figure 4 is a flow chart illustrating steps of an encoding method according to embodiments of the invention;
Figure 5 is a flow chart illustrating steps of a decoding method according to embodiments of the invention;
Figures 6(a) and 6(b) illustrate spatial and temporal blocks that can be used to generate motion vector predictors;
Figure 7 shows simplified steps of the process of an AMVP predictor set derivation;
Figure 8 is a schematic of a motion vector derivation process of the Merge modes;
Figure 9 illustrates segmentation of a current block and temporal motion vector prediction;
Figure 10(a) illustrates the coding of the Merge index for HEVC, or when ATMVP is not enabled at SPS level;
Figure 10(b) illustrates the coding of the Merge index when ATMVP is enabled at SPS level;
Figure 11(a) illustrates a simple affine motion field;
Figure 11(b) illustrates a more complex affine motion field;
Figure 12 is a flow chart of the partial decoding process of some syntax elements related to the coding mode;
Figure 13 is a flow chart illustrating Merge candidates derivation;
Figure 14 is a flow chart illustrating a first embodiment of the invention;
Figure 15 is a flow chart of the partial decoding process of some syntax elements related to the coding mode in a twelfth embodiment of the invention;
Figure 16 is a flow chart illustrating generating a list of merge candidates in the twelfth embodiment of the invention;
Figure 17 is a block diagram for use in explaining a CABAC encoder suitable for use in embodiments of the invention;
Figure 18 is a schematic block diagram of a computing device for implementation of one or more embodiments of the invention;
Figure 19 is a schematic block diagram of a computing device;
Figure 20 is a diagram illustrating a network camera system; and
Figure 21 is a diagram illustrating a smart phone.
Detailed description
Embodiments of the present invention described below relate to improving encoding and decoding of indexes using CABAC. Before describing the embodiments, video encoding and decoding techniques and related encoders and decoders will be described.
Figure 1 relates to a coding structure used in the High Efficiency Video Coding (HEVC) video standard. A video sequence 1 is made up of a succession of digital images i. Each such digital image is represented by one or more matrices. The matrix coefficients represent pixels.
An image 2 of the sequence may be divided into slices 3. A slice may in some instances constitute an entire image. These slices are divided into non-overlapping Coding Tree Units (CTUs). A Coding Tree Unit (CTU) is the basic processing unit of the High Efficiency Video Coding (HEVC) video standard and conceptually corresponds in structure to macroblock units that were used in several previous video standards. A CTU is also sometimes referred to as a Largest Coding Unit (LCU). A CTU has luma and chroma component parts, each of which component parts is called a Coding Tree Block (CTB). These different color components are not shown in Figure 1.
A CTU is generally of size 64 pixels x 64 pixels for HEVC, while for VVC this size can be 128 pixels x 128 pixels. Each CTU may in turn be iteratively divided into smaller variable-size Coding Units (CUs) 5 using a quadtree decomposition.
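To illustrate the quadtree decomposition, here is a toy recursion; the split predicate is a placeholder, since a real encoder decides splits by rate-distortion optimisation rather than by block size alone.

```python
def quadtree_split(x, y, size, min_cu=8, should_split=lambda x, y, s: s > 32):
    """Recursively divide a CTU into square CUs using a quadtree."""
    if size <= min_cu or not should_split(x, y, size):
        return [(x, y, size)]           # leaf CU
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += quadtree_split(x + dx, y + dy, half, min_cu, should_split)
    return leaves

cus = quadtree_split(0, 0, 128)         # a 128x128 VVC-sized CTU
print(len(cus), cus[0])                 # 16 CUs of 32x32 with this predicate
```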
Coding units are the elementary coding elements and are constituted by two kinds of sub-unit called a Prediction Unit (PU) and a Transform Unit (TU). The maximum size of a PU or TU is equal to the CU size. A Prediction Unit corresponds to the partition of the CU for prediction of pixel values. Various different partitions of a CU into PUs are possible, as shown by 606, including a partition into 4 square PUs and two different partitions into 2 rectangular PUs. A Transform Unit is an elementary unit that is subjected to spatial transformation using DCT. A CU can be partitioned into TUs based on a quadtree representation 607.
Each slice is embedded in one Network Abstraction Layer (NAL) unit. In addition, the coding parameters of the video sequence are stored in dedicated NAL units called parameter sets. In HEVC and H.264/AVC two kinds of parameter set NAL units are employed: first, a Sequence Parameter Set (SPS) NAL unit that gathers all parameters that are unchanged during the whole video sequence. Typically, it handles the coding profile, the size of the video frames and other parameters. Secondly, a Picture Parameter Set (PPS) NAL unit includes parameters that may change from one image (or frame) to another of a sequence. HEVC also includes a Video Parameter Set (VPS) NAL unit which contains parameters describing the overall structure of the bitstream. The VPS is a new type of parameter set defined in HEVC, and applies to all of the layers of a bitstream. A layer may contain multiple temporal sub-layers, and all version 1 bitstreams are restricted to a single layer. HEVC has certain layered extensions for scalability and multiview, and these will enable multiple layers, with a backwards compatible version 1 base layer.
Figure 2 illustrates a data communication system in which one or more embodiments of the invention may be implemented. The data communication system comprises a transmission device, in this case a server 201, which is operable to transmit data packets of a data stream to a receiving device, in this case a client terminal 202, via a data communication network 200. The data communication network 200 may be a Wide Area Network (WAN) or a Local Area Network (LAN). Such a network may be for example a wireless network (WiFi / 802.11a or b or g), an Ethernet network, an Internet network or a mixed network composed of several different networks. In a particular embodiment of the invention the data communication system may be a digital television broadcast system in which the server 201 sends the same data content to multiple clients.
The data stream 204 provided by the server 201 may be composed of multimedia data representing video and audio data. Audio and video data streams may, in some embodiments of the invention, be captured by the server 201 using a microphone and a camera respectively.
In some embodiments data streams may be stored on the server 201 or received by the server 201 from another data provider, or generated at the server 201. The server 201 is provided with an encoder for encoding video and audio streams, in particular to provide a compressed bitstream for transmission that is a more compact representation of the data presented as input to the encoder.
In order to obtain a better ratio of the quality of transmitted data to the quantity of transmitted data, the compression of the video data may be, for example, in accordance with the HEVC format or H.264/AVC format.
The client 202 receives the transmitted bitstream and decodes the received bitstream to reproduce video images on a display device and the audio data by a loudspeaker.
Although a streaming scenario is considered in the example of Figure 2, it will be appreciated that in some embodiments of the invention the data communication between an encoder and a decoder may be performed using for example a media storage device such as an optical disc.
In one or more embodiments of the invention a video image is transmitted with data representative of compensation offsets for application to reconstructed pixels of the image to provide filtered pixels in a final image.
Figure 3 schematically illustrates a processing device 300 configured to implement at least one embodiment of the present invention. The processing device 300 may be a device such as a micro-computer, a workstation or a light portable device. The device 300 comprises a communication bus 313 connected to: -a central processing unit 311, such as a microprocessor, denoted CPU; -a read only memory 306, denoted ROM, for storing computer programs for implementing the invention; -a random access memory 312, denoted RAM, for storing the executable code of the method of embodiments of the invention as well as the registers adapted to record variables and parameters necessary for implementing the method of encoding a sequence of digital images and/or the method of decoding a bitstream according to embodiments of the invention; and -a communication interface 302 connected to a communication network 303 over which digital data to be processed are transmitted or received.

Optionally, the apparatus 300 may also include the following components: -a data storage means 304 such as a hard disk, for storing computer programs for implementing methods of one or more embodiments of the invention and data used or produced during the implementation of one or more embodiments of the invention; -a disk drive 305 for a disk 306, the disk drive being adapted to read data from the disk 306 or to write data onto said disk; -a screen 309 for displaying data and/or serving as a graphical interface with the user, by means of a keyboard 310 or any other pointing means.
The apparatus 300 can be connected to various peripherals, such as for example a digital camera 320 or a microphone 308, each being connected to an input/output card (not shown) so as to supply multimedia data to the apparatus 300.
The communication bus provides communication and interoperability between the various elements included in the apparatus 300 or connected to it. The representation of the bus is not limiting and in particular the central processing unit is operable to communicate instructions to any element of the apparatus 300 directly or by means of another element of the apparatus 300.
The disk 306 can be replaced by any information medium such as for example a compact disk (CD-ROM), rewritable or not, a ZIP disk or a memory card and, in general terms, by an information storage means that can be read by a microcomputer or by a microprocessor, integrated or not into the apparatus, possibly removable and adapted to store one or more programs whose execution enables the method of encoding a sequence of digital images and/or the method of decoding a bitstream according to the invention to be implemented.
The executable code may be stored either in read only memory 306, on the hard disk 304 or on a removable digital medium such as for example a disk 306 as described previously. According to a variant, the executable code of the programs can be received by means of the communication network 303, via the interface 302, in order to be stored in one of the storage means of the apparatus 300 before being executed, such as the hard disk 304.
The central processing unit 311 is adapted to control and direct the execution of the instructions or portions of software code of the program or programs according to the invention, instructions that are stored in one of the aforementioned storage means. On powering up, the program or programs that are stored in a non-volatile memory, for example on the hard disk 304 or in the read only memory 306, are transferred into the random access memory 312, which then contains the executable code of the program or programs, as well as registers for storing the variables and parameters necessary for implementing the invention.
In this embodiment, the apparatus is a programmable apparatus which uses software to implement the invention. However, alternatively, the present invention may be implemented in hardware (for example, in the form of an Application Specific Integrated Circuit or ASIC).

Figure 4 illustrates a block diagram of an encoder according to at least one embodiment of the invention. The encoder is represented by connected modules, each module being adapted to implement, for example in the form of programming instructions to be executed by the CPU 311 of device 300, at least one corresponding step of a method implementing at least one embodiment of encoding an image of a sequence of images according to one or more embodiments of the invention.
An original sequence of digital images i0 to in 401 is received as an input by the encoder 400. Each digital image is represented by a set of samples, known as pixels.
A bitstream 410 is output by the encoder 400 after implementation of the encoding process. The bitstream 410 comprises a plurality of encoding units or slices, each slice comprising a slice header for transmitting encoding values of encoding parameters used to encode the slice and a slice body, comprising encoded video data.
The input digital images i0 to in 401 are divided into blocks of pixels by module 402. The blocks correspond to image portions and may be of variable sizes (e.g. 4x4, 8x8, 16x16, 32x32, 64x64, 128x128 pixels; several rectangular block sizes can also be considered). A coding mode is selected for each input block. Two families of coding modes are provided: coding modes based on spatial prediction coding (Intra prediction), and coding modes based on temporal prediction (Inter coding, Merge, SKIP). The possible coding modes are tested.
Module 403 implements an Intra prediction process, in which the given block to be encoded is predicted by a predictor computed from pixels of the neighborhood of said block to be encoded. An indication of the selected Intra predictor and the difference between the given block and its predictor is encoded to provide a residual if the Intra coding is selected.
Temporal prediction is implemented by motion estimation module 404 and motion compensation module 405. Firstly a reference image from among a set of reference images 416 is selected, and a portion of the reference image, also called reference area or image portion, which is the closest area to the given block to be encoded, is selected by the motion estimation module 404. Motion compensation module 405 then predicts the block to be encoded using the selected area. The difference between the selected reference area and the given block, also called a residual block, is computed by the motion compensation module 405. The selected reference area is indicated by a motion vector.
Thus, in both cases (spatial and temporal prediction), a residual is computed by subtracting the prediction from the original block.
In the INTRA prediction implemented by module 403, a prediction direction is encoded. In the temporal prediction, at least one motion vector is encoded.
Information relative to the motion vector and the residual block is encoded if the Inter prediction is selected. To further reduce the bitrate, assuming that motion is homogeneous, the motion vector is encoded by difference with respect to a motion vector predictor. Motion vector predictors of a set of motion information predictors are obtained from the motion vector field 418 by a motion vector prediction and coding module 417.
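A worked toy example of this differential coding (the function names are illustrative): only the difference between the motion vector and its predictor is written to the bitstream, and the decoder adds the predictor back.

```python
def encode_mv(mv, mvp):
    """Motion vector difference (mvd) actually written to the bitstream."""
    return (mv[0] - mvp[0], mv[1] - mvp[1])

def decode_mv(mvd, mvp):
    """Decoder side: predictor plus residual recovers the motion vector."""
    return (mvd[0] + mvp[0], mvd[1] + mvp[1])

mv, mvp = (13, -7), (12, -6)   # homogeneous motion keeps the mvd small
mvd = encode_mv(mv, mvp)       # (1, -1): cheaper to code than (13, -7)
assert decode_mv(mvd, mvp) == mv
```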
The encoder 400 further comprises a selection module 406 for selection of the coding mode by applying an encoding cost criterion, such as a rate-distortion criterion. In order to further reduce redundancies a transform (such as DCT) is applied by transform module 407 to the residual block, the transformed data obtained is then quantized by quantization module 408 and entropy encoded by entropy encoding module 409. Finally, the encoded residual block of the current block being encoded is inserted into the bitstream 410.
The encoder 400 also performs decoding of the encoded image in order to produce a reference image for the motion estimation of the subsequent images. This enables the encoder and the decoder receiving the bitstream to have the same reference frames. The inverse quantization module 411 performs inverse quantization of the quantized data, followed by an inverse transform by reverse transform module 412. The reverse intra prediction module 413 uses the prediction information to determine which predictor to use for a given block and the reverse motion compensation module 414 actually adds the residual obtained by module 412 to the reference area obtained from the set of reference images 416.
Post filtering is then applied by module 415 to filter the reconstructed frame of pixels. In the embodiments of the invention an SAO loop filter is used in which compensation offsets are added to the pixel values of the reconstructed pixels of the reconstructed image.
Figure 5 illustrates a block diagram of a decoder 60 which may be used to receive data from an encoder according to an embodiment of the invention. The decoder is represented by connected modules, each module being adapted to implement, for example in the form of programming instructions to be executed by the CPU 311 of device 300, a corresponding step of a method implemented by the decoder 60.
The decoder 60 receives a bitstream 61 comprising encoding units, each one being composed of a header containing information on encoding parameters and a body containing the encoded video data. As explained with respect to Figure 4, the encoded video data is entropy encoded, and the motion vector predictors' indexes are encoded, for a given block, on a predetermined number of bits. The received encoded video data is entropy decoded by module 62. The residual data are then dequantized by module 63 and then a reverse transform is applied by module 64 to obtain pixel values.
The mode data indicating the coding mode are also entropy decoded and, based on the mode, an INTRA type decoding or an INTER type decoding is performed on the encoded blocks of image data.
In the case of INTRA mode, an INTRA predictor is determined by intra reverse prediction module 65 based on the intra prediction mode specified in the bitstream.
If the mode is INTER, the motion prediction information is extracted from the bitstream so as to find the reference area used by the encoder. The motion prediction information is composed of the reference frame index and the motion vector residual. The motion vector predictor is added to the motion vector residual in order to obtain the motion vector by motion vector decoding module 70.
Motion vector decoding module 70 applies motion vector decoding for each current block encoded by motion prediction. Once an index of the motion vector predictor for the current block has been obtained, the actual value of the motion vector associated with the current block can be decoded and used to apply reverse motion compensation by module 66. The reference image portion indicated by the decoded motion vector is extracted from a reference image 68 to apply the reverse motion compensation 66. The motion vector field data 71 is updated with the decoded motion vector in order to be used for the inverse prediction of subsequent decoded motion vectors.
Finally, a decoded block is obtained. Post filtering is applied by post filtering module 67. A decoded video signal 69 is finally provided by the decoder 60.
CABAC
HEVC uses several types of entropy coding, such as Context-based Adaptive Binary Arithmetic Coding (CABAC), Golomb-Rice codes, or a simple binary representation called Fixed Length Coding. Most of the time, a binary encoding process is performed to represent the different syntax elements. This binary encoding process is also very specific and depends on the different syntax elements. Arithmetic coding represents the syntax elements according to their current probabilities. CABAC is an extension of arithmetic coding which separates the probabilities of a syntax element depending on a 'context' defined by a context variable.
This corresponds to a conditional probability. The context variable may be derived from the value of the current syntax element for the top left block (A2 in Figure 6b, as described in more detail below) and the above left block (B3 in Figure 6b), which are already decoded.
CABAC has been adopted as a normative part of the H.264/AVC and H.265/HEVC standards. In H.264/AVC, it is one of two alternative methods of entropy coding. The other method specified in H.264/AVC is a low-complexity entropy-coding technique based on the usage of context-adaptively switched sets of variable-length codes, so-called Context-Adaptive Variable-Length Coding (CAVLC). Compared to CABAC, CAVLC offers reduced implementation costs at the price of lower compression efficiency. For TV signals in standard- or high-definition resolution, CABAC typically provides bit-rate savings of 10-20% relative to CAVLC at the same objective video quality. In HEVC, CABAC is the only entropy coding method.
Figure 17 shows the main blocks of a CABAC encoder.
An input syntax element that is non-binary valued is binarized by a binarizer 1701. The coding strategy of CABAC is based on the finding that a very efficient coding of syntax-element values in a hybrid block-based video coder, like components of motion vector differences or transform-coefficient level values, can be achieved by employing a binarization scheme as a kind of preprocessing unit for the subsequent stages of context modeling and binary arithmetic coding. In general, a binarization scheme defines a unique mapping of syntax element values to sequences of binary decisions, so-called bins, which can also be interpreted in terms of a binary code tree. The design of binarization schemes in CABAC is based on a few elementary prototypes whose structure enables simple online calculation and which are adapted to some suitable model-probability distributions.
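As a minimal sketch of such a binarization, the following Python function (purely illustrative, not taken from the standard text) maps a value to its truncated-unary ('unary max') bin string, the scheme used for the Merge index as described later:

    def unary_max_binarize(value, max_value):
        # 'value' ones, terminated by a zero unless the maximum value
        # is reached (the last code word needs no terminating zero)
        bins = [1] * value
        if value < max_value:
            bins.append(0)
        return bins

    # Example with 6 candidates (maximum index 5):
    assert unary_max_binarize(2, 5) == [1, 1, 0]
    assert unary_max_binarize(5, 5) == [1, 1, 1, 1, 1]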
Each bin can be processed in one of two basic ways according to the setting of a switch 1702. When the switch is in the "regular" setting, the bin is supplied to a context modeler 1703 and a regular coding engine 1704. When the switch is in the "bypass" setting, the context modeler is bypassed and the bin is supplied to a bypass coding engine 1705. Another switch 1706 has "regular" and "bypass" settings similar to the switch 1702 so that the bins coded by the applicable one of the coding engines 1704 and 1705 can form a bitstream as the output of the CABAC encoder.
By decomposing each syntax element value into a sequence of bins, further processing of each bin value in CABAC depends on the associated coding-mode decision, which can be either chosen as the regular or the bypass mode. The latter is chosen for bins related to the sign information or for lower significant bins, which are assumed to be uniformly distributed and for which, consequently, the whole regular binary arithmetic encoding process is simply bypassed. In the regular coding mode, each bin value is encoded by using the regular binary arithmetic-coding engine, where the associated probability model is either determined by a fixed choice, without any context modeling, or adaptively chosen depending on the related context model. As an important design decision, the latter case is generally applied to the most frequently observed bins only, whereas the other, usually less frequently observed bins, will be treated using a joint, typically zero-order probability model. In this way, CABAC enables selective context modeling on a sub-symbol level, and hence, provides an efficient instrument for exploiting inter-symbol redundancies at significantly reduced overall modeling or learning costs. For the specific choice of context models, four basic design types are employed in CABAC, where two of them are applied to coding of transform-coefficient levels only. The design of these four prototypes is based on a priori knowledge about the typical characteristics of the source data to be modeled and it reflects the aim to find a good compromise between the conflicting objectives of avoiding unnecessary modeling-cost overhead and exploiting the statistical dependencies to a large extent.
On the lowest level of processing in CABAC, each bin value enters the binary arithmetic encoder, either in regular or bypass coding mode. For the latter, a fast branch of the coding engine with a considerably reduced complexity is used, while for the former coding mode, encoding of the given bin value depends on the actual state of the associated adaptive probability model that is passed along with the bin value to the M coder - a term that has been chosen for the table-based binary arithmetic coding engine in CABAC.
Inter coding
HEVC uses 3 different INTER modes: the Inter mode, the Merge mode and the Merge Skip mode. The main difference between these modes is the data signalling in the bitstream.
For motion vector coding, the current HEVC standard includes a competition-based scheme for motion vector prediction which was not present in earlier versions of the standard. It means that several candidates are competing with the rate distortion criterion at encoder side in order to find the best motion vector predictor or the best motion information for respectively the Inter or the Merge mode. An index corresponding to the best predictor or the best candidate of the motion information is inserted in the bitstream. The decoder can derive the same set of predictors or candidates and uses the best one according to the decoded index. In the Screen Content Extension of HEVC, the new coding tool called Intra Block Copy (IBC) is signalled as any of those three INTER modes, the difference between IBC and the equivalent INTER mode being made by checking whether the reference frame is the current one. This can be implemented e.g. by checking the reference index of the list L0, and deducing that this is Intra Block Copy if it is the last frame in that list. Another way to do this is to compare the Picture Order Count of the current and reference frames: if equal, this is Intra Block Copy.
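The two checks just described can be sketched as follows (a hedged illustration; the function names and argument conventions are assumptions, not taken from the HEVC text):

    def is_ibc_by_ref_index(ref_idx_l0, num_ref_l0):
        # Intra Block Copy if the reference is the last frame in list L0
        return ref_idx_l0 == num_ref_l0 - 1

    def is_ibc_by_poc(ref_poc, current_poc):
        # Intra Block Copy if the reference frame is the current frame
        return ref_poc == current_poc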
The design of the derivation of predictors and candidates is important in achieving the best coding efficiency without a disproportionate impact on complexity. In HEVC two motion vector derivations are used: one for Inter mode (Advanced Motion Vector Prediction (AMVP)) and one for Merge modes (Merge derivation process). The following describes these processes. Figures 6a and 6b illustrate spatial and temporal blocks that can be used to generate motion vector predictors in Advanced Motion Vector Prediction (AMVP) and Merge modes of HEVC coding and decoding systems, and Figure 7 shows simplified steps of the process of the AMVP predictor set derivation.
Two predictors, i.e. the two spatial motion vectors of the AMVP mode, are chosen among the top blocks (indicated by letter 'B') and the left blocks (indicated by letter 'A'), including the top corner block (block B2) and left corner block (block A0), and one predictor is chosen among the bottom right block (H) and centre block (Center) of the collocated block, as represented in Figure 6a. Table 1 below outlines the nomenclature used when referring to blocks in relative terms to the current block as shown in Figures 6a and 6b. This nomenclature is used as shorthand but it should be appreciated that other systems of labelling may be used, in particular in future versions of a standard.
    Block label    Relative positional description of neighbouring block
    A0             'Left corner' - diagonally down and to the left of the current block
    A1             'Left' or 'Bottom left' - left of the bottom of the current block
    A2             'Top left' - left of the top of the current block
    B0             'Above right' - diagonally up and to the right of the current block
    B1             'Above' - above the top right of the current block
    B2             'Above left' - diagonally up and to the left of the current block
    B3             'Up' - above the top left of the current block
    H              Bottom right of a collocated block in a reference frame
    Center         A block within a collocated block in a reference frame
Table 1
It should be noted that the 'current block' may be variable in size, for example 4x4, 16x16, 32x32, 64x64, 128x128 or any size in between. The dimensions of a block are preferably factors of 2 (i.e. 2^n x 2^m where n and m are positive integers) as this results in a more efficient use of bits when using binary encoding. The current block need not be square, although this is often a preferable embodiment for coding complexity.
Turning to Figure 7, a first step aims at selecting a first spatial predictor (Cand 1, 706) among the bottom left blocks A0 and A1, whose spatial positions are illustrated in Figure 6. To that end, these blocks are selected (700, 702) one after another, in the given order, and, for each selected block, the following conditions are evaluated (704) in the given order, the first block for which a condition is fulfilled being set as a predictor:
- the motion vector from the same reference list and the same reference image;
- the motion vector from the other reference list and the same reference image;
- the scaled motion vector from the same reference list and a different reference image; or
- the scaled motion vector from the other reference list and a different reference image.
If no value is found, the left predictor is considered as being unavailable. In this case, it indicates that the related blocks were INTRA coded or those blocks do not exist.
A following step aims at selecting a second spatial predictor (Cand 2, 716) among the above right block B0, above block B1, and above left block B2, whose spatial positions are illustrated in Figure 6. To that end, these blocks are selected (708, 710, 712) one after another, in the given order, and, for each selected block, the above mentioned conditions are evaluated (714) in the given order, the first block for which the above mentioned conditions are fulfilled being set as a predictor.
Again, if no value is found, the top predictor is considered as being unavailable. In this case, it indicates that the related blocks were INTRA coded or those blocks do not exist.
In a next step (718), the two predictors, if both are available, are compared one to the other to remove one of them if they are equal (i.e. same motion vector values, same reference list, same reference index and the same direction type). If only one spatial predictor is available, the algorithm is looking for a temporal predictor in a following step.
The temporal motion predictor (Cand 3, 726) is derived as follows: the bottom right (H, 720) position of the collocated block in a previous frame is first considered in the availability check module 722. If it does not exist or if the motion vector predictor is not available, the centre of the collocated block (Centre, 724) is selected to be checked. These temporal positions (Centre and H) are depicted in Figure 6. In any case, scaling 723 is applied to those candidates to match the temporal distance between the current frame and the first frame in the reference list.
The motion predictor value is then added to the set of predictors. Next, the number of predictors (Nb_Cand) is compared (728) to the maximum number of predictors (Max_Cand). As mentioned above, the maximum number (Max_Cand) of motion vector predictors that the derivation process of AMVP needs to generate is two in the current version of the HEVC standard.
If this maximum number is reached, the final list or set of AMVP predictors (732) is built. Otherwise, a zero predictor is added (730) to the list. The zero predictor is a motion vector equal to (0, 0).
As illustrated in Figure 7, the final list or set of AMVP predictors (732) is built from a subset of spatial motion predictors (700 to 712) and from a subset of temporal motion predictors (720, 724).
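The overall flow of Figure 7 may be summarised by the following sketch, a simplification that ignores reference lists and indexes; the candidate values and the pruning rule are as described above:

    def derive_amvp_predictors(cand1, cand2, cand3, max_cand=2):
        # cand1, cand2: spatial predictors or None if unavailable;
        # cand3: scaled temporal predictor or None; MVs are (x, y) tuples
        predictors = []
        if cand1 is not None:
            predictors.append(cand1)
        if cand2 is not None and cand2 != cand1:   # duplicate removal (718)
            predictors.append(cand2)
        if len(predictors) < max_cand and cand3 is not None:
            predictors.append(cand3)               # temporal predictor (726)
        while len(predictors) < max_cand:
            predictors.append((0, 0))              # zero predictor (730)
        return predictors[:max_cand]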
As mentioned above, a motion predictor candidate of Merge mode or of Merge Skip mode represents all the required motion information: direction, list, reference frame index, and motion vectors. An indexed list of several candidates is generated by a Merge derivation process. In the current HEVC design the maximum number of candidates for both Merge modes is equal to five (4 spatial candidates and 1 temporal candidate).
Figure 8 is a schematic of a motion vector derivation process of the Merge modes. In a first step of the derivation process, five block positions are considered (800 to 808). These positions are the spatial positions depicted in Figure 3 with references A1, B1, B0, A0, and B2. In a following step, the availability of the spatial motion vectors is checked and at most five motion vectors are selected (810). A predictor is considered as available if it exists and if the block is not INTRA coded. Therefore, selecting the motion vectors corresponding to the five blocks as candidates is done according to the following conditions:
- if the "left" A1 motion vector (800) is available (810), i.e. if it exists and if this block is not INTRA coded, the motion vector of the "left" block is selected and used as a first candidate in the list of candidates (814);
- if the "above" B1 motion vector (802) is available (810), the candidate "above" block motion vector is compared to the A1 motion vector (812), if it exists. If the B1 motion vector is equal to the A1 motion vector, B1 is not added to the list of spatial candidates (814); otherwise, B1 is added to the list of spatial candidates (814);
- if the "above right" B0 motion vector (804) is available (810), the motion vector of the "above right" block is compared to the B1 motion vector (812). If the B0 motion vector is equal to the B1 motion vector, B0 is not added to the list of spatial candidates (814); otherwise, B0 is added to the list of spatial candidates (814);
- if the "below left" A0 motion vector (806) is available (810), the motion vector of the "below left" block is compared to the A1 motion vector (812). If the A0 motion vector is equal to the A1 motion vector, A0 is not added to the list of spatial candidates (814); otherwise, A0 is added to the list of spatial candidates (814); and
- if the list of spatial candidates does not contain four candidates, the availability of the "above left" B2 motion vector (808) is checked (810). If it is available, it is compared to the A1 motion vector and to the B1 motion vector. If the B2 motion vector is equal to the A1 motion vector or to the B1 motion vector, B2 is not added to the list of spatial candidates (814); otherwise, B2 is added to the list of spatial candidates (814).
At the end of this stage, the list of spatial candidates comprises up to four candidates.
For the temporal candidate, two positions can be used: the bottom right position of the collocated block (816, denoted H in Figure 6) and the centre of the collocated block (818). These positions are depicted in Figure 6.
As for the AMVP motion vector derivation process, a first step aims at checking (820) the availability of the block at the H position. Next, if it is not available, the availability of the block at the centre position is checked (820). If at least one motion vector of these positions is available, the temporal motion vector can be scaled (822), if needed, to the reference frame having index 0, for both lists L0 and L1, in order to create a temporal candidate (824) which is added to the list of Merge motion vector predictor candidates. It is positioned after the spatial candidates in the list. The lists L0 and L1 are 2 reference frame lists containing zero, one or more reference frames.
If the number (Nb_Cand) of candidates is strictly less (826) than the maximum number of candidates (Max_Cand; that value is signalled in the bit-stream slice header and is equal to five in the current HEVC design) and if the current frame is of the B type, combined candidates are generated (828). Combined candidates are generated based on available candidates of the list of Merge motion vector predictor candidates. It mainly consists in combining the motion vector of one candidate of the list L0 with the motion vector of one candidate of the list L1.
If the number (Nb_Cand) of candidates remains strictly less (830) than the maximum number of candidates (Max_Cand), zero motion candidates are generated (832) until the number of candidates of the list of Merge motion vector predictor candidates reaches the maximum number of candidates. At the end of this process, the list or set of Merge motion vector predictor candidates is built (834). As illustrated in Figure 8, the list or set of Merge motion vector predictor candidates is built (834) from a subset of spatial candidates (800 to 808) and from a subset of temporal candidates (816, 818).
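A condensed sketch of this derivation (spatial pruning, temporal candidate, zero fill) is given below; the combined candidates of step 828 are omitted for brevity, and motion vectors are reduced to plain comparable values:

    def derive_merge_candidates(a1, b1, b0, a0, b2, temporal, max_cand=5):
        # Each argument is a candidate or None if unavailable / INTRA coded
        spatial = []
        if a1 is not None:
            spatial.append(a1)
        if b1 is not None and b1 != a1:
            spatial.append(b1)                     # pruned against A1
        if b0 is not None and b0 != b1:
            spatial.append(b0)                     # pruned against B1
        if a0 is not None and a0 != a1:
            spatial.append(a0)                     # pruned against A1
        if len(spatial) < 4 and b2 is not None and b2 != a1 and b2 != b1:
            spatial.append(b2)                     # pruned against A1 and B1
        candidates = list(spatial)
        if temporal is not None and len(candidates) < max_cand:
            candidates.append(temporal)            # after the spatial candidates
        while len(candidates) < max_cand:
            candidates.append((0, 0))              # zero motion candidates
        return candidates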
Alternative Temporal Motion Vector Prediction (ATMVP)
The alternative temporal motion vector prediction (ATMVP) is a particular motion compensation. Instead of considering only one motion information for the current block from a temporal reference frame, each motion information of each collocated block is considered.
So this temporal motion vector prediction gives a segmentation of the current block with the related motion information of each sub-block as depicted in Figure 9.
In the current VTM reference software, ATMVP is signalled as a merge candidate inserted in the list of Merge candidates. When ATMVP is enabled at SPS level, the maximum number of Merge candidates is increased by one. So 6 candidates are considered instead of 5 when this mode is disabled.
In addition, when this prediction is enabled at SPS level, all bins of the merge index are context coded by CABAC, while in HEVC, or when ATMVP is not enabled at SPS level, only the first bin is context coded and the remaining bins are context by-pass coded. Figure 10(a) illustrates the coding of the Merge index for HEVC, or when ATMVP is not enabled at SPS level. This corresponds to a unary max coding. In addition, the first bit is CABAC coded and the other bits are bypass CABAC coded.
Figure 10(b) illustrates the coding of the Merge index when ATMVP is enabled at SPS level. In this case all bits are CABAC coded (from the 1st to the 5th bit). It should be noted that each bit has its own context - in other words their probabilities are separated.
Affine mode
In HEVC, only a translational motion model is applied for motion compensation prediction (MCP), while in the real world there are many kinds of motion, e.g. zoom in/out, rotation, perspective motions and other irregular motions.
In the JEM, a simplified affine transform motion compensation prediction is applied, and the general principle of Affine mode is described below based on an extract of document JVET-G1001 presented at a JVET meeting in Torino on 13-21 July 2017. This entire document is hereby incorporated by reference insofar as it describes other algorithms used in JEM.
As shown in Figure 11(a), the affine motion field of the block is described by two control point motion vectors.
The motion vector field (MVF) of a block is described by the following equation:

    vx = ((v1x - v0x) / w) * x - ((v1y - v0y) / w) * y + v0x
    vy = ((v1y - v0y) / w) * x + ((v1x - v0x) / w) * y + v0y        (1)

where (v0x, v0y) is the motion vector of the top-left corner control point, (v1x, v1y) is the motion vector of the top-right corner control point, and w is the width of the block.
In order to further simplify the motion compensation prediction, sub-block based affine transform prediction is applied. The sub-block size M x N is derived as in Equation 2, where MvPre is the motion vector fraction accuracy (1/16 in JEM) and (v2x, v2y) is the motion vector of the bottom-left control point, calculated according to Equation 1:

    M = clip3(4, w, (w * MvPre) / max(abs(v1x - v0x), abs(v1y - v0y)))
    N = clip3(4, h, (h * MvPre) / max(abs(v2x - v0x), abs(v2y - v0y)))        (2)

After being derived by Equation 2, M and N may be adjusted downward if necessary to make them divisors of w and h, respectively.
To derive the motion vector of each MxN sub-block, the motion vector of the center sample of each sub-block, as shown in Figure 6a, is calculated according to Equation 1 and rounded to 1/16 fraction accuracy. Then motion compensation interpolation filters are applied to generate the prediction of each sub-block with the derived motion vector.
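The following sketch evaluates Equations 1 and 2 in floating point, for illustration only; a real implementation would use integer arithmetic and 1/16-sample rounding:

    def affine_mv(v0, v1, w, x, y):
        # Equation 1: motion vector at sample position (x, y) from the
        # top-left (v0) and top-right (v1) control-point motion vectors
        vx = (v1[0] - v0[0]) / w * x - (v1[1] - v0[1]) / w * y + v0[0]
        vy = (v1[1] - v0[1]) / w * x + (v1[0] - v0[0]) / w * y + v0[1]
        return vx, vy

    def subblock_size(v0, v1, v2, w, h, mv_pre):
        # Equation 2: sub-block size M x N; mv_pre is the motion vector
        # fraction accuracy and v2 the bottom-left control-point vector
        def clip3(lo, hi, v):
            return max(lo, min(hi, v))
        # max(1, ...) guards against division by zero for static blocks
        m = clip3(4, w, w * mv_pre / max(1, abs(v1[0] - v0[0]), abs(v1[1] - v0[1])))
        n = clip3(4, h, h * mv_pre / max(1, abs(v2[0] - v0[0]), abs(v2[1] - v0[1])))
        return m, n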
The affine mode is a motion compensation mode like the inter modes (AMVP, Merge, Merge Skip). Its principle is to generate one motion information per pixel according to 2 or 3 neighbouring motion information. In the current VTM reference software, the affine mode derives one motion information for each 4x4 block as depicted in Figure 11(a). This mode is available for AMVP and both Merge modes, and is enabled by means of a flag. This flag is CABAC coded. In an embodiment, the context depends on the sum of the affine flags of the left block (position A2 of Figure 6b) and the above left block (position B3 of Figure 6b).
So three context values (0, 1 or 2) are possible in the JEM for the affine flag, given by the following formula:
Ctx = IsAffine(A2) + IsAffine(B3)
where IsAffine(block) is a function which returns 0 if the block is not an affine block and 1 if the block is affine.
Affine Merge candidate derivation
In the JEM, the affine Merge mode (Merge or Merge Skip) is derived from the first neighbouring block which is affine among blocks at positions A1, B1, B0, A0, B2. These positions are depicted in Figures 6a and 6b. However, how the affine parameter is derived is not completely defined, and the present invention aims to improve at least this aspect.
Affine Merge signalling
Figure 12 is a flow chart of the partial decoding process of some syntax elements related to the coding mode. In this figure the Skip flag (1201), the prediction mode (1211), the Merge flag (1203), the Merge Index (1208) and the affine flag (1207) can be decoded.
For all CUs in an Inter slice, the Skip flag is decoded (1201). If the CU is not Skip (1202), the pred mode (Prediction mode) is decoded (1211). This syntax element indicates if the current CU is an Inter or an Intra mode. Please note that if the CU is Skip (1202), its current mode is the Inter mode. If the CU is Inter (1212), the CU is coded in AMVP or in Merge mode, and the Merge flag is decoded (1203). If the CU is Merge (1204) or if the CU is Skip (1202), it is verified (1205) whether the affine flag (1206) needs to be decoded. This flag is decoded if the current CU is a 2Nx2N CU, which means in the current VVC that the height and the width of the CU shall be equal. Moreover, at least one neighbouring CU A1 or B1 or B0 or A0 or B2 must be coded with the affine mode (Merge or AMVP). Finally, the current CU shall not be a 4x4 CU, but by default 4x4 CUs are disabled in the VTM reference software. If this condition (1205) is false, it is certain that the current CU is coded with the classical Merge mode or Merge Skip mode and a Merge Index is decoded (1208). If the Affine Flag (1206) is set equal to 1 (1207), the CU is a Merge affine CU or a Merge Skip Affine CU and the Merge index (1208) does not need to be decoded. Otherwise, the current CU is a classical (basic) Merge or Merge Skip CU and the Merge index candidate (1208) is decoded.
In this specification 'signalling' may refer to inserting into, or extracting from, the bitstream one or more syntax elements representing the enabling or disabling of a mode, or other information.
Merge candidates derivation
Figure 13 is a flow chart illustrating the Merge candidates derivation. This derivation has been built on top of the Merge list derivation of HEVC represented in Figure 8. The main changes compared to HEVC are the addition of the ATMVP candidate (1319, 1321, 1323), the full duplicate checks of candidates (1320, 1325) and a new order of the candidates. The ATMVP prediction is set as a special candidate as it represents several motion information of the current CU. The value of the first sub-block (top left) is compared to the temporal candidate, and the temporal candidate is not added to the list of Merge candidates if they are equal (1320). The ATMVP candidate is not compared to other spatial candidates. In contrast, the temporal candidate is compared to each spatial candidate already in the list (1325) and is not added to the Merge candidate list if it is a duplicate candidate.
When a spatial candidate is added in the list it is compared to the other spatial candidates in the list (1310) which is not the case in the final version of HEVC.
In the current VTM version the list of merge candidates is set in the following order, as it has been determined to provide the best results over the coding test conditions:
* A1
* B1
* B0
* A0
* ATMVP
* B2
* TEMPORAL
* Combined
* Zero MV
It is important to note that spatial candidate B2 is set after the ATMVP candidate.
In addition, when ATMVP is enabled at slice level the maximum number in the list of candidates is 6 instead of 5.
Exemplary embodiments of the invention will now be described with reference to Figures 14-17, 19 and 20. It should be noted that the embodiments may be combined unless explicitly stated otherwise; for example certain combinations of embodiments may improve coding efficiency at increased complexity, but this may be acceptable in certain use cases.
First Embodiment
As noted above, in the current VTM reference software, ATMVP is signalled as a Merge candidate inserted in the list of Merge candidates. ATMVP can be enabled or disabled for a whole sequence (at SPS level). When ATMVP is disabled, the maximum number of Merge candidates is 5. When ATMVP is enabled, the maximum number of Merge candidates is increased by one from 5 to 6.
In the encoder, the list of Merge candidates is generated using the method of Figure 13.
One Merge candidate is selected from the list of Merge candidates, for example based on a rate-distortion criterion. The selected Merge candidate is signalled to the decoder in the bitstream using a syntax element called the Merge index.
In the current VTM reference software, the manner of coding the Merge index is different depending on whether ATMVP is enabled or disabled.
Figure 10(a) illustrates the coding of the Merge index when ATMVP is not enabled at SPS level. The 5 Merge candidates Cand0, Cand1, Cand2, Cand3 and Cand4 are coded 0, 10, 110, 1110 and 1111 respectively. This corresponds to a unary max coding. In addition, the first bit is coded by CABAC using a single context and the other bits are bypass coded.
Figure 10(b) illustrates the coding of the Merge index when ATMVP is enabled. The 6 Merge candidates Cand0, Cand1, Cand2, Cand3, Cand4 and Cand5 are coded 0, 10, 110, 1110, 11110 and 11111 respectively. In this case, all bits of the merge index (from the 1st to the 5th bit) are context coded by CABAC. Each bit has its own context and there are separate probability models for the different bits.
In the first embodiment of the present invention, as shown in Figure 14, when ATMVP is included as a Merge candidate in the list of Merge candidates (for example, when ATMVP is enabled at SPS level) the coding of the Merge index is modified so that only the first bit of the Merge index is coded by CABAC using a single context. The context is set in the same manner as in the current VTM reference software when ATMVP is not enabled at SPS level. The other bits (from the 2nd to the 5th bit) are bypass coded. When ATMVP is not included as a Merge candidate in the list of Merge candidates (for example, when ATMVP is disabled at SPS level) there are 5 Merge candidates. Only the first bit of the Merge index is coded by CABAC using a single context. The context is set in the same manner as in the current VTM reference software when ATMVP is not enabled at SPS level. The other bits (from the 2nd to the 4th bit) are bypass coded.
The decoder generates the same list of Merge candidates as the encoder. This may be accomplished by using the method of Figure 13. When ATMVP is not included as a Merge candidate in the list of Merge candidates (for example, when ATMVP is disabled at SPS level) there are 5 Merge candidates. Only the first bit of the Merge index is decoded by CABAC using a single context. The other bits (from the 2nd to the 4th bit) are bypass decoded. In contrast to the current reference software, when ATMVP is included as a Merge candidate in the list of Merge candidates (for example, when ATMVP is enabled at SPS level), only the first bit of the Merge index is decoded by CABAC using a single context in the decoding of the Merge index. The other bits (from the 2nd to the 5th bit) are bypass decoded. The decoded merge index is used to identify the Merge candidate selected by the encoder from among the list of Merge candidates.
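A sketch of the modified Merge index coding of this embodiment is given below; 'cabac' stands for a hypothetical coder object whose encode_regular and encode_bypass methods are assumptions for illustration, not a real API:

    def code_merge_index(merge_idx, num_candidates, cabac):
        max_idx = num_candidates - 1          # 5 with ATMVP enabled, else 4
        bins = [1] * merge_idx
        if merge_idx < max_idx:
            bins.append(0)                    # unary max termination
        for pos, b in enumerate(bins):
            if pos == 0:
                cabac.encode_regular(b, ctx=0)   # first bit: single context
            else:
                cabac.encode_bypass(b)           # remaining bits: bypass

The decoder mirrors this: it decodes the first bin with the same single context and reads the remaining bins in bypass mode until a 0 bin or the maximum index is reached.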
The advantage of this embodiment compared to the VTM2.0 reference software is a complexity reduction of the merge index decoding and decoder design (and encoder design) without impact on coding efficiency. Indeed, with this embodiment only 1 CABAC state is needed for the Merge index instead of 5 for the current VTM Merge index coding/decoding.
Moreover, it reduces the worst-case complexity because the other bits are CABAC bypass coded, which reduces the number of operations compared to coding all bits with CABAC.
Second Embodiment
In a second embodiment, all bits of the Merge index are CABAC coded but they all share the same context. There may be a single context as in the first embodiment, which in this case is shared among the bits. As a result, when ATMVP is included as a Merge candidate in the list of Merge candidates (for example, when ATMVP is enabled at SPS level), only one context is used, compared to 5 in the VTM2.0 reference software. The advantage of this embodiment compared to the VTM2.0 reference software is a complexity reduction of the merge index decoding and decoder design (and encoder design) without impact on coding efficiency.
Alternatively, as described below in connection with the third to fifteenth embodiments, a context variable may be shared among the bits so that two or more contexts are available but the current context is shared by the bits.
When ATMVP is disabled the same context is still used for all bits. This embodiment and all subsequent embodiments can be applied even if ATMVP is not an available mode or is disabled.
In a variant of the second embodiment, any two or more bits of the Merge index are CABAC coded and share the same context. Other bits of the Merge index are bypass coded. For example, the first N bits of the Merge index may be CABAC coded, where N is two or more.
Third Embodiment
In the first embodiment the first bit of the Merge index was CABAC coded using a single context.
In the third embodiment, a context variable for a bit of the Merge index depends on the value of the Merge index of a neighbouring block. This allows more than one context for the target bit, with each context corresponding to a different value of the context variable.
The neighbouring block may be any block already decoded, so that its Merge index is available to the decoder by the time the current block is being decoded. For example, the neighbouring block may be any of the blocks A0, A1, A2, B0, B1, B2 and B3 shown in Figure 6b.
In a first variant, just the first bit is CABAC coded using this context variable.
In a second variant, the first N bits of the Merge index, where N is two or more, are CABAC coded and the context variable is shared among those N bits.
In a third variant, any N bits of the Merge index, where N is two or more, are CABAC coded and the context variable is shared among those N bits.
In a fourth variant, the first N bits of the Merge index, where N is two or more, are CABAC coded and N context variables are used for those N bits. Assuming the context variables have K values, KxN CABAC states are used. For example, in the present embodiment, with one neighbouring block, the context variable may conveniently have 2 values, e.g. 0 and 1. In other words 2N CABAC states are used.
In a fifth variant, any N bits of the Merge index, where N is two or more, are CABAC coded and N context variables are used for those N bits.
The same variants are applicable to the fourth to sixteenth embodiments described hereinafter.
Fourth Embodiment
In the fourth embodiment, the context variable for a bit of the Merge index depends on the respective values of the Merge index of two or more neighbouring blocks. For example, a first neighbouring block may be a left block A0, A1 or A2 and a second neighbouring block may be an upper block B0, B1, B2 or B3. The manner of combining the two or more Merge index values is not particularly limited. Examples are given below.
The context variable may conveniently have 3 different values, e.g. 0, 1 and 2, in this case as there are two neighbouring blocks. If the fourth variant described in connection with the third embodiment is applied to this embodiment with 3 different values, therefore, K is 3 instead of 2. In other words 3N CABAC states are used.
Fifth Embodiment
In the fifth embodiment, the context variable for a bit of the Merge index depends on the respective values of the Merge index of the neighbouring blocks A2 and B3.
Sixth Embodiment
In the sixth embodiment, the context variable for a bit of the Merge index depends on the respective values of the Merge index of the neighbouring blocks A1 and B1. The advantage of this variant is alignment with the Merge candidates derivation. As a result, in some decoder and encoder implementations, memory access reductions can be achieved.
Seventh Embodiment
In the seventh embodiment, the context variable for a bit having bit position idx_num in the Merge index of the current block is obtained according to the following formula:
ctxIdx = (Merge_index_left == idx_num) + (Merge_index_up == idx_num)
where Merge_index_left is the Merge index for a left block, Merge_index_up is the Merge index for an upper block, and the symbol == is the equality operator.
When there are 6 Merge candidates, for example, 0 <= idx_num <= 5.
The left block may be the block A1 and the upper block may be the block B1 (as in the sixth embodiment). Alternatively, the left block may be the block A2 and the upper block may be the block B3 (as in the fifth embodiment).
The formula (Merge_index_left == idx_num) is equal to 1 if the Merge index for the left block is equal to idx_num. The following table gives the results of this formula (Merge_index_left == idx_num):

    idx_num:              0  1  2  3  4
    Merge_index_left = 0: 1  0  0  0  0
    Merge_index_left = 1: 0  1  0  0  0
    Merge_index_left = 2: 0  0  1  0  0
    Merge_index_left = 3: 0  0  0  1  0
    Merge_index_left = 4: 0  0  0  0  1
    Merge_index_left = 5: 0  0  0  0  0

Of course, the table of the formula (Merge_index_up == idx_num) is the same. The following table gives the unary max code of each Merge index value and the relative bit position of each bit. This table corresponds to Figure 10(b).
    Bit position:         0  1  2  3  4
    Merge index = 0:      0
    Merge index = 1:      1  0
    Merge index = 2:      1  1  0
    Merge index = 3:      1  1  1  0
    Merge index = 4:      1  1  1  1  0
    Merge index = 5:      1  1  1  1  1

If the left block is not a merge block or an affine merge block, it is considered that the left block is not available. The same condition is applied for the upper block.
For example, when only the first bit is CABAC coded, the context variable ctxIdx is set equal to:
0 if neither the left nor the upper block has a Merge index, or if the left block Merge index is not the first index (i.e. not 0) and the upper block Merge index is not the first index (i.e. not 0);
1 if one but not the other of the left and upper blocks has its Merge index equal to the first index; and
2 if for each of the left and upper blocks the Merge index is equal to the first index.
More generally, for a target bit at position idx_num which is CABAC coded, the context variable ctxIdx is set equal to:
0 if neither the left nor the upper block has a Merge index, or if the left block Merge index is not the ith index (where i = idx_num) and the upper block Merge index is not the ith index;
1 if one but not the other of the left and upper blocks has its Merge index equal to the ith index; and
2 if for each of the left and upper blocks the Merge index is equal to the ith index.
Here, the ith index means the first index when i = 0, the second index when i = 1, and so on.
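A sketch of this context derivation is given below; unavailable neighbours, i.e. blocks that are not merge or affine merge blocks, are represented by None:

    def ctx_idx_equal(merge_index_left, merge_index_up, idx_num):
        ctx = 0
        if merge_index_left is not None and merge_index_left == idx_num:
            ctx += 1
        if merge_index_up is not None and merge_index_up == idx_num:
            ctx += 1
        return ctx    # 0, 1 or 2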
Eighth Embodiment
In the eighth embodiment, the context variable for a bit having bit position idx_num in the Merge index of the current block is obtained according to the following formula:
ctxIdx = (Merge_index_left > idx_num) + (Merge_index_up > idx_num)
where Merge_index_left is the Merge index for a left block, Merge_index_up is the Merge index for an upper block, and the symbol > means "greater than".
When there are 6 Merge candidates, for example, 0 <= idx_num <= 5.
The left block may be the block A1 and the upper block may be the block B1 (as in the sixth embodiment). Alternatively, the left block may be the block A2 and the upper block may be the block B3 (as in the fifth embodiment). The formula (Merge_index_left > idx_num) is equal to 1 if the Merge index for the left block is greater than idx_num. If the left block is not a merge block or an affine merge block, it is considered that the left block is not available. The same condition is applied for the upper block. The following table gives the results of this formula (Merge_index_left > idx_num):

    idx_num:              0  1  2  3  4
    Merge_index_left = 0: 0  0  0  0  0
    Merge_index_left = 1: 1  0  0  0  0
    Merge_index_left = 2: 1  1  0  0  0
    Merge_index_left = 3: 1  1  1  0  0
    Merge_index_left = 4: 1  1  1  1  0
    Merge_index_left = 5: 1  1  1  1  1

For example, when only the first bit is CABAC coded, the context variable ctxIdx is set equal to:
0 if neither the left nor the upper block has a Merge index, or if the left block Merge index is less than or equal to the first index and the upper block Merge index is less than or equal to the first index;
1 if one but not the other of the left and upper blocks has its Merge index greater than the first index; and
2 if for each of the left and upper blocks the Merge index is greater than the first index.
More generally, for a target bit at position idx_num which is CABAC coded, the context variable ctxIdx is set equal to:
0 if neither the left nor the upper block has a Merge index, or if the left block Merge index is less than or equal to the ith index (where i = idx_num) and the upper block Merge index is less than or equal to the ith index;
1 if one but not the other of the left and upper blocks has its Merge index greater than the ith index; and
2 if for each of the left and upper blocks the Merge index is greater than the ith index.
The eighth embodiment provides a further coding efficiency increase over the seventh embodiment.
Ninth Embodiment
In the fourth to eighth embodiments, the context variable for a bit of the Merge index of the current block depended on the respective values of the Merge index of two or more neighbouring blocks.
In the ninth embodiment, the context variable for a bit of the Merge index of the current block depends on the respective Merge flags of two or more neighbouring blocks. For example, a first neighbouring block may be a left block A0, A1 or A2 and a second neighbouring block may be an upper block B0, B1, B2 or B3.
The Merge flag is set to 1 when a block is encoded using the Merge mode, and is set to 0 when another mode, such as Skip mode or Affine Merge mode, is used. Note that in VTM2.0 Affine Merge is a distinct mode from the basic or "classical" Merge mode. The Affine Merge mode may be signalled using a dedicated Affine flag. Alternatively, the list of Merge candidates may include an Affine Merge candidate, in which case the Affine Merge mode may be selected and signalled using the Merge index.
The context variable is then set to:
0 if neither the left nor the upper neighbouring block has its Merge flag set to 1;
1 if one but not the other of the left and upper neighbouring blocks has its Merge flag set to 1; and
2 if each of the left and upper neighbouring blocks has its Merge flag set to 1.
This simple measure achieves a coding efficiency improvement over VTM2.0. Another advantage, compared to the seventh and eighth embodiments, is a lower complexity because only the Merge flags and not the Merge indexes of the neighbouring blocks need to be checked. In a variant, the context variable for a bit of the Merge index of the current block depends on the Merge flag of a single neighbouring block.
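For the main case above with two neighbouring blocks, a sketch, assuming the two Merge flags are available as booleans, is simply:

    def ctx_idx_from_merge_flags(merge_flag_left, merge_flag_up):
        # 0, 1 or 2 according to how many of the two neighbouring
        # blocks have their Merge flag set to 1
        return int(merge_flag_left) + int(merge_flag_up)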
Tenth Embodiment
In the third to ninth embodiments, the context variable for a bit of the Merge index of the current block depended on Merge index values or Merge flags of one or more neighbouring blocks.
In the tenth embodiment, the context variable for a bit of the Merge index of the current block depends on the value of the Skip flag for the current block (current Coding Unit, or CU). The Skip flag is equal to 1 when the current block uses the Merge Skip mode, and is equal to 0 otherwise.
The Skip flag is a first example of another variable or syntax element that has already been decoded or parsed for the current block. This other variable or syntax element preferably is an indicator of the complexity of the motion information in the current block. Since the occurrences of the Merge index values depend on the complexity of the motion information, a variable or syntax element such as the Skip flag is generally correlated with the Merge index value.
More specifically, the Merge Skip mode is generally selected for static scenes or scenes involving constant motion. Consequently, the merge index value is generally lower for the Merge Skip mode than for the classical merge mode which is used to encode an inter prediction which contains a block residual. This occurs generally for more complex motion. However, the selection between these modes is also often related to the quantization and/or the RD criterion.
This simple measure provides a coding efficiency increase over VTM2.0. It is also very simple to implement as it does not involve neighbouring blocks or checking Merge index values.
In a first variant, the context variable for a bit of the Merge index of the current block is simply set equal to the Skip flag of the current block. The bit may be the first bit only. Other bits are bypass coded as in the first embodiment.
In a second variant, all bits of the Merge index are CABAC coded and each of them has its own context variable depending on the Skip flag. This requires 10 states of probabilities when there are 5 CABAC-coded bits in the Merge index (corresponding to 6 Merge candidates).
In a third variant, to limit the number of states, only N bits of the Merge index are CABAC coded, where N is two or more, for example the first N bits. This requires 2N states.
For example, when the first 2 bits are CABAC coded, 4 states are required. Generally, in place of the Skip flag, it is possible to use any other variable or syntax element that has already been decoded or parsed for the current block and that is an indicator of the complexity of the motion information in the current block.
Eleventh Embodiment
The eleventh embodiment relates to Affine Merge signalling as described previously with reference to Figures 11(a), 11(b) and 12.
In the eleventh embodiment, the context variable for a CABAC coded bit of the Merge index of the current block (current CU) depends on the Affine Merge candidates, if any, in the list of Merge candidates. The bit may be the first bit only of the Merge index, or the first N bits, where N is two or more, or any N bits. Other bits are bypass coded.
Affine prediction is designed for compensating complex motion. Accordingly, for complex motion the merge index generally has higher values than for less complex motion. It follows that if the first affine merge candidate is far down the list, or if there is no affine merge candidate at all, the merge index of the current CU is likely to have a small value.
It is therefore effective for the context variable to depend on the presence and/or position of at least one Affine Merge candidate in the list.
For example, the context variable may be set equal to:
1 if A1 is affine
2 if B1 is affine
3 if B0 is affine
4 if A0 is affine
5 if B2 is affine
0 if no neighbouring block is affine.
When the Merge index of the current block is decoded or parsed the affine flags of the Merge candidates at these positions have already been checked. Consequently, no further memory accesses are needed to derive the context for the Merge index of the current block. This embodiment provides a coding efficiency increase over VTM2.0. No additional memory accesses are required since step 1205 already involves checking the neighbouring CU affine modes.
In a first variant, to limit the number of states, the context variable may be set equal to:
0 if no neighbouring block is affine, or if A1 or B1 is affine
1 if B0, A0 or B2 is affine
In a second variant, to limit the number of states, the context variable may be set equal to:
0 if no neighbouring block is affine
1 if A1 or B1 is affine
2 if B0, A0 or B2 is affine
In a third variant, the context variable may be set equal to:
1 if A1 is affine
2 if B1 is affine
3 if B0 is affine
4 if A0 or B2 is affine
0 if no neighbouring block is affine.
Please note that these positions are already checked when the merge index is decoded or parsed because the affine flag decoding depends on these positions. Consequently, there is no need for additional memory access to derive the Merge index context which is coded after the affine flag.
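The main mapping given above may be sketched as follows; is_affine is assumed to be a lookup of the already-decoded affine flags at the labelled positions:

    def ctx_from_affine_neighbours(is_affine):
        for ctx, pos in enumerate(('A1', 'B1', 'B0', 'A0', 'B2'), start=1):
            if is_affine[pos]:
                return ctx    # 1 to 5 for the first affine neighbour found
        return 0              # no neighbouring block is affine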
Twelfth Embodiment
In the twelfth embodiment, signalling the affine mode comprises inserting the affine mode as a candidate motion predictor.
In one example of the twelfth embodiment, the Affine Merge (and Merge Skip) is signalled as a Merge candidate. In that case the modules 1205, 1206 and 1207 of Figure 12 are removed. In addition, so as not to affect the coding efficiency of the Merge mode, the maximum possible number of merge candidates is incremented. For example, in the current VTM version this value is set equal to 6, so if applying this embodiment to the current version of VTM, the value would be 7.
The advantage is a design simplification of the syntax elements of the Merge mode because fewer syntax elements need to be decoded. In some circumstances, a coding efficiency improvement can be observed.
Two possibilities to implement this example will now be described. The first is that the Affine Merge index always has the same position inside the list whatever the values of the other Merge MVs. The position of a candidate motion predictor indicates its likelihood of being selected and, as such, if it is placed higher up the list (a lower index value), that motion vector predictor is more likely to be selected.
In the first example, the affine Merge index always has the same position inside the list of Merge candidates. This means that it has a fixed Merge idx value. For example this value can be set equal to 5, as the affine merge mode should represent complex motion which is not the most probable content. The additional advantage of this embodiment is that when the current block is parsed (decoding/reading of the syntax elements only, but not decoding the data itself), the current block can be set as an affine block. Consequently the value can be used to determine the CABAC context for the affine flag which is used for AMVP. So the conditional probabilities should be improved for this affine flag and the coding efficiency should be better.
In the second example, the affine Merge candidate is derived with other Merge candidates. In this example, a new affine Merge candidate is added into the list of Merge candidates. Figure 18 illustrates this example. Compared to Figure 13, the Affine candidate is the first affine neighbouring block among A1, B1, B0, A0, B2 (1917). If the same condition as 1205 of Figure 12 is valid (1927), the motion vector field produced with the affine parameters is generated to obtain the affine candidate (1929). The list of initial candidates can have 4, 5, 6 or 7 candidates according to the usage of the ATMVP, Temporal and Affine candidates.
The order between all these candidates is important, as more likely candidates should be processed first to ensure they are more likely to make the cut of motion vector candidates. A preferred ordering is the following:
* A1
* B1
* B0
* A0
* AFFINE MERGE
* ATMVP
* B2
* TEMPORAL
* Combined
* Zero MV
It is important to note that the Affine Merge is before the ATMVP mode but after the four main neighbouring blocks. An advantage of setting the affine Merge before the ATMVP candidate is a coding efficiency increase, as compared to setting it after the ATMVP and the temporal predictor. This coding efficiency increase depends on the GOP (group of pictures) structure and the Quantization Parameter (QP) setting of each picture in the GOP, but for the most commonly used GOP and QP settings this order gives a coding efficiency increase.
A further advantage of this solution is a clean design of the Merge and Merge Skip for both syntax and derivation. Moreover, the affine candidate merge index can change according to the availability or value (duplicate check) of previous candidates in the list. Consequently an efficient signalization can be obtained.
In a further example, the affine Merge index is variable according to one or several conditions. For example, the Merge index or the position inside the list associated with the affine candidate changes according to a criterion. The principle is to set a low Merge index value for the affine merge candidate when the affine merge has a high probability of being selected (and a higher value when there is a low probability of it being selected).
In the twelfth embodiment, the affine merge candidate has a merge index value. To improve the coding efficiency of the Merge index it is effective to make the context variable for a bit of the Merge index depend on the affine flags for neighbouring blocks and/or for the current block.
For example, the context variable may be determined using the following formula:
ctxIdx = IsAffine(A1) + IsAffine(B1) + IsAffine(B0) + IsAffine(A0) + IsAffine(B2)
The resulting context value may have the value 0, 1, 2, 3 or 4.
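As a one-line sketch of this formula, with is_affine as above, mapping position labels to already-decoded affine flags:

    def ctx_idx_from_affine_flags(is_affine):
        # Sum of the affine flags of the five neighbouring positions
        return sum(int(is_affine[pos]) for pos in ('A1', 'B1', 'B0', 'A0', 'B2'))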
The affine flags increase the coding efficiency.
In a first variant, to involve fewer neighbouring blocks, ctxIdx = IsAffine(A1) + IsAffine(B1). The resulting context value may have the value 0, 1 or 2.
In a second variant, also involving fewer neighbouring blocks, ctxIdx = IsAffine(A2) + IsAffine(B3). Again, the resulting context value may have the value 0, 1, or 2.
In a third variant, involving no neighbouring blocks, ctxIdx = IsAffine(current block).
The resulting context value may have the value 0 or 1.
Figure 16 is a flow chart of the partial decoding process of some syntax elements related to the coding mode with the third variant. In this figure the Skip flag (1601), the prediction mode (1611), the Merge flag (1603), the Merge Index (1608) and the affine flag (1606) can be decoded. This flow chart is similar to that of Figure 12, described hereinbefore, and a detailed description is therefore omitted. The difference is that there is a merge index for the Affine Merge flag, which is not the case in VTM2.0. In VTM2.0 the Affine flag of the current block cannot be used to obtain a context for the Merge index because it always has the same value '0'.
Thirteenth Embodiment
In the tenth embodiment, the context variable for a bit of the Merge index of the current block depended on the value of the Skip flag for the current block (current Coding Unit, or CU).
In the thirteenth embodiment, instead of using the Skip flag value directly to derive the context variable for the target bit of the Merge index, the context value for the target bit is derived from the context variable for the Skip flag of the current CU. This is possible because the Skip flag is itself CABAC coded and therefore has a context variable.
Preferably, the context variable for the target bit of the Merge index of the current CU is set equal to (copied from) the context variable for the Skip flag of the current CU.
The target bit may be the first bit only. Other bits are bypass coded as in the first embodiment.
The context variable for the Skip flag of the current CU is derived in the manner prescribed in VTM2.0. The advantage of this embodiment compared to the VTM2.0 reference software is a complexity reduction of the merge index decoding and decoder design (and encoder design) without impact on coding efficiency. Indeed, with this embodiment, at the minimum only 1 CABAC state is needed for the Merge index instead of 5 for the current VTM Merge index coding/decoding. Moreover, it reduces the worst-case complexity because the other bits are CABAC bypass coded, which reduces the number of operations compared to coding all bits with CABAC.
Fourteenth Embodiment
In the thirteenth embodiment, the context value for the target bit was derived from the context variable for the Skip flag of the current CU.
In the fourteenth embodiment, the context value for the target bit is derived from the context variable for the affine flag of the current CU.
This is possible because the affine flag is itself CABAC coded and therefore has a context variable.
Preferably, the context variable for the target bit of the Merge index of the current CU is set equal to (copied from) the context variable for the affine flag of the current CU. The target bit may be the first bit only. Other bits are bypass coded as in the first embodiment.
The context variable for the affine flag of the current CU is derived in the manner prescribed in VTM2.0.
The advantage of this embodiment compared to the VTM2.0 reference software is a complexity reduction of the merge index decoding and decoder design (and encoder design) without impact on coding efficiency. Indeed, with this embodiment, at the minimum only 1 CABAC state is needed for the Merge index instead of 5 for the current VTM Merge index coding/decoding. Moreover, it reduces the worst-case complexity because the other bits are CABAC bypass coded, which reduces the number of operations compared to coding all bits with CABAC.
Fifteenth Embodiment
In several of the foregoing embodiments, the context variable had more than 2 values, for example the three values 0, 1 and 2. However, to reduce the complexity, and reduce the number of states to be handled, it is possible to cap the number of permitted context-variable values at 2, e.g. 0 and 1. This can be accomplished, for example, by changing any initial context variable having the value 2 to 1. In practice, this simplification has no or only a limited impact on the coding efficiency.
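For example (an illustrative sketch only, not prescribed by the embodiment), the cap can be applied as a simple clamp when the initial context value is computed:

```cpp
#include <algorithm>
#include <iostream>

// Fifteenth embodiment (sketch): limit the context variable to two values,
// folding an initial value of 2 down to 1 so only states 0 and 1 remain.
int capContextValue(int ctxIdx)
{
    return std::min(ctxIdx, 1);
}

int main()
{
    std::cout << capContextValue(2) << '\n';   // prints 1
}
```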
Combinations of Embodiments
Any two or more of the foregoing embodiments may be combined.
The preceding description has focussed on the encoding and decoding of the Merge index. For example, the first embodiment involves generating a list of merge candidates including an ATMVP candidate; selecting one of the merge candidates in the list; and generating a merge index for the selected merge candidate using CABAC coding, one or more bits of the merge index being bypass CABAC coded. In principle, the present invention can be applied to modes other than the Merge mode that involve generating a list of motion vector predictor (MVP) candidates; selecting one of the MVP candidates in the list; and generating an index for the selected MVP candidate. Thus, the present invention is not limited to the Merge mode and the index to be encoded or decoded is not limited to the Merge index. For example, in the development of VVC, it is conceivable that the techniques of the foregoing embodiments could be extended to a mode other than the Merge mode, such as the AMVP mode of HEVC or its equivalent mode in VVC. The appended claims are to be interpreted accordingly.
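To make the signalling concrete, the sketch below shows a truncated-unary binarisation of a candidate index in which only the first bin is context coded and the remaining bins are bypass coded, as in the first embodiment. `ToyCabac` merely records which coding path each bin takes; it is an assumption standing in for a real arithmetic coding engine.

```cpp
#include <iostream>
#include <utility>
#include <vector>

// Records which coding path each bin takes ('c' = context coded, 'b' = bypass).
struct ToyCabac
{
    std::vector<std::pair<char, int>> trace;
    void encodeBinCtx(int bin, int ctxIdx) { (void)ctxIdx; trace.push_back({'c', bin}); }
    void encodeBinBypass(int bin)          { trace.push_back({'b', bin}); }
};

// Truncated-unary coding of an index into a candidate list: the first bin uses
// a single CABAC context; every other bin is bypass coded.
void encodeCandidateIdx(ToyCabac& cabac, int idx, int maxNumCand, int ctxIdx)
{
    for (int i = 0; i < maxNumCand - 1; ++i)
    {
        const int bin = (i < idx) ? 1 : 0;
        if (i == 0) cabac.encodeBinCtx(bin, ctxIdx);
        else        cabac.encodeBinBypass(bin);
        if (bin == 0) break;   // a 0 bin terminates the truncated-unary code
    }
}

int main()
{
    ToyCabac cabac;
    encodeCandidateIdx(cabac, 3, 6, 0);   // index 3 in a 6-candidate list
    for (const auto& p : cabac.trace)
        std::cout << p.first << p.second << ' ';   // prints: c1 b1 b1 b0
    std::cout << '\n';
}
```

The worst-case gain mentioned above is visible in the trace: only the first bin engages a context model, while the remaining bins take the cheaper bypass path.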
Implementation of embodiments of the invention
Figure 20 is a schematic block diagram of a computing device 2000 for implementation of one or more embodiments of the invention. The computing device 2000 may be a device such as a micro-computer, a workstation or a light portable device. The computing device 2000 comprises a communication bus connected to:
- a central processing unit (CPU) 2001, such as a microprocessor;
- a random access memory (RAM) 2002 for storing the executable code of the method of embodiments of the invention as well as the registers adapted to record variables and parameters necessary for implementing the method for encoding or decoding at least part of an image according to embodiments of the invention; the memory capacity thereof can be expanded by an optional RAM connected to an expansion port, for example;
- a read only memory (ROM) 2003 for storing computer programs for implementing embodiments of the invention;
- a network interface (NET) 2004, typically connected to a communication network over which digital data to be processed are transmitted or received; the network interface (NET) 2004 can be a single network interface, or composed of a set of different network interfaces (for instance wired and wireless interfaces, or different kinds of wired or wireless interfaces); data packets are written to the network interface for transmission or are read from the network interface for reception under the control of the software application running in the CPU 2001;
- a user interface (UI) 2005, which may be used for receiving inputs from a user or to display information to a user;
- a hard disk (HD) 2006, which may be provided as a mass storage device;
- an Input/Output module (IO) 2007, which may be used for receiving/sending data from/to external devices such as a video source or display.
The executable code may be stored either in the ROM 2003, on the HD 2006 or on a removable digital medium such as, for example, a disk. According to a variant, the executable code of the programs can be received by means of a communication network, via the NET 2004, in order to be stored in one of the storage means of the communication device 2000, such as the HD 2006, before being executed.
The CPU 2001 is adapted to control and direct the execution of the instructions or portions of software code of the program or programs according to embodiments of the invention, which instructions are stored in one of the aforementioned storage means. After powering on, the CPU 2001 is capable of executing instructions from main RAM memory 2002 relating to a software application after those instructions have been loaded from the program ROM 2003 or the HD 2006, for example. Such a software application, when executed by the CPU 2001, causes the steps of the method according to the invention to be performed.
It is also understood that according to another embodiment of the present invention, a decoder according to an aforementioned embodiment is provided in a user terminal such as a computer, a mobile phone (a cellular phone), a tablet or any other type of device (e.g. a display apparatus) capable of providing/displaying content to a user. According to yet another embodiment, an encoder according to an aforementioned embodiment is provided in an image capturing apparatus which also comprises a camera, a video camera or a network camera (e.g. a closed-circuit television or video surveillance camera) which captures and provides the content for the encoder to encode. Two such examples are provided below with reference to Figures 21 and 22.
Figure 21 is a diagram illustrating a network camera system 2100 including a network camera 2102 and a client apparatus 2104.
The network camera 2102 includes an imaging unit 2106, an encoding unit 2108, a communication unit 2110, and a control unit 2112.
The network camera 2102 and the client apparatus 2104 are mutually connected to be able to communicate with each other via the network 200.
The imaging unit 2106 includes a lens and an image sensor (e.g., a charge coupled device (CCD) or a complementary metal oxide semiconductor (CMOS)), and captures an image of an object and generates image data based on the image. This image can be a still image or a video image. The imaging unit may also comprise zooming means and/or panning means which are adapted to zoom or pan (either optically or digitally), respectively.
The encoding unit 2108 encodes the image data using at least one of the encoding methods explained in the first to fifteenth embodiments, or a combination of those encoding methods.
The communication unit 2110 of the network camera 2102 transmits the encoded image data encoded by the encoding unit 2108 to the client apparatus 2104.
Further, the communication unit 2110 receives commands from the client apparatus 2104. The commands include commands to set parameters for the encoding of the encoding unit 2108. The control unit 2112 controls other units in the network camera 2102 in accordance with the commands received by the communication unit 2110.
The client apparatus 2104 includes a communication unit 2114, a decoding unit 2116, and a control unit 2118.
The communication unit 2114 of the client apparatus 2104 transmits the commands to the network camera 2102.
Further, the communication unit 2114 of the client apparatus 2104 receives the encoded image data from the network camera 2102. The decoding unit 2116 decodes the encoded image data using at least one of the decoding methods explained in the first to fifteenth embodiments, or a combination of those decoding methods.
The control unit 2118 of the client apparatus 2104 controls other units in the client apparatus 2104 in accordance with the user operation or commands received by the communication unit 2114. The control unit 2118 of the client apparatus 2104 controls a display apparatus 2120 so as to display an image decoded by the decoding unit 2116.
The control unit 2118 of the client apparatus 2104 also controls the display apparatus 2120 so as to display a GUI (Graphical User Interface) for designating values of the parameters for the network camera 2102, including the parameters for the encoding of the encoding unit 2108. The control unit 2118 of the client apparatus 2104 also controls other units in the client apparatus 2104 in accordance with user operation input to the GUI displayed by the display apparatus 2120.
The control unit 2118 of the client apparatus 2104 controls the communication unit 2114 of the client apparatus 2104 so as to transmit to the network camera 2102 the commands which designate values of the parameters for the network camera 2102, in accordance with the user operation input to the GUI displayed by the display apparatus 2120.
The network camera system 2100 may determine whether the camera 2102 utilizes zoom or pan during the recording of video, and such information may be used when encoding a video stream, as zooming or panning during filming may benefit from the use of affine mode, which is well-suited to coding complex motion such as zooming, rotating and/or stretching (which may be side-effects of panning, in particular if the lens is a 'fish eye' lens).
Figure 22 is a diagram illustrating a smart phone 2200. The smart phone 2200 includes a communication unit 2202, a decoding/encoding unit 2204, a control unit 2206 and a display unit 2208.
The communication unit 2202 receives the encoded image data via a network.
The decoding unit 2204 decodes the encoded image data received by the communication unit 2202.
The decoding/encoding unit 2204 decodes the encoded image data using at least one of the decoding methods explained in the first to fifteenth embodiments, or a combination of those decoding methods.
The control unit 2206 controls other units in the smart phone 2200 in accordance with a user operation or commands received by the communication unit 2202.
For example, the control unit 2206 controls the display unit 2208 so as to display an image decoded by the decoding unit 2204.
The smart phone may further comprise an image recording device 2210 (for example a digital camera and associated circuitry) to record images or videos. Such recorded images or videos may be encoded by the decoding/encoding unit 2204 under instruction of the control unit 2206.
The smart phone may further comprise sensors 2212 adapted to sense the orientation of the mobile device. Such sensors could include an accelerometer, gyroscope, compass, global positioning (GPS) unit or similar positional sensors. Such sensors 2212 can determine if the smart phone changes orientation, and such information may be used when encoding a video stream, as a change in orientation during filming may benefit from the use of affine mode, which is well-suited to coding complex motion such as rotations.
Alternatives and modifications
It will be appreciated that an object of the present invention is to ensure that affine mode is utilised in a most efficient manner, and certain examples discussed above relate to signalling the use of affine mode in dependence on a perceived likelihood of affine mode being useful. A further example of this may apply to encoders when it is known that complex motion (where an affine transform may be particularly efficient) is being encoded. Examples of such cases include:
a) a camera zooming in/out;
b) a portable camera (e.g. a mobile phone) changing orientation during filming (i.e. a rotational movement);
c) a 'fisheye' lens camera panning (e.g. a stretching/distortion of a portion of the image).
As such, an indication of complex motion may be raised during the recording process so that affine mode may be given a higher likelihood of being used for the slice, sequence of frames or indeed the entire video stream.
In a further example, affine mode may be given a higher likelihood of being used depending on a feature or functionality of the device used to record the video. For example, a mobile device may be more likely to change orientation than (say) a fixed security camera, so affine mode may be more appropriate for encoding video from the former. Examples of features or functionality include: the presence/use of zooming means, the presence/use of a positional sensor, the presence/use of panning means, whether or not the device is portable, or a user-selection on the device.
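A hedged sketch of such a device-driven bias is shown below; the feature names and weights are purely illustrative assumptions, since the examples above do not prescribe any particular formula.

```cpp
#include <iostream>

// Illustrative device features/functionality that may hint at complex motion.
struct DeviceHints
{
    bool zooming  = false;   // zooming means present or in use
    bool panning  = false;   // panning means present or in use
    bool rotating = false;   // orientation change reported by a positional sensor
    bool portable = false;   // e.g. a mobile phone rather than a fixed camera
};

// Returns a weight an encoder could use to bias its affine-mode decision for
// a slice, a sequence of frames, or the entire video stream.
int affineLikelihood(const DeviceHints& d)
{
    int score = 0;
    if (d.zooming)  score += 2;   // zoom maps naturally onto an affine transform
    if (d.rotating) score += 2;   // so does rotation
    if (d.panning)  score += 1;   // fisheye panning stretches/distorts the image
    if (d.portable) score += 1;   // portable devices change orientation more often
    return score;
}

int main()
{
    DeviceHints phone{ /*zooming*/ false, /*panning*/ false,
                       /*rotating*/ true, /*portable*/ true };
    std::cout << affineLikelihood(phone) << '\n';   // prints 3
}
```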
While the present invention has been described with reference to embodiments, it is to be understood that the invention is not limited to the disclosed embodiments. It will be appreciated by those skilled in the art that various changes and modifications might be made without departing from the scope of the invention, as defined in the appended claims. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and/or all of the steps of any method or process so disclosed, may be combined in any combination, except combinations where at least some of such features and/or steps are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features.
It is also understood that any result of comparison, determination, assessment, selection, execution, performing, or consideration described above, for example a selection made during an encoding or filtering process, may be indicated in or determinable/inferable from data in a bitstream, for example a flag or data indicative of the result, so that the indicated or determined/inferred result can be used in the processing instead of actually performing the comparison, determination, assessment, selection, execution, performing, or consideration, for example during a decoding process.
In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. The mere fact that different features are recited in mutually different dependent claims does not indicate that a combination of these features cannot be advantageously used.
Reference numerals appearing in the claims are by way of illustration only and shall have no limiting effect on the scope of the claims. In the preceding embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over, as one or more instructions or code, a computer-readable medium and executed by a hardware-based processing unit.
Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol. In this manner, computer-readable media generally may correspond to (1) tangible computer-readable storage media which is non-transitory or (2) a communication medium such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementation of the techniques described in this disclosure. A computer program product may include a computer-readable medium.
By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage, or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transient media, but are instead directed to non-transient, tangible storage media. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
Instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term "processor," as used herein may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. Also, the techniques could be fully implemented in one or more circuits or logic elements.
This application is a divisional application of United Kingdom patent application no. 2105596.7 (the "parent application"), also published under no. GB-A-2595051. The original claims of the parent application are repeated below in the present specification and form part of the content of this divisional application as filed.
1. A method of encoding a motion vector predictor index, comprising: generating a list of motion vector predictor candidates including a candidate for subblock collocated temporal prediction and a candidate for subblock Affine prediction; selecting one of the motion vector predictor candidates in the list; and generating a motion vector predictor index for the selected motion vector predictor candidate and encoding the generated motion vector predictor index using CABAC coding, all bits except for a first bit of the motion vector predictor index being coded by bypass CABAC and the first bit of the motion vector predictor index being coded by CABAC using a single context.
2. A method of decoding a motion vector predictor index, comprising: generating a list of motion vector predictor candidates including a candidate for subblock collocated temporal prediction and a candidate for subblock Affine prediction; decoding the motion vector predictor index using CABAC decoding, all bits except for a first bit of the motion vector predictor index being decoded by bypass CABAC and the first bit of the motion vector predictor index being decoded by CABAC using a single context; and using the decoded motion vector predictor index to identify one of the motion vector predictor candidates in the list.
3. A method as claimed in claim 1 or 2, wherein the motion vector predictor candidates are Merge candidates and the motion vector predictor index is a Merge index.
4. A device for encoding a motion vector predictor index, comprising: means for generating a list of motion vector predictor candidates including a candidate for subblock collocated temporal prediction and a candidate for subblock Affine prediction; means for selecting one of the motion vector predictor candidates in the list; and means for generating a motion vector predictor index for the selected motion vector predictor candidate and encoding the generated motion vector predictor index using CABAC coding, all bits except for a first bit of the motion vector predictor index being coded by bypass CABAC and the first bit of the motion vector predictor index being coded by CABAC using a single context.
5. A device for decoding a motion vector predictor index, comprising: means for generating a list of motion vector predictor candidates including a candidate for sub-block collocated temporal prediction and a candidate for sub-block Affine prediction; means for decoding the motion vector predictor index using CABAC decoding, all bits except for a first bit of the motion vector predictor index being decoded by bypass CABAC and the first bit of the motion vector predictor index being decoded by CABAC using a single context; and means for using the decoded motion vector predictor index to identify one of the motion vector predictor candidates in the list.
6. A device as claimed in claim 4 or 5, wherein the motion vector predictor candidates are Merge candidates and the motion vector predictor index is a Merge index.
7. A program which, when run on a computer or processor, causes the computer or processor to carry out the method of any one of claims 1 to 3.
8. A carrier medium carrying the program of claim 7.
The parent application is itself a divisional application of United Kingdom patent application no. 1815564.8 (the "grandparent application"), also published under no. GB-A-2579763. The original claims of the grandparent application are repeated below in the present specification and form part of the content of this divisional application as filed.
1. A method of encoding a motion vector predictor index, comprising: generating a list of motion vector predictor candidates including an ATMVP candidate; selecting one of the motion vector predictor candidates in the list; and generating a motion vector predictor index for the selected motion vector predictor candidate using CABAC coding, one or more bits of the motion vector predictor index being bypass CABAC coded.
2. A method as claimed in claim 1, wherein all bits except for a first bit of the motion vector predictor index are bypass CABAC coded.
3. A method of decoding a motion vector predictor index, comprising: generating a list of motion vector predictor candidates including an ATMVP candidate; decoding the motion vector predictor index using CABAC decoding, one or more bits of the motion vector predictor index being bypass CABAC decoded; and using the decoded motion vector predictor index to identify one of the motion vector predictor candidates in the list.
4. A method as claimed in claim 3, wherein all bits except for a first bit of the motion vector predictor index are bypass CABAC decoded.
5. A device for encoding a motion vector predictor index, comprising: means for generating a list of motion vector predictor candidates including an ATMVP candidate; means for selecting one of the motion vector predictor candidates in the list; and means for generating a motion vector predictor index for the selected motion vector predictor candidate using CABAC coding, one or more bits of the motion vector predictor index being bypass CABAC coded.
6. A device as claimed in claim 5, wherein all bits except for a first bit of the motion vector predictor index are bypass CABAC coded.
7. A device for decoding a motion vector predictor index, comprising: means for generating a list of motion vector predictor candidates including an ATMVP candidate; means for decoding the motion vector predictor index using CABAC decoding, one or more bits of the motion vector predictor index being bypass CABAC decoded; and means for using the decoded motion vector predictor index to identify one of the motion vector predictor candidates in the list.
8. A device as claimed in claim 7, wherein all bits except for a first bit of the motion vector predictor index are bypass CABAC decoded.
9. A method of encoding a motion vector predictor index, comprising: generating a list of motion vector predictor candidates; selecting one of the motion vector predictor candidates in the list; and generating a motion vector predictor index for the selected motion vector predictor candidate using CABAC coding, wherein a context variable for at least one bit of the motion vector predictor index of a current block depends on Affine Motion vector predictor candidates, if any, in the list.
10. A method as claimed in claim 9, wherein the context variable depends on position in said list of a first Affine Motion vector predictor candidate.
11. A method of decoding a motion vector predictor index, comprising: generating a list of motion vector predictor candidates; decoding the motion vector predictor index using CABAC decoding, wherein a context variable for at least one bit of the motion vector predictor index of a current block depends on Affine Motion vector predictor candidates, if any, in the list; and using the decoded motion vector predictor index to identify one of the motion vector predictor candidates in the list.
12. A method as claimed in claim 11, wherein the context variable depends on position in said list of a first Affine Motion vector predictor candidate.
13. A device for encoding a motion vector predictor index, comprising: means for generating a list of motion vector predictor candidates; means for selecting one of the motion vector predictor candidates in the list; and means for generating a motion vector predictor index for the selected motion vector predictor candidate using CABAC coding, wherein a context variable for at least one bit of the motion vector predictor index of a current block depends on Affine Motion vector predictor candidates, if any, in the list.
14. A device as claimed in claim 13, wherein the context variable depends on position in said list of a first Affine Motion vector predictor candidate.
15. A device for decoding a motion vector predictor index, comprising: means for generating a list of motion vector predictor candidates; means for decoding the motion vector predictor index using CABAC decoding, wherein a context variable for at least one bit of the motion vector predictor index of a current block depends on Affine Motion vector predictor candidates, if any, in the list; and means for using the decoded motion vector predictor index to identify one of the motion vector predictor candidates in the list.
16. A device as claimed in claim 15, wherein the context variable depends on position in said list of a first Affine Motion vector predictor candidate.
17. A program which, when run on a computer or processor, causes the computer or processor to carry out the method of any one of claims 1 to 4 or 9 to 12.
18. A carrier medium carrying the program of claim 17.
Claims
1. A method of encoding a motion vector predictor index, comprising: determining a maximum number of motion vector predictor candidates includable in a list of motion vector predictor candidates; generating such a list of motion vector predictor candidates having the determined maximum number of motion vector predictor candidates, the list including a candidate for subblock collocated temporal prediction and a candidate for subblock Affine prediction; selecting one of the motion vector predictor candidates in the list; and generating a motion vector predictor index for the selected motion vector predictor candidate and encoding the generated motion vector predictor index using CABAC coding, all bits except for a first bit of the motion vector predictor index being bypass CABAC coded and the first bit of the motion vector predictor index being coded by CABAC using a single context.
2. A method of decoding a motion vector predictor index, comprising: determining a maximum number of motion vector predictor candidates includable in a list of motion vector predictor candidates; generating such a list of motion vector predictor candidates having the determined maximum number of motion vector predictor candidates, the list including a candidate for subblock collocated temporal prediction and a candidate for subblock Affine prediction; decoding the motion vector predictor index using CABAC decoding, all bits except for a first bit of the motion vector predictor index being bypass CABAC decoded and the first bit of the motion vector predictor index being decoded by CABAC using a single context; and using the decoded motion vector predictor index to identify one of the motion vector predictor candidates in the list.
3. A method as claimed in claim 1 or 2, wherein the motion vector predictor candidates are Merge candidates and the motion vector predictor index is a Merge index.
4. A device for encoding a motion vector predictor index, comprising: means for determining a maximum number of motion vector predictor candidates includable in a list of motion vector predictor candidates; means for generating such a list of motion vector predictor candidates having the determined maximum number of motion vector predictor candidates, the list including a candidate for subblock collocated temporal prediction and a candidate for subblock Affine prediction; means for selecting one of the motion vector predictor candidates in the list; and means for generating a motion vector predictor index for the selected motion vector predictor candidate and encoding the generated motion vector predictor index using CABAC coding, all bits except for a first bit of the motion vector predictor index being bypass CABAC coded and the first bit of the motion vector predictor index being coded by CABAC using a single context.
5. A device for decoding a motion vector predictor index, comprising: means for determining a maximum number of motion vector predictor candidates includable in a list of motion vector predictor candidates; means for generating a list of motion vector predictor candidates having the determined maximum number of motion vector predictor candidates, the list including a candidate for sub-block collocated temporal prediction and a candidate for sub-block Affine prediction; means for decoding the motion vector predictor index using CABAC decoding, all bits except for a first bit of the motion vector predictor index being bypass CABAC decoded and the first bit of the motion vector predictor index being decoded by CABAC using a single context; and means for using the decoded motion vector predictor index to identify one of the motion vector predictor candidates in the list.
6. A device as claimed in claim 4 or 5, wherein the motion vector predictor candidates are Merge candidates and the motion vector predictor index is a Merge index.
7. A program which, when run on a computer or processor, causes the computer or processor to carry out the method of any one of claims 1 to 3.
8. A carrier medium carrying the program of claim 7.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GBGB1815443.5A GB201815443D0 (en) | 2018-09-21 | 2018-09-21 | Video coding and decoding |
GB2105596.7A GB2595051B (en) | 2018-09-21 | 2018-09-24 | Video coding and decoding |
Publications (3)
Publication Number | Publication Date |
---|---|
GB202207449D0 GB202207449D0 (en) | 2022-07-06 |
GB2606282A true GB2606282A (en) | 2022-11-02 |
GB2606282B GB2606282B (en) | 2023-05-24 |
Family ID: 64024158
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GBGB1815443.5A Ceased GB201815443D0 (en) | 2018-09-21 | 2018-09-21 | Video coding and decoding |
GB2207447.0A Active GB2606281B (en) | 2018-09-21 | 2018-09-24 | Video coding and decoding |
GB2207449.6A Active GB2606282B (en) | 2018-09-21 | 2018-09-24 | Video coding and decoding |
GB2105596.7A Active GB2595051B (en) | 2018-09-21 | 2018-09-24 | Video coding and decoding |
Country Status (1)
Country | Link |
---|---|
GB (4) | GB201815443D0 (en) |
Also Published As
Publication number | Publication date |
---|---|
GB2606281B (en) | 2023-05-24 |
GB202105596D0 (en) | 2021-06-02 |
GB202207447D0 (en) | 2022-07-06 |
GB2595051B (en) | 2022-07-06 |
GB201815443D0 (en) | 2018-11-07 |
GB2606281A (en) | 2022-11-02 |
GB202207449D0 (en) | 2022-07-06 |
GB2606282B (en) | 2023-05-24 |
GB2595051A (en) | 2021-11-17 |