WO2020088324A1 - Video image prediction method and device - Google Patents
Video image prediction method and device
- Publication number
- WO2020088324A1 (Application PCT/CN2019/112749)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- motion vector
- identifier
- prediction mode
- block
- maximum length
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims abstract description 180
- 230000033001 locomotion Effects 0.000 claims abstract description 890
- 239000013598 vector Substances 0.000 claims abstract description 618
- 230000004927 fusion Effects 0.000 claims description 159
- 230000015654 memory Effects 0.000 claims description 85
- 230000002123 temporal effect Effects 0.000 claims description 21
- 238000013519 translation Methods 0.000 claims description 6
- 238000012545 processing Methods 0.000 description 82
- 238000013139 quantization Methods 0.000 description 77
- 238000013461 design Methods 0.000 description 45
- 238000004891 communication Methods 0.000 description 41
- 239000000872 buffer Substances 0.000 description 36
- 238000010586 diagram Methods 0.000 description 25
- 238000006243 chemical reaction Methods 0.000 description 24
- 230000008569 process Effects 0.000 description 23
- 238000005516 engineering process Methods 0.000 description 17
- 230000003044 adaptive effect Effects 0.000 description 13
- 238000005070 sampling Methods 0.000 description 12
- 230000005540 biological transmission Effects 0.000 description 9
- 230000006835 compression Effects 0.000 description 9
- 238000007906 compression Methods 0.000 description 9
- 230000003287 optical effect Effects 0.000 description 7
- 238000003491 array Methods 0.000 description 6
- 238000001914 filtration Methods 0.000 description 6
- 230000006870 function Effects 0.000 description 6
- 238000004364 calculation method Methods 0.000 description 5
- 238000013500 data storage Methods 0.000 description 5
- 238000005192 partition Methods 0.000 description 5
- 230000011218 segmentation Effects 0.000 description 5
- 230000008878 coupling Effects 0.000 description 4
- 238000010168 coupling process Methods 0.000 description 4
- 238000005859 coupling reaction Methods 0.000 description 4
- 238000004458 analytical method Methods 0.000 description 3
- 238000004590 computer program Methods 0.000 description 3
- 238000010276 construction Methods 0.000 description 3
- 238000005457 optimization Methods 0.000 description 3
- 230000011664 signaling Effects 0.000 description 3
- 230000003068 static effect Effects 0.000 description 3
- 230000002146 bilateral effect Effects 0.000 description 2
- 230000000295 complement effect Effects 0.000 description 2
- 238000011161 development Methods 0.000 description 2
- 230000018109 developmental process Effects 0.000 description 2
- 239000000835 fiber Substances 0.000 description 2
- 238000009499 grossing Methods 0.000 description 2
- 238000003384 imaging method Methods 0.000 description 2
- 238000007689 inspection Methods 0.000 description 2
- 230000003993 interaction Effects 0.000 description 2
- 239000004973 liquid crystal related substance Substances 0.000 description 2
- 239000011159 matrix material Substances 0.000 description 2
- 238000012805 post-processing Methods 0.000 description 2
- 238000007781 pre-processing Methods 0.000 description 2
- 238000013138 pruning Methods 0.000 description 2
- 230000001360 synchronised effect Effects 0.000 description 2
- 230000009466 transformation Effects 0.000 description 2
- 238000012952 Resampling Methods 0.000 description 1
- 230000003190 augmentative effect Effects 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000004422 calculation algorithm Methods 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 239000003086 colorant Substances 0.000 description 1
- 238000009795 derivation Methods 0.000 description 1
- 238000001514 detection method Methods 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 230000005055 memory storage Effects 0.000 description 1
- 230000002441 reversible effect Effects 0.000 description 1
- 229910052710 silicon Inorganic materials 0.000 description 1
- 239000010703 silicon Substances 0.000 description 1
- 239000007787 solid Substances 0.000 description 1
- 238000000638 solvent extraction Methods 0.000 description 1
- 238000001228 spectrum Methods 0.000 description 1
- 238000012546 transfer Methods 0.000 description 1
- 230000001131 transforming effect Effects 0.000 description 1
- 238000009966 trimming Methods 0.000 description 1
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/527—Global motion vector estimation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
- G06T9/40—Tree coding, e.g. quadtree, octree
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/127—Prioritisation of hardware or computational resources
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/157—Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
- H04N19/159—Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/46—Embedding additional information in the video signal during the compression process
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/513—Processing of motion vectors
- H04N19/517—Processing of motion vectors by encoding
- H04N19/52—Processing of motion vectors by encoding by predictive encoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/90—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
- H04N19/96—Tree coding, e.g. quad-tree coding
Definitions
- the present application relates to the technical field of image coding and decoding, and in particular to a video image prediction method and device.
- Because video signals are intuitive and efficient, they have become the most important way for people to obtain information in daily life. However, a video signal contains a large amount of data and therefore requires substantial transmission bandwidth and storage space. To transmit and store video signals effectively, they must be compressed and encoded, and video compression technology is increasingly becoming an indispensable key technology in the field of video applications.
- the basic principle of video compression coding is to exploit the correlation among the spatial domain, the temporal domain, and codewords to remove as much redundancy as possible.
- the current popular method is to adopt a hybrid video coding framework based on image blocks, and realize video coding compression through steps such as prediction (including intra prediction and inter prediction), transformation, quantization, and entropy coding.
- motion estimation / motion compensation in inter prediction is a key technology that affects encoding / decoding performance.
- Existing inter-frame prediction uses block-based motion compensation (MC) based on a translational motion model, and adds sub-block fusion (merge) motion vector prediction on top of it. However, there is currently no feasible way to determine the maximum length of the candidate motion vector list in the sub-block fusion mode.
- MC block-based motion compensation
- the present application provides a video image prediction method and device, and provides a way to determine the maximum length of the candidate motion vector list in the sub-block fusion mode.
- an embodiment of the present application provides a video image prediction method, including:
- parse a first identifier from the code stream (for example: sps_affine_enable_flag);
- when the first identifier indicates that the candidate modes used by the to-be-processed block for inter prediction include an affine mode, parse a second identifier from the code stream (for example: five_minus_max_num_subblock_merge_cand or six_minus_max_num_subblock_merge_cand);
- the second identifier is used to indicate the maximum length of the first candidate motion vector list
- the first candidate motion vector list is a candidate motion vector list constructed when the to-be-processed block adopts the sub-block fusion prediction mode; the maximum length of the first candidate motion vector list is determined according to the second identifier.
- the above method provides a way to determine the maximum length of the candidate motion vector list in the sub-block fusion mode, which is simple and easy to implement.
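For orientation, the decode-side flow above can be sketched in C. This is a minimal illustrative sketch only, under assumptions not fixed by the application: a toy bit reader, u(1) / ue(v) coding for the two identifiers, and a caller-supplied K (for example 5, matching the example name five_minus_max_num_subblock_merge_cand). It is not the normative syntax of any standard; the refinements involving the first and second quantity values appear in the designs below.

```c
#include <stddef.h>
#include <stdint.h>

/* Toy MSB-first bit reader over a byte buffer (an assumption for
 * illustration; real decoders use their own bitstream interfaces). */
typedef struct {
    const uint8_t *data;
    size_t bitpos;
} BitReader;

static uint32_t read_flag(BitReader *br)              /* u(1) */
{
    uint32_t bit = (br->data[br->bitpos >> 3] >> (7 - (br->bitpos & 7))) & 1;
    br->bitpos++;
    return bit;
}

static uint32_t read_ue(BitReader *br)                /* ue(v), Exp-Golomb */
{
    int leading_zeros = 0;
    while (read_flag(br) == 0)
        leading_zeros++;
    uint32_t value = 1;
    for (int i = 0; i < leading_zeros; i++)
        value = (value << 1) | read_flag(br);
    return value - 1;
}

/* Parse the first identifier; when it indicates that affine mode is a
 * candidate for inter prediction, parse the second identifier and derive
 * the maximum length of the first candidate motion vector list from it. */
uint32_t parse_max_num_subblock_merge_cand(BitReader *br, uint32_t K)
{
    uint32_t sps_affine_enable_flag = read_flag(br);  /* first identifier  */
    if (!sps_affine_enable_flag)
        return 0; /* affine absent: handled by the designs described below */
    uint32_t k_minus_max = read_ue(br);               /* second identifier */
    return K - k_minus_max;           /* MaxNumSubblockMergeCand = K - ... */
}
```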
- before determining the maximum length of the first candidate motion vector list according to the second identifier, the method further includes: parsing a third identifier (for example: sps_sbtmvp_enabled_flag) from the code stream, where the third identifier is used to indicate the existence state of the advanced temporal motion vector prediction mode in the sub-block fusion prediction mode.
- a third identifier for example: sps_sbtmvp_enabled_flag
- the sub-block fusion prediction mode is composed of at least one of a planar motion vector prediction mode, the advanced temporal motion vector prediction mode, and the affine mode; when the third identifier indicates that the advanced temporal motion vector prediction mode does not exist in the sub-block fusion prediction mode, determining the maximum length of the first candidate motion vector list according to the second identifier includes: determining a first quantity value according to the third identifier, and determining the maximum length of the first candidate motion vector list according to the second identifier and the first quantity value.
- the first quantity value is equal to the number of motion vectors supported by the advanced temporal motion vector prediction mode.
- for example, when sps_sbtmvp_enabled_flag is 0, the first quantity value is equal to the number of motion vectors supported by the advanced temporal motion vector prediction mode.
- before determining the maximum length of the first candidate motion vector list according to the second identifier, the method further includes: parsing a fourth identifier (for example: sps_planar_enabled_flag) from the code stream.
- the fourth flag is used to indicate the existence state of the plane motion vector prediction mode in the sub-block fusion prediction mode.
- determining the maximum length of the first candidate motion vector list according to the second identifier includes: determining a second quantity value according to the fourth identifier, and determining the maximum length of the first candidate motion vector list according to the second identifier and the second quantity value.
- the second quantity value is equal to the number of motion vectors predicted by the plane motion vector prediction mode.
- determining the maximum length of the first candidate motion vector list according to the second identifier includes: determining the maximum length of the first candidate motion vector list according to the second identifier and the first quantity value.
- determining the maximum length of the first candidate motion vector list according to the second identifier includes: determining the maximum length of the first candidate motion vector list according to the second identifier, the first quantity value, and the second quantity value.
- the maximum length of the first candidate motion vector list is obtained according to the following formula:
- MaxNumSubblockMergeCand = K - K_minus_max_num_subblock_merge_cand;
- MaxNumSubblockMergeCand represents the maximum length of the first candidate motion vector list
- K_minus_max_num_subblock_merge_cand represents the second identifier
- K is a preset non-negative integer
- the maximum length of the first candidate motion vector list is determined according to the second identifier and the first quantity value
- the maximum length of the first candidate motion vector list is obtained according to the following formula:
- MaxNumSubblockMergeCand = K - K_minus_max_num_subblock_merge_cand - L1;
- MaxNumSubblockMergeCand represents the maximum length of the first candidate motion vector list
- K_minus_max_num_subblock_merge_cand represents the second identifier
- L1 represents the first quantity value
- K is a preset non-negative integer.
- the maximum length of the first candidate motion vector list is determined according to the second identifier and the second quantity value
- the maximum length of the first candidate motion vector list is obtained according to the following formula:
- MaxNumSubblockMergeCand = K - K_minus_max_num_subblock_merge_cand - L2;
- MaxNumSubblockMergeCand represents the maximum length of the first candidate motion vector list
- K_minus_max_num_subblock_merge_cand represents the second identifier
- L2 represents the second quantity value
- K is a preset non-negative integer.
- MaxNumSubblockMergeCand = K - K_minus_max_num_subblock_merge_cand - L1 - L2; where MaxNumSubblockMergeCand represents the maximum length of the first candidate motion vector list, K_minus_max_num_subblock_merge_cand represents the second identifier, L1 represents the first quantity value, L2 represents the second quantity value, and K is a preset non-negative integer.
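The four formula variants above collapse into a single expression. A minimal sketch, assuming (as in the designs above) that L1 and L2 are taken as 0 when the corresponding mode exists in the sub-block fusion prediction mode and as the mode's motion-vector count (1 in the examples above) when it does not:

```c
/* MaxNumSubblockMergeCand = K - K_minus_max_num_subblock_merge_cand - L1 - L2
 * K           : preset non-negative integer (e.g. 5 or 6, per the example
 *               names five_/six_minus_max_num_subblock_merge_cand)
 * k_minus_max : the parsed second identifier
 * L1          : first quantity value (subtracted when ATMVP is absent)
 * L2          : second quantity value (subtracted when planar mode is absent)
 */
unsigned max_num_subblock_merge_cand(unsigned K, unsigned k_minus_max,
                                     unsigned L1, unsigned L2)
{
    return K - k_minus_max - L1 - L2;
}
```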
- parsing the second identifier from the code stream includes: when the first identifier indicates that the candidate modes used by the to-be-processed block for inter prediction include the affine mode, parsing the second identifier from the code stream.
- when the third identifier indicates that the advanced temporal motion vector prediction mode exists in the sub-block fusion prediction mode, a third quantity value is determined according to the third identifier, and the maximum length of the first candidate motion vector list is determined according to the third quantity value.
- for example, sps_sbtmvp_enabled_flag equal to 1 indicates that the advanced temporal motion vector prediction mode exists in the sub-block fusion prediction mode.
- the third quantity value is equal to the number of motion vectors supported by the advanced time-domain motion vector prediction mode.
- the maximum length of the first candidate motion vector list is equal to the third quantity value.
- determining the maximum length of the first candidate motion vector list includes: determining a fourth quantity value according to a fourth identifier, and determining the maximum length of the first candidate motion vector list according to the third quantity value and the fourth quantity value.
- the maximum length of the first candidate motion vector list is equal to the sum of the third quantity value and the fourth quantity value.
- for example, sps_planar_enabled_flag equal to 1 indicates that the planar motion vector prediction mode exists in the sub-block fusion prediction mode, and the fourth quantity value is equal to the number of motion vectors predicted by the planar motion vector prediction mode.
- when the third identifier indicates that the advanced temporal motion vector prediction mode does not exist in the sub-block fusion prediction mode, and the fourth identifier indicates that the planar motion vector prediction mode exists in the sub-block fusion prediction mode, a fourth quantity value is determined according to the fourth identifier, and the maximum length of the first candidate motion vector list is determined according to the fourth quantity value.
- when the third identifier indicates that the advanced temporal motion vector prediction mode does not exist in the sub-block fusion prediction mode, and the fourth identifier indicates that the planar motion vector prediction mode does not exist in the sub-block fusion prediction mode, the maximum length of the first candidate motion vector list is zero.
- when the third identifier indicates that the advanced temporal motion vector prediction mode does not exist in the sub-block fusion prediction mode, the first candidate motion vector list has a maximum length of zero.
- the third identifier is a first value, and the first quantity value is 1.
- the fourth identifier is a third value
- the second quantity value is 1.
- the third identifier is a second value
- the third quantity value is 1.
- the fourth identifier is a fourth value
- the fourth quantity value is 1.
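When the first identifier indicates that only translational candidate modes are available, the second identifier is not used and the maximum length follows from the third and fourth identifiers alone. A sketch under the assumption stated above that each existing mode contributes one motion vector:

```c
/* Affine-disabled branch: the maximum length of the first candidate motion
 * vector list is the sum of the third quantity value (1 if ATMVP exists in
 * the sub-block fusion prediction mode) and the fourth quantity value
 * (1 if the planar mode exists); it is zero when neither mode exists. */
unsigned max_len_without_affine(int sps_sbtmvp_enabled_flag,
                                int sps_planar_enabled_flag)
{
    unsigned third_quantity  = sps_sbtmvp_enabled_flag ? 1u : 0u;
    unsigned fourth_quantity = sps_planar_enabled_flag ? 1u : 0u;
    return third_quantity + fourth_quantity;
}
```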
- an embodiment of the present application provides a video image prediction device, including:
- the parsing unit is used to parse a first identifier from the code stream; when the first identifier indicates that the candidate modes used by the to-be-processed block for inter prediction include an affine mode, parse a second identifier from the code stream, where the second identifier is used to indicate a maximum length of a first candidate motion vector list, and the first candidate motion vector list is a candidate motion vector list constructed when the to-be-processed block adopts a sub-block fusion prediction mode;
- the determining unit is configured to determine the maximum length of the first candidate motion vector list according to the second identifier.
- the parsing unit is further configured to parse a third identifier from the code stream before the maximum length of the first candidate motion vector list is determined according to the second identifier, where the third identifier is used to indicate the existence state of the advanced temporal motion vector prediction mode in the sub-block fusion prediction mode.
- the sub-block fusion prediction mode is composed of at least one of a planar motion vector prediction mode, the advanced temporal motion vector prediction mode, and the affine mode; when the third identifier indicates that the advanced temporal motion vector prediction mode does not exist in the sub-block fusion prediction mode, the determining unit is specifically configured to: determine a first quantity value according to the third identifier, and determine the maximum length of the first candidate motion vector list according to the second identifier and the first quantity value.
- before the maximum length of the first candidate motion vector list is determined according to the second identifier, the parsing unit is further configured to:
- a fourth identifier is parsed from the code stream, and the fourth identifier is used to indicate the existence state of the plane motion vector prediction mode in the sub-block fusion prediction mode.
- the determining unit is specifically used for:
- the maximum length of the first candidate motion vector list is determined according to the second identifier and the second quantity value.
- the determining unit is specifically used for:
- the maximum length of the first candidate motion vector list is determined according to the second identifier and the first quantity value.
- the determining unit is specifically used to:
- the maximum length of the first candidate motion vector list is determined according to the second identifier, the first quantity value, and the second quantity value.
- the maximum length of the first candidate motion vector list is obtained according to the following formula:
- MaxNumSubblockMergeCand = K - K_minus_max_num_subblock_merge_cand;
- MaxNumSubblockMergeCand represents the maximum length of the first candidate motion vector list
- K_minus_max_num_subblock_merge_cand represents the second identifier
- K is a preset non-negative integer
- the maximum length of the first candidate motion vector list is obtained according to the following formula:
- MaxNumSubblockMergeCand = K - K_minus_max_num_subblock_merge_cand - L1;
- MaxNumSubblockMergeCand represents the maximum length of the first candidate motion vector list
- K_minus_max_num_subblock_merge_cand represents the second identifier
- L1 represents the first quantity value
- K is a preset non-negative integer.
- the maximum length of the first candidate motion vector list is obtained according to the following formula:
- MaxNumSubblockMergeCand = K - K_minus_max_num_subblock_merge_cand - L2;
- MaxNumSubblockMergeCand represents the maximum length of the first candidate motion vector list
- K_minus_max_num_subblock_merge_cand represents the second identifier
- L2 represents the second quantity value
- K is a preset non-negative integer.
- the maximum length of the first candidate motion vector list is obtained according to the following formula:
- MaxNumSubblockMergeCand = K - K_minus_max_num_subblock_merge_cand - L1 - L2;
- MaxNumSubblockMergeCand represents the maximum length of the first candidate motion vector list
- K_minus_max_num_subblock_merge_cand represents the second identifier
- L1 represents the first quantity value
- L2 represents the second quantity value
- K is a preset non-negative integer.
- when parsing the second identifier from the code stream, the parsing unit is specifically configured to:
- the determining unit is further configured to: when the third identifier indicates that the advanced temporal motion vector prediction mode exists in the sub-block fusion prediction mode, determine a third quantity value according to the third identifier, and determine the maximum length of the first candidate motion vector list according to the third quantity value.
- when determining the maximum length of the first candidate motion vector list, the determining unit is specifically configured to:
- determine the maximum length of the first candidate motion vector list according to the third quantity value and the fourth quantity value.
- the determining unit is further configured to: when the first identifier indicates that the candidate modes adopted by the to-be-processed block for inter prediction include only the translational motion vector prediction mode, the third identifier indicates that the advanced temporal motion vector prediction mode does not exist in the sub-block fusion prediction mode, and the fourth identifier indicates that the planar motion vector prediction mode exists in the sub-block fusion prediction mode, determine a fourth quantity value according to the fourth identifier, and determine the maximum length of the first candidate motion vector list according to the fourth quantity value.
- when the third identifier indicates that the advanced temporal motion vector prediction mode does not exist in the sub-block fusion prediction mode, and the fourth identifier indicates that the planar motion vector prediction mode does not exist in the sub-block fusion prediction mode, the maximum length of the first candidate motion vector list is zero.
- when the third identifier indicates that the advanced temporal motion vector prediction mode does not exist in the sub-block fusion prediction mode, the first candidate motion vector list has a maximum length of zero.
- the maximum length of the first candidate motion vector list is equal to the third quantity value.
- the maximum length of the first candidate motion vector list is equal to the sum of the third quantity value and the fourth quantity value.
- the maximum length of the first candidate motion vector list is equal to the fourth quantity value.
- the third identifier is a first value, and the first quantity value is 1.
- the fourth identifier is a third value
- the second quantity value is 1.
- the third identifier is a second value
- the third quantity value is 1.
- the fourth identifier is a fourth value
- the fourth quantity value is 1.
- an embodiment of the present application provides an apparatus, which may be a decoder, including: a processor and a memory; the memory is used to store instructions, and when the apparatus is running, the processor executes the instructions stored in the memory, so that the apparatus performs the method provided in the first aspect or any design of the first aspect.
- the memory may be integrated in the processor or independent of the processor.
- an embodiment of the present application provides a video image prediction method, which is applied to the encoding side and includes:
- the first candidate motion vector list is a candidate motion vector list constructed when the to-be-processed block adopts a sub-block fusion prediction mode.
- an embodiment of the present application provides a video image prediction device, which may be an encoder, including: a processor and a memory; the memory is used to store instructions, and when the device is running, the processor executes the instructions stored in the memory, so that the device performs the method provided in the fourth aspect above. It should be noted that the memory may be integrated in the processor or independent of the processor.
- a sixth aspect of the present application provides a computer-readable storage medium having instructions stored therein, which when executed on a computer, causes the computer to execute the method described in the above aspects.
- a seventh aspect of the present application provides a computer program product containing instructions that, when run on a computer, causes the computer to perform the methods described in the above aspects.
- FIG. 1A is a block diagram of an example of a video encoding and decoding system 10 for implementing embodiments of the present application;
- FIG. 1B is a block diagram of an example of a video decoding system 40 for implementing an embodiment of the present application
- FIG. 2 is a block diagram of an example structure of an encoder 20 for implementing an embodiment of the present application
- FIG. 3 is a block diagram of an example structure of a decoder 30 for implementing an embodiment of the present application
- FIG. 4 is a block diagram of an example of a video decoding device 400 for implementing an embodiment of the present application
- FIG. 5 is a block diagram of another example of an encoding device or a decoding device used to implement embodiments of the present application;
- FIG. 6A is a schematic diagram of positions of motion information candidates for implementing embodiments of the present application;
- FIG. 6B is a schematic diagram of motion vector prediction for inherited control points used to implement embodiments of the present application;
- FIG. 6C is a schematic diagram of motion vector prediction for constructed control points used to implement embodiments of the present application;
- FIG. 6D is a schematic flowchart of a method for combining motion information of control points to obtain constructed control point motion information according to an embodiment of the present application;
- FIG. 6E is a schematic diagram of an ATMVP prediction method used to implement an embodiment of the present application;
- FIG. 7 is a schematic diagram of a planar motion vector prediction method used to implement an embodiment of the present application;
- FIG. 8A is a flowchart of an inter prediction method for implementing embodiments of the present application;
- FIG. 8B is a schematic diagram of constructing a candidate motion vector list for implementing an embodiment of the present application;
- FIG. 8C is a schematic diagram of a motion compensation unit used to implement an embodiment of the present application;
- FIG. 9 is a schematic flowchart of a video image prediction method for implementing an embodiment of the present application.
- FIG. 10 is a schematic flowchart of another video image prediction method for implementing an embodiment of the present application.
- FIG. 11 is a schematic flowchart of another video image prediction method used to implement an embodiment of the present application.
- FIG. 12 is a schematic flowchart of another video image prediction method for implementing an embodiment of the present application.
- FIG. 13 is a schematic diagram of an apparatus 1300 for implementing embodiments of the present application.
- FIG. 14 is a schematic diagram of an apparatus 1400 for implementing embodiments of the present application.
- FIG. 15 is a schematic diagram of an apparatus 1500 for implementing embodiments of the present application.
- the corresponding device may include one or more units, such as functional units, to perform the one or more described method steps (for example, one unit performing one or more steps, or multiple units each performing one or more of the multiple steps), even if such one or more units are not explicitly described or illustrated in the drawings.
- the corresponding method may include one step to perform the functionality of one or more units (for example, one step performing the functionality of one or more units, or multiple steps each performing the functionality of one or more of the multiple units), even if such one or more steps are not explicitly described or illustrated in the drawings.
- the features of the exemplary embodiments and / or aspects described herein may be combined with each other.
- Video coding generally refers to processing a sequence of pictures that form a video or video sequence.
- in the field of video coding, the terms "picture", "frame", or "image" may be used as synonyms.
- Video coding as used herein means video coding or video decoding.
- Video encoding is performed on the source side and usually includes processing (eg, by compressing) the original video picture to reduce the amount of data required to represent the video picture, thereby storing and / or transmitting more efficiently.
- Video decoding is performed on the destination side and usually involves inverse processing relative to the encoder to reconstruct the video picture.
- the "encoding" of video pictures involved in the embodiments should be understood as referring to the “encoding” or “decoding” of video sequences.
- the combination of the encoding part and the decoding part is also called codec (encoding and decoding).
- the video sequence includes a series of pictures, which are further divided into slices, and the slices are further divided into blocks.
- Video encoding is performed in units of blocks.
- in the H.264 standard, the concept of a block is further extended: a macroblock (macroblock, MB) is used, and the macroblock may be further divided into multiple prediction blocks (partitions) that can be used for predictive coding.
- in the high-efficiency video coding (HEVC) standard, basic concepts such as coding unit (CU), prediction unit (PU), and transform unit (TU) are adopted.
- CU coding unit
- PU prediction unit
- TU transform unit
- a variety of block units are divided, and a new tree-based structure is used for description.
- the CU can be divided into smaller CUs according to the quadtree, and the smaller CU can be further divided to form a quadtree structure.
- the CU is the basic unit for dividing and coding the coded image.
- PU can correspond to the prediction block and is the basic unit of predictive coding.
- the CU is further divided into multiple PUs according to the division mode.
- the TU can correspond to the transform block and is the basic unit for transforming the prediction residual.
- PU or TU they all belong to the concept of block (or image block) in essence.
- the CTU is split into multiple CUs by using a quadtree structure represented as a coding tree.
- a decision is made at the CU level whether to use inter-picture (temporal) or intra-picture (spatial) prediction to encode picture regions.
- Each CU can be further split into one, two, or four PUs according to the PU split type.
- the same prediction process is applied within a PU, and related information is transmitted to the decoder on the basis of the PU.
- the CU may be divided into transform units (TU) according to other quadtree structures similar to the coding tree used for the CU.
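As an illustration of the quadtree splitting just described (a sketch only; the split decision below is a placeholder assumption, whereas a real codec derives it from rate-distortion optimization or parses it from the bitstream):

```c
#include <stdio.h>

/* Placeholder split decision (an assumption for illustration):
 * split every block down to the minimum CU size. */
static int should_split(int size, int min_cu_size)
{
    return size > min_cu_size;
}

/* Recursively split a CTU into CUs along a quadtree and report each leaf CU. */
static void split_quadtree(int x, int y, int size, int min_cu_size)
{
    if (should_split(size, min_cu_size)) {
        int half = size / 2;
        split_quadtree(x,        y,        half, min_cu_size);
        split_quadtree(x + half, y,        half, min_cu_size);
        split_quadtree(x,        y + half, half, min_cu_size);
        split_quadtree(x + half, y + half, half, min_cu_size);
    } else {
        printf("CU at (%d,%d), %dx%d\n", x, y, size, size);
    }
}

int main(void)
{
    split_quadtree(0, 0, 64, 32);  /* a 64x64 CTU split into 32x32 CUs */
    return 0;
}
```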
- a quad-tree and binary tree (QTBT) partition structure is used to split the coding blocks.
- the CU may have a square or rectangular shape.
- the image block to be encoded in the current encoded image may be referred to as the current block.
- the reference block is a block that provides a reference signal for the current block, where the reference signal represents a pixel value within the image block.
- the block in the reference image that provides the prediction signal for the current block may be a prediction block, where the prediction signal represents a pixel value or a sample value or a sample signal within the prediction block. For example, after traversing multiple reference blocks, the best reference block is found. This best reference block will provide a prediction for the current block. This block is called a prediction block.
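To make "traversing multiple reference blocks" concrete, a toy full-search motion estimation that keeps the candidate with the minimum sum of absolute differences (SAD) might look as follows. This is a generic sketch, not the method claimed by the application, and it assumes the whole search window lies inside the reference picture:

```c
#include <stdint.h>
#include <stdlib.h>
#include <limits.h>

/* Sum of absolute differences between the current block and one candidate. */
static int sad(const uint8_t *cur, int cur_stride,
               const uint8_t *ref, int ref_stride, int bw, int bh)
{
    int sum = 0;
    for (int y = 0; y < bh; y++)
        for (int x = 0; x < bw; x++)
            sum += abs((int)cur[y * cur_stride + x] -
                       (int)ref[y * ref_stride + x]);
    return sum;
}

/* Traverse all candidate reference blocks in a +/-range window around the
 * current block position (cx, cy); the best-matching block becomes the
 * prediction block, and (best_mvx, best_mvy) is the motion vector. */
void full_search(const uint8_t *cur_blk, int cur_stride,
                 const uint8_t *ref_pic, int ref_stride,
                 int cx, int cy, int bw, int bh, int range,
                 int *best_mvx, int *best_mvy)
{
    int best_cost = INT_MAX;
    for (int dy = -range; dy <= range; dy++) {
        for (int dx = -range; dx <= range; dx++) {
            const uint8_t *cand = ref_pic + (cy + dy) * ref_stride + (cx + dx);
            int cost = sad(cur_blk, cur_stride, cand, ref_stride, bw, bh);
            if (cost < best_cost) {
                best_cost = cost;
                *best_mvx = dx;
                *best_mvy = dy;
            }
        }
    }
}
```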
- in the case of lossless video coding, the original video picture can be reconstructed, that is, the reconstructed video picture has the same quality as the original video picture (assuming no transmission loss or other data loss during storage or transmission).
- in the case of lossy video coding, further compression is performed, for example, by quantization to reduce the amount of data required to represent the video picture, and the decoder side cannot fully reconstruct the video picture, that is, the quality of the reconstructed video picture is lower or worse than that of the original video picture.
- several video coding standards since H.261 belong to the class of "lossy hybrid video codecs" (that is, they combine spatial and temporal prediction in the sample domain with 2D transform coding for applying quantization in the transform domain).
- Each picture of the video sequence is usually divided into non-overlapping block sets, which are usually encoded at the block level.
- the encoder side usually processes the encoded video at the block (video block) level.
- the prediction block is generated by spatial (intra-picture) prediction and temporal (inter-picture) prediction.
- the encoder duplicates the decoder processing loop so that the encoder and decoder generate the same prediction (eg, intra prediction and inter prediction) and / or reconstruction for processing, ie, encoding subsequent blocks.
- FIG. 1A exemplarily shows a schematic block diagram of a video encoding and decoding system 10 applied in an embodiment of the present application.
- the video encoding and decoding system 10 may include a source device 12 and a destination device 14, the source device 12 generates encoded video data, and therefore, the source device 12 may be referred to as a video encoding device.
- the destination device 14 may decode the encoded video data generated by the source device 12, and therefore, the destination device 14 may be referred to as a video decoding device.
- Various implementations of source device 12, destination device 14, or both may include one or more processors and memory coupled to the one or more processors.
- Source device 12 and destination device 14 may include various devices, including desktop computers, mobile computing devices, notebook (for example, laptop) computers, tablet computers, set-top boxes, telephone handsets such as so-called "smart" phones, televisions, cameras, display devices, digital media players, video game consoles, in-vehicle computers, wireless communication devices, or the like.
- although FIG. 1A depicts the source device 12 and the destination device 14 as separate devices, device embodiments may also include both the source device 12 and the destination device 14 or the functionality of both, that is, the source device 12 or corresponding functionality and the destination device 14 or corresponding functionality.
- the source device 12 or corresponding functionality and the destination device 14 or corresponding functionality may be implemented using the same hardware and / or software, or using separate hardware and / or software, or any combination thereof .
- a communication connection can be made between the source device 12 and the destination device 14 via the link 13, and the destination device 14 can receive the encoded video data from the source device 12 via the link 13.
- Link 13 may include one or more media or devices capable of moving encoded video data from source device 12 to destination device 14.
- link 13 may include one or more communication media that enable source device 12 to transmit encoded video data directly to destination device 14 in real time.
- the source device 12 may modulate the encoded video data according to a communication standard (eg, a wireless communication protocol), and may transmit the modulated video data to the destination device 14.
- the one or more communication media may include wireless and / or wired communication media, such as a radio frequency (RF) spectrum or one or more physical transmission lines.
- RF radio frequency
- the one or more communication media may form part of a packet-based network, such as a local area network, a wide area network, or a global network (eg, the Internet).
- the one or more communication media may include routers, switches, base stations, or other devices that facilitate communication from source device 12 to destination device 14.
- the source device 12 includes an encoder 20.
- the source device 12 may further include a picture source 16, a picture pre-processor 18, and a communication interface 22.
- the encoder 20, the picture source 16, the picture preprocessor 18, and the communication interface 22 may be hardware components in the source device 12, or may be software programs in the source device 12. They are described as follows:
- Picture source 16, which may include or be any kind of picture capture device for capturing, for example, a real-world picture, and / or any kind of device for generating pictures or comments (for screen content encoding, some text on the screen is also considered part of the picture or image to be encoded), for example, a computer graphics processor for generating a computer-animated picture, or any kind of device for acquiring and / or providing a real-world picture or a computer-animated picture (for example, screen content or virtual reality (VR) pictures), and / or any combination thereof (for example, an augmented reality (AR) picture).
- the picture source 16 may be a camera for capturing pictures or a memory for storing pictures.
- the picture source 16 may also include any type of (internal or external) interface that stores previously captured or generated pictures and / or acquires or receives pictures.
- when the picture source 16 is a camera, the picture source 16 may be, for example, a local camera or a camera integrated in the source device; when the picture source 16 is a memory, the picture source 16 may be, for example, a local memory or a memory integrated in the source device.
- the interface may be, for example, an external interface that receives pictures from an external video source.
- the external video source is, for example, an external picture capture device, such as a camera, an external memory, or an external picture generation device.
- the external picture generation device is, for example, an external computer graphics processor, computer, or server.
- the interface may be any type of interface according to any proprietary or standardized interface protocol, such as a wired or wireless interface, an optical interface.
- the picture can be regarded as a two-dimensional array or matrix of pixels (picture elements).
- the pixels in the array can also be called sampling points.
- the number of sampling points of the array or picture in the horizontal and vertical directions (or axis) defines the size and / or resolution of the picture.
- three color components are usually used, that is, a picture can be represented as, or contain, three sampling arrays.
- the picture includes corresponding red, green, and blue sampling arrays.
- each pixel is usually expressed in a brightness / chroma format or color space.
- for example, a picture in YUV format includes a luminance component indicated by Y (sometimes also indicated by L) and two chrominance components indicated by U and V.
- the luma component Y represents brightness or gray-level intensity (for example, the two are the same in a gray-scale picture), and the two chroma components U and V represent chrominance or color information components.
- the picture in the YUV format includes a luminance sampling array of luminance sampling values (Y), and two chrominance sampling arrays of chrominance values (U and V).
- RGB-format pictures can be converted or transformed into YUV format and vice versa; this process is also called color transformation or conversion. If a picture is black and white, the picture may include only the luminance sampling array.
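As a concrete example of such a conversion, one RGB pixel can be mapped to YUV with fixed coefficients; the full-range BT.601 matrix used below is an assumption for illustration, since the document does not fix a particular conversion matrix:

```c
#include <stdint.h>

/* Convert one RGB pixel to YUV (full-range BT.601 coefficients, assumed). */
void rgb_to_yuv(uint8_t r, uint8_t g, uint8_t b,
                uint8_t *y, uint8_t *u, uint8_t *v)
{
    *y = (uint8_t)( 0.299 * r + 0.587 * g + 0.114 * b);        /* luminance */
    *u = (uint8_t)(-0.169 * r - 0.331 * g + 0.500 * b + 128);  /* chroma Cb */
    *v = (uint8_t)( 0.500 * r - 0.419 * g - 0.081 * b + 128);  /* chroma Cr */
}
```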
- the picture transmitted from the picture source 16 to the picture processor may also be referred to as original picture data 17.
- the picture pre-processor 18 is configured to receive the original picture data 17 and perform pre-processing on the original picture data 17 to obtain the pre-processed picture 19 or the pre-processed picture data 19.
- the pre-processing performed by the picture pre-processor 18 may include trimming, color format conversion (eg, conversion from RGB format to YUV format), color grading, or denoising.
- the encoder 20 (or video encoder 20) is used to receive the pre-processed picture data 19 and process the pre-processed picture data 19 in a related prediction mode (such as the prediction modes in various embodiments herein), thereby providing the encoded picture data 21 (structural details of the encoder 20 will be further described below based on FIG. 2, FIG. 4, or FIG. 5).
- the encoder 20 may be used to execute various embodiments described below to implement the application of the chroma block prediction method described in the present application on the encoding side.
- the communication interface 22 can be used to receive the encoded picture data 21, and can transmit the encoded picture data 21 to the destination device 14 or any other device (such as a memory) through the link 13 for storage or direct reconstruction.
- the other device may be any device used for decoding or storage.
- the communication interface 22 may be used, for example, to encapsulate the encoded picture data 21 into a suitable format, such as a data packet, for transmission on the link 13.
- the destination device 14 includes a decoder 30, and optionally, the destination device 14 may further include a communication interface 28, a picture post-processor 32, and a display device 34. They are described as follows:
- the communication interface 28 may be used to receive the encoded picture data 21 from the source device 12 or any other source, such as a storage device, such as an encoded picture data storage device.
- the communication interface 28 can be used to transmit or receive the encoded picture data 21 via the link 13 between the source device 12 and the destination device 14 or via any type of network.
- the link 13 is, for example, a direct wired or wireless connection.
- the category of network is, for example, a wired or wireless network or any combination thereof, or any category of private and public networks, or any combination thereof.
- the communication interface 28 may be used, for example, to decapsulate the data packet transmitted by the communication interface 22 to obtain the encoded picture data 21.
- Both the communication interface 28 and the communication interface 22 can be configured as one-way or two-way communication interfaces, and can be used, for example, to send and receive messages to establish a connection, and to confirm and exchange any other information related to the communication link and / or to data transmission such as the transmission of encoded picture data.
- the decoder 30 (or video decoder 30) is used to receive the encoded picture data 21 and provide the decoded picture data 31 or the decoded picture 31 (structural details of the decoder 30 will be further described below based on FIG. 3, FIG. 4, or FIG. 5).
- the decoder 30 may be used to execute various embodiments described below to implement the application of the chroma block prediction method described in the present application on the decoding side.
- the picture post-processor 32 is configured to perform post-processing on the decoded picture data 31 (also referred to as reconstructed picture data) to obtain post-processed picture data 33.
- the post-processing performed by the picture post-processor 32 may include: color format conversion (for example, conversion from YUV format to RGB format), color adjustment, retouching or resampling, or any other processing, and the picture post-processor 32 may also be used to transmit the post-processed picture data 33 to the display device 34.
- the display device 34 is used to receive post-processed picture data 33 to display pictures to, for example, a user or a viewer.
- the display device 34 may be or may include any type of display for presenting reconstructed pictures, for example, an integrated or external display or monitor.
- the display may include a liquid crystal display (LCD), an organic light emitting diode (OLED) display, a plasma display, a projector, a micro LED display, a liquid crystal on silicon (LCoS) display, a digital light processor (DLP), or any other kind of display.
- although FIG. 1A depicts source device 12 and destination device 14 as separate devices, device embodiments may also include both source device 12 and destination device 14 or the functionality of both, that is, source device 12 or corresponding functionality and destination device 14 or corresponding functionality.
- the source device 12 or corresponding functionality and the destination device 14 or corresponding functionality may be implemented using the same hardware and / or software, or using separate hardware and / or software, or any combination thereof .
- Source device 12 and destination device 14 may include any of a variety of devices, including any kind of handheld or stationary device, for example, notebook or laptop computers, mobile phones, smartphones, tablets or tablet computers, cameras, desktop computers, set-top boxes, televisions, in-vehicle devices, display devices, digital media players, video game consoles, video streaming devices (such as content service servers or content distribution servers), broadcast receiver devices, broadcast transmitter devices, and so on, and may use no operating system or any kind of operating system.
- Both the encoder 20 and the decoder 30 may be implemented as any of various suitable circuits, for example, one or more microprocessors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), discrete logic, hardware, or any combination thereof.
- DSPs digital signal processors
- ASIC application-specific integrated circuits
- FPGA field-programmable gate array
- if the technology is partially implemented in software, the device may store the instructions of the software in a suitable non-transitory computer-readable storage medium, and may use one or more processors to execute the instructions in hardware to perform the techniques of the present disclosure. Any one of the foregoing (including hardware, software, a combination of hardware and software, etc.) may be regarded as one or more processors.
- the video encoding and decoding system 10 shown in FIG. 1A is only an example, and the technology of the present application may be applied to video encoding settings that do not necessarily include any data communication between encoding and decoding devices (for example, video encoding or video decoding).
- data can be retrieved from local storage, streamed on the network, and so on.
- the video encoding device may encode the data and store the data to the memory, and / or the video decoding device may retrieve the data from the memory and decode the data.
- encoding and decoding are performed by devices that do not communicate with each other but only encode data to and / or retrieve data from memory and decode the data.
- FIG. 1B is an explanatory diagram of an example of a video coding system 40 including the encoder 20 of FIG. 2 and / or the decoder 30 of FIG. 3 according to an exemplary embodiment.
- the video decoding system 40 can implement a combination of various technologies in the embodiments of the present application.
- the video decoding system 40 may include an imaging device 41, an encoder 20, a decoder 30 (and / or a video encoder / decoder implemented by the logic circuit 47 of the processing unit 46), an antenna 42 , One or more processors 43, one or more memories 44, and / or display devices 45.
- the imaging device 41, the antenna 42, the processing unit 46, the logic circuit 47, the encoder 20, the decoder 30, the processor 43, the memory 44, and / or the display device 45 can communicate with each other.
- the video coding system 40 is shown with the encoder 20 and the decoder 30, in different examples, the video coding system 40 may include only the encoder 20 or only the decoder 30.
- antenna 42 may be used to transmit or receive an encoded bitstream of video data.
- the display device 45 may be used to present video data.
- the logic circuit 47 may be implemented by the processing unit 46.
- the processing unit 46 may include application-specific integrated circuit (ASIC) logic, a graphics processor, a general-purpose processor, and the like.
- the video decoding system 40 may also include an optional processor 43, which may similarly include application-specific integrated circuit (ASIC) logic, a graphics processor, a general-purpose processor, and the like.
- the logic circuit 47 may be implemented by hardware, such as dedicated hardware for video encoding, etc., and the processor 43 may be implemented by general-purpose software, an operating system, or the like.
- the memory 44 may be any type of memory, such as volatile memory (for example, static random access memory (Static Random Access Memory, SRAM), dynamic random access memory (Dynamic Random Access Memory, DRAM), etc.) or non-volatile Memory (for example, flash memory, etc.), etc.
- volatile memory for example, static random access memory (Static Random Access Memory, SRAM), dynamic random access memory (Dynamic Random Access Memory, DRAM), etc.
- non-volatile Memory for example, flash memory, etc.
- the memory 44 may be implemented by cache memory.
- the logic circuit 47 can access the memory 44 (eg, to implement an image buffer).
- the logic circuit 47 and / or the processing unit 46 may include memory (eg, cache, etc.) for implementing image buffers and the like.
- the encoder 20 implemented by a logic circuit may include an image buffer (eg, implemented by the processing unit 46 or the memory 44) and a graphics processing unit (eg, implemented by the processing unit 46).
- the graphics processing unit may be communicatively coupled to the image buffer.
- the graphics processing unit may include the encoder 20 implemented by a logic circuit 47 to implement the various modules discussed with reference to FIG. 2 and / or any other encoder system or subsystem described herein.
- Logic circuits can be used to perform the various operations discussed herein.
- decoder 30 may be implemented by logic circuit 47 in a similar manner to implement the various modules discussed with reference to decoder 30 of FIG. 3 and / or any other decoder systems or subsystems described herein.
- the decoder 30 implemented by the logic circuit may include an image buffer (for example, implemented by the processing unit 46 or the memory 44) and a graphics processing unit (for example, implemented by the processing unit 46).
- the graphics processing unit may be communicatively coupled to the image buffer.
- the graphics processing unit may include a decoder 30 implemented by a logic circuit 47 to implement various modules discussed with reference to FIG. 3 and / or any other decoder system or subsystem described herein.
- antenna 42 may be used to receive an encoded bitstream of video data.
- the encoded bitstream may include data related to encoded video frames discussed herein, such as indicators, index values, mode selection data, and the like, for example, data related to coding partitions (e.g., transform coefficients or quantized transform coefficients, optional indicators (as discussed), and/or data defining the coding partitions).
- the video coding system 40 may also include a decoder 30 coupled to the antenna 42 and used to decode the encoded bitstream.
- the display device 45 is used to present video frames.
- the decoder 30 may be used to perform the reverse process.
- the decoder 30 may be used to receive and parse such syntax elements and decode the relevant video data accordingly.
- encoder 20 may entropy encode syntax elements into an encoded video bitstream. In such instances, decoder 30 may parse such syntax elements and decode the relevant video data accordingly.
- the encoder 20 and the decoder 30 in the embodiments of the present application may be, for example, encoders/decoders corresponding to video standard protocols such as H.263, H.264, HEVC, MPEG-2, MPEG-4, VP8, and VP9, or to next-generation video standard protocols (such as H.266).
- FIG. 2 shows a schematic / conceptual block diagram of an example of an encoder 20 for implementing an embodiment of the present application.
- the encoder 20 includes a residual calculation unit 204, a transform processing unit 206, a quantization unit 208, an inverse quantization unit 210, an inverse transform processing unit 212, a reconstruction unit 214, a buffer 216, a loop filter unit 220, a decoded picture buffer (DPB) 230, a prediction processing unit 260, and an entropy encoding unit 270.
- the prediction processing unit 260 may include an inter prediction unit 244, an intra prediction unit 254, and a mode selection unit 262.
- the inter prediction unit 244 may include a motion estimation unit and a motion compensation unit (not shown).
- the encoder 20 shown in FIG. 2 may also be referred to as a hybrid video encoder or a video encoder based on a hybrid video codec.
- the residual calculation unit 204, the transform processing unit 206, the quantization unit 208, the prediction processing unit 260, and the entropy encoding unit 270 form the forward signal path of the encoder 20, while, for example, the inverse quantization unit 210, the inverse transform processing unit 212, the reconstruction unit 214, the buffer 216, the loop filter 220, the decoded picture buffer (DPB) 230, and the prediction processing unit 260 form the backward signal path of the encoder, where the backward signal path of the encoder corresponds to the signal path of the decoder (see the decoder 30 in FIG. 3).
- the encoder 20 receives, for example through an input 202, a picture 201 or an image block 203 of the picture 201, for example, a picture in the sequence of pictures forming a video or video sequence.
- the image block 203 may also be referred to as a current picture block or a picture block to be coded.
- the picture 201 may be referred to as a current picture or a picture to be coded (especially when, in video coding, the current picture is distinguished from other pictures, such as previously encoded and/or decoded pictures of the same video sequence, that is, the video sequence that also includes the current picture).
- An embodiment of the encoder 20 may include a division unit (not shown in FIG. 2) for dividing the picture 201 into a plurality of blocks such as an image block 203, usually into a plurality of non-overlapping blocks.
- the division unit can be used to use the same block size and the corresponding grid defining the block size for all pictures of the video sequence, or to change the block size between pictures, subsets, or groups of pictures, and to divide each picture into the corresponding blocks.
- the prediction processing unit 260 of the encoder 20 may be used to perform any combination of the above division techniques.
- the image block 203 also is, or can be regarded as, a two-dimensional array or matrix of sampling points with sample values, although its size is smaller than that of the picture 201.
- the image block 203 may include, for example, one sampling array (for example, a luma array in the case of a black-and-white picture 201), or three sampling arrays (for example, one luma array and two chroma arrays in the case of a color picture), or any other number and/or kind of arrays depending on the applied color format.
- the number of sampling points in the horizontal and vertical directions (or axes) of the image block 203 defines the size of the image block 203.
- the encoder 20 shown in FIG. 2 is used to encode the picture 201 block by block, for example, to perform encoding and prediction on each image block 203.
- the residual calculation unit 204 is used to calculate the residual block 205 based on the picture image block 203 and the prediction block 265 (further details of the prediction block 265 are provided below), for example, by subtracting the sample values of the prediction block 265 from the sample values of the picture image block 203 sample by sample (pixel by pixel) to obtain the residual block 205 in the sample domain.
- the transform processing unit 206 is used to apply a transform such as discrete cosine transform (DCT) or discrete sine transform (DST) to the sample values of the residual block 205 to obtain transform coefficients 207 in the transform domain .
- the transform coefficient 207 may also be referred to as a transform residual coefficient, and represents a residual block 205 in the transform domain.
- the transform processing unit 206 may be used to apply integer approximations of DCT / DST, such as the transform specified by HEVC / H.265. Compared with the orthogonal DCT transform, this integer approximation is usually scaled by a factor. In order to maintain the norm of the residual block processed by the forward and inverse transform, an additional scaling factor is applied as part of the transform process.
- the scaling factor is usually selected based on certain constraints, for example, the scaling factor is a power of two used for the shift operation, the bit depth of the transform coefficient, the accuracy, and the trade-off between implementation cost and so on.
- a specific scaling factor can be specified for the inverse transform by the inverse transform processing unit 312 on the decoder 30 side (and for the corresponding inverse transform by the inverse transform processing unit 212 on the encoder 20 side), and, accordingly, a corresponding scaling factor can be specified for the forward transform by the transform processing unit 206 on the encoder 20 side.
- the quantization unit 208 is used to quantize the transform coefficient 207 by, for example, applying scalar quantization or vector quantization to obtain the quantized transform coefficient 209.
- the quantized transform coefficient 209 may also be referred to as the quantized residual coefficient 209.
- the quantization process can reduce the bit depth associated with some or all of the transform coefficients 207. For example, n-bit transform coefficients can be rounded down to m-bit transform coefficients during quantization, where n is greater than m.
- the degree of quantization can be modified by adjusting the quantization parameter (QP). For example, for scalar quantization, different scaling may be applied to achieve finer or coarser quantization.
- a smaller quantization step size corresponds to a finer quantization
- a larger quantization step size corresponds to a coarser quantization.
- a suitable quantization step size can be indicated by a quantization parameter (QP).
- the quantization parameter may be an index of a predefined set of suitable quantization steps.
- smaller quantization parameters may correspond to fine quantization (smaller quantization step size)
- larger quantization parameters may correspond to coarse quantization (larger quantization step size)
- the quantization may include division by a quantization step size, and the corresponding inverse quantization performed by, for example, the inverse quantization unit 210 may include multiplication by the quantization step size.
- Embodiments according to some standards such as HEVC may use quantization parameters to determine the quantization step size.
- the quantization step size can be calculated from the quantization parameter using a fixed-point approximation of an equation that includes division. Additional scaling factors can be introduced for quantization and inverse quantization to restore the norm of the residual block, which may be modified due to the scaling used in the fixed-point approximation of the equation for the quantization step size and the quantization parameter.
- the scale of inverse transform and inverse quantization may be combined.
- a custom quantization table can be used and signaled from the encoder to the decoder in a bitstream, for example. Quantization is a lossy operation, where the larger the quantization step, the greater the loss.
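- as a rough illustration of the QP-to-step-size relationship described above, the following Python sketch assumes the commonly cited HEVC-style mapping Qstep ≈ 2^((QP−4)/6) (an assumption for illustration; real implementations use fixed-point scaling tables) and shows that quantization is lossy and the loss grows with the step size:

```python
def qp_to_step(qp: int) -> float:
    """Map a quantization parameter to a quantization step size.

    Illustrative only: HEVC-style codecs use roughly Qstep = 2^((QP-4)/6),
    doubling the step size for every increase of 6 in QP.
    """
    return 2.0 ** ((qp - 4) / 6.0)

def quantize(coeff: int, qp: int) -> int:
    """Scalar quantization: divide by the step size and round."""
    return round(coeff / qp_to_step(qp))

def dequantize(level: int, qp: int) -> float:
    """Inverse quantization: multiply by the step size (lossy: the
    original coefficient is generally not recovered exactly)."""
    return level * qp_to_step(qp)

coeff = 1234
for qp in (22, 27, 32, 37):
    level = quantize(coeff, qp)
    recon = dequantize(level, qp)
    print(f"QP={qp}: level={level}, reconstructed={recon:.1f}, error={coeff - recon:+.1f}")
```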
- the inverse quantization unit 210 is used to apply the inverse quantization of the quantization unit 208 to the quantized coefficients to obtain the inverse-quantized coefficients 211, for example, applying, based on or using the same quantization step size as the quantization unit 208, the inverse of the quantization scheme applied by the quantization unit 208.
- the inverse-quantized coefficients 211 may also be referred to as the inverse-quantized residual coefficients 211 and correspond to the transform coefficients 207, although due to the loss caused by quantization they are usually not identical to the transform coefficients.
- the inverse transform processing unit 212 is used to apply the inverse transform of the transform applied by the transform processing unit 206, for example, an inverse discrete cosine transform (DCT) or an inverse discrete sine transform (DST), to obtain the inverse transform block 213 in the sample domain.
- the inverse transform block 213 may also be referred to as an inverse transform dequantized block 213 or an inverse transform residual block 213.
- the reconstruction unit 214 (e.g., summer 214) is used to add the inverse transform block 213 (i.e., the reconstructed residual block 213) to the prediction block 265 to obtain the reconstructed block 215 in the sample domain, for example, by adding the sample values of the reconstructed residual block 213 and the sample values of the prediction block 265.
- a buffer unit 216 (or simply "buffer" 216), such as a line buffer 216, is used to buffer or store the reconstructed block 215 and corresponding sample values for, for example, intra prediction.
- the encoder may be used to use the unfiltered reconstructed blocks and / or corresponding sample values stored in the buffer unit 216 for any type of estimation and / or prediction, such as intra prediction.
- an embodiment of the encoder 20 may be configured such that the buffer unit 216 is used not only for storing the reconstructed block 215 for intra prediction 254 but also for the loop filter unit 220 (not shown in FIG. 2), and/or such that, for example, the buffer unit 216 and the decoded picture buffer 230 form one buffer.
- Other embodiments may be used to use the filtered block 221 and / or blocks or samples from the decoded picture buffer 230 (neither shown in FIG. 2) as an input or basis for intra prediction 254.
- the loop filter unit 220 (or simply “loop filter” 220) is used to filter the reconstructed block 215 to obtain the filtered block 221, so as to smooth pixel transitions or improve video quality.
- the loop filter unit 220 is intended to represent one or more loop filters, such as a deblocking filter, a sample-adaptive offset (SAO) filter, or other filters, such as a bilateral filter, an adaptive loop filter (ALF), a sharpening or smoothing filter, or a collaborative filter.
- although the loop filter unit 220 is shown as an in-loop filter in FIG. 2, in other configurations the loop filter unit 220 may be implemented as a post-loop filter.
- the filtered block 221 may also be referred to as the filtered reconstructed block 221.
- the decoded picture buffer 230 may store the reconstructed coding block after the loop filter unit 220 performs a filtering operation on the reconstructed coding block.
- Embodiments of the encoder 20 may be used to output loop filter parameters (e.g., sample adaptive offset information), for example, directly or after entropy encoding by the entropy encoding unit 270 or any other entropy encoding unit, so that, for example, the decoder 30 can receive and apply the same loop filter parameters for decoding.
- the decoded picture buffer (DPB) 230 may be a reference picture memory for storing reference picture data for the encoder 20 to encode video data.
- the DPB 230 can be formed by any of a variety of memory devices, such as dynamic random access memory (DRAM), including synchronous DRAM (SDRAM), magnetoresistive RAM (MRAM), and resistive RAM (RRAM), or other types of memory devices.
- the DPB 230 and the buffer 216 may be provided by the same memory device or separate memory devices.
- a decoded picture buffer (DPB) 230 is used to store the filtered block 221.
- the decoded picture buffer 230 may be further used to store other previously filtered blocks of the same current picture or of different pictures, such as previously reconstructed pictures, for example, the previously reconstructed and filtered block 221, and may provide complete previously reconstructed, that is, decoded pictures (and corresponding reference blocks and samples) and/or partially reconstructed current pictures (and corresponding reference blocks and samples), for example for inter prediction.
- a decoded picture buffer (DPB) 230 is used to store the reconstructed block 215.
- the prediction processing unit 260, also known as the block prediction processing unit 260, is used to receive or acquire the image block 203 (the current image block 203 of the current picture 201) and reconstructed picture data, for example, reference samples of the same (current) picture from the buffer 216 and/or reference picture data 231 of one or more previously decoded pictures from the decoded picture buffer 230, and to process such data for prediction, that is, to provide a prediction block 265, which may be an inter prediction block 245 or an intra prediction block 255.
- the mode selection unit 262 may be used to select a prediction mode (e.g., an intra or inter prediction mode) and/or the corresponding prediction block 245 or 255 to be used as the prediction block 265 for calculating the residual block 205 and for reconstructing the reconstructed block 215.
- An embodiment of the mode selection unit 262 may be used to select a prediction mode (e.g., from the prediction modes supported by the prediction processing unit 260) that provides the best match or the minimum residual (minimum residual means better compression for transmission or storage), or that provides the minimum signaling overhead (minimum signaling overhead means better compression for transmission or storage), or that considers or balances both.
- the mode selection unit 262 may be used to determine a prediction mode based on rate-distortion optimization (RDO), that is, to select the prediction mode that provides the minimum rate-distortion cost, or to select a prediction mode whose associated rate distortion at least meets the prediction mode selection criteria.
- the encoder 20 is used to determine or select the best or optimal prediction mode from the (predetermined) prediction mode set.
- the set of prediction modes may include, for example, intra prediction modes and / or inter prediction modes.
- the intra prediction mode set may include 35 different intra prediction modes, for example, non-directional modes such as the DC (or mean) mode and the planar mode, or directional modes as defined in H.265, or may include 67 different intra prediction modes, for example, non-directional modes such as the DC (or mean) mode and the planar mode, or directional modes as defined in the developing H.266.
- the set of inter prediction modes depends on the available reference pictures (that is, for example, the aforementioned at least partially decoded pictures stored in the DPB 230) and other inter prediction parameters, for example, on whether the entire reference picture or only a part of it, for example a search window area around the area of the current block, is used to search for the best matching reference block, and/or, for example, on whether pixel interpolation such as half-pixel and/or quarter-pixel interpolation is applied.
- the set of inter prediction modes may include, for example, an advanced motion vector prediction (AMVP) mode and a merge mode.
- the set of inter prediction modes may include an improved control point-based AMVP mode according to an embodiment of the present application, and an improved control point-based merge mode.
- the inter prediction unit 244 may be used to perform any combination of the inter prediction techniques described below.
- the embodiments of the present application may also apply skip mode and / or direct mode.
- the prediction processing unit 260 may be further used to divide the image block 203 into smaller block partitions or sub-blocks, for example, iteratively using quad-tree (QT) partitioning, binary-tree (BT) partitioning, or triple-tree (TT) partitioning, or any combination thereof, and to perform prediction for each of the block partitions or sub-blocks, where the mode selection includes selecting the tree structure of the divided image block 203 and selecting the prediction mode applied to each of the block partitions or sub-blocks.
- the inter prediction unit 244 may include a motion estimation (ME) unit (not shown in FIG. 2) and a motion compensation (MC) unit (not shown in FIG. 2).
- the motion estimation unit is used to receive or acquire the picture image block 203 (the current picture image block 203 of the current picture 201) and the decoded picture 231, or at least one or more previously reconstructed blocks, for example, reconstructed blocks of one or more other/different previously decoded pictures 231, for motion estimation.
- the video sequence may include the current picture and the previously decoded pictures 231, or in other words, the current picture and the previously decoded pictures 231 may be part of, or form, the sequence of pictures forming the video sequence.
- the encoder 20 may be used to select a reference block from multiple reference blocks of the same or different pictures among multiple other pictures, and to provide the reference picture and/or the offset (spatial offset) between the position (X, Y coordinates) of the reference block and the position of the current block to the motion estimation unit (not shown in FIG. 2) as inter prediction parameters. This offset is also called a motion vector (MV).
- the motion compensation unit is used to acquire inter prediction parameters and perform inter prediction based on or using inter prediction parameters to obtain inter prediction blocks 245.
- the motion compensation performed by the motion compensation unit may include extracting or generating a prediction block based on a motion/block vector determined by motion estimation (possibly performing interpolation to sub-pixel accuracy). Interpolation filtering can generate additional pixel samples from known pixel samples, potentially increasing the number of candidate prediction blocks that can be used to encode picture blocks.
- the motion compensation unit 246 may locate the prediction block pointed to by the motion vector in a reference picture list. Motion compensation unit 246 may also generate syntax elements associated with blocks and video slices for use by decoder 30 when decoding picture blocks of video slices.
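- as a toy illustration of how interpolation filtering can generate additional (sub-pixel) samples from known samples, the following Python sketch uses a simple 2-tap average for half-pixel positions; this simplification is an assumption of the sketch only, since HEVC-style codecs use longer (e.g., 7- or 8-tap) interpolation filters:

```python
def half_pel_interpolate(row, frac):
    """Generate sub-pixel samples along one row of integer samples.

    frac = 0 returns the integer positions; frac = 1 returns half-pixel
    positions via simple 2-tap averaging with rounding. Toy illustration
    only: real codecs use longer interpolation filters.
    """
    if frac == 0:
        return list(row)
    # half-pixel positions between neighbouring integer samples
    return [(row[i] + row[i + 1] + 1) // 2 for i in range(len(row) - 1)]

pixels = [10, 20, 40, 80]
print(half_pel_interpolate(pixels, 0))  # [10, 20, 40, 80]
print(half_pel_interpolate(pixels, 1))  # [15, 30, 60]
```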
- the above inter prediction unit 244 may transmit syntax elements to the entropy encoding unit 270, where the syntax elements include inter prediction parameters (such as indication information of the inter prediction mode selected for the current block prediction after traversing multiple inter prediction modes).
- in a possible application scenario, if there is only one inter prediction mode, the inter prediction parameters may not be carried in the syntax elements; in this case, the decoding terminal 30 may directly use the default prediction mode for decoding. It can be understood that the inter prediction unit 244 may be used to perform any combination of inter prediction techniques.
- the intra prediction unit 254 is used to acquire, for example, receive, the picture block 203 (the current picture block) of the same picture and one or more previously reconstructed blocks, for example, reconstructed neighboring blocks, for intra estimation.
- the encoder 20 may be used to select an intra prediction mode from a plurality of (predetermined) intra prediction modes.
- Embodiments of the encoder 20 may be used to select an intra-prediction mode based on optimization criteria, for example, based on a minimum residual (eg, an intra-prediction mode that provides the prediction block 255 that is most similar to the current picture block 203) or minimum rate distortion.
- the intra prediction unit 254 is further used to determine the intra prediction block 255 based on the intra prediction parameters of the selected intra prediction mode. In any case, after selecting the intra prediction mode for a block, the intra prediction unit 254 is also used to provide the intra prediction parameters to the entropy encoding unit 270, that is, to provide information indicating the intra prediction mode selected for the block. In one example, the intra prediction unit 254 may be used to perform any combination of intra prediction techniques.
- the above-mentioned intra prediction unit 254 may transmit syntax elements to the entropy encoding unit 270, where the syntax elements include intra prediction parameters (such as indication information of the intra prediction mode selected for the current block prediction after traversing multiple intra prediction modes).
- if there is only one intra prediction mode, the intra prediction parameters may not be carried in the syntax elements.
- in this case, the decoding terminal 30 may directly use the default prediction mode for decoding.
- the entropy encoding unit 270 is used to apply an entropy encoding algorithm or scheme (for example, a variable length coding (VLC) scheme, a context adaptive VLC (CAVLC) scheme, an arithmetic coding scheme, context adaptive binary arithmetic coding (CABAC), syntax-based context-adaptive binary arithmetic coding (SBAC), probability interval partitioning entropy (PIPE) coding, or another entropy encoding method or technique), individually or jointly, to the quantized residual coefficients 209, inter prediction parameters, intra prediction parameters, and/or loop filter parameters (or to apply no entropy encoding), to obtain encoded picture data 21 that can be output through an output 272, for example, in the form of an encoded bitstream 21.
- the encoded bitstream can be transmitted to the video decoder 30 or archived for later transmission or retrieval by the video decoder 30.
- the entropy encoding unit 270 may also be used to entropy encode other syntax elements of the current video slice being encoded.
- video encoder 20 may be used to encode video streams.
- the non-transform based encoder 20 may directly quantize the residual signal without the transform processing unit 206 for certain blocks or frames.
- the encoder 20 may have a quantization unit 208 and an inverse quantization unit 210 combined into a single unit.
- the encoder 20 may be used to implement the inter prediction method described in the embodiments below.
- for some image blocks or image frames, the video encoder 20 can directly quantize the residual signal without processing by the transform processing unit 206 and, accordingly, without processing by the inverse transform processing unit 212; or, for some image blocks or image frames, the video encoder 20 does not generate residual data and accordingly does not need processing by the transform processing unit 206, the quantization unit 208, the inverse quantization unit 210, and the inverse transform processing unit 212; or, the video encoder 20 may store the reconstructed image block directly as a reference block without processing by the filter 220; or, the quantization unit 208 and the inverse quantization unit 210 in the video encoder 20 may be merged together.
- the loop filter 220 is optional, and in the case of lossless compression coding, the transform processing unit 206, quantization unit 208, inverse quantization unit 210, and inverse transform processing unit 212 are optional. It should be understood that the inter prediction unit 244 and the intra prediction unit 254 may be selectively enabled according to different application scenarios.
- FIG. 3 shows a schematic / conceptual block diagram of an example of a decoder 30 for implementing an embodiment of the present application.
- the video decoder 30 is used to receive encoded picture data (eg, encoded bitstream) 21, for example, encoded by the encoder 20, to obtain the decoded picture 231.
- video decoder 30 receives video data from video encoder 20, such as an encoded video bitstream and associated syntax elements representing picture blocks of the encoded video slice.
- the decoder 30 includes an entropy decoding unit 304, an inverse quantization unit 310, an inverse transform processing unit 312, a reconstruction unit 314 (e.g., a summer 314), a buffer 316, a loop filter 320, a decoded picture buffer 330, and a prediction processing unit 360.
- the prediction processing unit 360 may include an inter prediction unit 344, an intra prediction unit 354, and a mode selection unit 362.
- video decoder 30 may perform a decoding pass that is generally reciprocal to the encoding pass described with reference to video encoder 20 of FIG. 2.
- the entropy decoding unit 304 is used to perform entropy decoding on the encoded picture data 21 to obtain, for example, quantized coefficients 309 and/or decoded encoding parameters (not shown in FIG. 3), for example, any or all of inter prediction parameters, intra prediction parameters, loop filter parameters, and/or other syntax elements (decoded).
- the entropy decoding unit 304 is further used to forward inter prediction parameters, intra prediction parameters, and / or other syntax elements to the prediction processing unit 360.
- Video decoder 30 may receive syntax elements at the video slice level and / or the video block level.
- the inverse quantization unit 310 can be functionally the same as the inverse quantization unit 210
- the inverse transform processing unit 312 can be functionally the same as the inverse transform processing unit 212
- the reconstruction unit 314 can be functionally the same as the reconstruction unit 214
- the buffer 316 can be functionally the same as the buffer 216
- the loop filter 320 may be functionally the same as the loop filter 220
- the decoded picture buffer 330 may be functionally the same as the decoded picture buffer 230.
- the prediction processing unit 360 may include an inter prediction unit 344 and an intra prediction unit 354, where the inter prediction unit 344 may be similar in function to the inter prediction unit 244, and the intra prediction unit 354 may be similar in function to the intra prediction unit 254 .
- the prediction processing unit 360 is generally used to perform block prediction and/or obtain the prediction block 365 from the encoded data 21, and to receive or obtain (explicitly or implicitly) prediction-related parameters and/or information about the selected prediction mode, for example, from the entropy decoding unit 304.
- the intra prediction unit 354 of the prediction processing unit 360 is used to generate the prediction block 365 for the picture block of the current video slice based on the signaled intra prediction mode and data from previously decoded blocks of the current frame or picture.
- when the video frame is encoded as an inter-coded slice, the inter prediction unit 344 (e.g., motion compensation unit) of the prediction processing unit 360 is used to generate the prediction block 365 for the video block of the current video slice based on the motion vectors and other syntax elements received from the entropy decoding unit 304.
- a prediction block may be generated from a reference picture in a reference picture list.
- the video decoder 30 may construct the reference frame lists: list 0 and list 1 using default construction techniques based on the reference pictures stored in the DPB 330.
- the prediction processing unit 360 is used to determine the prediction information for the video block of the current video slice by parsing the motion vector and other syntax elements, and use the prediction information to generate the prediction block for the current video block being decoded.
- the prediction processing unit 360 uses some of the received syntax elements to determine the prediction mode (e.g., intra or inter prediction) used to encode the video blocks of the video slice, the inter prediction slice type (e.g., B slice, P slice, or GPB slice), construction information for one or more of the reference picture lists for the slice, the motion vector of each inter-coded video block of the slice, the inter prediction status of each inter-coded video block of the slice, and other information, in order to decode the video blocks of the current video slice.
- the syntax elements received by the video decoder 30 from the bitstream include syntax elements in one or more of an adaptive parameter set (APS), a sequence parameter set (SPS), a picture parameter set (PPS), or a slice header.
- the inverse quantization unit 310 may be used to dequantize (i.e., inverse quantize) the quantized transform coefficients provided in the bitstream and decoded by the entropy decoding unit 304.
- the inverse quantization process may include using the quantization parameters calculated by the video encoder 20 for each video block in the video slice to determine the degree of quantization that should be applied and also determine the degree of inverse quantization that should be applied.
- the inverse transform processing unit 312 is used to apply an inverse transform (eg, inverse DCT, inverse integer transform, or conceptually similar inverse transform process) to the transform coefficients, so as to generate a residual block in the pixel domain.
- the reconstruction unit 314 (for example, the summer 314) is used to add the inverse transform block 313 (ie, the reconstructed residual block 313) to the prediction block 365 to obtain the reconstructed block 315 in the sample domain, for example, by adding The sample values of the reconstructed residual block 313 and the sample values of the prediction block 365 are added.
- the loop filter unit 320 (either within the coding loop or after the coding loop) is used to filter the reconstructed block 315 to obtain the filtered block 321, so as to smooth pixel transitions or improve video quality.
- the loop filter unit 320 may be used to perform any combination of filtering techniques described below.
- the loop filter unit 320 is intended to represent one or more loop filters, such as a deblocking filter, a sample-adaptive offset (SAO) filter, or other filters, such as a bilateral filter, an adaptive loop filter (ALF), a sharpening or smoothing filter, or a collaborative filter.
- although the loop filter unit 320 is shown as an in-loop filter in FIG. 3, in other configurations the loop filter unit 320 may be implemented as a post-loop filter.
- the decoded video block 321 in a given frame or picture is then stored in a decoded picture buffer 330 that stores reference pictures for subsequent motion compensation.
- the decoder 30 is used, for example, to output the decoded picture 31 through the output 332 for presentation to, or viewing by, a user.
- video decoder 30 may be used to decode the compressed bitstream.
- the decoder 30 may generate the output video stream without the loop filter unit 320.
- the non-transform based decoder 30 may directly inversely quantize the residual signal without the inverse transform processing unit 312 for certain blocks or frames.
- the video decoder 30 may have an inverse quantization unit 310 and an inverse transform processing unit 312 combined into a single unit.
- the decoder 30 is used to implement the inter prediction method described in the embodiments below.
- video decoder 30 may be used to decode the encoded video bitstream.
- the video decoder 30 may generate an output video stream without processing by the filter 320; or, for certain image blocks or image frames, the entropy decoding unit 304 of the video decoder 30 does not decode quantized coefficients, and accordingly no processing by the inverse quantization unit 310 and the inverse transform processing unit 312 is needed.
- the loop filter 320 is optional; and in the case of lossless compression, the inverse quantization unit 310 and the inverse transform processing unit 312 are optional.
- the inter prediction unit and the intra prediction unit may be selectively enabled.
- the processing result of a given step may be further processed and then output to the next step; for example, after interpolation filtering, motion vector derivation, or loop filtering, the result of the corresponding step is further clipped or shifted.
- the motion vector of the control point of the current image block derived from the motion vector of the adjacent affine coding block may be further processed, which is not limited in this application.
- the value range of the motion vector is constrained to a certain bit width. Assuming that the allowed bit width of the motion vector is bitDepth, the range of the motion vector is -2^(bitDepth-1) to 2^(bitDepth-1)-1, where the "^" symbol indicates exponentiation. If bitDepth is 16, the value range is -32768 to 32767. If bitDepth is 18, the value range is -131072 to 131071. There are two ways to constrain the motion vector:
- Method 1: remove the overflowing high-order bits:
ux = (vx + 2^bitDepth) % 2^bitDepth
vx = (ux >= 2^(bitDepth-1)) ? (ux - 2^bitDepth) : ux
and likewise for the vertical component:
uy = (vy + 2^bitDepth) % 2^bitDepth
vy = (uy >= 2^(bitDepth-1)) ? (uy - 2^bitDepth) : uy
- For example, if the value of vx is -32769, the value obtained by the above formulas is 32767. Because values are stored in the computer in two's complement form, the two's complement of -32769 is 1,0111,1111,1111,1111 (17 bits); the computer handles the overflow by discarding the high-order bit, so the value of vx becomes 0111,1111,1111,1111, which is 32767, consistent with the result obtained by the formulas.
- Method 2: clip the motion vector components:
vx = Clip3(-2^(bitDepth-1), 2^(bitDepth-1)-1, vx)
vy = Clip3(-2^(bitDepth-1), 2^(bitDepth-1)-1, vy)
- where Clip3(x, y, z) clamps the value of z to the interval [x, y]: it returns x if z < x, returns y if z > y, and returns z otherwise.
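- a minimal Python sketch of the two constraint methods above (wrap-around and Clip3 clamping), reproducing the -32769 example for bitDepth = 16:

```python
def wrap_mv(v: int, bit_depth: int = 18) -> int:
    """Method 1: discard overflowing high-order bits (two's-complement wrap).

    ux = (vx + 2^bitDepth) % 2^bitDepth, then map back to the signed range,
    matching the -32769 -> 32767 example for bitDepth = 16.
    """
    u = (v + (1 << bit_depth)) % (1 << bit_depth)
    return u - (1 << bit_depth) if u >= (1 << (bit_depth - 1)) else u

def clip3(x: int, y: int, z: int) -> int:
    """Clip3(x, y, z): clamp z to the interval [x, y]."""
    return x if z < x else y if z > y else z

def clip_mv(v: int, bit_depth: int = 18) -> int:
    """Method 2: saturate to [-2^(bitDepth-1), 2^(bitDepth-1) - 1]."""
    return clip3(-(1 << (bit_depth - 1)), (1 << (bit_depth - 1)) - 1, v)

print(wrap_mv(-32769, 16))  # 32767 (wraps around)
print(clip_mv(-32769, 16))  # -32768 (saturates)
```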
- FIG. 4 is a schematic structural diagram of a video coding device 400 (for example, a video encoding device 400 or a video decoding device 400) provided by an embodiment of the present application.
- the video coding apparatus 400 is suitable for implementing the embodiments described herein.
- the video coding device 400 may be a video decoder (eg, decoder 30 of FIG. 1A) or a video encoder (eg, encoder 20 of FIG. 1A).
- the video coding device 400 may be one or more components in the decoder 30 of FIG. 1A or the encoder 20 of FIG. 1A described above.
- the video coding device 400 includes: an ingress port 410 and a receiver unit (Rx) 420 for receiving data; a processor, logic unit, or central processing unit (CPU) 430 for processing data; a transmitter unit (Tx) 440 and an egress port 450 for transmitting data; and a memory 460 for storing data.
- the video coding device 400 may further include optical-to-electrical (OE) components and electrical-to-optical (EO) components coupled to the ingress port 410, the receiver 420, the transmitter 440, and the egress port 450, serving as egress or ingress for optical or electrical signals.
- the processor 430 is implemented by hardware and software.
- the processor 430 may be implemented as one or more CPU chips, cores (e.g., multi-core processors), FPGAs, ASICs, and DSPs.
- the processor 430 communicates with the inlet port 410, the receiver 420, the transmitter 440, the outlet port 450, and the memory 460.
- the processor 430 includes a decoding module 470 (for example, an encoding module 470 or a decoding module 470).
- the encoding / decoding module 470 implements the embodiments disclosed herein to implement the chroma block prediction method provided by the embodiments of the present application. For example, the encoding / decoding module 470 implements, processes, or provides various encoding operations.
- the encoding / decoding module 470 provides a substantial improvement in the function of the video decoding device 400 and affects the conversion of the video decoding device 400 to different states.
- the encoding / decoding module 470 is implemented with instructions stored in the memory 460 and executed by the processor 430.
- the memory 460 includes one or more magnetic disks, tape drives, and solid-state drives, and can be used as an overflow data storage device to store programs when such programs are selected for execution, and to store instructions and data read during program execution.
- the memory 460 may be volatile and/or non-volatile, and may be read-only memory (ROM), random access memory (RAM), ternary content-addressable memory (TCAM), and/or static random access memory (SRAM).
- FIG. 5 is a simplified block diagram of an apparatus 500 that can be used as either or both of the source device 12 and the destination device 14 in FIG. 1A according to an exemplary embodiment.
- the device 500 can implement the technology of the present application.
- FIG. 5 is a schematic block diagram of an implementation manner of an encoding device or a decoding device (referred to simply as a decoding device 500) according to an embodiment of the present application.
- the decoding device 500 may include a processor 510, a memory 530, and a bus system 550.
- the processor and the memory are connected through a bus system, the memory is used to store instructions, and the processor is used to execute the instructions stored in the memory.
- the memory of the decoding device stores program codes, and the processor can call the program codes stored in the memory to perform various video encoding or decoding methods described in this application. In order to avoid repetition, they will not be described in detail here.
- the processor 510 may be a central processing unit (CPU), and the processor 510 may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
- the general-purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
- the memory 530 may include a read only memory (ROM) device or a random access memory (RAM) device. Any other suitable type of storage device may also be used as the memory 530.
- the memory 530 may include code and data 531 accessed by the processor 510 using the bus 550.
- the memory 530 may further include an operating system 533 and an application program 535 including at least one program that allows the processor 510 to execute the video encoding or decoding method described in this application.
- the application program 535 may include applications 1 to N, which further include a video encoding or decoding application that performs the video encoding or decoding method described in this application (referred to as a video coding application for short).
- in addition to a data bus, the bus system 550 may also include a power bus, a control bus, and a status signal bus. However, for clarity of explanation, the various buses are marked as the bus system 550 in the figure.
- the decoding device 500 may also include one or more output devices, such as a display 570.
- the display 570 may be a tactile display that combines the display with a tactile unit that operably senses touch input.
- the display 570 may be connected to the processor 510 via the bus 550.
- In HEVC, two inter prediction modes are used: the advanced motion vector prediction (AMVP) mode and the merge mode.
- For the AMVP mode, the coded blocks adjacent to the current block in the spatial or temporal domain (denoted as neighboring blocks) are first traversed, and a candidate motion vector list (also called a motion information candidate list) is constructed based on the motion information of each neighboring block. Then, the optimal motion vector is determined from the candidate motion vector list by the rate-distortion cost, and the candidate motion information with the lowest rate-distortion cost is used as the motion vector predictor (MVP) of the current block.
- the rate-distortion cost is calculated by formula (1): J = SAD + λR, where J represents the rate-distortion cost RD Cost, SAD is the sum of absolute differences between the predicted pixel values obtained after motion estimation using the candidate motion vector predictor and the original pixel values, R represents the bit rate, and λ represents the Lagrange multiplier.
- the encoding end passes the index of the selected motion vector predictor in the candidate motion vector list and the reference frame index to the decoding end. Further, a motion search is performed in the neighborhood centered on the MVP to obtain the actual motion vector of the current block, and the encoding end passes the difference (motion vector difference) between the MVP and the actual motion vector to the decoding end.
- For the merge mode, a candidate motion vector list is first constructed using the motion information of the coded blocks adjacent to the current block in the spatial or temporal domain. Then, the optimal motion information is determined from the candidate motion vector list as the motion information of the current block by calculating the rate-distortion cost, and the index (denoted as merge index, the same below) of the position of the optimal motion information in the candidate motion vector list is passed to the decoding end.
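- the rate-distortion-based candidate selection of formula (1) can be sketched as follows; predict() and rate_bits() are hypothetical stand-ins for motion compensation and for the cost of signaling the candidate index:

```python
def sad(pred, orig):
    """Sum of absolute differences between predicted and original pixels."""
    return sum(abs(p - o) for p, o in zip(pred, orig))

def select_best_candidate(candidates, orig_block, predict, rate_bits, lam):
    """Pick the candidate MV minimizing J = SAD + lambda * R (formula (1))."""
    best_index, best_cost = -1, float("inf")
    for i, mv in enumerate(candidates):
        j = sad(predict(mv), orig_block) + lam * rate_bits(i)
        if j < best_cost:
            best_index, best_cost = i, j
    return best_index, best_cost

# toy example with two candidates and precomputed predictions
orig = [100, 102, 98, 97]
cands = [(0, 0), (1, 0)]
preds = {(0, 0): [90, 95, 99, 100], (1, 0): [101, 101, 98, 96]}
idx, cost = select_best_candidate(cands, orig, lambda mv: preds[mv],
                                  lambda i: i + 1, lam=4.0)
print(idx, cost)  # the (1, 0) candidate wins: J = 3 + 4*2 = 11.0
```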
- the candidate motion information of the spatial and temporal domains of the current block is shown in FIG. 6A.
- the spatial-domain candidate motion information comes from the five spatially neighboring blocks (A0, A1, B0, B1, and B2).
- if a neighboring block is unavailable or uses intra prediction, its motion information is not added to the candidate motion vector list.
- the temporal candidate motion information of the current block is obtained by scaling the MV of the co-located block in the reference frame according to the picture order counts (POCs) of the reference frame and the current frame. First, it is determined whether the block at position T in the reference frame is available; if it is not available, the block at position C is selected.
- the location and traversal order of neighbor blocks in the Merge mode are also predefined, and the location and traversal order of neighbor blocks may be different in different modes.
- In HEVC inter prediction, all pixels in a coding block use the same motion information, and motion compensation is then performed according to this motion information to obtain the predicted values of the pixels of the coding block.
- However, using the same motion information for all pixels may lead to inaccurate motion compensation prediction, thereby increasing residual information.
- the AMVP mode can be divided into the translational-model-based AMVP mode and the non-translational-model-based AMVP mode;
- similarly, the merge mode can be divided into the translational-model-based merge mode and the non-translational-model-based merge mode.
- Non-translational motion model prediction refers to using the same motion model at the codec to derive the motion information of each sub-block in the current block, performing motion compensation according to the motion information of the sub-block, and obtaining a prediction block, thereby improving prediction efficiency.
- Commonly used non-translational motion models are 4-parameter affine motion models or 6-parameter affine motion models.
- the sub-block involved in the embodiments of the present application may be a pixel or a pixel block of size N1×N2 divided according to a specific method, where N1 and N2 are both positive integers, and N1 may or may not be equal to N2.
- the 4-parameter affine motion model can be represented by the motion vectors of two pixels and their coordinates relative to the upper left vertex pixel of the current block.
- the pixels used to represent the parameters of the motion model are called control points. If the upper-left vertex (0, 0) and the upper-right vertex (W, 0) are used as control points, the motion vectors (vx0, vy0) and (vx1, vy1) of the upper-left and upper-right control points of the current block are determined first, and then the motion information of each sub-block in the current block is obtained according to formula (3), where (x, y) is the coordinate of the sub-block relative to the upper-left vertex pixel of the current block and W is the width of the current block.
- the 6-parameter affine motion model can be represented by the motion vectors of three pixels and their coordinates relative to the upper-left vertex pixel of the current block. If the upper-left vertex (0, 0), the upper-right vertex (W, 0), and the lower-left vertex (0, H) are used as control points, the motion vectors (vx0, vy0), (vx1, vy1), and (vx2, vy2) of the upper-left, upper-right, and lower-left control points of the current block are determined first, and then the motion information of each sub-block in the current block is obtained according to formula (5), where (x, y) is the coordinate of the sub-block relative to the upper-left vertex pixel of the current block, and W and H are the width and height of the current block, respectively.
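- a floating-point Python sketch of the sub-block motion vector derivation described above, using the forms in which the 4-parameter and 6-parameter affine models are commonly written (assumed here, since formulas (3) and (5) are not reproduced in this text); real codecs use fixed-point arithmetic:

```python
def affine_mv_4param(cp0, cp1, w, x, y):
    """4-parameter affine model: derive the MV at sub-block position (x, y)
    from the upper-left (cp0) and upper-right (cp1) control-point MVs."""
    (vx0, vy0), (vx1, vy1) = cp0, cp1
    vx = (vx1 - vx0) / w * x - (vy1 - vy0) / w * y + vx0
    vy = (vy1 - vy0) / w * x + (vx1 - vx0) / w * y + vy0
    return vx, vy

def affine_mv_6param(cp0, cp1, cp2, w, h, x, y):
    """6-parameter affine model: adds the lower-left control point cp2."""
    (vx0, vy0), (vx1, vy1), (vx2, vy2) = cp0, cp1, cp2
    vx = (vx1 - vx0) / w * x + (vx2 - vx0) / h * y + vx0
    vy = (vy1 - vy0) / w * x + (vy2 - vy0) / h * y + vy0
    return vx, vy

# MV of the sub-block at offset (4, 4) in a 16x16 block
print(affine_mv_4param((1.0, 2.0), (3.0, 2.0), 16, 4, 4))             # (1.5, 2.5)
print(affine_mv_6param((1.0, 2.0), (3.0, 2.0), (1.0, 6.0), 16, 16, 4, 4))  # (1.5, 3.0)
```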
- the coding block predicted by the affine motion model is called the affine coding block.
- the Advanced Motion Vector Prediction (AMVP) mode based on the affine motion model or the Merge mode based on the affine motion model can be used to obtain the motion information of the control points of the affine coding block.
- the motion information of the control point of the current coding block can be obtained by the inherited control point motion vector prediction method or the constructed control point motion vector prediction method.
- the inherited control point motion vector prediction method refers to determining the candidate control point motion vectors of the current block using the motion models of adjacent coded affine coding blocks.
- specifically, the adjacent blocks around the current block are traversed in a certain order, for example A1->B1->B0->A0->B2, to find the affine coding block in which an adjacent position block of the current block is located, obtain the control point motion information of that affine coding block, and then derive, based on the motion model constructed from the control point motion information of the affine coding block, the control point motion vectors of the current block (for the merge mode) or the motion vector predictors of the control points (for the AMVP mode).
- A1-> B1-> B0-> A0-> B2 is only an example, and the order of other combinations also applies to this application.
- the adjacent position blocks are not limited to A1, B1, B0, A0, and B2.
- the adjacent position block may be a pixel, or a pixel block of a preset size divided according to a specific method, for example, a 4x4 pixel block, a 4x2 pixel block, or a pixel block of another size, which is not limited here.
- For example, if the block at position A1 is located in an affine coding block, the motion vector (vx4, vy4) of the upper-left vertex (x4, y4) and the motion vector (vx5, vy5) of the upper-right vertex (x5, y5) of that affine coding block are obtained.
- the combination of the motion vector (vx0, vy0) of the upper-left vertex (x0, y0) and the motion vector (vx1, vy1) of the upper-right vertex (x1, y1) of the current block, derived as above from the affine coding block in which A1 is located, is a candidate control point motion vector of the current block.
- For the 6-parameter case, the motion vector (vx4, vy4) of the upper-left vertex (x4, y4), the motion vector (vx5, vy5) of the upper-right vertex (x5, y5), and the motion vector (vx6, vy6) of the lower-left vertex (x6, y6) of the affine coding block are obtained, and the combination of the motion vector (vx0, vy0) of the upper-left vertex (x0, y0), the motion vector (vx1, vy1) of the upper-right vertex (x1, y1), and the motion vector (vx2, vy2) of the lower-left vertex (x2, y2) of the current block derived therefrom is a candidate control point motion vector of the current block.
- the constructed control point motion vector prediction method refers to combining the motion vectors of coded blocks adjacent to the control points of the current block as the motion vectors of the control points of the current affine coding block, without considering whether those adjacent coded blocks are affine coding blocks.
- the motion vectors of the upper-left vertex and the upper-right vertex of the current block are determined using the motion information of the coded blocks adjacent to the current coding block. The constructed control point motion vector prediction method is described taking the example shown in FIG. 6C; it should be noted that FIG. 6C is only an example.
- the motion vectors of the coded blocks A2, B2, and B3 adjacent to the upper-left vertex are used as candidate motion vectors for the motion vector of the upper-left vertex of the current block, and the motion vectors of the coded blocks B1 and B0 adjacent to the upper-right vertex are used as candidate motion vectors for the motion vector of the upper-right vertex of the current block.
- the candidate motion vectors of the upper left vertex and the upper right vertex are combined to form multiple binary groups.
- the motion vectors of the two coded blocks included in a binary group can be used as candidate control point motion vectors of the current block, as shown in the following formula (11A):
- where vA2 represents the motion vector of A2, vB1 represents the motion vector of B1, vB0 represents the motion vector of B0, vB2 represents the motion vector of B2, and vB3 represents the motion vector of B3.
- the motion vectors of the coded blocks A2, B2, and B3 adjacent to the upper-left vertex are used as candidate motion vectors for the motion vector of the upper-left vertex of the current block; the motion vectors of the coded blocks B1 and B0 adjacent to the upper-right vertex are used as candidate motion vectors for the motion vector of the upper-right vertex of the current block; and the motion vectors of the coded blocks A0 and A1 adjacent to the lower-left vertex are used as candidate motion vectors for the motion vector of the lower-left vertex of the current block.
- the candidate motion vectors of the upper left vertex, the upper right vertex, and the lower left vertex are combined to form a triplet.
- the motion vectors of the three coded blocks included in a triplet can be used as candidate control point motion vectors of the current block, as shown in the following formulas (11B) and (11C):
- where vA2 represents the motion vector of A2, vB1 the motion vector of B1, vB0 the motion vector of B0, vB2 the motion vector of B2, vB3 the motion vector of B3, vA0 the motion vector of A0, and vA1 the motion vector of A1.
- Step 601: Obtain the motion information of each control point of the current block.
- A0, A1, A2, B0, B1, B2, and B3 are the spatial adjacent positions of the current block, which are used to predict CP1, CP2, or CP3;
- T is a temporally adjacent position of the current block, which is used to predict CP4.
- For CP1, the checking order is B2->A2->B3; if B2 is available, the motion information of B2 is used; otherwise, A2 and B3 are checked in turn. If the motion information of none of the three positions is available, the motion information of CP1 cannot be obtained.
- For CP2, the checking order is B0->B1; if B0 is available, CP2 uses the motion information of B0; otherwise, B1 is checked. If motion information is available at neither position, the motion information of CP2 cannot be obtained.
- "X is available" means that the block at position X (where X is A0, A1, A2, B0, B1, B2, B3, or T) has already been encoded and uses an inter prediction mode; otherwise, position X is unavailable.
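- the availability checks of Step 601 can be sketched as a simple first-available scan over the predefined checking orders; the motion_info map below is a hypothetical stand-in for the coded/inter-coded status of each position:

```python
def first_available(positions, motion_info):
    """Return the motion information of the first available position, or None.

    A position is absent from `motion_info` when its block has not been
    encoded or does not use an inter prediction mode.
    """
    for pos in positions:
        if pos in motion_info:
            return motion_info[pos]
    return None

# checking orders described above (Step 601)
CP1_ORDER = ["B2", "A2", "B3"]
CP2_ORDER = ["B0", "B1"]

coded = {"A2": (3, -1), "B0": (0, 2)}  # toy example: only A2 and B0 are inter-coded
print(first_available(CP1_ORDER, coded))  # (3, -1) from A2, since B2 is unavailable
print(first_available(CP2_ORDER, coded))  # (0, 2) from B0
```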
- Step 602: Combine the motion information of the control points to obtain the constructed control point motion information.
- the motion information of the two control points is combined to form a binary group, which is used to construct a 4-parameter affine motion model.
- the combination of the two control points can be {CP1, CP4}, {CP2, CP3}, {CP1, CP2}, {CP2, CP4}, {CP1, CP3}, or {CP3, CP4}.
- a 4-parameter affine motion model constructed using a binary group consisting of CP1 and CP2 control points can be written as Affine (CP1, CP2).
- the motion information of the three control points is combined to form a triple, which is used to construct a 6-parameter affine motion model.
- the combination of the three control points can be {CP1, CP2, CP4}, {CP1, CP2, CP3}, {CP2, CP3, CP4}, or {CP1, CP3, CP4}.
- a 6-parameter affine motion model constructed with a triple consisting of CP1, CP2, and CP3 control points can be written as Affine (CP1, CP2, CP3).
- a quadruple composed of the motion information of four control points is used to construct an 8-parameter bilinear model.
- An 8-parameter bilinear model constructed with a quadruple composed of CP1, CP2, CP3, and CP4 control points is recorded as Bilinear (CP1, CP2, CP3, CP4).
- the combination of the motion information of two control points (or two coded blocks) is simply referred to as a binary group, the combination of the motion information of three control points (or three coded blocks) is simply referred to as a triplet, and the combination of the motion information of four control points (or four coded blocks) is simply referred to as a quadruple.
- CurPoc represents the POC number of the current frame
- DesPoc represents the POC number of the reference frame of the current block
- SrcPoc represents the POC number of the reference frame of the control point
- MVs represents the motion vector obtained by scaling
- MV represents the motion vector of the control point.
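- The scaling formula itself is not reproduced in the text above; assuming it has the usual POC-distance form MVs = ((CurPoc - DesPoc) / (CurPoc - SrcPoc)) x MV, the following C sketch shows the scaling with HEVC-style fixed-point arithmetic and clipping (the rounding details follow common codec practice, not the patent text):

    #include <stdlib.h>

    typedef struct { int x, y; } MV;

    static int clip3(int lo, int hi, int v) { return v < lo ? lo : (v > hi ? hi : v); }

    /* Sign-symmetric rounding shift used by the fixed-point scaling. */
    static int round_shift8(int v) {
        int s = v < 0 ? -1 : 1;
        return s * ((abs(v) + 127) >> 8);
    }

    static MV scale_mv(MV mv, int cur_poc, int des_poc, int src_poc) {
        int td = clip3(-128, 127, cur_poc - src_poc);  /* distance to the control point's reference */
        int tb = clip3(-128, 127, cur_poc - des_poc);  /* distance to the current block's reference  */
        MV out = mv;
        if (td != 0 && td != tb) {
            int tx = (16384 + abs(td) / 2) / td;       /* fixed-point reciprocal of td */
            int f  = clip3(-4096, 4095, (tb * tx + 32) >> 6);
            out.x = clip3(-32768, 32767, round_shift8(f * mv.x));
            out.y = clip3(-32768, 32767, round_shift8(f * mv.y));
        }
        return out;
    }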
- combinations of different control points can also be converted into control points at the same positions.
- For example, a 4-parameter affine motion model obtained by combining {CP1, CP4}, {CP2, CP3}, {CP2, CP4}, {CP1, CP3}, or {CP3, CP4} is converted to be expressed by the control points {CP1, CP2} or {CP1, CP2, CP3}.
- the conversion method is to substitute the motion vector of the control point and its coordinate information into formula (2) to obtain the model parameters, and then substitute the coordinate information of ⁇ CP1, CP2 ⁇ into formula (3) to obtain the motion vector.
- the conversion can be performed according to the following formulas (13)-(21), where W represents the width of the current block and H represents the height of the current block.
- (vx0, vy0) represents the motion vector of CP1,
- (vx1, vy1) represents the motion vector of CP2,
- (vx2, vy2) represents the motion vector of CP3, and
- (vx3, vy3) represents the motion vector of CP4.
- Conversion of {CP1, CP2} to {CP1, CP2, CP3} can be achieved by the following formula (13); that is, the motion vector of CP3 in {CP1, CP2, CP3} can be determined by formula (13):
- Conversion of {CP2, CP4} to {CP1, CP2} can be achieved by the following formula (18), and conversion of {CP2, CP4} to {CP1, CP2, CP3} can be achieved by formulas (18) and (19):
- Conversion of {CP3, CP4} to {CP1, CP2} can be achieved by the following formula (20), and conversion of {CP3, CP4} to {CP1, CP2, CP3} can be achieved by the following formulas (20) and (21):
- a 6-parameter affine motion model obtained by combining {CP1, CP2, CP4}, {CP2, CP3, CP4}, or {CP1, CP3, CP4} is converted to be expressed by the control points {CP1, CP2, CP3}.
- the conversion method is to substitute the motion vector and coordinate information of the control point into formula (4) to obtain the model parameters, and then substitute the coordinate information of ⁇ CP1, CP2, CP3 ⁇ into formula (5) to obtain the motion vector.
- the conversion can be performed according to the following formulas (22)-(24), where W represents the width of the current block and H represents the height of the current block; as in formulas (13)-(21), (vx0, vy0) represents the motion vector of CP1, (vx1, vy1) the motion vector of CP2, (vx2, vy2) the motion vector of CP3, and (vx3, vy3) the motion vector of CP4.
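- As an illustration of the conversions above, the following C sketch derives the motion vector of CP3 from {CP1, CP2} in the manner of formula (13), under the assumption that the 4-parameter model has the usual rotation/zoom form; the exact fixed-point expression of the patent is not reproduced, and plain integer division is used only for brevity:

    typedef struct { int vx, vy; } CPMV;

    /* {CP1, CP2} -> CP3 for a 4-parameter affine model over a W x H block. */
    static CPMV derive_cp3_from_cp1_cp2(CPMV cp1, CPMV cp2, int W, int H) {
        CPMV cp3;
        cp3.vx = cp1.vx - (H * (cp2.vy - cp1.vy)) / W;
        cp3.vy = cp1.vy + (H * (cp2.vx - cp1.vx)) / W;
        return cp3;
    }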
- Advanced temporal motion vector prediction (ATMVP):
- all pixels in the current block use the same motion information for motion compensation to obtain the predicted value of the pixels in the block to be processed.
- the pixels in the block to be processed do not necessarily have the same motion characteristics. Therefore, using the same motion information to predict all the pixels in the block to be processed may reduce the accuracy of motion compensation, thereby increasing residual information.
- the existing schemes propose the advanced temporal motion vector prediction (ATMVP) technology.
- as shown in Figure 6E, the process of prediction using the ATMVP technology mainly includes:
- a scaling method may be used to determine the motion vector of the current sub-block to be processed.
- the scaling method is implemented by formula (25):
- CPoc represents the POC number of the frame where the block to be processed is located
- DPoc represents the POC number of the frame where the corresponding sub-block is located
- SrcPoc represents the POC number of the reference frame of the corresponding sub-block
- MVc represents the motion vector obtained by scaling, and
- MVg represents the motion vector of the corresponding sub-block.
- Planar motion vector prediction (PLANAR):
- PLANAR averages the motion information of the spatially adjacent position above, the spatially adjacent position to the left, and the positions to the right of and below each sub-block to be processed in the block to be processed, and uses it as the motion information of the current sub-block to be processed.
- the motion vector P(x, y) of the sub-block to be processed is calculated from the horizontally interpolated motion vector Ph(x, y) and the vertically interpolated motion vector Pv(x, y) by the following formula (26):
- H represents the height of the block to be processed
- W represents the width of the block to be processed
- the horizontally interpolated motion vector Ph(x, y) and the vertically interpolated motion vector Pv(x, y) can be calculated from the motion vectors of the sub-blocks on the left, right, upper, and lower sides of the current sub-block to be processed by the following formulas (27) and (28):
- L(-1, y) represents the motion vector on the left side of the sub-block to be processed,
- R(W, y) represents the motion vector on the right side of the sub-block to be processed,
- A(x, -1) represents the motion vector above the sub-block to be processed, and
- B(x, H) represents the motion vector below the sub-block to be processed.
- the left motion vector L and the upper motion vector A are obtained from the spatial neighboring blocks of the current coding block.
- the motion vectors L(-1, y) and A(x, -1) of the coding blocks at the preset positions (-1, y) and (x, -1) are obtained according to the coordinates (x, y) of the sub-block to be processed.
- the right motion vector R(W, y) and the lower motion vector B(x, H) can be derived as follows:
- the right-side motion vector R(W, y) is calculated from the motion vector AR extracted at the spatially adjacent position to the upper right and the temporal motion information BR at the adjacent position to the lower right, as shown in formula (29);
- the motion vectors used in the calculation are the motion vectors obtained after scaling so that they point to the first reference frame in the specific reference frame queue. A sketch of this interpolation follows below.
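- The following C sketch assembles the planar interpolation of formulas (26)-(28) from the quantities defined above; the weighting follows the common planar-MV formulation, and the exact fixed-point form of the patent is not reproduced:

    typedef struct { int x, y; } PlanarMV;

    /* P(x, y) for the sub-block at (x, y) inside a W x H block, given the
     * left/right/above/below motion vectors L, R, A, B defined above. */
    static PlanarMV planar_mv(PlanarMV L, PlanarMV R, PlanarMV A, PlanarMV B,
                              int x, int y, int W, int H) {
        PlanarMV Ph, Pv, P;
        /* Formula (27): horizontal interpolation between L(-1, y) and R(W, y). */
        Ph.x = (W - 1 - x) * L.x + (x + 1) * R.x;
        Ph.y = (W - 1 - x) * L.y + (x + 1) * R.y;
        /* Formula (28): vertical interpolation between A(x, -1) and B(x, H). */
        Pv.x = (H - 1 - y) * A.x + (y + 1) * B.x;
        Pv.y = (H - 1 - y) * A.y + (y + 1) * B.y;
        /* Formula (26): weighted average of the two interpolations. */
        P.x = (H * Ph.x + W * Pv.x + H * W) / (2 * H * W);
        P.y = (H * Ph.y + W * Pv.y + H * W) / (2 * H * W);
        return P;
    }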
- a candidate motion vector list corresponding to the AMVP mode based on the affine motion model is constructed.
- the candidate motion vector list of the AMVP mode based on the affine motion model may be referred to as the control point motion vector predictor candidate list; the motion vector predictor of each candidate includes the motion vectors of 2 control points (for the 4-parameter affine motion model) or of 3 control points (for the 6-parameter affine motion model).
- control point motion vector predictor candidate list is pruned and sorted according to specific rules, and it can be truncated or filled to a specific number.
- On the encoder side, the motion vector of each control point in the control point motion vector predictor candidate list is used to obtain the motion vector of each sub-block in the current coding block through formula (3)/(5), and then
- the pixel value at the corresponding position in the reference frame pointed to by the motion vector of each sub-block is used as its predicted value, to perform motion compensation using the affine motion model.
- The average value of the differences between the original values and predicted values of all pixels in the current coding block is calculated; the control point motion vector predictor corresponding to the minimum average value is selected as the optimal control point motion vector predictor and used as the motion vector predictors of the 2 or 3 control points of the current coding block.
- the index number indicating the position of the control point motion vector prediction value in the control point motion vector prediction value candidate list is encoded into the code stream and sent to the decoder.
- the index number is parsed, and the control point motion vector predictor (CPMVP) is determined from the control point motion vector predictor candidate list according to the index number.
- On the encoder side, the control point motion vectors are obtained by performing a motion search within a certain search range, using the control point motion vector predictor as the search starting point, and the control point motion vector differences (CPMVD) between the control point motion vectors and the control point motion vector predictors are passed to the decoding end.
- On the decoder side, the control point motion vector differences are parsed and added to the control point motion vector predictors to obtain the control point motion vectors.
- the sub-block fusion candidate list is pruned and sorted according to specific rules, and it can be truncated or filled to a specific number.
- If ATMVP prediction is used, the motion vector of each sub-block in the current block (a pixel block of size N1 x N2 obtained by dividing the block by a specific method) is obtained according to the method shown in 7); if PLANAR prediction is used, the motion vector of each sub-block is obtained according to the method shown in 8).
- The pixel value at the position in the reference frame pointed to by the motion vector of each sub-block is obtained as its predicted value, and affine motion compensation is performed.
- the index number indicating the position of the motion vector of the control point in the candidate list is encoded into the code stream and sent to the decoder.
- the index number is parsed, and the control point motion vectors (CPMV) are determined from the control point motion vector fusion candidate list according to the index number.
- "At least one" refers to one or more, and "multiple" refers to two or more.
- "And/or" describes the relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone, where A and B may be singular or plural.
- The character "/" generally indicates an "or" relationship between the associated objects.
- "At least one of the following" or a similar expression refers to any combination of these items, including a single item or any combination of multiple items.
- For example, "at least one of a, b, or c" may represent: a; b; c; a and b; a and c; b and c; or a, b, and c, where each of a, b, and c may be singular or plural.
- When an inter prediction mode is used to decode the current block, a syntax element may be used to signal the inter prediction mode.
- the part of the syntax structure currently adopted for parsing the inter prediction mode used by the current block can be seen in Table 1. It should be noted that the syntax elements in the syntax structure can also be represented by other identifiers, which is not specifically limited in this application.
- variable treeType indicates the coding tree type used for coding of the current block.
- variable slice_type is used to indicate the type of the slice where the current block is located, such as P type, B type, or I type.
- the syntax element pred_mode_flag [x0] [y0] is used to indicate whether the prediction mode of the current block is inter prediction or intra prediction.
- the variable CuPredMode[x0][y0] is determined by pred_mode_flag[x0][y0].
- MODE_INTRA indicates intra prediction.
- x0, y0 represent the coordinates of the current block in the video image.
- cbWidth represents the width of the current block
- cbHeight represents the height of the current block
- merge_subblock_flag [x0] [y0] can be used to indicate whether the merge mode according to sub-blocks is adopted for the current block.
- the type (slice_type) of the slice where the current block is located is P type or B type.
- merge_subblock_flag[x0][y0] == 1 indicates that the sub-block-based merge mode is used for the current block;
- merge_subblock_flag[x0][y0] == 0 indicates that the sub-block-based merge mode is not used for the current block, but
- the merge mode of the translational motion model can be used.
- the syntax element merge_idx [x0] [y0] can be used to indicate the index value for the merge candidate list.
- the syntax element merge_subblock_idx [x0] [y0] can be used to indicate the index value for the merge candidate list according to the subblock.
- inter_affine_flag[x0][y0] can be used to indicate whether the AMVP mode based on the affine motion model is used for the current block when the slice containing the current block is a P-type or B-type slice.
- inter_affine_flag[x0][y0] == 1 indicates that the AMVP mode based on the affine motion model is used for the current block;
- inter_affine_flag[x0][y0] == 0 indicates that the AMVP mode based on the affine motion model is not used for the current block, but
- the AMVP mode of the translational motion model can be used.
- the syntax element cu_affine_type_flag[x0][y0] can be used to indicate whether the 6-parameter affine motion model is used for motion compensation of the current block when the slice containing the current block is a P-type or B-type slice.
- cu_affine_type_flag[x0][y0] == 0 indicates that the 6-parameter affine motion model is not used for motion compensation of the current block, and only the 4-parameter affine motion model can be used for motion compensation;
- cu_affine_type_flag[x0][y0] == 1 indicates that the 6-parameter affine motion model is used for motion compensation of the current block.
- MotionModelIdc[x0][y0] == 1 indicates the use of the 4-parameter affine motion model, and
- MotionModelIdc[x0][y0] == 2 indicates the use of the 6-parameter affine motion model (MotionModelIdc[x0][y0] == 0 corresponds to the translational motion model).
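- As a hedged illustration (mirroring the VVC-style derivation rather than reproducing Table 1), MotionModelIdc can be derived from the two flags above as follows:

    /* 0 = translational, 1 = 4-parameter affine, 2 = 6-parameter affine. */
    static int motion_model_idc(int inter_affine_flag, int cu_affine_type_flag) {
        return inter_affine_flag + cu_affine_type_flag;
    }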
- MaxNumMergeCand is used to indicate the maximum length of the merge candidate motion vector list
- MaxNumSubblockMergeCand is used to indicate the maximum length of the merge candidate motion vector list according to the subblock.
- inter_pred_idc [x0] [y0] is used to indicate the prediction direction.
- PRED_L0 indicates forward prediction.
- num_ref_idx_l0_active_minus1 indicates the number of reference frames in the forward reference frame list.
- ref_idx_l0 [x0] [y0] indicates the forward reference frame index value of the current block.
- mvd_coding (x0, y0,0,0) indicates the first motion vector difference.
- mvp_l0_flag [x0] [y0] indicates the forward MVP candidate list index value.
- PRED_L1 is used to indicate backward prediction.
- num_ref_idx_l1_active_minus1 indicates the number of reference frames in the backward reference frame list.
- ref_idx_l1 [x0] [y0] indicates the backward reference frame index value of the current block, and
- mvp_l1_flag [x0] [y0] indicates the backward MVP candidate list index value.
- ae(v) indicates a syntax element coded using context-based adaptive binary arithmetic coding (CABAC).
- Step 801 Analyze the code stream according to the syntax structure shown in Table 1, and determine the inter prediction mode of the current block.
- If the inter prediction mode is the AMVP mode based on the affine motion model, step 802a is performed.
- If the inter prediction mode is the sub-block-based merge mode, step 802b is executed.
- Step 802a Construct a candidate motion vector list corresponding to the AMVP mode of the affine motion model, and perform step 803a.
- the candidate control point motion vectors of the current block are derived to be added to the candidate motion vector list.
- the candidate motion vector list may include a 2-tuple list (when the current coding block uses the 4-parameter affine motion model) or a 3-tuple list (when it uses the 6-parameter affine motion model).
- the two-tuple list includes one or more two-tuples used to construct a 4-parameter affine motion model.
- the triple list includes one or more triples used to construct a 6-parameter affine motion model.
- the candidate motion vector binary / triple list is pruned and sorted according to specific rules, and it can be truncated or filled to a specific number.
- A1 The flow of constructing the candidate motion vector list using the inherited control point motion vector prediction method is described below.
- the adjacent position blocks around the current block are traversed to find the affine coding block in which each adjacent position block is located,
- the control point motion information of that affine coding block is obtained, a motion model is constructed from it, and the candidate control point motion information of the current block is derived.
- For details, refer to the description of the inherited control point motion vector prediction method above, which will not be repeated here.
- the affine decoding block is an affine coding block that uses an affine motion model for prediction in the encoding stage.
- the motion vectors of the upper left and upper right control points of the current block are derived according to formulas (6) and (7) of the 4-parameter affine motion model, respectively.
- the motion vectors of the three control points of the adjacent affine decoding block are obtained, such as, in FIG. 4, the motion vector (vx4, vy4) of the upper left control point (x4, y4), the motion vector (vx5, vy5) of the upper right control point (x5, y5), and the motion vector (vx6, vy6) of the lower left vertex (x6, y6).
- the motion vectors of the upper left and upper right 2 control points of the current block are derived according to formulas (8) and (9) of the 6-parameter affine motion model, respectively.
- The motion vectors of the three control points of the adjacent affine decoding block are obtained, such as, in FIG. 4, the motion vector (vx4, vy4) of the upper left control point (x4, y4), the motion vector (vx5, vy5) of the upper right control point (x5, y5), and the motion vector (vx6, vy6) of the lower left vertex (x6, y6).
- the motion vectors of the 3 control points of the current block, at the upper left, upper right, and lower left, are derived according to the corresponding formulas (8), (9), and (10) of the 6-parameter affine motion model.
- the motion vectors of the two control points of the adjacent affine decoding block are obtained: the motion vector (vx4, vy4) of the upper left control point (x4, y4) and the motion vector (vx5, vy5) of the upper right control point (x5, y5).
- For the 4-parameter affine motion model composed of the 2 control points of the adjacent affine decoding block, the motion vectors of the three control points of the current block, at the upper left, upper right, and lower left, are derived according to formulas (6) and (7) of the 4-parameter affine motion model.
- A2 The flow of constructing the candidate motion vector list using the constructed control point motion vector prediction method is described below.
- If the affine motion model adopted by the current decoding block is the 4-parameter affine motion model (that is, MotionModelIdc is 1), the motion information of the encoded blocks adjacent to the current coding block is used to determine the motion vectors of the upper left vertex and the upper right vertex of the current coding block.
- the constructed control point motion vector prediction method 1 or the constructed control point motion vector prediction method 2 may be used to construct the candidate motion vector list. For the specific method, refer to the descriptions in 4) and 5) above, which will not be repeated here.
- If the affine motion model of the current decoding block is the 6-parameter affine motion model (that is, MotionModelIdc is 2), the motion information of the encoded blocks adjacent to the current coding block is used to determine the motion vectors of the upper left vertex, the upper right vertex, and the lower left vertex of the current coding block.
- the constructed control point motion vector prediction method 1 or the constructed control point motion vector prediction method 2 may be used to construct the candidate motion vector list. For the specific method, refer to the descriptions in 4) and 5) above, which will not be repeated here.
- Other control point motion information combination methods can also be applied to this application, which will not be repeated here.
- Step 803a Analyze the code stream to determine the optimal control point motion vector prediction value, and execute step 804a.
- the affine motion model used in the current decoding block is a 4-parameter affine motion model (MotionModelIdc is 1)
- the index number is parsed, and the optimal motion vector prediction for the 2 control points is determined from the candidate motion vector list according to the index number value.
- the index number is mvp_l0_flag or mvp_l1_flag.
- the affine motion model used in the current decoding block is a 6-parameter affine motion model (MotionModelIdc is 2)
- the index number is parsed, and the optimal motion vector prediction of the 3 control points is determined from the candidate motion vector list according to the index number value.
- Step 804a Analyze the code stream to determine the motion vector of the control point.
- If the affine motion model used by the current decoding block is the 4-parameter affine motion model (MotionModelIdc is 1), the motion vector differences of the two control points of the current block are decoded from the code stream, and
- the motion vector value of each control point is obtained from its motion vector difference and motion vector predictor.
- the motion vector differences of the two control points are mvd_coding(x0, y0, 0, 0) and mvd_coding(x0, y0, 0, 1).
- the motion vector differences of the upper left position control point and the upper right position control point are decoded from the code stream and added to the respective motion vector predictors to obtain the motion vector values of the upper left and upper right position control points of the current block.
- If the affine motion model of the current decoding block is the 6-parameter affine motion model (MotionModelIdc is 2), the motion vector differences of the three control points of the current block are decoded from the code stream, and the motion vector value of each control point is obtained from its motion vector difference and motion vector predictor.
- the motion vector differences of the three control points are mvd_coding(x0, y0, 0, 0), mvd_coding(x0, y0, 0, 1), and mvd_coding(x0, y0, 0, 2).
- the motion vector differences of the upper left control point, the upper right control point, and the lower left control point are decoded from the code stream and added to the respective motion vector predictors to obtain the motion vector values of the upper left, upper right, and lower left control points of the current block.
- Step 805a Obtain the motion vector value of each sub-block in the current block according to the motion information of the control point and the affine motion model adopted by the current decoding block.
- For each motion compensation unit of the current block, the motion information of the pixel at a preset position in the motion compensation unit is used to represent the motion information of all pixels in that motion compensation unit.
- Assuming the size of the motion compensation unit is M x N, the pixel at the preset position may be the center point (M/2, N/2) of the motion compensation unit, the upper left pixel (0, 0), the upper right pixel (M-1, 0), or a pixel at another location.
- the following uses the center point of the motion compensation unit as an example, as shown in FIG. 8C.
- V0 represents the motion vector of the upper left control point
- V1 represents the motion vector of the upper right control point.
- Each small box represents a motion compensation unit.
- the coordinates of the center point of each motion compensation unit relative to the upper left vertex pixel of the current affine decoding block are calculated using formula (31), where i is the index of the motion compensation unit in the horizontal direction (from left to right), j is the index in the vertical direction (from top to bottom), and (x(i,j), y(i,j)) represents the coordinates of the center point of the (i, j)-th motion compensation unit relative to the upper left control point pixel of the current affine decoding block.
- If the affine motion model used by the current affine decoding block is the 6-parameter affine motion model, the center point coordinates are substituted into formula (5) to obtain the motion vector of the center point of each motion compensation unit, which
- is used as the motion vector (vx(i,j), vy(i,j)) of all pixels in that motion compensation unit.
- If the affine motion model used by the current affine decoding block is the 4-parameter affine motion model, the center point coordinates are substituted into formula (3) to obtain
- the motion vector of the center point of each motion compensation unit, which is used as the motion vector (vx(i,j), vy(i,j)) of all pixels in that motion compensation unit. A sketch of this computation follows below.
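- The following C sketch illustrates step 805a and formula (31): the center of each M x N motion compensation unit is computed, and the affine model is evaluated there (floating point is used for clarity; the patent's fixed-point formulas (3)/(5) are not reproduced):

    typedef struct { double vx, vy; } MVf;

    /* Evaluate the affine model at (x, y); pass v2 == NULL for the
     * 4-parameter model (CP1, CP2) or a third control point for the
     * 6-parameter model (CP1, CP2, CP3). W and H are the block size. */
    static MVf affine_mv_at(double x, double y, MVf v0, MVf v1, const MVf *v2,
                            int W, int H) {
        MVf mv;
        if (v2) {
            mv.vx = v0.vx + (v1.vx - v0.vx) * x / W + (v2->vx - v0.vx) * y / H;
            mv.vy = v0.vy + (v1.vy - v0.vy) * x / W + (v2->vy - v0.vy) * y / H;
        } else {
            mv.vx = v0.vx + (v1.vx - v0.vx) * x / W - (v1.vy - v0.vy) * y / W;
            mv.vy = v0.vy + (v1.vy - v0.vy) * x / W + (v1.vx - v0.vx) * y / W;
        }
        return mv;
    }

    /* Formula (31): center of the (i, j)-th M x N motion compensation unit
     * relative to the upper left pixel of the current block. */
    static void unit_center(int i, int j, int M, int N, double *x, double *y) {
        *x = M * i + M / 2.0;
        *y = N * j + N / 2.0;
    }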
- Step 806a For each sub-block, perform motion compensation according to the determined motion vector value of the sub-block to obtain the pixel prediction value of the sub-block.
- Step 802b Construct a motion information candidate list of the sub-block merge mode.
- one or more of advanced temporal motion vector prediction, the inherited control point motion vector prediction method, the constructed control point motion vector prediction method, or the PLANAR method can be used to construct the motion information candidate list of the sub-block fusion mode (sub-block based merging candidate list).
- the motion information candidate list of the sub-block fusion mode may be simply referred to as the sub-block fusion candidate list.
- the motion information candidate list is pruned and sorted according to specific rules, and it can be truncated or filled to a specific number.
- D1 If sps_affine_enabled_flag is 1, the inherited control point motion vector prediction method is used to derive the candidate control point motion information of the current block and add it to the sub-block fusion candidate list (see 3)).
- If the sub-block fusion candidate list is empty at this time, the candidate control point motion information is added to the sub-block fusion candidate list; otherwise, the motion information in the sub-block fusion candidate list is traversed in turn to check whether motion information identical to the candidate control point motion information already exists in the list. If no identical motion information exists, the candidate control point motion information is added to the sub-block fusion candidate list.
- MaxNumSubblockMergeCand is a positive integer, such as 1, 2, 3, 4, or 5; 5 is used as an example below and will not be repeated. If the length of the candidate list reaches MaxNumSubblockMergeCand, the construction of the list is completed; otherwise, the next adjacent block is traversed.
- D2 If sps_affine_enabled_flag is 1, the constructed control point motion vector prediction method is used to derive the candidate control point motion information of the current block and add it to the sub-block fusion candidate list, as shown in FIG. 8B.
- Step 801c Obtain the motion information of each control point of the current block. Exemplarily, refer to the control point motion vector prediction method 2 constructed in 5), step 601, and details are not described herein again.
- Step 802c Combine the motion information of the control points to obtain the constructed control point motion information. Refer to step 602 in FIG. 6D, which will not be repeated here.
- Step 803c Add the constructed control point motion information to the sub-block fusion candidate list.
- the combinations are traversed in a preset order to obtain the legal combinations as candidate control point motion information. If the sub-block fusion candidate list is empty at this time, the candidate control point motion information is added to the sub-block fusion candidate list; otherwise, the motion information in the sub-block fusion candidate list is traversed sequentially to check whether motion information identical to the candidate control point motion information already exists in the list. If no identical motion information exists, the candidate control point motion information is added to the sub-block fusion candidate list.
- For example, when sps_affine_type_flag is 1, a preset order is as follows: Affine (CP1, CP2, CP3) -> Affine (CP1, CP2, CP4) -> Affine (CP1, CP3, CP4) -> Affine (CP2, CP3, CP4) -> Affine (CP1, CP2) -> Affine (CP1, CP3), a total of 6 combinations.
- The embodiment of the present application does not specifically limit the order in which the six combinations are added to the candidate motion vector list.
- When sps_affine_type_flag is 0, a preset order is as follows: Affine (CP1, CP2) -> Affine (CP1, CP3), a total of 2 combinations.
- The embodiment of the present application does not specifically limit the order in which the two combinations are added to the candidate motion vector list.
- If the motion information of any control point in a combination is unavailable, the combination is considered unavailable. If the combination is available, the reference frame index of the combination is determined (when there are two control points, the smallest reference frame index is selected as the reference frame index of the combination; when there are more than two control points, the reference frame index that occurs most often is selected first, and if several reference frame indexes occur equally often, the smallest one is selected as the reference frame index of the combination), and the control point motion vectors are scaled. If the motion information of all the control points after scaling is identical, the combination is invalid.
- D3 Optionally, if sps_sbtmvp_enabled_flag is 1, the motion information constructed by ATMVP is added to the candidate motion information list, see 7).
- the embodiment of the present application may also pad the candidate motion vector list. For example, after the above traversal process, if the length of the candidate motion vector list is less than the maximum list length MaxNumSubblockMergeCand, the candidate motion vector list may be padded until the length of the list equals MaxNumSubblockMergeCand.
- Padding may be performed by adding zero motion vectors, or by combining and weighted-averaging the motion information of candidates already in the list. It should be noted that other methods for padding the candidate motion vector list can also be applied to the present application and will not be repeated here; a sketch follows below.
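- A minimal C sketch of the zero-motion-vector padding described above (the Cand type is a hypothetical placeholder):

    typedef struct { int vx, vy; int ref_idx; } Cand;

    /* Pad the list with zero motion vectors until it reaches max_len
     * (MaxNumSubblockMergeCand). */
    static void pad_candidate_list(Cand *list, int *len, int max_len) {
        while (*len < max_len) {
            Cand zero = {0, 0, 0};   /* zero MV pointing to reference index 0 */
            list[(*len)++] = zero;
        }
    }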
- Step 803b Analyze the code stream to determine the optimal control point motion information.
- the index number merge_subblock_idx is parsed, and the optimal motion information is determined from the subblock fusion candidate list according to the index number.
- the binarization of merge_subblock_idx usually adopts a truncated unary (TU) code; that is, the index is mapped to different binary strings according to the maximum index value.
- The maximum index value is pre-configured or transmitted. For example, if the maximum index value is 4, binarization is performed according to Table 3 below.
- Alternatively, binarization is performed according to Table 4 below.
- merge_subblock_idx is transmitted as a binary string, and the decoding end can determine the index number from the maximum index value and Table 2 or Table 3.
- For example, when the maximum index value is 4, while decoding the index number, decoding stops when a 0 bit is encountered or when the decoded index number equals the maximum index value.
- For example, if the first bit is 0, decoding of the index number stops and the index number is determined to be 0; if the first bit is 1 and the second bit is 0, decoding stops and the index number is determined to be 1. A sketch of this decoding follows below.
- the embodiment of the present application does not specifically limit the setting of the maximum index value and the binarized table.
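- A minimal C sketch of the truncated unary decoding of merge_subblock_idx described above; read_bit is a hypothetical bitstream accessor, not an API from the patent:

    /* Decode a truncated unary index: count leading 1 bits, stopping at a
     * 0 bit or when the index reaches the maximum value. */
    static int decode_tu_index(int max_idx, int (*read_bit)(void)) {
        int idx = 0;
        while (idx < max_idx && read_bit() == 1)
            idx++;
        return idx;
    }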
- Step 804b If the optimal motion information is ATMVP or PLANAR motion information, the motion vector of each sub-block is obtained directly by the ATMVP or PLANAR method.
- If the optimal motion information corresponds to an affine mode,
- the motion vector value of each sub-block in the current block is obtained according to the optimal control point motion information and the affine motion model used by the current decoding block, in the same way as step 805a.
- Step 805b For each sub-block, perform motion compensation according to the determined motion vector value of the sub-block to obtain the pixel prediction value of the sub-block.
- the embodiments of the present application provide a video image prediction method and device, which provide a way to determine the maximum length (MaxNumSubblockMergeCand) of the candidate motion vector list of the sub-block fusion mode.
- the method and the apparatus are based on the same inventive concept. Since the principles by which the method and the apparatus solve the problem are similar, the implementations of the apparatus and the method can refer to each other, and repetition is omitted.
- the first case is:
- the sub-block fusion mode may include at least one of an affine mode, an advanced time-domain motion vector prediction mode, and a plane motion vector prediction mode.
- the second case is that the sub-block fusion mode does not consider the existence of a plane motion vector prediction mode, that is, the sub-block fusion mode may include at least one of an affine mode and an advanced time-domain motion vector prediction mode.
- the implementations of the present application are described in detail below from the decoding side with reference to the drawings. Specifically, they may be executed by the video decoder 30, implemented by a motion compensation module in the video decoder, or executed by a processor.
- The first implementation mode through the fifth implementation mode are described below.
- Step 901 Parse the first identifier from the code stream. Perform S902 or S904.
- the first flag is used to indicate whether the candidate mode used for the inter prediction of the block to be processed includes the affine mode.
- For example, identifier 1 is used to indicate whether the affine mode can be (or is allowed to be) used for motion compensation of the block to be processed.
- the first identifier may be configured in the SPS of the code stream; accordingly, parsing the first identifier from the code stream can be achieved by parsing identifier 1 from the SPS of the code stream. Alternatively, the first identifier may be configured in the slice header of the slice where the block to be processed is located; accordingly, the first identifier is parsed from the slice header of the slice where the block to be processed is located.
- the first identification may be represented by the syntax element sps_affine_enabled_flag.
- sps_affine_enabled_flag == 1 indicates that the affine motion model can be used when performing inter prediction on the image blocks included in the video image.
- the first identifier indicates that the candidate mode adopted by the block to be processed for inter prediction includes an affine mode
- a second identifier is parsed from the code stream; the second identifier is used to indicate (or determine) the maximum length of the first candidate motion vector list, which is the candidate motion vector list constructed when the block to be processed adopts the sub-block fusion prediction mode.
- The maximum length of the first candidate motion vector list may be denoted MaxNumSubblockMergeCand.
- the second identifier may be configured in the SPS, the PPS, or the slice header. Accordingly, parsing the second identifier from the code stream may be achieved by parsing the second identifier from the sequence parameter set of the code stream, or by parsing the second identifier from the slice header of the slice where the block to be processed is located.
- the second identification may be represented by K_minus_max_num_subblock_merge_cand.
- For example, the allowed value range of K_minus_max_num_subblock_merge_cand is 0-4.
- In this case, the maximum value allowed for MaxNumSubblockMergeCand is 5.
- the second identifier may be represented by five_minus_max_num_subblock_merge_cand.
- MaxNumSubblockMergeCand = K - K_minus_max_num_subblock_merge_cand;
- where MaxNumSubblockMergeCand represents the maximum length of the first candidate motion vector list,
- K_minus_max_num_subblock_merge_cand represents the second identifier, and
- K is a preset non-negative integer.
- Otherwise, the candidate modes used for inter prediction of the block to be processed include only the translational motion vector prediction mode; that is, the affine mode cannot be (is not allowed to be) used for inter prediction of the block to be processed.
- the third identifier is used to indicate whether the ATMVP exists in the sub-block fusion prediction mode. In other words, the third identifier is used to indicate whether inter-prediction of the block to be processed is allowed to adopt ATMVP.
- the third identification may be configured in the SPS or PPS or the stripe header.
- When the third identifier indicates that ATMVP exists in the sub-block fusion prediction mode,
- the maximum length of the first candidate motion vector list (MaxNumSubblockMergeCand) is equal to the third quantity value.
- the third identification may be represented by sps_sbtmvp_enabled_flag.
- When sps_sbtmvp_enabled_flag is a first value, it indicates that ATMVP does not exist in the sub-block fusion prediction mode; when sps_sbtmvp_enabled_flag is a second value, it indicates that ATMVP exists in the sub-block fusion prediction mode.
- the first value is 0 and the second value is 1.
- the third quantity value may be used to represent the maximum number of motion vectors supported by ATMVP.
- As an example, take the maximum value allowed for MaxNumSubblockMergeCand as 5. If sps_affine_enabled_flag is 0, MaxNumSubblockMergeCand is obtained by the following formula:
- MaxNumSubblockMergeCand = sps_sbtmvp_enabled_flag.
- If sps_affine_enabled_flag is 1, MaxNumSubblockMergeCand is obtained by the following formula:
- MaxNumSubblockMergeCand = 5 - five_minus_max_num_subblock_merge_cand.
- five_minus_max_num_subblock_merge_cand can be defined as 5 minus the maximum length of the sub-block fusion motion vector prediction list supported in the slice (five_minus_max_num_subblock_merge_cand specifies the maximum number of subblock merging motion vector prediction candidates supported in the slice subtracted from 5).
- The maximum number of subblock merging MVP candidates, MaxNumSubblockMergeCand, is derived as follows:
- if sps_affine_enabled_flag is 0: MaxNumSubblockMergeCand = sps_sbtmvp_enabled_flag;
- otherwise: MaxNumSubblockMergeCand = 5 - five_minus_max_num_subblock_merge_cand.
- The value of MaxNumSubblockMergeCand shall be in the range of 0 to 5, inclusive. A sketch of this derivation follows below.
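- The quoted derivation can be summarized by the following C sketch (maximum allowed value 5):

    static int max_num_subblock_merge_cand(int sps_affine_enabled_flag,
                                           int sps_sbtmvp_enabled_flag,
                                           int five_minus_max_num_subblock_merge_cand) {
        if (!sps_affine_enabled_flag)
            return sps_sbtmvp_enabled_flag;                 /* 0 or 1 */
        return 5 - five_minus_max_num_subblock_merge_cand;  /* in [0, 5] */
    }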
- For S1001, please refer to S901, which will not be repeated here. Then perform S1002 or S1004.
- the first candidate motion vector list is a candidate motion vector list constructed when the to-be-processed block adopts the sub-block fusion prediction mode.
- the first quantity value is 0 at this time.
- the maximum number of motion vectors supported by ATMVP may be 1.
- the first quantity value may be equal to the value of the third identifier.
- In this case, the maximum length of the first candidate motion vector list may be obtained according to the following formula:
- MaxNumSubblockMergeCand = K - K_minus_max_num_subblock_merge_cand - L1;
- MaxNumSubblockMergeCand represents the maximum length of the first candidate motion vector list
- K_minus_max_num_subblock_merge_cand represents the second identifier
- L1 represents the first quantity value
- K is a preset non-negative integer.
- the allowed value range of K_minus_max_num_subblock_merge_cand may be 0-3.
- In this case, the maximum value allowed for MaxNumSubblockMergeCand is 5.
- the second identifier may be represented by five_minus_max_num_subblock_merge_cand.
- L1 can be obtained by the following formula:
- As an example, take the maximum value allowed for MaxNumSubblockMergeCand as 5. If sps_affine_enabled_flag is 0, MaxNumSubblockMergeCand is obtained by the following formula:
- MaxNumSubblockMergeCand = sps_sbtmvp_enabled_flag.
- If sps_affine_enabled_flag is 1, MaxNumSubblockMergeCand is obtained by the following formula:
- MaxNumSubblockMergeCand = 5 - five_minus_max_num_subblock_merge_cand.
- five_minus_max_num_subblock_merge_cand can be defined as 5 minus the maximum length of the sub-block fusion motion vector prediction list supported in the slice (five_minus_max_num_subblock_merge_cand specifies the maximum number of subblock merging motion vector prediction candidates supported in the slice subtracted from 5).
- The value of MaxNumSubblockMergeCand shall be in the range of 0 to 5, inclusive.
- the maximum value allowed by MaxNumSubblockMergeCand may be 5.
- the second identifier may be represented by K_minus_max_num_subblock_merge_cand, and the allowed value range of K_minus_max_num_subblock_merge_cand is 0-4.
- the second identifier may be represented by five_minus_max_num_subblock_merge_cand.
- the third identifier is used to indicate the existence state of the ATMVP mode in the sub-block fusion prediction mode.
- the relevant description of the third identifier refer to the description in the embodiment corresponding to FIG. 9, and details are not repeated here.
- The fourth identifier is used to indicate the existence state of the planar motion vector prediction (PLANAR) mode in the sub-block fusion prediction mode.
- In other words, the fourth identifier is used to indicate whether inter prediction of the block to be processed is allowed to adopt the PLANAR mode.
- When the fourth identifier is a third value, it indicates that the PLANAR mode does not exist in the sub-block fusion prediction mode; when the fourth identifier is a fourth value, it indicates that the PLANAR mode exists in the sub-block fusion prediction mode.
- the third value is 0 and the fourth value is 1.
- the fourth identification may be configured in the SPS or PPS or the slice header.
- the fourth flag can be represented by sps_planar_enabled_flag.
- The fourth quantity value is the maximum number of motion vectors that the PLANAR mode supports for prediction.
- When the fourth identifier indicates that the PLANAR mode exists, the maximum length of the first candidate motion vector list is equal to the fourth quantity value.
- For example, if the maximum number of motion vectors supported by the PLANAR mode is 1, the maximum length of the first candidate motion vector list is 1.
- For example, when the fourth identifier is 1, indicating that the PLANAR mode exists in the sub-block fusion prediction mode, the maximum length of the first candidate motion vector list may be equal to the value of the fourth identifier.
- For example, when the maximum number of motion vectors supported by the PLANAR mode is 1 and the maximum number of motion vectors that the ATMVP mode supports for prediction is 1: the third identifier equal to 1 indicates that the ATMVP mode exists in the sub-block fusion prediction mode and the third identifier equal to 0 indicates that it does not; the fourth identifier equal to 1 indicates that the PLANAR mode exists in the sub-block fusion prediction mode and the fourth identifier equal to 0 indicates that it does not. In this case, the maximum length of the first candidate motion vector list may be equal to the sum of the third identifier and the fourth identifier.
- For example, the third identifier is represented by sps_sbtmvp_enabled_flag, and the fourth identifier is represented by sps_planar_enabled_flag.
- the maximum length of the first candidate motion vector list can be obtained by the following formula:
- MaxNumSubblockMergeCand = sps_sbtmvp_enabled_flag + sps_planar_enabled_flag.
- MaxNumSubblockMergeCand = 0.
- As an example, take the maximum value allowed for MaxNumSubblockMergeCand as 5.
- If sps_affine_enabled_flag is 0, MaxNumSubblockMergeCand is obtained by the following formula:
- MaxNumSubblockMergeCand = sps_sbtmvp_enabled_flag + sps_planar_enabled_flag.
- If sps_affine_enabled_flag is 1, MaxNumSubblockMergeCand is obtained by the following formula:
- MaxNumSubblockMergeCand = 5 - five_minus_max_num_subblock_merge_cand.
- five_minus_max_num_subblock_merge_cand can be defined as 5 minus the maximum length of the sub-block fusion motion vector prediction list supported in the slice (five_minus_max_num_subblock_merge_cand specifies the maximum number of subblock merging motion vector prediction candidates supported in the slice subtracted from 5).
- The maximum number of subblock merging MVP candidates, MaxNumSubblockMergeCand, is derived as follows:
- if sps_affine_enabled_flag is 0: MaxNumSubblockMergeCand = sps_sbtmvp_enabled_flag + sps_planar_enabled_flag;
- otherwise: MaxNumSubblockMergeCand = 5 - five_minus_max_num_subblock_merge_cand.
- The value of MaxNumSubblockMergeCand shall be in the range of 0 to 5, inclusive. A sketch of this variant follows below.
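- The variant that also signals the planar mode can be summarized by the following C sketch:

    static int max_num_subblock_merge_cand_v2(int sps_affine_enabled_flag,
                                              int sps_sbtmvp_enabled_flag,
                                              int sps_planar_enabled_flag,
                                              int five_minus_max_num_subblock_merge_cand) {
        if (!sps_affine_enabled_flag)   /* affine disabled: count ATMVP + PLANAR */
            return sps_sbtmvp_enabled_flag + sps_planar_enabled_flag;  /* 0..2 */
        return 5 - five_minus_max_num_subblock_merge_cand;             /* 0..5 */
    }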
- For S1201, refer to S901, which will not be repeated here. Then perform S1202 or S1206.
- the second identifier is used to indicate (or determine) the maximum length of the first candidate motion vector list.
- the first candidate motion vector list is a candidate motion vector list constructed when the to-be-processed block adopts a sub-block fusion prediction mode. Perform S1203 or S1204 or S1205.
- When the third identifier indicates that the advanced temporal motion vector prediction mode does not exist in the sub-block fusion prediction mode and the fourth identifier indicates that the planar motion vector prediction mode exists in the sub-block fusion prediction mode,
- the first quantity value is determined according to the third identifier, and the maximum length of the first candidate motion vector list is determined according to the second identifier and the first quantity value.
- The second quantity value may be equal to the maximum number of motion vectors that the planar mode supports for prediction.
- For example, when sps_planar_enabled_flag is 0, the planar mode does not exist in the sub-block fusion prediction mode;
- when sps_planar_enabled_flag is 1, the planar mode exists in the sub-block fusion prediction mode.
- Correspondingly, the first quantity value is the maximum number of motion vectors that ATMVP supports for prediction.
- The maximum number of motion vectors supported by the planar mode may be 1.
- The second quantity value may be equal to the value of sps_planar_enabled_flag.
- In this case, the maximum length of the first candidate motion vector list may be obtained according to the following formula:
- MaxNumSubblockMergeCand = K - K_minus_max_num_subblock_merge_cand - L1 - L2;
- the allowed value range of K_minus_max_num_subblock_merge_cand may be 0-2, or 0-3.
- MaxNumSubblockMergeCand may be 5 or 6.
- the second identifier may be represented by five_minus_max_num_subblock_merge_cand.
- the second identifier can be represented by six_minus_max_num_subblock_merge_cand.
- L1 can be obtained by the following formula:
- L2 can be obtained by the following formula:
- For S1206-S1209, please refer to S1104-S1107, which will not be repeated here.
- As an example, take the maximum value allowed for MaxNumSubblockMergeCand as 5.
- In this case, the value range of five_minus_max_num_subblock_merge_cand is 0-2.
- If sps_affine_enabled_flag is 0, MaxNumSubblockMergeCand is obtained by the following formula:
- MaxNumSubblockMergeCand = sps_sbtmvp_enabled_flag + sps_planar_enabled_flag.
- The value of MaxNumSubblockMergeCand shall be in the range of 0 to 5, inclusive.
- As another example, take the maximum value allowed for MaxNumSubblockMergeCand as 6.
- In this case, the value range of five_minus_max_num_subblock_merge_cand is 0-3.
- If sps_affine_enabled_flag is 0, MaxNumSubblockMergeCand is obtained by the following formula:
- MaxNumSubblockMergeCand = sps_sbtmvp_enabled_flag + sps_planar_enabled_flag.
- The value of MaxNumSubblockMergeCand shall be in the range of 0 to 6, inclusive.
- In another embodiment, the sub-block fusion mode does not consider the existence of the planar motion vector prediction mode; that is, the sub-block fusion mode may include at least one of the affine mode and the advanced temporal motion vector prediction mode. This is similar to the first or second embodiment and will not be repeated here.
- an embodiment of the present application also provides an apparatus.
- the apparatus 1300 may specifically be a processor in a video decoder, a chip or a chip system, or a module in the video decoder, such as the entropy decoding unit 304 and/or the inter prediction unit 344.
- The apparatus may include a parsing unit 1301 and a determining unit 1302.
- The parsing unit 1301 and the determining unit 1302 execute the method steps shown in the embodiments corresponding to FIGS. 9-12.
- For example, the parsing unit 1301 may be used to parse the identifiers included in the code stream (such as the first identifier, the second identifier, the third identifier, or the fourth identifier), and the determining unit 1302 is used to determine the maximum length of the first candidate motion vector list.
- the apparatus 1400 may include a communication interface 1410 and a processor 1420.
- the device 1400 may further include a memory 1430.
- the memory 1430 may be provided inside the device or outside the device.
- the parsing unit 1301 and the determining unit 1302 shown in FIG. 13 described above can be implemented by the processor 1420.
- the processor 1420 sends or receives a video stream or a code stream through the communication interface 1410 and is used to implement the methods described in FIGS. 9-12. In the implementation process, each step of the processing flow may be completed by an integrated logic circuit of hardware in the processor 1420 or by instructions in the form of software.
- the communication interface 1410 may be a circuit, a bus, a transceiver, or any other device that can be used for information exchange.
- the other device may be a device connected to the device 1400.
- the other device may be a video decoder.
- the processor 1420 may be a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component that can implement or execute the methods disclosed in the embodiments of the present application.
- the general-purpose processor may be a microprocessor or any conventional processor.
- the steps of the method disclosed in conjunction with the embodiments of the present application may be directly embodied and executed by a hardware processor, or may be executed and completed by a combination of hardware and software units in the processor.
- the program code executed by the processor 1420 for implementing the above method may be stored in the memory 1430.
- the memory 1430 and the processor 1420 are coupled.
- the coupling in the embodiments of the present application is an indirect coupling or communication connection between devices, units or modules, which may be in electrical, mechanical or other forms, and is used for information interaction between devices, units or modules.
- the processor 1420 may cooperate with the memory 1430.
- the memory 1430 may be a non-volatile memory, such as a hard disk drive (HDD) or a solid-state drive (SSD), or a volatile memory, such as a random-access memory (RAM).
- the memory 1430 is any other medium that can be used to carry or store desired program code in the form of instructions or data structures and can be accessed by a computer, but is not limited thereto.
- the specific connection medium between the communication interface 1410, the processor 1420, and the memory 1430 is not limited.
- the memory 1430, the processor 1420, and the communication interface 1410 are connected by a bus.
- The bus is shown by a thick line in FIG. 14; the connection manner between other components is merely schematic and is not limited thereto.
- the bus can be divided into an address bus, a data bus, and a control bus. For ease of representation, only a thick line is used in FIG. 14, but it does not mean that there is only one bus or one type of bus.
- On the encoding side, the encoder determines the inter prediction mode and encodes it into the code stream. After the inter prediction mode is finally selected, the indications of the inter prediction mode (such as the first identifier, the second identifier, the third identifier, or the fourth identifier) are encoded into the code stream (corresponding to parsing the first identifier, the second identifier, the third identifier, or the fourth identifier during the decoding process). It should be understood that the determination of the maximum length of the first candidate motion vector list is completely consistent on the encoding and decoding sides.
- the specific embodiments of the encoding end will not be described in detail, but it should be understood that the video image prediction method described in this application is also applicable to the encoding device.
- the device 1500 may include a communication interface 1510 and a processor 1520.
- the device 1500 may further include a memory 1530.
- the memory 1530 may be provided inside the device or outside the device.
- the processor 1520 sends or receives a video stream or a code stream through the communication interface 1510.
- the processor 1520 is configured to encode a first identifier into the code stream and, when the first identifier indicates that the candidate modes used for inter prediction of the block to be processed include an affine mode, encode a second identifier into the code stream; the second identifier is used to indicate the maximum length of a first candidate motion vector list, which is the candidate motion vector list constructed when the block to be processed adopts the sub-block fusion prediction mode.
- the communication interface 1510 may be a circuit, a bus, a transceiver, or any other device that can be used for information exchange.
- the other device may be a device connected to the device 1500.
- the device when the device is a video encoder, the other device may be a video decoder.
- the processor 1520 may be a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component that can implement or execute the methods disclosed in the embodiments of the present application.
- the general-purpose processor may be a microprocessor or any conventional processor.
- the steps of the method disclosed in conjunction with the embodiments of the present application may be directly embodied and executed by a hardware processor, or may be executed and completed by a combination of hardware and software units in the processor.
- the program code executed by the processor 1520 for implementing the above method may be stored in the memory 1530.
- the memory 1530 and the processor 1520 are coupled.
- the coupling in the embodiments of the present application is an indirect coupling or communication connection between devices, units or modules, which may be in electrical, mechanical or other forms, and is used for information interaction between devices, units or modules.
- the processor 1520 may cooperate with the memory 1530.
- the memory 1530 may be a non-volatile memory, such as a hard disk drive (HDD) or a solid-state drive (SSD), or a volatile memory, such as a random-access memory (RAM).
- the memory 1530 is any other medium that can be used to carry or store a desired program code in the form of instructions or data structures and can be accessed by a computer, but is not limited thereto.
- the embodiment of the present application does not limit the specific connection medium between the communication interface 1510, the processor 1520, and the memory 1530.
- the memory 1530, the processor 1520, and the communication interface 1510 are connected by a bus.
- The bus is shown by a thick line in FIG. 15; the connection manner between other components is merely schematic and is not limited thereto.
- the bus can be divided into an address bus, a data bus, and a control bus. For ease of representation, only a thick line is used in FIG. 15, but it does not mean that there is only one bus or one type of bus.
- the embodiments of the present application further provide a computer storage medium storing a software program that, when read and executed by one or more processors, implements the method provided by any one or more of the above embodiments.
- the computer storage medium may include various media that can store program codes, such as a U disk, a mobile hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disk.
- an embodiment of the present application further provides a chip including a processor, configured to implement the functions involved in any one or more of the foregoing embodiments, for example, obtaining or processing the information or messages involved in the foregoing methods.
- the chip may further include a memory, and the memory is configured to store the program instructions and data necessary for the processor.
- the chip may consist of a chip alone, or may include a chip and other discrete devices.
- depending on the embodiment, actions or events of any of the methods described herein may be performed in a different sequence, or may be added, merged, or omitted altogether (for example, not all described actions or events are necessary to practice the method).
- moreover, in some embodiments, actions or events may be performed simultaneously rather than sequentially, for example, through multi-threaded processing, interrupt processing, or multiple processors.
- although, for clarity, specific aspects of the present application are described as being performed by a single module or unit, it should be understood that the techniques of the present application may be performed by a combination of units or modules associated with a video decoder.
- the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over a computer-readable medium as one or more instructions or code, and executed by a hardware-based processing unit.
- the computer-readable medium may include a computer-readable storage medium, which corresponds to a tangible medium such as a data storage medium, or a communication medium, which includes any medium that facilitates transfer of a computer program from one place to another (for example, according to a communication protocol).
- in this manner, a computer-readable medium may generally correspond to (1) a non-transitory tangible computer-readable storage medium, or (2) a communication medium such as a signal or carrier wave.
- data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code, and/or data structures for implementation of the techniques described in this application.
- the computer program product may include a computer-readable medium.
- by way of example, such computer-readable storage media may include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium.
- for example, if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium.
- computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other temporary media, but are instead directed to non-transitory tangible storage media.
- magnetic disks and optical discs, as used herein, include compact discs (CDs), laser discs, optical discs, digital versatile discs (DVDs), floppy disks, and Blu-ray discs, where magnetic disks usually reproduce data magnetically, while optical discs reproduce data optically. Combinations of the above should also be included within the scope of computer-readable media.
- the instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general-purpose microprocessors, application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuits.
- accordingly, the term "processor" as used herein may refer to any of the foregoing structures or any other structure suitable for implementation of the techniques described herein.
- in addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. Also, the techniques may be fully implemented in one or more circuits or logic elements.
- the technology of the present application can be implemented in a wide variety of devices or equipment, including wireless handsets, integrated circuits (ICs), or collections of ICs (eg, chipsets).
- Various components, modules, or units are described in this application to emphasize functional aspects of devices configured to perform the disclosed techniques, but they do not necessarily need to be implemented by different hardware units. Rather, as described above, various units may be combined in a codec hardware unit, or provided by a collection of interoperable hardware units (including one or more processors as described above) in combination with suitable software and/or firmware.
Description
MotionModelIdc[x0][y0] | motion model for motion compensation |
---|---|
0 | translational motion |
1 | 4-parameter affine motion |
2 | 6-parameter affine motion |
Claims (50)
- A video picture prediction method, comprising: parsing a first identifier from a code stream; when the first identifier indicates that candidate modes used for inter prediction of a to-be-processed block include an affine mode, parsing a second identifier from the code stream, wherein the second identifier is used to indicate a maximum length of a first candidate motion vector list, and the first candidate motion vector list is a candidate motion vector list constructed when the to-be-processed block adopts a sub-block fusion prediction mode; and determining the maximum length of the first candidate motion vector list according to the second identifier.
- The method according to claim 1, further comprising, before the determining the maximum length of the first candidate motion vector list according to the second identifier: parsing a third identifier from the code stream, wherein the third identifier is used to indicate a presence status of an advanced temporal motion vector prediction mode in the sub-block fusion prediction mode.
- The method according to claim 2, wherein the sub-block fusion prediction mode consists of at least one of a planar motion vector prediction mode, the advanced temporal motion vector prediction mode, and the affine mode, and when the third identifier indicates that the advanced temporal motion vector prediction mode is absent from the sub-block fusion prediction mode, the determining the maximum length of the first candidate motion vector list according to the second identifier comprises: determining a first quantity value according to the third identifier; and determining the maximum length of the first candidate motion vector list according to the second identifier and the first quantity value.
- The method according to claim 3, further comprising, before the determining the maximum length of the first candidate motion vector list according to the second identifier: parsing a fourth identifier from the code stream, wherein the fourth identifier is used to indicate a presence status of the planar motion vector prediction mode in the sub-block fusion prediction mode.
- The method according to claim 4, wherein when the third identifier indicates that the advanced temporal motion vector prediction mode is present in the sub-block fusion prediction mode and the fourth identifier indicates that the planar motion vector prediction mode is absent from the sub-block fusion prediction mode, the determining the maximum length of the first candidate motion vector list according to the second identifier comprises: determining a second quantity value based on the fourth identifier; and determining the maximum length of the first candidate motion vector list according to the second identifier and the second quantity value.
- The method according to claim 4, wherein when the third identifier indicates that the advanced temporal motion vector prediction mode is absent from the sub-block fusion prediction mode and the fourth identifier indicates that the planar motion vector prediction mode is present in the sub-block fusion prediction mode, the determining the maximum length of the first candidate motion vector list according to the second identifier comprises: determining the maximum length of the first candidate motion vector list according to the second identifier and the first quantity value.
- The method according to claim 5, wherein when the third identifier indicates that the advanced temporal motion vector prediction mode is absent from the sub-block fusion prediction mode, and the fourth identifier indicates that the planar motion vector prediction mode is absent from the sub-block fusion prediction mode, the determining the maximum length of the first candidate motion vector list according to the second identifier comprises: determining the maximum length of the first candidate motion vector list according to the second identifier, the first quantity value, and the second quantity value.
- The method according to any one of claims 1, 2, and 4, wherein the maximum length of the first candidate motion vector list is obtained according to the following formula: MaxNumSubblockMergeCand = K - K_minus_max_num_subblock_merge_cand; wherein MaxNumSubblockMergeCand represents the maximum length of the first candidate motion vector list, K_minus_max_num_subblock_merge_cand represents the second identifier, and K is a preset non-negative integer.
- The method according to claim 3 or 6, wherein the maximum length of the first candidate motion vector list is obtained according to the following formula: MaxNumSubblockMergeCand = K - K_minus_max_num_subblock_merge_cand - L1; wherein MaxNumSubblockMergeCand represents the maximum length of the first candidate motion vector list, K_minus_max_num_subblock_merge_cand represents the second identifier, L1 represents the first quantity value, and K is a preset non-negative integer.
- The method according to claim 5, wherein the maximum length of the first candidate motion vector list is obtained according to the following formula: MaxNumSubblockMergeCand = K - K_minus_max_num_subblock_merge_cand - L2; wherein MaxNumSubblockMergeCand represents the maximum length of the first candidate motion vector list, K_minus_max_num_subblock_merge_cand represents the second identifier, L2 represents the second quantity value, and K is a preset non-negative integer.
- The method according to claim 7, wherein the maximum length of the first candidate motion vector list is obtained according to the following formula: MaxNumSubblockMergeCand = K - K_minus_max_num_subblock_merge_cand - L1 - L2; wherein MaxNumSubblockMergeCand represents the maximum length of the first candidate motion vector list, K_minus_max_num_subblock_merge_cand represents the second identifier, L1 represents the first quantity value, L2 represents the second quantity value, and K is a preset non-negative integer.
- The method according to any one of claims 1 to 11, wherein the parsing a second identifier from the code stream comprises: parsing the second identifier from a sequence parameter set of the code stream, or parsing the second identifier from a slice header of the slice in which the to-be-processed block is located in the code stream.
- The method according to any one of claims 3 to 12, further comprising: when the first identifier indicates that the candidate modes used for inter prediction of the to-be-processed block include only the translational motion vector prediction mode and the third identifier indicates that the advanced temporal motion vector prediction mode is present in the sub-block fusion prediction mode, determining a third quantity value according to the third identifier, and determining the maximum length of the first candidate motion vector list according to the third quantity value.
- The method according to claim 13, wherein when the fourth identifier indicates that the planar motion vector prediction mode is present in the sub-block fusion prediction mode, the determining the maximum length of the first candidate motion vector list according to the first quantity value comprises: determining a fourth quantity value according to the fourth identifier; and determining the maximum length of the first candidate motion vector list according to the first quantity value and the fourth quantity value.
- The method according to any one of claims 5 to 12, wherein when the first identifier indicates that the candidate modes used for inter prediction of the to-be-processed block include only the translational motion vector prediction mode, the third identifier indicates that the advanced temporal motion vector prediction mode is absent from the sub-block fusion prediction mode, and the fourth identifier indicates that the planar motion vector prediction mode is present in the sub-block fusion prediction mode, a fourth quantity value is determined according to the fourth identifier, and the maximum length of the first candidate motion vector list is determined according to the fourth quantity value.
- The method according to any one of claims 3 to 12, further comprising: when the first identifier indicates that the candidate modes used for inter prediction of the to-be-processed block include only the translational motion vector prediction mode and the third identifier indicates that the advanced temporal motion vector prediction mode is absent from the sub-block fusion prediction mode, the maximum length of the first candidate motion vector list is zero.
- The method according to any one of claims 5 to 12, wherein when the first identifier indicates that the candidate modes used for inter prediction of the to-be-processed block include only the translational motion vector prediction mode, the third identifier indicates that the advanced temporal motion vector prediction mode is absent from the sub-block fusion prediction mode, and the fourth identifier indicates that the planar motion vector prediction mode is absent from the sub-block fusion prediction mode, the maximum length of the first candidate motion vector list is zero.
- The method according to claim 13, wherein the maximum length of the first candidate motion vector list is equal to the third quantity value.
- The method according to claim 14, wherein the maximum length of the first candidate motion vector list is equal to a sum of the third quantity value and the fourth quantity value.
- The method according to claim 15, wherein the maximum length of the first candidate motion vector list is equal to the fourth quantity value.
- The method according to any one of claims 3, 6, 7, 9, and 11, wherein the third identifier is a first value, and the first quantity value is 1.
- The method according to any one of claims 5, 7, 10, and 11, wherein the fourth identifier is a third value, and the second quantity value is 1.
- The method according to any one of claims 14, 18, and 19, wherein the third identifier is a second value, and the third quantity value is 1.
- The method according to any one of claims 14, 15, 19, and 20, wherein the fourth identifier is a fourth value, and the fourth quantity value is 1.
- A video picture prediction apparatus, comprising: a parsing unit, configured to parse a first identifier from a code stream and, when the first identifier indicates that candidate modes used for inter prediction of a to-be-processed block include an affine mode, parse a second identifier from the code stream, wherein the second identifier is used to indicate a maximum length of a first candidate motion vector list, and the first candidate motion vector list is a candidate motion vector list constructed when the to-be-processed block adopts a sub-block fusion prediction mode; and a determining unit, configured to determine the maximum length of the first candidate motion vector list according to the second identifier.
- The apparatus according to claim 25, wherein the parsing unit is further configured to parse a third identifier from the code stream before the maximum length of the first candidate motion vector list is determined according to the second identifier, wherein the third identifier is used to indicate a presence status of an advanced temporal motion vector prediction mode in the sub-block fusion prediction mode.
- The apparatus according to claim 26, wherein the sub-block fusion prediction mode consists of at least one of a planar motion vector prediction mode, the advanced temporal motion vector prediction mode, and the affine mode, and when the third identifier indicates that the advanced temporal motion vector prediction mode is absent from the sub-block fusion prediction mode, the determining unit is specifically configured to: determine a first quantity value according to the third identifier; and determine the maximum length of the first candidate motion vector list according to the second identifier and the first quantity value.
- The apparatus according to claim 27, wherein the parsing unit is further configured to parse a fourth identifier from the code stream before the maximum length of the first candidate motion vector list is determined according to the second identifier, wherein the fourth identifier is used to indicate a presence status of the planar motion vector prediction mode in the sub-block fusion prediction mode.
- The apparatus according to claim 28, wherein when the third identifier indicates that the advanced temporal motion vector prediction mode is present in the sub-block fusion prediction mode and the fourth identifier indicates that the planar motion vector prediction mode is absent from the sub-block fusion prediction mode, the determining unit is specifically configured to: determine a second quantity value based on the fourth identifier; and determine the maximum length of the first candidate motion vector list according to the second identifier and the second quantity value.
- The apparatus according to claim 28, wherein when the third identifier indicates that the advanced temporal motion vector prediction mode is absent from the sub-block fusion prediction mode and the fourth identifier indicates that the planar motion vector prediction mode is present in the sub-block fusion prediction mode, the determining unit is specifically configured to: determine the maximum length of the first candidate motion vector list according to the second identifier and the first quantity value.
- The apparatus according to claim 29, wherein when the third identifier indicates that the advanced temporal motion vector prediction mode is absent from the sub-block fusion prediction mode, and the fourth identifier indicates that the planar motion vector prediction mode is absent from the sub-block fusion prediction mode, the determining unit is specifically configured to: determine the maximum length of the first candidate motion vector list according to the second identifier, the first quantity value, and the second quantity value.
- The apparatus according to any one of claims 25, 26, and 28, wherein the maximum length of the first candidate motion vector list is obtained according to the following formula: MaxNumSubblockMergeCand = K - K_minus_max_num_subblock_merge_cand; wherein MaxNumSubblockMergeCand represents the maximum length of the first candidate motion vector list, K_minus_max_num_subblock_merge_cand represents the second identifier, and K is a preset non-negative integer.
- The apparatus according to claim 27 or 30, wherein the maximum length of the first candidate motion vector list is obtained according to the following formula: MaxNumSubblockMergeCand = K - K_minus_max_num_subblock_merge_cand - L1; wherein MaxNumSubblockMergeCand represents the maximum length of the first candidate motion vector list, K_minus_max_num_subblock_merge_cand represents the second identifier, L1 represents the first quantity value, and K is a preset non-negative integer.
- The apparatus according to claim 29, wherein the maximum length of the first candidate motion vector list is obtained according to the following formula: MaxNumSubblockMergeCand = K - K_minus_max_num_subblock_merge_cand - L2; wherein MaxNumSubblockMergeCand represents the maximum length of the first candidate motion vector list, K_minus_max_num_subblock_merge_cand represents the second identifier, L2 represents the second quantity value, and K is a preset non-negative integer.
- The apparatus according to claim 31, wherein the maximum length of the first candidate motion vector list is obtained according to the following formula: MaxNumSubblockMergeCand = K - K_minus_max_num_subblock_merge_cand - L1 - L2; wherein MaxNumSubblockMergeCand represents the maximum length of the first candidate motion vector list, K_minus_max_num_subblock_merge_cand represents the second identifier, L1 represents the first quantity value, L2 represents the second quantity value, and K is a preset non-negative integer.
- The apparatus according to any one of claims 25 to 35, wherein in parsing the second identifier from the code stream, the parsing unit is specifically configured to: parse the second identifier from a sequence parameter set of the code stream, or parse the second identifier from a slice header of the slice in which the to-be-processed block is located in the code stream.
- The apparatus according to any one of claims 27 to 36, wherein the determining unit is further configured to: when the first identifier indicates that the candidate modes used for inter prediction of the to-be-processed block include only the translational motion vector prediction mode and the third identifier indicates that the advanced temporal motion vector prediction mode is present in the sub-block fusion prediction mode, determine a third quantity value according to the third identifier, and determine the maximum length of the first candidate motion vector list according to the third quantity value.
- The apparatus according to claim 37, wherein when the fourth identifier indicates that the planar motion vector prediction mode is present in the sub-block fusion prediction mode, in determining the maximum length of the first candidate motion vector list according to the first quantity value, the determining unit is specifically configured to: determine a fourth quantity value according to the fourth identifier; and determine the maximum length of the first candidate motion vector list according to the first quantity value and the fourth quantity value.
- The apparatus according to any one of claims 29 to 36, wherein the determining unit is further configured to: when the first identifier indicates that the candidate modes used for inter prediction of the to-be-processed block include only the translational motion vector prediction mode, the third identifier indicates that the advanced temporal motion vector prediction mode is absent from the sub-block fusion prediction mode, and the fourth identifier indicates that the planar motion vector prediction mode is present in the sub-block fusion prediction mode, determine a fourth quantity value according to the fourth identifier, and determine the maximum length of the first candidate motion vector list according to the fourth quantity value.
- The apparatus according to any one of claims 27 to 36, wherein when the first identifier indicates that the candidate modes used for inter prediction of the to-be-processed block include only the translational motion vector prediction mode and the third identifier indicates that the advanced temporal motion vector prediction mode is absent from the sub-block fusion prediction mode, the maximum length of the first candidate motion vector list is zero.
- The apparatus according to any one of claims 29 to 36, wherein when the first identifier indicates that the candidate modes used for inter prediction of the to-be-processed block include only the translational motion vector prediction mode, the third identifier indicates that the advanced temporal motion vector prediction mode is absent from the sub-block fusion prediction mode, and the fourth identifier indicates that the planar motion vector prediction mode is absent from the sub-block fusion prediction mode, the maximum length of the first candidate motion vector list is zero.
- The apparatus according to claim 37, wherein the maximum length of the first candidate motion vector list is equal to the third quantity value.
- The apparatus according to claim 38, wherein the maximum length of the first candidate motion vector list is equal to a sum of the third quantity value and the fourth quantity value.
- The apparatus according to claim 39, wherein the maximum length of the first candidate motion vector list is equal to the fourth quantity value.
- The apparatus according to any one of claims 27, 30, 31, 33, and 35, wherein the third identifier is a first value, and the first quantity value is 1.
- The apparatus according to any one of claims 29, 31, 34, and 35, wherein the fourth identifier is a third value, and the second quantity value is 1.
- The apparatus according to any one of claims 38, 42, and 43, wherein the third identifier is a second value, and the third quantity value is 1.
- The apparatus according to any one of claims 38, 39, 43, and 44, wherein the fourth identifier is a fourth value, and the fourth quantity value is 1.
- A decoder, comprising a memory and a processor, wherein the memory is configured to store program instructions, and the processor is configured to invoke and execute the program instructions stored in the memory, to implement the method according to any one of claims 1 to 24.
- A chip, wherein the chip is connected to a memory and is configured to read and execute a software program stored in the memory, to implement the method according to any one of claims 1 to 24.
Priority Applications (18)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
MX2021004843A MX2021004843A (es) | 2018-10-29 | 2019-10-23 | Metodo y aparato de prediccion de imagenes de video. |
CA3113545A CA3113545C (en) | 2018-10-29 | 2019-10-23 | Video picture prediction method and apparatus |
SG11202102652QA SG11202102652QA (en) | 2018-10-29 | 2019-10-23 | Video picture prediction method and apparatus |
AU2019370424A AU2019370424B9 (en) | 2018-10-29 | 2019-10-23 | Video picture prediction method and apparatus |
KR1020217010841A KR20210057138A (ko) | 2018-10-29 | 2019-10-23 | 비디오 이미지 예측 방법 및 장치 |
JP2021521771A JP7352625B2 (ja) | 2018-10-29 | 2019-10-23 | ビデオピクチャ予測方法及び装置 |
BR112021007919-0A BR112021007919A2 (pt) | 2018-10-29 | 2019-10-23 | método e aparelho de predição de imagem de vídeo |
PL19879740.9T PL3852370T3 (pl) | 2018-10-29 | 2019-10-23 | Sposób i aparat do predykcji obrazu wideo |
CN201980017084.0A CN112005551B (zh) | 2018-10-29 | 2019-10-23 | 一种视频图像预测方法及装置 |
EP19879740.9A EP3852370B1 (en) | 2018-10-29 | 2019-10-23 | Video image prediction method and apparatus |
CN202110780573.0A CN115243039B (zh) | 2018-10-29 | 2019-10-23 | 一种视频图像预测方法及装置 |
PH12021550689A PH12021550689A1 (en) | 2018-10-29 | 2021-03-24 | Video picture prediction method and apparatus |
ZA2021/02152A ZA202102152B (en) | 2018-10-29 | 2021-03-30 | Video picture prediction method and apparatus |
US17/242,545 US11438578B2 (en) | 2018-10-29 | 2021-04-28 | Video picture prediction method and apparatus |
AU2021240264A AU2021240264B2 (en) | 2018-10-29 | 2021-09-30 | Video picture prediction method and apparatus |
US17/740,591 US20220279168A1 (en) | 2018-10-29 | 2022-05-10 | Video picture prediction method and apparatus |
JP2023123159A JP7485839B2 (ja) | 2018-10-29 | 2023-07-28 | ビデオピクチャ予測方法及び装置 |
AU2023233136A AU2023233136A1 (en) | 2018-10-29 | 2023-09-21 | Video picture prediction method and apparatus |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811268188.2 | 2018-10-29 | ||
CN201811268188 | 2018-10-29 | ||
CN201811642717.0A CN111107354A (zh) | 2018-10-29 | 2018-12-29 | 一种视频图像预测方法及装置 |
CN201811642717.0 | 2018-12-29 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/242,545 Continuation US11438578B2 (en) | 2018-10-29 | 2021-04-28 | Video picture prediction method and apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020088324A1 true WO2020088324A1 (zh) | 2020-05-07 |
Family
ID=70419856
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/112749 WO2020088324A1 (zh) | 2018-10-29 | 2019-10-23 | 一种视频图像预测方法及装置 |
Country Status (16)
Country | Link |
---|---|
US (2) | US11438578B2 (zh) |
EP (1) | EP3852370B1 (zh) |
JP (2) | JP7352625B2 (zh) |
KR (1) | KR20210057138A (zh) |
CN (3) | CN111107354A (zh) |
AU (3) | AU2019370424B9 (zh) |
BR (1) | BR112021007919A2 (zh) |
CA (1) | CA3113545C (zh) |
HU (1) | HUE065057T2 (zh) |
MX (2) | MX2021004843A (zh) |
PH (1) | PH12021550689A1 (zh) |
PL (1) | PL3852370T3 (zh) |
PT (1) | PT3852370T (zh) |
SG (2) | SG10202110925YA (zh) |
WO (1) | WO2020088324A1 (zh) |
ZA (1) | ZA202102152B (zh) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113287309A (zh) * | 2018-12-27 | 2021-08-20 | Oppo广东移动通信有限公司 | 编码预测方法、装置及计算机存储介质 |
US11405628B2 (en) * | 2020-04-06 | 2022-08-02 | Tencent America LLC | Method and apparatus for video coding |
CN112138394B (zh) * | 2020-10-16 | 2022-05-03 | 腾讯科技(深圳)有限公司 | 图像处理方法、装置、电子设备及计算机可读存储介质 |
CN112541390B (zh) * | 2020-10-30 | 2023-04-25 | 四川天翼网络股份有限公司 | 一种用于考试视频违规分析的抽帧动态调度方法及系统 |
CN115662346B (zh) * | 2022-10-14 | 2024-01-19 | 格兰菲智能科技有限公司 | Demura补偿值压缩方法和系统 |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
SI2636218T1 (sl) * | 2010-11-04 | 2021-12-31 | Ge Video Compression, Llc | Kodiranje slike, ki podpira združevanje blokov in preskakovalni način |
US9247249B2 (en) | 2011-04-20 | 2016-01-26 | Qualcomm Incorporated | Motion vector prediction in video coding |
WO2014107066A1 (ko) | 2013-01-04 | 2014-07-10 | 삼성전자 주식회사 | 위상차를 고려한 영상 업샘플링을 이용하는 스케일러블 비디오 부호화 방법 및 장치, 스케일러블 비디오 복호화 방법 및 장치 |
KR20140121315A (ko) * | 2013-04-04 | 2014-10-15 | 한국전자통신연구원 | 참조 픽처 리스트를 이용한 다 계층 기반의 영상 부호화/복호화 방법 및 그 장치 |
WO2016078511A1 (en) * | 2014-11-18 | 2016-05-26 | Mediatek Inc. | Method of bi-prediction video coding based on motion vectors from uni-prediction and merge candidate |
CN106559669B (zh) | 2015-09-29 | 2018-10-09 | 华为技术有限公司 | 预测图像编解码方法及装置 |
US20190028731A1 (en) * | 2016-01-07 | 2019-01-24 | Mediatek Inc. | Method and apparatus for affine inter prediction for video coding system |
WO2017156705A1 (en) * | 2016-03-15 | 2017-09-21 | Mediatek Inc. | Affine prediction for video coding |
US20180310017A1 (en) * | 2017-04-21 | 2018-10-25 | Mediatek Inc. | Sub-prediction unit temporal motion vector prediction (sub-pu tmvp) for video coding |
- 2018
- 2018-12-29 CN CN201811642717.0A patent/CN111107354A/zh active Pending
- 2019
- 2019-10-23 KR KR1020217010841A patent/KR20210057138A/ko not_active Application Discontinuation
- 2019-10-23 PL PL19879740.9T patent/PL3852370T3/pl unknown
- 2019-10-23 EP EP19879740.9A patent/EP3852370B1/en active Active
- 2019-10-23 SG SG10202110925YA patent/SG10202110925YA/en unknown
- 2019-10-23 SG SG11202102652QA patent/SG11202102652QA/en unknown
- 2019-10-23 CN CN201980017084.0A patent/CN112005551B/zh active Active
- 2019-10-23 CA CA3113545A patent/CA3113545C/en active Active
- 2019-10-23 WO PCT/CN2019/112749 patent/WO2020088324A1/zh active Application Filing
- 2019-10-23 CN CN202110780573.0A patent/CN115243039B/zh active Active
- 2019-10-23 BR BR112021007919-0A patent/BR112021007919A2/pt unknown
- 2019-10-23 AU AU2019370424A patent/AU2019370424B9/en active Active
- 2019-10-23 JP JP2021521771A patent/JP7352625B2/ja active Active
- 2019-10-23 PT PT198797409T patent/PT3852370T/pt unknown
- 2019-10-23 MX MX2021004843A patent/MX2021004843A/es unknown
- 2019-10-23 HU HUE19879740A patent/HUE065057T2/hu unknown
- 2021
- 2021-03-24 PH PH12021550689A patent/PH12021550689A1/en unknown
- 2021-03-30 ZA ZA2021/02152A patent/ZA202102152B/en unknown
- 2021-04-27 MX MX2021011944A patent/MX2021011944A/es unknown
- 2021-04-28 US US17/242,545 patent/US11438578B2/en active Active
- 2021-09-30 AU AU2021240264A patent/AU2021240264B2/en active Active
- 2022
- 2022-05-10 US US17/740,591 patent/US20220279168A1/en active Pending
- 2023
- 2023-07-28 JP JP2023123159A patent/JP7485839B2/ja active Active
- 2023-09-21 AU AU2023233136A patent/AU2023233136A1/en active Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103563384A (zh) * | 2011-05-27 | 2014-02-05 | 松下电器产业株式会社 | 运动图像编码方法、运动图像编码装置、运动图像解码方法、运动图像解码装置、及运动图像编解码装置 |
US20170214932A1 (en) * | 2014-07-18 | 2017-07-27 | Mediatek Singapore Pte. Ltd | Method of Motion Vector Derivation for Video Coding |
CN105163116A (zh) * | 2015-08-29 | 2015-12-16 | 华为技术有限公司 | 图像预测的方法及设备 |
Non-Patent Citations (2)
Title |
---|
HUANG, HAN ET AL.: "Control-Point Representation and Differential Coding Affine-Motion Compensation", IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, vol. 23, no. 10, 1 October 2013 (2013-10-01), pages 1651 - 1660, XP055548912, ISSN: 1051-8215, DOI: 10.1109/TCSVT.2013.2254977 * |
VIVIENNE SZE ET AL.: "High Efficiency Video Coding (HEVC)- Algorithms and Architectures", INTEGRATED CIRCUITS AND SYSTEMS, 31 December 2014 (2014-12-31), pages 209 - 269, XP055263413, ISSN: 1558-9412, DOI: 20200107142819Y * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2022507590A (ja) * | 2018-11-22 | 2022-01-18 | 北京字節跳動網絡技術有限公司 | サブブロックに基づくインター予測のための調整方法 |
US11632541B2 (en) | 2018-11-22 | 2023-04-18 | Beijing Bytedance Network Technology Co., Ltd. | Using collocated blocks in sub-block temporal motion vector prediction mode |
US11671587B2 (en) | 2018-11-22 | 2023-06-06 | Beijing Bytedance Network Technology Co., Ltd | Coordination method for sub-block based inter prediction |
JP7319365B2 (ja) | 2018-11-22 | 2023-08-01 | 北京字節跳動網絡技術有限公司 | サブブロックに基づくインター予測のための調整方法 |
US11871025B2 (en) | 2019-08-13 | 2024-01-09 | Beijing Bytedance Network Technology Co., Ltd | Motion precision in sub-block based inter prediction |
US11695946B2 (en) | 2019-09-22 | 2023-07-04 | Beijing Bytedance Network Technology Co., Ltd | Reference picture resampling in video processing |
US20220368891A1 (en) * | 2020-01-12 | 2022-11-17 | Lg Electronics Inc. | Image encoding/decoding method and apparatus, and method of transmitting bitstream using sequence parameter set including information on maximum number of merge candidates |
Legal Events

| Date | Code | Title | Description |
| ---|---|---|--- |
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19879740; Country of ref document: EP; Kind code of ref document: A1 |
| | ENP | Entry into the national phase | Ref document number: 3113545; Country of ref document: CA |
| | ENP | Entry into the national phase | Ref document number: 20217010841; Country of ref document: KR; Kind code of ref document: A |
| | ENP | Entry into the national phase | Ref document number: 2021521771; Country of ref document: JP; Kind code of ref document: A |
| | ENP | Entry into the national phase | Ref document number: 2019879740; Country of ref document: EP; Effective date: 20210415 |
| | WWE | Wipo information: entry into national phase | Ref document number: 122021019452; Country of ref document: BR |
| | ENP | Entry into the national phase | Ref document number: 2019370424; Country of ref document: AU; Date of ref document: 20191023; Kind code of ref document: A |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | REG | Reference to national code | Ref country code: BR; Ref legal event code: B01A; Ref document number: 112021007919; Country of ref document: BR |
| | ENP | Entry into the national phase | Ref document number: 112021007919; Country of ref document: BR; Kind code of ref document: A2; Effective date: 20210426 |