JPWO2020060327A5 - - Google Patents
- Publication number
- JPWO2020060327A5 (application JP2021515161A)
- Authority
- JP
- Japan
- Prior art keywords
- block
- seed vector
- affine seed
- affine
- current block
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Claims (15)
A video decoding method comprising:
generating a merge candidate list for a current block;
specifying any one of a plurality of merge candidates included in the merge candidate list;
deriving a first affine seed vector and a second affine seed vector of the current block based on a first affine seed vector and a second affine seed vector of the specified merge candidate;
deriving an affine vector of a sub-block in the current block by using the first affine seed vector and the second affine seed vector of the current block, wherein the sub-block is a region of a size smaller than the size of the current block; and
performing motion-compensated prediction for the sub-block based on the affine vector,
wherein the first affine seed vector and the second affine seed vector of the merge candidate are derived based on motion information of a neighboring block adjacent to the current block.
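The decoding steps above follow the common 4-parameter affine motion model: two seed vectors at the upper-left and upper-right control points determine a motion vector for every sub-block. A minimal sketch of that derivation, assuming the 4-parameter model and floating-point arithmetic (function and parameter names are illustrative, not taken from the patent):

```python
def subblock_affine_vector(v0, v1, block_w, cx, cy):
    """Derive the affine motion vector at sub-block position (cx, cy).

    v0: first affine seed vector at the upper-left control point, (v0x, v0y)
    v1: second affine seed vector at the upper-right control point, (v1x, v1y)
    block_w: width of the current block (distance between the two control points)
    """
    v0x, v0y = v0
    v1x, v1y = v1
    # 4-parameter model: one rotation/zoom coefficient pair is derived
    # from the difference between the two seed vectors.
    ax = (v1x - v0x) / block_w
    ay = (v1y - v0y) / block_w
    return (v0x + ax * cx - ay * cy,
            v0y + ay * cx + ax * cy)
```

With identical seed vectors the model degenerates to pure translation and every sub-block receives the same motion vector; motion-compensated prediction is then performed per sub-block with the derived vector.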
The video decoding method according to claim 1, wherein, when the neighboring block is included in a coding tree unit different from the coding tree unit of the current block, the first affine seed vector and the second affine seed vector of the merge candidate are derived based on motion vectors of a lower-left sub-block and a lower-right sub-block of the neighboring block.
The video decoding method according to claim 2, wherein the lower-left sub-block includes a lower-left reference sample located at a lower-left corner of the neighboring block, and the lower-right sub-block includes a lower-right reference sample located at a lower-right corner of the neighboring block.
The video decoding method according to claim 3, wherein the first affine seed vector and the second affine seed vector of the merge candidate are derived based on values obtained by applying a shift operation, using a scale factor, to a difference between the motion vectors of the lower-left sub-block and the lower-right sub-block, and wherein the scale factor is derived based on a value obtained by adding an offset to a horizontal distance between the lower-left reference sample and the lower-right reference sample.
The video decoding method according to claim 3, wherein the first affine seed vector and the second affine seed vector of the merge candidate are derived based on values obtained by applying a shift operation, using a scale factor, to a difference between the motion vectors of the lower-left sub-block and the lower-right sub-block, and wherein the scale factor is derived based on a distance between the lower-left reference sample and a neighboring sample adjacent to the right side of the lower-right reference sample.
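The two scale-factor claims above replace the division by the control-point distance with a bit shift: adding an offset to the horizontal distance between the reference samples (or measuring to the sample just right of the lower-right reference sample) turns the divisor into a power of two. A hedged sketch of that idea, where the offset value and the fixed-point precision are illustrative assumptions rather than the patent's normative derivation:

```python
def scaled_mv_difference(mv_lb, mv_rb, sample_dist, offset=4):
    """Scale the motion-vector difference between the lower-left and
    lower-right sub-blocks with a shift instead of a division.

    sample_dist: horizontal distance between the two reference samples.
    offset: chosen so that (sample_dist + offset) is a power of two
            (offset=4 here is an illustrative assumption).
    """
    dist = sample_dist + offset
    shift = dist.bit_length() - 1          # log2 of the power-of-two distance
    if (1 << shift) != dist:
        raise ValueError("sample_dist + offset must be a power of two")
    # Lift the difference to a higher fixed-point precision (7 bits, an
    # illustrative choice), then divide by the distance via the shift.
    dvx = ((mv_rb[0] - mv_lb[0]) << 7) >> shift
    dvy = ((mv_rb[1] - mv_lb[1]) << 7) >> shift
    return dvx, dvy
```

The point of the offset is purely arithmetical: a power-of-two divisor lets hardware and reference software avoid an integer division in the per-block derivation path.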
The video decoding method according to claim 1, wherein the merge candidate list includes a first merge candidate and a second merge candidate, the first merge candidate being derived based on an upper neighboring block determined to be the first available block among upper neighboring blocks located above the current block, and the second merge candidate being derived based on a left neighboring block determined to be the first available block among left neighboring blocks located to the left of the current block; and/or
wherein, when the neighboring block is included in the same coding tree unit as the coding tree unit of the current block, the first affine seed vector and the second affine seed vector of the merge candidate are derived based on a first affine seed vector and a second affine seed vector of the neighboring block.
The video decoding method according to claim 1, wherein deriving the first affine seed vector and the second affine seed vector of the current block comprises using a third affine seed vector associated with a lower-left control point of an affine neighboring block and a fourth affine seed vector associated with a lower-right control point of the affine neighboring block, wherein:
(x_n2, y_n2) represents the coordinates of the lower-left control point of the affine neighboring block, and (x_n3, y_n3) represents the coordinates of the lower-right control point of the affine neighboring block;
(x_0, y_0) represents the coordinates of the upper-left control point of the current block, and (x_1, y_1) represents the coordinates of the upper-right control point of the current block;
(nv_2x, nv_2y) represents the third affine seed vector of the lower-left control point of the affine neighboring block, and (nv_3x, nv_3y) represents the fourth affine seed vector of the lower-right control point of the affine neighboring block; and
(v_0x, v_0y) represents the first affine seed vector of the upper-left control point of the current block, and (v_1x, v_1y) represents the second affine seed vector of the upper-right control point of the current block.
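The coordinate definitions above support extrapolating the current block's first and second seed vectors from a neighboring affine block's bottom control points. The claim's equations are not reproduced in this extract, so the sketch below uses the standard 4-parameter extrapolation as an assumption (all names are illustrative):

```python
def derive_seed_vectors(n2, n3, nv2, nv3, p0, p1):
    """Extrapolate (v_0x, v_0y) at p0 and (v_1x, v_1y) at p1 of the
    current block from the third seed vector nv2 at the neighboring
    block's lower-left control point n2 and the fourth seed vector nv3
    at its lower-right control point n3 (4-parameter affine model)."""
    w = n3[0] - n2[0]              # horizontal control-point distance
    ax = (nv3[0] - nv2[0]) / w     # horizontal gradient of vx
    ay = (nv3[1] - nv2[1]) / w     # horizontal gradient of vy

    def at(p):
        dx, dy = p[0] - n2[0], p[1] - n2[1]
        return (nv2[0] + ax * dx - ay * dy,
                nv2[1] + ay * dx + ax * dy)

    return at(p0), at(p1)
```

Using the bottom control points matters at coding-tree-unit boundaries: only the bottom row of the above-CTU neighbor is kept in the line buffer, so extrapolating upward from n2 and n3 avoids storing the neighbor's top control-point vectors.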
A video encoding method comprising:
generating a merge candidate list for a current block;
specifying any one of a plurality of merge candidates included in the merge candidate list;
deriving a first affine seed vector and a second affine seed vector of the current block based on a first affine seed vector and a second affine seed vector of the specified merge candidate;
deriving an affine vector of a sub-block in the current block by using the first affine seed vector and the second affine seed vector of the current block, wherein the sub-block is a region of a size smaller than the size of the current block; and
performing motion-compensated prediction for the sub-block based on the affine vector,
wherein the first affine seed vector and the second affine seed vector of the merge candidate are derived based on motion information of a neighboring block adjacent to the current block.
The video encoding method according to claim 8, wherein, when the neighboring block is included in a coding tree unit different from the coding tree unit of the current block, the first affine seed vector and the second affine seed vector of the merge candidate are derived based on motion vectors of a lower-left sub-block and a lower-right sub-block of the neighboring block.
The video encoding method according to claim 9, wherein the lower-left sub-block includes a lower-left reference sample located at a lower-left corner of the neighboring block, and the lower-right sub-block includes a lower-right reference sample located at a lower-right corner of the neighboring block.
The video encoding method according to claim 10, wherein the first affine seed vector and the second affine seed vector of the merge candidate are derived based on values obtained by applying a shift operation, using a scale factor, to a difference between the motion vectors of the lower-left sub-block and the lower-right sub-block, and wherein the scale factor is derived based on a value obtained by adding an offset to a horizontal distance between the lower-left reference sample and the lower-right reference sample, or based on a distance between the lower-left reference sample and a neighboring sample adjacent to the right side of the lower-right reference sample.
The video encoding method according to claim 8, wherein the merge candidate list includes a first merge candidate and a second merge candidate, the first merge candidate being derived based on an upper neighboring block determined to be the first available block among upper neighboring blocks located above the current block, and the second merge candidate being derived based on a left neighboring block determined to be the first available block among left neighboring blocks located to the left of the current block; and/or
wherein, when the neighboring block is included in the same coding tree unit as the coding tree unit of the current block, the first affine seed vector and the second affine seed vector of the merge candidate are derived based on a first affine seed vector and a second affine seed vector of the neighboring block.
The video encoding method according to claim 8, wherein deriving the first affine seed vector and the second affine seed vector of the current block comprises using a third affine seed vector associated with a lower-left control point of an affine neighboring block and a fourth affine seed vector associated with a lower-right control point of the affine neighboring block, wherein:
(x_n2, y_n2) represents the coordinates of the lower-left control point of the affine neighboring block, and (x_n3, y_n3) represents the coordinates of the lower-right control point of the affine neighboring block;
(x_0, y_0) represents the coordinates of the upper-left control point of the current block, and (x_1, y_1) represents the coordinates of the upper-right control point of the current block;
(nv_2x, nv_2y) represents the third affine seed vector of the lower-left control point of the affine neighboring block, and (nv_3x, nv_3y) represents the fourth affine seed vector of the lower-right control point of the affine neighboring block; and
(v_0x, v_0y) represents the first affine seed vector of the upper-left control point of the current block, and (v_1x, v_1y) represents the second affine seed vector of the upper-right control point of the current block.
Applications Claiming Priority (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR20180114343 | 2018-09-21 | ||
KR10-2018-0114342 | 2018-09-21 | ||
KR10-2018-0114343 | 2018-09-21 | ||
KR20180114342 | 2018-09-21 | ||
KR20180114344 | 2018-09-21 | ||
KR10-2018-0114344 | 2018-09-21 | ||
PCT/KR2019/012290 WO2020060327A1 (en) | 2018-09-21 | 2019-09-20 | Image signal encoding/decoding method and device therefor |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP2024029968A Division JP2024063128A (en) | 2018-09-21 | 2024-02-29 | Video signal encoding/decoding method and device therefor |
Publications (3)
Publication Number | Publication Date |
---|---|
JP2022502899A (en) | 2022-01-11 |
JPWO2020060327A5 (en) | 2022-09-02 |
JP7449279B2 (en) | 2024-03-13 |
Family
ID=69887728
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP2021515161A Active JP7449279B2 (en) | 2018-09-21 | 2019-09-20 | Video signal encoding/decoding method and equipment therefor |
Country Status (17)
Country | Link |
---|---|
US (3) | US11223845B2 (en) |
EP (5) | EP4224849A1 (en) |
JP (1) | JP7449279B2 (en) |
KR (1) | KR20200034644A (en) |
CN (2) | CN113225560B (en) |
AU (1) | AU2019344978A1 (en) |
BR (1) | BR112021004922A2 (en) |
CA (1) | CA3113054A1 (en) |
CL (1) | CL2021000671A1 (en) |
ES (1) | ES2955040T3 (en) |
IL (2) | IL304953A (en) |
MX (1) | MX2021003325A (en) |
PH (1) | PH12021550627A1 (en) |
PL (1) | PL3840376T3 (en) |
SG (1) | SG11202102788YA (en) |
WO (1) | WO2020060327A1 (en) |
ZA (1) | ZA202101846B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112189342A (en) * | 2018-05-24 | 2021-01-05 | 株式会社Kt | Method and apparatus for processing video signal |
CN112400318B (en) * | 2018-07-11 | 2024-04-12 | 华为技术有限公司 | Video encoder, video decoder and corresponding methods |
SG11202102857QA (en) | 2018-09-21 | 2021-04-29 | Guangdong Oppo Mobile Telecommunications Corp Ltd | Method for encoding/decoding image signal and apparatus therefor |
EP3893509A4 (en) | 2018-12-07 | 2022-10-26 | Samsung Electronics Co., Ltd. | Video decoding method and device, and video encoding method and device |
AU2018454766A1 (en) | 2018-12-25 | 2021-04-08 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Decoding prediction method and apparatus, and computer storage medium |
KR102619997B1 (en) * | 2019-01-02 | 2024-01-02 | 애플 인크. | Method for encodign/decodign video signal and apparatus therefor |
WO2023220970A1 (en) * | 2022-05-18 | 2023-11-23 | Oppo广东移动通信有限公司 | Video coding method and apparatus, and device, system and storage medium |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7730406B2 (en) * | 2004-10-20 | 2010-06-01 | Hewlett-Packard Development Company, L.P. | Image processing system and method |
JP2011217044A (en) * | 2010-03-31 | 2011-10-27 | Sony Corp | Image processing apparatus, image processing method, and image processing program |
LT3197162T (en) | 2011-12-15 | 2020-05-11 | Tagivan Ii Llc | Signaling of luminance-chrominance coded block flags (cbf) in video coding |
CN107809642B (en) * | 2015-02-16 | 2020-06-16 | 华为技术有限公司 | Method for encoding and decoding video image, encoding device and decoding device |
WO2017118409A1 (en) * | 2016-01-07 | 2017-07-13 | Mediatek Inc. | Method and apparatus for affine merge mode prediction for video coding system |
US10798403B2 (en) | 2016-01-29 | 2020-10-06 | Sharp Kabushiki Kaisha | Prediction image generation device, video decoding device, and video coding device |
WO2018061563A1 (en) * | 2016-09-27 | 2018-04-05 | シャープ株式会社 | Affine motion vector derivation device, prediction image generation device, moving image decoding device, and moving image coding device |
US10448010B2 (en) | 2016-10-05 | 2019-10-15 | Qualcomm Incorporated | Motion vector prediction for affine motion models in video coding |
US20190273943A1 (en) * | 2016-10-10 | 2019-09-05 | Sharp Kabushiki Kaisha | Systems and methods for performing motion compensation for coding of video data |
WO2018070897A1 (en) * | 2016-10-14 | 2018-04-19 | Huawei Technologies Co., Ltd. | Devices and methods for video coding |
US10555006B2 (en) * | 2016-12-22 | 2020-02-04 | Qualcomm Incorporated | Deriving bilateral filter information based on a prediction mode in video coding |
US10681370B2 (en) * | 2016-12-29 | 2020-06-09 | Qualcomm Incorporated | Motion vector generation for affine motion model for video coding |
WO2018135885A1 (en) | 2017-01-19 | 2018-07-26 | 가온미디어 주식회사 | Image decoding and encoding method providing transformation processing |
KR20180085526A (en) * | 2017-01-19 | 2018-07-27 | 가온미디어 주식회사 | A method for encoding and decoding video using a processing of an efficent transform |
KR20180098158A (en) * | 2017-02-24 | 2018-09-03 | 주식회사 케이티 | Method and apparatus for processing a video signal |
US10701390B2 (en) * | 2017-03-14 | 2020-06-30 | Qualcomm Incorporated | Affine motion information derivation |
US11877001B2 (en) * | 2017-10-10 | 2024-01-16 | Qualcomm Incorporated | Affine prediction in video coding |
BR112020025916A2 (en) | 2018-06-30 | 2021-03-16 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | METHOD FOR DECODING A VIDEO SIGNAL |
KR20200028856A (en) | 2018-09-07 | 2020-03-17 | 김기백 | A method and an apparatus for encoding/decoding video using intra prediction |
-
2019
- 2019-09-20 CA CA3113054A patent/CA3113054A1/en active Pending
- 2019-09-20 PL PL19861535.3T patent/PL3840376T3/en unknown
- 2019-09-20 CN CN202110443353.9A patent/CN113225560B/en active Active
- 2019-09-20 EP EP23178805.0A patent/EP4224849A1/en active Pending
- 2019-09-20 JP JP2021515161A patent/JP7449279B2/en active Active
- 2019-09-20 ES ES19861535T patent/ES2955040T3/en active Active
- 2019-09-20 EP EP23178807.6A patent/EP4224851A1/en active Pending
- 2019-09-20 BR BR112021004922-3A patent/BR112021004922A2/en unknown
- 2019-09-20 IL IL304953A patent/IL304953A/en unknown
- 2019-09-20 CN CN201980054695.2A patent/CN112840647A/en active Pending
- 2019-09-20 EP EP19861535.3A patent/EP3840376B1/en active Active
- 2019-09-20 WO PCT/KR2019/012290 patent/WO2020060327A1/en unknown
- 2019-09-20 AU AU2019344978A patent/AU2019344978A1/en active Pending
- 2019-09-20 EP EP23178535.3A patent/EP4224848A1/en active Pending
- 2019-09-20 MX MX2021003325A patent/MX2021003325A/en unknown
- 2019-09-20 IL IL281622A patent/IL281622B2/en unknown
- 2019-09-20 KR KR1020190116383A patent/KR20200034644A/en unknown
- 2019-09-20 EP EP23178806.8A patent/EP4224850A1/en active Pending
- 2019-09-20 SG SG11202102788YA patent/SG11202102788YA/en unknown
-
2021
- 2021-03-17 US US17/204,306 patent/US11223845B2/en active Active
- 2021-03-18 CL CL2021000671A patent/CL2021000671A1/en unknown
- 2021-03-18 PH PH12021550627A patent/PH12021550627A1/en unknown
- 2021-03-18 ZA ZA2021/01846A patent/ZA202101846B/en unknown
- 2021-12-03 US US17/541,922 patent/US11758176B2/en active Active
-
2023
- 2023-07-10 US US18/219,824 patent/US20230421799A1/en active Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
TWI670966B (en) | Method and apparatus of adaptive bi-prediction for video coding | |
TWI478586B (en) | Motion vector predictive coding method, motion vector predictive decoding method, moving picture coding apparatus, moving picture decoding apparatus, and programs thereof | |
TW201811031A (en) | Method and apparatus for template-based intra prediction in image and video coding | |
ES2901803T3 (en) | Predictive coding method for motion vector, predictive decoding method for motion vector, image coding device, image decoding device, and programs therefor | |
CN110796662B (en) | Real-time semantic video segmentation method | |
JPWO2010064396A1 (en) | Video decoding method and video encoding method | |
KR20110048044A (en) | Propagation map | |
KR20110107827A (en) | Multiple-candidate motion estimation with advanced spatial filtering of differential motion vectors | |
TW201210349A (en) | Motion vector predictive coding method, motion vector predictive decoding method, moving picture coding apparatus, moving picture decoding apparatus, and programs thereof | |
CN101682777A (en) | Image prediction encoding device, image prediction decoding device, image prediction encoding method, image prediction decoding method, image prediction encoding program, and image prediction decoding | |
JP6154643B2 (en) | Moving picture coding apparatus, depth intra prediction method and program for moving picture coding apparatus, moving picture decoding apparatus, depth intra prediction method and program for moving picture decoding apparatus | |
FI3922025T3 (en) | Systems, apparatus and methods for inter prediction refinement with optical flow | |
JP2013110766A5 (en) | ||
JPWO2019192170A5 (en) | MOVING IMAGE PROCESSING METHOD, MOVING IMAGE PROCESSING DEVICE, AND COMPUTER PROGRAM | |
JP2010154490A5 (en) | ||
JP2023029596A5 (en) | ||
JPWO2020143741A5 (en) | ||
JPWO2020060327A5 (en) | ||
JPWO2020047132A5 (en) | ||
CN112203091B (en) | Motion vector prediction method, system and computer medium based on quadratic polynomial | |
WO2009125475A1 (en) | Video encoding method, video encoding equipment, video encoding program and its recording medium | |
JP2010278519A (en) | Motion vector detector | |
JP2009212667A (en) | Moving image encoding apparatus and decoding apparatus | |
JP2020074624A5 (en) | ||
JPWO2020060329A5 (en) |