WO2020122224A1 - Image decoding device, image decoding method, and image decoding program - Google Patents
Image decoding device, image decoding method, and image decoding program
- Publication number
- WO2020122224A1 (PCT/JP2019/048855)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- merge candidate
- prediction
- merge
- motion vector
- correction
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
- H04N19/137—Motion inside a coding unit, e.g. average field, frame or block difference
- H04N19/139—Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
- H04N19/159—Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
- H04N19/176—Adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object, the region being a block, e.g. a macroblock
- H04N19/30—Methods or arrangements using hierarchical techniques, e.g. scalability
- H04N19/44—Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
- H04N19/52—Processing of motion vectors by encoding by predictive encoding
- H04N19/573—Motion compensation with multiple frame prediction using two or more reference frames in a given prediction direction
- H04N19/577—Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
- H04N19/58—Motion compensation with long-term prediction, i.e. the reference frame for a current frame not being the temporally closest one
- H04N19/70—Methods or arrangements characterised by syntax aspects related to video coding, e.g. related to compression standards
Definitions
- the present invention relates to image decoding technology.
- HEVC (High Efficiency Video Coding) is a representative standard for video coding.
- the inter prediction mode has a merge mode and a differential motion vector mode, but the inventor has come to recognize that there is room to further improve the coding efficiency by correcting the motion vector in the merge mode.
- the present invention has been made in view of such a situation, and an object thereof is to provide a new inter prediction mode with higher efficiency by correcting the motion vector in the merge mode.
- According to one aspect, an image decoding device includes: a merge candidate list generation unit that generates a merge candidate list including, as merge candidates, motion information of a plurality of blocks adjacent to a prediction target block; a merge candidate selection unit that selects a merge candidate from the merge candidate list as a selected merge candidate; a code string decoding unit that decodes a code string from an encoded stream to derive a correction vector; and a merge candidate correction unit that adds the correction vector to the motion vector of the first prediction of the selected merge candidate without scaling, and subtracts the correction vector from the motion vector of the second prediction of the selected merge candidate without scaling, to derive a corrected merge candidate.
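The correction described in the claim above can be sketched as follows. This is a minimal illustration, assuming motion vectors are integer (x, y) tuples and treating L0 as the first prediction and L1 as the second; the function name and data layout are illustrative, not taken from the patent.

```python
def correct_merge_candidate(mv_l0, mv_l1, mvd):
    """Apply one decoded correction vector to both predictions of a
    bidirectional merge candidate, component-wise and without scaling:
    added to the L0 (first) prediction, subtracted from the L1 (second)."""
    corrected_l0 = (mv_l0[0] + mvd[0], mv_l0[1] + mvd[1])
    corrected_l1 = (mv_l1[0] - mvd[0], mv_l1[1] - mvd[1])
    return corrected_l0, corrected_l1

# When L0 and L1 point in opposite temporal directions, a single decoded
# correction vector moves both predictions consistently.
print(correct_merge_candidate((4, -2), (-4, 2), (1, 1)))  # -> ((5, -1), (-5, 1))
```

Because one correction vector serves both predictions, only a single differential vector needs to be decoded from the stream, which appears to be the source of the claimed efficiency gain.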
- FIG. 1A is a diagram illustrating a configuration of an image encoding device 100 according to the first embodiment
- FIG. 1B is a diagram illustrating the configuration of the image decoding device 200 according to the first embodiment. FIG. 2 is a diagram showing an example in which the input image is divided into blocks.
- FIG. 9 is a flowchart illustrating the operation of the merge mode according to the second embodiment. Further figures show part of the syntax of a block in the merge mode of the second embodiment, the syntax of the differential motion vector of the second embodiment, and the effect of Modification 8 of the first embodiment.
- FIG. 16 is a diagram explaining the effect when the picture intervals in Modification 8 of the first embodiment are not uniform. A further figure illustrates an example hardware configuration of the encoding/decoding device of the first embodiment.
- FIG. 1A is a diagram illustrating the configuration of the image encoding device 100 according to the first embodiment.
- FIG. 1B is a diagram illustrating the configuration of the image decoding device 200 according to the first embodiment.
- The image encoding device 100 includes a block size determination unit 110, an inter prediction unit 120, a conversion unit 130, a code string generation unit 140, a local decoding unit 150, and a frame memory 160.
- the image encoding device 100 receives an input image, performs intra prediction and inter prediction, and outputs an encoded stream.
- In this description, the terms "image" and "picture" are used interchangeably.
- the image decoding device 200 includes a code string decoding unit 210, an inter prediction unit 220, an inverse conversion unit 230, and a frame memory 240.
- the image decoding device 200 receives the encoded stream output from the image encoding device 100, performs intra prediction and inter prediction, and outputs a decoded image.
- the image encoding device 100 and the image decoding device 200 are realized by hardware such as an information processing device including a CPU (Central Processing Unit) and a memory.
- the block size determination unit 110 determines the block size for inter prediction based on the input image, and supplies the determined block size, the position of the block, and the input pixel (input value) corresponding to the block size to the inter prediction unit 120.
- As a method for determining the block size, the RDO (rate-distortion optimization) method used in the HEVC reference software is used.
- FIG. 2 shows an example in which a partial area of an image input to the image encoding device 100 is divided into blocks based on the block size determined by the block size determination unit 110.
- the input image is divided into any of the above block sizes so that the blocks do not overlap.
- the inter prediction unit 120 uses information input from the block size determination unit 110 and reference pictures input from the frame memory 160 to determine inter prediction parameters used for inter prediction.
- the inter prediction unit 120 inter-predicts based on the inter prediction parameter to derive a prediction value, and supplies the block size, block position, input value, inter prediction parameter and prediction value to the conversion unit 130.
- the RDO (rate distortion optimization) method used in the reference software of HEVC is used as a method of determining the inter prediction parameter. Details of the inter prediction parameter and the operation of the inter prediction unit 120 will be described later.
- the conversion unit 130 subtracts the prediction value from the input value to calculate a difference value, performs processing such as orthogonal transformation and quantization on the calculated difference value to calculate prediction error data, and calculates the block size, the block position,
- the inter prediction parameter and the prediction error data are supplied to the code string generation unit 140 and the local decoding unit 150.
- The code string generation unit 140 encodes, as necessary, an SPS (Sequence Parameter Set), a PPS (Picture Parameter Set), and other information; encodes a code string for determining the block size supplied from the conversion unit 130; encodes the inter prediction parameter as a code string; encodes the prediction error data as a code string; and outputs an encoded stream. Details of the encoding of the inter prediction parameter will be described later.
- the local decoding unit 150 performs processing such as inverse orthogonal transformation and inverse quantization on the prediction error data to restore the difference value, adds the difference value and the prediction value to generate a decoded image, and decodes the decoded image and the inter prediction parameter. Are supplied to the frame memory 160.
- the frame memory 160 stores a decoded image and an inter prediction parameter for a plurality of images, and supplies the decoded image and the inter prediction parameter to the inter prediction unit 120.
- Intra prediction is performed similarly to HEVC, and inter prediction will be described below.
- the code string decoding unit 210 decodes the SPS, the PPS header and other information from the encoded stream as needed, and decodes the block size, the position of the block, the inter prediction parameter, and the prediction error data from the encoded stream, The block size, the block position, the inter prediction parameter, and the prediction error data are supplied to the inter prediction unit 220.
- the inter prediction unit 220 uses the information input from the code string decoding unit 210 and the reference picture input from the frame memory 240 to inter-predict and derive a prediction value.
- the inter prediction unit 220 supplies the block size, the position of the block, the inter prediction parameter, the prediction error data, and the prediction value to the inverse transform unit 230.
- the inverse transform unit 230 performs a process such as inverse orthogonal transform and inverse quantization on the prediction error data supplied from the inter prediction unit 220 to calculate a difference value, and adds the difference value and the predicted value to generate a decoded image. Then, the decoded image and the inter prediction parameter are supplied to the frame memory 240, and the decoded image is output.
- the frame memory 240 stores the decoded image and the inter prediction parameter for a plurality of images, and supplies the decoded image and the inter prediction parameter to the inter prediction unit 220.
- the inter prediction performed by the inter prediction unit 120 and the inter prediction unit 220 is the same operation, and the decoded image and the inter prediction parameter stored in the frame memory 160 and the frame memory 240 are also the same.
- the inter prediction parameter includes a merge flag, a merge index, a valid flag of the prediction LX, a motion vector of the prediction LX, a reference picture index of the prediction LX, a merge correction flag, and a difference motion vector of the prediction LX.
- Here, LX stands for L0 or L1.
- the merge flag is a flag indicating whether to use the merge mode or the difference motion vector mode as the inter prediction mode. If the merge flag is 1, the merge mode is used, and if the merge flag is 0, the differential motion vector mode is used.
- the merge index is an index indicating the position of the selected merge candidate in the merge candidate list.
- the valid flag of the prediction LX is a flag indicating whether the prediction LX is valid or invalid.
- the merge correction flag is a flag indicating whether or not the motion information of the merge candidate is corrected. If the merge correction flag is 1, the merge candidate is corrected, and if the merge correction flag is 0, the merge candidate is not corrected.
- the code string generation unit 140 does not code the valid flag of the prediction LX as a code string in the coded stream. Further, the code string decoding unit 210 does not decode the valid flag of the prediction LX as a code string from the encoded stream.
- the reference picture index is an index for specifying the decoded image in the frame memory 160. Also, a combination of a valid flag for L0 prediction, a valid flag for L1 prediction, a motion vector for L0 prediction, a motion vector for L1 prediction, a reference picture index for L0 prediction, and a reference picture index for L1 prediction is defined as motion information.
- both the valid flag for L0 prediction and the valid flag for L1 prediction are invalid.
- In the following, the picture type is described as a B picture, which can use unidirectional prediction with L0 prediction, unidirectional prediction with L1 prediction, and bidirectional prediction.
- The picture type may instead be a P picture, which can use only unidirectional prediction.
- For a P picture, only the L0 prediction is a target of the inter prediction parameter, and the L1 prediction does not exist.
- It is assumed that the reference picture for L0 prediction is temporally earlier than the prediction target picture, and the reference picture for L1 prediction is temporally later than the prediction target picture. This is because coding efficiency is improved by interpolative prediction when the L0 prediction reference picture and the L1 prediction reference picture are in opposite directions from the prediction target picture.
- Whether or not the L0 prediction reference picture and the L1 prediction reference picture are in the opposite directions from the prediction target picture can be determined by comparing the POC (Picture Order Count) of the reference picture.
- In the prediction target block, it is assumed that the L0 prediction reference picture and the L1 prediction reference picture are on temporally opposite sides of the prediction target picture.
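The opposite-direction condition above can be checked with a simple POC comparison, sketched below; the function name and signature are illustrative, not text from the patent.

```python
def opposite_directions(poc_cur, poc_l0_ref, poc_l1_ref):
    """True when the L0 and L1 reference pictures lie on opposite sides
    of the prediction target picture in output order (POC)."""
    return (poc_l0_ref - poc_cur) * (poc_l1_ref - poc_cur) < 0

print(opposite_directions(8, 4, 12))  # one past, one future -> True
print(opposite_directions(8, 4, 6))   # both past -> False
```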
- Next, details of the inter prediction unit 120 will be described. Unless otherwise specified, the configuration and operation of the inter prediction unit 120 of the image encoding device 100 and the inter prediction unit 220 of the image decoding device 200 are the same.
- FIG. 3 is a diagram illustrating the configuration of the inter prediction unit 120.
- The inter prediction unit 120 includes a merge mode determination unit 121, a merge candidate list generation unit 122, a merge candidate selection unit 123, a merge candidate correction determination unit 124, a merge candidate correction unit 125, a differential motion vector mode implementation unit 126, and a prediction value derivation unit 127.
- the inter prediction unit 120 switches the inter prediction mode between the merge mode and the differential motion vector mode for each block.
- the differential motion vector mode performed by the differential motion vector mode implementation unit 126 is assumed to be performed in the same manner as HEVC, and hereinafter, the merge mode will be mainly described.
- the merge mode determination unit 121 determines whether to use the merge mode as the inter prediction mode for each block. If the merge flag is 1, the merge mode is used, and if the merge flag is 0, the differential motion vector mode is used.
- the inter prediction unit 120 uses the RDO (rate distortion optimization) method or the like used in HEVC reference software to determine whether to set the merge flag to 1.
- the code string decoding unit 210 acquires the merge flag decoded from the coded stream based on the syntax. Details of the syntax will be described later.
- the differential motion vector mode implementation unit 126 implements the differential motion vector mode, and the inter prediction parameter of the differential motion vector mode is supplied to the predicted value derivation unit 127.
- the merge candidate list generation unit 122, the merge candidate selection unit 123, the merge candidate correction determination unit 124, and the merge candidate correction unit 125 perform the merge mode, and the inter prediction parameter of the merge mode is the predicted value. It is supplied to the derivation unit 127.
- FIG. 4 is a flowchart explaining the operation in the merge mode.
- the merge mode will be described in detail with reference to FIGS. 3 and 4.
- The merge candidate list generation unit 122 generates a merge candidate list from the motion information of blocks adjacent to the processing target block and the motion information of blocks of the decoded image (S100), and supplies the generated merge candidate list to the merge candidate selection unit 123.
- The terms "processing target block" and "prediction target block" are used interchangeably.
- FIG. 5 is a diagram illustrating the configuration of the merge candidate list generation unit 122.
- the merge candidate list generating unit 122 includes a spatial merge candidate generating unit 201, a temporal merge candidate generating unit 202, and a merge candidate supplementing unit 203.
- FIG. 6 is a diagram illustrating a block adjacent to the processing target block.
- the blocks adjacent to the processing target block are block A, block B, block C, block D, block E, block F, and block G.
- In the present embodiment, a plurality of blocks adjacent to the processing target block are used, but the adjacent blocks are not limited to these.
- FIG. 7 is a diagram for explaining blocks on the decoded image at the same position as the processing target block and its periphery.
- The blocks on the decoded image at the same position as the processing target block and in its periphery are referred to as block CO1, block CO2, and block CO3; however, the blocks used are not limited to these.
- The blocks CO1, CO2, and CO3 are called co-located blocks (same-position blocks), and the decoded image including the co-located blocks is called the co-located picture.
- The spatial merge candidate generation unit 201 inspects block A, block B, block C, block D, block E, block F, and block G in order, and when either or both of the valid flag for L0 prediction and the valid flag for L1 prediction of a block are valid, sequentially adds the motion information of that block to the merge candidate list as a merge candidate.
- the merge candidates generated by the spatial merge candidate generation unit 201 are called spatial merge candidates.
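The spatial candidate collection described above can be sketched as follows; the dict field names (`valid_l0`, `valid_l1`, `motion`) are illustrative assumptions, not from the patent.

```python
def collect_spatial_candidates(neighbors):
    """Inspect neighbouring blocks in the fixed order A..G and append the
    motion information of each block with a valid L0 and/or L1 prediction."""
    merge_list = []
    for blk in neighbors:  # order: A, B, C, D, E, F, G
        if blk["valid_l0"] or blk["valid_l1"]:
            merge_list.append(blk["motion"])
    return merge_list

blocks = [
    {"valid_l0": True,  "valid_l1": False, "motion": "A"},
    {"valid_l0": False, "valid_l1": False, "motion": "B"},  # e.g. intra: skipped
    {"valid_l0": True,  "valid_l1": True,  "motion": "C"},
]
print(collect_spatial_candidates(blocks))  # -> ['A', 'C']
```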
- The temporal merge candidate generation unit 202 inspects block CO1, block CO2, and block CO3 in order, applies processing such as scaling to the motion information of the first block in which either or both of the L0 prediction valid flag and the L1 prediction valid flag are valid, and sequentially adds the result to the merge candidate list as a merge candidate.
- The merge candidates generated by the temporal merge candidate generation unit 202 are called temporal merge candidates.
- Scaling of temporal merge candidates is performed in the same manner as in HEVC.
- The motion vector of the temporal merge candidate is derived by scaling the motion vector of the co-located block by the ratio of the distance between the prediction target picture and the picture referenced by the temporal merge candidate to the distance between the co-located picture and the reference picture referenced by the co-located block.
- The picture referenced by the temporal merge candidate is the reference picture whose reference picture index is 0 in both L0 prediction and L1 prediction. Which of the co-located block of L0 prediction and the co-located block of L1 prediction is used is determined by encoding (decoding) the same-position derivation flag. In this way, one motion vector of the L0 prediction or the L1 prediction of the co-located block is scaled to both L0 prediction and L1 prediction to derive new L0 prediction and L1 prediction motion vectors, which become the L0 prediction and L1 prediction motion vectors of the temporal merge candidate.
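The distance-based scaling can be sketched as follows. This is a simplified illustration: HEVC performs this with clipped fixed-point arithmetic and rounding, whereas plain integer division is used here for clarity, and the argument names are assumptions.

```python
def scale_temporal_mv(mv_col, poc_cur, poc_cur_ref, poc_col, poc_col_ref):
    """Scale the co-located block's motion vector by the ratio of the
    (current picture -> its reference) distance to the
    (co-located picture -> its reference) distance."""
    tb = poc_cur - poc_cur_ref  # distance used by the temporal merge candidate
    td = poc_col - poc_col_ref  # distance used by the co-located block
    return (mv_col[0] * tb // td, mv_col[1] * tb // td)

# The co-located distance is 4 pictures and the current distance is 2,
# so the vector is halved.
print(scale_temporal_mv((8, -4), poc_cur=6, poc_cur_ref=4, poc_col=8, poc_col_ref=4))
# -> (4, -2)
```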
- The merge candidate supplementing unit 203 adds supplementary merge candidates to the merge candidate list until the number of merge candidates included in the merge candidate list reaches the maximum number of merge candidates.
- the supplemental merge candidate is motion information in which the motion vectors of L0 prediction and L1 prediction are both (0, 0) and the reference picture indexes of L0 prediction and L1 prediction are both 0.
- In the present embodiment, the maximum number of merge candidates is 6, but any number equal to or greater than 1 may be used.
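The supplementing step can be sketched as follows: zero-vector candidates with reference picture index 0 are appended until the list holds the maximum number of candidates. The dict layout is an illustrative assumption.

```python
# Supplemental merge candidate: both motion vectors (0, 0), both reference
# picture indexes 0, as described above.
ZERO_CANDIDATE = {"mv_l0": (0, 0), "mv_l1": (0, 0),
                  "ref_idx_l0": 0, "ref_idx_l1": 0}

def supplement(merge_list, max_count=6):
    """Pad the merge candidate list with supplementary zero candidates."""
    while len(merge_list) < max_count:
        merge_list.append(dict(ZERO_CANDIDATE))
    return merge_list

print(len(supplement([], 6)))  # -> 6
```

Padding to a fixed count keeps the merge index decodable without knowing how many spatial and temporal candidates were actually found.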
- the merge candidate selection unit 123 selects one merge candidate from the merge candidate list (S101), and supplies the selected merge candidate (referred to as “selected merge candidate”) and the merge index to the merge candidate correction determination unit 124. Then, the selected merge candidate is used as the motion information of the processing target block.
- The inter prediction unit 120 of the image encoding device 100 selects one merge candidate from the merge candidates included in the merge candidate list by using the RDO (rate-distortion optimization) method or the like used in the HEVC reference software, and determines the merge index.
- In the image decoding device 200, the code string decoding unit 210 acquires the merge index decoded from the encoded stream and, based on the merge index, selects one merge candidate from the merge candidates included in the merge candidate list as the selected merge candidate.
- The merge candidate correction determination unit 124 checks whether the width of the processing target block is equal to or larger than a predetermined width, whether the height of the processing target block is equal to or larger than a predetermined height, and whether both or at least one of the L0 prediction and the L1 prediction of the selected merge candidate is valid (S102). If these conditions are not satisfied (NO in S102), the selected merge candidate is used as the motion information of the processing target block without correction, and the process proceeds to step S111.
- Since the merge candidate list always includes merge candidates in which at least one of L0 prediction and L1 prediction is valid, it is obvious that both or at least one of the L0 prediction and the L1 prediction of the selected merge candidate is valid. Therefore, the check of whether at least one of the L0 prediction and the L1 prediction of the selected merge candidate is valid may be omitted, so that S102 checks only whether the width of the processing target block is equal to or larger than the predetermined width and the height of the processing target block is equal to or larger than the predetermined height.
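The simplified size gate of S102 can be sketched as follows; the 8x8 threshold is purely an illustrative assumption, since the patent only speaks of a "predetermined" width and height.

```python
def correction_allowed(width, height, min_width=8, min_height=8):
    """S102 gate (simplified form): merge correction is considered only
    for blocks at least as large as a predetermined size."""
    return width >= min_width and height >= min_height

print(correction_allowed(16, 16))  # -> True
print(correction_allowed(4, 8))    # -> False
```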
- If the conditions are satisfied (YES in S102), the merge candidate correction determination unit 124 sets the merge correction flag (S103) and supplies the merge correction flag to the merge candidate correction unit 125.
- In the image encoding device 100, if the prediction error of inter prediction using the selected merge candidate is equal to or larger than a predetermined prediction error, the merge correction flag is set to 1; otherwise, the merge correction flag is set to 0.
- the code string decoding unit 210 acquires the merge correction flag decoded from the coded stream based on the syntax.
- the merge candidate correction unit 125 checks whether the merge correction flag is 1 (S104). If the merge correction flag is not 1 (NO in S104), the process proceeds to step S111 without correcting the selected merge candidate as the motion information of the processing target block.
- the merge correction flag is 1 (YES in S104), it is checked whether the L0 prediction of the selected merge candidate is valid (S105). If the L0 prediction of the selected merge candidate is not valid (NO in S105), the process proceeds to step S108. If the L0 prediction of the selected merge candidate is valid (YES in S105), the differential motion vector of the L0 prediction is determined (S106). As described above, if the merge correction flag is 1, the motion information of the selected merge candidate is corrected, and if the merge correction flag is 0, the motion information of the selected merge candidate is not corrected.
- the differential motion vector of L0 prediction is obtained by motion vector search.
- the search range of the motion vector is ±16 in both the horizontal and vertical directions, but it may be another power of 2 such as ±64.
- the code string decoding unit 210 acquires the differential motion vector of L0 prediction decoded from the coded stream based on the syntax.
- the merge candidate correction unit 125 calculates a corrected motion vector for L0 prediction, and sets the corrected motion vector for L0 prediction as the motion vector for L0 prediction of the motion information of the block to be processed (S107).
- the corrected motion vector (mvL0) for L0 prediction is the sum of the motion vector (mmvL0) for L0 prediction of the selected merge candidate and the differential motion vector (mvdL0) for L0 prediction, and is given by the following formula. Note that [0] indicates the horizontal component of the motion vector, and [1] indicates the vertical component of the motion vector.
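As a minimal sketch (illustrative Python, not part of the specification), the per-component addition mvL0 = mmvL0 + mvdL0 described above can be written as:

```python
def correct_motion_vector(mmv, mvd):
    """Add the differential motion vector mvd to the merge-candidate
    motion vector mmv, component by component ([0]=horizontal, [1]=vertical)."""
    return [mmv[0] + mvd[0], mmv[1] + mvd[1]]

# Example: a selected merge candidate MV of (4, -2) corrected by (+3, +1).
mvL0 = correct_motion_vector([4, -2], [3, 1])
print(mvL0)  # [7, -1]
```

The same addition applies to the L1 prediction (mvL1 = mmvL1 + mvdL1).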
- the differential motion vector of L1 prediction is obtained by motion vector search.
- the search range of the motion vector is ±16 in both the horizontal and vertical directions, but it may be a power of 2 such as ±64.
- the code string decoding unit 210 acquires the differential motion vector of L1 prediction decoded from the coded stream based on the syntax.
- the merge candidate correction unit 125 calculates a corrected motion vector for L1 prediction, and sets the corrected motion vector for L1 prediction as the motion vector for L1 prediction of the motion information of the processing target block (S110).
- the corrected motion vector (mvL1) for L1 prediction is the sum of the motion vector (mmvL1) for L1 prediction of the selected merge candidate and the differential motion vector (mvdL1) for L1 prediction, and is given by the following formula. Note that [0] indicates the horizontal component of the motion vector, and [1] indicates the vertical component of the motion vector.
- the prediction value derivation unit 127 performs inter prediction of L0 prediction, L1 prediction, or bi-prediction based on the motion information of the processing target block, and derives a prediction value (S111). As described above, if the merge correction flag is 1, the motion vector of the selected merge candidate is corrected, and if the merge correction flag is 0, the motion vector of the selected merge candidate is not corrected.
- FIG. 8 is a diagram showing a part of the syntax of a block in the merge mode.
- Table 1 shows the relationship between inter prediction parameters and syntax.
- cbWidth is the width of the block to be processed, and cbHeight is the height of the block to be processed.
- the predetermined width and the predetermined height are both 8. By setting the predetermined width and the predetermined height, it is possible to reduce the processing amount by not correcting the merge candidates in units of small blocks.
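A minimal sketch of this size gate (illustrative Python; the function and constant names are assumptions of this sketch, while the value 8 comes from the text):

```python
MIN_WIDTH = 8   # predetermined width given in the text
MIN_HEIGHT = 8  # predetermined height given in the text

def merge_correction_allowed(cb_width, cb_height):
    """Correction of merge candidates is considered only for blocks at least
    MIN_WIDTH x MIN_HEIGHT, reducing the processing amount for small blocks."""
    return cb_width >= MIN_WIDTH and cb_height >= MIN_HEIGHT

print(merge_correction_allowed(8, 8))   # True
print(merge_correction_allowed(4, 16))  # False (width below threshold)
```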
- cu_skip_flag is 1 when the block is in the skip mode, and is 0 when the block is not in the skip mode.
- the skip mode syntax is the same as the merge mode syntax.
- merge_idx is a merge index for selecting a selected merge candidate from the merge candidate list.
- whether to encode (decode) merge_mod_flag is determined based on merge_idx, and merge_idx is shared with the merge index of the merge mode; the syntax is thereby determined so that the coding efficiency is improved while suppressing complication of the syntax and an increase in contexts.
- FIG. 9 is a diagram showing the syntax of the differential motion vector.
- Mvd_coding(N) in FIG. 9 has the same syntax as that used in the differential motion vector mode.
- N is 0 or 1.
- the syntax of the differential motion vector consists of abs_mvd_greater0_flag[d], a flag indicating whether the component of the differential motion vector is greater than 0; abs_mvd_greater1_flag[d], a flag indicating whether the component of the differential motion vector is greater than 1; mvd_sign_flag[d], indicating the sign (±) of the component of the differential motion vector; and abs_mvd_minus2[d], indicating the absolute value obtained by subtracting 2 from the component of the differential motion vector. d is 0 or 1.
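A plausible reconstruction of how one component is rebuilt from these syntax elements, assuming the HEVC-style convention that mvd_sign_flag equal to 1 means negative (illustrative Python, not the normative decoding process):

```python
def decode_mvd_component(greater0_flag, greater1_flag, abs_mvd_minus2, sign_flag):
    """Rebuild one differential-motion-vector component from the flags
    described in the text (sign_flag of 1 is taken to mean negative)."""
    if not greater0_flag:
        return 0
    abs_val = 1 if not greater1_flag else abs_mvd_minus2 + 2
    return -abs_val if sign_flag else abs_val

print(decode_mvd_component(0, 0, 0, 0))  # 0
print(decode_mvd_component(1, 0, 0, 1))  # -1
print(decode_mvd_component(1, 1, 3, 0))  # 5  (3 + 2)
```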
- HEVC (High Efficiency Video Coding) has a merge mode and a differential motion vector mode as inter prediction modes.
- since the motion information in the merge mode depends on processed blocks, the cases where the prediction efficiency is high are limited, and the utilization efficiency needs to be further improved.
- in the differential motion vector mode, syntaxes for L0 prediction and L1 prediction are prepared separately, and the prediction type (L0 prediction, L1 prediction, or bi-prediction), a motion vector predictor flag, a differential motion vector, and a reference picture index are all required. Therefore, the differential motion vector mode is inferior to the merge mode in coding efficiency, but it stably achieves high prediction efficiency even for motion that cannot be derived in the merge mode, such as sudden motion having little correlation with the motion of spatially or temporally adjacent blocks.
- in the present embodiment, the motion vector of the merge mode can be corrected while fixing the prediction type and the reference picture index of the merge mode, thereby improving the coding efficiency over the differential motion vector mode and improving the utilization efficiency over the merge mode.
- the size of the differential motion vector can be kept small, and a decrease in coding efficiency can be suppressed.
- [Modification 1] In the present embodiment, the differential motion vector is used as the syntax of a block in the merge mode. In this modification, the differential motion vector is defined to be encoded (or decoded) as a differential unit motion vector.
- the differential unit motion vector is a motion vector when the picture interval is the minimum interval. In HEVC or the like, the minimum picture interval is coded as a code string in the coded stream.
- the coding efficiency can be particularly improved when the difference motion vector is large and the distance between the prediction target picture and the reference picture is large. Further, even when the interval between the processing target picture and the reference picture and the speed of the object moving in the screen are in a proportional relationship, the prediction efficiency and the coding efficiency can be improved.
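The POC-based scaling of Modification 1 (cf. the formulas mvLN[d] = mmvLN[d] + umvdLN[d]*(POC(Cur)-POC(LN)) in the description) can be sketched as follows (illustrative Python):

```python
def scale_unit_mvd(mmv, umvd, poc_cur, poc_ref):
    """Modification 1: the decoded differential *unit* motion vector umvd is
    scaled by the POC distance between the current picture and the reference
    picture before being added to the merge-candidate motion vector mmv."""
    dist = poc_cur - poc_ref
    return [mmv[0] + umvd[0] * dist, mmv[1] + umvd[1] * dist]

# A reference picture 4 pictures away turns a unit difference of (1, 0)
# into a correction of (4, 0).
print(scale_unit_mvd([10, 5], [1, 0], poc_cur=8, poc_ref=4))  # [14, 5]
```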
- [Modification 2] In the present embodiment, 0 can be encoded (or decoded) as a component of the differential motion vector, so that, for example, only the L0 prediction can be changed. In this modification, 0 cannot be encoded (or decoded) as a component of the differential motion vector.
- FIG. 10 is a diagram showing the syntax of the differential motion vector of Modification 2.
- the syntax of the differential motion vector consists of abs_mvd_greater1_flag[d], a flag indicating whether the component of the differential motion vector is greater than 1; abs_mvd_greater2_flag[d], a flag indicating whether the component of the differential motion vector is greater than 2; abs_mvd_minus3[d], indicating the absolute value obtained by subtracting 3 from the component of the differential motion vector; and mvd_sign_flag[d], indicating the sign (±) of the component of the differential motion vector.
- [Modification 3] In the present embodiment, the components of the differential motion vector are integers, and in Modification 2 they are integers excluding 0. In this modification, the components of the differential motion vector, excluding the ± sign, are limited to powers of 2.
- abs_mvd_pow_plus1[d] is used instead of abs_mvd_minus2[d], which is the syntax of the present embodiment.
- the differential motion vector mvd[d] is calculated from mvd_sign_flag[d] and abs_mvd_pow_plus1[d] by the following equation.
- mvd[d] mvd_sign_flag[d]*2 ⁇ (abs_mvd_pow_plus1[d]+1)
- abs_mvd_pow_plus2[d] is used instead of abs_mvd_minus3[d], which is the syntax of Modification 2.
- the differential motion vector mvd[d] is calculated from mvd_sign_flag[d] and abs_mvd_pow_plus2[d] by the following equation.
- mvd[d] mvd_sign_flag[d]*2 ⁇ (abs_mvd_pow_plus2[d]+2)
- [Modification 4] In mvd_coding(N) of this modification, abs_mvd_greater0_flag[d], abs_mvd_greater1_flag[d], and mvd_sign_flag[d] are not present; instead, the configuration includes abs_mvr_plus2[d] and mvr_sign_flag[d].
- the corrected motion vector (mvLN) of LN prediction is the product of the motion vector (mmvLN) of LN prediction of the selected merge candidate and the motion vector magnification (mvrLN), that is, mvLN[d] = mmvLN[d] * mvrLN.
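A sketch of this magnification-based correction (illustrative Python; how mvrLN is derived from abs_mvr_plus2[d] and mvr_sign_flag[d] is not detailed in this excerpt, so the magnification is passed in directly):

```python
def apply_mv_magnification(mmv, mvr):
    """Modification 4: instead of adding a differential motion vector, the
    merge-candidate motion vector mmv is multiplied, component by component,
    by a motion vector magnification mvr."""
    return [mmv[0] * mvr, mmv[1] * mvr]

print(apply_mv_magnification([6, -4], 2))  # [12, -8]
```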
- [Modification 5] In the syntax of the present embodiment shown in FIG. 8, when cu_skip_flag is 1 (in the skip mode), merge_mod_flag may be present; alternatively, merge_mod_flag may be made absent in the skip mode.
- [Modification 6] In the present embodiment, it is checked whether the LN prediction (N=0 or 1) of the selected merge candidate is valid, and the differential motion vector is not validated if the LN prediction is not valid. Alternatively, the differential motion vector may be validated regardless of whether the LN prediction of the selected merge candidate is valid, without performing this check. In this case, when the LN prediction of the selected merge candidate is invalid, the motion vector of the LN prediction of the selected merge candidate is set to (0, 0), and the reference picture index of the LN prediction of the selected merge candidate is set to 0.
- in this modification, the differential motion vector is validated regardless of whether the LN prediction of the selected merge candidate is valid, thereby increasing the opportunities to use bi-directional prediction and improving the coding efficiency.
- [Modification 7] In the present embodiment, whether the L0 prediction and the L1 prediction of the selected merge candidate are individually valid is determined, and whether the differential motion vector is encoded (or decoded) is controlled accordingly. In this modification, the differential motion vector is encoded (or decoded) when both the L0 prediction and the L1 prediction of the selected merge candidate are valid, and is not encoded (or decoded) otherwise.
- step S102 is as follows.
- the merge candidate correction determination unit 124 checks whether the width of the processing target block is a predetermined width or more, the height of the processing target block is a predetermined height or more, and both the L0 prediction and the L1 prediction of the selected merge candidate are valid. (S102).
- steps S105 and S108 are unnecessary in this modification.
- FIG. 11 is a diagram showing a part of the syntax of a block in the merge mode of Modification 7. The syntax relating to steps S102, S105, and S108 differs from the present embodiment.
- in this modification, the differential motion vector is validated only when both the L0 prediction and the L1 prediction of the selected merge candidate are valid, so the motion vectors of frequently used bi-directional prediction merge candidates can be corrected, and the prediction efficiency can be improved efficiently.
- [Modification 8] In the present embodiment, two differential motion vectors, that is, the differential motion vector of the L0 prediction and the differential motion vector of the L1 prediction, are used as the syntax of a block in the merge mode.
- only one differential motion vector is encoded (or decoded), and one differential motion vector is shared as a corrected motion vector for L0 prediction and a corrected motion vector for L1 prediction.
- if the L1 prediction of the selected merge candidate is valid, the motion vector of the L1 prediction is calculated from the following formula: the differential motion vector is added in the direction opposite to that of the L0 prediction (mvL1[d] = mmvL1[d] + mvd[d]*-1). Equivalently, the differential motion vector may be subtracted from the L1 prediction motion vector of the selected merge candidate.
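The shared-difference correction of Modification 8 can be sketched as follows (illustrative Python):

```python
def correct_bipred_shared_mvd(mmvL0, mmvL1, mvd):
    """Modification 8: a single differential motion vector mvd is added to
    the L0 motion vector and added with opposite sign to the L1 motion
    vector (equivalently, subtracted from it)."""
    mvL0 = [mmvL0[0] + mvd[0], mmvL0[1] + mvd[1]]
    mvL1 = [mmvL1[0] - mvd[0], mmvL1[1] - mvd[1]]
    return mvL0, mvL1

mvL0, mvL1 = correct_bipred_shared_mvd([4, 0], [-4, 0], [2, 1])
print(mvL0, mvL1)  # [6, 1] [-6, -1]
```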
- FIG. 12 is a diagram showing a part of the syntax of a block in the merge mode of Modification 8.
- this modification differs from the present embodiment in that the checks of the validity of the L0 prediction and the validity of the L1 prediction are deleted and that mvd_coding(1) is not present.
- mvd_coding(0) corresponds to one differential motion vector.
- in this modification, by defining only one differential motion vector shared between the L0 prediction and the L1 prediction, the number of differential motion vectors is halved in the case of bi-prediction, and the coding efficiency can be improved while suppressing a decrease in prediction efficiency. For the L1 prediction, the differential motion vector is added in the opposite direction.
- FIG. 16 is a diagram for explaining the effect of Modification 8.
- FIG. 16 is an image showing a state of a sphere (area shaded with diagonal lines) moving horizontally in a moving rectangular area (area surrounded by a broken line).
- the movement of the sphere with respect to the screen is the sum of the movement of the rectangular area and the horizontal movement of the sphere within it.
- in FIG. 16, picture B is the prediction target picture, picture A is the L0 prediction reference picture, and picture C is the L1 prediction reference picture. Picture A and picture C are reference pictures located in temporally opposite directions from the prediction target picture.
- by adding the movement amount of the sphere that cannot be acquired from adjacent blocks to the L0 prediction and subtracting it from the L1 prediction, the movement of the sphere can be accurately reproduced.
- even if the pictures A, B, and C are not at equal intervals, as long as the movement amounts of the sphere relative to the rectangular region are at equal intervals, the movement of the sphere can be accurately reproduced by adding the movement amount of the sphere that cannot be obtained from adjacent blocks to the L0 prediction and subtracting it from the L1 prediction.
- FIG. 17 is a diagram illustrating the effect of Modification 8 when the picture intervals are not equal. This example will be described in detail with reference to FIG. 17. Pictures F0, F1, ..., F8 in FIG. 17 indicate pictures at fixed intervals. From picture F0 to picture F4, the sphere is stationary, and from picture F5 onward it moves at a constant speed in a fixed direction.
- when picture F0 and picture F6 are the reference pictures and picture F5 is the prediction target picture, picture F0, picture F5, and picture F6 are not at equal intervals, but the movement amounts of the sphere relative to the rectangular region are at equal intervals.
- normally, when picture F5 is the prediction target picture, picture F4, which is close in distance, is selected as the reference picture. Picture F0 is selected as the reference picture instead of picture F4 when picture F0 is a high-quality picture with less distortion than picture F4.
- Reference pictures are usually managed by a FIFO (First-In First-Out) method within a reference picture buffer, but a long-term reference picture exists as a mechanism for allowing a high-quality picture with little distortion to remain in the reference picture buffer for a long time.
- the long-term reference picture is not managed as a FIFO in the reference picture buffer, and whether or not it is a long-term reference picture is managed by the reference picture list control information encoded in the slice header.
- the present modification can improve the prediction efficiency and the coding efficiency by being applied when either or both of the L0 prediction and the L1 prediction are long-term reference pictures. Further, the present modification can improve the prediction efficiency and the coding efficiency by being applied when either or both of the L0 prediction and the L1 prediction are intra pictures.
- in this modification, the circuit scale and power consumption can be reduced by not scaling the differential motion vector based on the inter-picture distance as is done for the temporal merge candidate. For example, if the differential motion vector were scaled and a temporal merge candidate were selected as the selected merge candidate, both scaling of the temporal merge candidate and scaling of the differential motion vector would be needed. Since the scaling of the temporal merge candidate and the scaling of the differential motion vector use different motion vectors as the scaling reference, the two scalings cannot be combined and must be performed separately.
- in this modification, only the temporal merge candidate is scaled; when the differential motion vector is smaller than the motion vector of the temporal merge candidate, the coding efficiency can be improved without scaling the differential motion vector. When the differential motion vector is large, a decrease in coding efficiency can be suppressed by selecting the differential motion vector mode.
- [Modification 9] In the present embodiment, the maximum number of merge candidates is the same whether the merge correction flag is 0 or 1.
- the maximum number of merge candidates when the merge correction flag is 1 is set smaller than the maximum number of merge candidates when the merge correction flag is 0.
- the maximum number of merge candidates when the merge correction flag is 1 is 2.
- here, the maximum number of merge candidates when the merge correction flag is 1 is referred to as the maximum number of corrected merge candidates.
- when the merge index is smaller than the maximum number of corrected merge candidates, the merge correction flag is encoded (decoded); when the merge index is equal to or larger than the maximum number of corrected merge candidates, the merge correction flag is not encoded (decoded).
- here, the maximum number of merge candidates when the merge correction flag is 0 and the maximum number of corrected merge candidates may be predetermined values, or may be obtained by encoding (decoding) them in the SPS or PPS of the encoded stream.
- as described above, in this modification, the maximum number of merge candidates when the merge correction flag is 1 is set smaller than the maximum number of merge candidates when the merge correction flag is 0, so that whether or not to correct a merge candidate is determined only for merge candidates with a high selection probability; this reduces the processing of the encoding device while suppressing a decrease in coding efficiency. Further, when the merge index is equal to or larger than the maximum number of corrected merge candidates, the merge correction flag need not be encoded (decoded), which improves the coding efficiency.
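The signaling condition of Modification 9 can be sketched as follows (illustrative Python; the value 2 is the example maximum number of corrected merge candidates given in the text):

```python
MAX_CORRECTED_MERGE_CAND = 2  # example value from the text

def merge_correction_flag_is_coded(merge_idx):
    """Modification 9: the merge correction flag is encoded (decoded) only
    when the merge index is smaller than the maximum number of corrected
    merge candidates."""
    return merge_idx < MAX_CORRECTED_MERGE_CAND

print(merge_correction_flag_is_coded(0))  # True
print(merge_correction_flag_is_coded(2))  # False
```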
- the configurations of the image encoding device 100 and the image decoding device 200 according to the second embodiment are the same as those of the image encoding device 100 and the image decoding device 200 according to the first embodiment.
- the present embodiment differs from the first embodiment in the operation in the merge mode and the syntax. Hereinafter, the difference between this embodiment and the first embodiment will be described.
- FIG. 13 is a flowchart illustrating the operation of the merge mode according to the second embodiment.
- FIG. 14 is a diagram showing a part of the syntax of a block in the merge mode according to the second embodiment.
- FIG. 15 is a diagram showing the syntax of the differential motion vector according to the second embodiment.
- FIG. 13 differs from FIG. 4 in steps S205 to S207 and steps S209 to S211.
- if the merge correction flag is 1 (YES in S104), it is checked whether the L0 prediction of the selected merge candidate is invalid (S205). If the L0 prediction of the selected merge candidate is not invalid (NO in S205), the process proceeds to step S208. If the L0 prediction of the selected merge candidate is invalid (YES in S205), the corrected motion vector of the L0 prediction is determined (S206).
- the corrected motion vector for L0 prediction is obtained by motion vector search.
- the search range of the motion vector is ⁇ 1 both in the horizontal and vertical directions.
- the inter prediction unit 220 of the image decoding device 200 acquires the corrected motion vector for L0 prediction from the encoded stream.
- the reference picture index for L0 prediction is determined (S207).
- the reference picture index of L0 prediction is set to 0.
- if the slice type is not B or the L1 prediction of the selected merge candidate is not invalid (NO in S208), the process proceeds to S111. If the slice type is B and the L1 prediction of the selected merge candidate is invalid (YES in S208), the corrected motion vector of the L1 prediction is determined (S209).
- the corrected motion vector of L1 prediction is obtained by motion vector search.
- the search range of the motion vector is ⁇ 1 both in the horizontal and vertical directions.
- the inter prediction unit 220 of the image decoding device 200 acquires the corrected motion vector for L1 prediction from the encoded stream.
- the reference picture index for L1 prediction is determined (S210).
- the reference picture index of L1 prediction is set to 0.
- as described above, in the present embodiment, when the slice type is a slice type that allows bi-prediction (that is, slice type B), a merge candidate of only L0 prediction or only L1 prediction is converted into a bi-prediction merge candidate, so an improvement in prediction efficiency can be expected due to the filtering effect.
- the search range of the motion vector can be suppressed to the minimum.
- in the present embodiment, the reference picture index in step S207 and step S210 is set to 0. In this modification, when the L0 prediction of the selected merge candidate is invalid, the reference picture index of the L0 prediction is set to the reference picture index of the L1 prediction, and when the L1 prediction of the selected merge candidate is invalid, the reference picture index of the L1 prediction is set to the reference picture index of the L0 prediction.
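The reference-picture-index inheritance of this modification can be sketched as follows (illustrative Python; representing an invalid prediction direction by None is an assumption of this sketch):

```python
def fill_invalid_ref_idx(ref_idx_l0, ref_idx_l1):
    """Second-embodiment modification: when one prediction direction of the
    selected merge candidate is invalid (None here), it inherits the
    reference picture index of the other direction instead of using 0."""
    if ref_idx_l0 is None:
        ref_idx_l0 = ref_idx_l1
    elif ref_idx_l1 is None:
        ref_idx_l1 = ref_idx_l0
    return ref_idx_l0, ref_idx_l1

print(fill_invalid_ref_idx(None, 3))  # (3, 3)
print(fill_invalid_ref_idx(1, None))  # (1, 1)
```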
- the encoded bitstream output by the image encoding device has a specific data format so that it can be decoded according to the encoding method used in the embodiments.
- the coded bit stream may be provided by being recorded in a computer-readable recording medium such as an HDD, SSD, flash memory, or optical disk, or may be provided from a server through a wired or wireless network. Therefore, the image decoding device corresponding to this image coding device can decode the coded bit stream of this specific data format regardless of the providing means.
- when a wired or wireless network is used for exchanging the encoded bitstream between the image encoding device and the image decoding device, the encoded bitstream may be converted into a data format suitable for the transmission mode of the communication path and transmitted.
- in that case, a transmission device that converts the encoded bitstream output by the image encoding device into encoded data in a data format suitable for the transmission mode of the communication path and transmits it to the network, and a receiving device that receives the encoded data from the network, restores the encoded bitstream, and supplies it to the image decoding device, are provided. The transmission device includes a memory that buffers the encoded bitstream output from the image encoding device, a packet processing unit that packetizes the encoded bitstream, and a transmission unit that transmits the packetized encoded data via the network.
- the receiving device includes a receiving unit that receives the packetized encoded data via the network, a memory that buffers the received encoded data, and a packet processing unit that depacketizes the encoded data to generate an encoded bitstream and supplies it to the image decoding device.
- further, a relay device that receives the encoded data transmitted by the transmitting device and supplies it to the receiving device may be provided.
- the relay device includes a receiving unit that receives the packetized encoded data transmitted by the transmitting device, a memory that buffers the received encoded data, and a transmitting unit that transmits the packetized encoded data to the network. Further, the relay device may include a received-packet processing unit that depacketizes the packetized encoded data to generate an encoded bitstream, a recording medium that stores the encoded bitstream, and a transmission packet processing unit that packetizes the encoded bitstream.
- further, a display device may be formed by adding, to the configuration, a display unit that displays the image decoded by the image decoding device.
- further, an image pickup apparatus may be formed by adding an image pickup unit to the configuration and inputting the picked-up image to the image encoding device.
- FIG. 18 shows an example of the hardware configuration of the encoding/decoding device of the present application.
- the encoding/decoding device includes the configurations of the image encoding device and the image decoding device according to the embodiment of the present invention.
- the encoding/decoding device 9000 includes a CPU 9001, a codec IC 9002, an I/O interface 9003, a memory 9004, an optical disk drive 9005, a network interface 9006, and a video interface 9009, and each unit is connected by a bus 9010.
- the image encoding unit 9007 and the image decoding unit 9008 are typically implemented as a codec IC 9002.
- the image encoding process of the image encoding device according to the embodiment of the present invention is executed by the image encoding unit 9007, and the image decoding process of the image decoding device according to the embodiment of the present invention is executed by the image decoding unit 9008.
- the I/O interface 9003 is realized by a USB interface, for example, and is connected to an external keyboard 9104, mouse 9105, and the like.
- the CPU 9001 controls the encoding/decoding device 9000 to execute an operation desired by the user based on the user operation input via the I/O interface 9003.
- the user's operations via the keyboard 9104, mouse 9105, and the like include selecting whether to execute the encoding or decoding function, setting the encoding quality, and specifying the input/output destination of the encoded stream and the input/output destination of the image.
- when the user desires an operation of reproducing an image recorded on the disc recording medium 9100, the optical disc drive 9005 reads the encoded bitstream from the inserted disc recording medium 9100 and sends the read encoded bitstream to the image decoding unit 9008 of the codec IC 9002 via the bus 9010. The image decoding unit 9008 executes the image decoding process of the image decoding device according to the embodiment of the present invention on the input encoded bitstream, and sends the decoded image to the external monitor 9103 via the video interface 9009.
- the encoding/decoding device 9000 has a network interface 9006 and can be connected to an external distribution server 9106 and a mobile terminal 9107 via the network 9101.
- when the user desires an operation of reproducing an image from an encoded stream on the network, the network interface 9006 acquires the encoded stream from the network 9101 instead of reading the encoded bitstream from the disc recording medium 9100, and records it in the memory 9004. Then, the image decoding process of the image decoding device according to the embodiment of the present invention is executed on the encoded stream recorded in the memory 9004.
- when the user desires an operation of encoding an image captured by the camera 9102, the video interface 9009 inputs the image from the camera 9102 and sends it to the image encoding unit 9007 of the codec IC 9002 via the bus 9010.
- the image encoding unit 9007 executes an image encoding process in the image encoding device according to the embodiment of the present invention on an image input via the video interface 9009 to create an encoded bitstream. Then, the encoded bit stream is sent to the memory 9004 via the bus 9010.
- the optical disc drive 9005 writes the encoded stream to the inserted disc recording medium 9100.
- a hardware configuration having only the encoding function or only the decoding function can also be realized by, for example, replacing the codec IC 9002 with only the image encoding unit 9007 or only the image decoding unit 9008, respectively.
- of course, the above-described processing relating to encoding and decoding can be realized not only as transmission, storage, and reception devices using hardware such as an ASIC, but also by firmware stored in a ROM (Read Only Memory), flash memory, or the like, or by software running on a computer such as a CPU or an SoC (System on a Chip). The firmware program or software program may be recorded in a computer-readable recording medium and provided, provided from a server through a wired or wireless network, or provided as a data broadcast of terrestrial or satellite digital broadcasting.
- the present invention can be used for image decoding technology.
- 100 image encoding device, 110 block size determination unit, 120 inter prediction unit, 121 merge mode determination unit, 122 merge candidate list generation unit, 123 merge candidate selection unit, 124 merge candidate correction determination unit, 125 merge candidate correction unit, 126 differential motion vector mode execution unit, 127 prediction value derivation unit, 130 conversion unit, 140 code string generation unit, 150 local decoding unit, 160 frame memory, 200 image decoding device, 201 spatial merge candidate generation unit, 202 temporal merge candidate generation unit, 203 merge candidate replenishment unit, 210 code string decoding unit, 220 inter prediction unit, 230 inverse conversion unit, 240 frame memory.
Abstract
Description
Hereinafter, the image encoding device, image encoding method, and image encoding program, as well as the image decoding device, image decoding method, and image decoding program according to the first embodiment of the present invention will be described in detail with reference to the drawings.
It encodes the SPS (Sequence Parameter Set), PPS (Picture Parameter Set), and other information, encodes a code string for determining the block size supplied from the conversion unit 130, encodes the inter prediction parameters as a code string, encodes the prediction error data as a code string, and outputs an encoded stream. Details of the encoding of the inter prediction parameters will be described later.
mvL0[1] = mmvL0[1] + mvdL0[1]
Subsequently, it is checked whether the L1 prediction of the selected merge candidate is valid (S108). If the L1 prediction of the selected merge candidate is not valid (NO in S108), the process proceeds to step S111. If the L1 prediction of the selected merge candidate is valid (YES in S108), the differential motion vector of the L1 prediction is determined (S109).
mvL1[1] = mmvL1[1] + mvdL1[1]
Subsequently, the prediction value derivation unit 127 performs inter prediction of L0 prediction, L1 prediction, or bi-prediction based on the motion information of the processing target block, and derives a prediction value (S111). As described above, if the merge correction flag is 1, the motion vector of the selected merge candidate is corrected, and if the merge correction flag is 0, the motion vector of the selected merge candidate is not corrected.
[Modification 1]
In the present embodiment, the differential motion vector is used as the syntax of a block in the merge mode. In this modification, the differential motion vector is defined to be encoded (or decoded) as a differential unit motion vector. The differential unit motion vector is a motion vector for the case where the picture interval is the minimum interval. In HEVC and the like, the minimum picture interval is encoded as a code string in the encoded stream.
mvL0[1] = mmvL0[1] + umvdL0[1]*(POC(Cur)-POC(L0))
mvL1[0] = mmvL1[0] + umvdL1[0]*(POC(Cur)-POC(L1))
mvL1[1] = mmvL1[1] + umvdL1[1]*(POC(Cur)-POC(L1))
以上のように、本変形例では、差分動きベクトルとして差分単位動きベクトルを利用することで、差分動きベクトルの符号量を小さくすることで、符号化効率を向上させることができる。なお、差分動きベクトルが大きく且つ予測対象ピクチャと参照ピクチャの距離が大きくなる場合に、特に符号化効率を向上させることができる。また、処理対象ピクチャと参照ピクチャの間隔と画面内で動く物体の速度が比例関係にある場合にも、予測効率と符号化効率を向上させることができる。
[変形例2]
本実施の形態では差分動きベクトルの成分として0を符号化(または復号)できるものとして、例えば、L0予測のみを変更できるものとした。本変形例では差分動きベクトルの成分として0は符号化(または復号)できないものとする。
[変形例3]
本実施の形態では差分動きベクトルの成分を整数として、変形例2では0を除く整数とした。本変形例では差分動きベクトルの±の符号を除いた成分を2のべき乗に限定する。
また、変形例2のシンタックスであるabs_mvd_minus3[d]の代わりにabs_mvd_pow_plus2[d]を用いる。差分動きベクトルmvd[d]はmvd_sign_flag[d]とabs_mvd_pow_plus2[d]から下式のように算出する。
差分動きベクトルの成分を2のべき乗に限定することで、符号化装置の処理量を大幅に削減ながら、大きな動きベクトルの場合に予測効率を向上させることができる。
[変形例4]
本実施の形態では、mvd_coding(N)は差分動きベクトルを含むものとしたが、本変形例では、mvd_coding(N)は動きベクトル倍率を含むものとする。
[Modification 5]
In the syntax of Fig. 8 of this embodiment, merge_mod_flag may be present when cu_skip_flag is 1 (skip mode). Alternatively, merge_mod_flag may be made absent in skip mode.
[Modification 6]
In this embodiment, it is checked whether the LN prediction (N = 0 or 1) of the selected merge candidate is valid, and the differential motion vector is not enabled when the LN prediction is invalid. Instead, the differential motion vector may be enabled regardless of whether the LN prediction of the selected merge candidate is valid, without performing the check. In that case, when the LN prediction of the selected merge candidate is invalid, the motion vector of the LN prediction of the selected merge candidate is set to (0, 0), and the reference picture index of the LN prediction of the selected merge candidate is set to 0.
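The default substitution of Modification 6 can be sketched as follows; the dict layout and the use of None to mark an invalid prediction are illustrative assumptions:

```python
# Sketch of Modification 6: when the LN prediction of the selected merge
# candidate is invalid, default motion information is substituted so that a
# differential motion vector can still be applied.

def motion_info_with_defaults(pred):
    """Return (motion vector, reference picture index) for an LN prediction,
    substituting mv = (0, 0) and refIdx = 0 when the prediction is invalid."""
    if pred is None:                       # LN prediction invalid
        return (0, 0), 0
    return pred["mv"], pred["ref_idx"]
```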
[Modification 7]
In this embodiment, whether to encode (or decode) a differential motion vector is controlled by determining individually whether the L0 prediction and the L1 prediction of the selected merge candidate are valid. Alternatively, the differential motion vector may be encoded (or decoded) when both the L0 prediction and the L1 prediction of the selected merge candidate are valid, and not encoded (or decoded) when both are not valid. In this modification, step S102 becomes as follows.
[Modification 8]
In this embodiment, two differential motion vectors, one for L0 prediction and one for L1 prediction, are used as the syntax of a block in merge mode. In this modification, only one differential motion vector is encoded (or decoded) and shared as the correction motion vector for L0 prediction and L1 prediction, and the motion vectors mvLN (N = 0, 1) of the corrected merge candidate are calculated from the motion vectors mmvLN (N = 0, 1) of the selected merge candidate and the differential motion vector mvd by the equations below.
mvL0[1] = mmvL0[1] + mvd[1]
When the L1 prediction of the selected merge candidate is valid, the L1 motion vector is calculated by the expression below: the differential motion vector is added in the direction opposite to the L0 prediction. Equivalently, the differential motion vector may be subtracted from the L1 motion vector of the selected merge candidate.
mvL1[1] = mmvL1[1] + mvd[1]*-1
Fig. 12 shows part of the syntax of a block in merge mode according to Modification 8. It differs from this embodiment in that the checks on the validity of the L0 prediction and the L1 prediction are removed and mvd_coding(1) is absent. mvd_coding(0) corresponds to the single differential motion vector.
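The shared-mvd rule of Modification 8 (mvL1 = mmvL1 + mvd * -1 above) can be sketched as follows; the function name and tuple layout are illustrative assumptions:

```python
# Sketch of Modification 8: a single decoded mvd corrects both directions.
# It is added to the L0 motion vector and mirrored (multiplied by -1, i.e.
# subtracted) for the L1 motion vector.

def apply_shared_mvd(mmv_l0, mmv_l1, mvd):
    mv_l0 = (mmv_l0[0] + mvd[0], mmv_l0[1] + mvd[1])
    mv_l1 = (mmv_l1[0] - mvd[0], mmv_l1[1] - mvd[1])   # opposite direction
    return mv_l0, mv_l1
```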
[Modification 9]
In this embodiment, the maximum number of merge candidates is the same whether the merge correction flag is 0 or 1. In this modification, the maximum number of merge candidates when the merge correction flag is 1 is made smaller than the maximum number when the merge correction flag is 0; for example, it is set to 2. The maximum number of merge candidates when the merge correction flag is 1 is here called the maximum number of corrected merge candidates. When the merge index is smaller than the maximum number of corrected merge candidates, the merge correction flag is encoded (decoded); when the merge index is greater than or equal to the maximum number of corrected merge candidates, the merge correction flag is not encoded (decoded). The maximum number of merge candidates when the merge correction flag is 0 and the maximum number of corrected merge candidates may be predetermined values, or may be obtained by encoding (decoding) them in the SPS or PPS of the encoded stream.
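The conditional presence of the merge correction flag in Modification 9 can be sketched as follows; `read_flag` stands in for an entropy-decoder call and is an illustrative assumption:

```python
# Sketch of Modification 9: the merge correction flag is only present in the
# bitstream when the merge index is below the maximum number of corrected
# merge candidates; otherwise it is inferred to be 0 (no correction).

def decode_merge_correction_flag(merge_index, max_corrected, read_flag):
    if merge_index < max_corrected:        # flag present in the stream
        return read_flag()
    return 0                               # flag absent -> no correction
```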
[Second Embodiment]
The configurations of the image encoding device 100 and the image decoding device 200 of the second embodiment are the same as those of the first embodiment. This embodiment differs from the first embodiment in the operation and syntax of merge mode. The differences from the first embodiment are described below.
[Modification]
In this embodiment, the reference picture index in steps S207 and S210 is set to 0. In this modification, when the L0 prediction of the selected merge candidate is invalid, the reference picture index of the L0 prediction is set to the reference picture index of the L1 prediction, and when the L1 prediction of the selected merge candidate is invalid, the reference picture index of the L1 prediction is set to the reference picture index of the L0 prediction.
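The reference-index substitution of this modification can be sketched as follows; using None to mark an invalid prediction direction is an illustrative assumption:

```python
# Sketch of the second embodiment's modification: an invalid prediction
# direction borrows the reference picture index of the valid one instead of
# defaulting to 0.

def substitute_ref_idx(ref_idx_l0, ref_idx_l1):
    """None marks an invalid prediction direction."""
    if ref_idx_l0 is None:
        ref_idx_l0 = ref_idx_l1            # borrow from L1
    if ref_idx_l1 is None:
        ref_idx_l1 = ref_idx_l0            # borrow from L0
    return ref_idx_l0, ref_idx_l1
```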
Claims (9)
- A merge candidate list generation unit that generates a merge candidate list including, as merge candidates, motion information of a plurality of blocks neighboring a prediction target block;
a merge candidate selection unit that selects a merge candidate from the merge candidate list as a selected merge candidate;
a code string decoding unit that decodes a code string from an encoded stream to derive a correction vector; and
a merge candidate correction unit that derives a corrected merge candidate by adding the correction vector, without scaling, to a motion vector of a first prediction of the selected merge candidate and subtracting the correction vector, without scaling, from a motion vector of a second prediction of the selected merge candidate,
an image decoding device comprising the above units. - The image decoding device according to claim 1, wherein the code string decoding unit obtains a merge flag from the encoded stream, further obtains a merge correction flag when the merge flag indicates merge mode, and decodes the code string based on the merge correction flag.
- The image decoding device according to claim 1 or 2, wherein the merge candidate list includes a temporal merge candidate including a motion vector derived by scaling a motion vector of a block on a decoded picture located at the same position as the prediction target block.
- The image decoding device according to any one of claims 1 to 3, wherein the selected merge candidate and the corrected merge candidate are both bi-predictive.
- The image decoding device according to claim 4, wherein the merge candidate correction unit derives the corrected merge candidate when a reference picture of the first prediction of the selected merge candidate and a reference picture of the second prediction of the selected merge candidate are in temporally opposite directions with respect to the prediction target block.
- The image decoding device according to claim 4 or 5, wherein the merge candidate correction unit generates the bi-predictive corrected merge candidate when the reference picture of the first prediction of the selected merge candidate is a long-term picture or the reference picture of the second prediction of the selected merge candidate is a long-term picture.
- An image decoding device that performs inter prediction, comprising:
a merge candidate list generation unit that generates a merge candidate list from motion information of a plurality of blocks neighboring a prediction target block;
a code string decoding unit that decodes a merge index from an encoded stream;
a merge candidate selection unit that selects one merge candidate from the merge candidate list based on the merge index as a selected merge candidate;
a merge candidate correction determination unit that determines, based on a merge correction flag, whether to correct the selected merge candidate;
a merge candidate correction unit that, when the selected merge candidate is to be corrected, generates a corrected merge candidate by adding a differential motion vector of L0 prediction to a motion vector of the L0 prediction of the selected merge candidate; and
an inter prediction unit that performs inter prediction based on motion information of the corrected merge candidate when the selected merge candidate is corrected, and performs inter prediction based on motion information of the selected merge candidate when the corrected merge candidate is not derived,
wherein the code string decoding unit decodes the merge correction flag from the encoded stream when the L0 prediction of the selected merge candidate is valid. - A merge candidate list generation step of generating a merge candidate list including, as merge candidates, motion information of a plurality of blocks neighboring a prediction target block;
a merge candidate selection step of selecting a merge candidate from the merge candidate list as a selected merge candidate;
a code string decoding step of decoding a code string from an encoded stream to derive a correction vector; and
a merge candidate correction step of deriving a corrected merge candidate by adding the correction vector, without scaling, to a motion vector of a first prediction of the selected merge candidate and subtracting the correction vector, without scaling, from a motion vector of a second prediction of the selected merge candidate,
an image decoding method comprising the above steps. - A merge candidate list generation step of generating a merge candidate list including, as merge candidates, motion information of a plurality of blocks neighboring a prediction target block;
a merge candidate selection step of selecting a merge candidate from the merge candidate list as a selected merge candidate;
a code string decoding step of decoding a code string from an encoded stream to derive a correction vector; and
a merge candidate correction step of deriving a corrected merge candidate by adding the correction vector, without scaling, to a motion vector of a first prediction of the selected merge candidate and subtracting the correction vector, without scaling, from a motion vector of a second prediction of the selected merge candidate,
an image decoding program causing a computer to execute the above steps.
Priority Applications (16)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
BR112021005875-3A BR112021005875B1 (pt) | 2018-12-13 | 2019-12-13 | Dispositivo de decodificação de imagem, método de decodificação de imagem, dispositivo de codificação de imagem e método de codificação de imagem |
CN201980050605.2A CN112514395B (zh) | 2018-12-13 | 2019-12-13 | 图像解码装置和方法、以及图像编码装置和方法 |
CN202310789721.4A CN116600108A (zh) | 2018-12-13 | 2019-12-13 | 图像解码装置和方法、以及图像编码装置和方法 |
CN202110325100.1A CN112887720B (zh) | 2018-12-13 | 2019-12-13 | 图像解码装置和方法、以及图像编码装置和方法 |
KR1020217003083A KR102600108B1 (ko) | 2018-12-13 | 2019-12-13 | 화상 복호 장치, 화상 복호 방법, 화상 복호 프로그램, 화상 부호화 장치, 화상 부호화 방법 및, 화상 부호화 프로그램 |
KR1020237038090A KR20230156809A (ko) | 2018-12-13 | 2019-12-13 | 화상 복호 장치, 화상 복호 방법, 화상 복호 프로그램 기록 매체, 화상 부호화 장치, 화상 부호화 방법, 화상 부호화 프로그램 기록 매체, 격납 방법, 및 전송 방법 |
CA3119641A CA3119641A1 (en) | 2018-12-13 | 2019-12-13 | Image decoding device, image decoding method, and image decoding program |
RU2021108001A RU2770794C1 (ru) | 2018-12-13 | 2019-12-13 | Устройство декодирования изображения, способ декодирования изображения |
CN202310789241.8A CN116582666A (zh) | 2018-12-13 | 2019-12-13 | 图像解码装置和方法、以及图像编码装置和方法 |
EP19896609.5A EP3896972A4 (en) | 2018-12-13 | 2019-12-13 | IMAGE DECODING DEVICE, IMAGE DECODING METHOD, AND IMAGE DECODING PROGRAM |
CN202310789243.7A CN116582667A (zh) | 2018-12-13 | 2019-12-13 | 图像解码装置和方法、以及图像编码装置和方法 |
MX2021003506A MX2021003506A (es) | 2018-12-13 | 2019-12-13 | Dispositivo de decodificacion de imagenes, metodo de decodificacion de imagenes y programa de decodificacion de imagenes. |
US17/210,629 US11563935B2 (en) | 2018-12-13 | 2021-03-24 | Image decoding device, image decoding method, and image decoding program |
ZA2021/02033A ZA202102033B (en) | 2018-12-13 | 2021-03-25 | Image decoding device, image decoding method, and image decoding program |
US18/066,030 US11758129B2 (en) | 2018-12-13 | 2022-12-14 | Image decoding device, image decoding method, and image decoding program |
US18/358,189 US20230370584A1 (en) | 2018-12-13 | 2023-07-25 | Image decoding device, image decoding method, and image decoding program |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2018-233432 | 2018-12-13 | ||
JP2018233432 | 2018-12-13 | ||
JP2019171782A JP6933235B2 (ja) | 2018-12-13 | 2019-09-20 | 画像復号装置、画像復号方法、及び画像復号プログラム |
JP2019-171782 | 2019-09-20 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/210,629 Continuation US11563935B2 (en) | 2018-12-13 | 2021-03-24 | Image decoding device, image decoding method, and image decoding program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020122224A1 true WO2020122224A1 (ja) | 2020-06-18 |
Family
ID=71075651
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2019/048855 WO2020122224A1 (ja) | 2018-12-13 | 2019-12-13 | 画像復号装置、画像復号方法、及び画像復号プログラム |
Country Status (10)
Country | Link |
---|---|
US (2) | US11758129B2 (ja) |
EP (1) | EP3896972A4 (ja) |
JP (1) | JP7318686B2 (ja) |
KR (1) | KR20230156809A (ja) |
CN (4) | CN116582667A (ja) |
CL (1) | CL2021001412A1 (ja) |
MX (1) | MX2021003506A (ja) |
RU (1) | RU2770794C1 (ja) |
WO (1) | WO2020122224A1 (ja) |
ZA (1) | ZA202102033B (ja) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH10276439A (ja) | 1997-03-28 | 1998-10-13 | Sharp Corp | 領域統合が可能な動き補償フレーム間予測方式を用いた動画像符号化・復号化装置 |
Family Cites Families (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
SG194746A1 (en) * | 2011-05-31 | 2013-12-30 | Kaba Gmbh | Image encoding method, image encoding device, image decoding method, image decoding device, and image encoding/decoding device |
JP5786498B2 (ja) * | 2011-06-30 | 2015-09-30 | 株式会社Jvcケンウッド | 画像符号化装置、画像符号化方法及び画像符号化プログラム |
KR20150088909A (ko) * | 2011-06-30 | 2015-08-03 | 가부시키가이샤 제이브이씨 켄우드 | 화상 부호화 장치, 화상 부호화 방법, 화상 부호화 프로그램, 화상 복호 장치, 화상 복호 방법 및 화상 복호 프로그램 |
PL2773112T3 (pl) * | 2011-10-27 | 2019-01-31 | Sun Patent Trust | Sposób kodowania obrazów, sposób dekodowania obrazów, urządzenie do kodowania obrazów oraz urządzenie do dekodowania obrazów |
JP5383958B2 (ja) | 2011-10-28 | 2014-01-08 | パナソニック株式会社 | 復号方法および復号装置 |
CN108235032B (zh) * | 2012-01-18 | 2022-01-07 | Jvc 建伍株式会社 | 动图像解码装置以及动图像解码方法 |
US9426463B2 (en) * | 2012-02-08 | 2016-08-23 | Qualcomm Incorporated | Restriction of prediction units in B slices to uni-directional inter prediction |
US9325991B2 (en) * | 2012-04-11 | 2016-04-26 | Qualcomm Incorporated | Motion vector rounding |
JP5633597B2 (ja) | 2012-04-12 | 2014-12-03 | 株式会社Jvcケンウッド | 動画像復号装置、動画像復号方法、動画像復号プログラム、受信装置、受信方法及び受信プログラム |
WO2014005503A1 (en) * | 2012-07-02 | 2014-01-09 | Mediatek Inc. | Method and apparatus of inter-view candidate derivation in 3d video coding |
AU2013285749B2 (en) | 2012-07-02 | 2016-06-16 | Samsung Electronics Co., Ltd. | Method and apparatus for predicting motion vector for coding video or decoding video |
US9325990B2 (en) | 2012-07-09 | 2016-04-26 | Qualcomm Incorporated | Temporal motion vector prediction in video coding extensions |
US10616607B2 (en) | 2013-02-25 | 2020-04-07 | Lg Electronics Inc. | Method for encoding video of multi-layer structure supporting scalability and method for decoding same and apparatus therefor |
JP2015089078A (ja) * | 2013-11-01 | 2015-05-07 | ソニー株式会社 | 画像処理装置および方法 |
KR102329126B1 (ko) | 2014-03-14 | 2021-11-19 | 삼성전자주식회사 | 인터 레이어 비디오의 복호화 및 부호화를 위한 머지 후보 리스트 구성 방법 및 장치 |
KR20170066457A (ko) * | 2014-09-26 | 2017-06-14 | 브이아이디 스케일, 인크. | 시간적 블록 벡터 예측을 갖는 인트라 블록 카피 코딩 |
WO2017039117A1 (ko) * | 2015-08-30 | 2017-03-09 | 엘지전자(주) | 영상의 부호화/복호화 방법 및 이를 위한 장치 |
CN110140355B (zh) | 2016-12-27 | 2022-03-08 | 联发科技股份有限公司 | 用于视频编解码的双向模板运动向量微调的方法及装置 |
CN114205620B (zh) | 2018-02-28 | 2023-07-25 | 三星电子株式会社 | 编码方法及其装置以及解码方法及其装置 |
EP3855738A4 (en) | 2018-09-17 | 2022-07-20 | Samsung Electronics Co., Ltd. | METHOD FOR ENCODING AND DECODING MOTION INFORMATION AND APPARATUS FOR ENCODING AND DECODING MOTION INFORMATION |
WO2020256468A1 (ko) | 2019-06-21 | 2020-12-24 | 삼성전자 주식회사 | 주변 움직임 정보를 이용하여 움직임 정보를 부호화 및 복호화하는 장치, 및 방법 |
2019
- 2019-12-13 RU RU2021108001A patent/RU2770794C1/ru active
- 2019-12-13 KR KR1020237038090A patent/KR20230156809A/ko not_active Application Discontinuation
- 2019-12-13 CN CN202310789243.7A patent/CN116582667A/zh active Pending
- 2019-12-13 WO PCT/JP2019/048855 patent/WO2020122224A1/ja active Application Filing
- 2019-12-13 CN CN202310789721.4A patent/CN116600108A/zh active Pending
- 2019-12-13 MX MX2021003506A patent/MX2021003506A/es unknown
- 2019-12-13 EP EP19896609.5A patent/EP3896972A4/en active Pending
- 2019-12-13 CN CN202110325100.1A patent/CN112887720B/zh active Active
- 2019-12-13 CN CN202310789241.8A patent/CN116582666A/zh active Pending

2021
- 2021-03-25 ZA ZA2021/02033A patent/ZA202102033B/en unknown
- 2021-05-28 CL CL2021001412A patent/CL2021001412A1/es unknown
- 2021-08-05 JP JP2021128928A patent/JP7318686B2/ja active Active

2022
- 2022-12-14 US US18/066,030 patent/US11758129B2/en active Active

2023
- 2023-07-25 US US18/358,189 patent/US20230370584A1/en active Pending
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH10276439A (ja) | 1997-03-28 | 1998-10-13 | Sharp Corp | 領域統合が可能な動き補償フレーム間予測方式を用いた動画像符号化・復号化装置 |
Non-Patent Citations (5)
Title |
---|
BROSS, BENJAMIN ET AL.: "Versatile Video Coding (Draft 3)", Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 12th Meeting, Macao, CN, 10 December 2018, pages 102-104, XP030251957 * |
CHEN, HUANBANG ET AL.: "CE4: Symmetrical mode for bi-prediction (Test 3.2)", Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 11th Meeting, Ljubljana, SI, 10 July 2018, pages 1-2, XP030199229 * |
CHEN, XU ET AL.: "CE4: Enhanced Merge Mode (Test 4.2.15)", Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 11th Meeting, Ljubljana, SI, 10 July 2018, pages 1-8, XP030199235 * |
FUKUSHIMA, SHIGERU ET AL.: "Merge based mvd transmission", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG 16 WP3 and ISO/IEC JTC1/SC29/WG11, 6th Meeting, Torino, IT, 14 July 2011, pages 1-8, XP030228768 * |
JEONG, SEUNGSOO ET AL.: "Proposed WD for CE4 Ultimate motion vector expression (Test 4.5.4)", Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 12th Meeting, Macao, CN, 11 October 2018, pages 1-12, XP030195378 * |
Also Published As
Publication number | Publication date |
---|---|
KR20230156809A (ko) | 2023-11-14 |
CN112887720A (zh) | 2021-06-01 |
CL2021001412A1 (es) | 2022-01-14 |
CN116582666A (zh) | 2023-08-11 |
JP7318686B2 (ja) | 2023-08-01 |
CN116600108A (zh) | 2023-08-15 |
ZA202102033B (en) | 2022-09-28 |
RU2770794C1 (ru) | 2022-04-21 |
US11758129B2 (en) | 2023-09-12 |
US20230370584A1 (en) | 2023-11-16 |
JP2021192513A (ja) | 2021-12-16 |
CN116582667A (zh) | 2023-08-11 |
EP3896972A4 (en) | 2022-04-20 |
RU2022110107A (ru) | 2022-05-06 |
MX2021003506A (es) | 2021-05-27 |
US20230122817A1 (en) | 2023-04-20 |
EP3896972A1 (en) | 2021-10-20 |
CN112887720B (zh) | 2022-11-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP2024051143A (ja) | 画像復号装置、画像復号方法、及び画像復号プログラム | |
JP5843032B2 (ja) | 動画像復号装置、動画像復号方法、及び動画像復号プログラム、並びに、受信装置、受信方法、及び受信プログラム | |
TW202008783A (zh) | 多通道視訊處理系統的方法與裝置 | |
WO2020122224A1 (ja) | 画像復号装置、画像復号方法、及び画像復号プログラム | |
JP5725119B2 (ja) | 動画像復号装置、動画像復号方法、及び動画像復号プログラム、並びに、受信装置、受信方法、及び受信プログラム | |
JP2020113966A (ja) | 画像符号化装置、画像符号化方法、及び画像符号化プログラム | |
RU2789732C2 (ru) | Устройство декодирования изображения и способ декодирования изображения | |
JP2013121163A (ja) | 画像符号化装置、画像符号化方法及び画像符号化プログラム | |
JP2013121164A (ja) | 画像復号装置、画像復号方法及び画像復号プログラム | |
JP2013121165A (ja) | 画像符号化装置、画像符号化方法及び画像符号化プログラム |
Legal Events

Code | Title | Description
---|---|---
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19896609; Country of ref document: EP; Kind code of ref document: A1
DPE1 | Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101) |
ENP | Entry into the national phase | Ref document number: 20217003083; Country of ref document: KR; Kind code of ref document: A
WWE | Wipo information: entry into national phase | Ref document number: 2101001619; Country of ref document: TH
REG | Reference to national code | Ref country code: BR; Ref legal event code: B01A; Ref document number: 112021005875
ENP | Entry into the national phase | Ref document number: 3119641; Country of ref document: CA
NENP | Non-entry into the national phase | Ref country code: DE
ENP | Entry into the national phase | Ref document number: 2019896609; Country of ref document: EP; Effective date: 20210713
ENP | Entry into the national phase | Ref document number: 112021005875; Country of ref document: BR; Kind code of ref document: A2; Effective date: 20210326