WO2021031225A1 - Motion vector derivation method and apparatus, and electronic device - Google Patents
- Publication number: WO2021031225A1 (application PCT/CN2019/102748)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- motion vector
- block
- current block
- coding unit
- filtered
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/117—Filters, e.g. for pre-processing or post-processing
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
- H04N19/109—Selection of coding mode or of prediction mode among a plurality of temporal predictive coding modes
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
- H04N19/513—Processing of motion vectors
- H04N19/80—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
- H04N19/137—Motion inside a coding unit, e.g. average field, frame or block difference
- H04N19/159—Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
Definitions
- This specification relates to the technical field of video coding and decoding, and in particular to a method, device and electronic equipment for deriving a motion vector.
- Inter-frame prediction technology can be used to reduce the redundancy between video frames by exploiting the temporal correlation between them.
- The Skip/Direct mode derives the motion information of the current coding unit directly by borrowing the motion information of already-encoded neighboring blocks in the spatial domain and of encoded pictures in the temporal domain. Although the Skip/Direct mode therefore improves coding efficiency to a certain extent, deriving motion information only from already-coded neighboring blocks or pictures reduces the accuracy of inter-frame prediction.
- Moreover, the Skip/Direct mode ultimately derives one set of motion information for the entire current coding unit, which further reduces the accuracy of inter-frame prediction and affects coding efficiency.
- The object of the present invention is to provide a motion vector derivation method, apparatus and electronic device, to address the problem that existing methods derive motion information only from already-coded neighboring blocks or pictures, which reduces the accuracy of inter-frame prediction and affects coding efficiency.
- An embodiment of this specification provides a method for deriving a motion vector, which includes:
- according to a predetermined inter prediction mode, determining reference motion vectors of the current block in its four side directions by using the filtered spatial motion vectors and temporal motion vector predictions of the neighboring blocks and the coordinate position of the current block in the coding unit;
- An embodiment of this specification provides a motion vector derivation device, the device includes:
- an acquisition module, configured to acquire the spatial motion vectors and temporal motion vector predictions of neighboring blocks of the coding unit in a predetermined direction;
- a filtering module, configured to perform a filtering operation on the spatial motion vectors and temporal motion vector predictions to obtain the filtered spatial motion vectors and temporal motion vector predictions of the neighboring blocks;
- a determining module, configured to determine, according to a predetermined inter prediction mode, reference motion vectors of the current block in its four side directions by using the filtered spatial motion vectors and temporal motion vector predictions of the neighboring blocks and the coordinate position of the current block in the coding unit;
- a derivation module, configured to derive the motion vector of the current block according to the reference motion vectors and the coordinate position of the current block in the coding unit.
- An electronic device provided by an embodiment of this specification includes a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor implements the aforementioned motion vector derivation method when executing the program.
- The present invention obtains the spatial motion vectors and temporal motion vector predictions of the neighboring blocks of the coding unit in a predetermined direction; performs a filtering operation on the spatial motion vectors and temporal motion vector predictions to obtain the filtered spatial motion vectors and temporal motion vector predictions of the neighboring blocks; determines, according to a predetermined inter prediction mode, the reference motion vectors of the current block in its four side directions by using the filtered spatial motion vectors and temporal motion vector predictions and the coordinate position of the current block in the coding unit; and derives the motion vector of the current block according to the reference motion vectors and that coordinate position. Based on this solution, the accuracy of inter-frame prediction can be improved, and coding efficiency can be further improved.
- FIG. 1 is a schematic flowchart of a method for deriving a motion vector provided by an embodiment of this specification;
- FIG. 2 is a schematic diagram of the position structure of neighboring blocks for obtaining spatial motion vectors and temporal motion vector predictions according to an embodiment of this specification;
- FIG. 3 is a schematic diagram of the principle of using five inter prediction modes to determine the reference motion vectors of the current block in its four side directions according to an embodiment of this specification;
- FIG. 4 is a schematic structural diagram of a motion vector derivation device provided by an embodiment of this specification.
- Commonly used methods of compressing video data include HEVC, AVS and H.264/MPEG-4 AVC. In these methods, a picture is divided into multiple macroblocks, and a prediction block is generated for each macroblock using inter-frame or intra-frame prediction in order to encode it.
- The difference between the original block and the prediction block is transformed to generate a transform block, and the transform block is quantized using a quantization parameter and a quantization matrix.
- The quantized coefficients of the quantized block are scanned in a predetermined scanning order and then entropy coded.
- The quantization parameter is adjusted for each macroblock and is encoded using the previous quantization parameter.
- The motion vector of the current coded block can be derived with reference to the motion vectors of neighboring blocks in the temporal and spatial domains, saving the bit-rate overhead of transmitting motion vectors directly and improving coding efficiency.
- The current coding unit can be coded in multiple modes, such as Skip/Direct mode and merge mode. These modes borrow motion information from already-coded neighboring blocks in the spatial domain and from coded pictures in the temporal domain, and derive the motion information of the current coding unit directly from it instead of obtaining it through motion estimation, which reduces the accuracy of inter-frame prediction. In addition, these existing modes ultimately derive one set of motion information for the entire current coding unit and cannot be accurate down to the sub-blocks within each coding unit, which further reduces the accuracy of inter-frame prediction and thereby affects coding efficiency.
- FIG. 1 is a schematic flowchart of a method for deriving a motion vector according to an embodiment of the present invention.
- the method may specifically include the following steps:
- step S110: the spatial motion vectors and temporal motion vector predictions of neighboring blocks of the coding unit in a predetermined direction are obtained.
- A coding unit arises from dividing a frame of an image into several non-overlapping rectangular blocks of a certain size. Each such block is a largest coding unit, and each largest coding unit can be divided into coding units of sizes from 64*64 down to 8*8; each coding unit has its own prediction mode information, coefficients, and so on.
- obtaining the spatial motion vector of the neighboring block of the coding unit in a predetermined direction may include the following process:
- Scale the motion vectors of the adjacent blocks on the left and upper sides of the coding unit to obtain their spatial motion vectors. Specifically, scale the forward motion vector of an adjacent block to the first frame of the forward reference frame list, scale its backward motion vector to the first frame of the backward reference frame list, and use the scaled motion vectors as the spatial motion vector of the adjacent block.
- Neighboring blocks are coding blocks adjacent to the coding unit in the four directions of left, right, up and down.
- A neighboring block does not belong to the interior of the current coding unit, and its size is the same as the size of the current block in the coding unit.
- the neighboring block from which the spatial motion vector is derived can be each neighboring block on the left and upper side of the current coding unit.
- The spatial motion vector of a neighboring block is obtained by scaling the motion vector of that neighboring block; that is, the spatial motion vector is the result of scaling the motion vector.
- The motion vector here refers to the motion vector of the already-encoded neighboring blocks on the left and above; the motion vector can also be represented as motion information.
- The specific scaling process may include scaling the forward motion vector of the adjacent block to the first frame of the forward reference frame list and scaling the backward motion vector of the adjacent block to the first frame of the backward reference frame list.
- The spatial motion vector of a neighboring block is then obtained from the scaling results of the forward and backward motion vectors. For a neighboring block whose motion vector cannot be obtained, the nearest neighboring block among those whose spatial motion vector is available is found, and its scaled spatial motion vector is used as the unavailable block's spatial motion vector.
- Video coding technology includes two reference frame lists, namely the forward reference frame list and the backward reference frame list. These two lists store encoded and reconstructed pictures in a certain order; these pictures may be selected as reference frames by the inter prediction mode of the current coding unit.
- the first frame may refer to an image whose reference frame index is 0 in the reference frame list.
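To make the scaling step concrete, here is a minimal Python sketch. The linear distance-ratio formula and the function name are assumptions for illustration only; the text above does not give the exact scaling arithmetic, and real codecs add rounding offsets and clipping.

```python
# Hypothetical sketch of scaling a neighbouring block's motion vector so
# that it points at the first frame (reference index 0) of a reference
# list.  The simple distance-ratio formula is an assumption, not the
# patent's exact arithmetic.

def scale_mv(mv, dist_cur, dist_target):
    """Scale (mvx, mvy) pointing at a reference dist_cur pictures away
    so that it points at one dist_target pictures away."""
    if dist_cur == 0:
        return mv
    return (mv[0] * dist_target // dist_cur,
            mv[1] * dist_target // dist_cur)

# A forward MV pointing 4 pictures back, rescaled to a list-0 first
# frame 2 pictures back: the vector is halved.
print(scale_mv((8, -4), 4, 2))
```

The same helper would be applied to forward and backward motion vectors independently, against their respective reference frame lists.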
- obtaining the temporal motion vector prediction of neighboring blocks of a coding unit in a predetermined direction may include the following process:
- the neighboring block from which the time domain motion vector prediction is derived may be each neighboring block on the right and the lower side of the current coding unit.
- The temporal motion vector prediction of a neighboring block is obtained from blocks in other pictures in the temporal domain.
- The motion information of the block at the same coordinate position as the adjacent block, in another encoded picture, is obtained, and the scaled result of that motion information is used as the temporal motion vector prediction of the adjacent block.
- The block at the same coordinate position as the current neighboring block in another encoded picture in the temporal domain is called the co-located block.
- Temporal motion vector prediction thus derives the motion information of the neighboring block by scaling the motion information of the co-located block; the scaled result is the block's own temporal motion vector prediction.
- Figure 2 shows a schematic diagram of the positions of the neighboring blocks used to obtain the spatial motion vectors and temporal motion vector predictions. The left side of Figure 2 shows the positional relationship between the coding unit and the neighboring blocks used to obtain the spatial motion vectors: these are the neighboring blocks on the left and upper sides of the coding unit.
- The right side of Figure 2 shows the positional relationship between the coding unit and the neighboring blocks used to obtain the temporal motion vector predictions: these are the neighboring blocks on the right and lower sides of the coding unit.
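The co-located lookup and scaling can be sketched as below. The dictionary-based motion field and the distance-ratio scaling are illustrative assumptions; the text does not specify the exact scaling arithmetic.

```python
# Hypothetical sketch of temporal motion vector prediction for a right or
# lower neighbour: look up the co-located block (same coordinates in an
# already coded picture) and scale its motion information.

def temporal_mvp(colocated_mvs, pos, col_dist, cur_dist):
    """colocated_mvs: {(x, y): (mvx, mvy)} motion field of the coded
    picture; pos: coordinates of the neighbouring block."""
    mv = colocated_mvs.get(pos)
    if mv is None or col_dist == 0:
        return (0, 0)  # no usable co-located motion
    return (mv[0] * cur_dist // col_dist,
            mv[1] * cur_dist // col_dist)

# The co-located block at (5, 3) moved (6, 3) over 3 pictures; the
# current picture is 1 picture from its reference.
print(temporal_mvp({(5, 3): (6, 3)}, (5, 3), 3, 1))
```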
- step S120: a filtering operation is performed on the spatial motion vectors and temporal motion vector predictions to obtain the filtered spatial motion vectors and temporal motion vector predictions of the neighboring blocks.
- filtering the spatial motion vector and temporal motion vector prediction of the neighboring blocks derived in step S110 may specifically include the following two steps:
- the first step is to perform a filling operation on both ends of the spatial motion vector and time domain motion vector prediction.
- the specific filling operation includes the following:
- MvR and MvB respectively represent the temporal motion vector predictions of the right and lower adjacent blocks before filtering;
- MvL and MvT respectively represent the spatial motion vectors of the left and upper adjacent blocks before filtering;
- i and j represent column coordinates in units of sub-blocks;
- k and l represent row coordinates in units of sub-blocks;
- M and N represent the width and height of the coding unit in units of sub-blocks.
- During the filtering operation on the spatial motion vectors and temporal motion vector predictions of the adjacent blocks, motion vectors beyond the outermost adjacent blocks at the two ends may be needed, and the motion vectors of such blocks may not exist; therefore, the motion vector of the edge neighboring block is replicated as filling.
- the second step is to filter the spatial motion vector and temporal motion vector prediction according to the following formula:
- Mvt[x] = (3*MvT[i0] + 8*MvT[i1] + 10*MvT[i2] + 8*MvT[i3] + 3*MvT[i4]) >> 5
- Mvl[y] = (3*MvL[j0] + 8*MvL[j1] + 10*MvL[j2] + 8*MvL[j3] + 3*MvL[j4]) >> 5
- Mvb[x] = (3*MvB[i0] + 8*MvB[i1] + 10*MvB[i2] + 8*MvB[i3] + 3*MvB[i4]) >> 5
- Mvr[y] = (3*MvR[j0] + 8*MvR[j1] + 10*MvR[j2] + 8*MvR[j3] + 3*MvR[j4]) >> 5
- MvR and MvB respectively represent the temporal motion vector predictions of the right and lower adjacent blocks before filtering;
- Mvr and Mvb respectively represent the temporal motion vector predictions of the right and lower adjacent blocks after filtering;
- MvL and MvT respectively represent the spatial motion vectors of the left and upper adjacent blocks before filtering;
- Mvl and Mvt respectively represent the spatial motion vectors of the left and upper adjacent blocks after filtering;
- x and y represent the coordinates of the current block in the coding unit in units of sub-blocks;
- y0 represents the first sub-block row in the current coding unit;
- i and j represent column coordinates in units of sub-blocks;
- M and N represent the width and height of the coding unit in units of sub-blocks.
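The padding and the {3, 8, 10, 8, 3} kernel can be sketched directly in Python. Filtering each motion-vector component independently is an assumption of this sketch; the weights sum to 32, which is why the >> 5 normalization in the formulas leaves a constant field unchanged.

```python
# Sketch of the 5-tap motion-vector smoothing above: pad one row of
# neighbour MV components by replicating the outermost entries, then
# apply the {3, 8, 10, 8, 3} kernel with >> 5 (the weights sum to 32).

def filter_mv_row(components):
    """Filter one component of a row of neighbour MVs (e.g. MvT -> Mvt)."""
    padded = [components[0]] * 2 + list(components) + [components[-1]] * 2
    out = []
    for i in range(len(components)):
        w = padded[i:i + 5]  # 5-tap window centred on position i
        out.append((3*w[0] + 8*w[1] + 10*w[2] + 8*w[3] + 3*w[4]) >> 5)
    return out

# A constant motion field passes through unchanged (32 * 32 >> 5 == 32).
print(filter_mv_row([32, 32, 32, 32]))
```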
- step S130: according to a predetermined inter prediction mode, the reference motion vectors of the current block in its four side directions are determined using the filtered spatial motion vectors and temporal motion vector predictions of the neighboring blocks and the coordinate position of the current block in the coding unit.
- The embodiments of this specification propose five new inter prediction modes: a first, a second, a third, a fourth and a fifth inter prediction mode.
- These five inter prediction modes can be called the SLTCRB mode, SLCB mode, STCR mode, SLT mode and CRB mode, respectively.
- Figure 3 shows a schematic diagram of the principle of using the five inter prediction modes to determine the reference motion vectors of the current block in its four side directions in the embodiments of this specification.
- Referring to Figure 3, determining the reference motion vectors of the current block in its four side directions may specifically include the following:
- In the first inter prediction mode (SLTCRB mode), the left and right neighboring blocks in the abscissa direction of the current block and the upper and lower neighboring blocks in its ordinate direction are selected; the filtered spatial motion vectors of the left and upper neighboring blocks and the filtered temporal motion vector predictions of the right and lower neighboring blocks are respectively used as the reference motion vectors of the current block in its four side directions;
- In the second inter prediction mode (SLCB mode), the left neighboring block in the abscissa direction of the current block and the lower neighboring block in its ordinate direction are selected; the filtered spatial motion vector of the left neighboring block and the filtered temporal motion vector prediction of the lower neighboring block are respectively used as the reference motion vectors of the current block in the left and lower directions; and the filtered temporal motion vector prediction of the coding unit's lower-right neighboring block and the filtered spatial motion vector of the uppermost adjacent block on the left side of the coding unit are respectively used as the reference motion vectors of the current block in the right and upper directions;
- In the third inter prediction mode (STCR mode), the right neighboring block in the abscissa direction of the current block and the upper neighboring block in its ordinate direction are selected; the filtered temporal motion vector prediction of the right neighboring block and the filtered spatial motion vector of the upper neighboring block are respectively used as the reference motion vectors of the current block in the right and upper directions; and the filtered spatial motion vector and the filtered temporal motion vector prediction of the lowermost adjacent block on the right side of the coding unit are respectively used as the reference motion vectors of the current block in the left and lower directions;
- In the fourth inter prediction mode (SLT mode), the left neighboring block in the abscissa direction of the current block and the upper neighboring block in its ordinate direction are selected; the filtered spatial motion vectors of the left and upper neighboring blocks are respectively used as the reference motion vectors of the current block in the left and upper directions; and the filtered spatial motion vector of the uppermost right neighboring block of the coding unit and the filtered spatial motion vector of the lowermost adjacent block on the left side of the coding unit are respectively used as the reference motion vectors of the current block in the right and lower directions;
- In the fifth inter prediction mode (CRB mode), the right neighboring block in the abscissa direction of the current block and the lower neighboring block in its ordinate direction are selected; the filtered temporal motion vector predictions of the right and lower neighboring blocks are respectively used as the reference motion vectors of the current block in the right and lower directions; and the filtered temporal motion vector prediction of the lowermost left neighboring block of the coding unit and the filtered temporal motion vector prediction of the uppermost adjacent block on the right side of the coding unit are respectively used as the reference motion vectors of the current block in the left and upper directions.
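As a compact summary of the five modes, the table below records which kind of neighbour supplies each side's reference motion vector ("spatial" means a filtered spatial MV, "temporal" a filtered temporal MVP). The parenthesised corner sources follow the descriptions above; the dictionary form and labels are only illustrative.

```python
# Illustrative summary of the five proposed modes: the source of each
# side's reference motion vector, following the mode descriptions above.
MODE_SOURCES = {
    "SLTCRB": {"left": "spatial", "top": "spatial",
               "right": "temporal", "bottom": "temporal"},
    "SLCB":   {"left": "spatial", "bottom": "temporal",
               "right": "temporal (CU lower-right)",
               "top": "spatial (uppermost left-side block)"},
    "STCR":   {"right": "temporal", "top": "spatial",
               "left": "spatial (lowermost right-side block)",
               "bottom": "temporal (lowermost right-side block)"},
    "SLT":    {"left": "spatial", "top": "spatial",
               "right": "spatial (uppermost right block)",
               "bottom": "spatial (lowermost left-side block)"},
    "CRB":    {"right": "temporal", "bottom": "temporal",
               "left": "temporal (lowermost left block)",
               "top": "temporal (uppermost right-side block)"},
}

for name, sources in MODE_SOURCES.items():
    print(name, sources)
```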
- All five new inter prediction modes use bilinear interpolation to combine the spatial and temporal motion information in the neighborhood of the current coding unit in order to derive a reference motion vector for each sub-block in the current coding unit.
- Therefore, in practical applications, taking the current block at coordinates (x, y) as an example, for each of the above modes four motion vectors can be selected as reference motion vectors in the left, right, up and down directions according to the following formula:
- vl, vr, vt and vb respectively represent the reference motion vectors in the left, right, up and down directions of the current block in the coding unit;
- Mvr, Mvb respectively represent the temporal motion vector prediction of the right and lower adjacent blocks after filtering;
- Mvl and Mvt respectively represent the spatial motion vectors of the adjacent blocks on the left and upper side after filtering;
- x and y represent the coordinates of the current block in the coding unit in units of sub-blocks;
- M and N indicate the width and height of the coding unit in units of sub-blocks.
- step S140: the motion vector of the current block is derived according to the reference motion vectors and the coordinate position of the current block in the coding unit.
- the following formula may be used to derive the motion vector of the current block according to the reference motion vector determined in step S130 in combination with the coordinate position of the current block in the coding unit, specifically:
- vl, vr, vt and vb respectively represent the reference motion vectors of the current block in the left, right, up and down directions;
- x and y represent the coordinates of the current block in the coding unit in units of sub-blocks;
- M and N represent the width and height of the coding unit in units of sub-blocks;
- vh represents the horizontal motion vector of the current block;
- vv represents the vertical motion vector of the current block;
- v[x][y] represents the motion vector of the current block.
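The derivation formula itself is not reproduced in this text, so the sketch below uses a standard planar-style bilinear interpolation as an assumed stand-in: the horizontal estimate blends vl/vr by the sub-block's x position, the vertical estimate blends vt/vb by its y position, and the two are averaged. The function name and the exact weighting are assumptions, not the patent's equations.

```python
# Assumed planar-style stand-in for the sub-block MV derivation: blend
# the left/right reference MVs by x, the top/bottom reference MVs by y,
# then average the two directional estimates.

def derive_subblock_mv(vl, vr, vt, vb, x, y, M, N):
    # horizontal estimate v_h from the left/right reference MVs
    vh = tuple(((M - 1 - x) * a + (x + 1) * b) // M for a, b in zip(vl, vr))
    # vertical estimate v_v from the top/bottom reference MVs
    vv = tuple(((N - 1 - y) * a + (y + 1) * b) // N for a, b in zip(vt, vb))
    # v[x][y]: average of the horizontal and vertical estimates
    return tuple((h + v) // 2 for h, v in zip(vh, vv))

# Sub-block (1, 1) in a 4x4-sub-block CU, with motion growing to the
# right and downwards, receives an intermediate vector.
print(derive_subblock_mv((0, 0), (8, 0), (0, 0), (0, 8), 1, 1, 4, 4))
```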
- the above inter prediction mode can also be marked in the code stream.
- The specific marking process may include the following:
- The five new inter prediction modes proposed in this specification form a mode candidate list.
- A 1-bit flag is transmitted in the code stream to identify whether one of the above inter prediction modes is selected.
- The decoder decodes the 1-bit flag to determine whether one of the above inter prediction modes is used. If so, the decoder also decodes an index, which determines which of the five new inter prediction modes is selected.
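The decision logic of the flag-plus-index signalling can be sketched as follows. Entropy coding is omitted; the symbol list is a hypothetical stand-in for already-decoded syntax elements, and the mode ordering is taken from the candidate list above.

```python
# Sketch of the signalling described above: a 1-bit flag says whether
# one of the five new modes is used; if so, an index selects the mode
# from the candidate list.  The symbol stream is a stand-in for decoded
# syntax elements, not a real entropy decoder.

MODE_LIST = ["SLTCRB", "SLCB", "STCR", "SLT", "CRB"]

def parse_mode(symbols):
    stream = iter(symbols)
    if next(stream) == 0:            # flag == 0: new modes not selected
        return None
    return MODE_LIST[next(stream)]   # index into the 5-mode list

print(parse_mode([1, 2]))  # flag set, index 2 selects the STCR mode
```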
- the embodiment of this specification also provides a process for making a rate-distortion optimization decision on each coding unit at the encoding end, which specifically includes the following content:
- step S110: for each inter prediction direction, derive the spatial motion vector of each adjacent block on the left of and above the current coding unit and the temporal motion vector prediction of each adjacent block on the right of and below the current coding unit.
- step S120: filter the spatial motion vectors and the temporal motion vector predictions.
- step S130 to step S140: for each of the new inter prediction modes, derive the motion vector of each sub-block in the current coding unit and derive the prediction block for each sub-block according to its motion vector.
- Fig. 4 shows a motion vector derivation device provided by an embodiment of this specification.
- the device mainly includes:
- the obtaining module 401 is configured to obtain the spatial motion vectors and temporal motion vector predictions of the neighboring blocks of the coding unit in a predetermined direction;
- the filtering module 402 is configured to perform a filtering operation on the spatial motion vectors and temporal motion vector predictions to obtain the filtered spatial motion vectors and temporal motion vector predictions of the neighboring blocks;
- the determining module 403 is configured to determine, according to a predetermined inter prediction mode, the reference motion vectors of the current block in its four side directions by using the filtered spatial motion vectors and temporal motion vector predictions of the neighboring blocks and the coordinate position of the current block in the coding unit;
- the derivation module 404 is configured to derive the motion vector of the current block according to the reference motion vectors and the coordinate position of the current block in the coding unit.
- the obtaining module 401 is further configured to: scale the forward motion vector of the neighboring block to the first frame of the forward reference frame list, scale the backward motion vector of the neighboring block to the first frame of the backward reference frame list, and use the scaled motion vector as the spatial motion vector of the neighboring block.
- the obtaining module 401 is further configured to: obtain the temporal motion vector predictions of the neighboring blocks located to the right of and below the coding unit; specifically, obtain, from the first frame of the forward reference frame list or the first frame of the backward reference frame list, the motion vector of the coding block at the same coordinate position as the neighboring block, and scale that motion vector to obtain the temporal motion vector prediction of the neighboring block.
- the filtering module 402 is specifically configured to: pad both ends of the spatial motion vectors and temporal motion vector predictions, and filter the spatial motion vectors and temporal motion vector predictions according to the following formulas:
- Mvt[x] = (3*MvT[i0] + 8*MvT[i1] + 10*MvT[i2] + 8*MvT[i3] + 3*MvT[i4]) >> 5
- Mvl[y] = (3*MvL[j0] + 8*MvL[j1] + 10*MvL[j2] + 8*MvL[j3] + 3*MvL[j4]) >> 5
- Mvb[x] = (3*MvB[i0] + 8*MvB[i1] + 10*MvB[i2] + 8*MvB[i3] + 3*MvB[i4]) >> 5
- Mvr[y] = (3*MvR[j0] + 8*MvR[j1] + 10*MvR[j2] + 8*MvR[j3] + 3*MvR[j4]) >> 5
- where 0 ≤ x < M and 0 ≤ y < N;
- MvR and MvB respectively represent the temporal motion vector predictions of the right and lower neighboring blocks before filtering;
- Mvr and Mvb respectively represent the temporal motion vector predictions of the right and lower neighboring blocks after filtering;
- MvL and MvT respectively represent the spatial motion vectors of the left and upper neighboring blocks before filtering;
- Mvl and Mvt respectively represent the spatial motion vectors of the left and upper neighboring blocks after filtering;
- x and y represent the coordinates of the current block within the coding unit, in units of sub-blocks;
- i and j represent column coordinates in units of sub-blocks;
- M and N represent the width and height of the coding unit in units of sub-blocks.
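Interpreting i0..i4 (and j0..j4) as the five padded positions centered on each sub-block, the 5-tap filter can be sketched as follows. This is a sketch, not the normative implementation: it operates on one vector component at a time, and the assumption that padding replicates the edge values is ours, not stated in the patent.

```python
def filter_mv_line(mv):
    # 5-tap filter with weights (3, 8, 10, 8, 3); the weights sum to
    # 32, so the right shift by 5 normalizes the weighted sum.
    weights = (3, 8, 10, 8, 3)
    # Pad both ends by replicating the edge values (two on each side).
    padded = [mv[0], mv[0]] + list(mv) + [mv[-1], mv[-1]]
    out = []
    for x in range(len(mv)):
        acc = sum(w * padded[x + k] for k, w in enumerate(weights))
        out.append(acc >> 5)
    return out
```

A constant line of values passes through unchanged, since the filter is normalized; discontinuities in the neighboring-block motion field are smoothed.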
- the inter prediction modes include a first inter prediction mode, a second inter prediction mode, a third inter prediction mode, a fourth inter prediction mode, and a fifth inter prediction mode, and the determining module 403 is specifically configured to:
- when the first inter prediction mode is used, select the left and right neighboring blocks in the abscissa direction of the current block and the upper and lower neighboring blocks in the ordinate direction of the current block, and use the filtered spatial motion vectors of the left and upper neighboring blocks and the filtered temporal motion vector predictions of the right and lower neighboring blocks as the reference motion vectors of the current block in its four side directions, respectively;
- when the second inter prediction mode is used, select the left neighboring block in the abscissa direction and the lower neighboring block in the ordinate direction of the current block, and use the filtered spatial motion vector of the left neighboring block and the filtered temporal motion vector prediction of the lower neighboring block as the reference motion vectors of the current block in the left and lower directions, respectively; and use the filtered temporal motion vector prediction of the rightmost neighboring block below the coding unit and the filtered spatial motion vector of the uppermost neighboring block to the left of the coding unit as the reference motion vectors of the current block in the right and upper directions, respectively;
- when the third inter prediction mode is used, select the right neighboring block in the abscissa direction and the upper neighboring block in the ordinate direction of the current block, and use the filtered temporal motion vector prediction of the right neighboring block and the filtered spatial motion vector of the upper neighboring block as the reference motion vectors of the current block in the right and upper directions, respectively; and use the filtered spatial motion vector of the leftmost neighboring block above the coding unit and the filtered temporal motion vector prediction of the lowermost neighboring block to the right of the coding unit as the reference motion vectors of the current block in the left and lower directions, respectively;
- when the fourth inter prediction mode is used, select the left neighboring block in the abscissa direction and the upper neighboring block in the ordinate direction of the current block, and use the filtered spatial motion vectors of the left and upper neighboring blocks as the reference motion vectors of the current block in the left and upper directions, respectively; and use the filtered spatial motion vector of the rightmost neighboring block above the coding unit and the filtered spatial motion vector of the lowermost neighboring block to the left of the coding unit as the reference motion vectors of the current block in the right and lower directions, respectively;
- when the fifth inter prediction mode is used, select the right neighboring block in the abscissa direction and the lower neighboring block in the ordinate direction of the current block, and use the filtered temporal motion vector predictions of the right and lower neighboring blocks as the reference motion vectors of the current block in the right and lower directions, respectively; and use the filtered temporal motion vector prediction of the leftmost neighboring block below the coding unit and the filtered temporal motion vector prediction of the uppermost neighboring block to the right of the coding unit as the reference motion vectors of the current block in the left and upper directions, respectively.
- the deriving module 404 is specifically configured to derive the motion vector of the current block according to the following formulas:
- v_h = ((M - x) * vl + x * vr) / M
- v_v = ((N - y) * vt + y * vb) / N
- v[x][y] = (v_h + v_v) / 2
- where vl, vr, vt, and vb respectively represent the reference motion vectors of the current block in the left, right, up, and down directions;
- x and y represent the coordinates of the current block within the coding unit, in units of sub-blocks;
- M and N represent the width and height of the coding unit in units of sub-blocks;
- v_h represents the horizontal motion vector of the current block;
- v_v represents the vertical motion vector of the current block;
- v[x][y] represents the motion vector of the current block.
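This bilinear combination of the four reference motion vectors can be sketched per vector component as follows. Integer (floor) division is our assumption; the patent writes plain division without specifying rounding.

```python
def derive_block_mv(vl, vr, vt, vb, x, y, M, N):
    # Horizontal interpolation between the left (vl) and right (vr)
    # reference motion vectors, weighted by the block's x position.
    v_h = ((M - x) * vl + x * vr) // M
    # Vertical interpolation between the top (vt) and bottom (vb)
    # reference motion vectors, weighted by the block's y position.
    v_v = ((N - y) * vt + y * vb) // N
    # The block's motion vector is the average of the two interpolations.
    return (v_h + v_v) // 2
```

A block in the middle of the coding unit thus receives roughly the mean of the four side references, while blocks near an edge follow that edge's reference more closely.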
- An embodiment of this specification also provides an electronic device including a memory, a processor, and a computer program stored in the memory and executable on the processor; the processor implements the aforementioned motion vector derivation method when executing the program.
- The device, the electronic device, and the method provided in the embodiments of this specification correspond to one another, so the device and the electronic device have beneficial technical effects similar to those of the corresponding method. Since the beneficial technical effects of the method have been described in detail above, they are not repeated here for the corresponding device and electronic device.
- program modules include routines, programs, objects, components, data structures, etc. that perform specific tasks or implement specific abstract data types.
- the instructions can also be practiced in distributed computing environments, in which tasks are performed by remote processing devices connected through a communication network.
- program modules can be located in local and remote computer storage media including storage devices.
Claims (14)
- 1. A motion vector derivation method, the method comprising: obtaining the spatial motion vectors and temporal motion vector predictions of neighboring blocks of a coding unit in predetermined directions; performing a filtering operation on the spatial motion vectors and temporal motion vector predictions to obtain the filtered spatial motion vectors and temporal motion vector predictions of the neighboring blocks; determining, according to a predetermined inter prediction mode, the reference motion vectors of a current block in its four side directions using the filtered spatial motion vectors and temporal motion vector predictions of the neighboring blocks and the coordinate position of the current block within the coding unit; and deriving the motion vector of the current block according to the reference motion vectors and the coordinate position of the current block within the coding unit.
- 2. The method of claim 1, wherein obtaining the spatial motion vectors of the neighboring blocks of the coding unit in predetermined directions comprises: scaling the motion vectors of the neighboring blocks located to the left of and above the coding unit to obtain the spatial motion vectors of the neighboring blocks; specifically, scaling the forward motion vector of a neighboring block to the first frame of the forward reference frame list, scaling the backward motion vector of the neighboring block to the first frame of the backward reference frame list, and using the scaled motion vector as the spatial motion vector of the neighboring block.
- 3. The method of claim 1, wherein obtaining the temporal motion vector predictions of the neighboring blocks of the coding unit in predetermined directions comprises: obtaining the temporal motion vector predictions of the neighboring blocks located to the right of and below the coding unit; specifically, obtaining, from the first frame of the forward reference frame list or the first frame of the backward reference frame list, the motion vector of the coding block at the same coordinate position as the neighboring block, and scaling the motion vector of that coding block to obtain the temporal motion vector prediction of the neighboring block.
- 4. The method of claim 1, wherein performing the filtering operation on the spatial motion vectors and temporal motion vector predictions specifically comprises: padding both ends of the spatial motion vectors and temporal motion vector predictions, and filtering the spatial motion vectors and temporal motion vector predictions according to the following formulas: Mvt[x] = (3*MvT[i0] + 8*MvT[i1] + 10*MvT[i2] + 8*MvT[i3] + 3*MvT[i4]) >> 5; Mvl[y] = (3*MvL[j0] + 8*MvL[j1] + 10*MvL[j2] + 8*MvL[j3] + 3*MvL[j4]) >> 5; Mvb[x] = (3*MvB[i0] + 8*MvB[i1] + 10*MvB[i2] + 8*MvB[i3] + 3*MvB[i4]) >> 5; Mvr[y] = (3*MvR[j0] + 8*MvR[j1] + 10*MvR[j2] + 8*MvR[j3] + 3*MvR[j4]) >> 5; 0 ≤ x < M, 0 ≤ y < N; where MvR and MvB respectively represent the temporal motion vector predictions of the right and lower neighboring blocks before filtering; Mvr and Mvb respectively represent the temporal motion vector predictions of the right and lower neighboring blocks after filtering; MvL and MvT respectively represent the spatial motion vectors of the left and upper neighboring blocks before filtering; Mvl and Mvt respectively represent the spatial motion vectors of the left and upper neighboring blocks after filtering; x and y represent the coordinates of the current block within the coding unit in units of sub-blocks; i and j represent column coordinates in units of sub-blocks; and M and N represent the width and height of the coding unit in units of sub-blocks.
- 5. The method of claim 1, wherein the inter prediction modes comprise a first inter prediction mode, a second inter prediction mode, a third inter prediction mode, a fourth inter prediction mode, and a fifth inter prediction mode.
- 6. The method of claim 5, wherein determining, according to the predetermined inter prediction mode, the reference motion vectors of the current block in its four side directions using the filtered spatial motion vectors and temporal motion vector predictions of the neighboring blocks and the coordinate position of the current block within the coding unit specifically comprises: when the first inter prediction mode is used, selecting the left and right neighboring blocks in the abscissa direction of the current block and the upper and lower neighboring blocks in the ordinate direction of the current block, and using the filtered spatial motion vectors of the left and upper neighboring blocks and the filtered temporal motion vector predictions of the right and lower neighboring blocks as the reference motion vectors of the current block in its four side directions, respectively; when the second inter prediction mode is used, selecting the left neighboring block in the abscissa direction and the lower neighboring block in the ordinate direction of the current block, using the filtered spatial motion vector of the left neighboring block and the filtered temporal motion vector prediction of the lower neighboring block as the reference motion vectors of the current block in the left and lower directions, respectively, and using the filtered temporal motion vector prediction of the rightmost neighboring block below the coding unit and the filtered spatial motion vector of the uppermost neighboring block to the left of the coding unit as the reference motion vectors of the current block in the right and upper directions, respectively; when the third inter prediction mode is used, selecting the right neighboring block in the abscissa direction and the upper neighboring block in the ordinate direction of the current block, using the filtered temporal motion vector prediction of the right neighboring block and the filtered spatial motion vector of the upper neighboring block as the reference motion vectors of the current block in the right and upper directions, respectively, and using the filtered spatial motion vector of the leftmost neighboring block above the coding unit and the filtered temporal motion vector prediction of the lowermost neighboring block to the right of the coding unit as the reference motion vectors of the current block in the left and lower directions, respectively; when the fourth inter prediction mode is used, selecting the left neighboring block in the abscissa direction and the upper neighboring block in the ordinate direction of the current block, using the filtered spatial motion vectors of the left and upper neighboring blocks as the reference motion vectors of the current block in the left and upper directions, respectively, and using the filtered spatial motion vector of the rightmost neighboring block above the coding unit and the filtered spatial motion vector of the lowermost neighboring block to the left of the coding unit as the reference motion vectors of the current block in the right and lower directions, respectively; when the fifth inter prediction mode is used, selecting the right neighboring block in the abscissa direction and the lower neighboring block in the ordinate direction of the current block, using the filtered temporal motion vector predictions of the right and lower neighboring blocks as the reference motion vectors of the current block in the right and lower directions, respectively, and using the filtered temporal motion vector prediction of the leftmost neighboring block below the coding unit and the filtered temporal motion vector prediction of the uppermost neighboring block to the right of the coding unit as the reference motion vectors of the current block in the left and upper directions, respectively.
- 7. The method of claim 1, wherein deriving the motion vector of the current block according to the reference motion vectors and the coordinate position of the current block within the coding unit specifically comprises deriving the motion vector of the current block according to the following formulas: v_h = ((M - x) * vl + x * vr) / M; v_v = ((N - y) * vt + y * vb) / N; v[x][y] = (v_h + v_v) / 2; where vl, vr, vt, and vb respectively represent the reference motion vectors of the current block in the left, right, up, and down directions; x and y represent the coordinates of the current block within the coding unit in units of sub-blocks; M and N represent the width and height of the coding unit in units of sub-blocks; v_h represents the horizontal motion vector of the current block; v_v represents the vertical motion vector of the current block; and v[x][y] represents the motion vector of the current block.
- 8. A motion vector derivation device, the device comprising: an obtaining module, configured to obtain the spatial motion vectors and temporal motion vector predictions of neighboring blocks of a coding unit in predetermined directions; a filtering module, configured to perform a filtering operation on the spatial motion vectors and temporal motion vector predictions to obtain the filtered spatial motion vectors and temporal motion vector predictions of the neighboring blocks; a determining module, configured to determine, according to a predetermined inter prediction mode, the reference motion vectors of a current block in its four side directions using the filtered spatial motion vectors and temporal motion vector predictions of the neighboring blocks and the coordinate position of the current block within the coding unit; and a derivation module, configured to derive the motion vector of the current block according to the reference motion vectors and the coordinate position of the current block within the coding unit.
- 9. The device of claim 8, wherein the obtaining module is further configured to: scale the motion vectors of the neighboring blocks located to the left of and above the coding unit to obtain the spatial motion vectors of the neighboring blocks; specifically, scale the forward motion vector of a neighboring block to the first frame of the forward reference frame list, scale the backward motion vector of the neighboring block to the first frame of the backward reference frame list, and use the scaled motion vector as the spatial motion vector of the neighboring block.
- 10. The device of claim 8, wherein the obtaining module is further configured to: obtain the temporal motion vector predictions of the neighboring blocks located to the right of and below the coding unit; specifically, obtain, from the first frame of the forward reference frame list or the first frame of the backward reference frame list, the motion vector of the coding block at the same coordinate position as the neighboring block, and scale the motion vector of that coding block to obtain the temporal motion vector prediction of the neighboring block.
- 11. The device of claim 8, wherein the filtering module is specifically configured to: pad both ends of the spatial motion vectors and temporal motion vector predictions, and filter the spatial motion vectors and temporal motion vector predictions according to the following formulas: Mvt[x] = (3*MvT[i0] + 8*MvT[i1] + 10*MvT[i2] + 8*MvT[i3] + 3*MvT[i4]) >> 5; Mvl[y] = (3*MvL[j0] + 8*MvL[j1] + 10*MvL[j2] + 8*MvL[j3] + 3*MvL[j4]) >> 5; Mvb[x] = (3*MvB[i0] + 8*MvB[i1] + 10*MvB[i2] + 8*MvB[i3] + 3*MvB[i4]) >> 5; Mvr[y] = (3*MvR[j0] + 8*MvR[j1] + 10*MvR[j2] + 8*MvR[j3] + 3*MvR[j4]) >> 5; 0 ≤ x < M, 0 ≤ y < N; where MvR and MvB respectively represent the temporal motion vector predictions of the right and lower neighboring blocks before filtering; Mvr and Mvb respectively represent the temporal motion vector predictions of the right and lower neighboring blocks after filtering; MvL and MvT respectively represent the spatial motion vectors of the left and upper neighboring blocks before filtering; Mvl and Mvt respectively represent the spatial motion vectors of the left and upper neighboring blocks after filtering; x and y represent the coordinates of the current block within the coding unit in units of sub-blocks; i and j represent column coordinates in units of sub-blocks; and M and N represent the width and height of the coding unit in units of sub-blocks.
- 12. The device of claim 8, wherein the inter prediction modes comprise a first inter prediction mode, a second inter prediction mode, a third inter prediction mode, a fourth inter prediction mode, and a fifth inter prediction mode, and the determining module is specifically configured to: when the first inter prediction mode is used, select the left and right neighboring blocks in the abscissa direction of the current block and the upper and lower neighboring blocks in the ordinate direction of the current block, and use the filtered spatial motion vectors of the left and upper neighboring blocks and the filtered temporal motion vector predictions of the right and lower neighboring blocks as the reference motion vectors of the current block in its four side directions, respectively; when the second inter prediction mode is used, select the left neighboring block in the abscissa direction and the lower neighboring block in the ordinate direction of the current block, use the filtered spatial motion vector of the left neighboring block and the filtered temporal motion vector prediction of the lower neighboring block as the reference motion vectors of the current block in the left and lower directions, respectively, and use the filtered temporal motion vector prediction of the rightmost neighboring block below the coding unit and the filtered spatial motion vector of the uppermost neighboring block to the left of the coding unit as the reference motion vectors of the current block in the right and upper directions, respectively; when the third inter prediction mode is used, select the right neighboring block in the abscissa direction and the upper neighboring block in the ordinate direction of the current block, use the filtered temporal motion vector prediction of the right neighboring block and the filtered spatial motion vector of the upper neighboring block as the reference motion vectors of the current block in the right and upper directions, respectively, and use the filtered spatial motion vector of the leftmost neighboring block above the coding unit and the filtered temporal motion vector prediction of the lowermost neighboring block to the right of the coding unit as the reference motion vectors of the current block in the left and lower directions, respectively; when the fourth inter prediction mode is used, select the left neighboring block in the abscissa direction and the upper neighboring block in the ordinate direction of the current block, use the filtered spatial motion vectors of the left and upper neighboring blocks as the reference motion vectors of the current block in the left and upper directions, respectively, and use the filtered spatial motion vector of the rightmost neighboring block above the coding unit and the filtered spatial motion vector of the lowermost neighboring block to the left of the coding unit as the reference motion vectors of the current block in the right and lower directions, respectively; when the fifth inter prediction mode is used, select the right neighboring block in the abscissa direction and the lower neighboring block in the ordinate direction of the current block, use the filtered temporal motion vector predictions of the right and lower neighboring blocks as the reference motion vectors of the current block in the right and lower directions, respectively, and use the filtered temporal motion vector prediction of the leftmost neighboring block below the coding unit and the filtered temporal motion vector prediction of the uppermost neighboring block to the right of the coding unit as the reference motion vectors of the current block in the left and upper directions, respectively.
- 13. The device of claim 8, wherein the derivation module is specifically configured to derive the motion vector of the current block according to the following formulas: v_h = ((M - x) * vl + x * vr) / M; v_v = ((N - y) * vt + y * vb) / N; v[x][y] = (v_h + v_v) / 2; where vl, vr, vt, and vb respectively represent the reference motion vectors of the current block in the left, right, up, and down directions; x and y represent the coordinates of the current block within the coding unit in units of sub-blocks; M and N represent the width and height of the coding unit in units of sub-blocks; v_h represents the horizontal motion vector of the current block; v_v represents the vertical motion vector of the current block; and v[x][y] represents the motion vector of the current block.
- 14. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method of any one of claims 1 to 7 when executing the program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/645,698 US11997284B2 (en) | 2019-08-19 | 2021-12-22 | Method for deriving motion vector, and electronic device of current block in coding unit |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910766509.XA CN110475116B (zh) | 2019-08-19 | 2019-08-19 | 一种运动矢量导出方法、装置及电子设备 |
CN201910766509.X | 2019-08-19 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/645,698 Continuation-In-Part US11997284B2 (en) | 2019-08-19 | 2021-12-22 | Method for deriving motion vector, and electronic device of current block in coding unit |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021031225A1 true WO2021031225A1 (zh) | 2021-02-25 |
Family
ID=68511213
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/102748 WO2021031225A1 (zh) | 2019-08-19 | 2019-08-27 | 一种运动矢量导出方法、装置及电子设备 |
Country Status (3)
Country | Link |
---|---|
US (1) | US11997284B2 (zh) |
CN (1) | CN110475116B (zh) |
WO (1) | WO2021031225A1 (zh) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111654708B (zh) * | 2020-06-07 | 2022-08-23 | 咪咕文化科技有限公司 | 一种运动矢量获取方法、装置及电子设备 |
CN111901590B (zh) * | 2020-06-29 | 2023-04-18 | 北京大学 | 一种用于帧间预测的细化运动矢量存储方法及装置 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105959699A (zh) * | 2016-05-06 | 2016-09-21 | 西安电子科技大学 | 一种基于运动估计和时空域相关性的快速帧间预测方法 |
CN108347616A (zh) * | 2018-03-09 | 2018-07-31 | 中南大学 | 一种基于可选时域运动矢量预测的深度预测方法及装置 |
CN109005412A (zh) * | 2017-06-06 | 2018-12-14 | 北京三星通信技术研究有限公司 | 运动矢量获取的方法及设备 |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9137544B2 (en) * | 2010-11-29 | 2015-09-15 | Mediatek Inc. | Method and apparatus for derivation of mv/mvp candidate for inter/skip/merge modes |
US8711940B2 (en) * | 2010-11-29 | 2014-04-29 | Mediatek Inc. | Method and apparatus of motion vector prediction with extended motion vector predictor |
US8755437B2 (en) * | 2011-03-17 | 2014-06-17 | Mediatek Inc. | Method and apparatus for derivation of spatial motion vector candidate and motion vector prediction candidate |
WO2012122927A1 (en) * | 2011-03-14 | 2012-09-20 | Mediatek Inc. | Method and apparatus for derivation of motion vector candidate and motion vector prediction candidate |
CN102883161B (zh) * | 2012-09-19 | 2015-09-30 | 华为技术有限公司 | 视频编码和解码的处理方法和装置 |
CN104427345B (zh) * | 2013-09-11 | 2019-01-08 | 华为技术有限公司 | 运动矢量的获取方法、获取装置、视频编解码器及其方法 |
WO2016008161A1 (en) * | 2014-07-18 | 2016-01-21 | Mediatek Singapore Pte. Ltd. | Temporal derived bi-directional motion vector predictor |
KR101676791B1 (ko) * | 2015-04-14 | 2016-11-16 | 삼성전자주식회사 | 영상 복호화 방법 |
US10136155B2 (en) * | 2016-07-27 | 2018-11-20 | Cisco Technology, Inc. | Motion compensation using a patchwork motion field |
CN109089119B (zh) * | 2017-06-13 | 2021-08-13 | 浙江大学 | 一种运动矢量预测的方法及设备 |
- 2019
- 2019-08-19: CN application CN201910766509.XA granted as patent CN110475116B (Active)
- 2019-08-27: PCT application PCT/CN2019/102748 filed as WO2021031225A1 (Application Filing)
- 2021
- 2021-12-22: US application US17/645,698 granted as patent US11997284B2 (Active)
Non-Patent Citations (1)
Title |
---|
Thirumalai, Vijayaraghavan et al.: "Inter-view motion vector prediction for depth coding", https://www.researchgate.net/publication/262972574, 31 July 2014 (2014-07-31), XP055783381 * |
Also Published As
Publication number | Publication date |
---|---|
US11997284B2 (en) | 2024-05-28 |
CN110475116B (zh) | 2021-09-21 |
CN110475116A (zh) | 2019-11-19 |
US20220191503A1 (en) | 2022-06-16 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19942389 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19942389 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 16/02/2023) |