WO2014000636A1 - Motion vector prediction and disparity vector prediction method for multi-view video coding - Google Patents
Motion vector prediction and disparity vector prediction method for multi-view video coding
- Publication number
- WO2014000636A1 (PCT/CN2013/077924)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image block
- block
- prediction
- current image
- motion vector
- Prior art date
Links
Images
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/513—Processing of motion vectors
- H04N19/517—Processing of motion vectors by encoding
- H04N19/52—Processing of motion vectors by encoding by predictive encoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
Definitions
- the present application relates to the field of multi-view video coding, and in particular to a motion vector prediction and disparity vector prediction method for multi-view video coding.
- Multi-view video refers to a set of synchronized video signals obtained by shooting the same scene with multiple cameras from different viewpoints, which can reproduce scenes more vividly. It can be widely used in a variety of emerging multimedia services such as 3D TV, immersive conference TV, telemedicine, virtual reality, and video surveillance systems. Compared with single-view video, the amount of data of multi-view video increases linearly as the number of cameras increases. Therefore, how to improve the coding efficiency of multi-view video has become the main focus of current research.
- Multi-view video coding technology mainly uses inter-view prediction to remove inter-view redundancy, i.e., when the current image is encoded, decoded images in other views are used as reference images for inter-view prediction and time domain prediction. Because of the geometric correspondence between the two views of binocular stereoscopic video, there is a strong correlation between the left and right viewpoints. Therefore, how to exploit the inter-view correlation is the key to improving the efficiency of multi-view video coding.
- In the code stream, a disparity vector is required for an inter-view prediction block, and a motion vector is required for a time domain prediction block.
- Median prediction is a commonly used prediction method for current motion vectors and disparity vectors.
- When the image blocks around an inter-view prediction block are time domain prediction blocks, the time domain prediction blocks lack disparity vectors and contribute nothing to the prediction of the disparity vector.
- As a result, the disparity vector prediction efficiency of the inter-view prediction block degrades.
- Likewise, when the image blocks around a time domain prediction block are coded in the disparity compensation prediction mode, the inter-view prediction blocks contribute nothing to the prediction of the motion vector, and the motion vector prediction efficiency also degrades.
- The related art has proposed estimating the disparity of the current image block from the disparity vectors of the corresponding image blocks in the preceding and following frames in the time domain, but two problems remain: first, how to predict the disparity vector of the current image block when those corresponding image blocks have no disparity vector; second, how to predict the motion vector of the current image block when none of the surrounding image blocks has a motion vector.
- the present application provides a motion vector prediction and a disparity vector prediction method capable of improving coding efficiency in multi-view video coding.
- the present application provides a motion vector prediction method for multi-view video coding, including:
- the video frame to be encoded is divided into macroblocks.
- Determining whether the reference image blocks of the current image block to be encoded include a time domain prediction block, the time domain prediction block being an image block coded in the motion compensation prediction mode.
- When it is determined that the reference image blocks include at least one time domain prediction block, the median prediction method is used to perform motion vector prediction on the current image block to obtain its motion vector predictor.
- Otherwise, the template matching method is used to perform motion vector prediction on the current image block to obtain its motion vector predictor.
- the present application provides a disparity vector prediction method for multi-view video coding, including:
- the video frame to be encoded is divided into macroblocks.
- Determining whether the reference image blocks of the current image block to be encoded include an inter-view prediction block, the inter-view prediction block being an image block coded in the disparity compensation prediction mode.
- When it is determined that the reference image blocks include at least one inter-view prediction block, the median prediction method is used to perform disparity vector prediction on the current image block to obtain its disparity vector predictor.
- Otherwise, the template matching method is used to perform disparity vector prediction on the current image block to obtain its disparity vector predictor.
- In the motion vector prediction and disparity vector prediction method for multi-view video coding provided by the present application, it is first determined whether the reference image blocks of the current image block include a time domain prediction block or an inter-view prediction block, so as to select either the median prediction method or the template matching method to calculate the motion vector predictor and the disparity vector predictor of the current image block. This avoids the loss of accuracy and efficiency in motion vector or disparity vector prediction caused by reference image blocks lacking a motion vector or disparity vector, improves the accuracy of the motion vector predictor and the disparity vector predictor, and thus improves the coding efficiency.
- FIG. 1 is a schematic diagram of a reference image block in an embodiment of the present application.
- FIG. 2 is a flowchart of a motion vector prediction method according to an embodiment of the present application.
- FIG. 3 is a flowchart of a method for predicting a disparity vector according to an embodiment of the present application.
- FIG. 4 is a block diagram of encoding a multi-view video in an embodiment of the present application.
- FIG. 5 is a block diagram of decoding of multi-view video in an embodiment of the present application.
- FIG. 6 is a schematic diagram of a motion vector prediction and disparity vector prediction method according to an embodiment of the present application.
- FIG. 7 is a schematic diagram of an inverted “L” type template in a template matching method according to an embodiment of the present application.
- The embodiment provides a motion vector prediction and disparity vector prediction method for multi-view video coding; the method is applied within the MVC (Multi-view Video Coding) framework.
- the dual view video is taken as an example in the embodiment, and one of the two original signals of the dual view video is selected as the primary view and the other as the auxiliary view.
- Each frame image is divided into macroblocks of fixed size, and the image blocks in a frame are processed in raster order: starting from the first image block at the upper left, left to right within a row, and rows from top to bottom.
- For example, a frame of 16*16 pixels is divided into sixteen macroblocks (image blocks) of 4*4 pixels each; the image blocks of the first row are processed from left to right, then the second row, and so on, until the entire frame is processed.
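The raster-order partitioning described above can be sketched in a few lines. This is a minimal Python illustration (not part of the original disclosure); the function name and the use of NumPy arrays for frames are assumptions made for clarity:

```python
import numpy as np

def partition_into_blocks(frame, block_size=4):
    """Split a frame into block_size x block_size image blocks in raster
    order: left to right within a row of blocks, rows top to bottom."""
    h, w = frame.shape
    blocks = []
    for top in range(0, h, block_size):
        for left in range(0, w, block_size):
            blocks.append(frame[top:top + block_size, left:left + block_size])
    return blocks

# A 16x16-pixel frame divided into 4x4-pixel macroblocks yields 16 blocks,
# processed first row left to right, then the second row, and so on.
frame = np.arange(256).reshape(16, 16)
blocks = partition_into_blocks(frame)
```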
- Reference image blocks need to be selected, and their motion vectors and disparity vectors are used as reference values from which the motion vector predictor and the disparity vector predictor of the current image block are calculated.
- The encoded adjacent image blocks of the current image block are used as the reference image blocks.
- the reference image blocks of the current image block P are A, B, C, and D.
- The upper, upper-right, and left image blocks adjacent to the current image block may also be selected as the reference image blocks; for example, the reference image blocks of the current image block P in FIG. 1 are A, B, and C. If the upper-right image block of the current image block does not exist (the current image block is located in the rightmost column), it is replaced by the upper-left image block of the current image block; in that case the reference image blocks of P are A, B, and D.
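The neighbour-selection rule above (left, upper, upper-right, with the upper-left block substituted in the rightmost column) can be sketched as follows. This is a hedged Python illustration; the grid-coordinate representation and function name are assumptions, not the patent's notation:

```python
def select_reference_blocks(row, col, n_cols):
    """Return grid coordinates of the reference blocks for the block at
    (row, col): left, upper and upper-right neighbours. When the block sits
    in the rightmost column, the upper-right neighbour does not exist and
    is replaced by the upper-left one. Out-of-grid coordinates (first row
    or first column) are filtered out."""
    left = (row, col - 1)
    upper = (row - 1, col)
    corner = (row - 1, col - 1) if col == n_cols - 1 else (row - 1, col + 1)
    return [(r, c) for (r, c) in (left, upper, corner)
            if r >= 0 and 0 <= c < n_cols]
```

For an interior block this yields three neighbours (as for P with A, B, C); for a rightmost-column block the third entry is the upper-left neighbour (as for P with A, B, D).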
- this embodiment provides a motion vector prediction method for multi-view video coding, which includes the following steps:
- step S11 the video frame to be encoded is divided into macroblocks to form a plurality of image blocks.
- Step S12 determining whether a reference image block of the current image block to be encoded includes a time domain prediction block, and the time domain prediction block refers to an image block coded by using a motion compensation prediction mode.
- Step S13 When it is determined in step S12 that at least one time domain prediction block is included in the reference image block, the current image block is subjected to motion vector prediction by using a median prediction method to obtain a motion vector prediction value of the current image block.
- Step S14 When it is determined in step S12 that the reference image block does not include the time domain prediction block, the current image block is subjected to motion vector prediction by using a template matching method to obtain a motion vector prediction value of the current image block.
- The template matching method in step S14 includes: searching the previous frame of the image frame in which the current image block is located for the best matching block of each reference image block, so as to calculate the motion vector of that reference image block, and then calculating the motion vector predictor of the current image block from the motion vectors of the reference image blocks.
- The best matching block is the image block in the previous frame that has the smallest sum of absolute differences (SAD) with the reference image block.
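The SAD-based best-match search can be illustrated with a small full-search sketch in Python. The function names, the search-window size, and the full-search strategy are assumptions for illustration; the patent only requires that the block minimising the sum of absolute differences be found:

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(a.astype(np.int64) - b.astype(np.int64)).sum())

def best_match_vector(block, ref_frame, top, left, search_range=4):
    """Full search within +/- search_range pixels of (top, left) in the
    reference frame; returns the displacement (dy, dx) minimising the SAD,
    i.e. the vector pointing to the best matching block."""
    h, w = block.shape
    best, best_cost = (0, 0), None
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > ref_frame.shape[0] or x + w > ref_frame.shape[1]:
                continue  # candidate block would fall outside the frame
            cost = sad(block, ref_frame[y:y + h, x:x + w])
            if best_cost is None or cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best
```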
- Both the median prediction method and the template matching method yield a motion vector predictor for the current image block; in this embodiment, the median function is used to calculate the motion vector predictor of the current image block.
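The median function applied to neighbour vectors is conventionally taken component-wise, as in H.264-style median prediction over three candidates. A minimal Python sketch (the three-candidate form and tuple representation are assumptions for illustration):

```python
def median_predictor(v1, v2, v3):
    """Component-wise median of three neighbouring vectors, each an
    (x, y) pair; returns the predicted vector."""
    med = lambda a, b, c: sorted((a, b, c))[1]
    return (med(v1[0], v2[0], v3[0]), med(v1[1], v2[1], v3[1]))
```

The same function serves for both motion vectors and disparity vectors, since both are two-component displacement vectors.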
- this embodiment provides a disparity vector prediction method for multi-view video coding, which includes the following steps:
- step S21 the video frame to be encoded is divided into macroblocks to form a plurality of image blocks.
- Step S22 determining whether an inter-view prediction block is included in a reference image block of a current image block to be encoded, where the inter-view prediction block refers to an image block that is encoded by using a disparity compensation prediction mode.
- Step S23 When it is determined in step S22 that at least one inter-view prediction block is included in the reference image block, the current image block is subjected to disparity vector prediction by using a median prediction method to obtain a disparity vector predictor of the current image block.
- Step S24 When it is determined in step S22 that the reference image block does not include the inter-view prediction block, the current image block is subjected to disparity vector prediction by using a template matching method to obtain a disparity vector prediction value of the current image block.
- The template matching method in step S24 includes: searching the main view reference frame image for the best matching block of each reference image block, so as to calculate the disparity vector of that reference image block, and then calculating the disparity vector predictor of the current image block from the disparity vectors of the reference image blocks.
- The best matching block is the image block in the main view reference frame image that has the smallest sum of absolute differences (SAD) with the reference image block.
- Both the median prediction method and the template matching method yield a disparity vector predictor for the current image block; in this embodiment, the median function is used to calculate the disparity vector predictor of the current image block.
- FIG. 4 is a coding block diagram of multi-view video coding.
- the multi-view video coding process includes the following steps:
- Step 1 Input the original signal of the multi-view video, select one of the two original signals as the primary viewpoint, and the other as the secondary viewpoint, for example, select the left original signal as the primary viewpoint, and the right original signal as the secondary viewpoint.
- The first frame image of the left channel is encoded: intra prediction is performed on the current image block to obtain its intra prediction block, the current image block is compared with the intra prediction block to obtain a residual value, and the residual value is transformed, quantized, and entropy encoded to form a code stream sequence, thereby completing the encoding of the first frame image of the left channel.
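The residual pipeline (difference, transform, quantization) can be sketched with a toy orthonormal DCT and uniform quantizer; entropy coding is omitted. This is an illustrative simplification, not the codec's actual transform or quantizer design:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0, :] = np.sqrt(1.0 / n)
    return m

def encode_block(block, pred, qstep=4):
    """Residual -> 2-D transform -> uniform quantisation to integer levels."""
    d = dct_matrix(block.shape[0])
    residual = block.astype(float) - pred
    coeff = d @ residual @ d.T
    return np.round(coeff / qstep)

def decode_block(levels, pred, qstep=4):
    """Dequantise -> inverse transform -> add prediction back."""
    d = dct_matrix(levels.shape[0])
    return d.T @ (levels * qstep) @ d + pred
```

Because the transform is orthonormal, the reconstruction error is bounded by the quantization error of the coefficients, mirroring how the decoder in FIG. 5 recovers the image block from the residual plus the prediction.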
- Step 2: In order to provide the reference images required for subsequent encoding, the encoder must be able to reconstruct images while encoding, i.e., it contains a decoding end; FIG. 5 is a decoding block diagram of multi-view video coding.
- The first frame image of the left channel is decoded: the code stream sequence is entropy decoded, inversely quantized, and inversely transformed to obtain residual values; the intra prediction block of the current image block is obtained by intra prediction; the residual value is added to the intra prediction block to obtain the current image block, which is filtered to obtain a decoded image block, thereby obtaining a decoded image of the first frame image of the left channel.
- Step 3 Encoding the second frame image of the left channel, specifically, performing intra prediction on the current image block to obtain an intra prediction block of the current image block.
- Motion estimation is performed on the current image block to obtain a motion vector; motion vector prediction is performed on the current image block to obtain a motion vector predictor; the motion vector is compared with the motion vector predictor to obtain a motion vector difference value; and motion compensation prediction is performed on the current image block to obtain a motion compensation prediction block.
- The rate-distortion optimization criterion is used to select the best prediction block for the current image block, i.e., the prediction mode with the least rate-distortion cost is selected; either the intra prediction mode or the motion compensation prediction mode can be selected.
- If the best prediction block is the intra prediction block, the current image block is compared with the intra prediction block to obtain the residual value.
- Step 4 Decoding the second frame image of the left channel.
- If the encoding mode selected in step 3 is the intra prediction mode, the code stream sequence is entropy decoded, inversely quantized, and inversely transformed to obtain residual values; the intra prediction block of the current image block is obtained by intra prediction; the intra prediction block is added to the residual value and filtered to obtain a decoded image block, thereby obtaining a decoded image of the second frame image.
- the code stream sequence output by the encoding end includes corresponding encoding mode information for the decoding end to decode.
- If the encoding mode is the motion compensation prediction mode, the code stream sequence is entropy decoded, inversely quantized, and inversely transformed to obtain residual values and motion vector difference values; motion vector prediction is performed on the current image block to obtain a motion vector predictor; the motion vector predictor is added to the motion vector difference value to obtain the motion vector; motion compensation is performed according to the motion vector and the previous frame image to obtain a motion compensation prediction block; and the motion compensation prediction block is added to the residual value and filtered to obtain a decoded image block, thereby obtaining a decoded image of the second frame image.
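The decoder-side vector reconstruction just described (predictor plus transmitted difference, then fetching the compensated block) can be sketched as follows. Function names and the array-based frame representation are illustrative assumptions:

```python
import numpy as np

def reconstruct_motion_vector(mvp, mvd):
    """Decoder side: the motion vector is the predictor plus the decoded
    motion vector difference, component-wise."""
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])

def motion_compensate(ref_frame, top, left, mv, size=4):
    """Fetch the motion compensation prediction block from the previous
    decoded frame at the position displaced by the vector (dy, dx)."""
    y, x = top + mv[0], left + mv[1]
    return ref_frame[y:y + size, x:x + size]

# Example: predictor (1, 0) plus decoded difference (0, 2) gives (1, 2).
ref = np.arange(64).reshape(8, 8)
mv = reconstruct_motion_vector((1, 0), (0, 2))
pred_block = motion_compensate(ref, 0, 0, mv)
```

The same predictor-plus-difference scheme applies to disparity vectors in the later steps, with the main view reference frame in place of the previous frame.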
- Step 5: Steps 3 and 4 are repeated to continue encoding and decoding the frames after the second frame until all frames of the left video signal are encoded and decoded.
- Step 6 performing three-dimensional stereo coding on the first frame image of the right channel, specifically, performing intra prediction on the current image block to obtain an intra prediction block.
- Disparity estimation is performed on the current image block to obtain a disparity vector; disparity vector prediction is performed on the current image block to obtain a disparity vector predictor; the disparity vector is compared with the disparity vector predictor to obtain a disparity vector difference value; and disparity compensation prediction is performed on the current image block to obtain a disparity compensation prediction block.
- The rate-distortion optimization criterion is used to select the best prediction block for the current image block.
- If the best prediction block is the intra prediction block, the residual value is transformed, quantized, and entropy coded to form the code stream sequence of the current image block.
- the intra prediction mode and the disparity compensation prediction mode may be selected.
- Disparity compensation prediction finds, according to the position of the current image block in the image, the corresponding position in the left reference frame, and obtains the disparity compensation prediction block according to the disparity vector offset; the left reference frame is the frame of the left channel with the same frame number as the frame currently being encoded. Here, the first frame image of the left channel is the left reference frame.
- When disparity vector prediction is performed on the current image block to obtain the disparity vector predictor, it is first determined whether the encoded adjacent image blocks of the current image block include an inter-view prediction block; if so, the median prediction method is used to calculate the disparity vector predictor of the current image block, and otherwise the template matching method is used.
- Whether an encoded adjacent image block is an inter-view prediction block is determined by retrieving the reference frame index number of that encoded adjacent image block.
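The reference-index check and method selection can be sketched as below. This is a hedged Python illustration: the per-block dictionary, the field name `ref_idx`, and the convention that inter-view reference pictures occupy a known set of index values are assumptions, not the patent's data structures:

```python
def is_inter_view(block_info, inter_view_ref_indices):
    """A block counts as an inter-view prediction block when its reference
    frame index points at an inter-view reference picture."""
    return block_info["ref_idx"] in inter_view_ref_indices

def choose_dv_prediction_method(neighbors, inter_view_ref_indices):
    """Median prediction when at least one encoded neighbour is inter-view
    predicted; template matching otherwise."""
    if any(is_inter_view(n, inter_view_ref_indices) for n in neighbors):
        return "median"
    return "template_matching"

# Example: the neighbour with ref_idx 2 points into the inter-view set {2}.
neighbors = [{"ref_idx": 0}, {"ref_idx": 2}]
method = choose_dv_prediction_method(neighbors, {2})
```

The motion vector branch in step 8 is symmetric: it checks for time domain prediction blocks instead of inter-view prediction blocks.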
- The encoded adjacent image blocks are B1, B2, B3, and B5. If it is determined that the encoded adjacent image blocks of the current image block B6 include inter-view prediction blocks, for example B1 and B2, with corresponding disparity vectors D1 and D2, the median prediction method is used to estimate the disparity vector predictor D6p of the current image block B6 from its adjacent inter-view prediction blocks.
- Otherwise, the judging unit 104 controls the disparity vector prediction unit to use the decoded blocks of B1, B2, B3, and B5 as templates, and the template matching method is used to search the reconstructed image of the corresponding reference frame of the main view for the best matching blocks B1', B2', B3', and B5', thereby obtaining the disparity vectors D1, D2, D3, and D5 of B1, B2, B3, and B5 and then the disparity vector predictor D6p of the current image block B6:
- D6p = f(D1, D2, D3, D5).
- The sum of absolute differences between each template and each search block is calculated in the reconstructed image, and the image block with the smallest sum of absolute differences is determined to be the best matching block.
- The f function here is chosen to be the median function, namely:
- D6p = median(D2, D3, D5).
- This embodiment adopts inverted-"L" template matching: the encoded adjacent image blocks of the current image block P constitute an inverted-"L" template L, with a block size of 4*4 pixels. The sub-image covered by the template as it translates over the search window in the main view reconstructed image is recorded as L'ij, where (i, j) are the coordinates of the upper-left vertex of the sub-image in the main view image; the matching process is completed by comparing the similarity between L and each L'ij.
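The inverted-"L" matching can be sketched as follows: reconstructed pixels above and to the left of the current block form the template, which is slid horizontally over the main view reconstruction, and the offset minimising the SAD gives a disparity estimate. The template geometry (arm thickness, horizontal-only search) and function names are illustrative assumptions:

```python
import numpy as np

def inverted_l_template(frame, top, left, size=4, thick=4):
    """Reconstructed pixels above and to the left of the block at
    (top, left), concatenated into one vector: the inverted-'L' shape."""
    above = frame[top - thick:top, left:left + size + thick]
    side = frame[top:top + size, left - thick:left]
    return np.concatenate([above.ravel(), side.ravel()])

def match_template(template_fn, cur_frame, ref_frame, top, left,
                   size=4, thick=4, search=4):
    """Slide the template over the reference (main view) reconstruction and
    return the horizontal offset with the smallest SAD -- a disparity
    estimate for the block the template wraps around."""
    target = template_fn(cur_frame, top, left, size, thick)
    best_dx, best_cost = 0, None
    for dx in range(-search, search + 1):
        x = left + dx
        if x - thick < 0 or x + size + thick > ref_frame.shape[1]:
            continue  # template would fall outside the reference frame
        cand = template_fn(ref_frame, top, x, size, thick)
        cost = int(np.abs(cand.astype(int) - target.astype(int)).sum())
        if best_cost is None or cost < best_cost:
            best_cost, best_dx = cost, dx
    return best_dx
```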
- The image blocks in the first row and the first column are special cases: reference image blocks cannot be selected for them to calculate the motion vector predictor and the disparity vector predictor.
- When the first image block (B1) is encoded, the intra prediction mode is used directly; when encoding the other image blocks of the first row and the first column, the conventional motion vector prediction and disparity vector prediction method (the median prediction method) is used to calculate the motion vector predictor and the disparity vector predictor.
- Step 7 Decode the first frame image of the right channel.
- If the encoding mode is the intra prediction mode, the code stream sequence is entropy decoded, inversely quantized, and inversely transformed to obtain residual values; the intra prediction block of the current image block is obtained by intra prediction; the intra prediction block is added to the residual value and filtered to obtain a decoded image block, thereby obtaining a decoded image of the first frame image of the right channel.
- If the encoding mode is the disparity compensation prediction mode, the code stream sequence is entropy decoded, inversely quantized, and inversely transformed to obtain residual values and disparity vector difference values; disparity vector prediction is performed on the current image block to obtain a disparity vector predictor; the disparity vector predictor is added to the disparity vector difference value to obtain the disparity vector; the disparity compensation prediction block is obtained according to the disparity vector and the main view reference frame; and the disparity compensation prediction block is added to the residual value and filtered to obtain a decoded image block, thereby obtaining a decoded image of the first frame image of the right channel.
- In step 7, the principle for calculating the disparity vector predictor of the current image block is the same as in step 6 and is not repeated here.
- Step 8 Encoding the second frame image of the right channel, specifically, performing intra prediction on the current image block to obtain an intra prediction block of the current image block.
- Motion estimation is performed on the current image block to obtain a motion vector; motion vector prediction is performed on the current image block to obtain a motion vector predictor; the motion vector is compared with the motion vector predictor to obtain a motion vector difference value; and motion compensation prediction is performed on the current image block to obtain a motion compensation prediction block.
- Disparity estimation is performed on the current image block to obtain a disparity vector; disparity vector prediction is performed on the current image block to obtain a disparity vector predictor; the disparity vector is compared with the disparity vector predictor to obtain a disparity vector difference value; and disparity compensation prediction is performed on the current image block to obtain a disparity compensation prediction block.
- The rate-distortion optimization criterion is used to select the best prediction block for the current image block.
- If the best prediction block is the intra prediction block, the residual value is transformed, quantized, and entropy coded to form the code stream sequence of the current image block.
- If the best prediction block is the motion compensation prediction block, the residual value is transformed and quantized, and entropy coded together with the motion vector difference value to form the code stream sequence of the current image block.
- If the best prediction block is the disparity compensation prediction block, the residual value is transformed and quantized, and entropy coded together with the disparity vector difference value to form the code stream sequence of the current image block.
- In step 8, three modes may be selected: the intra prediction mode, the motion compensation prediction mode, and the disparity compensation prediction mode.
- Motion compensation prediction finds, according to the position of the current image block in the image, the corresponding position in the previous frame image in the time domain, and obtains the motion compensation prediction block according to the motion vector offset.
- When motion vector prediction is performed on the current image block to obtain the motion vector predictor, it is first determined whether the encoded adjacent image blocks of the current image block include a time domain prediction block; if so, the median prediction method is used to calculate the motion vector predictor of the current image block, and otherwise the template matching method is used. Whether an encoded adjacent image block is a time domain prediction block is determined by retrieving the reference frame index number of that encoded adjacent image block.
- The encoded adjacent image blocks are B6, B7, B8, and B10. If it is determined that the encoded adjacent image blocks of the current image block B11 include time domain prediction blocks, for example B6 and B7, with corresponding motion vectors M6 and M7, the median prediction method is used to estimate the motion vector predictor M11p of the current image block B11 from its adjacent time domain prediction blocks.
- If the encoded adjacent image blocks of the current image block B11 do not include a time domain prediction block, i.e., the encoded adjacent image blocks B6, B7, B8, and B10 of the current image block B11 include only inter-view prediction blocks, no motion vector is available. In that case, the decoded blocks of B6, B7, B8, and B10 are used as templates, and the template matching method is used to search the reconstructed image of the previous frame of the auxiliary viewpoint for the best matching blocks B6', B7', B8', and B10', thereby obtaining the motion vectors M6, M7, M8, and M10 of B6, B7, B8, and B10 and then the motion vector predictor M11p of the current image block B11:
- M11p = f(M6, M7, M8, M10).
- When the motion vector prediction unit searches the reconstructed image of the previous frame of the auxiliary viewpoint for the best matching block of an adjacent image block, the sum of absolute differences between the adjacent image block and each search block is calculated in that reconstructed image, and the image block with the smallest sum of absolute differences is determined to be the best matching block.
- The f function here is chosen to be the median function, namely:
- M11p = median(M7, M8, M10).
- The method of performing disparity vector prediction on the current image block in step 8 to obtain the disparity vector predictor is the same as in step 6 and is not repeated here.
- Step 9 Decode the second frame image of the right channel.
- If the encoding mode is the intra prediction mode, the code stream sequence is entropy decoded, inversely quantized, and inversely transformed to obtain residual values; the intra prediction block of the current image block is obtained by intra prediction; the intra prediction block is added to the residual value and filtered to obtain a decoded image block, thereby obtaining a decoded image of the second frame image of the right channel.
- If the encoding mode is the disparity compensation prediction mode, the code stream sequence is entropy decoded, inversely quantized, and inversely transformed to obtain residual values and disparity vector difference values; disparity vector prediction is performed on the current image block to obtain a disparity vector predictor; the disparity vector predictor is added to the disparity vector difference value to obtain the disparity vector; the disparity compensation prediction block is obtained according to the disparity vector and the main view reference frame; and the disparity compensation prediction block is added to the residual value and filtered to obtain a decoded image block, thereby obtaining a decoded image of the second frame image of the right channel.
- If the encoding mode is the motion compensation prediction mode, the code stream sequence is entropy decoded, inversely quantized, and inversely transformed to obtain residual values and motion vector difference values; motion vector prediction is performed on the current image block to obtain a motion vector predictor; the motion vector predictor is added to the motion vector difference value to obtain the motion vector; motion compensation is performed according to the motion vector and the previous frame image to obtain a motion compensation prediction block; and the motion compensation prediction block is added to the residual value and filtered to obtain a decoded image block, thereby obtaining a decoded image of the second frame image of the right channel.
- In step 9, the principle for calculating the motion vector predictor and the disparity vector predictor of the current image block is the same as in step 8 and is not repeated here.
- Step 10: Steps 8 and 9 are repeated to continue encoding and decoding the frames after the second frame of the right channel until all frames of the right video signal are encoded and decoded.
- In the motion vector prediction and disparity vector prediction method for multi-view video coding provided by the present application, it is first determined whether the reference image blocks of the current image block include a time domain prediction block or an inter-view prediction block, so as to select either the median prediction method or the template matching method to calculate the motion vector predictor and the disparity vector predictor of the current image block. This avoids the loss of accuracy and efficiency in motion vector or disparity vector prediction caused by reference image blocks lacking a motion vector or disparity vector, improves the accuracy of the motion vector predictor and the disparity vector predictor, and thus improves the coding efficiency.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
A motion vector prediction and disparity vector prediction method for multi-view video coding. It is first determined whether the reference image blocks of the current image block include a temporal prediction block or an inter-view prediction block, and accordingly either the median prediction method or the template matching method is selected to calculate the motion vector predictor and the disparity vector predictor of the current image block. This avoids the degradation in accuracy and efficiency of motion vector or disparity vector prediction caused by reference image blocks lacking motion vectors or disparity vectors, thereby improving the accuracy of the motion vector predictor and the disparity vector predictor and improving coding efficiency.
Description
The present application relates to the field of multi-view video coding, and in particular to a motion vector prediction and disparity vector prediction method for multi-view video coding.
Multi-view video refers to a set of synchronized video signals obtained by multiple cameras shooting the same scene from different viewpoints. It reproduces a scene more vividly and can be widely used in many emerging multimedia services such as 3D television, immersive conference television, remote medical diagnosis, virtual reality, and video surveillance systems. Compared with single-view video, the data volume of multi-view video increases linearly with the number of cameras. Therefore, how to improve the coding efficiency of multi-view video has become the main focus of current research.
Multi-view video coding mainly uses inter-view prediction to remove inter-view redundancy; that is, when the current image is coded, already decoded images from other viewpoints are used as reference images for inter-view prediction alongside temporal prediction. Because of the geometric correspondence within binocular stereo video, there is strong correlation between the left and right viewpoints. How to exploit this inter-view correlation for prediction is therefore the key to improving the coding efficiency of multi-view video.
In the bitstream, a disparity vector must be signaled for each inter-view prediction block and a motion vector for each temporal prediction block. Median prediction is currently the common prediction method for both motion vectors and disparity vectors. When an inter-view prediction block is surrounded by temporal prediction blocks, i.e., the image blocks around the inter-view prediction block are coded in the motion-compensated prediction mode, the temporal prediction blocks lack disparity vectors and contribute nothing to disparity vector prediction, so the efficiency of disparity vector prediction for the inter-view prediction block degrades. Likewise, when a temporal prediction block is surrounded by inter-view prediction blocks, i.e., the image blocks around the temporal prediction block are coded in the disparity-compensated prediction mode, the inter-view prediction blocks contribute nothing to motion vector prediction, and the efficiency of motion vector prediction also degrades. To address this, it has been proposed to estimate the disparity of the current image block from the disparity vectors of the corresponding image blocks in the temporally preceding and following frames, but two problems remain: first, how to predict the disparity vector of the current image block when those corresponding image blocks have no disparity vector; and second, how to predict the motion vector of the current image block when none of the surrounding image blocks has a motion vector.
The present application provides a motion vector prediction and disparity vector prediction method capable of improving coding efficiency in multi-view video coding.
According to a first aspect, the present application provides a motion vector prediction method for multi-view video coding, including:
Partitioning a video frame to be coded into macroblocks.
Determining whether the reference image blocks of the current image block to be coded include a temporal prediction block, the temporal prediction block being an image block coded in the motion-compensated prediction mode.
When the reference image blocks are determined to include at least one temporal prediction block, performing motion vector prediction on the current image block using the median prediction method to obtain the motion vector predictor of the current image block.
Otherwise, performing motion vector prediction on the current image block using the template matching method to obtain the motion vector predictor of the current image block.
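The first-aspect selection rule above can be sketched as follows. This is an illustrative sketch only; the block representation and the helper names are assumptions, not part of the application as filed:

```python
def predict_motion_vector(ref_blocks, median_predict, template_match_predict):
    """Choose the motion vector predictor for the current image block.

    ref_blocks: list of dicts with an 'is_temporal' flag (True when the
    reference block was coded in motion-compensated prediction mode) and
    an 'mv' pair. The two predictor callables are supplied by the caller.
    """
    temporal = [b for b in ref_blocks if b["is_temporal"]]
    if temporal:
        # At least one temporal prediction block -> median prediction
        # over the motion vectors that are actually available.
        return median_predict([b["mv"] for b in temporal])
    # No temporal prediction block among the references -> template matching.
    return template_match_predict(ref_blocks)
```

The disparity vector path of the second aspect is symmetric: test for inter-view prediction blocks instead and fall back to template matching in the main-view reference frame.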
According to a second aspect, the present application provides a disparity vector prediction method for multi-view video coding, including:
Partitioning a video frame to be coded into macroblocks.
Determining whether the reference image blocks of the current image block to be coded include an inter-view prediction block, the inter-view prediction block being an image block coded in the disparity-compensated prediction mode.
When the reference image blocks are determined to include at least one inter-view prediction block, performing disparity vector prediction on the current image block using the median prediction method to obtain the disparity vector predictor of the current image block.
Otherwise, performing disparity vector prediction on the current image block using the template matching method to obtain the disparity vector predictor of the current image block.
In the motion vector prediction and disparity vector prediction method for multi-view video coding provided by the present application, it is first determined whether the reference image blocks of the current image block include a temporal prediction block or an inter-view prediction block, and accordingly either the median prediction method or the template matching method is selected to calculate the motion vector predictor and the disparity vector predictor of the current image block. This avoids the degradation in accuracy and efficiency of motion vector or disparity vector prediction caused by reference image blocks lacking motion vectors or disparity vectors, thereby improving the accuracy of the motion vector predictor and the disparity vector predictor and improving coding efficiency.
A further detailed description is given below with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a schematic diagram of the reference image blocks in an embodiment of the present application;
Fig. 2 is a flowchart of the motion vector prediction method in an embodiment of the present application;
Fig. 3 is a flowchart of the disparity vector prediction method in an embodiment of the present application;
Fig. 4 is an encoding block diagram of multi-view video in an embodiment of the present application;
Fig. 5 is a decoding block diagram of multi-view video in an embodiment of the present application;
Fig. 6 is a schematic diagram of the motion vector prediction and disparity vector prediction method in an embodiment of the present application;
Fig. 7 is a schematic diagram of the inverted-"L" template in the template matching method of an embodiment of the present application.
This embodiment provides a motion vector prediction and disparity vector prediction method for multi-view video coding, based on the MVC (Multi-view Video Coding) standard. When multi-view video is coded, one of the original multi-view signals is usually selected as the main view and the other signals as auxiliary views; when an auxiliary view is coded, image frames of the main view are used as reference frames to improve coding efficiency. For ease of understanding, this embodiment takes two-view video as an example: one of the two original signals is selected as the main view and the other as the auxiliary view.
According to the MVC standard for coding moving pictures, when a video is coded, each frame is partitioned into macroblocks of fixed size, and the image blocks of a frame are processed in order from left to right and top to bottom, starting from the first image block at the top left. Referring to Fig. 1, for example, a 16*16-pixel frame is partitioned into 4*4-pixel macroblocks (image blocks), each of size 4*4 pixels; the first row of image blocks is processed from left to right, then the second row, and so on until the whole frame has been processed. Assume image block P is the current image block. When the current image block P is processed, e.g. for motion vector prediction or disparity vector prediction, reference image blocks must be selected, and the motion vectors and disparity vectors of the reference image blocks are used as reference values to calculate the motion vector predictor and the disparity vector predictor of the current image block.
Since each image block in a frame has the highest similarity with its adjacent image blocks, in this embodiment the reference image blocks are preferably the already coded adjacent image blocks of the current image block. In Fig. 1, the reference image blocks of the current image block P are A, B, C, and D.
In another embodiment, the adjacent upper, upper-right, and left image blocks of the current image block may instead be selected as reference image blocks; for example, in Fig. 1 the reference image blocks of the current image block P are then A, B, and C. If the upper-right image block of the current image block does not exist (when the current image block is in the rightmost column), the upper-left image block of the current image block is used instead; for example, in Fig. 1 the reference image blocks of the current image block P are then A, B, and D.
Referring to Fig. 2, this embodiment provides a motion vector prediction method for multi-view video coding, including the following steps:
Step S11: partition the video frame to be coded into macroblocks, forming a number of image blocks.
Step S12: determine whether the reference image blocks of the current image block to be coded include a temporal prediction block, a temporal prediction block being an image block coded in the motion-compensated prediction mode.
Step S13: when it is determined in step S12 that the reference image blocks include at least one temporal prediction block, perform motion vector prediction on the current image block using the median prediction method to obtain the motion vector predictor of the current image block.
Step S14: when it is determined in step S12 that the reference image blocks include no temporal prediction block, perform motion vector prediction on the current image block using the template matching method to obtain the motion vector predictor of the current image block.
The template matching method in step S14 includes: searching the frame preceding the frame containing the current image block for the best matching block of each reference image block so as to calculate the motion vector of that reference image block, and then calculating the motion vector predictor of the current image block with the motion vectors of the reference image blocks as references. The best matching block is the image block in the preceding frame whose sum of absolute differences with the reference image block is minimal.
In steps S13 and S14, when motion vector prediction is performed on the current image block by the median prediction method or by the template matching method to obtain the motion vector predictor of the current image block, this embodiment calculates the motion vector predictor of the current image block using a median function.
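The median function mentioned above is commonly applied componentwise to the candidate vectors (as in H.264/AVC median motion vector prediction); the following sketch assumes that convention, which the application itself does not spell out:

```python
def median_vector(vectors):
    """Componentwise median of a list of (x, y) vectors.

    For an even count this sketch takes the upper median; the application
    does not specify a tie-breaking convention, so that choice is an
    assumption here.
    """
    def median(values):
        ordered = sorted(values)
        return ordered[len(ordered) // 2]

    xs = [v[0] for v in vectors]
    ys = [v[1] for v in vectors]
    return (median(xs), median(ys))
```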
Referring to Fig. 3, this embodiment provides a disparity vector prediction method for multi-view video coding, including the following steps:
Step S21: partition the video frame to be coded into macroblocks, forming a number of image blocks.
Step S22: determine whether the reference image blocks of the current image block to be coded include an inter-view prediction block, an inter-view prediction block being an image block coded in the disparity-compensated prediction mode.
Step S23: when it is determined in step S22 that the reference image blocks include at least one inter-view prediction block, perform disparity vector prediction on the current image block using the median prediction method to obtain the disparity vector predictor of the current image block.
Step S24: when it is determined in step S22 that the reference image blocks include no inter-view prediction block, perform disparity vector prediction on the current image block using the template matching method to obtain the disparity vector predictor of the current image block.
The template matching method in step S24 includes: searching the main-view reference frame image for the best matching block of each reference image block so as to calculate the disparity vector of that reference image block, and then calculating the disparity vector predictor of the current image block with the disparity vectors of the reference image blocks as references. The best matching block is the image block in the main-view reference frame image whose sum of absolute differences with the reference image block is minimal.
In steps S23 and S24, when disparity vector prediction is performed on the current image block by the median prediction method or by the template matching method to obtain the disparity vector predictor of the current image block, this embodiment calculates the disparity vector predictor of the current image block using a median function.
The motion vector prediction and disparity vector prediction methods above are explained below through a specific multi-view video coding process.
Referring to Fig. 4, the encoding block diagram of multi-view video coding, the multi-view video coding process includes the following steps:
Step I: input the original multi-view video signals and select one of the two original signals as the main view and the other as the auxiliary view; for example, select the left original signal as the main view and the right original signal as the auxiliary view.
In step I, the first frame of the left view is coded: intra prediction is first performed on the current image block to obtain its intra prediction block, the intra prediction block is subtracted from the current image block to obtain the residual, and the residual is transformed, quantized, and entropy-coded to form the bitstream, completing the coding of the first frame of the left view.
Step II: to provide the reference images needed for subsequent coding, the encoder must also be able to reconstruct images, i.e., it must contain a decoding side; Fig. 5 shows the decoding block diagram of multi-view video coding. In step II, the first frame of the left view is decoded: the bitstream is entropy-decoded, inverse-quantized, and inverse-transformed to obtain the residual; the intra prediction block of the current image block is obtained by intra prediction; the residual is added to the intra prediction block to obtain the current image block, and filtering is performed to obtain the decoded image block, thereby yielding the decoded image of the first frame of the left view.
Step III: code the second frame of the left view. Specifically, intra prediction is performed on the current image block to obtain its intra prediction block. Motion estimation is performed on the current image block to obtain a motion vector, motion vector prediction is performed to obtain a motion vector predictor, and the motion vector predictor is subtracted from the motion vector to obtain a motion vector difference; at the same time, motion-compensated prediction is performed on the current block to obtain a motion-compensated prediction block. Mode selection is performed on the current image block using the rate-distortion optimization criterion to obtain the best prediction block. When the best prediction block is the intra prediction block, the intra prediction block is subtracted from the current image block to obtain the residual, which is transformed, quantized, and entropy-coded to form the bitstream of the current image block; when the best prediction block is the motion-compensated prediction block, the motion-compensated prediction block is subtracted from the current image block to obtain the residual, which is transformed and quantized and then entropy-coded together with the motion vector difference to form the bitstream of the current image block.
Under the rate-distortion optimization criterion, the prediction mode with the least distortion at the constrained bit rate is selected. In step III, the choice is between two modes: the intra prediction mode and the motion-compensated prediction mode.
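The rate-distortion optimization criterion described here is conventionally written as a Lagrangian cost; the following is the standard textbook formulation, not an equation quoted from the application:

```latex
J(m) = D(m) + \lambda \, R(m), \qquad
m^{*} = \arg\min_{m} J(m)
```

where $D(m)$ is the distortion of mode $m$, $R(m)$ is its bit cost, and $\lambda$ is the Lagrange multiplier set by the bit-rate constraint; in step III, $m$ ranges over the intra prediction mode and the motion-compensated prediction mode.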
Step IV: decode the second frame of the left view. When the coding mode selected in step III is the intra prediction mode, the bitstream is entropy-decoded, inverse-quantized, and inverse-transformed to obtain the residual; the intra prediction block of the current image block is obtained by intra prediction, the intra prediction block is added to the residual, and filtering is performed to obtain the decoded image block, thereby yielding the decoded image of the second frame. During coding, the bitstream output by the encoder contains the corresponding coding-mode information so that the decoder can decode it.
When the coding mode selected in step III is the motion-compensated prediction mode, the bitstream is entropy-decoded, inverse-quantized, and inverse-transformed to obtain the residual and the motion vector difference; the motion vector predictor of the current image block is obtained by motion vector prediction, the motion vector difference is added to the motion vector predictor to obtain the motion vector, motion compensation is performed according to the motion vector and the previous frame to obtain the motion-compensated prediction block, which is added to the residual and filtered to obtain the decoded image block, thereby yielding the decoded image of the second frame.
Step V: repeat steps III and IV to code and decode the frames following the second frame until all frames of the left-view video signal have been coded and decoded.
Step VI: perform stereoscopic coding on the first frame of the right view. Specifically, intra prediction is performed on the current image block to obtain an intra prediction block. Disparity estimation is performed on the current image block to obtain a disparity vector, disparity vector prediction is performed to obtain a disparity vector predictor, and the disparity vector predictor is subtracted from the disparity vector to obtain a disparity vector difference; at the same time, disparity-compensated prediction is performed on the current block to obtain a disparity-compensated prediction block. Mode selection is performed on the current image block using the rate-distortion optimization criterion to obtain the best prediction block. When the best prediction block is the intra prediction block, the residual is transformed, quantized, and entropy-coded to form the bitstream of the current image block; when the best prediction block is the disparity-compensated prediction block, the residual is transformed and quantized and then entropy-coded together with the disparity vector difference to form the bitstream of the current image block. In step VI, the choice is between two modes: the intra prediction mode and the disparity-compensated prediction mode. In disparity-compensated prediction, according to the position of the current image block in the image, the corresponding position is found in the left-view reference frame and offset by the disparity vector to obtain the disparity-compensated prediction block; the left-view reference frame is the frame whose frame number is the same as that of the frame currently being coded. In step VI, when the first frame of the right view is coded, the corresponding first frame of the left view is the left-view reference frame.
In this embodiment, when disparity vector prediction is performed on the current image block to obtain the disparity vector predictor, it is first determined whether the already coded adjacent image blocks of the current image block include an inter-view prediction block; if so, the disparity vector predictor of the current image block is calculated by the median prediction method; if not, it is calculated by the template matching method. In this embodiment, whether the already coded adjacent image blocks of the current image block include an inter-view prediction block is determined by looking up the reference frame index of each already coded adjacent image block to judge whether that image block is an inter-view prediction block.
Referring to Fig. 6, assume the current image block is B6; its already coded adjacent image blocks are B1, B2, B3, and B5. If it is determined that the already coded adjacent image blocks of the current image block B6 include inter-view prediction blocks, e.g. B1 and B2 with corresponding disparity vectors D1 and D2, then the disparity vector predictor D6p of the current image block B6 is estimated from its adjacent inter-view prediction blocks by the median prediction method:
D6p = f(D1, D2).
If it is determined that the already coded adjacent image blocks of the current image block B6 include no inter-view prediction block, i.e., the already coded adjacent image blocks B1, B2, B3, and B5 of the current image block B6 contain only temporal prediction blocks and no disparity vector is available, then the judging unit 104 controls the disparity vector prediction unit to use the decoded blocks of B1, B2, B3, and B5 as templates and to search, by the template matching method, the reconstructed image of the corresponding main-view reference frame for the best matching blocks B1', B2', B3', and B5', thereby obtaining the disparity vectors D1, D2, D3, and D5 of B1, B2, B3, and B5, and hence the disparity vector predictor D6p of the current image block B6:
D6p = f(D1, D2, D3, D5).
When the corresponding main-view reference frame image is searched for the best matching block of an adjacent image block, the sum of absolute differences between the adjacent image block and each search block in the main-view reference frame image is calculated, and the image block found to have the minimal sum of absolute differences is determined to be the best matching block.
In this embodiment, when the disparity vector predictor is calculated, the median function is chosen as the function f, i.e.:
D6p = median(D2, D3, D5).
Referring to Fig. 7, this embodiment uses inverted-"L" template matching: the already coded adjacent image blocks of the current image block P form an inverted-"L" template, the template block size being 4*4 pixels. The sub-image block covered by the template "L" as it is translated within the search window of the main-view reconstructed image is denoted L'ij, where i and j are the coordinates of the top-left vertex of the sub-image block in the main-view image; the template matching process is completed by comparing the similarity between L and L'ij.
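The SAD-based search underlying the template matching above can be sketched as follows, operating on grayscale frames as 2-D lists. The exhaustive full-frame search window, the 4*4 block size default, and the function names are illustrative assumptions, not details fixed by the application:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized pixel blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def crop(frame, top, left, size):
    """Extract a size*size sub-block whose top-left corner is (top, left)."""
    return [row[left:left + size] for row in frame[top:top + size]]

def best_match(template, ref_frame, size=4):
    """Exhaustively search ref_frame for the sub-block minimizing SAD
    against the template; return the top-left (i, j) of that sub-block."""
    height, width = len(ref_frame), len(ref_frame[0])
    best_pos, best_cost = (0, 0), float("inf")
    for i in range(height - size + 1):
        for j in range(width - size + 1):
            cost = sad(template, crop(ref_frame, i, j, size))
            if cost < best_cost:
                best_pos, best_cost = (i, j), cost
    return best_pos
```

The displacement between the template's position in the current frame and the best-match position then gives the reference block's disparity vector (or, with the previous frame of the same view as `ref_frame`, its motion vector).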
It should be noted that the image blocks in the first row and the first column of a frame are special in that no reference image blocks can be selected for calculating the motion vector predictor and the disparity vector predictor. Referring to Fig. 6, when the first image block (B1) is coded, the intra prediction mode is used directly; when the other image blocks of the first row and the first column are coded, the conventional motion vector prediction and disparity vector prediction method (the median prediction method) is used to calculate the motion vector predictor and the disparity vector predictor.
Step VII: decode the first frame of the right view. When the coding mode selected in step VI is the intra prediction mode, the bitstream is entropy-decoded, inverse-quantized, and inverse-transformed to obtain the residual; the intra prediction block of the current image block is obtained by intra prediction, the intra prediction block is added to the residual, and filtering is performed to obtain the decoded image block, thereby yielding the decoded image of the first frame of the right view.
When the coding mode selected in step VI is the disparity-compensated prediction mode, the bitstream is entropy-decoded, inverse-quantized, and inverse-transformed to obtain the residual and the disparity vector difference; the disparity vector predictor of the current image block is obtained by disparity vector prediction, the disparity vector difference is added to the disparity vector predictor to obtain the disparity vector, disparity compensation is performed according to the disparity vector and the main-view reference frame to obtain the disparity-compensated prediction block, which is added to the residual and filtered to obtain the decoded image block, thereby yielding the decoded image of the first frame of the right view.
In step VII, the disparity vector predictor of the current image block is calculated on the same principle as in step VI, and details are not repeated here.
Step VIII: code the second frame of the right view. Specifically, intra prediction is performed on the current image block to obtain its intra prediction block. Motion estimation is performed on the current image block to obtain a motion vector, motion vector prediction is performed to obtain a motion vector predictor, and the motion vector predictor is subtracted from the motion vector to obtain a motion vector difference; at the same time, motion-compensated prediction is performed on the current image block to obtain a motion-compensated prediction block. Disparity estimation is performed on the current image block to obtain a disparity vector, disparity vector prediction is performed to obtain a disparity vector predictor, and the disparity vector predictor is subtracted from the disparity vector to obtain a disparity vector difference; at the same time, disparity-compensated prediction is performed on the current image block to obtain a disparity-compensated prediction block. Mode selection is performed on the current image block using the rate-distortion optimization criterion to obtain the best prediction block. When the best prediction block is the intra prediction block, the residual is transformed, quantized, and entropy-coded to form the bitstream of the current image block; when the best prediction block is the motion-compensated prediction block, the residual is transformed and quantized and then entropy-coded together with the motion vector difference to form the bitstream of the current image block; when the best prediction block is the disparity-compensated prediction block, the residual is transformed and quantized and then entropy-coded together with the disparity vector difference to form the bitstream of the current image block.
In step VIII, the choice is among three modes: the intra prediction mode, the motion-compensated prediction mode, and the disparity-compensated prediction mode. In motion-compensated prediction, according to the position of the current image block in the image, the corresponding position is found in the temporally previous frame and offset by the motion vector to obtain the motion-compensated prediction block.
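The offset-and-copy operation of compensated prediction described above can be sketched as follows. Integer-pel vectors and an offset block lying fully inside the reference frame are simplifying assumptions of this sketch (real codecs handle sub-pel interpolation and boundary padding):

```python
def compensated_block(ref_frame, top, left, vector, size=4):
    """Fetch the prediction block for the current block at (top, left):
    the co-located position in ref_frame offset by the (dy, dx) vector.

    For motion-compensated prediction, ref_frame is the previous frame of
    the same view; for disparity-compensated prediction, it is the
    main-view reference frame with the same frame number.
    """
    dy, dx = vector
    # Copy the size*size block starting at the offset position.
    return [row[left + dx:left + dx + size]
            for row in ref_frame[top + dy:top + dy + size]]
```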
When motion vector prediction is performed on the current image block to obtain the motion vector predictor, it is first determined whether the already coded adjacent image blocks of the current image block include a temporal prediction block; if so, the motion vector predictor of the current image block is calculated by the median prediction method; if not, it is calculated by the template matching method. Whether the already coded adjacent image blocks of the current image block include a temporal prediction block is determined by looking up the reference frame index of each already coded adjacent image block to judge whether that image block is a temporal prediction block.
Referring to Fig. 6, assume the current image block is B11; its already coded adjacent image blocks are B6, B7, B8, and B10. If it is determined that the already coded adjacent image blocks of the current image block B11 include temporal prediction blocks, e.g. B6 and B7 with corresponding motion vectors M6 and M7, then the motion vector predictor M11p of the current image block B11 is estimated from its adjacent temporal prediction blocks by the median prediction method:
M11p = f(M6, M7).
If it is determined that the already coded adjacent image blocks of the current image block B11 include no temporal prediction block, i.e., the already coded adjacent image blocks B6, B7, B8, and B10 of the current image block B11 contain only inter-view prediction blocks and no motion vector is available, then the decoded blocks of B6, B7, B8, and B10 are used as templates, and the template matching method is used to search the reconstructed image of the previous frame of the auxiliary view for the best matching blocks B6', B7', B8', and B10', thereby obtaining the motion vectors M6, M7, M8, and M10 of B6, B7, B8, and B10, and hence the motion vector predictor M11p of the current image block B11:
M11p = f(M6, M7, M8, M10).
When the motion vector prediction unit searches the reconstructed image of the previous frame of the auxiliary view for the best matching block of an adjacent image block, the sum of absolute differences between the adjacent image block and each search block in that reconstructed image is calculated, and the image block found to have the minimal sum of absolute differences is determined to be the best matching block.
In this embodiment, when the motion vector predictor is calculated, the median function is chosen as the function f, i.e.:
M11p = median(M7, M8, M10).
In step VIII, the method of performing disparity vector prediction on the current image block to obtain the disparity vector predictor is the same as in step VI and is not repeated here.
Step IX: decode the second frame of the right view. When the coding mode selected in step VIII is the intra prediction mode, the bitstream is entropy-decoded, inverse-quantized, and inverse-transformed to obtain the residual; the intra prediction block of the current image block is obtained by intra prediction, the intra prediction block is added to the residual, and filtering is performed to obtain the decoded image block, thereby yielding the decoded image of the second frame of the right view.
When the coding mode selected in step VIII is the disparity-compensated prediction mode, the bitstream is entropy-decoded, inverse-quantized, and inverse-transformed to obtain the residual and the disparity vector difference; the disparity vector predictor of the current image block is obtained by disparity vector prediction, the disparity vector difference is added to the disparity vector predictor to obtain the disparity vector, disparity compensation is performed according to the disparity vector and the main-view reference frame to obtain the disparity-compensated prediction block, which is added to the residual and filtered to obtain the decoded image block, thereby yielding the decoded image of the second frame of the right view.
When the coding mode selected in step VIII is the motion-compensated prediction mode, the bitstream is entropy-decoded, inverse-quantized, and inverse-transformed to obtain the residual and the motion vector difference; the motion vector predictor of the current image block is obtained by motion vector prediction, the motion vector difference is added to the motion vector predictor to obtain the motion vector, motion compensation is performed according to the motion vector and the previous frame to obtain the motion-compensated prediction block, which is added to the residual and filtered to obtain the decoded image block, thereby yielding the decoded image of the second frame of the right view.
In step IX, the motion vector predictor and the disparity vector predictor of the current image block are calculated on the same principle as in step VIII, and details are not repeated here.
Step X: repeat steps VIII and IX to code and decode the frames following the second frame of the right view until all frames of the right-view video signal have been coded and decoded.
In the motion vector prediction and disparity vector prediction method for multi-view video coding provided by the present application, it is first determined whether the reference image blocks of the current image block include a temporal prediction block or an inter-view prediction block, and accordingly either the median prediction method or the template matching method is selected to calculate the motion vector predictor and the disparity vector predictor of the current image block. This avoids the degradation in accuracy and efficiency of motion vector or disparity vector prediction caused by reference image blocks lacking motion vectors or disparity vectors, thereby improving the accuracy of the motion vector predictor and the disparity vector predictor and improving coding efficiency.
Those skilled in the art will understand that all or part of the steps of the methods in the above embodiments may be implemented by a program instructing the relevant hardware, and the program may be stored in a computer-readable storage medium; the storage medium may include a read-only memory, a random access memory, a magnetic disk, an optical disc, or the like.
The foregoing further describes the present application in detail with reference to specific embodiments, and the specific implementation of the present application shall not be considered limited to these descriptions. A person of ordinary skill in the art to which the present application pertains may make a number of simple deductions or substitutions without departing from the inventive concept of the present application.
Claims (10)
- A motion vector prediction method for multi-view video coding, characterized by comprising: partitioning a video frame to be coded into macroblocks; determining whether the reference image blocks of the current image block to be coded include a temporal prediction block, the temporal prediction block being an image block coded in the motion-compensated prediction mode; when the reference image blocks are determined to include at least one temporal prediction block, performing motion vector prediction on the current image block using the median prediction method to obtain the motion vector predictor of the current image block; otherwise performing motion vector prediction on the current image block using the template matching method to obtain the motion vector predictor of the current image block.
- The method according to claim 1, characterized in that the template matching method comprises: searching the frame preceding the frame containing the current image block for the best matching block of a reference image block so as to calculate the motion vector of the reference image block, and calculating the motion vector predictor of the current image block with the motion vector of the reference image block as a reference.
- The method according to claim 2, characterized in that the best matching block is the image block in the frame preceding the frame containing the current image block whose sum of absolute differences with the reference image block is minimal.
- The method according to claim 1, characterized in that performing motion vector prediction on the current image block using the median prediction method to obtain the motion vector predictor of the current image block, and performing motion vector prediction on the current image block using the template matching method to obtain the motion vector predictor of the current image block, comprise calculating the motion vector predictor of the current image block using a median function.
- The method according to any one of claims 1 to 4, characterized in that the reference image blocks are the already coded adjacent image blocks of the current image block.
- A disparity vector prediction method for multi-view video coding, characterized by comprising: partitioning a video frame to be coded into macroblocks; determining whether the reference image blocks of the current image block to be coded include an inter-view prediction block, the inter-view prediction block being an image block coded in the disparity-compensated prediction mode; when the reference image blocks are determined to include at least one inter-view prediction block, performing disparity vector prediction on the current image block using the median prediction method to obtain the disparity vector predictor of the current image block; otherwise performing disparity vector prediction on the current image block using the template matching method to obtain the disparity vector predictor of the current image block.
- The method according to claim 6, characterized in that the template matching method comprises: searching the main-view reference frame image for the best matching block of a reference image block so as to calculate the disparity vector of the reference image block, and calculating the disparity vector predictor of the current image block with the disparity vector of the reference image block as a reference; the main-view reference frame image being the frame image in the main view whose frame number is the same as that of the frame containing the current image block.
- The method according to claim 7, characterized in that the best matching block is the image block in the main-view reference frame image whose sum of absolute differences with the reference image block is minimal.
- The method according to claim 6, characterized in that performing disparity vector prediction on the current image block using the median prediction method to obtain the disparity vector predictor of the current image block, and performing disparity vector prediction on the current image block using the template matching method to obtain the disparity vector predictor of the current image block, comprise calculating the disparity vector predictor of the current image block using a median function.
- The method according to any one of claims 6 to 9, characterized in that the reference image blocks are the already coded adjacent image blocks of the current image block.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210211415.4A CN102801995B (zh) | 2012-06-25 | 2012-06-25 | Template-matching-based multi-view video motion and disparity vector prediction method |
CN201210211415.4 | 2012-06-25 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014000636A1 true WO2014000636A1 (zh) | 2014-01-03 |
Family
ID=47200950
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2013/077924 WO2014000636A1 (zh) | Motion vector prediction and disparity vector prediction method for multi-view video coding |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN102801995B (zh) |
WO (1) | WO2014000636A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112906475A (zh) * | 2021-01-19 | 2021-06-04 | 郑州凯闻电子科技有限公司 | Artificial-intelligence-based rolling-shutter imaging method and system for an urban surveying and mapping unmanned aerial vehicle |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102801995B (zh) * | 2012-06-25 | 2016-12-21 | 北京大学深圳研究生院 | Template-matching-based multi-view video motion and disparity vector prediction method |
CN107318027B (zh) * | 2012-12-27 | 2020-08-28 | 日本电信电话株式会社 | Image encoding/decoding method, image encoding/decoding apparatus, and image encoding/decoding program |
CN103747265B (zh) * | 2014-01-03 | 2017-04-12 | 华为技术有限公司 | NBDV acquisition method and video decoding apparatus |
WO2015139206A1 (en) * | 2014-03-18 | 2015-09-24 | Mediatek Singapore Pte. Ltd. | Methods for 3d video coding |
CN104394417B (zh) * | 2014-12-15 | 2017-07-28 | 哈尔滨工业大学 | Disparity vector acquisition method for multi-view video coding |
CN104902256B (zh) * | 2015-05-21 | 2018-01-09 | 南京大学 | Motion-compensation-based binocular stereo image encoding and decoding method |
CN111901590B (zh) * | 2020-06-29 | 2023-04-18 | 北京大学 | Refined motion vector storage method and apparatus for inter prediction |
CN114666600B (zh) * | 2022-02-14 | 2023-04-07 | 北京大学 | Data encoding method and apparatus based on irregular templates, electronic device, and medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20070075043A (ko) * | 2006-01-11 | 2007-07-18 | 연세대학교 산학협력단 | Fast motion and disparity estimation method |
CN101600108A (zh) * | 2009-06-26 | 2009-12-09 | 北京工业大学 | Joint motion and disparity estimation method for multi-view video coding |
CN101686393A (zh) * | 2008-09-28 | 2010-03-31 | 华为技术有限公司 | Fast motion search method and apparatus applied to template matching |
US7822280B2 (en) * | 2007-01-16 | 2010-10-26 | Microsoft Corporation | Epipolar geometry-based motion estimation for multi-view image and video coding |
CN101917619A (zh) * | 2010-08-20 | 2010-12-15 | 浙江大学 | Fast motion estimation method for multi-view video coding |
JP2011193352A (ja) * | 2010-03-16 | 2011-09-29 | Sharp Corp | Multi-view image encoding apparatus |
CN102801995A (zh) * | 2012-06-25 | 2012-11-28 | 北京大学深圳研究生院 | Template-matching-based multi-view video motion and disparity vector prediction method |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004227519A (ja) * | 2003-01-27 | 2004-08-12 | Matsushita Electric Ind Co Ltd | Image processing method |
CN101415122B (zh) * | 2007-10-15 | 2011-11-16 | 华为技术有限公司 | Inter-frame prediction encoding and decoding method and apparatus |
-
2012
- 2012-06-25 CN CN201210211415.4A patent/CN102801995B/zh active Active
-
2013
- 2013-06-25 WO PCT/CN2013/077924 patent/WO2014000636A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN102801995A (zh) | 2012-11-28 |
CN102801995B (zh) | 2016-12-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2014000636A1 (zh) | Motion vector prediction and disparity vector prediction method for multi-view video coding | |
JP4195011B2 (ja) | Stereoscopic video encoding and decoding method, and encoding and decoding apparatus | |
KR101753171B1 (ko) | Simplified view synthesis prediction method for 3D video coding | |
JP5020953B2 (ja) | Predictive encoding/decoding apparatus and method using temporal and inter-view reference picture buffers | |
WO2010068020A2 (ko) | Multi-view image encoding and decoding method and apparatus | |
US5619256A (en) | Digital 3D/stereoscopic video compression technique utilizing disparity and motion compensated predictions | |
WO2009116745A2 (en) | Method and apparatus for encoding and decoding image | |
US20140002594A1 (en) | Hybrid skip mode for depth map coding and decoding | |
JP2016513925A (ja) | Method and apparatus for view synthesis prediction in 3D video coding | |
WO2020058955A1 (en) | Multiple-hypothesis affine mode | |
CN116800961A (zh) | Apparatus for encoding and decoding a video signal and apparatus for transmitting image data | |
EP2923491A1 (en) | Method and apparatus for bi-prediction of illumination compensation | |
BR122021009784A2 (pt) | Método e aparelho de decodificação de imagens com base em predição de movimento afim usando lista de candidatos a mvp afim no sistema de codificação de imagens | |
JP6039178B2 (ja) | Image encoding apparatus, image decoding apparatus, and methods and programs therefor | |
WO2013133648A1 (ko) | Video signal processing method and apparatus | |
WO2013176485A1 (ko) | Video signal processing method and apparatus | |
JP2016501469A (ja) | Method and apparatus for constrained disparity vector derivation in 3D video coding | |
WO2007069487A1 (ja) | Compression encoding and decoding method for multi-view images | |
JP2024113121A (ja) | Image encoding/decoding method and apparatus using partitioning restrictions on chroma blocks, and method for transmitting a bitstream | |
WO2013133587A1 (ko) | Video signal processing method and apparatus | |
JPH10191393A (ja) | Multi-view image encoding apparatus | |
CN116684591A (zh) | Video encoder, video decoder, and corresponding methods | |
WO2015152504A1 (ko) | Method and apparatus for deriving an inter-view motion merge candidate | |
WO2012099352A2 (ko) | Multi-view image encoding/decoding apparatus and method | |
CN112118452A (zh) | Video decoding method and apparatus, and computer device | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13808759 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 13808759 Country of ref document: EP Kind code of ref document: A1 |