CN107396102A - Fast inter-frame mode selection method and device based on Merge motion vectors - Google Patents
- Publication number
- CN107396102A CN107396102A CN201710762301.1A CN201710762301A CN107396102A CN 107396102 A CN107396102 A CN 107396102A CN 201710762301 A CN201710762301 A CN 201710762301A CN 107396102 A CN107396102 A CN 107396102A
- Authority
- CN
- China
- Prior art keywords
- merge
- mode
- inter
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/109—Selection of coding mode or of prediction mode among a plurality of temporal predictive coding modes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/146—Data rate or code amount at the encoder output
- H04N19/147—Data rate or code amount at the encoder output according to rate distortion criteria
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The invention discloses a fast inter-frame mode selection method and device based on Merge motion vectors. The method projects the motion vector (MV) of the current coding unit (CU) onto a reference frame to find the projected block corresponding to the current CU and, exploiting the correlation between the two blocks' prediction modes, uses the characteristics of the projected block to decide whether to skip the motion-estimation and motion-compensation inter mode for the current CU. Compared with the prior art, predicting the current coding unit from the known information of the projected block reduces the computational complexity of the video encoder's inter prediction, shortens encoding time, and improves encoding efficiency. The algorithm is simple and computationally light, and can readily be put into practical use.
Description
Technical Field
The invention belongs to the field of video coding, and in particular relates to a fast inter-frame mode selection method and device based on Merge motion vectors.
Background Art
In the coding framework, predictive coding is one of the core technologies of video coding; it is divided into intra-frame prediction and inter-frame prediction. Intra prediction exploits the spatial correlation of the video image, using already-encoded neighboring pixels within the image to predict the current pixel. Inter coding exploits the temporal correlation between video images, using already-encoded images to predict the image to be encoded. Through intra and inter prediction, the encoder removes the spatio-temporal correlation of the video and applies transform, quantization, and entropy coding to the prediction residual rather than to the original pixel values, thereby greatly improving coding efficiency.
The inter prediction part of all major current video coding standards adopts block-based motion compensation. Its main principle is to find, for each pixel block of the current image, a best matching block in a previously encoded image; this process is called motion estimation. The image used for prediction is called the reference image, the displacement from the reference block to the current pixel block is called the motion vector, and the difference between the current block and the reference block is called the prediction residual. Because of the continuity of a video sequence, motion vectors are usually also correlated in space and time; accordingly, predicting the current block's motion vector from spatially or temporally neighboring motion vectors and encoding only the prediction residual can greatly reduce the number of bits spent on motion vectors. This motion-vector prediction technique is called Merge.
In 2013, ITU-T's VCEG (Video Coding Experts Group) and ISO/IEC's MPEG (Moving Picture Experts Group) jointly released the HEVC (High Efficiency Video Coding) video compression standard. Since 2016, VCEG and MPEG have been studying a new generation of video encoders and established an expert group, JVET (Joint Video Exploration Team), to further improve the compression rate beyond HEVC. The next-generation video coding standard evolved from HEVC, and both use the Merge technique in inter prediction; the difference is that the next-generation standard has three Merge modes: the Affine Merge mode based on affine transformation, the FRUC Merge mode based on template matching, and the 2Nx2N Merge mode based on spatio-temporal correlation. These modes improve the encoder's compression performance but also greatly increase encoding time, hampering the development speed and application value of the standard. At the third meeting on the next-generation standard, a proposal already pointed out this drawback and requested action on its complexity.
The next-generation video coding standard performs inter prediction in the following steps:
Step 1: First evaluate the Affine Merge mode, i.e., affine motion-compensated prediction; save its rate-distortion cost and prediction information, and set the current best mode to Affine Merge.
Step 2: Next evaluate the 2Nx2N Merge mode, i.e., ordinary motion-compensated prediction. If its rate-distortion cost is lower than that of the Affine Merge mode, set the best mode to 2Nx2N Merge and save its rate-distortion cost and prediction information.
Step 3: Then evaluate the FRUC Merge mode, i.e., motion-vector derivation based on template matching. If its rate-distortion cost is lower than that of the current best mode, set the best mode to FRUC Merge and save its rate-distortion cost and prediction information. All three modes above belong to the Merge mode family.
Step 4: Finally evaluate the inter prediction mode with motion estimation and motion compensation, which finds the matching block in the reference frame through a motion search to obtain the motion vector and prediction residual, and is therefore relatively time-consuming.
If the rate-distortion cost of this mode is lower than that of the current best mode, set the best mode to the motion-estimation and motion-compensation inter prediction mode and save its rate-distortion cost and prediction information.
The motion-estimation and motion-compensation inter mode accounts for 41% of the total encoding time. Therefore, if it can be predicted before motion estimation and motion compensation that the best inter mode is one of the three Merge modes, motion estimation and motion compensation can be skipped, saving a large amount of encoding time.
Many fast inter algorithms exist for the HM video encoder: for example, T. Mallikarachchi proposed at the 2014 IEEE International Conference on Image Processing to skip the predictive coding of CUs of specific sizes according to motion homogeneity, and S. Ahn proposed in IEEE Transactions on Circuits and Systems for Video Technology in 2015 to evaluate the texture complexity of the current CU from the sample-adaptive-offset parameters of the co-located CU and to skip certain inter prediction modes accordingly. However, because the next-generation video coding standard adopts the QTBT (quadtree plus binary tree) partitioning structure and abolishes the concept of the prediction unit (PU), the above algorithms are not applicable to it. Other methods, such as variance-based and Bayesian-based ones, are unsuitable for practical use because of their high computational complexity.
The Geneva meeting in May 2016 introduced JEM2.0, the test model of the next-generation video coding standard; at that point the average encoding time of the JEM encoder under the random-access configuration was 5.3 times that of the HEVC encoder. Inter prediction occupies about 68% of the total encoding time; likewise, inter prediction took up a large share of encoding time in previous standards. Inter prediction is therefore an important module for reducing encoding time and leaves much room for improvement: cutting its time would greatly improve encoder efficiency.
Summary of the Invention
The purpose of the present invention is to address the excessive encoding time of inter predictive coding and the deficiencies of the prior art by proposing a fast inter-frame mode selection method based on Merge motion vectors, which shortens the encoding time, improves practical applicability, and also facilitates further research and development.
A fast inter-frame mode selection method based on Merge motion vectors comprises the following steps:
Step 1: Obtain the projected block on the reference frame that corresponds to the current coding unit under its optimal inter prediction mode.
After the current coding unit CU has evaluated the Affine Merge, 2Nx2N Merge, and FRUC Merge modes, the optimal inter prediction mode of the current CU is decided according to the rate-distortion cost.
The motion vector MV of the current CU is obtained from the optimal inter prediction mode; every pixel of the current CU is translated by MV to give a translation block of the same size as the current CU; finally, the translation block is projected into the reference frame to obtain the projected block in the reference frame corresponding to the current CU.
Step 2: Compute the area of the projected block obtained in Step 1 whose inter mode is Merge:
S_M = ∑ f(Mode(x, y))    (1)
where S_M is the area of the projected block whose inter mode is Merge, (x, y) are the coordinates of a pixel in the projected block, and Mode(x, y) is the optimal inter prediction mode of the pixel at (x, y); f(Mode(x, y)) takes 1 when the optimal mode of the pixel at (x, y) is Merge, and 0 otherwise.
Step 3: Compute the total area of the current coding unit CU:
S_C = ∑ g(x1, y1)    (3)
where S_C is the total area of the current CU and Cur_CU denotes the pixel-coordinate range of the current CU; (x1, y1) are the coordinates of a pixel in the current frame, and g(x1, y1) takes 1 when the coordinates of the pixel (x1, y1) fall within the range of the current CU, and 0 otherwise.
Step 4: From the Merge area of the projected block in Step 2 and the total CU area in Step 3, compute the proportion γ of the projected block's area occupied by the Merge mode:
γ = S_M / S_C
Step 5: When the proportion γ of Step 4 exceeds a set threshold λ, skip Step 6 and end the predictive coding of the current CU; otherwise, proceed to Step 6.
Here λ may take any real value in [0, 1].
Step 6: Perform motion-estimation and motion-compensation inter prediction on the current CU.
Further, λ takes the value 0.85. (A minimal sketch of the resulting decision flow is given below.)
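To make Steps 1-6 concrete, the following is a self-contained C++ sketch of the skip decision. It is a minimal illustration under stated assumptions: the type names (MotionVector, CodingUnit, MergeMap), the per-pixel Merge map, and the integer-pixel MV are stand-ins invented for this sketch, not the data structures of the JEM encoder.

```cpp
#include <cstdio>
#include <vector>

// Illustrative stand-ins for encoder state; not the JEM API.
struct MotionVector { int x, y; };   // MV of the best Merge-family mode, integer pixels
struct CodingUnit  { int x, y, width, height; MotionVector mv; };

// Per-pixel map of the reference frame: true where the final inter mode was Merge.
using MergeMap = std::vector<std::vector<bool>>;

// Steps 1-5: project the CU into the reference frame by its MV, measure the
// Merge share gamma = S_M / S_C of the projected block, and decide whether
// the ME/MC inter mode (Step 6) can be skipped.
bool skipMotionEstimation(const CodingUnit& cu, const MergeMap& refMergeMap,
                          double lambda = 0.85)
{
    const int h = static_cast<int>(refMergeMap.size());
    const int w = h > 0 ? static_cast<int>(refMergeMap[0].size()) : 0;

    int sM = 0;                                          // S_M, equation (1)
    for (int dy = 0; dy < cu.height; ++dy)
        for (int dx = 0; dx < cu.width; ++dx) {
            const int px = cu.x + cu.mv.x + dx;          // projected pixel position
            const int py = cu.y + cu.mv.y + dy;
            if (px >= 0 && px < w && py >= 0 && py < h && refMergeMap[py][px])
                ++sM;
        }

    const int sC = cu.width * cu.height;                 // S_C, equation (3)
    const double gamma = static_cast<double>(sM) / sC;   // Step 4
    return gamma > lambda;                               // Step 5: skip when gamma > lambda
}

int main() {
    MergeMap ref(64, std::vector<bool>(64, true));       // toy 64x64 reference frame
    CodingUnit cu{8, 8, 16, 16, {4, 2}};
    std::printf("skip ME/MC inter mode: %s\n",
                skipMotionEstimation(cu, ref) ? "yes" : "no");
}
```

The per-pixel count here mirrors equations (1) and (3) directly; in the embodiment below, the same areas are accumulated at the 4x4 granularity at which the encoder actually stores mode information.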
A fast inter-frame mode selection device based on Merge motion vectors comprises:
a projected-block acquisition unit, which obtains the projected block on the reference frame that corresponds to the current coding unit under its optimal inter prediction mode;
after the current coding unit CU has evaluated the Affine Merge, 2Nx2N Merge, and FRUC Merge modes, the optimal inter prediction mode of the current CU is decided according to the rate-distortion cost;
the motion vector MV of the current CU is obtained from the optimal inter prediction mode; every pixel of the current CU is translated by MV to give a translation block of the same size as the current CU; finally, the translation block is projected into the reference frame to obtain the projected block in the reference frame corresponding to the current CU;
a Merge-area calculation unit, which computes, from the inter mode of each pixel of the projected block, the area whose inter mode is Merge:
S_M = ∑ f(Mode(x, y))
where S_M is the area of the projected block whose inter mode is Merge, (x, y) are the coordinates of a pixel in the projected block, and Mode(x, y) is the optimal inter prediction mode of the pixel at (x, y); f(Mode(x, y)) takes 1 when the optimal mode of the pixel at (x, y) is Merge, and 0 otherwise;
a total-area calculation unit, which computes the total area of the current CU according to whether each pixel of the current frame belongs to the current CU:
S_C = ∑ g(x1, y1)
where S_C is the total area of the current CU and Cur_CU denotes the pixel-coordinate range of the current CU; (x1, y1) are the coordinates of a pixel in the current frame, and g(x1, y1) takes 1 when the coordinates of the pixel (x1, y1) fall within the range of the current CU, and 0 otherwise;
a Merge-proportion calculation unit, which computes, from the Merge area of the projected block and the total CU area, the proportion γ of the projected block's area occupied by the Merge mode:
γ = S_M / S_C
a skipping unit, which, when the proportion γ exceeds a set threshold λ, skips the motion-estimation and motion-compensation inter prediction of the current CU and ends the predictive coding of the current CU;
where λ may take any real value in [0, 1].
Further, the threshold λ in the skipping unit takes the value 0.85.
Beneficial Effects
The present invention provides a fast inter-frame mode selection method and device based on Merge motion vectors. The method projects the motion vector MV of the current coding unit CU onto a reference frame to find the projected block in the reference frame corresponding to the current CU and, exploiting the correlation between the two blocks' prediction modes, uses the characteristics of the projected block to decide whether to skip the motion-estimation and motion-compensation inter mode for the current CU. Compared with the prior art, predicting the current coding unit from the known information of the projected block reduces the computational complexity of the video encoder's inter prediction, shortens encoding time, and improves encoding efficiency; moreover, the algorithm of the invention is simple and computationally light, and can readily be put into practical use.
Brief Description of the Drawings
Fig. 1 illustrates the correspondence between the current coding unit and its projected block together with the motion vector, where (a) shows the correspondence and (b) is the motion-vector diagram;
Fig. 2 shows how CU information is stored;
Fig. 3 is a flowchart of the method of the present invention.
Detailed Description
The technical solution of the present invention is described in detail below through a preferred embodiment in conjunction with the drawings. The encoder used in the embodiment is JEM4.0, the test model released by the expert group for the next-generation video coding standard; the encoding parameters are configured with the JEM standard configuration file encoder_randomaccess_jvet10.cfg and the standard configuration file of the corresponding test sequence.
To reduce encoding time and improve efficiency, the technical solution adopted by the present invention is as follows: the motion vector MV of the current CU (coding block) is projected onto the reference frame to find the projected block in the reference frame corresponding to the current CU; in theory the projected block can be regarded, approximately, as having moved by the displacement of the motion vector MV to the position of the currently coded CU in the current frame (see the left part of Fig. 1). Some properties of the projected block should therefore agree with those of the currently coded CU, such as the pixel distribution and the inter prediction mode. Based on the similarity of the two blocks' prediction modes, the present invention sets a threshold (hereinafter the skip threshold) and uses it to decide whether to skip the motion-estimation and motion-compensation inter mode.
As shown in Fig. 3, the specific method of the present invention is as follows:
Step 1: After the JEM encoder has evaluated the Affine Merge, 2Nx2N Merge, and FRUC Merge modes for the current CU, it decides an optimal mode according to the rate-distortion cost. First, the current CU is translated according to the motion vector MV of the optimal mode (see the right part of Fig. 1). The motion vector consists of a horizontal displacement component MVx and a vertical displacement component MVy. The translation first records the vertex coordinates of the current CU and the CU's width and height, denoted (x, y), width, and height respectively; the vertex of the translation block in the reference frame is then (x + MVx, y + MVy), and the size of the translation block equals that of the currently coded CU. The translation block is then projected into the reference frame (as shown in Fig. 1(a)). (A minimal sketch of this translation is given below.)
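As a minimal sketch of this translation step (the Rect type and the function name are illustrative, not part of JEM):

```cpp
// A rectangle in frame coordinates: top-left vertex, width, height.
struct Rect { int x, y, width, height; };

// Step 1 translation: shift the CU at (x, y) by its motion vector (mvX, mvY).
// The size of the translation block equals that of the current CU.
Rect projectCU(const Rect& cu, int mvX, int mvY)
{
    return Rect{ cu.x + mvX, cu.y + mvY, cu.width, cu.height };
}
```

The returned rectangle is the region of the reference frame over which the Merge area is counted in Step 2.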
Step 2: Count the area of the projected block obtained in Step 1 whose final inter mode is Merge. Because the next-generation video coding standard stores mode information in units of 4x4-pixel blocks rather than per pixel (see Fig. 2), the statistics are gathered by traversing the sub-blocks of the projected block, and the area is computed with the following formula.
S_M = ∑ f(Mode(x, y))
where S_M is the area of the projected block whose final inter mode is Merge, (x, y) are the coordinates of a pixel in the projected block, and Mode(x, y) is the optimal inter prediction mode of the pixel at (x, y) in the projected block; f(Mode(x, y)) takes 1 when the optimal mode of the pixel at (x, y) in the projected block is Merge, and 0 otherwise. (A sketch of the 4x4-granularity traversal is given below.)
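A sketch of that traversal, assuming a hypothetical isMergeAt4x4 lookup into the encoder's 4x4-granularity mode map (not JEM's actual accessor); bounds checks against the frame edges are omitted for brevity:

```cpp
#include <functional>

// S_M accumulated at 4x4 granularity: mode information is stored per 4x4
// block, so each sub-block whose stored inter mode is Merge contributes
// 4x4 = 16 pixels of area.
int mergeArea4x4(int projX, int projY, int width, int height,
                 const std::function<bool(int, int)>& isMergeAt4x4)
{
    int sM = 0;
    for (int y = projY; y < projY + height; y += 4)
        for (int x = projX; x < projX + width; x += 4)
            if (isMergeAt4x4(x / 4, y / 4))   // coordinates on the 4x4 grid
                sM += 16;
    return sM;
}
```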
Step 3: Compute the total area of the current coding unit CU. As in Step 2, the total CU area is counted by traversing the sub-blocks of the current CU to obtain the number of pixels within it. The calculation is as follows:
S_C = ∑ g(x1, y1)
where S_C is the total area of the current CU, Cur_CU is the coordinate range of the current CU, and (x1, y1) are the coordinates of a pixel in the image; g(x1, y1) takes 1 when the pixel's coordinates fall within the range of the current CU, and 0 otherwise.
Step 4: From the Merge area of the projected block in Step 2 and the total CU area in Step 3, compute the proportion γ of the projected block's area occupied by the Merge mode.
For any projected block, the proportion of its area occupied by the Merge mode is obtained from:
γ = S_M / S_C
Step 5: When the proportion γ of Step 4 exceeds the threshold λ, the best inter mode of the current CU is also very likely to be Merge; Step 6 is therefore skipped and the predictive coding of the current CU ends. Here λ may take any real value in [0, 1]: when high video quality is required and the encoding-time requirement is loose, λ may take a larger value within the range, and conversely a smaller one. Extensive experimental statistics show that λ = 0.85 strikes a good balance between video quality and encoding time. (The resulting test is sketched below.)
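The skip test of Step 5 then reduces to a single comparison; a sketch with the experimentally chosen default λ = 0.85:

```cpp
// Step 5: skip the ME/MC inter mode when the Merge share gamma = S_M / S_C
// of the projected block exceeds lambda. A larger lambda favors quality,
// a smaller one favors encoding speed; 0.85 balances the two here.
inline bool shouldSkipMeMc(int sM, int sC, double lambda = 0.85)
{
    return sC > 0 && static_cast<double>(sM) / sC > lambda;
}
```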
Step 6: Perform motion-estimation and motion-compensation inter prediction.
A fast inter-frame mode selection device based on Merge motion vectors comprises:
a projected-block acquisition unit, which obtains the projected block on the reference frame that corresponds to the current coding unit under its optimal inter prediction mode;
after the current coding unit CU has evaluated the Affine Merge, 2Nx2N Merge, and FRUC Merge modes, the optimal inter prediction mode of the current CU is decided according to the rate-distortion cost;
the motion vector MV of the current CU is obtained from the optimal inter prediction mode; every pixel of the current CU is translated by MV to give a translation block of the same size as the current CU; finally, the translation block is projected into the reference frame to obtain the projected block in the reference frame corresponding to the current CU;
a Merge-area calculation unit, which computes, from the inter mode of each pixel of the projected block, the area whose inter mode is Merge:
S_M = ∑ f(Mode(x, y))
where S_M is the area of the projected block whose inter mode is Merge, (x, y) are the coordinates of a pixel in the projected block, and Mode(x, y) is the optimal inter prediction mode of the pixel at (x, y); f(Mode(x, y)) takes 1 when the optimal mode of the pixel at (x, y) is Merge, and 0 otherwise;
a total-area calculation unit, which computes the total area of the current CU according to whether each pixel of the current frame belongs to the current CU:
S_C = ∑ g(x1, y1)
where S_C is the total area of the current CU and Cur_CU denotes the pixel-coordinate range of the current CU; (x1, y1) are the coordinates of a pixel in the current frame, and g(x1, y1) takes 1 when the coordinates of the pixel (x1, y1) fall within the range of the current CU, and 0 otherwise;
a Merge-proportion calculation unit, which computes, from the Merge area of the projected block and the total CU area, the proportion γ of the projected block's area occupied by the Merge mode:
γ = S_M / S_C
a skipping unit, which, when the proportion γ exceeds a set threshold λ, skips the motion-estimation and motion-compensation inter prediction of the current CU and ends the predictive coding of the current CU;
where λ may take any real value in [0, 1].
The threshold λ in the skipping unit takes the value 0.85.
To verify the feasibility and effectiveness of the proposed fast inter algorithm, the algorithm described above was implemented on JEM4.0, the test model of the next-generation video coding standard. All final data were obtained on the university's high-performance computing platform, ensuring that the experimental data are genuine and accurate. All experiments were configured with the JEM standard configuration file encoder_randomaccess_jvet10.cfg and the standard configuration file of the corresponding test sequence.
The experimental results are shown in Table 1, where QP is the quantization parameter, ΔBits% is the percentage change in bit rate relative to the conventional encoder, ΔPSNR/dB is the change in peak signal-to-noise ratio relative to the conventional encoder, and TS/% is the percentage of encoding time saved relative to the conventional encoder. ΔBDBR expresses the bit-rate saving of the improved encoder relative to the conventional encoder at the same objective quality; the smaller ΔBDBR is, the better the algorithm performs.
Table 1. Experimental results
The experimental results of the proposed fast inter algorithm, obtained through simulation, are shown in Table 1. Table 1 shows that the algorithm achieves the goal of improving coding efficiency while preserving video quality.
The specific embodiments described herein merely illustrate the spirit of the present invention. Those skilled in the art may make various modifications or supplements to the described embodiments, or substitute them in similar ways, without departing from the spirit of the present invention or exceeding the scope defined by the appended claims.
Claims (4)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710762301.1A CN107396102B (en) | 2017-08-30 | 2017-08-30 | A kind of inter-frame mode fast selecting method and device based on Merge technological movement vector |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710762301.1A CN107396102B (en) | 2017-08-30 | 2017-08-30 | A kind of inter-frame mode fast selecting method and device based on Merge technological movement vector |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107396102A true CN107396102A (en) | 2017-11-24 |
CN107396102B CN107396102B (en) | 2019-10-08 |
Family
ID=60348165
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710762301.1A Active CN107396102B (en) | 2017-08-30 | 2017-08-30 | A kind of inter-frame mode fast selecting method and device based on Merge technological movement vector |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107396102B (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108174204A (en) * | 2018-03-06 | 2018-06-15 | 中南大学 | A fast mode selection method between frames based on decision tree |
CN108347616A (en) * | 2018-03-09 | 2018-07-31 | 中南大学 | A kind of depth prediction approach and device based on optional time domain motion-vector prediction |
CN110662041A (en) * | 2018-06-29 | 2020-01-07 | 北京字节跳动网络技术有限公司 | Extending interactions between Merge modes and other video coding tools |
CN110809156A (en) * | 2018-08-04 | 2020-02-18 | 北京字节跳动网络技术有限公司 | Interaction between different decoder-side motion vector derivation modes |
CN111698502A (en) * | 2020-06-19 | 2020-09-22 | 中南大学 | VVC (variable visual code) -based affine motion estimation acceleration method and device and storage medium |
CN112637592A (en) * | 2020-12-11 | 2021-04-09 | 百果园技术(新加坡)有限公司 | Method and device for video predictive coding |
CN112839224A (en) * | 2019-11-22 | 2021-05-25 | 腾讯科技(深圳)有限公司 | Prediction mode selection method and device, video coding equipment and storage medium |
CN114339231A (en) * | 2021-12-27 | 2022-04-12 | 杭州当虹科技股份有限公司 | Method for fast jumping Cu-level mode selection by utilizing motion vector |
US11778170B2 (en) | 2018-10-06 | 2023-10-03 | Beijing Bytedance Network Technology Co., Ltd | Temporal gradient calculations in bio |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080165855A1 (en) * | 2007-01-08 | 2008-07-10 | Nokia Corporation | inter-layer prediction for extended spatial scalability in video coding |
CN103338372A (en) * | 2013-06-15 | 2013-10-02 | 浙江大学 | Method and device for processing video |
CN103379324A (en) * | 2012-04-16 | 2013-10-30 | 乐金电子(中国)研究开发中心有限公司 | Parallel realization method, device and system for advanced motion vector prediction AMVP |
CN104038764A (en) * | 2014-06-27 | 2014-09-10 | 华中师范大学 | H.264-to-H.265 video transcoding method and transcoder |
CN104601988A (en) * | 2014-06-10 | 2015-05-06 | 腾讯科技(北京)有限公司 | Video coder, method and device and inter-frame mode selection method and device thereof |
US20150222904A1 (en) * | 2011-03-08 | 2015-08-06 | Texas Instruments Incorporated | Parsing friendly and error resilient merge flag coding in video coding |
CN105959611A (en) * | 2016-07-14 | 2016-09-21 | 同观科技(深圳)有限公司 | Adaptive H264-to-HEVC (High Efficiency Video Coding) inter-frame fast transcoding method and apparatus |
TW201637449A (en) * | 2015-01-29 | 2016-10-16 | Vid衡器股份有限公司 | Intra-block copy searching |
US20160373766A1 (en) * | 2015-06-22 | 2016-12-22 | Cisco Technology, Inc. | Block-based video coding using a mixture of square and rectangular blocks |
- 2017-08-30: CN application CN201710762301.1A granted as patent CN107396102B (active)
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080165855A1 (en) * | 2007-01-08 | 2008-07-10 | Nokia Corporation | inter-layer prediction for extended spatial scalability in video coding |
US20150222904A1 (en) * | 2011-03-08 | 2015-08-06 | Texas Instruments Incorporated | Parsing friendly and error resilient merge flag coding in video coding |
CN103379324A (en) * | 2012-04-16 | 2013-10-30 | 乐金电子(中国)研究开发中心有限公司 | Parallel realization method, device and system for advanced motion vector prediction AMVP |
CN103338372A (en) * | 2013-06-15 | 2013-10-02 | 浙江大学 | Method and device for processing video |
CN104601988A (en) * | 2014-06-10 | 2015-05-06 | 腾讯科技(北京)有限公司 | Video coder, method and device and inter-frame mode selection method and device thereof |
CN104038764A (en) * | 2014-06-27 | 2014-09-10 | 华中师范大学 | H.264-to-H.265 video transcoding method and transcoder |
TW201637449A (en) * | 2015-01-29 | 2016-10-16 | Vid衡器股份有限公司 | Intra-block copy searching |
US20160373766A1 (en) * | 2015-06-22 | 2016-12-22 | Cisco Technology, Inc. | Block-based video coding using a mixture of square and rectangular blocks |
CN105959611A (en) * | 2016-07-14 | 2016-09-21 | 同观科技(深圳)有限公司 | Adaptive H264-to-HEVC (High Efficiency Video Coding) inter-frame fast transcoding method and apparatus |
Non-Patent Citations (1)
Title |
---|
Huang Han: "Research on HEVC Inter-frame and Intra-frame Prediction and Optimization Technology", China Masters' Theses Full-text Database *
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108174204A (en) * | 2018-03-06 | 2018-06-15 | 中南大学 | A fast mode selection method between frames based on decision tree |
CN108347616A (en) * | 2018-03-09 | 2018-07-31 | 中南大学 | A kind of depth prediction approach and device based on optional time domain motion-vector prediction |
CN108347616B (en) * | 2018-03-09 | 2020-02-14 | 中南大学 | Depth prediction method and device based on optional time domain motion vector prediction |
CN110662041B (en) * | 2018-06-29 | 2022-07-29 | 北京字节跳动网络技术有限公司 | Method and apparatus for video bitstream processing, method of storing video bitstream, and non-transitory computer-readable recording medium |
CN110662041A (en) * | 2018-06-29 | 2020-01-07 | 北京字节跳动网络技术有限公司 | Extending interactions between Merge modes and other video coding tools |
US11451819B2 (en) | 2018-08-04 | 2022-09-20 | Beijing Bytedance Network Technology Co., Ltd. | Clipping of updated MV or derived MV |
CN110809156B (en) * | 2018-08-04 | 2022-08-12 | 北京字节跳动网络技术有限公司 | Interaction between different decoder-side motion vector derivation modes |
US12120340B2 (en) | 2018-08-04 | 2024-10-15 | Beijing Bytedance Network Technology Co., Ltd | Constraints for usage of updated motion information |
US11109055B2 (en) | 2018-08-04 | 2021-08-31 | Beijing Bytedance Network Technology Co., Ltd. | MVD precision for affine |
US11470341B2 (en) | 2018-08-04 | 2022-10-11 | Beijing Bytedance Network Technology Co., Ltd. | Interaction between different DMVD models |
US11330288B2 (en) | 2018-08-04 | 2022-05-10 | Beijing Bytedance Network Technology Co., Ltd. | Constraints for usage of updated motion information |
CN110809156A (en) * | 2018-08-04 | 2020-02-18 | 北京字节跳动网络技术有限公司 | Interaction between different decoder-side motion vector derivation modes |
CN110809155A (en) * | 2018-08-04 | 2020-02-18 | 北京字节跳动网络技术有限公司 | Restriction using updated motion information |
US11778170B2 (en) | 2018-10-06 | 2023-10-03 | Beijing Bytedance Network Technology Co., Ltd | Temporal gradient calculations in bio |
CN112839224B (en) * | 2019-11-22 | 2023-10-10 | 腾讯科技(深圳)有限公司 | Prediction mode selection method and device, video coding equipment and storage medium |
CN112839224A (en) * | 2019-11-22 | 2021-05-25 | 腾讯科技(深圳)有限公司 | Prediction mode selection method and device, video coding equipment and storage medium |
CN111698502A (en) * | 2020-06-19 | 2020-09-22 | 中南大学 | VVC (variable visual code) -based affine motion estimation acceleration method and device and storage medium |
WO2022121786A1 (en) * | 2020-12-11 | 2022-06-16 | 百果园技术(新加坡)有限公司 | Video predictive coding method and apparatus |
CN112637592B (en) * | 2020-12-11 | 2024-07-05 | 百果园技术(新加坡)有限公司 | Video predictive coding method and device |
CN112637592A (en) * | 2020-12-11 | 2021-04-09 | 百果园技术(新加坡)有限公司 | Method and device for video predictive coding |
CN114339231A (en) * | 2021-12-27 | 2022-04-12 | 杭州当虹科技股份有限公司 | Method for fast jumping Cu-level mode selection by utilizing motion vector |
CN114339231B (en) * | 2021-12-27 | 2023-10-27 | 杭州当虹科技股份有限公司 | Method for rapidly jumping Cu-level mode selection by utilizing motion vector |
Also Published As
Publication number | Publication date |
---|---|
CN107396102B (en) | 2019-10-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107396102B (en) | A kind of inter-frame mode fast selecting method and device based on Merge technological movement vector | |
US11363294B2 (en) | Image processing method and image processing device | |
CN111385569B (en) | Coding and decoding method and equipment thereof | |
CN107147911B (en) | Method and device for fast inter-frame coding mode selection based on local luminance compensation LIC | |
CN104935938B (en) | Inter-frame prediction method in a kind of hybrid video coding standard | |
CN101600108B (en) | Joint estimation method for movement and parallax error in multi-view video coding | |
CN101888546B (en) | A kind of method of estimation and device | |
CN110087087A (en) | VVC interframe encode unit prediction mode shifts to an earlier date decision and block divides and shifts to an earlier date terminating method | |
CN108347616A (en) | A kind of depth prediction approach and device based on optional time domain motion-vector prediction | |
CN110519600A (en) | Unified prediction, device, codec and storage device between intra frame | |
US20120076207A1 (en) | Multiple-candidate motion estimation with advanced spatial filtering of differential motion vectors | |
WO2015010319A1 (en) | P frame-based multi-hypothesis motion compensation encoding method | |
CN108174204A (en) | A fast mode selection method between frames based on decision tree | |
CN109688411B (en) | A method and apparatus for estimating rate-distortion cost of video coding | |
CN102075757B (en) | Video foreground object coding method by taking boundary detection as motion estimation reference | |
CN102647598A (en) | H.264 inter-frame mode optimization method based on maximum and minimum MV difference | |
CN107222742A (en) | Video coding Merge mode quick selecting methods and device based on time-space domain correlation | |
Kim et al. | Fast motion estimation for HEVC with adaptive search range decision on CPU and GPU | |
CN102075751A (en) | Macro block motion state-based H264 quick mode selection method | |
CN104918047A (en) | Bidirectional motion estimation elimination method and device | |
CN101742278B (en) | Method and system for acquiring motion vector and edge intensity of image | |
WO2021031225A1 (en) | Motion vector derivation method and apparatus, and electronic device | |
CN102186079A (en) | Motion-vector-based H.264 baseline profile intra mode decision method | |
CN112055221B (en) | Inter-frame prediction method, video coding method, electronic device and storage medium | |
CN107592547A (en) | A kind of motion perception figure extracting method based on HEVC compression domains |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |