CN107396102A - Fast inter-frame mode selection method and device based on Merge motion vectors - Google Patents

Fast inter-frame mode selection method and device based on Merge motion vectors

Info

Publication number
CN107396102A
Authority
CN
China
Prior art keywords
coded unit
current coded
merge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710762301.1A
Other languages
Chinese (zh)
Other versions
CN107396102B (en)
Inventor
张昊
符婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central South University
Original Assignee
Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central South University filed Critical Central South University
Priority to CN201710762301.1A priority Critical patent/CN107396102B/en
Publication of CN107396102A publication Critical patent/CN107396102A/en
Application granted granted Critical
Publication of CN107396102B publication Critical patent/CN107396102B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 Selection of coding mode or of prediction mode
    • H04N19/109 Selection of coding mode or of prediction mode among a plurality of temporal predictive coding modes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/146 Data rate or code amount at the encoder output
    • H04N19/147 Data rate or code amount at the encoder output according to rate distortion criteria
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses a fast inter-frame mode selection method and device based on Merge motion vectors. The method projects the current coding unit CU onto a reference frame using its motion vector MV, thereby finding the projected block in the reference frame that corresponds to the current CU; exploiting the correlation between the two in prediction mode, the features of the projected block are used to decide whether the current CU skips the motion-estimation-and-compensation inter mode. Compared with the prior art, the known information of the projected block is used to predict the current coding unit, which reduces the inter-prediction computational complexity of the video encoder, shortens the encoding time and improves coding efficiency; moreover, the algorithm of the invention is simple and its computational cost is small, so it can conveniently be put into practical application.

Description

Fast inter-frame mode selection method and device based on Merge motion vectors
Technical field
The invention belongs to the field of video coding, and in particular relates to a fast inter-frame mode selection method and device based on Merge motion vectors.
Background art
In a coding framework, predictive coding is one of the core technologies of video coding; it is divided into intra prediction and inter prediction. Intra prediction exploits the spatial correlation of the video image, using already-encoded neighboring pixels within the image to predict the current pixel. Inter coding exploits the temporal correlation of the video sequence, using already-encoded pictures to predict the picture to be encoded. Through intra and inter prediction, the encoder removes the temporal and spatial redundancy of the video and applies transform, quantization and entropy coding to the prediction residual rather than to the original pixel values, which greatly improves coding efficiency.
The inter-prediction parts of current mainstream video coding standards all employ block-based motion compensation. Its main principle is that, for each pixel block of the current picture, a best matching block is searched for in a previously encoded picture; this process is called motion estimation. The picture used for prediction is called the reference picture, the displacement from the reference block to the current pixel block is called the motion vector, and the difference between the current block and the reference block is called the prediction residual. Owing to the continuity of the video sequence, motion vectors are usually also correlated in space and time; accordingly, spatially or temporally adjacent motion vectors can be used to predict the motion vector of the current block so that only the prediction residual needs to be coded, which significantly saves the bits spent on motion vectors. This motion-vector prediction technique is called Merge.
In 2013, ITU-T VCEG (Video Coding Experts Group) and ISO/IEC MPEG (Moving Picture Experts Group) jointly proposed the HEVC (High Efficiency Video Coding) compression standard. Starting in 2016, VCEG and MPEG began to study a new-generation video encoder and set up an expert panel, JVET (Joint Video Exploration Team), aiming to further improve the compression efficiency of HEVC. The new-generation video coding standard is developed on the basis of HEVC, and both use Merge technology in inter prediction. The difference is that the new-generation video coding has three Merge modes: the Affine Merge mode based on affine transformation, the FRUC Merge mode based on template matching, and the 2Nx2N Merge mode based on temporal correlation. The application of these modes improves the compression performance of the encoder but also considerably increases the encoding time, which affects the development speed and application value of the standard. At the third meeting on the new-generation video coding standard, a proposal already pointed out this drawback and asked for action on its complexity.
The inter-prediction steps of the new-generation video coding standard are as follows:
Step 1: first perform the Affine Merge mode, i.e. affine motion-compensated prediction, save its rate-distortion cost and prediction information, and set the current best mode to Affine Merge;
Step 2: then perform the 2Nx2N Merge mode, i.e. ordinary motion-compensated prediction; if the rate-distortion cost of this mode is smaller than that of the Affine Merge mode, set the best mode to 2Nx2N Merge and save its rate-distortion cost and prediction information;
Step 3: then perform the FRUC Merge mode, i.e. motion-vector derivation based on template matching; if the rate-distortion cost of this mode is smaller than that of the current best mode, set the best mode to FRUC Merge and save its rate-distortion cost and prediction information; the above three modes all belong to the Merge modes;
Step 4: finally perform the inter-prediction mode of motion estimation and motion compensation; this mode finds the matching block in the reference frame by motion search to derive the motion vector and prediction residual, and is therefore the most time-consuming. If its rate-distortion cost is smaller than that of the current best mode, set the best mode to the motion-estimation-and-compensation inter mode and save its rate-distortion cost and prediction information.
The encoding time of the motion-estimation-and-compensation inter mode accounts for about 41% of the total encoding time. Therefore, if the best inter mode can be predicted to be one of the three Merge modes before motion estimation and compensation, motion estimation and compensation can be skipped and a large amount of encoding time saved. The decision order of steps 1 to 4 is summarized in the sketch below.
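For illustration only, the decision order of steps 1 to 4 can be written as the following minimal C++ sketch; decideInterMode, InterMode and the cost parameters are hypothetical stand-ins, not interfaces of the JEM reference software, and the rate-distortion costs are assumed to have been evaluated by the respective mode checks beforehand.

// Minimal sketch of the mode-decision order described above (illustrative names only).
enum class InterMode { AffineMerge, Merge2Nx2N, FrucMerge, MotionEstimation };

InterMode decideInterMode(double costAffineMerge,  // step 1: affine motion-compensated prediction
                          double costMerge2Nx2N,   // step 2: ordinary motion-compensated prediction
                          double costFrucMerge,    // step 3: template-matching motion derivation
                          double costMotionEst)    // step 4: motion estimation + compensation (most costly)
{
    InterMode best = InterMode::AffineMerge;       // step 1 initializes the best mode
    double bestCost = costAffineMerge;

    if (costMerge2Nx2N < bestCost) { best = InterMode::Merge2Nx2N; bestCost = costMerge2Nx2N; }
    if (costFrucMerge  < bestCost) { best = InterMode::FrucMerge;  bestCost = costFrucMerge;  }

    // The method proposed below aims to predict, at this point, whether one of the three
    // Merge modes above will win, so that the expensive step 4 can be skipped entirely.
    if (costMotionEst < bestCost)  { best = InterMode::MotionEstimation; }

    return best;
}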
There are already many inter-frame fast algorithms targeting the HM video encoder. For example, T. Mallikarachchi proposed, at the 2014 IEEE International Conference on Image Processing, skipping the predictive coding of CUs of specific sizes according to motion homogeneity, and S. Ahn proposed, in the 2015 IEEE Transactions on Circuits and Systems for Video Technology, using the sample-adaptive-offset parameters of the co-located CU to evaluate the texture complexity of the current CU and skipping some inter-prediction modes according to that complexity. However, the new-generation video coding standard adopts the QTBT (quadtree plus binary tree) partitioning structure and removes the concept of the prediction unit PU, so the above algorithms are not applicable to it. Other methods, such as those based on variance or on Bayesian decision, are not suitable for practical application because their computational complexity is too high.
The Geneva meeting in May 2016 released JEM2.0, the test model of the new-generation video coding standard. The average encoding time of the JEM encoder under the random-access configuration is 5.3 times that of the HEVC encoder, and inter prediction occupies about 68% of the total encoding time. Similarly, in previous coding standards inter prediction also occupied a large share of the encoding time. Inter prediction is therefore the key module for reducing the encoding time and has large room for improvement; if the inter-prediction time can be reduced, the efficiency of the encoder can be greatly improved.
Summary of the invention
The purpose of the invention is to address the defect of overly long inter-prediction encoding time and the deficiencies of the prior art by proposing a fast inter-frame mode selection method based on Merge motion vectors, which shortens the encoding time, improves practical applicability, and also facilitates further research and development.
A fast inter-frame mode selection method based on Merge motion vectors comprises the following steps:
Step 1: obtain the projected block on the reference frame corresponding to the current coding unit under its best inter-prediction mode;
after the current coding unit CU has finished the Affine Merge, 2Nx2N Merge and FRUC Merge modes, the best inter-prediction mode of the current CU is decided according to the rate-distortion cost;
the motion vector MV of the current CU is obtained from the best inter-prediction mode; every pixel of the current CU is translated by MV to obtain a translated block of the same size as the current CU, and the translated block is then projected onto the reference frame to obtain the projected block corresponding to the current CU in the reference frame;
Step 2: compute the area within the projected block obtained in step 1 whose inter mode is Merge:
S_M = Σ f(Mode(x, y))  (1)
f(Mode(x, y)) = 1 if Mode(x, y) = Merge, 0 if Mode(x, y) ≠ Merge  (2)
where S_M is the area within the projected block whose inter mode is Merge, (x, y) is the coordinate of a pixel in the projected block, and Mode(x, y) is the best inter-prediction mode of the pixel at (x, y); when the best mode of the pixel at (x, y) is Merge, f(Mode(x, y)) takes 1, otherwise 0;
Step 3: calculate the total area of the current coding unit CU:
S_C = Σ g(x1, y1)  (3)
g(x1, y1) = 1 if (x1, y1) ∈ Cur_CU, 0 if (x1, y1) ∉ Cur_CU  (4)
where S_C is the total area of the current CU, Cur_CU denotes the pixel coordinate range of the current CU, and (x1, y1) is the coordinate of a pixel in the current frame; when the pixel coordinate (x1, y1) falls within the range of the current CU, g(x1, y1) takes 1, otherwise 0;
Step 4: from the Merge area of the projected block in step 2 and the total area of the current CU in step 3, calculate the ratio γ of the Merge-mode area of the projected block to the total area:
γ = S_M / S_C  (5)
Step 5: when the ratio γ of step 4 is greater than a preset threshold λ, skip step 6 and terminate the predictive coding of the current CU; otherwise, go to step 6;
where λ can take any real number in [0, 1];
Step 6: perform the motion-estimation-and-compensation inter prediction on the current CU.
Further, λ is preferably 0.85.
A fast inter-frame mode selection device based on Merge motion vectors comprises:
a projected-block acquisition unit, which obtains the projected block on the reference frame corresponding to the current coding unit under its best inter-prediction mode;
after the current coding unit CU has finished the Affine Merge, 2Nx2N Merge and FRUC Merge modes, the best inter-prediction mode of the current CU is decided according to the rate-distortion cost;
the motion vector MV of the current CU is obtained from the best inter-prediction mode; every pixel of the current CU is translated by MV to obtain a translated block of the same size as the current CU, and the translated block is then projected onto the reference frame to obtain the projected block corresponding to the current CU in the reference frame;
a Merge-area calculation unit, which calculates, according to the inter mode of each pixel in the projected block, the area whose inter mode is Merge:
S_M = Σ f(Mode(x, y))
where S_M is the area within the projected block whose inter mode is Merge, (x, y) is the coordinate of a pixel in the projected block, and Mode(x, y) is the best inter-prediction mode of the pixel at (x, y); when the best mode of the pixel at (x, y) is Merge, f(Mode(x, y)) takes 1, otherwise 0;
a current-CU total-area calculation unit, which calculates the total area of the current CU according to whether each pixel of the current frame belongs to the current CU:
S_C = Σ g(x1, y1)
where S_C is the total area of the current CU, Cur_CU denotes the pixel coordinate range of the current CU, and (x1, y1) is the coordinate of a pixel in the current frame; when the pixel coordinate (x1, y1) falls within the range of the current CU, g(x1, y1) takes 1, otherwise 0;
a projected-block Merge-ratio calculation unit, which calculates, from the Merge area of the projected block and the total area of the current CU, the ratio γ = S_M / S_C of the Merge-mode area of the projected block to the total area;
a skip unit, which, when the ratio γ is greater than a preset threshold λ, skips the motion-estimation-and-compensation inter prediction of the current CU and terminates the predictive coding of the current CU;
where λ can take any real number in [0, 1].
Further, the threshold λ in the skip unit is preferably 0.85.
Beneficial effects
The invention provides a fast inter-frame mode selection method and device based on Merge motion vectors. The method projects the current coding unit CU onto a reference frame using its motion vector MV, thereby finding the projected block in the reference frame that corresponds to the current CU; exploiting the correlation between the two in prediction mode, the features of the projected block are used to decide whether the current CU skips the motion-estimation-and-compensation inter mode. Compared with the prior art, the known information of the projected block is used to predict the current coding unit, which reduces the inter-prediction computational complexity of the video encoder, shortens the encoding time and improves coding efficiency; moreover, the algorithm of the invention is simple and its computational cost is small, so it can conveniently be put into practical application.
Brief description of the drawings
Fig. 1 is a schematic diagram of the correspondence between the current coding unit and its projected block and of the motion vector, where (a) shows the correspondence and (b) shows the motion vector;
Fig. 2 shows the CU information storage scheme;
Fig. 3 is the flow chart of the method for the invention.
Embodiment
The technical scheme of the invention is described in detail below with reference to the accompanying drawings and preferred embodiments. The encoder used in the selected embodiment is JEM4.0, the test model released by the new-generation video coding standard expert group; the coding parameters use the JEM standard configuration file Encoder_randomaccess_jvet10.cfg together with the standard configuration files of the corresponding test sequences.
To reduce the encoding time and improve efficiency, the technical scheme specifically adopted by the invention is as follows: the current CU (coding block) is projected onto the reference frame by its motion vector MV, so as to find the projected block corresponding to the current CU in the reference frame. In theory, the projected block can be regarded approximately as having moved, by the displacement of the motion vector MV, to the position of the current CU in the current frame (see the left part of Fig. 1). The properties of the projected block and of the current CU should therefore be essentially the same, for example the pixel distribution and the inter-prediction mode. Based precisely on this similarity in prediction mode, the invention sets a threshold (hereinafter called the skip threshold) and decides according to it whether to skip the motion-estimation-and-compensation inter mode.
As shown in Fig. 3, the specific method of the invention is as follows:
Step 1: After the current CU has finished the Affine Merge, 2Nx2N Merge and FRUC Merge modes, the JEM encoder decides the best mode according to the rate-distortion cost. The current CU is first translated by the motion vector MV of the best mode (see the right part of Fig. 1). The motion vector consists of a horizontal component MVx and a vertical component MVy. The translation records the vertex coordinate of the current CU and its width and height, denoted (x, y), width and height respectively; the vertex of the translated block in the reference frame is then (x + MVx, y + MVy), and the translated block has the same size as the current CU. The translated block is then projected onto the reference frame (as shown in Fig. 1(a)). A minimal sketch of this projection step is given below.
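As an illustration, the projection of step 1 can be sketched in C++ as follows; the types Block and MotionVector and the function projectCurrentCU are illustrative, not the JEM data structures, and the motion vector is assumed to have already been converted to integer pixel units (JEM stores motion vectors at sub-pel precision, so a real implementation would round or shift them first).

// Illustrative structures: top-left vertex plus size, and the two MV components.
struct Block        { int x, y, width, height; };
struct MotionVector { int x, y; };                 // horizontal MVx and vertical MVy

// Translate the current CU by its motion vector: the projected block in the reference
// frame has vertex (x + MVx, y + MVy) and the same width and height as the current CU.
Block projectCurrentCU(const Block& cu, const MotionVector& mv)
{
    return Block{ cu.x + mv.x, cu.y + mv.y, cu.width, cu.height };
}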
Step 2: Count the area of the projected block obtained in step 1 whose final inter mode is Merge. Because the new-generation video coding standard stores mode information in units of 4x4 pixel blocks rather than per pixel (see Fig. 2), the statistics are gathered by traversing each small block of the projected block, and the area can be calculated with the following formula:
S_M = Σ f(Mode(x, y))
where S_M is the area within the projected block whose final inter mode is Merge, (x, y) is the coordinate of a pixel in the projected block, and Mode(x, y) is the best inter-prediction mode of the pixel at (x, y) in the projected block; when the best mode of the pixel at coordinate (x, y) in the projected block is Merge, f(Mode(x, y)) takes 1, otherwise 0.
Step 3: Calculate the total area of the current coding unit CU. As in step 2, the statistics of the CU total area are also gathered by traversing each small block of the current CU to obtain the number of pixels in the current CU. The specific calculation is as follows:
S_C = Σ g(x1, y1)
where S_C is the total area of the current CU, Cur_CU is the coordinate range of the current CU, and (x1, y1) is the coordinate of a pixel in the image; when the pixel coordinate falls within the range of the current CU, g(x1, y1) takes 1, otherwise 0.
Step 4: From the Merge area of the projected block in step 2 and the total area of the current CU in step 3, calculate the ratio γ of the Merge-mode area of the projected block to the total area. For any projected block, this ratio is obtained by the following formula:
γ = S_M / S_C
A minimal sketch of the area statistics of steps 2 and 3 and of this ratio is given below.
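The following minimal C++ sketch illustrates the statistics of steps 2 and 3 and the ratio of step 4; it assumes a hypothetical callback isMergeAt(x, y) that reports whether the stored best mode of the 4x4 block covering (x, y) in the reference frame is one of the Merge modes. The names and data layout are illustrative rather than those of JEM.

#include <functional>

struct Block { int x, y, width, height; };

// S_M: area of the projected block whose stored inter mode is Merge. Mode information
// is kept per 4x4 block (see Fig. 2), so the traversal steps by 4 and every hit adds
// a 4x4 = 16-pixel patch to the area.
int mergeArea(const Block& proj, const std::function<bool(int, int)>& isMergeAt)
{
    int area = 0;
    for (int y = proj.y; y < proj.y + proj.height; y += 4)
        for (int x = proj.x; x < proj.x + proj.width; x += 4)
            if (isMergeAt(x, y))
                area += 4 * 4;
    return area;
}

// S_C: total area of the current CU (every pixel inside the CU counts once).
int totalArea(const Block& cu) { return cu.width * cu.height; }

// gamma = S_M / S_C, the value compared against the skip threshold lambda in step 5.
double mergeRatio(const Block& proj, const Block& cu,
                  const std::function<bool(int, int)>& isMergeAt)
{
    return static_cast<double>(mergeArea(proj, isMergeAt)) / totalArea(cu);
}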
Step 5: When the ratio γ of step 4 is greater than the threshold λ, the best inter mode of the current CU is very likely Merge, so step 6 is skipped and the predictive coding of the current CU terminates. λ can take any real number in [0, 1]: when the demand on video quality is high and the requirement on encoding time is not strict, λ can take a larger value within the range, and conversely a smaller value. Extensive experiments show that λ = 0.85 achieves a good balance between video quality and encoding time.
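As an illustrative calculation (not taken from the original text): for a 16x16 CU, S_C = 256; if 14 of the 16 co-located 4x4 blocks of the projected block were coded in a Merge mode, then S_M = 14 x 16 = 224 and γ = 224 / 256 = 0.875 > 0.85, so step 6 would be skipped, whereas with 13 Merge blocks γ = 208 / 256 ≈ 0.81 and motion estimation would still be performed.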
Step 6: Perform the motion-estimation-and-compensation inter prediction.
A fast inter-frame mode selection device based on Merge motion vectors comprises:
a projected-block acquisition unit, which obtains the projected block on the reference frame corresponding to the current coding unit under its best inter-prediction mode;
after the current coding unit CU has finished the Affine Merge, 2Nx2N Merge and FRUC Merge modes, the best inter-prediction mode of the current CU is decided according to the rate-distortion cost;
the motion vector MV of the current CU is obtained from the best inter-prediction mode; every pixel of the current CU is translated by MV to obtain a translated block of the same size as the current CU, and the translated block is then projected onto the reference frame to obtain the projected block corresponding to the current CU in the reference frame;
a Merge-area calculation unit, which calculates, according to the inter mode of each pixel in the projected block, the area whose inter mode is Merge:
S_M = Σ f(Mode(x, y))
where S_M is the area within the projected block whose inter mode is Merge, (x, y) is the coordinate of a pixel in the projected block, and Mode(x, y) is the best inter-prediction mode of the pixel at (x, y); when the best mode of the pixel at (x, y) is Merge, f(Mode(x, y)) takes 1, otherwise 0;
a current-CU total-area calculation unit, which calculates the total area of the current CU according to whether each pixel of the current frame belongs to the current CU:
S_C = Σ g(x1, y1)
where S_C is the total area of the current CU, Cur_CU denotes the pixel coordinate range of the current CU, and (x1, y1) is the coordinate of a pixel in the current frame; when the pixel coordinate (x1, y1) falls within the range of the current CU, g(x1, y1) takes 1, otherwise 0;
a projected-block Merge-ratio calculation unit, which calculates, from the Merge area of the projected block and the total area of the current CU, the ratio γ = S_M / S_C of the Merge-mode area of the projected block to the total area;
a skip unit, which, when the ratio γ is greater than a preset threshold λ, skips the motion-estimation-and-compensation inter prediction of the current CU and terminates the predictive coding of the current CU;
where λ can take any real number in [0, 1].
The threshold λ in the skip unit is 0.85.
To verify the feasibility and effectiveness of the proposed inter-frame fast algorithm, the algorithm described above was implemented on JEM4.0, the test model of the new-generation video coding standard. All final data were obtained on the university's high-performance computing platform to ensure that the experimental data are true and accurate. For all experiments, the coding parameters use the JEM standard configuration file Encoder_randomaccess_jvet10.cfg together with the standard configuration files of the corresponding test sequences.
The experimental results are shown in Table 1, where QP is the quantization parameter, ΔBits% is the percentage change in bit rate compared with the original encoder, ΔPSNR/dB is the change in peak signal-to-noise ratio compared with the original encoder, and TS/% is the percentage of encoding time saved compared with the original encoder. ΔBDBR indicates, at the same objective quality, the bit-rate saving of the improved encoder relative to the original encoder; a smaller ΔBDBR indicates a better algorithm.
Table 1 Experimental results
In the experimental simulation, the results of the fast inter-frame algorithm proposed in the invention are as shown in Table 1. Table 1 shows that the algorithm achieves the goal of improving coding efficiency while maintaining video quality.
The specific embodiments described herein merely illustrate the spirit of the invention by example. Those skilled in the art to which the invention belongs may make various modifications or additions to the described embodiments, or substitute them in a similar manner, without departing from the spirit of the invention or exceeding the scope defined by the appended claims.

Claims (4)

1. A fast inter-frame mode selection method based on Merge motion vectors, characterized by comprising the following steps:
Step 1: obtain the projected block on the reference frame corresponding to the current coding unit under its best inter-prediction mode;
after the current coding unit CU has finished the Affine Merge, 2Nx2N Merge and FRUC Merge modes, the best inter-prediction mode of the current CU is decided according to the rate-distortion cost;
the motion vector MV of the current CU is obtained from the best inter-prediction mode; every pixel of the current CU is translated by MV to obtain a translated block of the same size as the current CU, and the translated block is then projected onto the reference frame to obtain the projected block corresponding to the current CU in the reference frame;
Step 2: compute the area within the projected block obtained in step 1 whose inter mode is Merge:
S_M = Σ f(Mode(x, y))  (1)
f(Mode(x, y)) = 1 if Mode(x, y) = Merge, 0 if Mode(x, y) ≠ Merge  (2)
where S_M is the area within the projected block whose inter mode is Merge, (x, y) is the coordinate of a pixel in the projected block, and Mode(x, y) is the best inter-prediction mode of the pixel at (x, y); when the best mode of the pixel at (x, y) is Merge, f(Mode(x, y)) takes 1, otherwise 0;
Step 3: calculate the total area of the current coding unit CU:
S_C = Σ g(x1, y1)  (3)
g(x1, y1) = 1 if (x1, y1) ∈ Cur_CU, 0 if (x1, y1) ∉ Cur_CU  (4)
where S_C is the total area of the current CU, Cur_CU denotes the pixel coordinate range of the current CU, and (x1, y1) is the coordinate of a pixel in the current frame; when the pixel coordinate (x1, y1) falls within the range of the current CU, g(x1, y1) takes 1, otherwise 0;
Step 4: from the Merge area of the projected block in step 2 and the total area of the current CU in step 3, calculate the ratio γ of the Merge-mode area of the projected block to the total area:
γ = S_M / S_C  (5)
Step 5: when the ratio γ of step 4 is greater than a preset threshold λ, skip step 6 and terminate the predictive coding of the current CU; otherwise, go to step 6;
wherein λ can take any real number in [0, 1];
Step 6: perform the motion-estimation-and-compensation inter prediction on the current CU.
2. The method according to claim 1, characterized in that λ is 0.85.
3. A fast inter-frame mode selection device based on Merge motion vectors, characterized by comprising:
a projected-block acquisition unit, which obtains the projected block on the reference frame corresponding to the current coding unit under its best inter-prediction mode;
wherein, after the current coding unit CU has finished the Affine Merge, 2Nx2N Merge and FRUC Merge modes, the best inter-prediction mode of the current CU is decided according to the rate-distortion cost;
the motion vector MV of the current CU is obtained from the best inter-prediction mode; every pixel of the current CU is translated by MV to obtain a translated block of the same size as the current CU, and the translated block is then projected onto the reference frame to obtain the projected block corresponding to the current CU in the reference frame;
a Merge-area calculation unit, which calculates, according to the inter mode of each pixel in the projected block, the area whose inter mode is Merge:
S_M = Σ f(Mode(x, y))
f(Mode(x, y)) = 1 if Mode(x, y) = Merge, 0 if Mode(x, y) ≠ Merge
where S_M is the area within the projected block whose inter mode is Merge, (x, y) is the coordinate of a pixel in the projected block, and Mode(x, y) is the best inter-prediction mode of the pixel at (x, y); when the best mode of the pixel at (x, y) is Merge, f(Mode(x, y)) takes 1, otherwise 0;
a current-CU total-area calculation unit, which calculates the total area of the current CU according to whether each pixel of the current frame belongs to the current CU:
S_C = Σ g(x1, y1)
g(x1, y1) = 1 if (x1, y1) ∈ Cur_CU, 0 if (x1, y1) ∉ Cur_CU
where S_C is the total area of the current CU, Cur_CU denotes the pixel coordinate range of the current CU, and (x1, y1) is the coordinate of a pixel in the current frame; when the pixel coordinate (x1, y1) falls within the range of the current CU, g(x1, y1) takes 1, otherwise 0;
a projected-block Merge-ratio calculation unit, which calculates, from the Merge area of the projected block and the total area of the current CU, the ratio γ of the Merge-mode area of the projected block to the total area:
γ = S_M / S_C
a skip unit, which, when the ratio γ is greater than a preset threshold λ, skips the motion-estimation-and-compensation inter prediction of the current CU and terminates the predictive coding of the current CU;
wherein λ can take any real number in [0, 1].
4. The device according to claim 3, characterized in that the threshold λ in the skip unit is 0.85.
CN201710762301.1A 2017-08-30 2017-08-30 Fast inter-frame mode selection method and device based on Merge motion vectors Active CN107396102B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710762301.1A CN107396102B (en) 2017-08-30 2017-08-30 Fast inter-frame mode selection method and device based on Merge motion vectors

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710762301.1A CN107396102B (en) 2017-08-30 2017-08-30 Fast inter-frame mode selection method and device based on Merge motion vectors

Publications (2)

Publication Number Publication Date
CN107396102A true CN107396102A (en) 2017-11-24
CN107396102B CN107396102B (en) 2019-10-08

Family

ID=60348165

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710762301.1A Active CN107396102B (en) Fast inter-frame mode selection method and device based on Merge motion vectors

Country Status (1)

Country Link
CN (1) CN107396102B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108174204A (en) * 2018-03-06 2018-06-15 中南大学 A kind of interframe fast schema selection method based on decision tree
CN108347616A (en) * 2018-03-09 2018-07-31 中南大学 A kind of depth prediction approach and device based on optional time domain motion-vector prediction
CN110662041A (en) * 2018-06-29 2020-01-07 北京字节跳动网络技术有限公司 Extending interactions between Merge modes and other video coding tools
CN110809156A (en) * 2018-08-04 2020-02-18 北京字节跳动网络技术有限公司 Interaction between different decoder-side motion vector derivation modes
CN111698502A (en) * 2020-06-19 2020-09-22 中南大学 VVC (variable visual code) -based affine motion estimation acceleration method and device and storage medium
CN112637592A (en) * 2020-12-11 2021-04-09 百果园技术(新加坡)有限公司 Method and device for video predictive coding
CN112839224A (en) * 2019-11-22 2021-05-25 腾讯科技(深圳)有限公司 Prediction mode selection method and device, video coding equipment and storage medium
CN114339231A (en) * 2021-12-27 2022-04-12 杭州当虹科技股份有限公司 Method for fast jumping Cu-level mode selection by utilizing motion vector
US11778170B2 (en) 2018-10-06 2023-10-03 Beijing Bytedance Network Technology Co., Ltd Temporal gradient calculations in bio

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080165855A1 (en) * 2007-01-08 2008-07-10 Nokia Corporation inter-layer prediction for extended spatial scalability in video coding
CN103338372A (en) * 2013-06-15 2013-10-02 浙江大学 Method and device for processing video
CN103379324A (en) * 2012-04-16 2013-10-30 乐金电子(中国)研究开发中心有限公司 Parallel realization method, device and system for advanced motion vector prediction AMVP
CN104038764A (en) * 2014-06-27 2014-09-10 华中师范大学 H.264-to-H.265 video transcoding method and transcoder
CN104601988A (en) * 2014-06-10 2015-05-06 腾讯科技(北京)有限公司 Video coder, method and device and inter-frame mode selection method and device thereof
US20150222904A1 (en) * 2011-03-08 2015-08-06 Texas Instruments Incorporated Parsing friendly and error resilient merge flag coding in video coding
CN105959611A (en) * 2016-07-14 2016-09-21 同观科技(深圳)有限公司 Adaptive H264-to-HEVC (High Efficiency Video Coding) inter-frame fast transcoding method and apparatus
TW201637449A (en) * 2015-01-29 2016-10-16 Vid Scale, Inc. Intra-block copy searching
US20160373766A1 (en) * 2015-06-22 2016-12-22 Cisco Technology, Inc. Block-based video coding using a mixture of square and rectangular blocks

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080165855A1 (en) * 2007-01-08 2008-07-10 Nokia Corporation inter-layer prediction for extended spatial scalability in video coding
US20150222904A1 (en) * 2011-03-08 2015-08-06 Texas Instruments Incorporated Parsing friendly and error resilient merge flag coding in video coding
CN103379324A (en) * 2012-04-16 2013-10-30 乐金电子(中国)研究开发中心有限公司 Parallel realization method, device and system for advanced motion vector prediction AMVP
CN103338372A (en) * 2013-06-15 2013-10-02 浙江大学 Method and device for processing video
CN104601988A (en) * 2014-06-10 2015-05-06 腾讯科技(北京)有限公司 Video coder, method and device and inter-frame mode selection method and device thereof
CN104038764A (en) * 2014-06-27 2014-09-10 华中师范大学 H.264-to-H.265 video transcoding method and transcoder
TW201637449A (en) * 2015-01-29 2016-10-16 Vid Scale, Inc. Intra-block copy searching
US20160373766A1 (en) * 2015-06-22 2016-12-22 Cisco Technology, Inc. Block-based video coding using a mixture of square and rectangular blocks
CN105959611A (en) * 2016-07-14 2016-09-21 同观科技(深圳)有限公司 Adaptive H264-to-HEVC (High Efficiency Video Coding) inter-frame fast transcoding method and apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
黄晗: "Research on HEVC Inter-frame and Intra-frame Prediction and Optimization Techniques", China Master's Theses Full-text Database *

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108174204A (en) * 2018-03-06 2018-06-15 中南大学 A kind of interframe fast schema selection method based on decision tree
CN108347616A (en) * 2018-03-09 2018-07-31 中南大学 A kind of depth prediction approach and device based on optional time domain motion-vector prediction
CN108347616B (en) * 2018-03-09 2020-02-14 中南大学 Depth prediction method and device based on optional time domain motion vector prediction
CN110662041B (en) * 2018-06-29 2022-07-29 北京字节跳动网络技术有限公司 Method and apparatus for video bitstream processing, method of storing video bitstream, and non-transitory computer-readable recording medium
CN110662041A (en) * 2018-06-29 2020-01-07 北京字节跳动网络技术有限公司 Extending interactions between Merge modes and other video coding tools
US11451819B2 (en) 2018-08-04 2022-09-20 Beijing Bytedance Network Technology Co., Ltd. Clipping of updated MV or derived MV
CN110809156B (en) * 2018-08-04 2022-08-12 北京字节跳动网络技术有限公司 Interaction between different decoder-side motion vector derivation modes
US12120340B2 (en) 2018-08-04 2024-10-15 Beijing Bytedance Network Technology Co., Ltd Constraints for usage of updated motion information
US11109055B2 (en) 2018-08-04 2021-08-31 Beijing Bytedance Network Technology Co., Ltd. MVD precision for affine
US11470341B2 (en) 2018-08-04 2022-10-11 Beijing Bytedance Network Technology Co., Ltd. Interaction between different DMVD models
US11330288B2 (en) 2018-08-04 2022-05-10 Beijing Bytedance Network Technology Co., Ltd. Constraints for usage of updated motion information
CN110809156A (en) * 2018-08-04 2020-02-18 北京字节跳动网络技术有限公司 Interaction between different decoder-side motion vector derivation modes
CN110809155A (en) * 2018-08-04 2020-02-18 北京字节跳动网络技术有限公司 Restriction using updated motion information
US11778170B2 (en) 2018-10-06 2023-10-03 Beijing Bytedance Network Technology Co., Ltd Temporal gradient calculations in bio
CN112839224B (en) * 2019-11-22 2023-10-10 腾讯科技(深圳)有限公司 Prediction mode selection method and device, video coding equipment and storage medium
CN112839224A (en) * 2019-11-22 2021-05-25 腾讯科技(深圳)有限公司 Prediction mode selection method and device, video coding equipment and storage medium
CN111698502A (en) * 2020-06-19 2020-09-22 中南大学 VVC (variable visual code) -based affine motion estimation acceleration method and device and storage medium
WO2022121786A1 (en) * 2020-12-11 2022-06-16 百果园技术(新加坡)有限公司 Video predictive coding method and apparatus
CN112637592B (en) * 2020-12-11 2024-07-05 百果园技术(新加坡)有限公司 Video predictive coding method and device
CN112637592A (en) * 2020-12-11 2021-04-09 百果园技术(新加坡)有限公司 Method and device for video predictive coding
CN114339231A (en) * 2021-12-27 2022-04-12 杭州当虹科技股份有限公司 Method for fast jumping Cu-level mode selection by utilizing motion vector
CN114339231B (en) * 2021-12-27 2023-10-27 杭州当虹科技股份有限公司 Method for rapidly jumping Cu-level mode selection by utilizing motion vector

Also Published As

Publication number Publication date
CN107396102B (en) 2019-10-08

Similar Documents

Publication Publication Date Title
CN107396102B (en) Fast inter-frame mode selection method and device based on Merge motion vectors
CN107147911A (en) LIC quick interframe coding mode selection method and device is compensated based on local luminance
CN102137263B (en) Distributed video coding and decoding methods based on classification of key frames of correlation noise model (CNM)
CN110519600B (en) Intra-frame and inter-frame joint prediction method and device, coder and decoder and storage device
CN108781284A (en) The method and device of coding and decoding video with affine motion compensation
CN110087087A (en) VVC interframe encode unit prediction mode shifts to an earlier date decision and block divides and shifts to an earlier date terminating method
CN102025995B (en) Spatial enhancement layer rapid mode selection method of scalable video coding
CN108347616A (en) A kind of depth prediction approach and device based on optional time domain motion-vector prediction
CN107222742B (en) Video coding Merge mode quick selecting method and device based on time-space domain correlation
CN110062239B (en) Reference frame selection method and device for video coding
CN107079165A (en) Use the method for video coding and device of prediction residual
CN107197297A (en) A kind of video steganalysis method of the detection based on DCT coefficient steganography
CN102932642A (en) Interframe coding quick mode selection method
CN102065298A (en) High-performance macroblock coding implementation method
CN104811729B (en) A kind of video multi-reference frame coding method
CN107318016A (en) A kind of HEVC inter-frame forecast mode method for rapidly judging based on zero piece of distribution
CN105120290A (en) Fast coding method for depth video
CN101888546A (en) Motion estimation method and device
CN105898308A (en) Resolution-variable coding mode prediction method and device
CN108769696A (en) A kind of DVC-HEVC video transcoding methods based on Fisher discriminates
CN101102492A (en) Conversion method from compression domain MPEG-2 based on interest area to H.264 video
CN110519591A (en) A kind of prediction mode fast selecting method based on intraframe coding in multipurpose coding
CN106331700A (en) Coding and decoding methods of reference image, coding device, and decoding device
CN106131573B (en) A kind of HEVC spatial resolutions code-transferring method
CN109688411B (en) Video coding rate distortion cost estimation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant