CN104601988A - Video encoder, encoding method and apparatus, and inter-frame mode selection method and apparatus thereof - Google Patents


Info

Publication number: CN104601988A (granted as CN104601988B)
Application number: CN201410256678.6A
Authority: CN (China)
Prior art keywords: depth, unit, coding unit, pattern, encoding
Legal status: Granted; Active
Inventor: 谷沉沉
Applicant and current assignee: Tencent Technology Beijing Co Ltd
Other languages: Chinese (zh)
Classification: Compression Or Coding Systems Of Tv Signals

Abstract

The invention discloses a video encoder, an encoding method and apparatus, and an inter-frame mode selection method and apparatus thereof. The inter-frame mode selection method for video coding comprises the following steps: when it is decided to skip calculating the coding cost required to encode a target coding unit according to the current mode, skipping that calculation and selecting the minimum coding cost from among the coding costs already calculated for the preceding modes; and, when a target parameter obtained by encoding the target coding unit with the mode corresponding to the minimum coding cost satisfies a preset condition, determining the mode corresponding to the minimum coding cost as the mode to be used for encoding the target coding unit. This solves the problem of the high computational complexity of mode selection in video coding in the prior art, thereby reducing complexity and increasing coding speed.

Description

Video encoder, encoding method and apparatus, and inter-frame mode selection method and apparatus thereof
Technical field
The present invention relates to the field of computer technology and the Internet, and in particular to a video encoder, an encoding method and apparatus, and an inter-frame mode selection method and apparatus thereof.
Background technology
Since the H.264/AVC video coding standard was finalized in May 2003, its clear advantages over earlier video compression standards in compression efficiency and network adaptability quickly made it the mainstream standard in the field of video applications. However, as terminal devices have diversified and users' expectations for multimedia experiences have kept rising, high definition, high frame rate, 3D and mobile platforms have become the main trends in video applications. At the same time, transmission bandwidth and storage space remain the most critical resources in video applications, and obtaining the best possible video experience within limited storage and transmission channels is a goal users constantly pursue. The compression efficiency of the existing H.264/AVC standard can no longer satisfy these growing demands. Therefore, in January 2010 the ITU-T VCEG (Video Coding Experts Group) and the ISO/IEC MPEG (Moving Picture Experts Group) jointly established the JCT-VC (Joint Collaborative Team on Video Coding) to formulate the next-generation coding standard HEVC (High Efficiency Video Coding), whose final version was officially published in January 2013. HEVC retains the hybrid coding framework of H.264/AVC while adopting a large number of new techniques, roughly doubling the coding efficiency of H.264/AVC; that is, HEVC can reach the same video quality as H.264/AVC at about half the bit rate. HEVC therefore has high practical value for high-definition and ultra-high-definition video storage, streaming media, and mobile Internet video.
One of the most important new techniques in the HEVC standard is its more flexible quadtree coding structure, which describes the whole encoding process using three concepts, the coding unit (Coding Unit, CU), the prediction unit (Prediction Unit, PU) and the transform unit (Transform Unit, TU), in order to improve the compression efficiency for high-definition and ultra-high-definition video.
In HEVC, a frame is divided into many non-overlapping coding tree units (Coding Tree Units, CTUs). A CTU is similar to a macroblock in H.264/AVC: it is a square block of 2Nx2N pixels, where N = 2^C and C is an integer greater than 1, and the maximum allowed CTU size is 64x64. Each CTU can be recursively divided into square coding units according to a quadtree structure. The CU is the basic unit of HEVC coding; its minimum allowed size is 8x8 and its maximum size is the CTU size. Fig. 1 shows an example of the quadtree partitioning of a 64x64 CTU. A CU whose size equals the CTU size is defined to have depth (Depth) 0; a CU of depth n can be further divided into 4 sub-CUs of depth n+1, each of which is 1/4 the size of its parent CU. The predictive coding type of a CU is either intra (Intra) prediction or inter (Inter) prediction. A frame in which all CUs use intra prediction is called an intra-predicted frame (i.e., an I frame), and a frame containing both intra-predicted and inter-predicted CUs is called an inter-predicted frame (i.e., a GPB frame or B frame).
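To make the depth/size relationship concrete, the following C++ sketch (an illustration only, not code from the patent; the constants assume a 64x64 CTU with a maximum depth of 3) computes the CU size at each depth.

```cpp
#include <cstdint>
#include <iostream>

// A minimal sketch, assuming a 64x64 CTU and a maximum depth of 3 (8x8 CUs).
constexpr uint32_t kCtuSize  = 64;
constexpr uint32_t kMaxDepth = 3;

// A CU of depth d covers a square of (kCtuSize >> d) x (kCtuSize >> d) pixels.
uint32_t CuSizeAtDepth(uint32_t depth) {
    return kCtuSize >> depth;
}

int main() {
    for (uint32_t d = 0; d <= kMaxDepth; ++d) {
        std::cout << "depth " << d << ": " << CuSizeAtDepth(d) << "x"
                  << CuSizeAtDepth(d) << " CU\n";   // 64, 32, 16, 8
    }
    return 0;
}
```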
The PU is the basic unit of prediction. A CU can contain one or more PUs; the maximum PU size is the CU size, and both square and rectangular partitions are allowed. For an inter-predicted CU there are 8 PU partition modes, as shown in Fig. 2a to Fig. 2h: Fig. 2a to Fig. 2d show the 4 symmetric partition modes 2Nx2N, 2NxN, Nx2N and NxN, and Fig. 2e to Fig. 2h show the 4 asymmetric partition modes 2NxnU, 2NxnD, nLx2N and nRx2N.
For the 2Nx2N inter-prediction PU partition mode, if the residual coefficients and the motion vector difference are all zero, the CU is said to be coded in skip mode (i.e., Skip mode). Unlike the skip mode of H.264/AVC, the motion vector of the HEVC skip mode is obtained with the motion merge technique (Merge): a merge candidate list is constructed from the motion information (motion vector, reference frame index, reference frame list) of neighbouring PUs, and during encoding only a merge flag (Merge Flag) and the index of the best merge candidate (Merge Index) need to be transmitted, without transmitting any other motion information. If the motion vector of the current PU is obtained with the motion merge technique but there are non-zero residual coefficients, the PU is said to be coded in merge (Merge) mode.
For all other inter-prediction cases, HEVC adopts advanced motion vector prediction (Advanced Motion Vector Prediction, AMVP): a motion vector predictor candidate list is constructed from the motion vector information of neighbouring PUs, the best motion vector predictor is selected, and the best motion vector is then found by motion estimation. During encoding, the residual coefficients and the complete motion information, including the motion vector difference, must be transmitted.
For an intra-predicted CU, only the 2Nx2N and NxN PU partition modes exist, and the NxN partition mode is only used for CUs whose depth equals the maximum allowed depth.
The TU is the basic unit of transform and quantization. A CU can contain one or more TUs; TUs likewise use a recursive quadtree partition structure, with sizes from 4x4 to 32x32. A TU may be larger than a PU but cannot exceed the size of the CU. Fig. 3 is a schematic diagram of the TU partitioning of a CU.
During actual encoding, mode selection has to be performed for every CTU to select the optimal CU, PU and TU partition modes and the predictive coding type, usually according to a rate-distortion optimization criterion. For every CU and PU partition mode, the intra coding costs of the different TU partition modes under up to 34 prediction directions must be calculated; for every motion vector prediction mode, motion estimation must be performed to find the best-matching prediction CU and the inter coding costs of the different TU partition modes must then be calculated. Finally, the CU, PU and TU partition modes with the minimum cost, together with the corresponding predictive coding type, are selected as the optimal coding mode of the current CTU.
By adopting the flexible CU, PU and TU partition modes described above, together with more intra prediction directions and inter motion vector prediction modes, the HEVC standard greatly improves prediction accuracy and therefore coding efficiency. However, the motion estimation and cost calculations involved in mode selection require a large number of highly complex computations such as the sum of absolute differences (Sum of Absolute Differences, SAD), the sum of absolute transformed differences after the Hadamard transform (Sum of Absolute Transformed Differences, SATD), the sum of squared errors (Sum of Squared Errors, SSE) and bit-rate estimation, so HEVC's more flexible partition and prediction modes sharply increase the computational complexity of the mode selection process. In the current HEVC reference software, mode selection accounts for more than 90% of the total encoding time. This highly complex mode selection directly causes the high complexity of HEVC encoders, making it difficult to meet the real-time requirements of increasingly common video coding applications such as video calls (especially on handheld devices) and live streaming; for offline compression of on-demand video sources, it also requires a large amount of server computing resources and encoding time. Furthermore, because the coding efficiency of inter-predicted frames is far higher than that of intra-predicted frames, typical video coding inserts an I frame only every 1-2 seconds in order to guarantee robust transmission and random access into the bit stream; even at a frame rate of 15 fps, there is only one I frame in every 15-30 frames and all remaining frames are inter-predicted frames. In applications such as HD video storage or streaming on demand, the I-frame interval can be even larger, up to 100-200 frames, to guarantee high compression efficiency. The proportion of inter-predicted frames in a video bit stream is therefore usually very high, and inter mode selection is the most time-consuming bottleneck module of the whole video encoder.
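To make the cost metrics mentioned above concrete, the following C++ sketch (an illustration, not code from the patent or from the reference software) computes SAD and SSE for a block and combines a distortion value with an estimated bit count in the usual rate-distortion form J = D + lambda * R; SATD would additionally apply a Hadamard transform to the difference before summing absolute values.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdlib>
#include <vector>

// A minimal sketch of two distortion metrics used during mode selection.
// 'org' and 'pred' are assumed to hold the original and predicted samples of
// one block in row-major order and to have the same size.
uint64_t Sad(const std::vector<int16_t>& org, const std::vector<int16_t>& pred) {
    uint64_t sad = 0;
    for (std::size_t i = 0; i < org.size(); ++i)
        sad += static_cast<uint64_t>(std::abs(org[i] - pred[i]));
    return sad;
}

uint64_t Sse(const std::vector<int16_t>& org, const std::vector<int16_t>& pred) {
    uint64_t sse = 0;
    for (std::size_t i = 0; i < org.size(); ++i) {
        const int64_t d = static_cast<int64_t>(org[i]) - pred[i];
        sse += static_cast<uint64_t>(d * d);
    }
    return sse;
}

// Rate-distortion cost in the usual form J = D + lambda * R, where D is a
// distortion value (e.g., SSE) and R is the estimated number of bits.
double RdCost(uint64_t distortion, double lambda, uint32_t bits) {
    return static_cast<double>(distortion) + lambda * bits;
}
```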
Fig. 4 is a flow chart of a complete mode selection (with no fast mode decision algorithm enabled) when encoding a CU in the prior art. As shown in Fig. 4, mode selection in this method mainly comprises the following steps S41 to S45 (a code sketch of this recursion is given after the step list):
S41: For the current CU, judge whether its depth (Depth) exceeds the maximum allowed depth; if it does, the flow ends, otherwise continue with step S42.
S42: Calculate in turn the cost of encoding in the merge 2Nx2N, inter 2Nx2N, inter NxN, inter Nx2N, inter 2NxN, intra 2Nx2N and intra NxN modes. If asymmetric PU partitioning is allowed, also calculate in turn the cost of encoding in the four modes inter 2NxnU, inter 2NxnD, inter nLx2N and inter nRx2N. The inter NxN and intra NxN modes are only calculated when the depth of the current CU equals the maximum allowed depth, and if the inter_4x4_enabled_flag is 0, the cost of the inter NxN mode is not calculated when the CU size is 8x8.
S43: Select the coding mode with the minimum cost among the modes calculated in step S42 as the optimal coding mode of the current CU, and record this minimum cost as the coding cost of the current CU.
S44: Divide the current CU into 4 sub-CUs of depth Depth+1, and recursively invoke this flow for each sub-CU.
S45: Add up the coding costs of the 4 sub-CUs of depth Depth+1 (as shown in Fig. 5) and compare the sum with the coding cost of the current CU. If the coding cost of the current CU is larger, the optimal coding mode of the current CU is the optimal coding mode obtained with the sub-CU division; otherwise it is the optimal coding mode obtained without dividing into sub-CUs.
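The following sketch illustrates the recursive structure of steps S41 to S45; the CU type, mode list and cost function are placeholders chosen for illustration rather than code from the patent or from the HEVC reference software.

```cpp
#include <algorithm>
#include <cstdint>
#include <initializer_list>
#include <limits>
#include <vector>

struct Cu { /* position, size, pixel data, ... */ };

enum class Mode { Merge2Nx2N, Inter2Nx2N, InterNxN, InterNx2N, Inter2NxN,
                  Intra2Nx2N, IntraNxN };

constexpr uint32_t kMaxDepth = 3;

uint64_t CodingCost(const Cu&, Mode) { return 100; }              // placeholder RD cost
std::vector<Cu> Split(const Cu& cu) { return {cu, cu, cu, cu}; }  // the 4 sub-CUs

// Full mode selection without any fast decision (steps S41 to S45).
uint64_t SelectMode(const Cu& cu, uint32_t depth) {
    if (depth > kMaxDepth) return 0;                               // S41

    uint64_t best = std::numeric_limits<uint64_t>::max();         // S42, S43
    for (Mode m : {Mode::Merge2Nx2N, Mode::Inter2Nx2N, Mode::InterNxN,
                   Mode::InterNx2N, Mode::Inter2NxN,
                   Mode::Intra2Nx2N, Mode::IntraNxN}) {
        best = std::min(best, CodingCost(cu, m));
    }
    if (depth == kMaxDepth) return best;   // an 8x8 CU cannot be split further

    uint64_t splitCost = 0;                                        // S44
    for (const Cu& sub : Split(cu)) {
        splitCost += SelectMode(sub, depth + 1);
    }
    return std::min(best, splitCost);                              // S45
}
```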
In the above mode selection method, the mode selection process of every CTU has to traverse every combination of CU, PU and TU partition modes and intra/inter prediction modes, calculate the coding cost of each, and select the mode with the minimum cost as the optimal coding mode of the current CTU. Although the optimal coding mode obtained by this method is relatively accurate, its computational complexity is very high. Even ordinary offline video compression applications can hardly bear such long compression times and huge server computing costs, and video coding applications that require real time, such as video calls and live streaming, cannot be supported at all.
Fig. 6 is a flow chart of another mode selection method in the prior art. As shown in Fig. 6, mode selection in this method mainly comprises the following steps S61 to S68 (a code sketch of the early skip detection is given after the step list):
S61: For the current CU, judge whether its depth (Depth) exceeds the maximum allowed depth; if it does, the flow ends, otherwise continue with step S62.
S62: Calculate in turn the cost of encoding in the inter 2Nx2N and merge 2Nx2N modes.
S63: Judge whether the residual coefficients and the motion vector difference are all zero when coding with the mode of minimum cost among the inter 2Nx2N and merge 2Nx2N modes. If so, the coding mode of the current CU is decided in advance to be skip mode and the flow ends; otherwise continue with step S64.
S64: Calculate in turn the cost of encoding in the inter NxN, inter Nx2N, inter 2NxN, intra 2Nx2N and intra NxN modes. If asymmetric PU partitioning is allowed, also calculate in turn the cost of encoding in the four modes inter 2NxnU, inter 2NxnD, inter nLx2N and inter nRx2N. The inter NxN and intra NxN modes are only calculated when the depth of the current CU equals the maximum allowed depth, and if the inter_4x4_enabled_flag is 0, the cost of the inter NxN mode is not calculated when the CU size is 8x8.
S65: Select the coding mode with the minimum cost among the modes calculated in step S64 as the optimal coding mode of the current CU.
S66: If the optimal coding mode of the current CU is skip mode, the flow ends; otherwise continue with step S67.
S67: Divide the current CU into 4 sub-CUs of depth Depth+1, and recursively invoke this flow for each sub-CU.
S68: Add up the coding costs of the 4 sub-CUs of depth Depth+1 and compare the sum with the coding cost of the current CU. If the coding cost of the current CU is larger, the optimal coding mode of the current CU is the optimal coding mode obtained with the sub-CU division; otherwise it is the optimal coding mode obtained without dividing into sub-CUs.
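A possible shape of the early skip detection in steps S62 and S63, again with illustrative names rather than code from the patent, is sketched below.

```cpp
#include <cstdint>

// Result of trying one coding mode on the current CU (illustrative fields).
struct ModeResult {
    uint64_t cost            = 0;
    bool     residualAllZero = false;   // all residual coefficients are zero
    bool     mvdZero         = false;   // motion vector difference is zero
};

// Steps S62/S63: if the cheaper of the merge 2Nx2N and inter 2Nx2N candidates
// has neither residual coefficients nor a motion vector difference to transmit,
// the CU is decided in advance to be coded in skip mode.
bool EarlySkipDetected(const ModeResult& merge2Nx2N, const ModeResult& inter2Nx2N) {
    const ModeResult& best =
        (merge2Nx2N.cost <= inter2Nx2N.cost) ? merge2Nx2N : inter2Nx2N;
    return best.residualAllZero && best.mvdZero;
}
```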
Compared with the selection method described in steps S41 to S45, the method described in steps S61 to S68 adds two fast mode decision techniques. First, it decides the CU partition mode in advance: if the optimal coding mode of the current CU is skip mode, the current CU is no longer divided into sub-CUs and the CU mode selection process stops. Second, it detects skip mode in advance: if the minimum-cost mode selected between the inter 2Nx2N and merge 2Nx2N modes has all residual coefficients and motion vector differences equal to zero, the optimal coding mode of the current CU can be decided in advance to be skip mode and the CU mode selection process stops.
However, this fast mode selection method, which detects skip mode in advance and stops sub-CU division early for skip-mode CUs, only saves a certain amount of computation for relatively static video scenes in which skip-mode CUs account for a large proportion of the picture. For ordinary video scenes with a certain amount of motion, calculating the cost of the inter 2Nx2N mode still involves highly complex motion estimation and cost computation, so the computational complexity remains very high and is still far from the encoder complexity required by practical applications.
For the problem of the high computational complexity of mode selection in video coding in the related art, no effective solution has yet been proposed.
Summary of the invention
The main purpose of the embodiments of the present invention is to provide a video encoder, an encoding method and apparatus, and an inter-frame mode selection method and apparatus thereof, so as to solve the problem of the high computational complexity of mode selection in video coding in the prior art.
According to one aspect of the embodiments of the present invention, an inter-frame mode selection method for video coding is provided.
The inter-frame mode selection method for video coding according to an embodiment of the present invention comprises: when it is decided to skip the calculation of the coding cost required to encode a target coding unit according to the current mode, skipping said first calculation and selecting the minimum coding cost from among the first coding costs, wherein said first calculation is the calculation of the coding cost required to encode said target coding unit according to said current mode, said first coding costs are the already calculated coding costs required to encode said target coding unit according to preceding modes, and a preceding mode is a mode that comes before said current mode; and, when a target parameter obtained by encoding said target coding unit with the mode corresponding to said minimum coding cost satisfies a preset condition, determining the mode corresponding to said minimum coding cost as the mode to be used for encoding said target coding unit.
According to another aspect of the embodiments of the present invention, a video encoding method is provided.
The video encoding method according to an embodiment of the present invention comprises: receiving video source data to be encoded; determining the frame coding type of each frame in said video source data to obtain inter-predicted frames and intra-predicted frames; determining the coding mode of said intra-predicted frames, and determining the coding mode of said inter-predicted frames with a preset method, wherein said preset method is any one of the inter-frame mode selection methods provided above in the embodiments of the present invention; and encoding said intra-predicted frames with a first mode and encoding said inter-predicted frames with a second mode, wherein said first mode is the determined coding mode of said intra-predicted frames and said second mode is the coding mode of said inter-predicted frames determined with said preset method.
According to another aspect of the embodiments of the present invention, an inter-frame mode selection apparatus for video coding is provided.
The inter-frame mode selection apparatus for video coding according to an embodiment of the present invention comprises: a selection unit, configured to skip said first calculation and select the minimum coding cost from among the first coding costs when it is decided to skip the calculation of the coding cost required to encode a target coding unit according to the current mode, wherein said first calculation is the calculation of the coding cost required to encode said target coding unit according to said current mode, said first coding costs are the already calculated coding costs required to encode said target coding unit according to preceding modes, and a preceding mode is a mode that comes before said current mode; and a first determination unit, configured to determine the mode corresponding to said minimum coding cost as the mode to be used for encoding said target coding unit when a target parameter obtained by encoding said target coding unit with the mode corresponding to said minimum coding cost satisfies a preset condition.
According to another aspect of the embodiments of the present invention, a video encoding apparatus is provided.
The video encoding apparatus according to an embodiment of the present invention comprises: a receiving unit, configured to receive video source data to be encoded; a frame type selection unit, configured to determine the frame coding type of each frame in said video source data to obtain inter-predicted frames and intra-predicted frames; a mode selection unit, configured to determine the coding mode of said intra-predicted frames and to determine the coding mode of said inter-predicted frames with a preset method, wherein said preset method is any one of the inter-frame mode selection methods provided above in the embodiments of the present invention; and a coding unit, configured to encode said intra-predicted frames with a first mode and to encode said inter-predicted frames with a second mode, wherein said first mode is the determined coding mode of said intra-predicted frames and said second mode is the coding mode of said inter-predicted frames determined with said preset method.
According to another aspect of the embodiments of the present invention, a video encoder is provided.
The video encoder according to an embodiment of the present invention comprises the inter-frame mode selection apparatus for video coding of any one of the embodiments provided above in the present invention.
In the embodiments of the present invention, before encoding a target coding unit according to the current mode, it is first judged whether the calculation of the coding cost required to encode the target coding unit according to the current mode can be skipped. When it is judged that the calculation can be skipped, the minimum coding cost is selected from among the coding costs already calculated, and the mode corresponding to this minimum coding cost is preliminarily taken as the current optimal coding mode of the target coding unit; it is then further judged whether a target parameter obtained by encoding the target coding unit with the mode corresponding to this minimum coding cost satisfies a preset condition, and if it does, the optimal coding mode of the target coding unit is determined to be the mode corresponding to the minimum coding cost. By skipping coding modes that are unlikely to be selected, this mode selection approach effectively reduces the computational complexity of the mode selection process and increases the coding speed, thereby solving the problem of the high computational complexity of mode selection in video coding in the prior art and achieving the effects of reduced complexity and increased coding speed.
Brief description of the drawings
The accompanying drawings, which form part of this application, are provided to facilitate a further understanding of the present invention; the schematic embodiments of the present invention and their descriptions are used to explain the present invention and do not constitute an improper limitation of the present invention. In the drawings:
Fig. 1 is an example of the partition structure of a coding tree unit according to the related art;
Fig. 2a to Fig. 2h are schematic diagrams of the prediction partition modes of a coding unit according to the related art;
Fig. 3 is a schematic diagram of the transform partition modes of a coding unit according to the related art;
Fig. 4 is a flow chart of mode selection for a coding unit according to the related art;
Fig. 5 is a schematic diagram of dividing a coding unit with the mode selection method of Fig. 4;
Fig. 6 is a flow chart of mode selection for a coding unit according to another example of the related art;
Fig. 7 is a flow chart of the inter-frame mode selection method for video coding according to an embodiment of the present invention;
Fig. 8a to Fig. 8c are schematic diagrams of the adjacent coding units CU' used by the inter-frame mode selection method of Fig. 7;
Fig. 9 is a flow chart of the inter-frame mode selection method for video coding according to a preferred embodiment of the present invention;
Fig. 10 is a flow chart of the video encoding method according to an embodiment of the present invention;
Fig. 11 is a schematic diagram of the inter-frame mode selection apparatus for video coding according to an embodiment of the present invention;
Fig. 12 is a schematic diagram of the video encoding apparatus according to an embodiment of the present invention; and
Fig. 13 is a schematic diagram of the video encoder according to an embodiment of the present invention.
Detailed description of the embodiments
In order to enable those skilled in the art to better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
It should be noted that the terms "first", "second" and the like in the description, the claims and the above drawings of the present invention are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data used in this way may be interchanged where appropriate, so that the embodiments of the invention described herein can be implemented in orders other than those illustrated or described here. In addition, the terms "comprise" and "have" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device comprising a series of steps or units is not necessarily limited to the steps or units explicitly listed, but may include other steps or units that are not explicitly listed or that are inherent to the process, method, product or device.
Embodiment 1
According to the embodiments of the present invention, a method embodiment that can be used to implement the apparatus embodiments of the present application is provided. It should be noted that the steps shown in the flow charts of the drawings can be executed in a computer system such as a set of computer-executable instructions, and that, although a logical order is shown in the flow charts, the steps shown or described may in some cases be executed in an order different from that given here.
According to an embodiment of the present invention, an inter-frame mode selection method for video coding is provided. The inter-frame mode selection method for video coding provided by the embodiment of the present invention is described in detail below.
Fig. 7 is a flow chart of the inter-frame mode selection method for video coding according to an embodiment of the present invention. As shown in Fig. 7, the method comprises the following steps S702 and S704:
S702: When it is decided to skip the calculation of the coding cost required to encode a target coding unit according to the current mode, skip the first calculation and select the minimum coding cost from among the first coding costs, wherein the first calculation is the calculation of the coding cost required to encode the target coding unit according to the current mode, the first coding costs are the already calculated coding costs required to encode the target coding unit according to preceding modes, and a preceding mode is a mode that comes before the current mode.
Before step S702, the inter-frame mode selection method for video coding of the embodiment of the present invention further comprises: judging whether to skip the calculation of the coding cost required to encode the target coding unit according to the current mode. The coding modes used to encode the target coding unit mainly include: the merge 2Nx2N mode (i.e., Merge 2Nx2N mode), the inter 2Nx2N mode (i.e., Inter 2Nx2N mode), the inter non-2Nx2N modes and the intra modes.
Specifically, the inter non-2Nx2N modes mainly include the inter NxN mode (i.e., Inter NxN mode), the inter Nx2N mode (i.e., Inter Nx2N mode) and the inter 2NxN mode (i.e., Inter 2NxN mode); if asymmetric prediction unit PU partitioning is allowed, the inter non-2Nx2N modes further include the inter nRx2N mode (i.e., Inter nRx2N mode), the inter nLx2N mode (i.e., Inter nLx2N mode), the inter 2NxnD mode (i.e., Inter 2NxnD mode) and the inter 2NxnU mode (i.e., Inter 2NxnU mode). Calculating the coding cost of encoding the target coding unit CU in the inter non-2Nx2N modes therefore means calculating in turn the coding costs of encoding the target coding unit CU in the inter NxN, inter Nx2N and inter 2NxN modes, and, if asymmetric prediction unit PU partitioning is allowed, additionally calculating in turn the coding costs of encoding the target coding unit CU in the inter nRx2N, inter nLx2N, inter 2NxnD and inter 2NxnU modes.
The intra modes include the intra 2Nx2N mode (i.e., Intra 2Nx2N mode) and the intra NxN mode (i.e., Intra NxN mode). Calculating the coding cost of encoding the target coding unit CU in the intra modes therefore means calculating in turn the coding costs of encoding the target coding unit CU in the intra 2Nx2N and intra NxN modes.
At the initial stage of the calculation, after the coding cost of encoding the target coding unit CU in the merge 2Nx2N mode has been calculated, it is judged whether to skip the calculation of the coding cost of encoding the target coding unit in the inter 2Nx2N mode; during the calculation, after the coding cost of encoding the target coding unit CU in some mode X has been calculated, it is judged whether to skip the calculation of the coding costs of encoding the target coding unit CU in one or more modes that follow mode X.
If, after the coding cost of encoding the target coding unit CU in the merge 2Nx2N mode has been calculated, it is decided to skip the calculation of the coding cost of encoding the target coding unit in the inter 2Nx2N mode, then the minimum coding cost is taken as MAX, where MAX = 2^a - 1 and a is the number of bits of the data type used for coding cost values; for example, if coding cost values are 32-bit unsigned integers, then MAX = 2^32 - 1.
S704: When a target parameter obtained by encoding the target coding unit with the mode corresponding to the minimum coding cost satisfies a preset condition, determine the mode corresponding to the minimum coding cost as the mode to be used for encoding the target coding unit. Before step S704, the inter-frame mode selection method of the embodiment of the present invention further comprises: judging whether the target parameter obtained by encoding the target coding unit with the mode corresponding to the minimum coding cost satisfies the preset condition. Specifically, this mainly means judging whether the mode corresponding to the minimum coding cost is skip mode (i.e., Skip mode); this can be done by judging whether the residual coefficients and the motion vector difference obtained by encoding the target coding unit with the mode corresponding to the minimum coding cost are zero, and when both are zero the mode can be determined to be skip mode. When it is judged that the mode corresponding to the minimum coding cost is not skip mode, it is determined that the preset condition is satisfied.
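As an illustration of the skip-mode check just described, a minimal C++ sketch (the type and field names are assumptions, not the patent's):

```cpp
#include <cstdint>
#include <vector>

// Illustrative container for the result of encoding a CU with one mode.
struct EncodeResult {
    std::vector<int32_t> residualCoeffs;   // quantized residual coefficients
    int32_t mvdX = 0;                      // motion vector difference, x
    int32_t mvdY = 0;                      // motion vector difference, y
};

// The mode corresponding to the minimum coding cost is skip mode when the
// residual coefficients and the motion vector difference are all zero.
bool IsSkipMode(const EncodeResult& r) {
    for (int32_t c : r.residualCoeffs)
        if (c != 0) return false;
    return r.mvdX == 0 && r.mvdY == 0;
}
```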
With the inter-frame mode selection method for video coding provided by the embodiment of the present invention, before encoding the target coding unit according to the current mode, it is first judged whether the calculation of the coding cost required to encode the target coding unit according to the current mode can be skipped. When it is judged that the calculation can be skipped, the minimum coding cost is selected from among the coding costs already calculated, and the mode corresponding to this minimum coding cost is preliminarily taken as the current optimal coding mode of the target coding unit; it is then further judged whether a target parameter obtained by encoding the target coding unit with the mode corresponding to this minimum coding cost satisfies a preset condition, and if it does, the optimal coding mode of the target coding unit is determined to be the mode corresponding to the minimum coding cost. By skipping coding modes that are unlikely to be selected, this mode selection approach effectively reduces the computational complexity of the mode selection process, increases the coding speed, and thus solves the problem of the high computational complexity of mode selection in video coding in the prior art.
The ways of judging whether to skip the calculation of the coding cost required to encode the target coding unit according to the current mode are illustrated below.
If the preceding mode is the merge 2Nx2N mode and the current mode is the inter 2Nx2N mode, whether to skip the calculation of the coding cost required to encode the target coding unit CU in the inter 2Nx2N mode can be judged in the following manner one (a code sketch of manner one is given after its description).
Manner one:
S101: Obtain the mode to be used for encoding the target coding unit CU (hereinafter referred to as the target mode), the residual coefficients and motion vector difference obtained by encoding the target coding unit CU in the merge 2Nx2N mode, and the coding costs of the coding units CU', where the coding units CU' are coding units that were encoded before the coding unit CU_Depth at the current depth Depth and that are temporally and/or spatially adjacent to the coding unit CU_Depth. Specifically, as shown in Fig. 8a to Fig. 8c, the coding units encoded before the target coding unit CU generally comprise: two coding units temporally adjacent to the target coding unit CU, denoted CU'_0 and CU'_1, and four coding units spatially adjacent to the target coding unit CU located to its left, top-left, top and top-right, denoted CU'_2, CU'_3, CU'_4 and CU'_5;
S102: Judge, according to the target mode, the residual coefficients, the motion vector difference and the coding costs of the coding units CU', whether to skip the calculation of the coding cost required to encode the target coding unit according to the current mode.
Specifically, judging whether to skip the calculation of the coding cost required to encode the target coding unit according to the current mode on the basis of the target mode, the residual coefficients, the motion vector difference and the coding costs of the coding units CU' mainly comprises the following steps S1021 to S1024:
S1021: Compute the value of a first variable from the residual coefficients and the motion vector difference. Specifically, first judge whether the residual coefficients and the motion vector difference are all zero. When they are all zero, set the first variable bMergeDetectSkip = 1 and set Cost_{x,Merge} = MAX; otherwise set bMergeDetectSkip = 0 and let Cost_{x,Merge} be the coding cost of encoding the coding unit CU_Depth in the merge 2Nx2N mode. Here MAX = 2^a - 1, where a is the number of bits of the data type of Cost_{x,Merge}; for example, if Cost_{x,Merge} is a 32-bit unsigned integer, then MAX = 2^32 - 1.
S1022: Compute the values of a second variable and a third variable from the target mode. Specifically, first judge whether the target mode has been pre-decided to be the merge 2Nx2N mode or an intra mode. When the target mode is pre-decided to be the merge 2Nx2N mode, set the second variable bFastDeciMerge = 1, otherwise set bFastDeciMerge = 0; when the target mode is pre-decided to be an intra mode, set the third variable bFastDeciIntra = 1, otherwise set bFastDeciIntra = 0.
S1023: Compute the value of a fourth variable from the coding cost of encoding the coding unit CU_Depth in the merge 2Nx2N mode and the coding costs of the coding units CU'. Specifically, when min_i(Cost'_{i,Merge}) ≠ MAX and min_i(Cost'_{i,Merge}) ≠ 0, judge whether Cost_{x,Merge} > T1 × min_i(Cost'_{i,Merge}) holds; when min_i(Cost'_{i,Merge}) = MAX or min_i(Cost'_{i,Merge}) = 0, judge whether Cost_{x,Merge} > T2 × min_i(Cost'_i) holds. Here Cost'_{i,Merge} is the coding cost of encoding coding unit CU'_i in the merge 2Nx2N mode, Depth_i is the depth of coding unit CU'_i, Cost'_i is the coding cost of coding unit CU'_i, i ∈ [0, 1, 2, 3, 4, 5], and T1 and T2 are preset multiples with T1 ≠ T2; typically T1 = 1, T2 = 1.2 and i = 2, 3, 4, 5 are used, and the value of T2 can also be adjusted appropriately according to the computational complexity requirements of the application. For the value of i, any subset of 0 to 5 can also be used to control the computational complexity: the larger the allowed complexity, the larger the T value, and vice versa. When the corresponding inequality holds, set the fourth variable bCheckFurther = 1; otherwise set bCheckFurther = 0.
S1024: Judge whether to skip the calculation of the coding cost required to encode the target coding unit according to the current mode on the basis of the first variable, the second variable, the third variable and the fourth variable. Specifically, judge whether condition 1 or condition 2 holds, where condition 1 is bMergeDetectSkip = 0 and bFastDeciMerge = 0 and bFastDeciIntra = 0, and condition 2 is bMergeDetectSkip = 1 and bCheckFurther = 1 and bFastDeciIntra = 0. When condition 1 or condition 2 holds, it is decided to skip the calculation of the coding cost required to encode the target coding unit according to the current mode.
That is, when condition 1 or condition 2 holds, it is decided to skip the calculation of the coding cost of encoding the target coding unit CU in the inter 2Nx2N mode; in other words, as long as either of condition 1 and condition 2 holds, it can be decided to skip the calculation of the coding cost of encoding the target coding unit CU in the inter 2Nx2N mode.
Further, the inter-frame mode selection method for video coding of the embodiment of the present invention also comprises obtaining the coding modes of the coding units CU', wherein judging whether the target mode is pre-decided to be the merge 2Nx2N mode or an intra mode comprises: judging whether there are N_m second coding units and N_i third coding units among the coding units CU', where a second coding unit is a coding unit encoded in the merge 2Nx2N mode, a third coding unit is a coding unit encoded in an intra mode, N_m is a second preset parameter and N_i is a third preset parameter; and, when it is judged that there are N_m second coding units among the coding units CU', pre-deciding that the optimal coding mode of the target coding unit is the merge 2Nx2N mode, and, when it is judged that there are N_i third coding units among the coding units CU', pre-deciding that the optimal coding mode of the target coding unit is an intra mode.
Specifically, the inter-frame mode selection method for video coding of the embodiment of the present invention also comprises obtaining the depth of the coding unit CU_Depth and the depths of the coding units CU', wherein:
Judging whether there are N_m second coding units and N_i third coding units among the coding units CU' comprises:
Judging whether M ≥ N_m and Cost_{x,Merge} < min_j(Cost'_j) and min_j(Cost'_j) ≠ MAX hold; when M ≥ N_m and Cost_{x,Merge} < min_j(Cost'_j) and min_j(Cost'_j) ≠ MAX hold, it is determined that there are N_m second coding units among the coding units CU'. Here M is the sum of any n_1 (1 ≤ n_1 ≤ 6) elements of the set [M_0, M_1, M_2, M_3, M_4, M_5], where M_i = 1 if the coding mode of coding unit CU'_i is the merge 2Nx2N mode, and M_i = 0 if the coding mode of coding unit CU'_i is not the merge 2Nx2N mode or coding unit CU'_i does not exist; Depth_j is the depth of coding unit CU'_j, Cost'_j is the coding cost of coding unit CU'_j, j ∈ [0, 1, 2, 3, 4, 5], and N_m is the second preset parameter. In other words, the above judgement checks whether the number of coding units CU', temporally and/or spatially adjacent to the target coding unit CU, that were encoded in the merge 2Nx2N mode reaches N_m. To balance coding efficiency and coding speed, typically N_m = 4 and j = 0, 1, 2, 4 are used. The value of N_m can also be reduced appropriately according to the computational complexity requirements of the application; when more units contribute to M, N_m is increased correspondingly, otherwise it is reduced. For the value of j, any subset of 0 to 5 can also be used to control the computational complexity.
Judging whether I ≥ N_i holds; when I ≥ N_i holds, it is determined that there are N_i third coding units among the coding units CU'. Here I is the sum of any n_2 (1 ≤ n_2 ≤ 6) elements of the set [I_0, I_1, I_2, I_3, I_4, I_5], where I_i = 1 if the coding mode of coding unit CU'_i is an intra mode, and I_i = 0 if the coding mode of coding unit CU'_i is not an intra mode or coding unit CU'_i does not exist, and N_i is the third preset parameter. In other words, this judges whether the number of coding units CU', temporally and/or spatially adjacent to the target coding unit CU, that were encoded in an intra mode reaches N_i. Typically N_i = 3 is used; the value of N_i can also be increased or reduced appropriately according to the computational complexity requirements of the application, and when more units contribute to I, N_i is increased correspondingly, otherwise it is reduced.
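A possible implementation of manner one, following steps S1021 to S1024 and the neighbour-based pre-decisions exactly as stated above; all names, types and default parameter values are illustrative assumptions:

```cpp
#include <algorithm>
#include <array>
#include <cstdint>
#include <limits>

constexpr uint32_t MAX_COST = std::numeric_limits<uint32_t>::max();  // 2^32 - 1

struct Neighbour {                 // one of the coding units CU'_0 .. CU'_5
    bool     exists       = false;
    bool     isMerge2Nx2N = false;
    bool     isIntra      = false;
    uint32_t cost         = MAX_COST;   // Cost'_i
    uint32_t mergeCost    = MAX_COST;   // Cost'_{i,Merge}
};

// Decide, following steps S1021 to S1024 as stated in the text, whether to
// skip calculating the inter 2Nx2N cost of the target CU.  T1, T2, Nm and Ni
// take the typical values given in the description.
bool SkipInter2Nx2N(const std::array<Neighbour, 6>& cu,
                    uint32_t mergeCostX,          // Cost_{x,Merge}
                    bool residualAndMvdZero,      // from the merge 2Nx2N pass
                    double T1 = 1.0, double T2 = 1.2,
                    int Nm = 4, int Ni = 3) {
    const bool bMergeDetectSkip = residualAndMvdZero;                 // S1021

    // Neighbour statistics used by the pre-decisions (S1022).
    int M = 0, I = 0;
    uint32_t minNeighbourCost = MAX_COST, minNeighbourMergeCost = MAX_COST;
    for (const Neighbour& n : cu) {
        if (!n.exists) continue;
        M += n.isMerge2Nx2N ? 1 : 0;
        I += n.isIntra ? 1 : 0;
        minNeighbourCost      = std::min(minNeighbourCost, n.cost);
        minNeighbourMergeCost = std::min(minNeighbourMergeCost, n.mergeCost);
    }
    const bool bFastDeciMerge = (M >= Nm) && (mergeCostX < minNeighbourCost) &&
                                (minNeighbourCost != MAX_COST);
    const bool bFastDeciIntra = (I >= Ni);

    // S1023: compare the merge cost of the target CU with its neighbours'.
    bool bCheckFurther;
    if (minNeighbourMergeCost != MAX_COST && minNeighbourMergeCost != 0)
        bCheckFurther = mergeCostX > T1 * minNeighbourMergeCost;
    else
        bCheckFurther = mergeCostX > T2 * minNeighbourCost;

    // S1024: condition 1 or condition 2, exactly as stated in the description.
    const bool cond1 = !bMergeDetectSkip && !bFastDeciMerge && !bFastDeciIntra;
    const bool cond2 = bMergeDetectSkip && bCheckFurther && !bFastDeciIntra;
    return cond1 || cond2;
}
```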
If the preceding mode is the merge 2Nx2N mode or the inter 2Nx2N mode, whether to skip the calculation of the coding costs of encoding the target coding unit CU in the inter non-2Nx2N modes can be judged in the following manner two (a code sketch is given after its description).
Manner two:
First, obtain the target mode, where the target mode is the mode to be used for encoding the target coding unit.
Next, compute the value of the third variable from the target mode. Specifically, judge whether the target mode is pre-decided to be an intra mode; if it is, set the third variable bFastDeciIntra = 1, otherwise set bFastDeciIntra = 0. The specific way of judging whether the target mode is pre-decided to be an intra mode is the same as described above and is not repeated here.
Then, judge whether to skip the calculation of the coding cost required to encode the target coding unit according to the current mode on the basis of the value of the third variable. Specifically, judge whether bFastDeciIntra = 1 holds; when bFastDeciIntra = 1 holds, it is decided to skip the calculation of the coding cost required to encode the target coding unit according to the current mode.
Further, the above manner two can be refined into the following steps S301 to S304:
S301: Obtain the coding units CU' that were encoded before the target coding unit and are temporally and/or spatially adjacent to it, together with their coding modes. Specifically, the coding units encoded before the target coding unit CU generally comprise: two coding units temporally adjacent to the target coding unit CU, denoted CU'_0 and CU'_1, and four coding units spatially adjacent to the target coding unit CU located to its left, top-left, top and top-right, denoted CU'_2, CU'_3, CU'_4 and CU'_5;
S302: Judge whether I ≥ N_i holds, where I is the sum of any n_2 (1 ≤ n_2 ≤ 6) elements of the set [I_0, I_1, I_2, I_3, I_4, I_5], I_i = 1 if the coding mode of coding unit CU'_i is an intra mode, I_i = 0 if the coding mode of coding unit CU'_i is not an intra mode or coding unit CU'_i does not exist, and N_i is the third preset parameter. Typically N_i = 3 is used; the value of N_i can also be increased or reduced appropriately according to the computational complexity requirements of the application, and when more units contribute to I, N_i is increased correspondingly, otherwise it is reduced;
S303: When I ≥ N_i holds, record bFastDeciIntra = 1; when I ≥ N_i does not hold, record bFastDeciIntra = 0; and
S304: Judge whether bFastDeciIntra = 1 holds,
wherein, when bFastDeciIntra = 1 holds, it is decided to skip the calculation of the coding costs of encoding the target coding unit in the inter non-2Nx2N modes.
Further, it can also be judged whether the minimum coding cost of the target coding unit CU after the merge 2Nx2N and inter 2Nx2N modes have been calculated is smaller than a threshold T8; if it is, it is decided to skip the calculation of the coding costs of encoding the target coding unit in the inter non-2Nx2N modes.
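A compact sketch of manner two as refined in steps S301 to S304, including the optional threshold T8 check; the structure and helper names are assumptions made for illustration:

```cpp
#include <array>
#include <cstdint>

struct NeighbourMode {
    bool exists  = false;
    bool isIntra = false;
};

// Manner two: skip the inter non-2Nx2N modes when at least Ni of the adjacent,
// already encoded coding units CU'_0..CU'_5 were coded in an intra mode, or
// when the best cost found so far is already below a threshold T8.
bool SkipInterNon2Nx2N(const std::array<NeighbourMode, 6>& cu,
                       uint64_t minCostSoFar,   // after merge/inter 2Nx2N
                       uint64_t thresholdT8,
                       int Ni = 3) {
    int I = 0;
    for (const NeighbourMode& n : cu)
        if (n.exists && n.isIntra) ++I;
    const bool bFastDeciIntra = (I >= Ni);        // steps S302/S303
    return bFastDeciIntra || (minCostSoFar < thresholdT8);
}
```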
If the preceding mode is the merge 2Nx2N mode, the inter 2Nx2N mode or an inter non-2Nx2N mode, whether to skip the calculation of the coding cost of encoding the target coding unit CU in the intra modes can be judged in the following manner three (a code sketch is given after its description).
Manner three:
S401: Obtain the coding units CU' that were encoded before the target coding unit and are temporally and/or spatially adjacent to it, together with their coding modes. Specifically, the coding units encoded before the target coding unit CU generally comprise: two coding units temporally adjacent to the target coding unit CU, denoted CU'_0 and CU'_1, and four coding units spatially adjacent to the target coding unit CU located to its left, top-left, top and top-right, denoted CU'_2, CU'_3, CU'_4 and CU'_5; and
S402: Judge whether any of the coding modes of the obtained coding units CU' is an intra mode,
wherein, when none of the coding modes of the obtained coding units CU' is an intra mode, it is decided to skip the calculation of the coding cost of encoding the target coding unit in the intra modes.
Further, it can also be judged whether the minimum coding cost of the target coding unit CU after the merge 2Nx2N, inter 2Nx2N and inter non-2Nx2N modes have been calculated is smaller than a threshold T10; if it is, it is decided to skip the calculation of the coding cost of encoding the target coding unit in the intra modes.
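Under the same illustrative assumptions as the previous sketches, manner three reduces to a check on the neighbours' coding modes plus the optional T10 threshold:

```cpp
#include <array>
#include <cstdint>

struct NeighbourInfo {
    bool exists  = false;
    bool isIntra = false;
};

// Manner three: skip the intra-mode cost calculation when none of the adjacent,
// already encoded coding units CU'_0..CU'_5 was coded in an intra mode, or when
// the best cost found so far is already below a threshold T10.
bool SkipIntraModes(const std::array<NeighbourInfo, 6>& cu,
                    uint64_t minCostSoFar,   // after the inter modes
                    uint64_t thresholdT10) {
    bool anyIntraNeighbour = false;
    for (const NeighbourInfo& n : cu)
        if (n.exists && n.isIntra) { anyIntraNeighbour = true; break; }
    return !anyIntraNeighbour || (minCostSoFar < thresholdT10);
}
```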
Fig. 9 is a detailed flow chart of the inter-frame mode selection method for video coding according to an embodiment of the present invention. As shown in Fig. 9, the inter-frame mode selection method for video coding of this embodiment mainly comprises the following steps S801 to S817 (a code sketch of this flow is given after the step list):
S801: Judge whether the depth of the target coding unit CU exceeds the maximum allowed depth. Specifically, obtain the depth of the target coding unit CU and compare it with the maximum allowed depth; if the obtained depth is smaller than the maximum allowed depth, determine that the depth of the target coding unit CU does not exceed the maximum allowed depth and continue with step S802, otherwise the flow ends.
S802: Judge whether to skip the calculation of the coding cost required to encode the target coding unit CU_Depth at the current depth (assumed to be Depth). If it is decided to skip, determine that the minimum coding cost of the target coding unit CU is MAX and jump to step S816.
S803: Calculate the coding cost of encoding the target coding unit CU in the merge 2Nx2N mode.
S804: Judge whether to skip the calculation of the coding cost of encoding the target coding unit CU in the inter 2Nx2N mode; the specific way of judging is the same as described above and is not repeated here. When it is decided to skip this calculation, jump to step S806, otherwise perform step S805.
S805: Calculate the coding cost of encoding the target coding unit CU in the inter 2Nx2N mode.
S806: Select the minimum coding cost from among the coding costs calculated so far, and take the mode corresponding to the minimum coding cost as the optimal mode of the target coding unit CU.
S807: Judge whether the optimal mode is skip mode. Specifically, this mainly means judging whether the residual coefficients and the motion vector difference obtained by encoding the target coding unit CU with the mode corresponding to the minimum coding cost are all zero; if they are, determine that the optimal mode is skip mode and the flow ends, otherwise step S808 must be performed to further select the inter mode of the target coding unit.
S808: Judge whether to stop the mode calculation of the target coding unit CU at the current depth. If the result is yes, jump to step S815, otherwise continue with step S809.
S809: Judge whether to skip the calculation of the coding costs of encoding the target coding unit CU in the inter non-2Nx2N modes; the specific way of judging is the same as described above and is not repeated here. If the result is yes, jump to step S811, otherwise continue with step S810.
S810: Calculate the coding costs of encoding the target coding unit CU in the inter non-2Nx2N modes. Specifically, calculate in turn the cost of encoding the target coding unit CU in the inter NxN, inter Nx2N and inter 2NxN modes; further, if asymmetric PU partitioning is allowed, also calculate in turn the cost of encoding in the four modes inter 2NxnU, inter 2NxnD, inter nLx2N and inter nRx2N. The inter NxN mode is only calculated when the depth of the target coding unit CU equals the maximum allowed depth, and if the inter_4x4_enabled_flag is 0, the cost of the inter NxN mode is not calculated when the size of the target coding unit CU is 8x8.
S811: Judge whether to skip the calculation of the coding cost of encoding the target coding unit CU in the intra modes; the specific way of judging is the same as described above and is not repeated here. If the result is yes, jump to step S813, otherwise continue with step S812.
S812: Calculate the coding cost of encoding the target coding unit CU in the intra modes. Specifically, calculate in turn the cost of encoding the target coding unit CU in the intra 2Nx2N and intra NxN modes, where the intra NxN mode is only calculated when the depth of the target coding unit CU equals the maximum allowed depth.
S813: Select the minimum coding cost from among the coding costs calculated so far, and take the mode corresponding to the minimum coding cost as the optimal mode of the target coding unit CU.
S814: Judge whether the optimal mode is skip mode. Specifically, this mainly means judging whether the residual coefficients and the motion vector difference obtained by encoding the target coding unit CU with the mode corresponding to the minimum coding cost are all zero; if they are, determine that the optimal mode is skip mode and the flow ends, otherwise step S815 must be performed to further select the inter mode of the target coding unit.
S815: Judge whether the target coding unit CU satisfies the condition for stopping division. If the result is yes, the flow ends and the optimal mode determined in step S813 is the optimal mode of the target coding unit CU, i.e., the target coding unit CU is encoded at the current depth; otherwise perform step S816.
S816: Divide the target coding unit CU into 4 target coding sub-units of depth Depth+1, then recursively invoke the above flow, calculate the sum of the coding costs of the 4 target coding sub-units and determine the optimal modes of the 4 target coding sub-units. That is, if it is decided to skip the calculation of the coding cost required to encode the coding unit CU_Depth, the coding unit CU_Depth is divided into 4 coding units CU_{Depth+1} of depth Depth+1, the above flow is invoked recursively with each coding unit CU_{Depth+1} as the target coding unit, the sum of the coding costs of the 4 coding units CU_{Depth+1} is calculated and their optimal modes are determined.
S817: Compare the minimum coding cost determined in step S814 with the sum of the coding costs of the 4 target coding sub-units. When the minimum coding cost is greater than the sum of the coding costs, determine that the optimal mode of the target coding unit is given by the optimal modes of the target coding sub-units after division; when the minimum coding cost is smaller than the sum of the coding costs, determine that the optimal mode of the target coding unit is the mode corresponding to the minimum coding cost. That is, compare a first coding cost with a second coding cost, where the first coding cost is the minimum coding cost of the coding unit CU_Depth and the second coding cost is the sum of the minimum coding costs of the coding units CU_{Depth+1}; the minimum coding cost of the coding unit CU_Depth is the same as the minimum coding cost of the target coding unit of depth Depth-1. When the first coding cost is smaller than the second coding cost, determine that the target coding unit at the current depth Depth is encoded with the mode corresponding to the first coding cost; when the first coding cost is greater than the second coding cost, determine that the target coding units at depth Depth+1 are each encoded with the mode corresponding to a third coding cost, where the third coding cost is the minimum coding cost of the corresponding target coding unit of depth Depth+1.
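The following sketch ties steps S801 to S817 together as one recursive routine. The skip-decision and cost hooks stand in for the manners described in this embodiment and are given trivial placeholder bodies so the sketch is self-contained; none of the names are taken from the patent.

```cpp
#include <algorithm>
#include <cstdint>
#include <limits>

constexpr uint32_t kMaxDepth = 3;
constexpr uint64_t MAX_COST  = std::numeric_limits<uint64_t>::max();

struct Cu { /* position, size, pixel data, ... */ };

// Placeholder hooks standing in for the skip decisions (manners one to five)
// and for the per-mode cost calculations; a real encoder would implement these.
bool SkipWholeCu(const Cu&, uint32_t)   { return false; }   // S802, manner five
bool SkipInter2Nx2N(const Cu&)          { return false; }   // S804, manner one
bool SkipInterNon2Nx2N(const Cu&)       { return false; }   // S809, manner two
bool SkipIntraModes(const Cu&)          { return false; }   // S811, manner three
bool StopSplitting(const Cu&)           { return false; }   // S815, manner four
bool StopModeCalc(const Cu&)            { return false; }   // S808
bool IsSkipMode(const Cu&, uint64_t)    { return false; }   // S807 / S814
uint64_t CostMerge2Nx2N(const Cu&)      { return 100; }     // S803
uint64_t CostInter2Nx2N(const Cu&)      { return 100; }     // S805
uint64_t CostInterNon2Nx2N(const Cu&)   { return 100; }     // S810
uint64_t CostIntraModes(const Cu&)      { return 100; }     // S812
Cu SubCu(const Cu& cu, int)             { return cu; }      // one of the 4 sub-CUs

// Steps S801 to S817: mode selection for one CU, with every expensive
// calculation guarded by a skip decision.
uint64_t SelectInterMode(const Cu& cu, uint32_t depth) {
    if (depth > kMaxDepth) return 0;                                  // S801

    uint64_t best = MAX_COST;
    if (!SkipWholeCu(cu, depth)) {                                    // S802
        best = CostMerge2Nx2N(cu);                                    // S803
        if (!SkipInter2Nx2N(cu))                                      // S804
            best = std::min(best, CostInter2Nx2N(cu));                // S805
        if (IsSkipMode(cu, best)) return best;                        // S806, S807
        if (!StopModeCalc(cu)) {                                      // S808
            if (!SkipInterNon2Nx2N(cu))                               // S809
                best = std::min(best, CostInterNon2Nx2N(cu));         // S810
            if (!SkipIntraModes(cu))                                  // S811
                best = std::min(best, CostIntraModes(cu));            // S812
        }
        if (IsSkipMode(cu, best)) return best;                        // S813, S814
        if (StopSplitting(cu) || depth == kMaxDepth) return best;     // S815
    }

    uint64_t splitCost = 0;                                           // S816
    for (int i = 0; i < 4; ++i)
        splitCost += SelectInterMode(SubCu(cu, i), depth + 1);
    return std::min(best, splitCost);                                 // S817
}
```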
Specifically, step S815 judges whether the target coding unit CU meets the condition for terminating the division mainly in the following manner four.
Mode four:
S901: obtain the depth of the target coding unit and the encoding cost of coding the target coding unit in the merge 2Nx2N mode;
S902: obtain the depth, the encoding cost and the coding mode of the coding units CU' that were coded before the target coding unit and are temporally and/or spatially adjacent to the target coding unit. Specifically, the coding units coded before the target coding unit CU generally comprise: two coding units temporally adjacent to the target coding unit CU, denoted CU'_0 and CU'_1, and four coding units spatially adjacent to the target coding unit CU located to its left, top-left, top and top-right, denoted CU'_2, CU'_3, CU'_4 and CU'_5;
S903: judge whether S >= N_S and Cost_{x,Merge} < min_j(Cost'_{j,Merge}) and min_j(Cost'_{j,Merge}) != MAX hold, wherein Cost_{x,Merge} is the encoding cost of coding the target coding unit in the merge 2Nx2N mode, MAX = 2^a - 1, and a is the number of bits of the value type of Cost_{x,Merge}; for example, if the value type of Cost_{x,Merge} is a 32-bit unsigned integer, then MAX = 2^32 - 1. S is the sum of any n_3 (1 <= n_3 <= 6) elements of the set [S_0, S_1, S_2, S_3, S_4, S_5], Depth is the depth of the target coding unit, Depth_i is the depth of CU'_i, Depth_j is the depth of CU'_j, Cost_{j,Merge} is the encoding cost of coding CU'_j in the merge 2Nx2N mode, j ∈ [0,1,2,3,4,5], and N_S is the fourth preset parameter. Usually N_S = 4, n_3 = 4 and j = 0,1,2,3,4,5; the value of N_S can also be increased or reduced appropriately according to the computational complexity requirement of the application, being increased correspondingly when more units contribute to S and reduced correspondingly otherwise;
S905: when S >= N_S and Cost_{x,Merge} < min_j(Cost'_{j,Merge}) and min_j(Cost'_{j,Merge}) != MAX do not all hold, divide the target coding unit to obtain multiple target coding sub-units; otherwise, terminate the division of the target coding unit CU.
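For clarity, the stop-splitting test of steps S903 and S905 can be sketched as below. The element formulas behind S_j and Cost'_{j,Merge} are not reproduced here, so the sketch assumes, consistently with the fourth-coding-unit definition given later for structure four, that S_j counts neighbouring coding units whose depth does not exceed the current depth and that the minimum is taken over the merge costs of those neighbours; the function and parameter names are hypothetical.

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

constexpr uint32_t kMaxCost = 0xFFFFFFFFu;   // MAX = 2^32 - 1

// Stop splitting when at least N_S neighbouring coding units are no deeper than the
// target CU and the current merge-2Nx2N cost already beats their best merge-2Nx2N cost.
bool stopSplitting(int depth, uint32_t mergeCost,
                   const std::vector<int>& neighbourDepths,          // Depth_j of CU'_j
                   const std::vector<uint32_t>& neighbourMergeCosts, // Cost_{j,Merge} of CU'_j
                   int Ns = 4)
{
    int s = 0;
    uint32_t minNeighbourMerge = kMaxCost;
    for (std::size_t j = 0; j < neighbourDepths.size(); ++j) {
        if (neighbourDepths[j] <= depth) {    // assumed contribution of S_j
            ++s;
            minNeighbourMerge = std::min(minNeighbourMerge, neighbourMergeCosts[j]);
        }
    }
    return s >= Ns && minNeighbourMerge != kMaxCost && mergeCost < minNeighbourMerge;
}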
Step S808 judges whether to terminate the mode calculation for the target coding unit CU at the current depth mainly by judging whether bMergeDetectSkip = 1 or bFastDeciMerge = 1 holds, wherein, when it is judged that bMergeDetectSkip = 1 or bFastDeciMerge = 1 holds, the flow jumps to step S815, and when neither holds, step S809 is performed.
Further, whether to terminate the mode calculation for the target coding unit CU at the current depth can also be judged by checking whether the minimum cost determined in step S806 is less than a threshold T7: if the minimum cost determined in step S806 is less than T7, the flow jumps to step S815; otherwise, step S809 continues to be performed. In the embodiment of the present invention, T8 > T7.
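A compact sketch of the early-termination checks described for step S808 and the threshold T7, assuming hypothetical names for the inputs:

#include <cstdint>

// Further modes at the current depth are no longer evaluated when the merge result
// already looks final or the minimum cost so far fell below the preset threshold T7
// (with T8 > T7 as stated above).
bool stopModeSearchAtCurrentDepth(bool bMergeDetectSkip, bool bFastDeciMerge,
                                  uint32_t minCostSoFar, uint32_t T7)
{
    return bMergeDetectSkip || bFastDeciMerge || minCostSoFar < T7;
}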
In step S802, whether the mode calculation for the target coding unit CU at the current depth can be skipped is judged mainly in the following manner five.
Mode five:
S1001: obtain the depth of the target coding unit and the depths of the coding units CU' that were coded before the target coding unit and are temporally and/or spatially adjacent to it. Specifically, the coding units coded before the target coding unit CU generally comprise: two coding units temporally adjacent to the target coding unit CU, denoted CU'_0 and CU'_1, and four coding units spatially adjacent to the target coding unit CU located to its left, top-left, top and top-right, denoted CU'_2, CU'_3, CU'_4 and CU'_5; and
S1002: judge whether C >= N_C holds, wherein C is the sum of any n_4 (1 <= n_4 <= 6) elements of the set [C_0, C_1, C_2, C_3, C_4, C_5] and N_C is the first preset parameter; that is, judge whether, among the coding units CU' temporally and/or spatially adjacent to the target coding unit CU, the number of coding units CU' whose depth is greater than the depth of the target coding unit CU reaches N_C. Generally N_C = 6 and n_4 = 6; the value of N_C can also be increased or reduced appropriately according to the computational complexity requirement of the application, being increased correspondingly when more units contribute to C and reduced correspondingly otherwise. When C >= N_C holds, the flow jumps to step S816; when C >= N_C does not hold, step S803 is performed.
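The mode-five test of steps S1001 and S1002 amounts to counting deeper neighbours; a minimal sketch, with hypothetical names, is:

#include <vector>

// The calculation at the current depth is skipped, and the CU is divided directly, when
// at least N_C neighbouring coding units were coded at a depth greater than the depth of
// the target CU.
bool skipCurrentDepth(int depth, const std::vector<int>& neighbourDepths, int Nc = 6)
{
    int c = 0;
    for (int d : neighbourDepths)
        if (d > depth)                        // C_i contributes 1 when CU'_i is deeper
            ++c;
    return c >= Nc;
}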
The inter-frame mode selection method of video coding provided by this embodiment uses the correlation of coding modes and encoding costs in the temporal and spatial domains so that, besides skipping or terminating early the coding and cost calculation of the less likely coding modes, the coding and cost calculation of the less likely coding unit division modes is also skipped or terminated early, which further reduces the computational complexity of the mode selection process and improves the coding speed. Experiments in the HEVC reference software show that applying the inter-frame mode selection method provided by the embodiment of the present invention to the HEVC standard test sequences improves the coding speed by about 50% on average while keeping the coding efficiency loss within 1%.
Embodiment 2
The embodiment of the present invention additionally provides a kind of method for video coding, and the method for video coding provided the embodiment of the present invention below does concrete introduction:
Figure 10 is a flow chart of the video coding method according to the embodiment of the present invention. As shown in Figure 10, this video coding method mainly comprises the following steps S1102 to S1108:
S1102: receive video source data to be encoded.
S1104: determine the encoding frame type of each frame in the video source data, i.e. distinguish inter-prediction frames from intra-prediction frames.
S1106: determine the coding mode of the intra-prediction frames, and determine the coding mode of the inter-prediction frames with a preset method, wherein the preset method is any one of the inter-frame mode selection methods provided in the foregoing of the embodiment of the present invention.
S1108: encode the intra-prediction frames in a first mode and encode the inter-prediction frames in a second mode, wherein the first mode is the determined coding mode of the intra-prediction frames and the second mode is the coding mode of the inter-prediction frames determined with the preset method.
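Steps S1102 to S1108 can be summarised by the following outline; the types and helper functions are hypothetical stubs, and the real inter-frame path would apply the inter-frame mode selection method described above.

#include <vector>

enum class FrameType { Intra, Inter };

struct Frame { };       // raw picture data would live here
struct CodedFrame { };  // entropy-coded output would live here

// Hypothetical stubs standing in for steps S1104 to S1108.
FrameType  decideFrameType(const Frame&) { return FrameType::Inter; }  // S1104
CodedFrame encodeIntraFrame(const Frame&) { return {}; }               // S1108, first mode
CodedFrame encodeInterFrame(const Frame&) { return {}; }               // S1108, second mode

std::vector<CodedFrame> encodeSequence(const std::vector<Frame>& source) // S1102: input
{
    std::vector<CodedFrame> out;
    for (const Frame& f : source)
        out.push_back(decideFrameType(f) == FrameType::Intra ? encodeIntraFrame(f)
                                                             : encodeInterFrame(f));
    return out;
}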
The method for video coding that the embodiment of the present invention provides, by the inter-frame mode selecting method adopting embodiment of the present invention foregoing to provide, achieve the computation complexity effectively reducing and in video coding process, model selection is calculated, and then reach the effect improving coding rate.
It should be noted that, for aforesaid each embodiment of the method, in order to simple description, therefore it is all expressed as a series of combination of actions, but those skilled in the art should know, the present invention is not by the restriction of described sequence of movement, because according to the present invention, some step can adopt other orders or carry out simultaneously.Secondly, those skilled in the art also should know, the embodiment described in specification all belongs to preferred embodiment, and involved action and module might not be that the present invention is necessary.
Through the above description of the embodiments, those skilled in the art can be well understood to the mode that can add required general hardware platform by software according to the method for above-described embodiment and realize, hardware can certainly be passed through, but in a lot of situation, the former is better execution mode.Based on such understanding, technical scheme of the present invention can embody with the form of software product the part that prior art contributes in essence in other words, this computer software product is stored in a storage medium (as ROM/RAM, magnetic disc, CD), comprising some instructions in order to make a station terminal equipment (can be mobile phone, computer, server, or the network equipment etc.) perform method described in each embodiment of the present invention.
Embodiment 3
According to the embodiment of the present invention, additionally provide a kind of inter mode decision device of Video coding of the inter-frame mode selecting method for implementing above-mentioned Video coding, this inter mode decision device is mainly used in the inter-frame mode selecting method that execution embodiment of the present invention foregoing provides, and does concrete introduction below to the inter mode decision device of the Video coding that the embodiment of the present invention provides:
Figure 11 is a schematic diagram of the inter mode decision device of video coding according to the embodiment of the present invention. As shown in Figure 11, this inter mode decision device mainly comprises a selection unit 20 and a first determining unit 40, wherein:
The selection unit 20 is configured to, when the calculation of the encoding cost needed for coding the target coding unit according to the current mode is skipped, skip the first calculation and select the minimum encoding cost from among the first encoding costs, wherein the first calculation is the calculation of the encoding cost needed for coding the target coding unit according to the current mode, the first encoding costs are the encoding costs, already calculated, needed for coding the target coding unit in the prior modes, and a prior mode is a mode that precedes the current mode.
The inter mode decision device of video coding of the embodiment of the present invention further comprises a unit for judging whether to skip the calculation of the encoding cost needed for coding the target coding unit according to the current mode. The coding modes used for coding the target coding unit mainly comprise: the merge 2Nx2N mode (i.e. Merge 2Nx2N), the inter 2Nx2N mode (i.e. Inter 2Nx2N), the inter non-2Nx2N modes and the intra modes.
Specifically, the inter non-2Nx2N modes mainly comprise the inter NxN mode (i.e. Inter NxN), the inter Nx2N mode (i.e. Inter Nx2N) and the inter 2NxN mode (i.e. Inter 2NxN); when asymmetric prediction unit PU partitioning is allowed, the inter non-2Nx2N modes also comprise the inter nRx2N mode (i.e. Inter nRx2N), the inter nLx2N mode (i.e. Inter nLx2N), the inter 2NxnD mode (i.e. Inter 2NxnD) and the inter 2NxnU mode (i.e. Inter 2NxnU). Calculating the encoding cost of coding the target coding unit CU in the inter non-2Nx2N modes therefore means calculating in turn the costs of the inter NxN, inter Nx2N and inter 2NxN modes for the target coding unit CU and, when asymmetric prediction unit PU partitioning is allowed, calculating in turn the costs of the inter nRx2N, inter nLx2N, inter 2NxnD and inter 2NxnU modes for the target coding unit CU.
The intra modes comprise the intra 2Nx2N mode (i.e. Intra 2Nx2N) and the intra NxN mode (i.e. Intra NxN); calculating the encoding cost of coding the target coding unit CU in intra mode therefore means calculating in turn the costs of the intra 2Nx2N and intra NxN modes for the target coding unit CU.
At the initial stage of the calculation, after the encoding cost of coding the target coding unit CU in the merge 2Nx2N mode has been calculated, it is judged whether to skip the calculation of the encoding cost of coding the target coding unit in the inter 2Nx2N mode; during the calculation, after the encoding cost of coding the target coding unit CU in a mode X has been calculated, it is judged whether to skip the calculation of the encoding cost of coding the target coding unit CU in one or more of the modes following mode X.
If, after the encoding cost of coding the target coding unit CU in the merge 2Nx2N mode has been calculated, it is judged that the calculation of the encoding cost of coding the target coding unit in the inter 2Nx2N mode is skipped, then the minimum encoding cost is MAX, where MAX = 2^a - 1 and a is the number of bits of the value type of the encoding cost; for example, if the value type of the encoding cost is a 32-bit unsigned integer, then MAX = 2^32 - 1.
The first determining unit 40 is configured to determine, when the target parameters obtained by coding the target coding unit in the mode corresponding to the minimum encoding cost meet the preset condition, that the mode corresponding to the minimum encoding cost is the mode used for coding the target coding unit. The inter mode decision device of the embodiment of the present invention further comprises a unit for judging whether those target parameters meet the preset condition; specifically, it is mainly judged whether the mode corresponding to the minimum encoding cost is the skip mode (i.e. the Skip mode), which can be done by judging whether the residual coefficients and the motion vector difference obtained by coding the target coding unit in that mode are all zero: when both are zero, the mode is determined to be the skip mode, and when the mode corresponding to the minimum encoding cost is judged not to be the skip mode, the preset condition is determined to be met.
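A minimal sketch of the skip-mode test used when checking the preset condition, assuming the residual coefficients and the motion vector difference are available as plain integers:

#include <vector>

// The mode giving the minimum cost is treated as the Skip mode when the residual
// coefficients and the motion vector difference it produced are all zero; the preset
// condition is met when it is NOT Skip.
bool isSkipMode(const std::vector<int>& residualCoeffs, int mvdX, int mvdY)
{
    if (mvdX != 0 || mvdY != 0)
        return false;
    for (int c : residualCoeffs)
        if (c != 0)
            return false;
    return true;
}

bool presetConditionMet(const std::vector<int>& residualCoeffs, int mvdX, int mvdY)
{
    return !isSkipMode(residualCoeffs, mvdX, mvdY);
}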
The inter mode decision device of the Video coding that the embodiment of the present invention provides, adopt before according to present mode target code unit being encoded, first judge whether to skip the calculating calculating the encoding overhead needed for according to present mode target code unit being encoded, and when judging to skip, minimum code expense is selected from the encoding overhead calculated, tentatively determine the current optimum code pattern of pattern corresponding to this minimum code expense as target code unit, judge to adopt pattern corresponding to this minimum code expense whether to meet pre-conditioned to the target component that target code unit is encoded further, if meet pre-conditioned, then determine that the optimum code pattern of target code unit is above-mentioned minimum code expense associative mode, the model selection mode of the less coded system of possibility is skipped in this kind of employing, achieve the computation complexity effectively reducing mode selection processes, improve coding rate, solve the problem that the computation complexity that carries out model selection in prior art in Video coding is higher, and then reach reduction complexity, improve the effect of coding rate.
Below illustrate the structure composition that inter mode decision device performs the calculating judging whether to skip the encoding overhead needed for encoding to target code unit according to present mode.
If in first pattern for merging 2Nx2N pattern, present mode is interframe 2Nx2N pattern, and inter mode decision device can adopt the unit in following structure one to judge whether to skip the encoding overhead calculating and encode to target code unit CU according to interframe 2Nx2N pattern.
Structure one: inter mode decision device mainly comprises the first acquiring unit and the second judging unit, particularly:
The second acquiring unit is mainly configured to obtain the mode used for coding the target coding unit CU (hereinafter referred to as the target mode), the residual coefficients and the motion vector difference obtained by coding the target coding unit CU in the merge 2Nx2N mode, and the encoding costs of the coding units CU', wherein the coding units CU' are the coding units that were coded before the coding unit CU_Depth at the current depth Depth and are temporally and/or spatially adjacent to the coding unit CU_Depth. Specifically, as shown in Fig. 8a to 8c, the coding units coded before the target coding unit CU generally comprise: two coding units temporally adjacent to the target coding unit CU, denoted CU'_0 and CU'_1, and four coding units spatially adjacent to the target coding unit CU located to its left, top-left, top and top-right, denoted CU'_2, CU'_3, CU'_4 and CU'_5;
Second judging unit is mainly used in judging whether to skip according to the encoding overhead of target pattern, residual error coefficient, motion vector difference and coding unit CU ' coding calculating the encoding overhead needed for encoding to target code unit according to present mode.
Further, the second judging unit mainly comprises the first computing module, the second computing module, the 3rd computing module and the first judge module, wherein,
The first calculating module is configured to calculate the value of the first variable from the residual coefficients and the motion vector difference. Specifically, the first calculating module mainly comprises a first judging submodule and a first determining submodule, wherein the first judging submodule is configured to judge whether the residual coefficients and the motion vector difference are all zero, and the first determining submodule is configured to set the first variable bMergeDetectSkip = 1 and Cost_{x,Merge} = MAX when the first judging submodule judges that the residual coefficients and the motion vector difference are all zero, or to set the first variable bMergeDetectSkip = 0 and Cost_{x,Merge} to the encoding cost of coding the coding unit CU_Depth in the merge 2Nx2N mode when it judges that they are not all zero, wherein MAX = 2^a - 1 and a is the number of bits of the value type of Cost_{x,Merge}; for example, if the value type of Cost_{x,Merge} is a 32-bit unsigned integer, then MAX = 2^32 - 1.
Second computing module is used for the size calculating the second variable and ternary according to target pattern, particularly, second computing module mainly comprises the second judgement submodule and second and determines submodule, wherein, second judges that submodule merges 2Nx2N pattern or frame mode for judging whether target pattern is judged in advance, second determines that submodule is for when second judges that submodule judges that target pattern is judged to fusion 2Nx2N pattern in advance, determine the second variable bFastDeciMerge=1, otherwise, determine the second variable bFastDeciMerge=0, or when judging that target pattern is judged to frame mode in advance, determine ternary bFastDeciIntra=1, otherwise, determine ternary bFastDeciIntra=0.
The third calculating module is configured to calculate the value of the fourth variable from the encoding cost of coding the coding unit CU_Depth in the merge 2Nx2N mode and the encoding costs of the coding units CU'. Specifically, the third calculating module mainly comprises a third judging submodule and a third determining submodule, wherein the third judging submodule is configured to judge, when min_i(Cost'_{i,Merge}) != MAX and min_i(Cost'_{i,Merge}) != 0, whether Cost_{x,Merge} > T1 × min_i(Cost'_{i,Merge}) holds, or to judge, when min_i(Cost'_{i,Merge}) = MAX or min_i(Cost'_{i,Merge}) = 0, whether Cost_{x,Merge} > T2 × min_i(Cost'_i) holds, wherein Cost_{i,Merge} is the encoding cost of coding CU'_i in the merge 2Nx2N mode, Depth_i is the depth of CU'_i, Cost_i is the encoding cost of CU'_i, i ∈ [0,1,2,3,4,5], T1 and T2 are preset multiples with T1 != T2, and typically T1 = 1, T2 = 1.2 and i = 2,3,4,5; the value of T2 can also be adjusted appropriately according to the computational complexity requirement of the application, and for the value of i any subset of 0 to 5 can be taken to control the computational complexity (the larger the allowed complexity, the larger the T values, and vice versa). The third determining submodule is configured to set the fourth variable bCheckFurther = 1 when the third judging submodule judges that the corresponding inequality holds, and to set bCheckFurther = 0 otherwise.
The first judging module is configured to judge, from the values of the first variable, the second variable, the third variable and the fourth variable, whether to skip the calculation of the encoding cost needed for coding the target coding unit according to the current mode. Specifically, the first judging module mainly comprises a fourth judging submodule and a fourth determining submodule, wherein the fourth judging submodule is configured to judge whether condition 1 or condition 2 holds, condition 1 being bMergeDetectSkip = 0 and bFastDeciMerge = 0 and bFastDeciIntra = 0, and condition 2 being bMergeDetectSkip = 1 and bCheckFurther = 1 and bFastDeciIntra = 0, and the fourth determining submodule is configured to determine, when the fourth judging submodule judges that condition 1 or condition 2 holds, that the calculation of the encoding cost needed for coding the target coding unit according to the current mode is skipped.
Wherein, when it is judged that condition 1 or condition 2 holds, it is determined that the calculation of the encoding cost of coding the target coding unit CU in the inter 2Nx2N mode is skipped; that is, as long as either of condition 1 and condition 2 holds, it can be determined that the calculation of the encoding cost of coding the target coding unit CU in the inter 2Nx2N mode is skipped.
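The combination of the four flags into condition 1 and condition 2 can be sketched as follows; the structure and function names are hypothetical, and the function reproduces the conditions exactly as stated above (the calculation is skipped when it returns true).

// Sketch of the first judging module combining the flags computed by structure one.
struct SkipFlags {
    bool bMergeDetectSkip;  // merge 2Nx2N residual coefficients and MVD are all zero
    bool bFastDeciMerge;    // best mode pre-decided as merge 2Nx2N
    bool bFastDeciIntra;    // best mode pre-decided as intra
    bool bCheckFurther;     // merge cost clearly above the neighbouring costs
};

bool skipInter2Nx2NCost(const SkipFlags& f)
{
    const bool condition1 = !f.bMergeDetectSkip && !f.bFastDeciMerge && !f.bFastDeciIntra;
    const bool condition2 =  f.bMergeDetectSkip &&  f.bCheckFurther  && !f.bFastDeciIntra;
    return condition1 || condition2;
}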
Further, the inter mode decision device of the Video coding of the embodiment of the present invention also comprises the unit of the coding mode obtaining coding unit CU ', wherein, second judges that submodule mainly comprises one-level and judges submodule one and one-level determination submodule one, and this one-level judges that submodule one is for judging whether there is N in coding unit CU ' mindividual second coding unit and N iindividual 3rd coding unit, wherein, the second coding unit is the coding unit according to merging 2Nx2N pattern and carrying out encoding, and the 3rd coding unit is the coding unit according to frame mode coding, N mbe the second parameter preset, N iit is the 3rd parameter preset; One-level determination submodule one is for judging that in one-level submodule one judges to there is N in coding unit CU ' mwhen individual second coding unit, determine that the optimum code pattern of target code unit is judged in advance and merge 2Nx2N pattern, judge that submodule one judges to there is N in coding unit CU ' in one-level iwhen individual 3rd coding unit, determine that the optimum code pattern of target code unit is judged to frame mode in advance.
Particularly, the inter mode decision device of the Video coding of the embodiment of the present invention also comprises for obtaining coding unit CU depththe degree of depth and the unit of the degree of depth of coding unit CU ', wherein:
One-level judges that submodule one comprises secondary and judges that submodule one and secondary judge submodule two, particularly:
The secondary judging submodule one is configured to judge whether M >= N_M and Cost_{x,Merge} < min_j(Cost'_j) and min_j(Cost'_j) != MAX hold, wherein, when these conditions are judged to hold, it is determined that there are N_M second coding units among the coding units CU'. M is the sum of any n_1 (1 <= n_1 <= 6) elements of the set [M_0, M_1, M_2, M_3, M_4, M_5], where M_i = 1 if the coding mode of CU'_i is the merge 2Nx2N mode, and M_i = 0 if the coding mode of CU'_i is not the merge 2Nx2N mode or CU'_i does not exist; Depth_j is the depth of CU'_j, Cost_j is the encoding cost of CU'_j, j ∈ [0,1,2,3,4,5], and N_M is the second preset parameter. The above judgment therefore checks whether, among the coding units CU' temporally and/or spatially adjacent to the target coding unit CU, the number of coding units CU' coded in the merge 2Nx2N mode reaches N_M. To balance coding efficiency and coding speed, generally N_M = 4 and j = 0,1,2,4; the value of N_M can also be reduced appropriately according to the computational complexity requirement of the application, being increased correspondingly when more units contribute to M and reduced correspondingly otherwise. For the value of j, any subset of 0 to 5 can also be taken to control the computational complexity.
The secondary judging submodule two is configured to judge whether I >= N_I holds, wherein, when I >= N_I is judged to hold, it is determined that there are N_I third coding units among the coding units CU'. I is the sum of any n_2 (1 <= n_2 <= 6) elements of the set [I_0, I_1, I_2, I_3, I_4, I_5], where I_i = 1 if the coding mode of CU'_i is an intra mode, and I_i = 0 if the coding mode of CU'_i is not an intra mode or CU'_i does not exist; N_I is the third preset parameter. That is, it is judged whether, among the coding units CU' temporally and/or spatially adjacent to the target coding unit CU, the number of coding units CU' coded in an intra mode reaches N_I. Generally N_I = 3; the value of N_I can also be increased or reduced appropriately according to the computational complexity requirement of the application, being increased correspondingly when more units contribute to I and reduced correspondingly otherwise.
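The two secondary judging submodules amount to counting how many neighbours were coded in the merge 2Nx2N mode or in an intra mode. The sketch below uses hypothetical types, and restricting the cost minimum to the merge-coded neighbours is an assumption, since the primed cost formulas are not reproduced here.

#include <algorithm>
#include <cstdint>
#include <vector>

constexpr uint32_t kMaxCost = 0xFFFFFFFFu;

enum class CodedMode { Merge2Nx2N, Intra, Other, NotAvailable };

struct Neighbour {                 // one of CU'_0 ... CU'_5
    CodedMode mode = CodedMode::NotAvailable;
    uint32_t  cost = kMaxCost;     // Cost_j of CU'_j
};

// Secondary judging submodule one: pre-decide merge 2Nx2N when at least N_M neighbours
// used it and the current merge cost beats their best cost.
bool preDecideMerge(uint32_t mergeCost, const std::vector<Neighbour>& nb, int Nm = 4)
{
    int m = 0;
    uint32_t minNeighbourCost = kMaxCost;
    for (const Neighbour& n : nb)
        if (n.mode == CodedMode::Merge2Nx2N) {   // M_j contributes 1
            ++m;
            minNeighbourCost = std::min(minNeighbourCost, n.cost);
        }
    return m >= Nm && minNeighbourCost != kMaxCost && mergeCost < minNeighbourCost;
}

// Secondary judging submodule two: pre-decide intra when at least N_I neighbours were
// intra coded.
bool preDecideIntra(const std::vector<Neighbour>& nb, int Ni = 3)
{
    int i = 0;
    for (const Neighbour& n : nb)
        if (n.mode == CodedMode::Intra)          // I_j contributes 1
            ++i;
    return i >= Ni;
}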
If in first pattern for merging 2Nx2N pattern or interframe 2Nx2N pattern, inter mode decision device can adopt the unit in following structure two to judge whether to skip the encoding overhead calculating and encode to target code unit CU according to the non-2Nx2N pattern of interframe.
Structure two: inter mode decision device mainly comprises second acquisition unit, computing unit and the second judging unit, particularly:
Second acquisition unit is for obtaining target pattern, and wherein, target pattern is adopted pattern of encoding to target code unit.
Computing unit is used for the size calculating ternary according to target pattern, particularly, computing unit mainly comprises the second judge module and determination module, wherein, second judge module is for judging whether target pattern is judged to frame mode in advance, and determination module is used for when the second judge module judges that target pattern is judged to frame mode in advance, determines ternary bFastDeciIntra=1, otherwise, determine ternary bFastDeciIntra=0.
3rd judging unit is used for the calculating judging whether to skip the encoding overhead needed for encoding to target code unit according to present mode according to the size of ternary, particularly, 3rd judging unit mainly comprises the 3rd judge module, 3rd judge module is mainly used in judging whether bFastDeciIntra=1 sets up, wherein, when judging that bFastDeciIntra=1 sets up, determine to skip the calculating of the encoding overhead needed for according to present mode target code unit being encoded.
Wherein, judge whether target pattern is judged to frame mode in advance mainly through following steps S301 to step S304:
S301: obtain and encoded before target code unit, and with the coding mode of target code unit time domain and/or the adjacent coding unit CU ' in spatial domain and coding unit CU ', particularly, the coding unit of encoding before target code unit CU generally comprises: two coding units adjacent with target code unit CU time domain, are designated as coding unit CU ' respectively 0with coding unit CU 1', adjacent with target code unit CU spatial domain and be positioned at four coding units at target code unit CU left position, top-left position, upper position and upper-right position place, be designated as coding unit CU ' respectively 2, coding unit CU ' 3, coding unit CU ' 4with coding unit CU ' 5;
S302: judge whether I >= N_I holds, wherein I is the sum of any n_2 (1 <= n_2 <= 6) elements of the set [I_0, I_1, I_2, I_3, I_4, I_5], I_i = 1 if the coding mode of CU'_i is an intra mode, I_i = 0 if the coding mode of CU'_i is not an intra mode or CU'_i does not exist, and N_I is the third preset parameter. Generally N_I = 3; the value of N_I can also be increased or reduced appropriately according to the computational complexity requirement of the application, being increased correspondingly when more units contribute to I and reduced correspondingly otherwise;
S303: judging I>=N iwhen setting up, determine that target pattern is judged to frame mode in advance, record bFastDeciIntra=1, is judging I>=N iin invalid situation, determine that target pattern can not be judged to frame mode in advance, record bFastDeciIntra=0.
Further, it can also be judged whether the minimum encoding cost of the target coding unit CU after the merge 2Nx2N mode and the inter 2Nx2N mode have been calculated is less than a threshold T8; if that minimum encoding cost is less than T8, it is determined that the calculation of the encoding cost of coding the target coding unit in the inter non-2Nx2N modes is skipped.
If in first pattern for merging 2Nx2N pattern or interframe non-2Nx2N pattern or the non-2Nx2N pattern of interframe, inter mode decision device can adopt the unit in following structure three to judge whether to skip the encoding overhead calculating and encode to target code unit CU according to frame mode.
Structure three: inter mode decision device mainly comprises the 3rd acquiring unit and the 4th judging unit, particularly:
3rd acquiring unit is used for obtaining encoded before target code unit, and with the coding mode of target code unit time domain and/or the adjacent coding unit CU ' in spatial domain and coding unit CU ', particularly, the coding unit of encoding before target code unit CU generally comprises: two coding units adjacent with target code unit CU time domain, are designated as coding unit CU ' respectively 0with coding unit CU 1', adjacent with target code unit CU spatial domain and be positioned at four coding units at target code unit CU left position, top-left position, upper position and upper-right position place, be designated as coding unit CU ' respectively 2, coding unit CU ' 3, coding unit CU ' 4with coding unit CU ' 5;
4th judging unit for judging whether the coding mode of the coding unit CU ' got exists frame mode,
Wherein, when the coding mode of the coding unit CU ' judging to get does not exist frame mode, determine to skip calculating according to the encoding overhead of frame mode to target code cell encoding.
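A minimal sketch of the fourth judging unit of structure three, with a hypothetical NeighbourMode type: the intra cost calculation is skipped when no neighbouring coding unit was intra coded.

#include <vector>

enum class NeighbourMode { Merge2Nx2N, Inter, Intra, NotAvailable };

// Skip the intra cost calculation when none of the neighbouring coding units CU' was
// coded in an intra mode.
bool skipIntraCostCalculation(const std::vector<NeighbourMode>& neighbourModes)
{
    for (NeighbourMode m : neighbourModes)
        if (m == NeighbourMode::Intra)
            return false;
    return true;
}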
Further, it can also be judged whether the minimum encoding cost of the target coding unit CU after the merge 2Nx2N mode, the inter 2Nx2N mode and the inter non-2Nx2N modes have been calculated is less than a threshold T10; if that minimum encoding cost is less than T10, it is determined that the calculation of the encoding cost of coding the target coding unit in intra mode is skipped.
Further, the inter mode decision device of the Video coding of the embodiment of the present invention also comprises by the first judging unit of repeating to call and the second determining unit, before the calculating judging whether to skip the encoding overhead needed for according to present mode target code unit being encoded, inter mode decision device repeats to call the first judging unit and the second determining unit performs following function, until the degree of depth of described target code unit reaches preset maximum depth, wherein, the initial value of current depth Depth is 1:
The first judging unit judges whether to skip the calculation of the encoding cost needed for coding the coding unit CU_Depth at the current depth Depth. Specifically, the first judging unit mainly comprises a first acquiring subunit and a first judging subunit. The first acquiring subunit is configured to obtain the depth of the target coding unit and the depths of the coding units CU' that were coded before the target coding unit and are temporally and/or spatially adjacent to it; specifically, the coding units coded before the target coding unit CU generally comprise: two coding units temporally adjacent to the target coding unit CU, denoted CU'_0 and CU'_1, and four coding units spatially adjacent to the target coding unit CU located to its left, top-left, top and top-right, denoted CU'_2, CU'_3, CU'_4 and CU'_5. The first judging subunit is configured to judge whether there are N_C first coding units among the coding units CU', the depth of a first coding unit being greater than the current depth Depth; that is, to judge whether, among the coding units CU' temporally and/or spatially adjacent to the target coding unit CU, the number of coding units CU' whose depth is greater than the depth of the target coding unit CU reaches N_C. Specifically, the first judging module comprised in the first judging subunit judges whether C >= N_C holds, wherein C is the sum of any n_4 (1 <= n_4 <= 6) elements of the set [C_0, C_1, C_2, C_3, C_4, C_5] and N_C is the first preset parameter; generally N_C = 6 and n_4 = 6, and the value of N_C can also be increased or reduced appropriately according to the computational complexity requirement of the application, being increased correspondingly when more units contribute to C and reduced correspondingly otherwise. When C >= N_C is judged to hold, it is determined that there are N_C first coding units among the coding units CU', that is, it is determined that the calculation of the encoding cost needed for coding the coding unit CU_Depth at the current depth is skipped.
Second determining unit, when the first judging unit is judged to skip the second calculating, divides coding unit CU depthfor the coding unit CU of multiple degree of depth Depth+1 depth+1, and determine each coding unit CU depth+1be target code unit, or when judging not skip the second calculating, determine coding unit CU depthfor target code unit, wherein, second is calculated as coding unit CU depththe calculating of the encoding overhead needed for encoding.
Further, the inter mode decision device of Video coding also comprises comparing unit and the 3rd determining unit, is judging that skipping above-mentioned second calculates, by coding unit CU depthbe divided into the coding unit CU of multiple degree of depth Depth+1 depth+1, and determine each coding unit CU depth+1when being target code unit, comparing unit is for comparing the size of the first encoding overhead and the second encoding overhead, and wherein, the first encoding overhead is coding unit CU depthminimum code expense, the second encoding overhead is multiple coding unit CU depth+1minimum code expense sum, coding unit CU depthminimum code expense identical with the minimum code expense of the target code unit of degree of depth Depth-1; 3rd determining unit is less than the second encoding overhead for comparing the first encoding overhead at comparing unit, determine that the target code unit of pattern to current depth Depth that employing first encoding overhead is corresponding is encoded, or when comparing unit compare the first encoding overhead be greater than the second encoding overhead, determine that the target code unit of pattern to each degree of depth Depth+1 that employing the 3rd encoding overhead is corresponding is encoded, wherein, the 3rd encoding overhead is the minimum code expense of the target code unit of each degree of depth Depth+1.
The inter mode decision device of the Video coding that the embodiment of the present invention provides also comprises the 5th judging unit and the 6th judging unit, for in first pattern being fusion 2Nx2N pattern, present mode is the situation of interframe 2Nx2N pattern, if judge target component do not meet pre-conditioned (namely, residual error coefficient and motion vector difference are not zero), the calculating of the encoding overhead needed for the 5th judging unit judges whether to stop to encode to the target code unit CU being in current depth, that supposes the encoding overhead needed for encoding to the target code unit of current depth Depth is calculated as the 3rd calculating, then when the 5th judging unit judges that termination the 3rd calculates, 6th judging unit judges whether to stop dividing coding unit CU depth, wherein, when judging not stop the 3rd and calculating, judge whether to skip the calculating of the encoding overhead needed for according to interframe non-2Nx2N pattern target code unit being encoded further.
Particularly, the 6th judging unit judges whether to stop dividing coding unit CU mainly through following structure four depth.
Structure four: the six judging unit mainly comprises the 3rd and obtains subelement, the 3rd judgment sub-unit and the 4th judgment sub-unit, wherein:
3rd obtain subelement for obtain target code unit the degree of depth and according to fusion 2Nx2N pattern to the encoding overhead of target code cell encoding, and acquisition was encoded before target code unit, and with target code unit time domain and/or the adjacent coding unit CU ' in spatial domain, the encoding overhead of coding unit CU ', the coding mode of coding unit CU ', and the degree of depth of coding unit CU ', particularly, the coding unit of encoding before target code unit CU generally comprises: two coding units adjacent with target code unit CU time domain, be designated as coding unit CU ' respectively 0with coding unit CU 1', adjacent with target code unit CU spatial domain and be positioned at four coding units at target code unit CU left position, top-left position, upper position and upper-right position place, be designated as coding unit CU ' respectively 2, coding unit CU ' 3, coding unit CU ' 4with coding unit CU ' 5,
The third judging subunit is configured to judge whether there are N_S fourth coding units among the coding units CU', wherein the depth of a fourth coding unit is less than or equal to the depth of the coding unit CU_Depth;
The fourth judging subunit is configured to judge whether the encoding cost of coding the coding unit CU_Depth in the merge 2Nx2N mode is less than the minimum encoding cost of coding the coding units CU' in the merge 2Nx2N mode.
Wherein, when it is judged that there are N_S fourth coding units among the coding units CU' and that the encoding cost of coding the coding unit CU_Depth in the merge 2Nx2N mode is less than the minimum encoding cost of coding the coding units CU' in the merge 2Nx2N mode, it is determined that the division of the coding unit CU_Depth is terminated.
Further, the third judging subunit and the fourth judging subunit mainly comprise a module for judging whether S >= N_S and Cost_{x,Merge} < min_j(Cost'_{j,Merge}) and min_j(Cost'_{j,Merge}) != MAX hold, wherein Cost_{x,Merge} is the encoding cost of coding the target coding unit in the merge 2Nx2N mode, MAX = 2^a - 1, and a is the number of bits of the value type of Cost_{x,Merge}; for example, if the value type of Cost_{x,Merge} is a 32-bit unsigned integer, then MAX = 2^32 - 1. S is the sum of any n_3 (1 <= n_3 <= 6) elements of the set [S_0, S_1, S_2, S_3, S_4, S_5], Depth is the depth of the target coding unit, Depth_i is the depth of CU'_i, Depth_j is the depth of CU'_j, Cost_{j,Merge} is the encoding cost of coding CU'_j in the merge 2Nx2N mode, j ∈ [0,1,2,3,4,5], and N_S is the fourth preset parameter. Usually N_S = 4, n_3 = 4 and j = 0,1,2,3,4,5; the value of N_S can also be increased or reduced appropriately according to the computational complexity requirement of the application, being increased correspondingly when more units contribute to S and reduced correspondingly otherwise. When S >= N_S and Cost_{x,Merge} < min_j(Cost'_{j,Merge}) and min_j(Cost'_{j,Merge}) != MAX do not all hold, the target coding unit is divided to obtain multiple target coding sub-units; otherwise, the division of the target coding unit CU is terminated.
5th judging unit, then mainly through following structure five, judges whether to skip the calculating of the target code unit CU pattern being in current depth.
Structure five: the five judging unit comprises the second acquisition subelement and the second judgment sub-unit, wherein:
Second obtains subelement encodes pattern (hereinafter referred to as target pattern) for obtaining adopted to target code unit CU, according to merging the residual error coefficient and motion vector difference that 2Nx2N pattern encodes to target code unit CU.
Second judgment sub-unit then judges whether termination the 3rd calculating according to target pattern, residual error coefficient and motion vector difference, wherein, second judgment sub-unit mainly comprises the module of the size for calculating the first variable according to residual error coefficient and motion vector difference, calculate the module of bivariate size according to target pattern, judge whether according to the first variable and the second variable the module that termination the 3rd calculates.
Particularly, the size calculating the first variable according to residual error coefficient and motion vector difference is specially: judge whether residual error coefficient and motion vector difference are zero, when judging residual error coefficient and motion vector difference is zero, determine bMergeDetectSkip=1, or when judging that residual error coefficient and motion vector difference are not zero, determine bMergeDetectSkip=0, wherein, bMergeDetectSkip is the first variable.Target pattern calculates bivariate size and is specially: judge whether target pattern is judged in advance and merge 2Nx2N pattern, when judging that target pattern is judged to fusion 2Nx2N pattern in advance, determine bFastDeciMerge=1, otherwise, determine bFastDeciMerge=0, wherein, bFastDeciMerge is the second variable.Judge whether that termination the 3rd calculating is specially according to the first variable and the second variable: judge whether bMergeDetectSkip=1 or bFastDeciMerge=1 sets up, wherein, when judging that bMergeDetectSkip=1 or bFastDeciMerge=1 sets up, determine termination the 3rd calculating.
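Structure five reduces to a two-flag test; a minimal sketch with hypothetical parameter names:

// The cost calculation for the CU at the current depth (the third calculation) is
// terminated once the merge 2Nx2N result already looks final.
bool terminateThirdCalculation(bool residualAndMvdAllZero,   // sets bMergeDetectSkip
                               bool preDecidedMerge2Nx2N)    // sets bFastDeciMerge
{
    const bool bMergeDetectSkip = residualAndMvdAllZero;
    const bool bFastDeciMerge   = preDecidedMerge2Nx2N;
    return bMergeDetectSkip || bFastDeciMerge;
}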
By unit, subelement etc. that the inter mode decision device of Video coding also comprises, achieve and utilize coding mode and the correlation of encoding overhead in time domain and spatial domain, on the basis that the coding mode coding that realization is skipped or premature termination possibility is less and cost calculate, also achieve and to skip or the coding of premature termination coding unit dividing mode and cost calculate, reach the computation complexity reducing mode selection processes further, improve coding rate.Experiment in HEVC reference software shows, to the inter-frame mode selecting method that HEVC standard cycle tests adopts the embodiment of the present invention to provide, coding rate can be made on average to improve about 50%, and the code efficiency damage control is within 1%.
Embodiment 4
The embodiment of the present invention additionally provides a kind of video coding apparatus, and this video coding apparatus is mainly used in the method for video coding that execution embodiment of the present invention foregoing provides, and the video coding apparatus provided the embodiment of the present invention below does concrete introduction:
Figure 12 is a schematic diagram of the video coding apparatus according to the embodiment of the present invention. As shown in Figure 12, this video coding apparatus mainly comprises a receiving unit 100, a frame type selection unit 200, a mode selection unit 300 and a coding unit 400, wherein:
Receiving element 100 is for receiving video source data to be encoded.
The frame type selection unit 200 is configured to determine the encoding frame type of each frame in the video source data, i.e. to distinguish inter-prediction frames from intra-prediction frames.
The mode selection unit 300 is configured to determine the coding mode of the intra-prediction frames and to determine the coding mode of the inter-prediction frames with a preset method, wherein the preset method is any one of the inter-frame mode selection methods provided in the foregoing of the embodiment of the present invention.
The coding unit 400 is configured to encode the intra-prediction frames in a first mode and to encode the inter-prediction frames in a second mode, wherein the first mode is the determined coding mode of the intra-prediction frames and the second mode is the coding mode of the inter-prediction frames determined with the preset method.
The video coding apparatus that the embodiment of the present invention provides, by the inter-frame mode selecting method adopting embodiment of the present invention foregoing to provide, achieve the computation complexity effectively reducing and in video coding process, model selection is calculated, and then reach the effect improving coding rate.
Embodiment 5
The embodiment of the present invention additionally provides a kind of video encoder, and the method for video coding provided the embodiment of the present invention below does concrete introduction:
Figure 13 is the schematic diagram of the video encoder according to the embodiment of the present invention, as shown in figure 13, this video encoder mainly comprises frame type selection portion, inter mode decision calculating part, intra mode decision calculating part, code stream organization portion and inter mode decision device, wherein:
The video source data to be encoded of input is by frame type selection portion, if be confirmed as I frame, then video encoder carries out intra mode decision calculating by intra mode decision calculating part to I frame, and record optimum code pattern and corresponding coded data, then write bit stream data through code stream organization portion and export; If be confirmed as GPB frame or B frame, then inter mode decision device and the acting in conjunction of inter mode decision calculating part, inter mode decision calculating is carried out to GPB frame or B frame, and record optimum code pattern and corresponding coded data, write bit stream data through code stream organization portion again and export, wherein, carrying out the concrete grammar of inter mode decision calculating, introduce in embodiment of the present invention foregoing, repeat no more herein.
The video encoder provided by the embodiment of the present invention, by adopting the inter mode decision device provided in the foregoing of the embodiment of the present invention, effectively reduces the computational complexity of mode selection calculation in the video coding process and thereby improves the coding speed.
The invention described above embodiment sequence number, just to describing, does not represent the quality of embodiment.
In the above embodiment of the present invention, the description of each embodiment is all emphasized particularly on different fields, in certain embodiment, there is no the part described in detail, can see the associated description of other embodiments.
In several embodiments that the application provides, should be understood that, disclosed device, the mode by other realizes.Wherein, device embodiment described above is only schematic, the such as division of described unit, be only a kind of logic function to divide, actual can have other dividing mode when realizing, such as multiple unit or assembly can in conjunction with or another system can be integrated into, or some features can be ignored, or do not perform.Another point, shown or discussed coupling each other or direct-coupling or communication connection can be by some interfaces, and the indirect coupling of unit or module or communication connection can be electrical or other form.
The described unit illustrated as separating component or can may not be and physically separates, and the parts as unit display can be or may not be physical location, namely can be positioned at a place, or also can be distributed in multiple network element.Some or all of unit wherein can be selected according to the actual needs to realize the object of the present embodiment scheme.
In addition, each functional unit in each embodiment of the present invention can be integrated in a processing unit, also can be that the independent physics of unit exists, also can two or more unit in a unit integrated.Above-mentioned integrated unit both can adopt the form of hardware to realize, and the form of SFU software functional unit also can be adopted to realize.
If described integrated unit using the form of SFU software functional unit realize and as independently production marketing or use time, can be stored in a computer read/write memory medium.Based on such understanding, the part that technical scheme of the present invention contributes to prior art in essence in other words or all or part of of this technical scheme can embody with the form of software product, this computer software product is stored in a storage medium, comprises all or part of step of some instructions in order to make a computer equipment (can be personal computer, server or the network equipment etc.) perform method described in each embodiment of the present invention.And aforesaid storage medium comprises: USB flash disk, read-only memory (ROM, Read-Only Memory), random access memory (RAM, Random Access Memory), portable hard drive, magnetic disc or CD etc. various can be program code stored medium.
The above is only the preferred embodiment of the present invention; it should be pointed out that for those skilled in the art, under the premise without departing from the principles of the invention; can also make some improvements and modifications, these improvements and modifications also should be considered as protection scope of the present invention.

Claims (29)

1. an inter-frame mode selecting method for Video coding, is characterized in that, comprising:
When the calculating of skipping the encoding overhead needed for encoding to target code unit according to present mode, skip the first calculating, and minimum code expense is selected from the first encoding overhead, wherein, described first calculating being calculated as the encoding overhead needed for according to described present mode described target code unit being encoded, described first encoding overhead for calculate according to the encoding overhead needed for encoding to described target code unit in first pattern, described is pattern before described present mode in first pattern; And
When adopting pattern corresponding to described minimum code expense to meet pre-conditioned to the target component that described target code unit is encoded, determine that pattern corresponding to described minimum code expense is adopted pattern of encoding to described target code unit.
2. inter-frame mode selecting method according to claim 1, it is characterized in that, described skip the first calculating before, described inter-frame mode selecting method also comprises and repeats following steps, until the degree of depth of described target code unit reaches preset maximum depth, wherein, the initial value of current depth Depth is 1:
Judge whether to skip the coding unit CU to current depth Depth depththe calculating of the encoding overhead needed for encoding; And
When judging to skip the second calculating, divide described coding unit CU depthfor the coding unit CU of multiple degree of depth Depth+1 depth+1, and determine each described coding unit CU depth+1be described target code unit, or when judging not skip described second calculating, determine described coding unit CU depthfor described target code unit, wherein, described second is calculated as described coding unit CU depththe calculating of the encoding overhead needed for encoding.
3. inter-frame mode selecting method according to claim 2, is characterized in that, is judging that skipping described second calculates, and divides described coding unit CU depthfor the coding unit CU of multiple degree of depth Depth+1 depth+1, and determine each described coding unit CU depth+1when being described target code unit, described inter-frame mode selecting method also comprises:
The relatively size of the first encoding overhead and the second encoding overhead, wherein, described first encoding overhead is described coding unit CU depthminimum code expense, described second encoding overhead is multiple described coding unit CU depth+1minimum code expense sum, described coding unit CU depthminimum code expense identical with the minimum code expense of the described target code unit of degree of depth Depth-1; And
When comparing described first encoding overhead and being less than described second encoding overhead, determine to adopt the described target code unit of pattern to current depth Depth corresponding to described first encoding overhead to encode, or when comparing described first encoding overhead and being greater than described second encoding overhead, determine that the described target code unit of pattern to each degree of depth Depth+1 that employing the 3rd encoding overhead is corresponding is encoded, wherein, described 3rd encoding overhead is the minimum code expense of the described target code unit of each degree of depth Depth+1.
4. inter-frame mode selecting method according to claim 2, is characterized in that, judges whether to skip the coding unit CU to current depth Depth depththe calculating of the encoding overhead needed for encoding comprises:
Obtain the depths of the coding units CU' that were coded before the coding unit CU_Depth and are temporally and/or spatially adjacent to the coding unit CU_Depth, and obtain the coding units CU'; and
Judge whether there are N_C first coding units among the coding units CU', wherein N_C is the first preset parameter and the depth of a first coding unit is greater than the current depth Depth,
Wherein, when it is judged that there are N_C first coding units among the coding units CU', it is determined that the calculation of the encoding cost needed for coding the coding unit CU_Depth is skipped.
5. The inter-frame mode selection method according to any one of claims 2 to 4, characterized in that the prior mode is the merge 2Nx2N mode, the current mode is the inter 2Nx2N mode, and the inter-frame mode selection method further comprises judging, in the following manner, whether to skip the calculation of the encoding overhead required for encoding the target coding unit according to the current mode:
obtaining a target mode, the residual coefficients and the motion vector difference obtained by encoding the coding unit CU_Depth according to the merge 2Nx2N mode, and the encoding overhead of encoding coding units CU', wherein the coding units CU' are coding units that are encoded before the coding unit CU_Depth and are temporally and/or spatially adjacent to the coding unit CU_Depth, and the target mode is the mode adopted for encoding the target coding unit; and
judging, according to the target mode, the residual coefficients, the motion vector difference and the encoding overhead of encoding the coding units CU', whether to skip the calculation of the encoding overhead required for encoding the target coding unit according to the current mode.
6. The inter-frame mode selection method according to claim 5, characterized in that judging, according to the target mode, the residual coefficients, the motion vector difference and the encoding overhead of encoding the coding units CU', whether to skip the calculation of the encoding overhead required for encoding the target coding unit according to the current mode comprises:
calculating the value of a first variable according to the residual coefficients and the motion vector difference;
calculating the values of a second variable and a third variable according to the target mode;
calculating the value of a fourth variable according to the encoding overhead of encoding the coding unit CU_Depth according to the merge 2Nx2N mode and the encoding overhead of encoding the coding units CU'; and
judging, according to the values of the first variable, the second variable, the third variable and the fourth variable, whether to skip the calculation of the encoding overhead required for encoding the target coding unit according to the current mode.
7. The inter-frame mode selection method according to claim 6, characterized in that:
calculating the value of the first variable according to the residual coefficients and the motion vector difference comprises: judging whether the residual coefficients and the motion vector difference are both zero; and, when it is judged that the residual coefficients and the motion vector difference are both zero, setting bMergeDetectSkip=1 and setting Cost_{x,Merge}=MAX, or, when it is judged that the residual coefficients and the motion vector difference are not both zero, setting bMergeDetectSkip=0 and setting Cost_{x,Merge} to the encoding overhead of encoding the coding unit CU_Depth according to the merge 2Nx2N mode, wherein MAX = 2^a - 1, a is the number of bits of the value type of Cost_{x,Merge}, and bMergeDetectSkip is the first variable,
calculating the values of the second variable and the third variable according to the target mode comprises: judging whether the target mode is pre-decided to be the merge 2Nx2N mode or the intra mode; and, when it is judged that the target mode is pre-decided to be the merge 2Nx2N mode, setting bFastDeciMerge=1, otherwise setting bFastDeciMerge=0, or, when it is judged that the target mode is pre-decided to be the intra mode, setting bFastDeciIntra=1, otherwise setting bFastDeciIntra=0, wherein bFastDeciMerge is the second variable and bFastDeciIntra is the third variable,
calculating the value of the fourth variable according to the encoding overhead of encoding the coding unit CU_Depth according to the merge 2Nx2N mode and the encoding overhead of encoding the coding units CU' comprises: judging whether, when min_i(Cost'_{i,Merge}) ≠ MAX and min_i(Cost'_{i,Merge}) ≠ 0, the inequality Cost_{x,Merge} > T1 × min_i(Cost'_{i,Merge}) holds, or judging whether, when min_i(Cost'_{i,Merge}) = MAX or min_i(Cost'_{i,Merge}) = 0, the inequality Cost_{x,Merge} > T2 × min_i(Cost'_i) holds, wherein Cost'_{i,Merge} is the encoding overhead of encoding coding unit CU'_i according to the merge 2Nx2N mode, Depth_i is the depth of coding unit CU'_i, Cost'_i is the encoding overhead of coding unit CU'_i, i ∈ {0, 1, 2, 3, 4, 5}, T1 and T2 are preset multiples, and T1 ≠ T2, and wherein, when the corresponding inequality is judged to hold, bCheckFurther=1 is determined, otherwise bCheckFurther=0 is determined, bCheckFurther being the fourth variable,
judging, according to the values of the first variable, the second variable, the third variable and the fourth variable, whether to skip the calculation of the encoding overhead required for encoding the target coding unit according to the second mode comprises: judging whether condition 1 or condition 2 holds, wherein condition 1 is bMergeDetectSkip=0 and bFastDeciMerge=0 and bFastDeciIntra=0, and condition 2 is bMergeDetectSkip=1 and bCheckFurther=1 and bFastDeciIntra=0; and, when it is judged that condition 1 or condition 2 holds, determining to skip the calculation of the encoding overhead required for encoding the target coding unit according to the current mode.
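The flag derivation of claims 6 and 7 can be collected into one helper. The sketch below is a hedged illustration: the struct and function names are invented, MAX is represented by the largest 64-bit value, the neighbor lists are assumed non-empty, and the returned boolean simply reports whether condition 1 or condition 2 of claim 7 holds, which is the branch the claim uses to control the inter 2Nx2N cost calculation:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Illustrative inputs for claims 6 and 7; none of these names come from a reference encoder.
struct Merge2Nx2NResult {
    bool residualAndMvdZero;   // residual coefficients and motion vector difference all zero
    uint64_t cost;             // Cost_{x,Merge}
};

constexpr uint64_t MAX_COST = ~0ull;   // stands in for MAX = 2^a - 1

bool inter2Nx2NCondition(const Merge2Nx2NResult& merge,
                         bool preDecidedMerge,   // source of bFastDeciMerge
                         bool preDecidedIntra,   // source of bFastDeciIntra
                         const std::vector<uint64_t>& neighborMergeCosts,  // Cost'_{i,Merge}
                         const std::vector<uint64_t>& neighborCosts,       // Cost'_i
                         double T1, double T2) {
    if (neighborMergeCosts.empty() || neighborCosts.empty())
        return false;   // no neighbor information: neither condition can be evaluated here

    // First variable: bMergeDetectSkip, with Cost_{x,Merge} forced to MAX when it is set.
    bool bMergeDetectSkip = merge.residualAndMvdZero;
    double costXMerge = static_cast<double>(bMergeDetectSkip ? MAX_COST : merge.cost);

    // Second and third variables from the pre-decided target mode.
    bool bFastDeciMerge = preDecidedMerge;
    bool bFastDeciIntra = preDecidedIntra;

    // Fourth variable: compare against the best neighbor cost using multiple T1 or T2.
    uint64_t minNbMerge = *std::min_element(neighborMergeCosts.begin(), neighborMergeCosts.end());
    uint64_t minNb      = *std::min_element(neighborCosts.begin(), neighborCosts.end());
    bool bCheckFurther = (minNbMerge != MAX_COST && minNbMerge != 0)
        ? costXMerge > T1 * static_cast<double>(minNbMerge)
        : costXMerge > T2 * static_cast<double>(minNb);

    // Condition 1 or condition 2 of claim 7.
    bool cond1 = !bMergeDetectSkip && !bFastDeciMerge && !bFastDeciIntra;
    bool cond2 =  bMergeDetectSkip &&  bCheckFurther  && !bFastDeciIntra;
    return cond1 || cond2;
}
```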
8. The inter-frame mode selection method according to any one of claims 2 to 4, characterized in that the current mode is an inter non-2Nx2N mode, and the inter-frame mode selection method further comprises judging, in the following manner, whether to skip the calculation of the encoding overhead required for encoding the target coding unit according to the current mode:
obtaining a target mode, wherein the target mode is the mode adopted for encoding the target coding unit;
calculating the value of a third variable according to the target mode; and
judging, according to the value of the third variable, whether to skip the calculation of the encoding overhead required for encoding the target coding unit according to the current mode.
9. The inter-frame mode selection method according to claim 8, characterized in that,
calculating the value of the third variable according to the target mode comprises: judging whether the target mode is pre-decided to be the intra mode; and, when it is judged that the target mode is pre-decided to be the intra mode, setting bFastDeciIntra=1, otherwise setting bFastDeciIntra=0, wherein bFastDeciIntra is the third variable,
judging, according to the value of the third variable, whether to skip the calculation of the encoding overhead required for encoding the target coding unit according to the current mode comprises: judging whether bFastDeciIntra=1 holds, wherein, when it is judged that bFastDeciIntra=1 holds, it is determined to skip the calculation of the encoding overhead required for encoding the target coding unit according to the current mode.
10. The inter-frame mode selection method according to any one of claims 2 to 4, characterized in that the current mode is the intra mode, and the inter-frame mode selection method further comprises judging, in the following manner, whether to skip the calculation of the encoding overhead required for encoding the target coding unit according to the current mode:
obtaining, before the coding unit CU_Depth is encoded, coding units CU' that are temporally and/or spatially adjacent to the coding unit CU_Depth, and the coding modes of the coding units CU'; and
judging whether the intra mode exists among the coding modes of the coding units CU',
wherein, when it is judged that the intra mode does not exist among the coding modes of the coding units CU', it is determined to skip the calculation of the encoding overhead required for encoding the target coding unit according to the current mode.
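A short sketch of claim 10's test, assuming a hypothetical mode tag for the already-encoded neighboring CUs:

```cpp
#include <vector>

// Illustrative mode tag for already-encoded neighboring CUs.
enum class CuMode { Inter, Merge, Intra };

// Claim 10: skip the intra-mode cost calculation for the target CU when none of the
// temporally/spatially adjacent, already-encoded CUs was coded in intra mode.
bool skipIntraCheck(const std::vector<CuMode>& neighborModes) {
    for (CuMode m : neighborModes)
        if (m == CuMode::Intra) return false;   // an intra neighbor exists: keep the check
    return true;                                // no intra neighbor: skip the intra check
}
```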
11. The inter-frame mode selection method according to claim 5, characterized in that, when it is judged that the target parameter does not meet the preset condition, the inter-frame mode selection method further comprises:
judging whether to terminate a third calculation, wherein the third calculation is the calculation of the encoding overhead required for encoding the target coding unit of the current depth Depth; and
when it is judged to terminate the third calculation, judging whether to terminate the division of the coding unit CU_Depth,
wherein, when it is judged not to terminate the third calculation, judging whether to skip the calculation of the encoding overhead required for encoding the target coding unit according to the inter 2Nx2N mode.
12. The inter-frame mode selection method according to claim 11, characterized in that judging whether to terminate the third calculation comprises:
obtaining the target mode, and the residual coefficients and the motion vector difference obtained by encoding the target coding unit of the current depth Depth according to the merge 2Nx2N mode; and
judging whether to terminate the third calculation according to the target mode, the residual coefficients and the motion vector difference.
13. The inter-frame mode selection method according to claim 11 or 12, characterized in that judging whether to terminate the division of the coding unit CU_Depth comprises:
obtaining the depth of the coding unit CU_Depth, the encoding overhead of encoding the coding unit CU_Depth according to the merge 2Nx2N mode, and, for coding units CU' that are encoded before the coding unit CU_Depth and are temporally and/or spatially adjacent to the coding unit CU_Depth, the coding modes and the depths of the coding units CU';
judging whether there are N_s fourth coding units among the coding units CU', wherein the depth of a fourth coding unit is less than or equal to the depth of the coding unit CU_Depth and N_s is a preset fourth parameter; and
judging whether the encoding overhead of encoding the coding unit CU_Depth according to the merge 2Nx2N mode is less than the minimum encoding overhead of encoding the coding units CU' according to the merge 2Nx2N mode,
wherein, when it is judged that there are N_s fourth coding units among the coding units CU' and that the encoding overhead of encoding the coding unit CU_Depth according to the merge 2Nx2N mode is less than the minimum encoding overhead of encoding the coding units CU' according to the merge 2Nx2N mode, it is determined to terminate the division of the coding unit CU_Depth.
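Claim 13's split-termination test, sketched with illustrative inputs (neighbor depths, neighbor merge 2Nx2N costs and the preset parameter N_s); "merge cost" stands for the overhead of encoding a CU with the merge 2Nx2N mode:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Returns true when the division of the current CU should be terminated
// according to the two conditions of claim 13.
bool terminateSplit(int cuDepth, uint64_t cuMergeCost,
                    const std::vector<int>& neighborDepths,
                    const std::vector<uint64_t>& neighborMergeCosts,
                    int Ns) {
    // Count "fourth coding units": neighbors whose depth is <= the current CU's depth.
    int shallowNeighbors = 0;
    for (int d : neighborDepths)
        if (d <= cuDepth) ++shallowNeighbors;
    if (shallowNeighbors < Ns || neighborMergeCosts.empty()) return false;

    // Terminate only if the CU's merge 2Nx2N cost also beats the best neighbor merge cost.
    uint64_t minNeighborMergeCost =
        *std::min_element(neighborMergeCosts.begin(), neighborMergeCosts.end());
    return cuMergeCost < minNeighborMergeCost;
}
```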
14. A video coding method, characterized by comprising:
receiving video source data to be encoded;
determining the coding frame type of each frame in the video source data to obtain inter-prediction frames and intra-prediction frames;
determining the coding mode of the intra-prediction frames, and determining the coding mode of the inter-prediction frames with a preset method, wherein the preset method is the inter-frame mode selection method according to any one of claims 1 to 13; and
encoding the intra-prediction frames with a first mode and encoding the inter-prediction frames with a second mode, wherein the first mode is the determined coding mode of the intra-prediction frames and the second mode is the coding mode of the inter-prediction frames determined with the preset method.
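The frame-level flow of claim 14, shown as a skeleton in which chooseFrameType, decideIntraMode, selectInterModeFast and encodeFrame are placeholder stand-ins for the frame-type decision, the intra decision, the claimed inter-frame mode selection and the actual coding; none of these names come from the patent or any reference encoder:

```cpp
#include <vector>

enum class FrameType { Intra, Inter };
struct Frame { /* pixel data omitted */ };

FrameType chooseFrameType(const Frame&) { return FrameType::Inter; }  // placeholder policy
int decideIntraMode(const Frame&)       { return 0; }                 // placeholder intra decision
int selectInterModeFast(const Frame&)   { return 0; }                 // placeholder fast inter selection
void encodeFrame(const Frame&, int)     { /* transform, entropy coding, etc. omitted */ }

void encodeSequence(const std::vector<Frame>& source) {
    for (const Frame& f : source) {
        if (chooseFrameType(f) == FrameType::Intra)
            encodeFrame(f, decideIntraMode(f));      // "first mode"
        else
            encodeFrame(f, selectInterModeFast(f));  // "second mode"
    }
}
```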
15. An inter-frame mode selection device for video coding, characterized by comprising:
a selection unit, configured to, when the calculation of the encoding overhead required for encoding a target coding unit according to a current mode is skipped, skip a first calculation and select a minimum encoding overhead from first encoding overheads, wherein the first calculation is the calculation of the encoding overhead required for encoding the target coding unit according to the current mode, the first encoding overheads are the calculated encoding overheads required for encoding the target coding unit according to prior modes, and a prior mode is a mode preceding the current mode; and
a first determination unit, configured to, when a target parameter for encoding the target coding unit with the mode corresponding to the minimum encoding overhead meets a preset condition, determine that the mode corresponding to the minimum encoding overhead is the mode adopted for encoding the target coding unit.
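A sketch of the selection unit and the first determination unit of claim 15: pick the prior mode with the minimum computed overhead and accept it only if a placeholder preset condition on the target parameter is met. Struct and function names are assumptions made for illustration:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// A "prior mode" result carries the overhead computed for a mode tried before the
// current one (the "first encoding overheads" of claim 15).
struct ModeResult {
    int mode;            // identifier of the prior mode
    uint64_t overhead;   // computed encoding overhead for that mode
};

bool checkTargetParameter(int /*mode*/) { return true; }   // placeholder preset condition

// Returns the mode corresponding to the minimum overhead when the condition holds,
// or -1 to indicate that the fast decision was not taken.
int selectMode(const std::vector<ModeResult>& priorModeResults) {
    if (priorModeResults.empty()) return -1;
    const ModeResult& best = *std::min_element(
        priorModeResults.begin(), priorModeResults.end(),
        [](const ModeResult& a, const ModeResult& b) { return a.overhead < b.overhead; });
    return checkTargetParameter(best.mode) ? best.mode : -1;
}
```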
16. The inter-frame mode selection device according to claim 15, characterized in that the inter-frame mode selection device further comprises the following units, which are called repeatedly until the depth of the target coding unit reaches a preset maximum depth, wherein the initial value of the current depth Depth is 1:
a first judgment unit, configured to judge whether to skip the calculation of the encoding overhead required for encoding the coding unit CU_Depth of the current depth Depth; and
a second determination unit, configured to, when the first judgment unit judges to skip a second calculation, divide the coding unit CU_Depth into a plurality of coding units CU_Depth+1 of depth Depth+1 and determine each coding unit CU_Depth+1 to be the target coding unit, or, when it is judged not to skip the second calculation, determine the coding unit CU_Depth to be the target coding unit, wherein the second calculation is the calculation of the encoding overhead required for encoding the coding unit CU_Depth.
17. The inter-frame mode selection device according to claim 16, characterized in that the inter-frame mode selection device further comprises:
a comparison unit, configured to compare the magnitude of a first encoding overhead and a second encoding overhead, wherein the first encoding overhead is the minimum encoding overhead of the coding unit CU_Depth, the second encoding overhead is the sum of the minimum encoding overheads of the plurality of coding units CU_Depth+1, and the minimum encoding overhead of the coding unit CU_Depth is identical to the minimum encoding overhead of the target coding unit of depth Depth-1; and
a third determination unit, configured to, when the comparison unit finds that the first encoding overhead is less than the second encoding overhead, determine to encode the target coding unit of the current depth Depth with the mode corresponding to the first encoding overhead, or, when it finds that the first encoding overhead is greater than the second encoding overhead, determine to encode the target coding unit of each depth Depth+1 with the mode corresponding to a third encoding overhead, wherein the third encoding overhead is the minimum encoding overhead of the target coding unit of each depth Depth+1.
18. The inter-frame mode selection device according to claim 16, characterized in that the first judgment unit comprises:
a first obtaining subunit, configured to obtain, before the coding unit CU_Depth is encoded, coding units CU' that are temporally and/or spatially adjacent to the coding unit CU_Depth, and the depths of the coding units CU'; and
a first judgment subunit, configured to judge whether there are N_c first coding units among the coding units CU', wherein N_c is a preset first parameter and the depth of a first coding unit is greater than the current depth Depth,
wherein, when it is judged that there are N_c first coding units among the coding units CU', it is determined to skip the calculation of the encoding overhead required for encoding the coding unit CU_Depth.
19. The inter-frame mode selection device according to any one of claims 16 to 18, characterized in that the prior mode is the merge 2Nx2N mode, the current mode is the inter 2Nx2N mode, and the inter-frame mode selection device further comprises:
a first acquisition unit, configured to obtain a target mode, the residual coefficients and the motion vector difference obtained by encoding the coding unit CU_Depth according to the merge 2Nx2N mode, and the encoding overhead of encoding coding units CU', wherein the coding units CU' are coding units that are encoded before the coding unit CU_Depth and are temporally and/or spatially adjacent to the coding unit CU_Depth, and the target mode is the mode adopted for encoding the target coding unit; and
a second judgment unit, configured to judge, according to the target mode, the residual coefficients, the motion vector difference and the encoding overhead of encoding the coding units CU', whether to skip the calculation of the encoding overhead required for encoding the target coding unit according to the second mode.
20. The inter-frame mode selection device according to claim 19, characterized in that the second judgment unit comprises:
a first calculation module, configured to calculate the value of a first variable according to the residual coefficients and the motion vector difference;
a second calculation module, configured to calculate the values of a second variable and a third variable according to the target mode;
a third calculation module, configured to calculate the value of a fourth variable according to the encoding overhead of encoding the coding unit CU_Depth according to the merge 2Nx2N mode and the encoding overhead of encoding the coding units CU'; and
a first judgment module, configured to judge, according to the values of the first variable, the second variable, the third variable and the fourth variable, whether to skip the calculation of the encoding overhead required for encoding the target coding unit according to the current mode.
21. The inter-frame mode selection device according to claim 20, characterized in that:
the first calculation module comprises: a first judgment submodule, configured to judge whether the residual coefficients and the motion vector difference are both zero; and a first determination submodule, configured to, when it is judged that the residual coefficients and the motion vector difference are both zero, set bMergeDetectSkip=1 and set Cost_{x,Merge}=MAX, or, when it is judged that the residual coefficients and the motion vector difference are not both zero, set bMergeDetectSkip=0 and set Cost_{x,Merge} to the encoding overhead of encoding the coding unit CU_Depth according to the merge 2Nx2N mode, wherein MAX = 2^a - 1, a is the number of bits of the value type of Cost_{x,Merge}, and bMergeDetectSkip is the first variable,
the second calculation module comprises: a second judgment submodule, configured to judge whether the target mode is pre-decided to be the merge 2Nx2N mode or the intra mode; and a second determination submodule, configured to, when it is judged that the target mode is pre-decided to be the merge 2Nx2N mode, set bFastDeciMerge=1, otherwise set bFastDeciMerge=0, or, when it is judged that the target mode is pre-decided to be the intra mode, set bFastDeciIntra=1, otherwise set bFastDeciIntra=0, wherein bFastDeciMerge is the second variable and bFastDeciIntra is the third variable,
the third calculation module comprises: a third judgment submodule, configured to judge whether, when min_i(Cost'_{i,Merge}) ≠ MAX and min_i(Cost'_{i,Merge}) ≠ 0, the inequality Cost_{x,Merge} > T1 × min_i(Cost'_{i,Merge}) holds, or to judge whether, when min_i(Cost'_{i,Merge}) = MAX or min_i(Cost'_{i,Merge}) = 0, the inequality Cost_{x,Merge} > T2 × min_i(Cost'_i) holds, wherein Cost'_{i,Merge} is the encoding overhead of encoding coding unit CU'_i according to the merge 2Nx2N mode, Depth_i is the depth of coding unit CU'_i, Cost'_i is the encoding overhead of coding unit CU'_i, i ∈ {0, 1, 2, 3, 4, 5}, T1 and T2 are preset multiples, and T1 ≠ T2, and wherein, when the corresponding inequality is judged to hold, bCheckFurther=1 is determined, otherwise bCheckFurther=0 is determined, bCheckFurther being the fourth variable,
the first judgment module comprises: a fourth judgment submodule, configured to judge whether condition 1 or condition 2 holds, wherein condition 1 is bMergeDetectSkip=0 and bFastDeciMerge=0 and bFastDeciIntra=0, and condition 2 is bMergeDetectSkip=1 and bCheckFurther=1 and bFastDeciIntra=0; and a fourth determination submodule, configured to, when the fourth judgment submodule judges that condition 1 or condition 2 holds, determine to skip the calculation of the encoding overhead required for encoding the target coding unit according to the current mode.
22. The inter-frame mode selection device according to any one of claims 16 to 18, characterized in that the current mode is an inter non-2Nx2N mode, and the inter-frame mode selection device further comprises:
a second acquisition unit, configured to obtain a target mode, wherein the target mode is the mode adopted for encoding the target coding unit;
a calculation unit, configured to calculate the value of a third variable according to the target mode; and
a third judgment unit, configured to judge, according to the value of the third variable, whether to skip the calculation of the encoding overhead required for encoding the target coding unit according to the current mode.
23. The inter-frame mode selection device according to claim 22, characterized in that,
the calculation unit comprises: a second judgment module, configured to judge whether the target mode is pre-decided to be the intra mode; and a determination module, configured to, when the second judgment module judges that the target mode is pre-decided to be the intra mode, set bFastDeciIntra=1, otherwise set bFastDeciIntra=0, wherein bFastDeciIntra is the third variable,
the third judgment unit comprises: a third judgment module, configured to judge whether bFastDeciIntra=1 holds, wherein, when the third judgment module judges that bFastDeciIntra=1 holds, it is determined to skip the calculation of the encoding overhead required for encoding the target coding unit according to the current mode.
24. The inter-frame mode selection device according to any one of claims 16 to 18, characterized in that the current mode is the intra mode, and the inter-frame mode selection device further comprises:
a third acquisition unit, configured to obtain, before the coding unit CU_Depth is encoded, coding units CU' that are temporally and/or spatially adjacent to the coding unit CU_Depth, and the coding modes of the coding units CU'; and
a fourth judgment unit, configured to judge whether the intra mode exists among the coding modes of the coding units CU',
wherein, when it is judged that the intra mode does not exist among the coding modes of the coding units CU', it is determined to skip the calculation of the encoding overhead required for encoding the target coding unit according to the current mode.
25. The inter-frame mode selection device according to claim 19, characterized in that the inter-frame mode selection device further comprises:
a fifth judgment unit, configured to judge whether to terminate a third calculation, wherein the third calculation is the calculation of the encoding overhead required for encoding the target coding unit of the current depth Depth; and
a sixth judgment unit, configured to, when it is judged to terminate the third calculation, judge whether to terminate the division of the coding unit CU_Depth,
wherein, when it is judged not to terminate the third calculation, it is judged whether to skip the calculation of the encoding overhead required for encoding the target coding unit according to the inter 2Nx2N mode.
26. The inter-frame mode selection device according to claim 25, characterized in that the fifth judgment unit comprises:
a second obtaining subunit, configured to obtain the target mode, and the residual coefficients and the motion vector difference obtained by encoding the target coding unit of the current depth Depth according to the merge 2Nx2N mode; and
a second judgment subunit, configured to judge whether to terminate the third calculation according to the target mode, the residual coefficients and the motion vector difference.
27. The inter-frame mode selection device according to claim 25 or 26, characterized in that the sixth judgment unit comprises:
a third obtaining subunit, configured to obtain the depth of the coding unit CU_Depth, the encoding overhead of encoding the coding unit CU_Depth according to the merge 2Nx2N mode, and, for coding units CU' that are encoded before the coding unit CU_Depth and are temporally and/or spatially adjacent to the coding unit CU_Depth, the coding modes and the depths of the coding units CU';
a third judgment subunit, configured to judge whether there are N_s fourth coding units among the coding units CU', wherein the depth of a fourth coding unit is less than or equal to the depth of the coding unit CU_Depth and N_s is a preset fourth parameter; and
a fourth judgment subunit, configured to judge whether the encoding overhead of encoding the coding unit CU_Depth according to the merge 2Nx2N mode is less than the minimum encoding overhead of encoding the coding units CU' according to the merge 2Nx2N mode,
wherein, when it is judged that there are N_s fourth coding units among the coding units CU' and that the encoding overhead of encoding the coding unit CU_Depth according to the merge 2Nx2N mode is less than the minimum encoding overhead of encoding the coding units CU' according to the merge 2Nx2N mode, it is determined to terminate the division of the coding unit CU_Depth.
28. A video coding apparatus, characterized by comprising:
a receiving unit, configured to receive video source data to be encoded;
a frame type selection unit, configured to determine the coding frame type of each frame in the video source data to obtain inter-prediction frames and intra-prediction frames;
a mode selection unit, configured to determine the coding mode of the intra-prediction frames and to determine the coding mode of the inter-prediction frames with a preset method, wherein the preset method is the inter-frame mode selection method according to any one of claims 1 to 13; and
a coding unit, configured to encode the intra-prediction frames with a first mode and to encode the inter-prediction frames with a second mode, wherein the first mode is the determined coding mode of the intra-prediction frames and the second mode is the coding mode of the inter-prediction frames determined with the preset method.
29. A video encoder, characterized by comprising the inter-frame mode selection device for video coding according to any one of claims 15 to 27.
CN201410256678.6A 2014-06-10 2014-06-10 Video encoder, method and apparatus and its inter-frame mode selecting method and device Active CN104601988B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410256678.6A CN104601988B (en) 2014-06-10 2014-06-10 Video encoder, method and apparatus and its inter-frame mode selecting method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410256678.6A CN104601988B (en) 2014-06-10 2014-06-10 Video encoder, method and apparatus and its inter-frame mode selecting method and device

Publications (2)

Publication Number Publication Date
CN104601988A true CN104601988A (en) 2015-05-06
CN104601988B CN104601988B (en) 2018-02-02

Family

ID=53127444

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410256678.6A Active CN104601988B (en) 2014-06-10 2014-06-10 Video encoder, method and apparatus and its inter-frame mode selecting method and device

Country Status (1)

Country Link
CN (1) CN104601988B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107222742A (en) * 2017-07-05 2017-09-29 中南大学 Video coding Merge mode quick selecting methods and device based on time-space domain correlation
CN107396102A (en) * 2017-08-30 2017-11-24 中南大学 A kind of inter-frame mode fast selecting method and device based on Merge technological movement vectors
CN108206954A (en) * 2016-12-16 2018-06-26 北京金山云网络技术有限公司 A kind of method for video coding and device
WO2018192574A1 (en) * 2017-04-21 2018-10-25 Mediatek Inc. Sub-prediction unit temporal motion vector prediction (sub-pu tmvp) for video coding
CN110446052A (en) * 2019-09-03 2019-11-12 南华大学 The quick CU depth selection method of depth map in a kind of 3D-HEVC frame
WO2020007304A1 (en) * 2018-07-02 2020-01-09 华为技术有限公司 Motion vector prediction method and device, and codec
CN113286140A (en) * 2021-05-11 2021-08-20 北京飞讯数码科技有限公司 Video coding and decoding test method, device and storage medium
CN113596442A (en) * 2021-07-07 2021-11-02 北京百度网讯科技有限公司 Video processing method and device, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1589022A (en) * 2004-08-26 2005-03-02 中芯联合(北京)微电子有限公司 Macroblock split mode selecting method in multiple mode movement estimation decided by oriented tree
US20060013299A1 (en) * 2004-07-07 2006-01-19 Sony Corporation Coding apparatus, coding method, coding method program, and recording medium recording the coding method program
CN101179728A (en) * 2007-12-13 2008-05-14 北京中星微电子有限公司 Method and apparatus for determining interframe encoding mode
US20090016439A1 (en) * 2006-02-08 2009-01-15 Thomas Licensing Derivation of Frame/Field Encoding Mode for a Pair of Video Macroblocks
US20090110077A1 (en) * 2006-05-24 2009-04-30 Hiroshi Amano Image coding device, image coding method, and image coding integrated circuit
CN101668207A (en) * 2009-09-25 2010-03-10 天津大学 Video coding switching system from MPEG to AVS

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060013299A1 (en) * 2004-07-07 2006-01-19 Sony Corporation Coding apparatus, coding method, coding method program, and recording medium recording the coding method program
CN1589022A (en) * 2004-08-26 2005-03-02 中芯联合(北京)微电子有限公司 Macroblock split mode selecting method in multiple mode movement estimation decided by oriented tree
US20090016439A1 (en) * 2006-02-08 2009-01-15 Thomas Licensing Derivation of Frame/Field Encoding Mode for a Pair of Video Macroblocks
CN101379830A (en) * 2006-02-08 2009-03-04 汤姆森许可贸易公司 Derivation of frame/field encoding mode for a pair of video macroblocks
US20090110077A1 (en) * 2006-05-24 2009-04-30 Hiroshi Amano Image coding device, image coding method, and image coding integrated circuit
CN101179728A (en) * 2007-12-13 2008-05-14 北京中星微电子有限公司 Method and apparatus for determining interframe encoding mode
CN101668207A (en) * 2009-09-25 2010-03-10 天津大学 Video coding switching system from MPEG to AVS

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108206954A (en) * 2016-12-16 2018-06-26 北京金山云网络技术有限公司 A kind of method for video coding and device
WO2018192574A1 (en) * 2017-04-21 2018-10-25 Mediatek Inc. Sub-prediction unit temporal motion vector prediction (sub-pu tmvp) for video coding
CN107222742B (en) * 2017-07-05 2019-07-26 中南大学 Video coding Merge mode quick selecting method and device based on time-space domain correlation
CN107222742A (en) * 2017-07-05 2017-09-29 中南大学 Video coding Merge mode quick selecting methods and device based on time-space domain correlation
CN107396102A (en) * 2017-08-30 2017-11-24 中南大学 A kind of inter-frame mode fast selecting method and device based on Merge technological movement vectors
CN107396102B (en) * 2017-08-30 2019-10-08 中南大学 A kind of inter-frame mode fast selecting method and device based on Merge technological movement vector
WO2020007304A1 (en) * 2018-07-02 2020-01-09 华为技术有限公司 Motion vector prediction method and device, and codec
CN110446052A (en) * 2019-09-03 2019-11-12 南华大学 The quick CU depth selection method of depth map in a kind of 3D-HEVC frame
CN110446052B (en) * 2019-09-03 2021-02-12 南华大学 3D-HEVC intra-frame depth map rapid CU depth selection method
CN113286140A (en) * 2021-05-11 2021-08-20 北京飞讯数码科技有限公司 Video coding and decoding test method, device and storage medium
CN113286140B (en) * 2021-05-11 2022-09-02 北京飞讯数码科技有限公司 Video coding and decoding test method, device and storage medium
CN113596442A (en) * 2021-07-07 2021-11-02 北京百度网讯科技有限公司 Video processing method and device, electronic equipment and storage medium
CN113596442B (en) * 2021-07-07 2022-10-04 北京百度网讯科技有限公司 Video processing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN104601988B (en) 2018-02-02

Similar Documents

Publication Publication Date Title
CN104602017A (en) Video coder, method and device and inter-frame mode selection method and device thereof
CN104601988A (en) Video coder, method and device and inter-frame mode selection method and device thereof
US11968398B2 (en) Method and device for processing video signal by using reduced secondary transform
CN106537915B (en) Method derived from motion vector for Video coding
US20230239496A1 (en) Method and apparatus for encoding/decoding image
JP2021528896A (en) Partial cost calculation
Zhang et al. Low complexity HEVC INTRA coding for high-quality mobile video communication
CN104768011A (en) Image encoding and decoding method and related device
CN104838658A (en) Inside view motion prediction among texture and depth view components with asymmetric spatial resolution
WO2019136220A1 (en) Multiple-model local illumination compensation
CN104125469A (en) Fast coding method for high efficiency video coding (HEVC)
JP2013523010A5 (en) Method and apparatus for implicit adaptive motion vector predictor selection for video encoding and video decoding
CN105141954A (en) HEVC interframe coding quick mode selection method
CN104604232A (en) Method and apparatus for encoding multi-view images, and method and apparatus for decoding multi-view images
CN103348681A (en) Method and device for determining reference unit
US11889080B2 (en) Method and apparatus for processing video signal by applying secondary transform to partitioned block
Tohidypour et al. Online-learning-based complexity reduction scheme for 3D-HEVC
CN102740071A (en) Scalable video codec encoder device and methods thereof
CN111742553A (en) Deep learning based image partitioning for video compression
US11997284B2 (en) Method for deriving motion vector, and electronic device of current block in coding unit
WO2016031253A1 (en) Block size determining method and program recording medium
CN110149512B (en) Inter-frame prediction acceleration method, device, computer storage medium and equipment
CN104918047B (en) A kind of method and device for removing of bi-directional motion estimation
CN102595110A (en) Video coding method, decoding method and terminal
CN104104947A (en) Video coding method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant