CN104853191A - HEVC fast coding method - Google Patents

HEVC fast coding method

Info

Publication number
CN104853191A
CN104853191A (application CN201510225448.8A)
Authority
CN
China
Prior art keywords
coding unit
maximum coding
current
unit
current maximum
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510225448.8A
Other languages
Chinese (zh)
Other versions
CN104853191B (en)
Inventor
蒋刚毅
方树清
彭宗举
郁梅
徐升阳
杜宝祯
Current Assignee
Shenzhen Weier Vision Technology Co ltd
Original Assignee
Ningbo University
Priority date
Filing date
Publication date
Application filed by Ningbo University filed Critical Ningbo University
Priority to CN201510225448.8A priority Critical patent/CN104853191B/en
Publication of CN104853191A publication Critical patent/CN104853191A/en
Application granted granted Critical
Publication of CN104853191B publication Critical patent/CN104853191B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention discloses an HEVC fast coding method. During coding, the method fully exploits the temporal similarity between largest coding units at the same coordinate position in the forward and backward reference frames, the spatial similarity between horizontally adjacent largest coding units, and the spatial similarity between vertically adjacent largest coding units, to compute the depth prediction values of the largest coding units in P-frames or B-frames. The depth traversal interval of each largest coding unit is then determined from a one-to-one correspondence between depth prediction values and depth traversal intervals, and the largest coding unit and all coding units within it are coded within that interval. During coding, fast prediction-mode selection is applied to qualifying largest coding units and coding units in P-frames or B-frames, so that unnecessary depth traversal and unnecessary prediction-mode traversal are avoided. The computational complexity of video coding is thereby reduced while video quality is ensured.

Description

A fast encoding method for HEVC
Technical field
The present invention relates to a video coding technique, and in particular to a fast encoding method for HEVC.
Background art
With the rapid development of multimedia and networks, video coding standards such as MPEG-2, MPEG-4 and H.264/AVC can no longer meet users' requirements for the efficient compression and transmission of ultra-high-definition video. ITU-T VCEG and ISO/IEC MPEG jointly established JCT-VC (Joint Collaborative Team on Video Coding), which studied and formulated the High Efficiency Video Coding (HEVC) standard. Compared with H.264/AVC, HEVC essentially achieves the goal of doubling coding efficiency; however, because HEVC adopts techniques such as larger coding blocks and a quadtree coding structure, encoder complexity increases significantly.
In the HEVC Test Model (HM), determining the quadtree structure of a largest coding unit (LCU) requires a full recursive traversal over depth values 0 to 3. Fig. 1 illustrates the process of determining the final partition of an LCU: 1+4+4×4+4×4×4=85 rate-distortion optimization (RDO) costs must be computed, and every coding unit (CU) must additionally traverse the various intra and inter prediction-unit (PU) prediction modes. Clearly, the partitioning process of a whole LCU makes the computational complexity of the encoder very high. Hou et al. use a threshold determined from the rate-distortion cost to terminate CU splitting early, but because the method does not skip the prediction-mode selection of large CUs, its ability to reduce complexity is limited. Shen et al. predict the depth range (DR) of the current LCU from the depth values of spatio-temporally adjacent LCUs, which reduces the number of depths an LCU must traverse; however, the method ignores differences between video sequences, so its fixed weights do not suit all sequences, and the predicted depth range still needs improvement. Xiong et al. use optical flow to compute a pyramid motion divergence (PMD) feature to decide CU splitting, which reduces complexity to some extent, but the correlation between motion vectors is not fully considered, so the rate-distortion performance is not good.
Summary of the invention
The technical problem to be solved by the present invention is to provide a fast encoding method for HEVC that can effectively reduce encoder complexity while ensuring video quality.
The technical solution adopted by the present invention to solve the above technical problem is a fast encoding method for HEVC, characterized in that it comprises the following steps:
1. Define the frame currently to be processed in the high-definition video as the current frame;
2. Define the largest coding unit currently to be encoded in the current frame as the current LCU;
3. According to the frame type of the current frame and the position of the current LCU within it, determine all prediction LCUs of the current LCU; then define the set formed by all prediction LCUs of the current LCU as the prediction set of the current LCU, denoted Ω, where Ω is either the empty set or contains at least one of L, T and COL; L denotes the left adjacent LCU of the current LCU, T denotes the top adjacent LCU of the current LCU, and COL denotes the LCU at the same coordinate position as the current LCU in the forward reference frame of the current frame;
4. Let D_pred denote the depth prediction value of the current LCU, and establish a one-to-one correspondence between the value of D_pred and the depth traversal interval of the current LCU, where either D_pred does not exist or D_pred is a real number with 0 ≤ D_pred ≤ 3;
5. If Ω is the empty set, or Ω contains only one of L, T and COL, or Ω contains only two of them, determine that D_pred does not exist and go to step 7; if Ω contains all of L, T and COL, determine that D_pred is a real number with 0 ≤ D_pred ≤ 3 and go to step 6;
6. Compute the value of D_pred from the temporal similarity TS between the current LCU and COL, the spatial similarity TAS between the current LCU and T, and the spatial similarity LAS between the current LCU and L: D_pred = Σ_{m=1}^{3} ω_m × (1/256) × Σ_{i=1}^{256} d_i^m, where 1 ≤ m ≤ 3, 1 ≤ i ≤ 256, m and i are integers, ω_m denotes the weight of the m-th LCU in Ω (ω_1 = LAS, ω_2 = TAS, ω_3 = TS), and d_i^m denotes the depth value of the i-th 4×4 basic storage unit in the m-th LCU in Ω;
7. Determine the depth traversal interval of the current LCU according to the one-to-one correspondence between the value of D_pred and the depth traversal interval of the current LCU; then encode the current LCU and every coding unit within it over that depth traversal interval. During encoding, if the frame containing the current LCU is a P-frame or B-frame of the high-definition video, then whenever the mean spatio-temporal just-noticeable-distortion (JND) value of the current LCU or of a coding unit within it is less than the set low threshold T_1 or greater than the set high threshold T_2, fast prediction-mode selection is applied to that LCU or coding unit: the skip, merge, inter2N×2N and intra2N×2N prediction modes are traversed for it, and the mode with the minimum rate-distortion cost is chosen as the optimal prediction mode;
8. Take the next LCU to be encoded in the current frame as the current LCU and return to step 3, until all LCUs in the current frame have been encoded;
9. Take the next frame to be processed in the high-definition video as the current frame and return to step 2, until all frames in the high-definition video have been processed, thereby completing the fast encoding of the high-definition video.
The determination of all prediction LCUs of the current LCU in step 3 is as follows:
If the current frame is an I-frame of the high-definition video, then: when the current LCU is the 1st LCU in the current frame, determine that the current LCU has no prediction LCU; when the current LCU is any other LCU in the 1st row of the current frame except the 1st LCU, determine that the left adjacent LCU L is the prediction LCU of the current LCU; when the current LCU is any other LCU in the 1st column of the current frame except the 1st LCU, determine that the top adjacent LCU T is the prediction LCU of the current LCU; when the current LCU is any of the remaining LCUs of the current frame outside the 1st row and the 1st column, determine that the left adjacent LCU L and the top adjacent LCU T are the prediction LCUs of the current LCU.
If the current frame is a P-frame or B-frame of the high-definition video, then: when the current LCU is the 1st LCU in the current frame, determine that the LCU COL at the same coordinate position as the current LCU in the forward reference frame of the current frame is the prediction LCU of the current LCU; when the current LCU is any other LCU in the 1st row of the current frame except the 1st LCU, determine that the left adjacent LCU L and COL are the prediction LCUs of the current LCU; when the current LCU is any other LCU in the 1st column of the current frame except the 1st LCU, determine that the top adjacent LCU T and COL are the prediction LCUs of the current LCU; when the current LCU is any of the remaining LCUs of the current frame outside the 1st row and the 1st column, determine that the left adjacent LCU L, the top adjacent LCU T and COL are the prediction LCUs of the current LCU.
The one-to-one correspondence in step 4 between the value of D_pred and the depth traversal interval of the current LCU is: when D_pred = 0, the depth traversal interval of the current LCU is [0,0]; when 0 < D_pred ≤ 0.5, it is [0,1]; when 0.5 < D_pred ≤ 1.5, it is [0,2]; when 1.5 < D_pred ≤ 2.5, it is [1,3]; when 2.5 < D_pred ≤ 3, it is [2,3]; when D_pred does not exist, it is [0,3].
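The correspondence above can be expressed as a small lookup helper (a minimal sketch; the function name depth_traversal_interval is illustrative, not from the patent, and a non-existent D_pred is modelled as None):

```python
def depth_traversal_interval(d_pred):
    """Map the depth prediction value D_pred (or None when it does not
    exist) to the depth traversal interval [low, high] of the LCU."""
    if d_pred is None:        # D_pred does not exist: full traversal
        return (0, 3)
    if d_pred == 0:
        return (0, 0)
    if d_pred <= 0.5:         # 0 < D_pred <= 0.5
        return (0, 1)
    if d_pred <= 1.5:         # 0.5 < D_pred <= 1.5
        return (0, 2)
    if d_pred <= 2.5:         # 1.5 < D_pred <= 2.5
        return (1, 3)
    return (2, 3)             # 2.5 < D_pred <= 3
```

The encoder would then restrict the recursive CU split of this LCU to depths inside the returned interval.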
The detailed procedure of step 6 is:
6-1. Compute the mean depth difference between the left adjacent LCU L of the current LCU and the left adjacent LCU of COL in the forward reference frame (denoted L-COL), ADD1 = (1/256) × Σ_{i=1}^{256} |d_i^L − d_i^{L-COL}|; compute the mean depth difference between the top adjacent LCU T of the current LCU and the top adjacent LCU of COL in the forward reference frame (denoted T-COL), ADD2 = (1/256) × Σ_{i=1}^{256} |d_i^T − d_i^{T-COL}|; then compute their mean, ADD = (ADD1 + ADD2)/2. Here 1 ≤ i ≤ 256 and i is an integer; d_i^L, d_i^{L-COL}, d_i^T and d_i^{T-COL} denote the depth values of the i-th 4×4 basic storage unit in L, L-COL, T and T-COL respectively; each depth value is an integer in [0,3]; and the symbol "| |" denotes absolute value;
6-2. Compute the spatial similarity between the current LCU and its top adjacent LCU T as TAS = 0.05 × ADD + 0.25; compute the spatial similarity between the current LCU and its left adjacent LCU L as LAS = 0.05 × ADD + 0.25; meanwhile, compute the temporal similarity between the current LCU and the LCU COL at the same coordinate position in the forward reference frame of the current frame as TS = −0.1 × ADD + 0.5;
6-3. Compute D_pred = Σ_{m=1}^{3} ω_m × (1/256) × Σ_{i=1}^{256} d_i^m, where 1 ≤ m ≤ 3 and m is an integer, ω_m denotes the weight of the m-th LCU in Ω (ω_1 = LAS, ω_2 = TAS, ω_3 = TS), and d_i^m denotes the depth value of the i-th 4×4 basic storage unit in the m-th LCU in Ω.
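Steps 6-1 to 6-3 can be sketched as follows, assuming each LCU's depth map is given as a flat list of its 256 4×4-unit depth values (the function names are illustrative, not from the patent):

```python
def similarities(depths_L, depths_L_COL, depths_T, depths_T_COL):
    """Step 6-1 and 6-2: mean absolute depth differences, then the
    spatial (LAS, TAS) and temporal (TS) similarity weights."""
    add1 = sum(abs(a - b) for a, b in zip(depths_L, depths_L_COL)) / 256.0
    add2 = sum(abs(a - b) for a, b in zip(depths_T, depths_T_COL)) / 256.0
    add = (add1 + add2) / 2.0
    las = 0.05 * add + 0.25    # spatial similarity with L
    tas = 0.05 * add + 0.25    # spatial similarity with T
    ts = -0.1 * add + 0.5      # temporal similarity with COL
    return las, tas, ts        # by construction the three weights sum to 1

def predict_depth(lcu_depths, weights):
    """Step 6-3: D_pred as the weighted average of the mean 4x4-unit
    depths of the LCUs in Omega, ordered (L, T, COL) to match the
    weight order (LAS, TAS, TS)."""
    return sum(w * sum(d) / 256.0 for w, d in zip(weights, lcu_depths))
```

Note that because LAS + TAS + TS = 1 for any ADD, D_pred stays in [0, 3] whenever all 4×4-unit depths do.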
The detailed procedure in step 7 of encoding the current LCU and every coding unit within it over the depth traversal interval of the current LCU is:
7-1. Define the coding unit currently to be processed within the depth traversal interval of the current LCU as the current CU, and define the layer containing the current CU as the current layer;
7-2. Let JND_TS denote the mean spatio-temporal just-noticeable-distortion value of the current CU. If the current frame is an I-frame of the high-definition video, determine that JND_TS does not exist and go to step 7-3; if the current frame is a P-frame or B-frame, compute JND_TS = (1/(K×G)) × Σ_{x=0}^{K−1} Σ_{y=0}^{G−1} JND(x,y) and then go to step 7-3, where (x,y) denotes the coordinate position of a pixel in the current CU, 0 ≤ x ≤ K−1, 0 ≤ y ≤ G−1, x and y are integers, K is the total number of pixels in a row of the current CU, G is the total number of pixels in a column of the current CU, and JND(x,y) denotes the spatio-temporal JND value of the pixel at (x,y) in the current CU;
7-3. When JND_TS does not exist, encode the current CU with the intra2N×2N and intraN×N prediction modes, choose the mode with the minimum rate-distortion cost as the optimal prediction mode of the current CU, and go to step 7-4. When JND_TS exists, judge whether JND_TS is less than the set low threshold T_1 or greater than the set high threshold T_2; if so, apply fast prediction-mode selection to the current CU, traversing only the skip, merge, inter2N×2N and intra2N×2N prediction modes, choose the mode with the minimum rate-distortion cost as the optimal prediction mode, and go to step 7-4; otherwise, do not apply fast selection, fully traverse the skip, merge, inter2N×2N, inter2N×N, interN×2N, AMP, intra2N×2N and intraN×N prediction modes, choose the mode with the minimum rate-distortion cost as the optimal prediction mode, and go to step 7-4. Here the value of N in inter2N×2N, inter2N×N, interN×2N, intra2N×2N and intraN×N is half the number of pixels in a row (or a column) of the current CU;
7-4. Judge whether the depth value of the current CU is less than the maximum of the depth traversal interval of the current LCU. If so, split the current CU further into four identically sized next-layer CUs, take the CU currently to be processed in that next layer as the current CU and the layer containing it as the current layer, and return to step 7-2. Otherwise, determine that the encoding of the current CU is finished and go to step 7-5;
7-5. Judge whether all CUs within the depth traversal interval of the current LCU have been processed. If so, determine that the encoding of the current LCU is finished and go to step 8. Otherwise, judge whether the four CUs of the current layer have all been processed: if they have, take the next CU to be processed in the layer above the current layer as the current CU and the layer containing it as the current layer, then return to step 7-2; if they have not, take the next CU to be processed in the current layer as the current CU and the layer containing it as the current layer, then return to step 7-2.
In step 7-3, T_1 = 3.5 and T_2 = 10.
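The mode-selection rule of step 7-3 can be sketched as follows (a minimal sketch; the function and list names are illustrative, and the mode strings are labels rather than HM identifiers):

```python
# Candidate prediction-mode lists used by step 7-3.
FAST_MODES = ["skip", "merge", "inter2Nx2N", "intra2Nx2N"]
FULL_MODES = ["skip", "merge", "inter2Nx2N", "inter2NxN", "interNx2N",
              "AMP", "intra2Nx2N", "intraNxN"]
I_FRAME_MODES = ["intra2Nx2N", "intraNxN"]

def candidate_modes(jnd_ts, t1=3.5, t2=10.0):
    """Select the prediction modes to traverse for one CU.

    jnd_ts is the CU's mean spatio-temporal JND value, or None for a
    CU in an I-frame (where JND_TS does not exist)."""
    if jnd_ts is None:              # I-frame: intra modes only
        return I_FRAME_MODES
    if jnd_ts < t1 or jnd_ts > t2:  # fast prediction-mode selection
        return FAST_MODES
    return FULL_MODES               # full mode traversal otherwise
```

The encoder would evaluate the RDO cost of each returned mode and keep the cheapest one as the optimal prediction mode of the CU.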
Compared with the prior art, the present invention has the following advantages:
1) During encoding of the high-definition video, the method fully exploits the temporal similarity between an LCU and the LCU at the same coordinate position in the forward reference frame of its frame, together with the spatial similarities between the LCU and its left and top adjacent LCUs, to compute the depth prediction value of each LCU in a P-frame or B-frame. It then determines the depth traversal interval of the LCU from the one-to-one correspondence between depth prediction values and depth traversal intervals, encodes the LCU and every CU within it over that interval, and applies fast prediction-mode selection to qualifying LCUs and CUs in P-frames and B-frames. This effectively reduces unnecessary depth traversal during partitioning and unnecessary prediction-mode traversal during prediction, thereby lowering the computational complexity of video coding while ensuring video quality.
2) While traversing and encoding within the depth traversal interval, the method uses the mean spatio-temporal JND value of a CU to guide fast prediction-mode selection, reducing unnecessary prediction-mode traversal and hence the computational complexity of video coding.
Brief description of the drawings
Fig. 1 is a schematic diagram of the prediction and partitioning process of one LCU in the HEVC test model HM;
Fig. 2 is the overall implementation block diagram of the method of the present invention;
Fig. 3 shows the positional relationship between an LCU C, its left adjacent LCU L, its top adjacent LCU T, and the LCU COL at the same coordinate position as C in the forward reference frame of the frame containing C;
Fig. 4a is a schematic diagram of the statistical relationship between the mean spatio-temporal JND value of a CU and the optimal prediction mode of the CU for the BasketballDrive (1920×1080) sequence with QP=32 and 50 coded frames;
Fig. 4b is the corresponding schematic diagram for the BlowingBubbles (416×240) sequence with QP=32 and 50 coded frames;
Fig. 4c is the corresponding schematic diagram for the Traffic (2560×1600) sequence with QP=32 and 50 coded frames;
Fig. 4d is the corresponding schematic diagram for the RaceHorsesC (832×480) sequence with QP=32 and 50 coded frames;
Fig. 5 shows the percentage of encoding time saved when the method of the present invention encodes 100 frames of each test sequence listed in Table 1 under the low-delay and random-access coding structures, compared with the original HM9.0 encoding method;
Fig. 6a compares the rate-distortion curves obtained by the method of the present invention and by the original HM9.0 encoding method when encoding 100 frames of the PeopleOnStreet test sequence (2560×1600) under the low-delay coding structure;
Fig. 6b compares the corresponding rate-distortion curves for the Kimono test sequence (1920×1080) under the low-delay coding structure;
Fig. 6c compares the corresponding rate-distortion curves for the PartyScene test sequence (832×480) under the low-delay coding structure;
Fig. 6d compares the corresponding rate-distortion curves for the BlowingBubbles test sequence (416×240) under the low-delay coding structure;
Fig. 7a is a schematic diagram of the quadtree partitioning result of the 6th frame (1280×720) of the Johnny test sequence encoded with the original HM9.0 method under the low-delay coding structure with quantization parameter QP=32;
Fig. 7b is a schematic diagram of the quadtree partitioning result of the same frame encoded with the method of the present invention under the same conditions.
Detailed description of the embodiments
The present invention is described in further detail below with reference to the embodiments and the accompanying drawings.
The fast encoding method for HEVC proposed by the present invention, whose overall implementation block diagram is shown in Fig. 2, comprises the following steps:
1. Define the frame currently to be processed in the high-definition video as the current frame.
High-definition video coding uses I-frames, P-frames and B-frames. An I-frame is a key frame of the video coding, belongs to an I group of pictures, does not use temporal information, and is intra-coded. A P-frame uses forward reference for intra and inter coding and belongs to a non-I group of pictures. A B-frame uses bidirectional (forward and backward) reference for intra and inter coding and also belongs to a non-I group of pictures.
2. Define the largest coding unit currently to be encoded in the current frame as the current LCU.
The HEVC test model HM encodes recursively from small coding units to large coding units; the CU sizes are 8×8, 16×16, 32×32 and 64×64, and the 64×64 CU is called the largest coding unit (LCU) in the encoding process.
3. According to the frame type of the current frame and the position of the current LCU within it, determine all prediction LCUs of the current LCU; then define the set formed by all prediction LCUs of the current LCU as the prediction set of the current LCU, denoted Ω, where Ω is either the empty set or contains at least one of L, T and COL, i.e. Ω={T}, Ω={L}, Ω={COL}, Ω={L,T}, Ω={T,COL}, Ω={L,COL} or Ω={L,T,COL}; L denotes the left adjacent LCU of the current LCU, T denotes the top adjacent LCU of the current LCU, and COL denotes the LCU at the same coordinate position as the current LCU in the forward reference frame of the current frame. Fig. 3 shows the positional relationship between an LCU C, its left adjacent LCU L, its top adjacent LCU T, and the LCU COL at the same coordinate position as C in the forward reference frame of the frame containing C.
In this particular embodiment, the determination of all prediction LCUs of the current LCU in step 3 is as follows:
If the current frame is an I-frame of the high-definition video, then: when the current LCU is the 1st LCU in the current frame, determine that the current LCU has no prediction LCU, i.e. Ω is the empty set; when the current LCU is any other LCU in the 1st row of the current frame except the 1st LCU, determine that the left adjacent LCU L is the prediction LCU of the current LCU, i.e. Ω={L}; when the current LCU is any other LCU in the 1st column of the current frame except the 1st LCU, determine that the top adjacent LCU T is the prediction LCU of the current LCU, i.e. Ω={T}; when the current LCU is any of the remaining LCUs of the current frame outside the 1st row and the 1st column, determine that the left adjacent LCU L and the top adjacent LCU T are the prediction LCUs of the current LCU, i.e. Ω={L,T}.
If the current frame is a P-frame or B-frame of the high-definition video, then: when the current LCU is the 1st LCU in the current frame, determine that the LCU COL at the same coordinate position as the current LCU in the forward reference frame of the current frame is the prediction LCU of the current LCU, i.e. Ω={COL}; when the current LCU is any other LCU in the 1st row of the current frame except the 1st LCU, determine that the left adjacent LCU L and COL are the prediction LCUs of the current LCU, i.e. Ω={L,COL}; when the current LCU is any other LCU in the 1st column of the current frame except the 1st LCU, determine that the top adjacent LCU T and COL are the prediction LCUs of the current LCU, i.e. Ω={T,COL}; when the current LCU is any of the remaining LCUs of the current frame outside the 1st row and the 1st column, determine that the left adjacent LCU L, the top adjacent LCU T and COL are the prediction LCUs of the current LCU, i.e. Ω={L,T,COL}.
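The case analysis above can be condensed into a small helper (a minimal sketch; the function name prediction_set and the 0-indexed row/column convention are illustrative, not from the patent):

```python
def prediction_set(frame_type, row, col):
    """Return the prediction set Omega for the LCU at LCU-grid position
    (row, col), 0-indexed, as a subset of {"L", "T", "COL"}."""
    omega = set()
    if col > 0:
        omega.add("L")        # a left adjacent LCU exists
    if row > 0:
        omega.add("T")        # a top adjacent LCU exists
    if frame_type in ("P", "B"):
        omega.add("COL")      # co-located LCU in the forward reference frame
    return omega
```

The set is empty only for the 1st LCU of an I-frame, and contains all of L, T and COL only for interior LCUs of P-frames and B-frames, matching the cases enumerated above.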
4. D is made predrepresent the depth prediction value of current maximum coding unit, then establish D predvalue and the one-to-one relationship in extreme saturation interval of current maximum coding unit, wherein, D predvalue do not exist or D predvalue be real number, and 0≤D pred≤ 3.
In this embodiment, the one-to-one correspondence in Step 4 between the value of D_pred and the depth traversal interval of the current largest coding unit is: when D_pred = 0, the depth traversal interval is [0, 0]; when 0 < D_pred ≤ 0.5, it is [0, 1]; when 0.5 < D_pred ≤ 1.5, it is [0, 2]; when 1.5 < D_pred ≤ 2.5, it is [1, 3]; when 2.5 < D_pred ≤ 3, it is [2, 3]; and when the value of D_pred does not exist, it is [0, 3].
Here, the value of D_pred is obtained by a weighted average of the depth values of the 4 × 4 blocks inside the three largest coding units L, T and COL. Therefore, if Ω does not contain all three largest coding units, that is, if Ω is the empty set or contains only one or two of them, the value of D_pred is considered not to exist, and the depth traversal interval of the current largest coding unit is defined as [0, 3]; if Ω contains all three largest coding units, then D_pred is a real number with 0 ≤ D_pred ≤ 3.
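The Step-4 mapping, including the case where D_pred does not exist, can be written as a small lookup function. The helper name `depth_traversal_interval` and the use of `None` for a non-existent D_pred are illustrative assumptions:

```python
def depth_traversal_interval(d_pred):
    """Map the depth prediction value D_pred of an LCU to its depth
    traversal interval (Step 4). d_pred is None when Omega holds fewer
    than the three LCUs L, T and COL, i.e. D_pred does not exist."""
    if d_pred is None:
        return (0, 3)        # no prediction: traverse all depths 0..3
    if d_pred == 0:
        return (0, 0)
    if d_pred <= 0.5:
        return (0, 1)
    if d_pred <= 1.5:
        return (0, 2)
    if d_pred <= 2.5:
        return (1, 3)
    return (2, 3)            # remaining case: 2.5 < d_pred <= 3
```

Skipping depths outside the returned interval is what removes the unnecessary depth traversal mentioned in the abstract.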
Step 5: If Ω is the empty set, or the largest coding units contained in Ω are only one or two of L, T and COL, then determine that the value of D_pred does not exist and go to Step 7; if Ω contains all of L, T and COL, then determine that D_pred is a real number with 0 ≤ D_pred ≤ 3 and go to Step 6.
Step 6: Compute the value of D_pred from the temporal similarity TS between the current largest coding unit and COL, the spatial similarity TAS between the current largest coding unit and T, and the spatial similarity LAS between the current largest coding unit and L, as D_pred = Σ_{m=1}^{3} ω_m × (1/256) Σ_{i=1}^{256} D_m(i), where 1 ≤ m ≤ 3, 1 ≤ i ≤ 256, m and i are integers, ω_m denotes the weight of the m-th largest coding unit in Ω (ω_1 = LAS, ω_2 = TAS, ω_3 = TS), and D_m(i) denotes the depth value of the i-th 4 × 4 basic storage unit in the m-th largest coding unit in Ω.
In this embodiment, the detailed process of Step 6 is as follows:
Step 6-1: Compute the average depth difference between the left adjacent largest coding unit L of the current largest coding unit and the largest coding unit L-COL at the same coordinate position as L in the forward reference frame of the current frame (equivalently, the left neighbour of COL), denoted ADD1 = (1/256) Σ_{i=1}^{256} |D_L(i) − D_{L-COL}(i)|; and the average depth difference between the top adjacent largest coding unit T of the current largest coding unit and the largest coding unit T-COL at the same coordinate position as T in the forward reference frame (the top neighbour of COL), denoted ADD2 = (1/256) Σ_{i=1}^{256} |D_T(i) − D_{T-COL}(i)|; then compute their mean, denoted ADD = (ADD1 + ADD2)/2. Here 1 ≤ i ≤ 256 and i is an integer; D_L(i), D_{L-COL}(i), D_T(i) and D_{T-COL}(i) denote the depth values of the i-th 4 × 4 basic storage unit in L, L-COL, T and T-COL respectively; each depth value is an integer in [0, 3]; and the symbol "| |" denotes absolute value.
Step 6-2: Compute the spatial similarity between the current largest coding unit and its top adjacent largest coding unit T, denoted TAS = 0.05 × ADD + 0.25; compute the spatial similarity between the current largest coding unit and its left adjacent largest coding unit L, denoted LAS = 0.05 × ADD + 0.25; and compute the temporal similarity between the current largest coding unit and the co-located largest coding unit COL in the forward reference frame, denoted TS = −0.1 × ADD + 0.5. Note that LAS + TAS + TS = 1 for any ADD, so the three similarities form a set of normalized weights.
Step 6-3: Compute D_pred = Σ_{m=1}^{3} ω_m × (1/256) Σ_{i=1}^{256} D_m(i), where 1 ≤ m ≤ 3 and m is an integer, ω_m denotes the weight of the m-th largest coding unit in Ω (ω_1 = LAS, ω_2 = TAS, ω_3 = TS), and D_m(i) denotes the depth value of the i-th 4 × 4 basic storage unit in the m-th largest coding unit in Ω.
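Steps 6-1 to 6-3 can be condensed into a short numerical sketch. The helper name `depth_prediction` and the plain-list representation of the 256 per-4×4 depth values are assumptions for illustration, and the summation form of D_pred is reconstructed from the weight definitions above:

```python
def depth_prediction(d_l, d_t, d_col, d_l_col, d_t_col):
    """Sketch of Step 6. Each argument is a list of 256 integer depth
    values (one per 4x4 basic storage unit) for the LCUs L, T, COL and
    for the reference-frame neighbours written L-COL and T-COL."""
    n = 256
    add1 = sum(abs(a - b) for a, b in zip(d_l, d_l_col)) / n   # Step 6-1
    add2 = sum(abs(a - b) for a, b in zip(d_t, d_t_col)) / n
    add = (add1 + add2) / 2
    las = 0.05 * add + 0.25          # spatial similarity with L (Step 6-2)
    tas = 0.05 * add + 0.25          # spatial similarity with T
    ts = -0.1 * add + 0.5            # temporal similarity with COL
    mean = lambda d: sum(d) / n      # average depth of one LCU
    return las * mean(d_l) + tas * mean(d_t) + ts * mean(d_col)  # Step 6-3
```

Because the weights sum to 1, D_pred collapses to the common depth when all three prediction LCUs have a uniform depth, which is the behaviour the interval mapping of Step 4 relies on.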
Step 7: According to the one-to-one correspondence between the value of D_pred and the depth traversal interval of the current largest coding unit, determine the depth traversal interval of the current largest coding unit; then encode the current largest coding unit and each coding unit inside it within that depth traversal interval. During encoding, if the frame containing the current largest coding unit is a P frame or a B frame of the HD video, then whenever the mean spatio-temporal human-eye just-noticeable-distortion (JND) value of the current largest coding unit, or of a coding unit inside it, is smaller than the preset low threshold T1 or larger than the preset high threshold T2, fast prediction-mode selection is applied to that unit: only the skip, merge, inter2N×2N and intra2N×2N prediction modes are traversed, and the mode with the minimum rate-distortion cost is chosen as the optimal prediction mode.
In this embodiment, the detailed process in Step 7 of encoding the current largest coding unit and each coding unit inside it according to the depth traversal interval is as follows:
Step 7-1: Define the coding unit currently to be processed within the depth traversal interval of the current largest coding unit as the current coding unit, and define the layer of the current coding unit as the current layer.
Step 7-2: Let JND_ts denote the mean spatio-temporal human-eye just-noticeable-distortion value of the current coding unit. If the current frame is an I frame of the HD video, determine that the value of JND_ts does not exist and go to Step 7-3. If the current frame is a P frame or a B frame of the HD video, compute JND_ts = (1/(K × G)) Σ_{x=0}^{K−1} Σ_{y=0}^{G−1} JND(x, y) and then go to Step 7-3, where (x, y) is the coordinate position of a pixel in the current coding unit, 0 ≤ x ≤ K−1, 0 ≤ y ≤ G−1, x and y are integers, K is the total number of pixels in a row of the current coding unit, G is the total number of pixels in a column of the current coding unit, and JND(x, y) is the spatio-temporal human-eye just-noticeable-distortion value of the pixel at (x, y), obtained with an existing (prior-art) JND model.
Step 7-3: When the value of JND_ts does not exist, encode the current coding unit with the intra2N×2N and intraN×N prediction modes respectively, choose the mode with the minimum rate-distortion cost as the optimal prediction mode of the current coding unit, and then go to Step 7-4. When the value of JND_ts exists, judge whether it is smaller than the preset low threshold T1 or larger than the preset high threshold T2. If so, fast prediction-mode selection is applied to the current coding unit: the skip, merge, inter2N×2N and intra2N×2N prediction modes are traversed, the mode with the minimum rate-distortion cost is chosen as the optimal prediction mode, and then Step 7-4 is performed. Otherwise, fast prediction-mode selection is not applied: the skip, merge, inter2N×2N, inter2N×N, interN×2N, AMP (asymmetric motion partitioning), intra2N×2N and intraN×N prediction modes are all traversed, the mode with the minimum rate-distortion cost is chosen as the optimal prediction mode, and then Step 7-4 is performed. Here the value of N in inter2N×2N, inter2N×N, interN×2N, intra2N×2N and intraN×N is half the total number of pixels in a row (or a column) of the current coding unit.
Here T1 = 3.5 and T2 = 10. The specific values of T1 and T2 are determined from the statistical relationship, established in advance, between the mean spatio-temporal human-eye just-noticeable-distortion value of a coding unit and its optimal prediction mode (the mode with the minimum rate-distortion cost).
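With the stated thresholds, the mode-candidate choice of Step 7-3 reduces to a small lookup. The function name `candidate_modes` and its mode-name strings are hypothetical identifiers chosen for illustration; the actual encoder operates on HM mode enumerations:

```python
T1, T2 = 3.5, 10.0                  # thresholds from the embodiment

def candidate_modes(frame_type, jnd_ts):
    """Sketch of Step 7-3: pick the prediction-mode candidate list for
    one coding unit from its mean spatio-temporal JND value jnd_ts
    (None for I frames, where JND_ts does not exist)."""
    if frame_type == 'I' or jnd_ts is None:
        return ['intra2Nx2N', 'intraNxN']            # intra-only
    if jnd_ts < T1 or jnd_ts > T2:                   # very small or very
        return ['skip', 'merge',                     # large JND: fast set
                'inter2Nx2N', 'intra2Nx2N']
    return ['skip', 'merge', 'inter2Nx2N',           # otherwise: full
            'inter2NxN', 'interNx2N', 'AMP',         # traversal
            'intra2Nx2N', 'intraNxN']
```

Each candidate in the returned list would then be rate-distortion tested, and the cheapest one kept as the optimal prediction mode.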
The best prediction mode in video coding varies with the region and with the degree of difference between the current frame and the reference frame, while the human-eye just-noticeable-distortion (JND) model describes the maximum distortion that remains invisible. The JND model comprises a spatial model and a temporal model: the spatial model shows that the human eye is more sensitive to relative than to absolute luminance, and texture masking shows that textured regions can hide more distortion than flat regions; the temporal model shows that the larger the difference between two successive frames, the more distortion can be masked. The JND model can therefore guide the selection of inter prediction modes. Fig. 4a shows the statistical relationship between the mean spatio-temporal JND value of a coding unit and its optimal prediction mode for the BasketballDrive sequence at quantization parameter 32 with 50 coded frames; Fig. 4b shows the same statistics for the BlowingBubbles sequence; Fig. 4c for the Traffic sequence; and Fig. 4d for the RaceHorses sequence, all at quantization parameter 32 with 50 coded frames. The statistical analysis of Fig. 4a to Fig. 4d shows that when the spatio-temporal JND value is very small or very large, some prediction modes occur in only a very small proportion of cases, so those modes can be excluded. Therefore, in the present invention, when the value of JND_ts is smaller than the preset low threshold T1 or larger than the preset high threshold T2, fast prediction-mode selection is applied to the current coding unit: only the skip, merge, inter2N×2N and intra2N×2N prediction modes are traversed, and the best of them is selected.
Step 7-4: Judge whether the depth value of the current coding unit is smaller than the maximum of the depth traversal interval of the current largest coding unit. If so, split the current coding unit further into four equally sized coding units of the next layer, take the coding unit currently to be processed in that next layer as the current coding unit, take the layer of the current coding unit as the current layer, and return to Step 7-2. Otherwise, determine that the encoding of the current coding unit is finished and go to Step 7-5.
Step 7-5: Judge whether all coding units within the depth traversal interval of the current largest coding unit have been processed. If so, determine that the encoding of the current largest coding unit is finished and go to Step 8. Otherwise, judge whether the four coding units of the current layer have all been processed: if they have, take the next coding unit to be processed in the layer above the current layer as the current coding unit, take its layer as the current layer, and return to Step 7-2; if they have not, take the next coding unit to be processed in the current layer as the current coding unit, take its layer as the current layer, and return to Step 7-2.
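The depth-constrained quadtree walk of Steps 7-1 to 7-5 can be sketched recursively. This sketch assumes, as one reading of the method, that depths below the interval minimum are only split and not mode-decided; `encode_one` is a hypothetical callback standing in for the whole per-CU mode decision of Step 7-3:

```python
def encode_cu(depth, d_min, d_max, encode_one):
    """Recursively encode one coding unit of the given depth within the
    depth traversal interval [d_min, d_max] of its LCU (Steps 7-1 to
    7-5). Rate-distortion based split pruning is omitted for brevity."""
    if depth >= d_min:               # only depths inside the interval
        encode_one(depth)            # get a mode decision
    if depth < d_max:                # Step 7-4: split into 4 sub-units
        for _ in range(4):           # of the next layer
            encode_cu(depth + 1, d_min, d_max, encode_one)

# Counting mode decisions for the interval [1, 3] starting at the LCU
# (depth 0): 4 + 16 + 64 = 84 instead of the full 1 + 4 + 16 + 64 = 85,
# and the interval [0, 1] needs only 1 + 4 = 5.
decided = []
encode_cu(0, 1, 3, decided.append)
```

In a real encoder the recursion would also compare the rate-distortion cost of the split against the unsplit CU, but the interval bounds alone already show how the traversal range shrinks.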
Step 8: Take the next largest coding unit to be encoded in the current frame as the current largest coding unit, then return to Step 3 and continue, until all largest coding units in the current frame have been encoded.
Step 9: Take the next frame to be processed in the HD video as the current frame, then return to Step 2 and continue, until all frames in the HD video have been processed, which completes the fast coding of the HD video.
The inventive method is tested below to further illustrate its validity and feasibility.
The inventive method uses HM9.0, the reference test model of HEVC. The hardware configuration is a PC with an Intel Core i5-2500 CPU at 3.30 GHz and 8 GB of memory; the operating system is Windows 7 and the development tool is Microsoft Visual Studio 2008. Both the low-delay and the random-access coding structures are adopted; all test sequences are tested at the four quantization parameters QP = 22, 27, 32 and 37; 100 frames are coded per sequence; each non-I group of pictures contains 8 non-I frames, and the I-frame period is 32. BD-PSNR (Bjøntegaard delta peak signal-to-noise ratio), BDBR (Bjøntegaard delta bit rate) and ΔT are adopted as the evaluation indexes of the inventive method. BD-PSNR represents the change of PSNR at the same bit rate, BDBR represents the percentage change of bit rate at the same PSNR, and ΔT(%) = (T_p − T_HM)/T_HM × 100 represents the percentage change of encoding time, where T_p and T_HM are the encoding times of the inventive method and of the HM9.0 algorithm respectively.
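The ΔT index can be computed as below. The helper name `delta_t` is illustrative, and the timing figures in the usage line are made-up numbers for demonstration, not measurements from the patent:

```python
def delta_t(t_proposed, t_hm):
    """Encoding-time change Delta-T(%) = (T_p - T_HM) / T_HM * 100.
    A negative value means the proposed method is faster than HM9.0."""
    return (t_proposed - t_hm) / t_hm * 100.0

# Hypothetical example: 58.65 s versus 100 s gives a 41.35% time saving.
saving = delta_t(58.65, 100.0)
```

A ΔT of about −41% corresponds to the roughly 41% time savings reported in Table 2 for both coding structures.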
Table 1 lists the test sequences selected for the inventive method. Table 2 gives the coding performance (the values of BD-PSNR, BDBR and ΔT) when 100 frames of each sequence listed in Table 1 are coded with the inventive method. As can be seen from Table 2, under the low-delay coding structure BDBR increases by only 1.71%, BD-PSNR decreases by only 0.058 dB, and 41.35% of the encoding time is saved; under the random-access coding structure BDBR increases by only 1.06%, BD-PSNR decreases by only 0.037 dB, and 41.43% of the encoding time is saved. Compared with HM9.0, under either coding structure the video quality of the inventive method remains essentially unchanged while the computational complexity of video coding is greatly reduced.
Fig. 5 shows the percentage of encoding time saved when the test sequences listed in Table 1 are coded for 100 frames with the inventive method, compared with the original HM9.0 encoding method, under the low-delay and random-access coding structures. As can be seen from Fig. 5, the average time savings under the two coding structures are close, both exceeding 40%. The inventive method therefore clearly reduces the computational complexity of encoding.
Fig. 6 a gives and utilizes the inventive method and utilize HM9.0 original coding method PeopleOnStreet cycle tests (2560 × 1600) to be carried out to the distortion performance curve comparison figure of coding 100 frame under low delay coding structure; Fig. 6 b gives and utilizes the inventive method and utilize HM9.0 original coding method Kimono cycle tests (1920 × 1080) to be carried out to the distortion performance curve comparison figure of coding 100 frame under low delay coding structure; Fig. 6 c gives and utilizes the inventive method and utilize HM9.0 original coding method PartyScene cycle tests (832 × 480) to be carried out to the distortion performance curve comparison figure of coding 100 frame under low delay coding structure; Fig. 6 d gives and utilizes the inventive method and utilize HM9.0 original coding method BlowingBubbles cycle tests (416 × 240) to be carried out to the distortion performance curve comparison figure of coding 100 frame under low delay coding structure.As can be seen from Fig. 6 a to Fig. 6 d, utilize the inventive method compared to HM9.0 original coding method, distortion performance curve overlaps substantially, illustrates that the video quality after utilizing the inventive method and HM9.0 original coding method to encode remains unchanged substantially.
Fig. 7 a gives the Quadtree Partition result schematic diagram after utilizing HM9.0 original coding method to encode to the 6th frame (1280 × 720) in Johnny cycle tests under the condition of quantization parameter QP=32 under low delay coding structure; Fig. 7 b gives the Quadtree Partition result schematic diagram after utilizing the inventive method to encode to the 6th frame (1280 × 720) in Johnny cycle tests under the condition of quantization parameter QP=32 under low delay coding structure.The difference of both the black box representatives coding unit segmentation in Fig. 7 a and Fig. 7 b, the difference of predictive mode both black vertical line or horizontal line represent.As can be seen from Fig. 7 a and Fig. 7 b, the inventive method is HM9.0 original coding method comparatively, the segmentation degree of depth of coding unit and the predictive mode of predicting unit basically identical, though there is minority not mate, the predictive mode not mating the segmentation degree of depth of part coding unit and predicting unit is close.Therefore, it is possible to absolutely prove that the overall distortion performance utilizing the inventive method to carry out encoding keeps stable.
Table 1 Test sequences selected for the inventive method
Sequence name Resolution Frame count Frame rate Bit depth
Traffic 2560×1600 150 30fps 8
PeopleOnStreet 2560×1600 150 30fps 8
Kimono 1920×1080 240 24fps 8
Cactus 1920×1080 500 50fps 8
ParkScene 1920×1080 240 24fps 8
BasketballDrive 1920×1080 500 50fps 8
BQTerrace 1920×1080 600 60fps 8
PartyScene 832×480 500 50fps 8
RaceHorsesC 832×480 300 30fps 8
BasketballDrill 832×480 500 50fps 8
BQMall 832×480 600 60fps 8
RaceHorses 416×240 300 30fps 8
BlowingBubbles 416×240 500 50fps 8
BasketballPass 416×240 500 50fps 8
Vidyo1 1280×720 600 60fps 8
Vidyo3 1280×720 600 60fps 8
Vidyo4 1280×720 600 60fps 8
Johnny 1280×720 600 60fps 8
FourPeople 1280×720 600 60fps 8
KristenAndSara 1280×720 600 60fps 8
Table 2 Coding performance when 100 frames of the test sequences listed in Table 1 are coded with the inventive method

Claims (6)

1. An HEVC fast coding method, characterized by comprising the following steps:
Step 1: define the frame currently to be processed in the HD video as the current frame;
Step 2: define the largest coding unit currently to be encoded in the current frame as the current largest coding unit;
Step 3: determine all prediction largest coding units of the current largest coding unit according to the frame type of the current frame and the position of the current largest coding unit in the current frame; then define the set formed by all prediction largest coding units of the current largest coding unit as its prediction set, denoted Ω, wherein Ω is either the empty set or contains at least one of L, T and COL, L denotes the left adjacent largest coding unit of the current largest coding unit, T denotes the top adjacent largest coding unit of the current largest coding unit, and COL denotes the largest coding unit at the same coordinate position in the forward reference frame of the current frame;
Step 4: let D_pred denote the depth prediction value of the current largest coding unit, and establish a one-to-one correspondence between the value of D_pred and the depth traversal interval of the current largest coding unit, wherein the value of D_pred either does not exist or is a real number with 0 ≤ D_pred ≤ 3;
Step 5: if Ω is the empty set, or contains only one or two of L, T and COL, determine that the value of D_pred does not exist and go to Step 7; if Ω contains all of L, T and COL, determine that D_pred is a real number with 0 ≤ D_pred ≤ 3 and go to Step 6;
Step 6: compute the value of D_pred as D_pred = Σ_{m=1}^{3} ω_m × (1/256) Σ_{i=1}^{256} D_m(i) from the temporal similarity TS between the current largest coding unit and COL, the spatial similarity TAS between the current largest coding unit and T, and the spatial similarity LAS between the current largest coding unit and L, wherein 1 ≤ m ≤ 3, 1 ≤ i ≤ 256, m and i are integers, ω_m denotes the weight of the m-th largest coding unit in Ω, ω_1 = LAS, ω_2 = TAS, ω_3 = TS, and D_m(i) denotes the depth value of the i-th 4 × 4 basic storage unit in the m-th largest coding unit in Ω;
Step 7: determine the depth traversal interval of the current largest coding unit according to the one-to-one correspondence between the value of D_pred and the depth traversal interval; then encode the current largest coding unit and each coding unit inside it within that depth traversal interval; during encoding, if the frame containing the current largest coding unit is a P frame or a B frame of the HD video, then whenever the mean spatio-temporal human-eye just-noticeable-distortion value of the current largest coding unit, or of a coding unit inside it, is smaller than a preset low threshold T1 or larger than a preset high threshold T2, fast prediction-mode selection is applied to that unit: the skip, merge, inter2N×2N and intra2N×2N prediction modes are traversed, and the mode with the minimum rate-distortion cost is chosen as the optimal prediction mode;
Step 8: take the next largest coding unit to be encoded in the current frame as the current largest coding unit, then return to Step 3 and continue, until all largest coding units in the current frame have been encoded;
Step 9: take the next frame to be processed in the HD video as the current frame, then return to Step 2 and continue, until all frames in the HD video have been processed, which completes the fast coding of the HD video.
2. The HEVC fast coding method according to claim 1, characterized in that the process of determining all prediction largest coding units of the current largest coding unit in Step 3 is:
if the current frame is an I frame of the HD video, then when the current largest coding unit is the 1st largest coding unit of the current frame, it has no prediction largest coding unit; when it is one of the remaining largest coding units of the 1st column other than the 1st largest coding unit, its top adjacent largest coding unit T is determined to be its prediction largest coding unit; when it is one of the remaining largest coding units of the 1st row other than the 1st largest coding unit, its left adjacent largest coding unit L is determined to be its prediction largest coding unit; for every other largest coding unit of the current frame outside the 1st row and the 1st column, the left adjacent largest coding unit L and the top adjacent largest coding unit T are determined to be its prediction largest coding units;
if the current frame is a P frame or a B frame of the HD video, then when the current largest coding unit is the 1st largest coding unit of the current frame, the largest coding unit COL at the same coordinate position in the forward reference frame is determined to be its prediction largest coding unit; when it is one of the remaining largest coding units of the 1st column other than the 1st largest coding unit, T and COL are determined to be its prediction largest coding units; when it is one of the remaining largest coding units of the 1st row other than the 1st largest coding unit, L and COL are determined to be its prediction largest coding units; for every other largest coding unit of the current frame outside the 1st row and the 1st column, L, T and COL are all determined to be its prediction largest coding units.
3. The HEVC fast coding method according to claim 1 or 2, characterized in that the one-to-one correspondence in Step 4 between the value of D_pred and the depth traversal interval of the current largest coding unit is: when D_pred = 0, the depth traversal interval of the current largest coding unit is [0, 0]; when 0 < D_pred ≤ 0.5, it is [0, 1]; when 0.5 < D_pred ≤ 1.5, it is [0, 2]; when 1.5 < D_pred ≤ 2.5, it is [1, 3]; when 2.5 < D_pred ≤ 3, it is [2, 3]; and when the value of D_pred does not exist, it is [0, 3].
4. The HEVC fast coding method according to claim 3, characterized in that the detailed process of Step 6 is:
Step 6-1: compute the average depth difference between the left adjacent largest coding unit L of the current largest coding unit and the largest coding unit L-COL at the same coordinate position as L in the forward reference frame of the current frame, denoted ADD1 = (1/256) Σ_{i=1}^{256} |D_L(i) − D_{L-COL}(i)|; compute the average depth difference between the top adjacent largest coding unit T of the current largest coding unit and the largest coding unit T-COL at the same coordinate position as T in the forward reference frame, denoted ADD2 = (1/256) Σ_{i=1}^{256} |D_T(i) − D_{T-COL}(i)|; then compute ADD = (ADD1 + ADD2)/2; wherein 1 ≤ i ≤ 256, i is an integer, D_L(i), D_{L-COL}(i), D_T(i) and D_{T-COL}(i) denote the depth values of the i-th 4 × 4 basic storage unit in L, L-COL, T and T-COL respectively, each depth value is an integer in [0, 3], and the symbol "| |" denotes absolute value;
Step 6-2: compute the spatial similarity between the current largest coding unit and T, denoted TAS = 0.05 × ADD + 0.25; compute the spatial similarity between the current largest coding unit and L, denoted LAS = 0.05 × ADD + 0.25; and compute the temporal similarity between the current largest coding unit and COL, denoted TS = −0.1 × ADD + 0.5;
Step 6-3: compute D_pred = Σ_{m=1}^{3} ω_m × (1/256) Σ_{i=1}^{256} D_m(i), wherein 1 ≤ m ≤ 3, m is an integer, ω_m denotes the weight of the m-th largest coding unit in Ω, ω_1 = LAS, ω_2 = TAS, ω_3 = TS, and D_m(i) denotes the depth value of the i-th 4 × 4 basic storage unit in the m-th largest coding unit in Ω.
5. the fast encoding method of a kind of HEVC according to claim 4, is characterized in that the extreme saturation interval according to current maximum coding unit during described step is 7. to the detailed process that each coding unit in current maximum coding unit and current maximum coding unit is encoded is:
-1 7., pending coding unit current in the extreme saturation interval of current maximum coding unit is defined as current coded unit, the layer at current coded unit place is defined as current layer;
7.-2, JND is used tsrepresent the proper discernable distortion value of mean time spatial domain human eye of current coded unit, if present frame is the I frame in HD video, then determine JND tsvalue do not exist, then perform step 7.-3; If present frame is P frame in HD video or B frame, then calculate JND tsvalue, then step 7.-3 is performed, wherein, (x, y) represents the coordinate position of the pixel in current coded unit, 0≤x≤K-1,0≤y≤G-1, and be all integer, K represents total number of one-row pixels point in current coded unit, G represents total number of a row pixel in current coded unit, JND (x, y) represents that in current coded unit, coordinate position is the proper discernable distortion value of time-space domain human eye of the pixel of (x, y);
7.-3, JND is worked as tsvalue when not existing, use intra2N × 2N and intraN × N predictive mode to encode respectively to current coded unit, choose rate distortion costs and be worth the optimal prediction modes of minimum predictive mode as current coded unit, then perform step 7.-4; Work as JND tsvalue when existing, judge JND tsvalue whether be less than the Low threshold T of setting 1or be greater than the high threshold T of setting 2if, then judge that current coded unit carries out the selection of fast prediction pattern, skip, merge, inter2N × 2N and intra2N × 2N predictive mode is used to carry out traversal coding respectively to current coded unit, choose rate distortion costs and be worth the optimal prediction modes of minimum predictive mode as current coded unit, then perform step 7.-4; Otherwise, judge that current coded unit does not carry out the selection of fast prediction pattern, skip, merge, inter2N × 2N, inter2N × N, interN × 2N, AMP, intra2N × 2N and intraN × N predictive mode is used entirely to travel through coding respectively to current coded unit, choose rate distortion costs and be worth the optimal prediction modes of minimum predictive mode as current coded unit, then perform step 7.-4; Wherein, the value of the N in inter2N × 2N, inter2N × N, interN × 2N, intra2N × 2N and intraN × N is the half of total number of one-row pixels point or a row pixel in current coded unit;
7.-4, judge whether the depth value of the current coding unit is less than the maximum value of the depth traversal interval of the current largest coding unit: if so, further split the current coding unit into four next-layer coding units of identical size, take the currently pending coding unit among these next-layer coding units as the current coding unit and the layer in which the current coding unit lies as the current layer, and then return to step 7.-2 to continue execution; otherwise, determine that the encoding of the current coding unit is finished, and then perform step 7.-5;
7.-5, judge whether all coding units within the depth traversal interval of the current largest coding unit have been processed: if so, determine that the encoding of the current largest coding unit is finished, and then perform step 8.; otherwise, judge whether the four coding units in the current layer have all been processed: if the four coding units in the current layer have all been processed, take the next pending coding unit in the layer above the current layer as the current coding unit and the layer in which the current coding unit lies as the current layer, and then return to step 7.-2 to continue execution; if the four coding units in the current layer have not all been processed, take the next pending coding unit in the current layer as the current coding unit and the layer in which the current coding unit lies as the current layer, and then return to step 7.-2 to continue execution.
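Taken together, steps 7.-4 and 7.-5 amount to a depth-first quadtree walk over the largest coding unit, bounded by the depth traversal interval determined earlier in the method. A minimal sketch, with `encode_cu` standing in for steps 7.-2/7.-3 and all names hypothetical:

```python
def encode_lcu(encode_cu, d_min, d_max):
    """Depth-first quadtree walk over one largest coding unit.

    encode_cu    -- callback(depth, index): encodes one coding unit
    d_min, d_max -- bounds of the depth traversal interval
    """
    def visit(depth, index):
        encode_cu(depth, index)      # steps 7.-2 and 7.-3 on the current CU
        if depth < d_max:            # step 7.-4: split into four sub-CUs
            for sub in range(4):     # step 7.-5: the four CUs of the next layer
                visit(depth + 1, 4 * index + sub)
    visit(d_min, 0)                  # start at the interval's minimum depth
```

With an interval of [1, 2] this visits one CU at depth 1 and its four children at depth 2, i.e. five CUs in all; the recursion's return to a parent plays the role of "the next pending coding unit in the layer above" in step 7.-5.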
6. The HEVC fast coding method according to claim 5, characterized in that in step 7.-3, T_1 = 3.5 and T_2 = 10 are taken.
CN201510225448.8A 2015-05-06 2015-05-06 HEVC fast coding method Active CN104853191B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510225448.8A CN104853191B (en) 2015-05-06 2015-05-06 A kind of HEVC fast encoding method


Publications (2)

Publication Number Publication Date
CN104853191A true CN104853191A (en) 2015-08-19
CN104853191B CN104853191B (en) 2017-09-05

Family

ID=53852507

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510225448.8A Active CN104853191B (en) 2015-05-06 2015-05-06 A kind of HEVC fast encoding method

Country Status (1)

Country Link
CN (1) CN104853191B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120263231A1 (en) * 2011-04-18 2012-10-18 Minhua Zhou Temporal Motion Data Candidate Derivation in Video Coding
CN103533355A (en) * 2013-10-10 2014-01-22 宁波大学 Quick coding method for HEVC (high efficiency video coding)
CN103873861A (en) * 2014-02-24 2014-06-18 西南交通大学 Coding mode selection method for HEVC (high efficiency video coding)
US20140169451A1 (en) * 2012-12-13 2014-06-19 Mitsubishi Electric Research Laboratories, Inc. Perceptually Coding Images and Videos

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FENG SHAO ET AL: "Asymmetric Coding of Multi-View Video Plus Depth Based 3-D Video for View Rendering", 《IEEE TRANSACTIONS ON MULTIMEDIA》 *
ZHENG Mingkui et al.: "Transform-domain JND model based on texture decomposition and an image coding method", Journal on Communications *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106254887B (en) * 2016-08-31 2019-04-09 天津大学 Fast depth video coding method
CN106254887A (en) * 2016-08-31 2016-12-21 天津大学 Fast depth video coding method
US10841583B2 (en) 2017-04-21 2020-11-17 Tencent Technology (Shenzhen) Company Limited Coding unit depth determining method and apparatus
CN108737841A (en) * 2017-04-21 2018-11-02 腾讯科技(深圳)有限公司 Coding unit depth determination method and device
CN108012150A (en) * 2017-12-14 2018-05-08 湖南兴天电子科技有限公司 Video inter-frame coding method and device
CN108012150B (en) * 2017-12-14 2020-05-05 湖南兴天电子科技有限公司 Video inter-frame coding method and device
US11070817B2 (en) 2018-02-01 2021-07-20 Tencent Technology (Shenzhen) Company Limited Video encoding method, computer device, and storage medium for determining skip status
WO2019148906A1 (en) * 2018-02-01 2019-08-08 腾讯科技(深圳)有限公司 Video coding method, computer device, and storage medium
CN109168000B (en) * 2018-10-09 2021-02-12 北京佳讯飞鸿电气股份有限公司 HEVC fast intra-prediction algorithm based on RC prediction
CN109168000A (en) * 2018-10-09 2019-01-08 北京佳讯飞鸿电气股份有限公司 HEVC fast intra-prediction algorithm based on RC prediction
CN110446040A (en) * 2019-07-30 2019-11-12 暨南大学 Inter-frame coding method and system suitable for the HEVC standard
CN113596456A (en) * 2019-09-23 2021-11-02 杭州海康威视数字技术股份有限公司 Encoding and decoding method, device and equipment
CN112866693A (en) * 2021-03-25 2021-05-28 北京百度网讯科技有限公司 Method and device for dividing coding unit CU, electronic equipment and storage medium
CN112866693B (en) * 2021-03-25 2023-03-24 北京百度网讯科技有限公司 Method and device for dividing coding unit CU, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN104853191B (en) 2017-09-05

Similar Documents

Publication Publication Date Title
CN104853191A (en) HEVC fast coding method
CN102186070B (en) Method for realizing fast video coding by adopting hierarchical-structure prediction
CN103581647B (en) Depth map sequence fractal coding method based on color video motion vectors
CN103248893B (en) Fast intra-frame transcoding method from the H.264/AVC standard to the HEVC standard and transcoder thereof
CN103873861B (en) Coding mode selection method for HEVC (high efficiency video coding)
CN103004197B (en) Apparatus and method for image coding
CN103546749B (en) Method for optimizing HEVC residual coding by using residual-coefficient distribution features and the Bayes theorem
CN105959611B (en) Adaptive fast inter-frame transcoding method and device from H.264 to HEVC
CN105141954B (en) Fast mode selection method for HEVC inter-frame coding
CN103188496B (en) Fast motion estimation video coding method based on motion vector distribution prediction
CN103634606B (en) Video encoding method and apparatus
CN103533359B (en) H.264 bit-rate control method
CN104243997B (en) Method for quality-scalable HEVC (high efficiency video coding)
CN105049850A (en) HEVC (High Efficiency Video Coding) rate control method based on region of interest
CN110662078B (en) Fast inter-frame coding algorithm for 4K/8K ultra-high-definition coding, suitable for AVS2 and HEVC
CN103533355B (en) HEVC fast coding method
CN103997645B (en) Fast HEVC intra-frame coding unit and mode decision method
CN103888762 (en) Video coding framework based on the HEVC standard
CN103546758A (en) Fast inter-frame mode selection fractal coding method for depth map sequences
CN105681797A (en) DVC-HEVC (Distributed Video Coding-High Efficiency Video Coding) video transcoding method based on prediction residuals
JP2012505618A (en) Encoding and decoding with the exclusion of one or more predetermined predictors
CN104394409A (en) Fast HEVC (High Efficiency Video Coding) prediction mode selection method based on spatial-domain correlation
CN105681808A (en) Fast decision method for SCC inter-frame coding unit modes
CN103491380A (en) Highly flexible variable-size-block intra-frame prediction coding
CN103596003B (en) Fast inter-frame prediction mode selection method for high-performance video coding

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20190812

Address after: Room 1,020, Nanxun Science and Technology Pioneering Park, No. 666 Chaoyang Road, Nanxun District, Huzhou City, Zhejiang Province, 313000

Patentee after: Huzhou You Yan Intellectual Property Service Co.,Ltd.

Address before: 315211 Zhejiang Province, Ningbo Jiangbei District Fenghua Road No. 818

Patentee before: Ningbo University

TR01 Transfer of patent right

Effective date of registration: 20210512

Address after: 518057 Room 101, building 1, building 10, Maqueling Industrial Zone, Maling community, Yuehai street, Nanshan District, Shenzhen City, Guangdong Province

Patentee after: Shenzhen Weier Vision Technology Co.,Ltd.

Address before: 313000 room 1020, science and Technology Pioneer Park, 666 Chaoyang Road, Nanxun Town, Nanxun District, Huzhou, Zhejiang.

Patentee before: Huzhou You Yan Intellectual Property Service Co.,Ltd.

TR01 Transfer of patent right
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A fast coding method of HEVC

Effective date of registration: 20220125

Granted publication date: 20170905

Pledgee: Bank of Jiangsu Limited by Share Ltd. Shenzhen branch

Pledgor: Shenzhen Weier Vision Technology Co.,Ltd.

Registration number: Y2022440020017

PE01 Entry into force of the registration of the contract for pledge of patent right
PC01 Cancellation of the registration of the contract for pledge of patent right

Date of cancellation: 20230111

Granted publication date: 20170905

Pledgee: Bank of Jiangsu Limited by Share Ltd. Shenzhen branch

Pledgor: Shenzhen Weier Vision Technology Co.,Ltd.

Registration number: Y2022440020017

PC01 Cancellation of the registration of the contract for pledge of patent right
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A Fast Coding Method for HEVC

Effective date of registration: 20230116

Granted publication date: 20170905

Pledgee: Bank of Jiangsu Limited by Share Ltd. Shenzhen branch

Pledgor: Shenzhen Weier Vision Technology Co.,Ltd.

Registration number: Y2023440020009

PE01 Entry into force of the registration of the contract for pledge of patent right