CN103813178B - Rapid high efficiency video coding (HEVC) method based on depth and space-time relevancy of coding units - Google Patents
- Publication number
- CN103813178B CN103813178B CN201410041255.2A CN201410041255A CN103813178B CN 103813178 B CN103813178 B CN 103813178B CN 201410041255 A CN201410041255 A CN 201410041255A CN 103813178 B CN103813178 B CN 103813178B
- Authority
- CN
- China
- Prior art keywords
- depth
- search
- coding
- unit
- hevc
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The invention discloses a fast high efficiency video coding (HEVC) method based on the depth and spatio-temporal correlation of coding units. The method comprises the following steps: (1) inputting an original coding sequence; (2) extracting the depths of already-coded coding units; (3) if the depth of the current coding unit is smaller than a depth threshold, using the depth relationship of two adjacent coding units in the previous frame to narrow the depth search range of the current coding unit; (4) searching the depths within the depth search range obtained in step (3) in ascending order, and ending the depth search after the maximum depth is reached; (5) determining the optimal coding tree unit partition according to the depths searched in step (4), and then performing HEVC coding. The method significantly accelerates HEVC coding while preserving good rate-distortion performance, and is highly practical.
Description
Technical field
The present invention relates to the field of video coding, and in particular to a fast HEVC coding method based on the spatio-temporal correlation of coding unit depths.
Background technology
In recent years, with the development of high-definition and ultra-high-definition (3840 × 2160 or 7680 × 4320) video, the compression efficiency of the H.264/AVC video compression standard can no longer meet the transmission and storage demands of these videos.
To further improve video compression efficiency, MPEG and VCEG jointly established the Joint Collaborative Team on Video Coding (JCT-VC) in 2010 to develop the next-generation video compression standard. The new standard, named HEVC, was formally issued in 2013.
As the latest video compression standard, HEVC can provide a bitstream of the same quality at half the bit rate of H.264/AVC High Profile (HP) coding, paving the way for the network transmission and storage of high-definition and ultra-high-definition video data.
Compared with previous video compression standards, HEVC inherits the basic hybrid-coding framework while providing more efficient video compression tools, including a recursive quadtree block partition structure for coding units (coding unit, CU), more intra prediction modes, efficient reference frame management, and a new in-loop filter (sample adaptive offset, SAO). While improving coding efficiency, these new coding tools also substantially increase encoder complexity. Depending on the configuration, the complexity of an HEVC encoder is 2-3 times that of an H.264/AVC HP encoder, and the full depth search adopted to obtain the optimal CU quadtree partition consumes a large amount of computing resources, which severely hinders the application of HEVC encoders.
Several researchers have addressed the high complexity of CU quadtree partitioning in HEVC encoders and proposed fast algorithms. Kim et al. collect statistics of rate-distortion cost characteristics under different quantization parameters and set, for each CU layer, a threshold for deciding whether to split downward; when the rate-distortion cost of a CU is below the threshold, the CU is not split. Wang et al. use the mean residual of the current CU to judge whether the CU continues to split; when the mean residual is below a set threshold, the CU is not split. These algorithms are all threshold-based and not very stable. Shen et al. propose a method that uses Bayesian decision rules to judge whether a CU continues to split; its efficiency is high for one class of videos but low for another. Kim et al. propose a method that uses the rate-distortion costs of neighboring skip-mode CUs to judge in advance whether the current CU is in skip mode. Shen et al. exploit the spatio-temporal depth correlation and the inter-layer information correlation of CUs to skip block partition modes that are rarely used when coding the current CU.
Gweon et al. decide whether to terminate CU coding early by checking the coded_block_flag. Choi et al. decide whether to continue coding sub-CUs by judging whether the current CU is in skip mode. Yang et al. propose a skip-mode detection algorithm similar to the one adopted by the H.264/AVC reference encoder. These three algorithms effectively reduce encoder complexity and have all been adopted by the HEVC standard encoder.
Lee et al. exploit the block-partition correlation between the co-located CU in the previous frame and the current CU to skip certain partitions of the current CU, but this method uses only the temporal correlation between the current CU and the co-located CU in the previous frame. If the temporal and spatial correlation of coding units were fully exploited, encoder complexity could be reduced further.
Summary of the invention
The invention provides a fast HEVC coding method based on the spatio-temporal correlation of coding unit depths. It exploits the monotonicity of depths between coding units to narrow the depth search range of a coding unit, reducing encoder complexity while preserving rate-distortion performance.
A fast HEVC coding method based on the spatio-temporal correlation of coding unit depths comprises the following steps:
(1) input the original coding sequence;
(2) extract the depths of the coding units that have already been coded;
(3) if the depth of the current coding unit is less than a depth threshold, use the depth relationship of two adjacent coding units in the previous frame to narrow the depth search range of the current coding unit, according to the following rules:
Let the depths of the two adjacent coding units in the previous frame be d_l_co and d_co, and let the depths of the two corresponding adjacent coding units in the current frame be d_l_cr and d_cr; then:
(a) if d_l_co < d_co, determine the range of d_cr using the depth monotonicity property;
(b) if d_l_co = d_co, search for the depth d_cr within the range of d_cr determined using the depth monotonicity property;
(c) if d_l_co > d_co, determine the range of d_cr using the depth monotonicity property;
(4) search the depths within the depth search range obtained in step (3) in ascending order, and end the depth search after the maximum depth is reached;
The depth search range is narrowed according to the rules in step (3); in step (4), for each depth in the search range, the intra-frame and inter-frame partition modes of the current coding unit are evaluated and the optimal partition mode is selected.
(5) depth obtaining according to step (4) search, determines optimal encoding code tree dividing elements, then carries out hevc coding.
The coding tree unit (coding tree unit, CTU for short) of HEVC is partitioned from larger blocks than traditional H.264, up to 64 × 64. The method of narrowing the depth search range described herein applies to 64 × 64 and 32 × 32 blocks, corresponding respectively to 64 × 64 coding units (also called coding tree units) and 32 × 32 coding units.
That is, the coding unit in step (2) is a 64 × 64 coding unit or a 32 × 32 coding unit; the depth range of a 64 × 64 coding unit is 0, 1, 2 or 3, and the depth range of a 32 × 32 coding unit is 1, 2 or 3.
For a 64 × 64 coding unit: if it is not split, its depth is 0; if it is split into 32 × 32 coding units, its depth is 1; if at least one 32 × 32 coding unit is further split into 16 × 16 coding units, the depth is 2; if at least one 16 × 16 coding unit is further split into 8 × 8 coding units, the depth is 3.
For a 32 × 32 coding unit: if it is not split, its depth is 1; if it is split into 16 × 16 coding units, the depth is 2; if at least one 16 × 16 coding unit is further split into 8 × 8 coding units, the depth is 3.
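The size-to-depth mappings above follow the CTU quadtree: each split halves the block side, so an N × N coding unit inside a 64 × 64 CTU sits at depth log2(64/N). A minimal illustrative sketch (the function name `cu_depth` is ours, not the patent's):

```python
def cu_depth(cu_size):
    # Quadtree depth of a CU inside a 64x64 CTU:
    # 64 -> 0, 32 -> 1, 16 -> 2, 8 -> 3 (each split halves the side).
    assert cu_size in (64, 32, 16, 8)
    return (64 // cu_size).bit_length() - 1
```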
Because the present invention applies to 64 × 64 and 32 × 32 coding units, the depth threshold is 2: the method of the present invention applies when the depth is 0 or 1, while the existing standard HEVC coding method is used when the depth is 2 or 3.
If d_l_co = d_co, the search may follow the existing full depth search strategy, or the search range may be narrowed according to fixed rules; the rules for narrowing the depth search range of 64 × 64 and 32 × 32 coding units are described separately below.
If the coding unit is a 64 × 64 coding unit, the depth relationship of the two adjacent coding units in the previous frame is used to narrow the depth search range of the current coding unit as follows:
Let the depths of the two adjacent coding units in the previous frame be d_l_co and d_co, and the depths of the two corresponding adjacent coding units in the current frame be d_l_cr and d_cr; then:
if d_l_co < d_co and d_l_cr = 1, search the depth d_cr over {1, 2, 3} (skipping depth 0);
if d_l_co < d_co and d_l_cr = 2, search d_cr over {1, 2, 3} (skipping depth 0);
if d_l_co < d_co and d_l_cr = 3, search d_cr over {1, 2, 3} (skipping depth 0);
if d_l_co = d_co = 0 and d_l_cr is 0, 1, 2 or 3, search d_cr over {0, 1, 2};
if d_l_co = d_co = 1 and d_l_cr is 0, 1, 2 or 3, search d_cr over {0, 1, 2};
if d_l_co = d_co = 3 and d_l_cr is 0, 1, 2 or 3, search d_cr over {1, 2, 3};
if d_l_co > d_co and d_l_cr = 0, search d_cr over {0};
if d_l_co > d_co and d_l_cr = 1, search d_cr over {0, 1};
if d_l_co > d_co and d_l_cr = 2, search d_cr over {0, 1, 2}.
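The nine rules above amount to a lookup from (d_l_co, d_co, d_l_cr) to a candidate depth set. A hedged Python sketch of that table (the function name and the fallback behaviour are our reading: cases the patent does not enumerate, e.g. d_l_co = d_co = 2, fall back to the full range {0, 1, 2, 3}):

```python
def narrow_range_64(d_l_co, d_co, d_l_cr):
    """Candidate depths for a 64x64 CU, per the rule table above.
    Unlisted cases fall back to the full depth range (our assumption)."""
    if d_l_co < d_co:
        if d_l_cr in (1, 2, 3):
            return [1, 2, 3]                 # skip depth 0
    elif d_l_co == d_co:
        if d_l_co in (0, 1):
            return [0, 1, 2]                 # skip depth 3
        if d_l_co == 3:
            return [1, 2, 3]                 # skip depth 0
    else:  # d_l_co > d_co
        if d_l_cr in (0, 1, 2):
            return list(range(d_l_cr + 1))   # search depths 0..d_l_cr
    return [0, 1, 2, 3]                      # full-range fallback
```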
If the coding unit is a 32 × 32 coding unit and the depth of the current coding unit is 1, the depth relationship of the two adjacent coding units in the previous frame is used to narrow the depth search range of the current coding unit as follows:
Let the depths of the two adjacent coding units in the previous frame be d_l_co and d_co, and the depths of the two corresponding adjacent coding units in the current frame be d_l_cr and d_cr; then:
if d_l_co < d_co and d_l_cr = 2, search the depth d_cr over {2, 3};
if d_l_co < d_co and d_l_cr = 3, search d_cr over {2, 3};
if d_l_co = d_co = 1 and d_l_cr is 1, 2 or 3, search d_cr over {1, 2};
if d_l_co = d_co = 3 and d_l_cr = 1, search d_cr over {1, 2, 3};
if d_l_co = d_co = 3 and d_l_cr is 2 or 3, search d_cr over {2, 3};
if d_l_co > d_co and d_l_cr = 1, search d_cr over {1};
if d_l_co > d_co and d_l_cr = 2, search d_cr over {1, 2}.
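Likewise, the seven 32 × 32 rules can be sketched as a small lookup (again, cases the patent does not enumerate, such as d_l_co = d_co = 2, fall back to the full range {1, 2, 3}; the function name is illustrative):

```python
def narrow_range_32(d_l_co, d_co, d_l_cr):
    """Candidate depths for a 32x32 CU, per the rule table above.
    Unlisted cases fall back to the full depth range (our assumption)."""
    if d_l_co < d_co:
        if d_l_cr in (2, 3):
            return [2, 3]
    elif d_l_co == d_co:
        if d_l_co == 1:
            return [1, 2]
        if d_l_co == 3:
            return [1, 2, 3] if d_l_cr == 1 else [2, 3]
    else:  # d_l_co > d_co
        if d_l_cr in (1, 2):
            return list(range(1, d_l_cr + 1))  # search depths 1..d_l_cr
    return [1, 2, 3]                           # full-range fallback
```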
Any part not otherwise specified in the present invention is coded using the existing HEVC coding method.
The fast HEVC coding method based on the spatio-temporal correlation of coding unit depths can significantly accelerate HEVC video coding while preserving good rate-distortion performance, and has strong practicality.
Brief description of the drawings
Fig. 1 is a flow chart of the fast HEVC coding method based on the spatio-temporal correlation of coding unit depths of the present invention;
Fig. 2 is a schematic diagram of the relation between two adjacent coding units in the reference frame (the previous frame) and the two corresponding adjacent coding units in the current frame;
Fig. 3 is a schematic distribution of the relation between d_l_cr and d_cr when d_l_co < d_co;
Fig. 4 is a schematic distribution of the relation between d_l_cr and d_cr when d_l_co > d_co;
Fig. 5 compares the rate-distortion performance of the method of the invention, the Lee algorithm and the HM8.0 algorithm under different coding configurations, where: (a) rate-distortion comparison of the kimono sequence under the LD configuration; (b) rate-distortion comparison of the kimono sequence under the RA configuration; (c) rate-distortion comparison of the partyscene sequence under the LD configuration; (d) rate-distortion comparison of the partyscene sequence under the RA configuration.
Specific embodiments
The fast HEVC coding method based on the spatio-temporal correlation of coding unit depths of the present invention is described in detail below with reference to the accompanying drawings.
As shown in Fig. 1, a fast HEVC coding method based on the spatio-temporal correlation of coding unit depths comprises the following steps:
(1) input the original coding sequence;
(2) extract the depths of the coding units that have already been coded: the original coding sequence has been divided into blocks of different sizes, and the depth of each block is recorded (the full depth search in Fig. 1);
if the coding unit is a 64 × 64 coding unit, the depth range is 0, 1, 2 or 3;
if the coding unit is a 32 × 32 coding unit, the depth range is 1, 2 or 3.
(3) As shown in Fig. 2, cr denotes the current coding unit (a 64 × 64 or a 32 × 32 coding unit), and l_cr is the adjacent coding unit on its left; co is the coding unit co-located with cr in the reference frame (the previous frame), and l_co is the coding unit co-located with l_cr in the reference frame.
Different rules are used to narrow the depth search range of 64 × 64 and 32 × 32 coding units, as described below:
3-1. If the size of the coding unit is 64 × 64, the depth relationship of the two adjacent coding units in the previous frame is used to narrow the depth search range of the current coding unit as follows:
if d_l_co < d_co and d_l_cr = 1, search the depth d_cr over {1, 2, 3};
if d_l_co < d_co and d_l_cr = 2, search d_cr over {1, 2, 3};
if d_l_co < d_co and d_l_cr = 3, search d_cr over {1, 2, 3};
if d_l_co = d_co = 0 and d_l_cr is 0, 1, 2 or 3, search d_cr over {0, 1, 2};
if d_l_co = d_co = 1 and d_l_cr is 0, 1, 2 or 3, search d_cr over {0, 1, 2};
if d_l_co = d_co = 3 and d_l_cr is 0, 1, 2 or 3, search d_cr over {1, 2, 3};
if d_l_co > d_co and d_l_cr = 0, search d_cr over {0};
if d_l_co > d_co and d_l_cr = 1, search d_cr over {0, 1};
if d_l_co > d_co and d_l_cr = 2, search d_cr over {0, 1, 2}.
3-2. If the size of the coding unit is 32 × 32, the depth relationship of the two adjacent coding units in the previous frame is used to narrow the depth search range of the current coding unit as follows:
if d_l_co < d_co and d_l_cr = 2, search the depth d_cr over {2, 3};
if d_l_co < d_co and d_l_cr = 3, search d_cr over {2, 3};
if d_l_co = d_co = 1 and d_l_cr is 1, 2 or 3, search d_cr over {1, 2};
if d_l_co = d_co = 3 and d_l_cr = 1, search d_cr over {1, 2, 3};
if d_l_co = d_co = 3 and d_l_cr is 2 or 3, search d_cr over {2, 3};
if d_l_co > d_co and d_l_cr = 1, search d_cr over {1};
if d_l_co > d_co and d_l_cr = 2, search d_cr over {1, 2}.
(4) Search the depths within the depth search range obtained in step (3) in ascending order, and end the depth search after the maximum depth is reached; for each depth (i.e. each candidate coding unit size) in the search range, evaluate the intra-frame and inter-frame partition modes of the current coding unit and select the optimal partition mode.
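Step (4) amounts to evaluating each candidate depth in ascending order and keeping the one with the best rate-distortion cost. A minimal sketch, where `rd_cost` stands in for the intra/inter partition-mode evaluation of the current coding unit at a given depth (a hypothetical callable, not defined in the patent):

```python
def search_best_depth(candidates, rd_cost):
    # Try candidate depths in ascending order, up to the maximum depth,
    # and keep the depth whose partition-mode evaluation is cheapest.
    best_depth, best_cost = None, float("inf")
    for depth in sorted(candidates):
        cost = rd_cost(depth)  # intra + inter mode evaluation at this depth
        if cost < best_cost:
            best_depth, best_cost = depth, cost
    return best_depth
```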
(5) depth obtaining according to step (4) search, determines optimal encoding code tree dividing elements, then carries out hevc coding.
In the method of the invention, the depths that can be skipped in the two cases d_l_co < d_co and d_l_co > d_co are listed in Table 1 below; for d_l_co = d_co, the skippable depths are listed in Table 2 below.
Table 1
Table 2
The verification of depth monotonicity for d_l_co < d_co and d_l_co > d_co is shown in Fig. 3 and Fig. 4 respectively, based on statistics collected after encoding the sequences city (704 × 576), harbor (704 × 576), bigships (1280 × 720), vidyo3 (1280 × 720), pair (1920 × 1080) and sunse (1920 × 1080) with the existing standard HEVC encoder. In Fig. 3, when d_l_co < d_co, the probability that d_l_cr ≤ d_cr is about 90%; in Fig. 4, when d_l_co > d_co, the probability that d_l_cr ≥ d_cr is about 90%. This depth monotonicity relation can therefore be used to narrow the depth search range of the coding units in the current frame in advance.
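The ~90% figure is a conditional frequency over co-located CU pairs. A sketch of how such a statistic could be computed from recorded depths (the function and the sample data are illustrative, not the patent's measurements):

```python
def monotonicity_rate(pairs):
    # Fraction of (d_l_co, d_co, d_l_cr, d_cr) tuples satisfying
    # d_l_cr <= d_cr, among those with d_l_co < d_co in the previous frame.
    cases = [p for p in pairs if p[0] < p[1]]
    if not cases:
        return 0.0
    return sum(1 for p in cases if p[2] <= p[3]) / len(cases)

# illustrative depth tuples, not measured data
sample = [(0, 1, 0, 1), (1, 2, 1, 1), (0, 2, 2, 1), (1, 3, 2, 3)]
```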
Under the LD and RA coding configurations, the comparison of the method provided by the invention with the Lee algorithm and the HM8.0 algorithm is shown in Table 3 and Fig. 5, where Δt denotes the coding time saving.
Table 3
As Table 3 and Fig. 5 show, the method of the invention accelerates coding relative to the Lee algorithm and the HM8.0 algorithm while maintaining comparable coding quality.
Claims (3)
1. A fast HEVC coding method based on the spatio-temporal correlation of coding unit depths, characterized by comprising the following steps:
(1) input the original coding sequence;
(2) extract the depths of the coding units that have already been coded;
(3) if the depth of the current coding unit is less than a depth threshold, narrow the depth search range of the current coding unit using the depth relationship of two adjacent coding units in the previous frame, according to the following rules:
let the depths of the two adjacent coding units in the previous frame be d_l_co and d_co, and the depths of the two corresponding adjacent coding units in the current frame be d_l_cr and d_cr; then:
(a) if d_l_co < d_co, determine the range of d_cr using the depth monotonicity property;
(b) if d_l_co = d_co, search for the depth d_cr within the range of d_cr determined using the depth monotonicity property;
(c) if d_l_co > d_co, determine the range of d_cr using the depth monotonicity property;
(4) search the depths within the depth search range obtained in step (3) in ascending order, ending the depth search after the maximum depth;
(5) determine the optimal coding tree unit partition according to the depths searched in step (4), then perform HEVC coding;
wherein the coding unit size in step (2) is 64 × 64 or 32 × 32, the depth range of a 64 × 64 coding unit is 0, 1, 2 or 3, and the depth range of a 32 × 32 coding unit is 1, 2 or 3;
if the coding unit size is 64 × 64, the depth relationship of the two adjacent coding units in the previous frame is used to narrow the depth search range of the current coding unit as follows:
let the depths of the two adjacent coding units in the previous frame be d_l_co and d_co, and the depths of the two corresponding adjacent coding units in the current frame be d_l_cr and d_cr; then:
if d_l_co < d_co and d_l_cr = 1, search the depth d_cr over {1, 2, 3};
if d_l_co < d_co and d_l_cr = 2, search d_cr over {1, 2, 3};
if d_l_co < d_co and d_l_cr = 3, search d_cr over {1, 2, 3};
if d_l_co = d_co = 0 and d_l_cr is 0, 1, 2 or 3, search d_cr over {0, 1, 2};
if d_l_co = d_co = 1 and d_l_cr is 0, 1, 2 or 3, search d_cr over {0, 1, 2};
if d_l_co = d_co = 3 and d_l_cr is 0, 1, 2 or 3, search d_cr over {1, 2, 3};
if d_l_co > d_co and d_l_cr = 0, search d_cr over {0};
if d_l_co > d_co and d_l_cr = 1, search d_cr over {0, 1};
if d_l_co > d_co and d_l_cr = 2, search d_cr over {0, 1, 2};
if the coding unit size is 32 × 32, the depth relationship of the two adjacent coding units in the previous frame is used to narrow the depth search range of the current coding unit as follows:
let the depths of the two adjacent coding units in the previous frame be d_l_co and d_co, and the depths of the two corresponding adjacent coding units in the current frame be d_l_cr and d_cr; then:
if d_l_co < d_co and d_l_cr = 2, search the depth d_cr over {2, 3};
if d_l_co < d_co and d_l_cr = 3, search d_cr over {2, 3};
if d_l_co = d_co = 1 and d_l_cr is 1, 2 or 3, search d_cr over {1, 2};
if d_l_co = d_co = 3 and d_l_cr = 1, search d_cr over {1, 2, 3};
if d_l_co = d_co = 3 and d_l_cr is 2 or 3, search d_cr over {2, 3};
if d_l_co > d_co and d_l_cr = 1, search d_cr over {1};
if d_l_co > d_co and d_l_cr = 2, search d_cr over {1, 2}.
2. The fast HEVC coding method based on the spatio-temporal correlation of coding unit depths according to claim 1, characterized in that the depth threshold is 2.
3. The fast HEVC coding method based on the spatio-temporal correlation of coding unit depths according to claim 1, characterized in that in step (4), for each depth in the depth search range, the intra-frame and inter-frame partition modes of the current coding unit are evaluated and the optimal partition mode is selected.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410041255.2A CN103813178B (en) | 2014-01-28 | 2014-01-28 | Rapid high efficiency video coding (HEVC) method based on depth and space-time relevancy of coding units |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410041255.2A CN103813178B (en) | 2014-01-28 | 2014-01-28 | Rapid high efficiency video coding (HEVC) method based on depth and space-time relevancy of coding units |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103813178A CN103813178A (en) | 2014-05-21 |
CN103813178B true CN103813178B (en) | 2017-01-25 |
Family
ID=50709306
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410041255.2A Active CN103813178B (en) | 2014-01-28 | 2014-01-28 | Rapid high efficiency video coding (HEVC) method based on depth and space-time relevancy of coding units |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103813178B (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104023241B (en) * | 2014-05-29 | 2017-08-04 | 华为技术有限公司 | The method for video coding and video coding apparatus of intraframe predictive coding |
CN104602017B (en) * | 2014-06-10 | 2017-12-26 | 腾讯科技(北京)有限公司 | Video encoder, method and apparatus and its inter-frame mode selecting method and device |
CN105812797B (en) * | 2014-12-31 | 2019-03-26 | 浙江大华技术股份有限公司 | A kind of coding unit selection method and device |
CN105681808B (en) * | 2016-03-16 | 2017-10-31 | 同济大学 | A kind of high-speed decision method of SCC interframe encodes unit mode |
CN106454342B (en) * | 2016-09-07 | 2019-06-25 | 中山大学 | A kind of the inter-frame mode fast selecting method and system of video compression coding |
CN107071497B (en) * | 2017-05-21 | 2020-01-17 | 北京工业大学 | Low-complexity video coding method based on space-time correlation |
CN107295336B (en) * | 2017-06-21 | 2019-10-29 | 鄂尔多斯应用技术学院 | Adaptive fast coding dividing elements method and device based on image correlation |
CN107371022B (en) * | 2017-08-26 | 2020-02-14 | 同济大学 | Inter-frame coding unit rapid dividing method applied to HEVC medical image lossless coding |
CN111669593B (en) * | 2020-07-27 | 2022-01-28 | 北京奇艺世纪科技有限公司 | Video encoding method, video encoding device, electronic equipment and storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103491334A (en) * | 2013-09-11 | 2014-01-01 | 浙江大学 | Video transcode method from H264 to HEVC based on region feature analysis |
CN103533355A (en) * | 2013-10-10 | 2014-01-22 | 宁波大学 | Quick coding method for HEVC (high efficiency video coding) |
- 2014-01-28: priority to application CN201410041255.2A; patent CN103813178B granted, status active
Also Published As
Publication number | Publication date |
---|---|
CN103813178A (en) | 2014-05-21 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |