CN106254887A - A fast depth video coding method - Google Patents

A fast depth video coding method

Info

Publication number
CN106254887A
Authority
CN
China
Prior art keywords
current
depth
coding
similar
gray value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610780882.7A
Other languages
Chinese (zh)
Other versions
CN106254887B (en)
Inventor
雷建军
段金辉
侯春萍
李东阳
贺小旭
孙振燕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201610780882.7A priority Critical patent/CN106254887B/en
Publication of CN106254887A publication Critical patent/CN106254887A/en
Application granted granted Critical
Publication of CN106254887B publication Critical patent/CN106254887B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/597Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention belongs to the field of 3D video coding. Its purpose is to encode 3D-HEVC depth video quickly, effectively reducing the computational complexity of 3D-HEVC and saving depth video coding time without a significant sacrifice in coding performance. By fully exploiting the inter-frame gray-value similarity of depth video, an effective fast depth video coding algorithm is realized. The technical solution adopted by the invention is a fast depth video coding method with the following steps: 1) gray-value similarity judgment; 2) CU maximum and minimum coding depth layer calculation; 3) early PU mode decision based on gray similarity. The invention is mainly applicable to 3D video coding.

Description

A fast depth video coding method
Technical field
The invention belongs to the field of 3D video coding and relates to a fast depth video coding and computation method that exploits gray-value similarity.
Background technology
Multiview video plus depth (MVD) is an effective 3D video data format. A depth map represents the geometric information of a scene; it is a gray-scale map containing only the luminance component, in which each pixel represents the relative distance between the current object and the camera plane in 3D space. Depth video consists of flat regions separated by sharp edges, and therefore has characteristics different from color video. Using depth-image-based rendering techniques, multiple virtual views can be synthesized from MVD video. The 3D extension (3D-HEVC) of the High Efficiency Video Coding standard (HEVC) introduces new prediction techniques and coding tools for coding MVD video, such as disparity-compensated prediction, motion vector inheritance, the Depth Modeling Mode (DMM), and View Synthesis Optimization (VSO). DMM uses two different partition patterns to divide a depth block into two non-rectangular regions, i.e. wedgelet partitioning; each partition is represented by a constant partition value. DMM can represent depth map boundaries well and obtain more accurate prediction, and is therefore integrated into intra prediction as an additional candidate. The inter-component coding tool Motion Parameter Inheritance (MPI) uses the motion information of the already coded color video for depth map coding.
For depth video, 3D-HEVC achieves good coding efficiency by introducing inter-view and inter-component coding tools. However, due to the wedgelet partition decision of DMM, the complicated rate-distortion optimization, and the VSO process, the computational complexity of 3D-HEVC multiplies when coding depth video. To reduce the computational complexity of 3D-HEVC and save coding time, researchers at home and abroad have proposed several effective methods. Tohidypour et al. proposed a scheme that adaptively makes early decisions in the inter-frame prediction mode search; it mainly exploits inter-view motion consistency and the correlation of prediction modes and rate-distortion (RD) costs to reduce 3D-HEVC complexity. Shen et al. proposed a fast and effective mode decision algorithm that saves coding time by exploiting the correlation among spatio-temporally neighboring coding units (CU), between views, and between depth layers. Gu et al. proposed a fast DMM mode selection algorithm for depth intra coding, which uses probable mode selection results to terminate the full DMM RD cost calculation process early, thereby saving coding time.
Summary of the invention
To overcome the deficiencies of the prior art, the invention aims to encode 3D-HEVC depth video quickly according to the gray-value similarity of the depth map, effectively reducing the computational complexity of 3D-HEVC and saving depth video coding time without a significant sacrifice in coding performance. By fully exploiting the inter-frame gray-value similarity of depth video, an effective fast depth video coding algorithm is realized. The technical solution adopted by the invention is a fast depth video coding method with the following steps:
1. Gray-value similarity judgment
The gray-value similarity of the depth map is used to judge whether the CU depth layer information of the already coded temporal reference frame can serve as reference information for coding the current CU. When the average gray-value difference between the current CU and the corresponding CU in its reference frame is less than a similarity threshold, the coding depth of the current CU takes the depth layer range of the already coded reference CU as a reference, limiting its own depth layer range. The gray-value similarity flag S_G is defined as:
S_G = true, if |G_cur - G_ref| < TH; false, otherwise.
where G_cur is the average gray value of the current CU, G_ref is the average gray value of the corresponding CU in its temporal reference frame, and TH is the gray similarity threshold. Gray similarity S_G is judged via the threshold TH: when the difference between G_cur and G_ref is less than TH, the two CUs have similar average gray values and S_G is true, i.e. the two CUs are gray-similar; otherwise S_G is false, i.e. the two CUs are not gray-similar.
2. CU maximum coding depth layer calculation
Based on the gray-value similarity computed above, the maximum coding depth layer of the current CU, i.e. the maximum depth of the current CU's quadtree recursion, is calculated from the correlation between the current CU and its temporal reference CU. The maximum coding depth layer D_max of the current CU is defined as:
D_max = D_ref, if S_G = true; D_org, otherwise.
where D_org is the maximum coding depth layer in the original HTM algorithm, usually 3, and D_ref is the maximum coding depth layer of the corresponding CU in the temporal reference frame. Once D_max is determined, the partition depth of the current CU is limited, avoiding the RD cost calculation process of unnecessarily large depth layers.
3. CU minimum coding depth layer calculation
First, the predicted depth layer D_pre of the current CU is defined as:
D_pre = α·(D_left + D_above) + β·D_co
where D_left is the maximum depth layer of the left neighboring CU of the current CU, D_above is the maximum depth layer of the above neighboring CU, D_co is the maximum depth layer of the co-located CU in the current CU's temporal reference frame, and α and β are weight factors.
The minimum depth layer D_min of the current CU is then predicted from the values of D_pre and S_G.
4. Early PU mode decision based on gray similarity
When the current CU is gray-similar to the corresponding CU in its reference frame, and the reference CU selects the Merge mode or the Inter 2N×2N mode as its optimal prediction mode, the mode decision process of the current CU is decided early. In the rate-distortion optimization (RDO) process, the set of modes M_c that the current CU needs to test is defined according to the mode of the reference CU as:
M_c = {Merge}, if S_G = true and M_r = Merge; {Merge, Inter 2N×2N}, if S_G = true and M_r = Inter 2N×2N; All modes, otherwise.
where M_r is the optimal prediction mode of the corresponding CU in the reference frame, and "All modes" denotes the modes that the RDO process in the original HTM algorithm needs to traverse.
Features and benefits of the invention:
For depth video coding, the invention proposes a new fast depth video coding algorithm based on gray-value similarity. The invention achieves considerable coding time savings while maintaining coding performance, reducing the computational complexity of 3D-HEVC.
Brief description of the drawings:
Fig. 1 is the flow chart of the technical solution.
Detailed description of the invention
To reduce the computational complexity of 3D-HEVC depth video coding, the invention proposes a new fast decision method for CU depth layers and PU modes based on gray-value similarity, thereby saving depth map coding time. The concrete technical solution is divided into the following steps:
1. Gray-value similarity judgment
The gray-value similarity of the depth map is used to judge whether the CU depth layer information of the already coded temporal reference frame can serve as reference information for coding the current CU. When the average gray-value difference between the current CU and the corresponding CU in its reference frame is less than the similarity threshold, the coding depth of the current CU can take the depth layer range of the already coded reference CU as a reference, limiting its own depth layer range. The gray-value similarity flag S_G is defined as:
S_G = true, if |G_cur - G_ref| < TH; false, otherwise.
where G_cur is the average gray value of the current CU, G_ref is the average gray value of the corresponding CU in its temporal reference frame, and TH is the gray similarity threshold. When the difference between G_cur and G_ref is less than TH, the two CUs have similar average gray values and S_G is true, i.e. the two CUs are gray-similar; otherwise S_G is false, i.e. the two CUs are not gray-similar.
2. CU maximum coding depth layer calculation
Based on the gray-value similarity computed above, the maximum coding depth layer of the current CU, i.e. the maximum depth of the current CU's quadtree recursion, is calculated from the correlation between the current CU and its temporal reference CU. The maximum coding depth layer D_max of the current CU is defined as:
D_max = D_ref, if S_G = true; D_org, otherwise.
where D_org is the maximum coding depth layer in the original HTM algorithm, usually 3, and D_ref is the maximum coding depth layer of the corresponding CU in the temporal reference frame. Once D_max is determined, the partition depth of the current CU can be limited, avoiding the RD cost calculation process of unnecessarily large depth layers.
3. CU minimum coding depth layer calculation
First, the predicted depth layer D_pre of the current CU is defined as:
D_pre = α·(D_left + D_above) + β·D_co
where D_left is the maximum depth layer of the left neighboring CU of the current CU, D_above is the maximum depth layer of the above neighboring CU, D_co is the maximum depth layer of the co-located CU in the current CU's temporal reference frame, and α and β are weight factors.
The minimum depth layer D_min of the current CU is then predicted from the values of D_pre and S_G.
4. Early PU mode decision based on gray similarity
When the current CU is gray-similar to the corresponding CU in its reference frame, and the reference CU selects the Merge or Inter 2N×2N mode as its optimal prediction mode, the mode decision process of the current CU is decided early. In the RDO process, the set of modes M_c that the current CU needs to test is defined according to the mode of the reference CU as:
M_c = {Merge}, if S_G = true and M_r = Merge; {Merge, Inter 2N×2N}, if S_G = true and M_r = Inter 2N×2N; All modes, otherwise.
where M_r is the optimal mode of the corresponding CU in the reference frame, and "All modes" denotes the modes that the RDO process in the original HTM algorithm needs to traverse.
The preferred embodiment of the invention is described below using three-view color plus depth video coding:
The invention is based on the 3D-HEVC reference model HTM. The three views of color and depth video are coded with the various modes defined by 3D-HEVC, and the coding information of the previous frame of the depth video is used to provide inter-frame reference information for predicting the current frame of the depth video, as described below:
1. Gray-value similarity judgment
The gray-value similarity of the depth map is used to judge whether the CU depth layer information of the already coded temporal reference frame can serve as reference information for coding the current CU. First, the average gray values G_cur and G_ref of the current CU and of the corresponding CU in the temporal reference frame are calculated. Gray similarity S_G is then judged via the similarity threshold TH: when the average gray-value difference between the current CU and the corresponding CU in its reference frame is less than the similarity threshold, the coding depth of the current CU can take the depth layer range of the already coded reference CU as a reference, limiting its own depth layer range. The gray-value similarity flag S_G is defined as:
S_G = true, if |G_cur - G_ref| < TH; false, otherwise.
where G_cur is the average gray value of the current CU, G_ref is the average gray value of the corresponding CU in its temporal reference frame, and TH is the gray similarity threshold. Specifically, when the difference between G_cur and G_ref is less than TH, the two CUs have similar average gray values and S_G is true, i.e. the two CUs are gray-similar; otherwise S_G is false, i.e. the two CUs are not gray-similar. The similarity threshold TH is set to 1, a value obtained from extensive experiments balancing coding time and coding performance.
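As an illustration, the similarity judgment above can be sketched in Python. The block-average computation and the threshold TH = 1 follow the description; the array shapes and function names are illustrative assumptions, not part of the patent.

```python
import numpy as np

TH = 1  # gray similarity threshold; set to 1 per the description

def average_gray(block: np.ndarray) -> float:
    """Average gray value of a depth CU (2-D array of depth samples)."""
    return float(block.mean())

def gray_similar(cur_cu: np.ndarray, ref_cu: np.ndarray, th: float = TH) -> bool:
    """S_G: true when |G_cur - G_ref| < TH, i.e. the CUs are gray-similar."""
    return abs(average_gray(cur_cu) - average_gray(ref_cu)) < th

# Example: a flat depth block that is unchanged between frames is similar.
cur = np.full((16, 16), 120, dtype=np.uint8)
ref = np.full((16, 16), 120, dtype=np.uint8)
print(gray_similar(cur, ref))  # True
```

In a real encoder this check would run once per CU against the co-located CU of the previous depth frame, before any recursive partitioning starts.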
2. CU maximum coding depth layer calculation
In the gray-similar case, the depth layer information D_ref of the corresponding CU in the temporal reference frame can guide the coding of the current CU as reference information. Therefore, based on the gray-value similarity computed above, the maximum coding depth layer D_max of the current CU is calculated according to the value of S_G, using the correlation between the current CU and its temporal reference CU; D_max is the maximum depth of the current CU's quadtree recursion. The maximum coding depth layer D_max of the current CU is defined as:
D_max = D_ref, if S_G = true; D_org, otherwise.
where D_org is the maximum coding depth layer in the original HTM algorithm, usually 3, and D_ref is the maximum coding depth layer of the corresponding CU in the temporal reference frame. Once D_max, i.e. the maximum depth layer of the current CU's recursive depth traversal, is determined, the partition depth of the current CU can be limited through D_max, avoiding the RD cost calculation process of unnecessarily large depth layers.
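A minimal sketch of the D_max rule, assuming depth layers are small integers (0 to 3 as in HTM) and that the similarity flag S_G has been computed elsewhere:

```python
D_ORG = 3  # maximum coding depth layer in the original HTM algorithm

def max_coding_depth(s_g: bool, d_ref: int, d_org: int = D_ORG) -> int:
    """D_max = D_ref when the CUs are gray-similar, else the HTM default D_org."""
    return d_ref if s_g else d_org

print(max_coding_depth(True, 1))   # 1: quadtree recursion stops early
print(max_coding_depth(False, 1))  # 3: the full HTM depth range is searched
```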
3. CU minimum coding depth layer calculation
The minimum depth layer constraint is calculated from the coding information of the already coded spatially neighboring CUs of the current CU and of the corresponding CU in the temporal reference frame. First, the predicted depth layer D_pre of the current CU is defined as:
D_pre = α·(D_left + D_above) + β·D_co
where D_left is the maximum depth layer of the left neighboring CU of the current CU, D_above is the maximum depth layer of the above neighboring CU, D_co is the maximum depth layer of the co-located CU in the current CU's temporal reference frame, and α and β are weight factors, set to 0.25 and 0.5.
The minimum depth layer D_min of the current CU is predicted from the calculated predicted depth layer D_pre and the gray-value similarity S_G between the current CU and the corresponding CU in its temporal reference frame.
When S_G is true, i.e. under the gray-similar condition, D_min differs according to the magnitude of D_pre; under the non-similar condition, the minimum depth of the current CU is not constrained, and D_min is set to 0 as in the original HTM algorithm.
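The predicted depth layer can be sketched as follows. The weights α = 0.25 and β = 0.5 come from the description; the exact mapping from D_pre to D_min in the gray-similar case is not reproduced in this text, so the sketch returns the unconstrained value 0 in the non-similar case and a simple floor of D_pre otherwise, the latter being an illustrative assumption rather than the patented mapping.

```python
ALPHA, BETA = 0.25, 0.5  # weight factors from the description

def predicted_depth(d_left: int, d_above: int, d_co: int) -> float:
    """D_pre = alpha*(D_left + D_above) + beta*D_co."""
    return ALPHA * (d_left + d_above) + BETA * d_co

def min_coding_depth(s_g: bool, d_pre: float) -> int:
    """D_min: 0 (unconstrained, HTM default) when the CUs are not gray-similar.
    In the similar case the text only states that D_min depends on the
    magnitude of D_pre; flooring D_pre here is an assumption for illustration."""
    return int(d_pre) if s_g else 0

d_pre = predicted_depth(d_left=2, d_above=2, d_co=2)
print(d_pre)                           # 2.0
print(min_coding_depth(True, d_pre))   # 2 (assumed mapping)
print(min_coding_depth(False, d_pre))  # 0
```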
4. Early PU mode decision based on gray similarity
In the RDO process, the set of modes M_c that the current CU needs to test is determined according to the optimal prediction mode of the corresponding CU in the temporal reference frame. When the current CU is gray-similar to its temporal reference CU and the reference CU selects the Merge mode for prediction, the current CU only tests the RD cost of the Merge mode and skips the RD cost calculation of the other modes; if the reference CU selects the Inter 2N×2N mode as its optimal prediction mode, the current CU tests the Merge and Inter 2N×2N modes and skips the RD cost calculation of the other modes. The set of modes M_c is defined according to the mode of the reference CU as:
M_c = {Merge}, if S_G = true and M_r = Merge; {Merge, Inter 2N×2N}, if S_G = true and M_r = Inter 2N×2N; All modes, otherwise.
where M_r is the optimal prediction mode of the corresponding CU in the temporal reference frame of the current CU, and "All modes" denotes the modes that the RDO process in the original HTM algorithm needs to traverse.
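The early mode decision follows directly from the rule above. In this sketch the mode names are plain strings for illustration, and ALL_MODES is a stand-in for the full candidate list the original HTM RDO process traverses:

```python
# Stand-in for the full mode list the original HTM RDO process traverses.
ALL_MODES = ["Merge", "Inter2Nx2N", "Inter2NxN", "InterNx2N", "Intra", "Skip"]

def candidate_modes(s_g: bool, m_r: str) -> list[str]:
    """M_c: restrict the RDO search when the CUs are gray-similar and the
    reference CU's best mode was Merge or Inter 2Nx2N; otherwise test all."""
    if s_g and m_r == "Merge":
        return ["Merge"]
    if s_g and m_r == "Inter2Nx2N":
        return ["Merge", "Inter2Nx2N"]
    return ALL_MODES

print(candidate_modes(True, "Merge"))       # ['Merge']
print(candidate_modes(True, "Inter2Nx2N"))  # ['Merge', 'Inter2Nx2N']
print(candidate_modes(False, "Merge"))      # the full list
```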
In summary, the invention provides a fast depth video coding algorithm based on gray-value similarity. In three-view color plus depth video, the depth video component is coded with a CU maximum/minimum depth layer restriction based on gray-value similarity and an early PU mode decision algorithm, simplifying the CU depth layer traversal and PU mode prediction process of depth video coding on the original HTM platform and thereby reducing the depth video coding time.

Claims (2)

1. A fast depth video coding method, characterized in that the steps are as follows:
1) Gray-value similarity judgment
The gray-value similarity of the depth map is used to judge whether the CU depth layer information of the already coded temporal reference frame can serve as reference information for coding the current CU; when the average gray-value difference between the current CU and the corresponding CU in its reference frame is less than a similarity threshold, the coding depth of the current CU takes the depth layer range of the already coded reference CU as a reference, limiting its own depth layer range;
2) CU maximum coding depth layer calculation
Based on the gray-value similarity computed above, the maximum coding depth layer of the current CU, i.e. the maximum depth of the current CU's quadtree recursion, is calculated from the correlation between the current CU and its temporal reference CU; the maximum coding depth layer D_max of the current CU is defined as:
D_max = D_ref, if S_G = true; D_org, otherwise.
where D_org is the maximum coding depth layer in the original HTM algorithm, usually 3, and D_ref is the maximum coding depth layer of the corresponding CU in the temporal reference frame; once D_max is determined, the partition depth of the current CU is limited, avoiding the RD cost calculation process of unnecessarily large depth layers;
3) CU minimum coding depth layer calculation
First, the predicted depth layer D_pre of the current CU is defined as:
D_pre = α·(D_left + D_above) + β·D_co
where D_left is the maximum depth layer of the left neighboring CU of the current CU, D_above is the maximum depth layer of the above neighboring CU, D_co is the maximum depth layer of the co-located CU in the current CU's temporal reference frame, and α and β are weight factors;
the minimum depth layer D_min of the current CU is predicted from the values of D_pre and S_G;
4) Early PU mode decision based on gray similarity
When the current CU is gray-similar to the corresponding CU in its reference frame, and the reference CU selects the Merge or Inter 2N×2N mode as its optimal prediction mode, the mode decision process of the current CU is decided early; in the RDO process, the set of modes M_c that the current CU needs to test is defined according to the mode of the reference CU as:
M_c = {Merge}, if S_G = true and M_r = Merge; {Merge, Inter 2N×2N}, if S_G = true and M_r = Inter 2N×2N; All modes, otherwise.
where M_r is the optimal mode of the corresponding CU in the reference frame, and "All modes" denotes the modes that the RDO process in the original HTM algorithm needs to traverse.
2. The fast depth video coding method according to claim 1, characterized in that the gray-value similarity flag S_G is defined as follows:
S_G = true, if |G_cur - G_ref| < TH; false, otherwise.
where G_cur is the average gray value of the current CU, G_ref is the average gray value of the corresponding CU in its temporal reference frame, and TH is the gray similarity threshold; gray similarity S_G is judged via the threshold TH: when the difference between G_cur and G_ref is less than TH, the two CUs have similar average gray values and S_G is true, i.e. the two CUs are gray-similar; otherwise S_G is false, i.e. the two CUs are not gray-similar.
CN201610780882.7A 2016-08-31 2016-08-31 A fast depth video coding method Active CN106254887B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610780882.7A CN106254887B (en) 2016-08-31 2016-08-31 A fast depth video coding method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610780882.7A CN106254887B (en) 2016-08-31 2016-08-31 A fast depth video coding method

Publications (2)

Publication Number Publication Date
CN106254887A true CN106254887A (en) 2016-12-21
CN106254887B CN106254887B (en) 2019-04-09

Family

ID=58080107

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610780882.7A Active CN106254887B (en) A fast depth video coding method

Country Status (1)

Country Link
CN (1) CN106254887B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101867810A (en) * 2010-04-07 2010-10-20 宁波大学 Method for pre-processing deep video sequence
CN102801996A (en) * 2012-07-11 2012-11-28 上海大学 Rapid depth map coding mode selection method based on JNDD (Just Noticeable Depth Difference) model
US20140240472A1 (en) * 2011-10-11 2014-08-28 Panasonic Corporation 3d subtitle process device and 3d subtitle process method
US20150022633A1 (en) * 2013-07-18 2015-01-22 Mediatek Singapore Pte. Ltd. Method of fast encoder decision in 3d video coding
CN104853191A (en) * 2015-05-06 2015-08-19 宁波大学 HEVC fast coding method
CN105872561A (en) * 2015-12-29 2016-08-17 上海大学 Method for quickly selecting scalable multi-view video plus depth macro block coding mode


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Pallab Kanti Podder et al., "Fast Inter-Mode Decision Strategy for HEVC on Depth Videos", 2015 18th International Conference on Computer and Information Technology (ICCIT) *
Zhouye Gu et al., "Fast Depth Modeling Mode Selection for 3D HEVC Depth Intra Coding", 2013 IEEE International Conference on Multimedia and Expo Workshops (ICMEW) *

Also Published As

Publication number Publication date
CN106254887B (en) 2019-04-09

Similar Documents

Publication Publication Date Title
CN104853197B (en) Method for decoding video data
CN104506863B (en) For the equipment that motion vector is decoded
CN106105191B (en) Method and apparatus for handling multiview video signal
CN104811724B (en) Method and apparatus for being coded and decoded to motion vector
JP5052134B2 (en) Image encoding method, image decoding method, image encoding device, image decoding device, image encoding program, and image decoding program
CN102577383B (en) Hierarchy based on coding unit is for the method and apparatus encoding video and for the method and apparatus being decoded video
CN101815218B (en) Method for coding quick movement estimation video based on macro block characteristics
KR101370919B1 (en) A method and apparatus for processing a signal
CN103155563B (en) By using the method and apparatus that video is encoded by merged block and method and apparatus video being decoded by use merged block
CN101917619B (en) Quick motion estimation method of multi-view video coding
CN102934432B (en) For the method and apparatus by using manipulative indexing to encode to video, for the method and apparatus by using manipulative indexing to decode to video
CN104038760B (en) A kind of wedge shape Fractionation regimen system of selection of 3D video depths image frame in and system
CN106803956A (en) The method and apparatus decoded to image
CN103888762B (en) Video coding framework based on HEVC standard
CN106713933A (en) Image decoding method
CN106507116B (en) A kind of 3D-HEVC coding method predicted based on 3D conspicuousness information and View Synthesis
CN101404766B (en) Multi-view point video signal encoding method
CN108712648A (en) A kind of quick inner frame coding method of deep video
CN101959067B (en) Decision method and system in rapid coding mode based on epipolar constraint
CN103108183B (en) Skip mode and Direct mode motion vector predicting method in three-dimension video
CN106210741B (en) A kind of deep video encryption algorithm based on correlation between viewpoint
KR20090122633A (en) Method and its apparatus for fast mode decision in multi-view video coding
CN106254887A A fast depth video coding method
CN109547798A (en) A kind of quick HEVC inter-frame mode selecting method
CN105519120A (en) Method of SUB-PU syntax signaling and illumination compensation for 3d and multi-view video coding

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant