CN106254884B - Inter-frame prediction method and encoder IP based on H265 coding - Google Patents
- Publication number: CN106254884B (application CN201610871310.XA)
- Authority: CN (China)
- Prior art keywords: motion vector, mode, merging, coding, inter
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications (all under H04N19/00 — methods or arrangements for coding, decoding, compressing or decompressing digital video signals)
- H04N19/503 — predictive coding involving temporal prediction
- H04N19/103 — selection of coding mode or of prediction mode
- H04N19/109 — selection of coding mode or of prediction mode among a plurality of temporal predictive coding modes
- H04N19/147 — data rate or code amount at the encoder output according to rate distortion criteria
- H04N19/513 — motion estimation or motion compensation; processing of motion vectors
- H04N19/587 — predictive coding involving temporal sub-sampling or interpolation, e.g. decimation or subsequent interpolation of pictures in a video sequence
Abstract
The present invention relates to the field of video coding and decoding, and in particular to video coding and decoding based on H.265. It provides an inter-frame prediction method and an encoder IP based on H.265 coding, to address the low efficiency of current encoding methods. The method further includes, before the inter-frame interpolation calculation: judging, for at least one candidate motion vector, whether identical motion vectors are present among the candidates; merging the identical motion vectors; using the merged motion vectors for the inter-frame interpolation calculation; and using the merged motion vectors together with their merge-mode information for mode decision and bit-rate distortion calculation.
Description
Technical field
The present invention relates to the field of video coding and decoding, and in particular to video coding and decoding based on H.265.
Background technique
H.265 (HEVC) is currently the newest international video compression standard. H.265 achieves high coding efficiency mainly because it uses more accurate intra prediction and inter prediction. However, these gains in coding efficiency come with increased encoder complexity, which poses new challenges for real-time implementations.

With the rise of wearable smart devices, hardware integrated in a small space is required to consume less and less power while still handling its workload efficiently, especially for popular applications such as video capture. This places further demands on video coding systems. Current H.265 video coding systems still fall short in this respect, most notably in large silicon area and high power consumption.
Summary of the invention
The following presents a simplified summary of one or more aspects in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects, and is intended neither to identify key or critical elements of all aspects nor to delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more aspects in a simplified form as a prelude to the more detailed description that follows.
The present invention provides an inter-frame prediction method and an encoder IP based on H.265 coding, to solve the low performance and high power consumption of existing coding systems.

To achieve this object, the inventors provide an inter-frame prediction method based on H.265 coding that further includes, before the inter-frame interpolation calculation:

judging, for at least one candidate motion vector, whether identical motion vectors are present among the candidates; merging the identical motion vectors; using the merged motion vectors for the inter-frame interpolation calculation; and using the merged motion vectors together with their merge-mode information for mode decision and bit-rate distortion calculation.
Further, the number of merge modes is configurable.

Further, the step of "judging whether identical motion vectors are present" also includes judging the validity of each merge mode.
An encoder IP for implementing the H.265 inter-frame prediction method includes a motion vector merging module, an inter-frame interpolation calculation module, a mode decision module, and a bit-rate distortion calculation module.

The motion vector merging module receives the motion vectors output by merge-mode, AMVP-mode, and motion-estimation processing, judges whether identical motion vectors are present among them, merges the identical motion vectors, and sends the merge information of the motion vectors to the mode decision and bit-rate distortion calculation modules.

The inter-frame interpolation calculation module performs the inter-frame interpolation calculation using the merged motion vectors, producing the video encoding.

The mode decision module determines the optimal coding mode based on the merged motion vectors.

Further, the encoder IP also includes a main control module, which is used to set the number of merge modes.

Further, the motion vector merging module is also used to judge the validity of each merge mode.
Unlike the prior art, the above technical solution saves, after motion vector merging, the pipeline time that would otherwise be spent computing the merged-away motion vectors. A chip implementing this method can therefore run at a correspondingly lower clock frequency, reducing power consumption. To the accomplishment of the foregoing and related ends, the one or more aspects comprise the features hereinafter fully described and particularly pointed out in the appended claims. The following description and the annexed drawings set forth in detail certain illustrative features of the one or more aspects. These features are indicative, however, of but a few of the various ways in which the principles of the various aspects may be employed, and this description is intended to cover all such aspects and their equivalents.
Description of the drawings

The disclosed aspects are described below with reference to the accompanying drawings, which are provided to illustrate and not to limit the disclosed aspects, and in which like reference numerals denote like elements:

Fig. 1 shows the merging process for the motion vectors obtained by inter prediction according to the invention;
Fig. 2 shows the validity judgment process for inter-prediction MVs;
Fig. 3 shows the process of outputting merge-mode information according to the number of valid modes;
Fig. 4 shows the pipelined processing used for video coding;
Fig. 5 shows coding-efficiency statistics for motion vector merging;
Fig. 6 shows the structural schematic diagram of the encoder IP implementing the inter-frame prediction method.
Specific embodiments

To explain in detail the technical content, structural features, objects, and effects of the technical solution, it is described below in conjunction with specific embodiments and the accompanying drawings. In the following description, numerous specific details are set forth for explanatory purposes to provide a thorough understanding of one or more aspects. It will be evident, however, that such aspects may also be practiced without these specific details.
Disclosed herein is an inter-frame prediction method based on H.265 coding that further includes, before the inter-frame interpolation calculation:

judging, for at least one candidate motion vector, whether identical motion vectors are present among the candidates; merging the identical motion vectors; using the merged motion vectors for the inter-frame interpolation calculation; and using the merged motion vectors together with their merge-mode information for mode decision and bit-rate distortion calculation. "Merging identical motion vectors" is to be understood to mean that, after merging, the remaining motion vectors are mutually distinct.
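As a rough sketch (the function name and data shapes are ours, not the patent's), the merging step just described — keep the first occurrence of each MV and record which modes it covers — could look like:

```python
def merge_motion_vectors(candidates):
    """Merge identical motion vectors from the candidate list.

    candidates: list of (mode_name, (mv_x, mv_y)) pairs, e.g. the MVs
    produced by merge mode, AMVP mode, and motion estimation.
    Returns the distinct MVs (first occurrence kept, order preserved)
    and the merge information: which modes share each surviving MV.
    """
    merged = []          # distinct MVs, in first-seen order
    merge_info = {}      # surviving MV -> list of modes it covers
    for mode, mv in candidates:
        if mv in merge_info:
            merge_info[mv].append(mode)   # identical MV: merged away
        else:
            merge_info[mv] = [mode]
            merged.append(mv)
    return merged, merge_info

candidates = [("merge0", (4, -2)), ("merge1", (4, -2)),
              ("merge2", (0, 0)), ("amvp", (4, -2))]
mvs, info = merge_motion_vectors(candidates)
# mvs == [(4, -2), (0, 0)]; (4, -2) covers merge0, merge1, and amvp
```

Only `mvs` then goes through interpolation, while `info` is what the patent calls the merge information, forwarded to mode decision and bit-rate distortion calculation.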
The encoder described here uses merge mode (whose motion-information output for the PU is a merge MV), advanced motion vector prediction (AMVP, whose motion-information output for the PU is an MVP), and motion estimation (ME, whose motion-information output for the PU is an MV).

In merge mode, the encoder selects the motion information of temporally and spatially neighbouring PUs and builds a predefined motion-information candidate list; it selects the index of the best candidate, uses that motion information directly in the motion compensation of the current PU, and codes the reference index information. Unlike the other modes, the motion residual is not coded but is set directly to 0.
Referring to Fig. 1, which shows the merging process for the motion vectors obtained by inter prediction, the MV merge preprocessing includes a judgment of merge-mode validity. In some embodiments, the number of candidate motion-information entries in the merge-mode candidate list is set to 3, i.e., merge mode outputs 3 candidate MVs; the number of candidate motion-information entries in the AMVP-mode candidate list is set to 1, i.e., AMVP mode outputs 1 MV.
In AMVP, adaptive MV prediction exploits the spatio-temporal correlation of the MVs of neighbouring PUs. AMVP is used to derive the MV of the current block: it first scans the MVs of spatially neighbouring PUs, then scans temporally adjacent PUs within a specified region, and constructs an MV predictor candidate list. Based on the MV currently being coded, the encoder selects the best motion vector predictor from the candidate list and codes the corresponding parameter information, so that only the MV difference is coded into the bitstream.
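The AMVP flow just described — pick the best predictor from the candidate list, code only the MV difference — can be sketched as below. The L1 cost proxy and all names are our assumptions for illustration, not the patent's syntax.

```python
def amvp_encode(mv, candidates):
    """Choose the AMVP predictor that minimizes the coded MV difference.

    mv: the MV found by motion estimation; candidates: the AMVP
    predictor candidate list. Returns the candidate index and the MV
    difference (MVD) that would be written to the bitstream. The L1
    size of the difference stands in for the true bit cost.
    """
    def mvd(pred):
        return (mv[0] - pred[0], mv[1] - pred[1])
    best = min(range(len(candidates)),
               key=lambda i: abs(mvd(candidates[i])[0]) +
                             abs(mvd(candidates[i])[1]))
    return best, mvd(candidates[best])

idx, diff = amvp_encode((5, -3), [(0, 0), (4, -2)])
# the closer predictor (index 1) wins, leaving a small MVD of (1, -1)
```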
As shown in Fig. 2, in some embodiments the number of merge modes is set to 3, i.e., the merge-mode candidate list contains 3 candidate motion-information entries and merge mode outputs 3 candidate MVs; the number of AMVP candidates is set to 1, i.e., the AMVP-mode candidate list contains 1 candidate entry and AMVP mode outputs 1 MV. The validity judgment process for inter-prediction MVs is shown in Fig. 2, where a mode obtained by inter prediction is denoted mx, with x its index: m0 denotes the first merge mode, and so on. In the figure, "m1 invalid" means the corresponding MV is invalid, and "m1/2 invalid" means both m1 and m2 are invalid. m0, m1, and m2 each correspond to one candidate MV; if a mode is invalid, its MV is invalid as well. After the inter-frame merge preprocessing shown in Fig. 2, the merge-mode information is output according to the number of valid modes; this process is shown in Fig. 3. The merge-mode information, which comprises the MV merge information and the MV information itself, is passed to the mode decision, where it is processed by the rate-distortion optimization (RDO) algorithm module.
Motion vector merging here means that identical motion vectors are merged while different ones are not. For example, given motion vectors MV0, MV1, MV2, MVamvp with MV0 = MV1, the result of merging is MV0, MV2, MVamvp; the motion vector merge information comprises the merged motion vectors and the way in which the vectors were merged.
Mode decision means determining the optimal coding mode based on the merged motion vectors, i.e., judging from the merged motion vectors whether intra or inter is optimal, and which inter mode is optimal. Briefly, the per-mode encoding process in the mode decision module comprises inter motion compensation, transform, quantization, inverse quantization, inverse transform, and residual estimation.
In Fig. 4, merge0 denotes MVmerge0, and mc, T, Q, IQ, IT, and SSE denote inter motion compensation, transform, quantization, inverse quantization, inverse transform, and residual estimation, respectively; thus "merge0 mc" is the inter motion compensation operation for MVmerge0, "amvp mc" is the inter motion compensation operation for MVamvp, and so on. The numbers 1-34 along the top of Fig. 4 form the time axis, and "mode estimation" denotes the mode estimation stage. In the embodiment above, if all 4 modes are valid, the corresponding 4 MVs are valid; as Fig. 4 shows, the complete pipeline then occupies the hardware resources of 4 inter-prediction mode groups, with a pipeline time of 27 unit times. If MVmerge0 = MVmerge1 = MVmerge2 = MVamvp, only one group of MV information needs to go through the inter motion compensation, transform, quantization, inverse quantization, inverse transform, and residual estimation calculations, and the time consumed is 12 unit times. It can thus be seen that motion vector merging saves the pipeline time of the merged-away vectors, so a chip implementing this method can run at a correspondingly lower clock frequency and hence at lower power. In particular, as source resolutions rise, this gain in coding efficiency is of great significance for efficient SoC encoding at 4K x 2K @ 30 FPS and 1080p @ 120 FPS and for meeting the low-power demands of wearables. In one embodiment, the coding-efficiency statistics for motion vector merging are as shown in Fig. 5.
It will be appreciated that the number of merge modes can be set according to the power budget and the computing capability of the chip, so as to meet the corresponding product requirements.
Also provided herein is an encoder IP for implementing the above H.265 inter-frame prediction method, as shown in Fig. 6, comprising a motion vector merging module, an inter-frame interpolation calculation module, a mode decision module, and a bit-rate distortion calculation module.

The motion vector merging module receives the motion vectors output by merge-mode, AMVP-mode, and motion-estimation processing, judges whether identical motion vectors are present among them, merges the identical motion vectors, and sends the merge-mode information of the motion vectors to the mode decision and bit-rate distortion calculation modules.

The inter-frame interpolation calculation module performs the inter-frame interpolation calculation using the merged motion vectors, producing the video encoding.

The mode decision module determines the optimal coding mode based on the merged motion vectors. It takes the motion vector residuals supplied by the inter prediction module, the contextual information of the current block obtained from the preceding stage, and the residual information corresponding to each mode, and makes the optimal-mode judgment.
The overall mode decision equation is:

J(mode) = SSE + λ * R(ref, mode, mv, residual)

In this formula, SSE is the sum of squared differences between the reconstructed block and the source image; λ is the Lagrange multiplier; and R is the actual bitstream cost of coding the macroblock in this mode, including the total bits for the reference frame, mode, motion vector, residual, etc. For an intra mode, R reduces to R(mode, residual).

The above formula is evaluated for the inter or intra block currently being coded in each of the corresponding modes, and the mode with the smallest J is chosen as the optimal mode.
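A minimal sketch of the decision rule J(mode) = SSE + λ·R; the candidate numbers below are invented purely for illustration, not measurements from the patent.

```python
def rd_cost(sse, bits, lam):
    """Rate-distortion cost J = SSE + lambda * R."""
    return sse + lam * bits

def best_mode(candidates, lam):
    """Pick the mode with the smallest J = SSE + lambda * R.

    candidates: dict mode_name -> (sse, bits), where sse is the sum of
    squared differences between reconstruction and source, and bits is
    the bit cost R (reference, mode, MV, residual).
    """
    return min(candidates, key=lambda m: rd_cost(*candidates[m], lam))

modes = {"merge0": (1500, 20), "amvp": (1200, 60), "intra": (900, 150)}
# with lambda = 10: J(merge0) = 1700, J(amvp) = 1800, J(intra) = 2400
assert best_mode(modes, lam=10) == "merge0"
```

Note how λ trades rate against distortion: a large λ favours cheap-to-code modes such as merge, while λ = 0 would simply pick the lowest-distortion mode.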
Brief description of the inter-frame interpolation module:

The inter-frame interpolation module processes the motion vector information supplied by the inter prediction module, which comprises integer and fractional components. Since the reference frame contains only integer-position pixels, motion information with a fractional component requires motion compensation of the block around the integer position. For motion information with integer components only, no sub-pixel motion compensation is needed; the block at the integer position is taken directly as the prediction for the current block of the current frame. If the motion vector has both integer and fractional parts, sub-pixel motion compensation around the integer position according to the fractional part is needed to obtain the prediction for the current block of the current frame.
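A toy sketch of the integer/fractional split described above, using quarter-pel MV units and bilinear interpolation for brevity (HEVC's real interpolation uses longer 7/8-tap filters; all names here are ours):

```python
def predict_pixel(ref, x, y, mv_x, mv_y, frac_bits=2):
    """Fetch one prediction pixel for a motion vector in quarter-pel units.

    ref: 2-D list of reference-frame pixels. The MV is split into an
    integer part and a fractional part; a zero fractional part is a
    plain integer-position fetch, otherwise we blend the neighbouring
    integer positions. Bilinear interpolation is used purely for
    brevity - it is not HEVC's actual interpolation filter.
    """
    step = 1 << frac_bits                 # 4 sub-positions per pel
    ix, fx = mv_x >> frac_bits, mv_x & (step - 1)
    iy, fy = mv_y >> frac_bits, mv_y & (step - 1)
    px, py = x + ix, y + iy
    if fx == 0 and fy == 0:               # integer MV: no interpolation
        return ref[py][px]
    a, b = ref[py][px], ref[py][px + 1]   # blend the 4 integer neighbours
    c, d = ref[py + 1][px], ref[py + 1][px + 1]
    top = a * (step - fx) + b * fx
    bot = c * (step - fx) + d * fx
    return (top * (step - fy) + bot * fy) // (step * step)

ref = [[0, 40], [80, 120]]
assert predict_pixel(ref, 0, 0, 0, 0) == 0    # purely integer MV
assert predict_pixel(ref, 0, 0, 2, 2) == 60   # half-pel in x and y
```

The `fx == 0 and fy == 0` early-out is the "integer components only" fast path the text describes, where the integer-position block is used directly.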
In some embodiments, the encoder IP further includes a main control module, which is used to set the number of merge modes. As shown in Fig. 2, in some embodiments the number of merge modes is set to 3 (the merge-mode candidate list contains 3 candidate entries, so merge mode outputs 3 candidate MVs) and the number of AMVP candidates to 1 (AMVP mode outputs 1 MV). The MV validity judgment (Fig. 2), the output of merge-mode information according to the number of valid modes (Fig. 3), its transfer to the mode decision and the RDO algorithm module, and the merging of identical motion vectors (e.g., with MV0 = MV1, the vectors MV0, MV1, MV2, MVamvp merge to MV0, MV2, MVamvp) all proceed as described above.
It should be noted that, in this document, relational terms such as "first" and "second" are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between those entities or operations. Moreover, the terms "include", "comprise", and any of their variants are intended to cover non-exclusive inclusion, so that a process, method, article, or terminal device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or terminal device. In the absence of further limitation, an element qualified by "includes a ..." does not exclude the presence of additional identical elements in the process, method, article, or terminal device that includes it. Furthermore, in this document, "greater than", "less than", "more than", and the like are understood to exclude the stated number, while "above", "below", "within", and the like are understood to include it.
Those skilled in the art will appreciate that the above embodiments may be provided as a method, an apparatus, or a computer program product, and may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. All or part of the steps of the methods involved in the above embodiments may be completed by a program instructing the relevant hardware; the program may be stored in a storage medium readable by a computer device and used to execute all or part of the steps described in the above embodiment methods. The computer device includes, but is not limited to: personal computers, servers, general-purpose computers, special-purpose computers, network devices, embedded devices, programmable devices, intelligent mobile terminals, smart home devices, wearable smart devices, vehicle-mounted smart devices, etc.; the storage medium includes, but is not limited to: RAM, ROM, magnetic disk, magnetic tape, optical disc, flash memory, USB drive, removable hard disk, memory card, memory stick, network server storage, network cloud storage, etc.

The above embodiments are described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to the embodiments. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a computer device to produce a machine, such that the instructions executed by the processor of the computer device produce a device for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be stored in a computer-device-readable memory capable of directing a computer device to operate in a specific manner, such that the instructions stored in that memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be loaded onto a computer device, so that a series of operational steps are executed on the device to produce computer-implemented processing, whereby the instructions executed on the device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although the above embodiments have been described, once the basic inventive concept is known, those skilled in the art can make additional changes and modifications to these embodiments. Therefore, the above is only an embodiment of the present invention and is not intended to limit the scope of patent protection of the invention; any equivalent structure or equivalent process transformation made using the contents of the description and drawings of the invention, applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the invention.
Claims (6)
1. An inter-frame prediction method based on H.265 coding, characterized by further comprising, before the inter-frame interpolation calculation:

receiving the motion vectors output after merge-mode, AMVP-mode, and motion-estimation processing; judging, for at least one candidate motion vector, whether identical motion vectors are present among the candidates; merging the identical motion vectors; using the merged motion vectors for the inter-frame interpolation calculation; using the merged motion vectors together with their merge-mode information for mode decision and bit-rate distortion calculation; and determining the optimal coding mode based on the merged motion vectors.
2. The inter-frame prediction method based on H.265 coding according to claim 1, characterized in that the number of merge modes is set.

3. The inter-frame prediction method based on H.265 coding according to claim 1, characterized in that the step of judging whether identical motion vectors are present further includes judging the validity of each merge mode.
4. An encoder IP for implementing the inter-frame prediction method of H.265 coding, characterized by comprising a motion vector merging module, an inter-frame interpolation calculation module, a mode decision module, and a bit-rate distortion calculation module;

the motion vector merging module being configured to receive the motion vectors output after merge-mode, AMVP-mode, and motion-estimation processing, judge whether identical motion vectors are present among the received motion vectors, merge the identical motion vectors, and send the merge information of the motion vectors to the mode decision and bit-rate distortion calculation modules;

the inter-frame interpolation calculation module being configured to perform the inter-frame interpolation calculation using the merged motion vectors;

the mode decision module being configured to determine the optimal coding mode based on the merged motion vectors.

5. The encoder IP for implementing the inter-frame prediction method of H.265 coding according to claim 4, characterized by further comprising a main control module, the main control module being configured to set the number of merge modes.

6. The encoder IP for implementing the inter-frame prediction method of H.265 coding according to claim 4, characterized in that the motion vector merging module is further configured to judge the validity of each merge mode.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN201610871310.XA | 2016-09-30 | 2016-09-30 | Inter-frame prediction method and encoder IP based on H265 coding |
Publications (2)

| Publication Number | Publication Date |
| --- | --- |
| CN106254884A | 2016-12-21 |
| CN106254884B | 2019-06-28 |

Family: ID=57612180

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
| --- | --- | --- | --- |
| CN201610871310.XA | Inter-frame prediction method and encoder IP based on H265 coding | 2016-09-30 | 2016-09-30 |

Country Status (1)

| Country | Link |
| --- | --- |
| CN | CN106254884B (en) |
Citations (4)

| Publication number | Priority date | Publication date | Assignee | Title |
| --- | --- | --- | --- | --- |
| CN101014129A | 2007-03-06 | 2007-08-08 | Meng Zhiping | Video data compression method |
| KR20120138426A | 2011-06-15 | 2012-12-26 | Kwangwoon University Industry-Academic Collaboration Foundation | Signaling apparatus and method of interleaving advanced motion vector prediction |
| CN103609121A | 2011-06-20 | 2014-02-26 | Qualcomm Inc. | Unified merge mode and adaptive motion vector prediction mode candidates selection |
| CN104247428A | 2012-04-06 | 2014-12-24 | Sony Corp. | Decoder and decoding method, as well as encoder and encoding method |

Family Cites Families (1)

| Publication number | Priority date | Publication date | Assignee | Title |
| --- | --- | --- | --- | --- |
| TWI250423B | 2004-07-30 | 2006-03-01 | Ind Tech Res Inst | Method for processing video images |

2016-09-30: application CN201610871310.XA granted as CN106254884B (Active)
Also Published As

| Publication number | Publication date |
| --- | --- |
| CN106254884A | 2016-12-21 |
Legal Events

- C06 / PB01: Publication
- C10 / SE01: Entry into substantive examination / entry into force of request for substantive examination
- GR01: Patent grant
- CP01: Change in the name or title of a patent holder
  - Address after: Building 18, No. 89 Software Avenue, Gulou District, Fuzhou, Fujian, 350003, China; Patentee after: Ruixin Microelectronics Co., Ltd
  - Address before: Building 18, No. 89 Software Avenue, Gulou District, Fuzhou, Fujian, 350003, China; Patentee before: Fuzhou Rockchips Electronics Co., Ltd.