CN110248188A - Predicted motion vector generation method and relevant device - Google Patents

Predicted motion vector generation method and relevant device

Info

Publication number
CN110248188A
CN110248188A (application CN201810188482.6A)
Authority
CN
China
Prior art keywords
motion vector
candidate motion
candidate
prediction
mode
Prior art date
Legal status
Pending
Application number
CN201810188482.6A
Other languages
Chinese (zh)
Inventor
陈旭 (Chen Xu)
郑建铧 (Zheng Jianhua)
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN201810188482.6A (CN110248188A)
Priority to PCT/CN2019/070629 (WO2019169949A1)
Publication of CN110248188A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: ... using adaptive coding
    • H04N19/102: ... characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103: Selection of coding mode or of prediction mode
    • H04N19/134: ... characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157: Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159: Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H04N19/169: ... characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17: ... the unit being an image region, e.g. an object
    • H04N19/176: ... the region being a block, e.g. a macroblock
    • H04N19/50: ... using predictive coding
    • H04N19/503: ... involving temporal prediction
    • H04N19/51: Motion estimation or motion compensation
    • H04N19/513: Processing of motion vectors
    • H04N19/517: Processing of motion vectors by encoding
    • H04N19/52: ... by predictive encoding
    • H04N19/56: Motion estimation with initialisation of the vector search, e.g. estimating a good candidate to initiate a search
    • H04N19/567: Motion estimation based on rate distortion criteria
    • H04N19/573: Motion compensation with multiple frame prediction using two or more reference frames in a given prediction direction

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Embodiments of the present invention provide a predicted motion vector generation method and related devices. The method includes: constructing a candidate motion vector set of a to-be-processed block; determining at least two first reference motion vectors according to a first candidate motion vector in the candidate motion vector set; separately calculating the pixel difference value or rate-distortion cost value between a first neighbouring reconstructed block of each of the at least two determined reference blocks and a second neighbouring reconstructed block of the to-be-processed block; and obtaining a first predicted motion vector of the to-be-processed block according to the first reference motion vector whose pixel difference value or rate-distortion cost value is the smallest among those corresponding to the at least two first reference motion vectors. Implementing the embodiments of the present invention helps obtain the best reference image block of the current block during encoding and decoding, and thus construct an accurate reconstructed block of the current block.

Description

Predicted motion vector generation method and relevant device
Technical field
The present invention relates to the field of video coding and decoding, and in particular to a predicted motion vector generation method and related devices.
Background art
In video coding and decoding frameworks, a hybrid coding structure is commonly used for coding and decoding video sequences. The encoder side of the hybrid coding structure generally includes a prediction module, a transform module, a quantization module, and an entropy coding module; the decoder side generally includes an entropy decoding module, an inverse quantization module, an inverse transform module, and a prediction compensation module. In video coding and decoding, an image of a video sequence is usually divided into image blocks for coding: a frame of an image is divided into several image blocks, and these image blocks are encoded and decoded by the above modules. The combination of these coding and decoding modules can effectively remove the redundancy of the video sequence and guarantee that the coded image of the video sequence is obtained at the decoder side.
Among the above modules, the prediction module is used by the encoder side to obtain prediction block information of an image block of the image being encoded, and then to determine, according to a specific mode, whether a residual of the image block needs to be obtained; the prediction compensation module is used by the decoder side to obtain the prediction block information of the image block currently being decoded, and then to determine, according to a specific mode, whether the currently decoded image block is obtained from a residual obtained by decoding. The prediction module generally uses two techniques: intra prediction and inter prediction. Intra prediction removes the redundancy of the current image block using spatial pixel information of the current image block to obtain the residual. In inter prediction, the advanced motion vector prediction (AMVP) mode and the non-SKIP mode of the merge (Merge) mode remove the redundancy of the current image block (referred to as the current block) using pixel information of encoded/decoded images neighbouring the current image, to obtain the residual, while the SKIP mode of the Merge mode does not depend on an image block residual and obtains the currently decoded image block directly from the prediction block information. An encoded/decoded image neighbouring the current image is referred to as a reference image.
When the AMVP mode or the Merge mode is used, the motion vectors of temporally or spatially adjacent blocks are used directly as candidate MVs in the constructed candidate list. However, the motion of an adjacent block may not be consistent with that of the current block; that is, the motion vector of an adjacent block may differ from the actual motion vector of the current block, so the current block may not obtain the best reference image block directly from these candidate MVs.
Summary of the invention
Embodiments of the present invention provide a predicted motion vector generation method and related devices. Implementing the embodiments of the present invention helps obtain the best reference image block of the current block during encoding and decoding, and thus construct an accurate reconstructed block of the current block.
In a first aspect, an embodiment of the present invention provides a predicted motion vector generation method. The method includes: constructing a candidate motion vector set of a to-be-processed block; determining at least two first reference motion vectors according to a first candidate motion vector in the candidate motion vector set, where a first reference motion vector is used to determine a reference block of the to-be-processed block in a first reference image of the to-be-processed block; separately calculating the pixel difference value or rate-distortion cost value between one or more first neighbouring reconstructed blocks of each of the at least two determined reference blocks and one or more second neighbouring reconstructed blocks of the to-be-processed block, where the first neighbouring reconstructed blocks and the second neighbouring reconstructed blocks have the same shape and size; and obtaining a first predicted motion vector of the to-be-processed block according to the first reference motion vector whose pixel difference value or rate-distortion cost value is the smallest among those corresponding to the at least two first reference motion vectors.
The candidate motion vector set may include multiple candidate motion vectors of the current block (a candidate motion vector may also be called a candidate predicted motion vector), and in a possible embodiment may also include reference image information corresponding to the multiple candidate motion vectors of the current block. Specifically, the candidate motion vector set is a Merge candidate list constructed based on the Merge mode, or an AMVP candidate list constructed based on the AMVP mode. Both the encoder side and the decoder side can construct the candidate motion vector set of the current block according to preset rules, for example by taking, as candidate motion vectors, the predicted motion vectors of spatially neighbouring blocks of the current block, of the temporal reference block corresponding to the current block, or of the temporal reference blocks corresponding to the neighbouring blocks of the current block, and building the candidate list from these candidate motion vectors according to the preset Merge mode or AMVP mode.
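As a rough illustration of such a construction (not the normative derivation order of any standard), the following Python sketch assembles a fixed-length candidate list from the motion vectors of spatial and temporal neighbours; the function name, the duplicate-pruning rule and the zero-vector padding are assumptions made for the example.

```python
def build_candidate_list(spatial_neighbour_mvs, temporal_mv, max_candidates):
    """Collect candidate motion vectors from already coded neighbours.

    spatial_neighbour_mvs: list of (mv_x, mv_y) tuples from spatially adjacent blocks,
                           in a fixed scan order agreed by encoder and decoder.
    temporal_mv:           (mv_x, mv_y) from the co-located block in a reference frame,
                           or None if unavailable.
    """
    candidates = []
    for mv in spatial_neighbour_mvs:
        if mv is not None and mv not in candidates:   # simple pruning of duplicates
            candidates.append(mv)
    if temporal_mv is not None and temporal_mv not in candidates:
        candidates.append(temporal_mv)
    while len(candidates) < max_candidates:           # pad so the index range is fixed
        candidates.append((0, 0))
    return candidates[:max_candidates]

# Example: three spatial neighbours (one duplicate) and one temporal candidate, list length 5
print(build_candidate_list([(4, -2), (4, -2), (0, 8)], (6, 1), 5))
```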
In this embodiment of the present invention, after the Merge candidate list or the AMVP candidate list is constructed, the candidate motion vectors in the candidate list may further be updated by means of template matching. The template matching process in this embodiment specifically includes: choosing a first candidate motion vector from the candidate list, the first candidate motion vector indicating the predicted motion vector from the current block in the current image to a reference image block (referred to as the reference block) in a first reference image; determining a search range in the first reference image; and searching around the first reference block within the search range to obtain at least one first reference image block (which may also simply be called a reference block but, to distinguish it from the reference block determined by the candidate motion vector, is referred to here as an image block), where the current block determines one motion vector to each first image block, each such motion vector being a reference motion vector. That is, based on the first candidate motion vector, at least two first reference motion vectors can be determined: one of them is the first candidate motion vector itself, determined by the reference block, and at least one further first reference motion vector is determined by the at least one image block. Then, the pixel difference value or rate-distortion cost value between at least one neighbouring reconstructed block of the current block and at least one neighbouring reconstructed block of each of the at least one first image block is calculated, and the pixel difference value or rate-distortion cost value between the at least one neighbouring reconstructed block of the current block and at least one neighbouring reconstructed block of the reference block is calculated; from the pixel difference values or rate-distortion cost values obtained, the smallest one is selected, and the motion vector corresponding to that smallest value is used as a new candidate motion vector of the current block.
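The template matching update can be pictured with the following minimal Python sketch: the "template" is taken here to be an L-shaped region of reconstructed pixels above and to the left of a block, SAD is used as the pixel difference value, and the reference motion vector whose template best matches the current block's template is returned. The template shape, its thickness and the SAD criterion are illustrative assumptions, not choices mandated by the patent.

```python
import numpy as np

def template(frame, x, y, w, h, t=4):
    """L-shaped template: t rows above and t columns to the left of the block at (x, y)."""
    top = frame[y - t:y, x:x + w]
    left = frame[y:y + h, x - t:x]
    return np.concatenate([top.ravel(), left.ravel()])

def best_reference_mv(cur_frame, ref_frame, x, y, w, h, reference_mvs):
    """Return the reference MV whose template has the smallest SAD against the current template."""
    cur_tpl = template(cur_frame, x, y, w, h).astype(np.int64)
    costs = []
    for mv_x, mv_y in reference_mvs:
        ref_tpl = template(ref_frame, x + mv_x, y + mv_y, w, h).astype(np.int64)
        costs.append(int(np.abs(cur_tpl - ref_tpl).sum()))
    best = int(np.argmin(costs))
    return reference_mvs[best], costs[best]

# Toy usage: the reference frame is the current frame shifted by (mv_x=2, mv_y=1)
cur = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
ref = np.roll(cur, shift=(1, 2), axis=(0, 1))
print(best_reference_mv(cur, ref, 16, 16, 8, 8, [(0, 0), (2, 1), (-2, 3)]))  # picks (2, 1)
```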
In a specific embodiment, if the motion vector corresponding to the smallest pixel difference value or cost value is not the chosen candidate motion vector, the image block with the smallest pixel difference value or rate-distortion cost value is determined among the at least one (specifically, two or more) first image blocks, and the reference motion vector corresponding to that image block is used as the new candidate motion vector. For the decoder side, if the parsed index value indicates exactly this new candidate motion vector, the decoder side can directly use the new candidate motion vector as the predicted motion vector to obtain the prediction block of the current block (for unidirectional prediction, the prediction block is the reconstructed block of the current block; for bidirectional prediction, the prediction block can be used to finally form the reconstructed block of the current block). For the encoder side, the new candidate motion vector can replace, in the candidate list, the candidate motion vector chosen in the template matching stage, thereby updating the candidate list. In this way, the encoder side can, based on the updated candidate list, traverse all candidate motion vectors in the list according to the rate-distortion cost value and determine an optimal candidate motion vector as the predicted motion vector of the current block (for example, the predicted motion vector obtained based on the Merge candidate list is the optimal MV, and the predicted motion vector obtained based on the AMVP candidate list is the optimal MVP), and then obtain the prediction block of the current block based on that predicted motion vector (for unidirectional prediction, the prediction block is the reconstructed block of the current block; for bidirectional prediction, the prediction block can be used to finally form the reconstructed block of the current block).
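At the encoder side, the traversal just mentioned is typically a rate-distortion comparison of the form J = D + lambda * R over the candidates; the sketch below assumes the distortions and bit costs have already been measured, so it only shows the selection step.

```python
def select_best_candidate(distortions, bit_costs, lam):
    """Return the index of the candidate minimising J = D + lambda * R."""
    costs = [d + lam * r for d, r in zip(distortions, bit_costs)]
    return min(range(len(costs)), key=costs.__getitem__)

# Example: candidate 1 costs a few more distortion units but far fewer bits
print(select_best_candidate([1000.0, 1040.0, 1500.0], [60, 12, 10], lam=4.0))  # -> 1
```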
It can be seen that, in this embodiment of the present invention, the video coding and decoding system can verify, by means of template matching, whether the neighbouring reconstructed blocks of reference image blocks within a certain range of the reference image of the current block (or even within the entire reference image) match the neighbouring reconstructed blocks of the current block well, so as to update the candidate motion vectors in the candidate list constructed based on the Merge or AMVP mode. Based on the updated candidate list, it can be guaranteed that the optimal reference image block of the current block is obtained during encoding and decoding, and an accurate reconstructed block of the current block is then constructed.
Based on the first aspect, in a possible embodiment, for the decoder side, besides directly using the motion vector with the smallest pixel difference value or rate-distortion cost value (either the candidate motion vector chosen during template matching or the reference motion vector corresponding to an image block) as the predicted motion vector, if the motion vector with the smallest pixel difference value or rate-distortion cost value is the reference motion vector corresponding to some image block, that reference motion vector may also replace, in the candidate list, the candidate motion vector chosen during template matching, and the replaced-in candidate motion vector (i.e. that reference motion vector) is then determined, based on the index value, as the predicted motion vector.
Based on the first aspect, in a possible embodiment of the present invention the method can be used at the decoder side. Before constructing the candidate motion vector set of the to-be-processed block, the decoder side parses the bitstream to obtain identification information and/or an index value of the candidate list, and determines, based on the identification information and/or the index value, which prediction mode (for example, the Merge mode or the AMVP mode) the candidate list is built on and which candidate motion vector in the candidate list is updated during template matching.
In a possible embodiment, the decoder side can obtain the identification information (for example, a flag in the bitstream) and the index value of the candidate list by parsing the bitstream. The identification information indicates whether the candidate motion vector set is constructed based on the Merge mode or the AMVP mode, and the index value indicates a specific candidate motion vector in the candidate list. The decoder side can therefore quickly determine the prediction mode based on the identification information, quickly select the candidate motion vector based on the index value and update it by means of template matching (or directly use the candidate motion vector selected based on the index value as the predicted motion vector to compute the prediction block), which improves decoding efficiency.
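Purely as a toy illustration of this kind of signalling (the syntax element names, bit widths and their order are invented for the example and are not the actual bitstream syntax), a decoder-side parse could look like this:

```python
def parse_prediction_signalling(read_bit, read_uint):
    """Toy parser: one flag selects the mode, then a candidate index follows."""
    use_merge = read_bit() == 1            # identification information: 1 = Merge, 0 = AMVP
    index = read_uint(3)                   # candidate index into the chosen list (3 bits here)
    return ("Merge" if use_merge else "AMVP"), index

# Example with a tiny in-memory bit source
bits = iter([1, 0, 1, 1])                  # flag = 1 (Merge), index = 0b011 = 3
read_bit = lambda: next(bits)
read_uint = lambda n: int("".join(str(next(bits)) for _ in range(n)), 2)
print(parse_prediction_signalling(read_bit, read_uint))   # ('Merge', 3)
```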
In a possible embodiment, the identification information (for example, a flag in the bitstream) can both indicate the prediction mode used for decoding the current block and indicate the index value of the candidate list constructed based on that prediction mode. Specifically, when the identification information indicates that the prediction mode is the Merge mode, it also indicates the index information of the Merge mode; or, when the identification information indicates that the prediction mode is the AMVP mode, it also indicates the index information of the AMVP mode. The decoder side can therefore quickly determine the prediction mode based on the identification information and quickly select the candidate motion vector to be updated by means of template matching (or directly use the candidate motion vector indicated by the index value as the predicted motion vector to compute the prediction block), which improves decoding efficiency.
In a possible embodiment, the identification information (for example, a flag in the bitstream) indicates the prediction mode used for decoding the current block; moreover, when the prediction mode indicated by the identification information is the AMVP mode, it also indicates the index value of the AMVP candidate list, and when the prediction mode indicated by the identification information is the Merge mode, the decoder side uses a default Merge mode index value. The decoder side can therefore quickly determine the prediction mode and the candidate motion vector based on the identification information, and in the Merge mode quickly select the first candidate motion vector of the candidate list (or another pre-specified candidate motion vector) to be updated by means of template matching (or directly use the specified candidate motion vector as the predicted motion vector to compute the prediction block), which improves decoding efficiency.
In a possible embodiment, in bidirectional prediction, if both the encoder side and the decoder side use the hybrid prediction mode, at least two pieces of identification information can be obtained by decoding: one piece of identification information indicates that one direction uses the Merge mode, and the other indicates that the other direction uses the AMVP mode. Alternatively, one piece of identification information indicates that one direction uses the Merge mode and indicates the index value of the Merge candidate list, and the other piece of identification information indicates that the other direction uses the AMVP mode and indicates the index value of the AMVP candidate list. In bidirectional prediction, the decoder side therefore only needs the two pieces of identification information to quickly determine the prediction modes and quickly select the candidate motion vectors to be updated by means of template matching (or directly use the candidate motion vector indicated by the index value as the predicted motion vector to compute the prediction block), which improves decoding efficiency.
In a possible embodiment, in bidirectional prediction, if both the encoder side and the decoder side use the hybrid prediction mode (that is, different directions respectively use the Merge mode and the AMVP mode), at least two combinations {first identification information, first index value} and {second identification information, second index value} can be obtained by decoding. In {first identification information, first index value}, the first identification information indicates the prediction mode of the first direction and the first index value indicates the index value of the candidate list of the first direction; in {second identification information, second index value}, the second identification information indicates the prediction mode of the second direction and the second index value indicates the index value of the candidate list of the second direction. Based on these two combinations, the decoder side can quickly determine the prediction modes of the different directions in bidirectional prediction and select the candidate motion vectors in the candidate lists to be updated by means of template matching (or directly use the candidate motion vector selected based on the index value as the predicted motion vector to compute the prediction block of the corresponding direction), which improves decoding efficiency.
In a possible embodiment, the decoder side may also obtain the index value of the candidate list by parsing the bitstream; the index value of the candidate list can both indicate the prediction mode used for decoding the current block and indicate a specific candidate motion vector in the candidate list constructed based on that prediction mode. Based on the index value of the candidate list, the decoder side can quickly determine the prediction mode and quickly select the candidate motion vector to be updated by means of template matching (or directly use the candidate motion vector selected based on the index value as the predicted motion vector to compute the prediction block), which improves decoding efficiency.
In this embodiment of the present invention, the bidirectional prediction includes first direction prediction and second direction prediction, where the first direction prediction is prediction based on a first reference frame list and the second direction prediction is prediction based on a second reference frame list. The first reference frame list includes the first reference image, and the second reference frame list includes a second reference image. Commonly, the prediction in one direction may also be called forward prediction, and the prediction in the other direction may be called backward prediction.
The bidirectional prediction involved in the embodiments of the present invention includes at least two types. One type is bidirectional prediction performed based on the hybrid prediction mode, and the other type is bidirectional prediction performed based on a single prediction mode.
For the first type, the hybrid prediction mode includes the Merge mode and the AMVP mode; that is, one direction of the bidirectional prediction uses the Merge mode and the other direction uses the AMVP mode.
In this type, a predicted motion vector is obtained for each direction, a prediction block is then obtained for each direction based on its predicted motion vector, and the prediction blocks of the two directions are finally combined by a preset algorithm (for example, a weighted averaging algorithm) to obtain the reconstructed block of the current block. The processes by which the two directions obtain their predicted motion vectors may be the same or different, and may be independent of each other or coordinated with each other.
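A minimal sketch of the combination step (assuming equal weights and 8-bit samples, both of which are example choices rather than the patent's prescription):

```python
import numpy as np

def combine_bi_prediction(pred_l0, pred_l1, w0=0.5, w1=0.5):
    """Weighted average of the two directional prediction blocks, rounded to integer pixels."""
    blended = w0 * pred_l0.astype(np.float64) + w1 * pred_l1.astype(np.float64)
    return np.clip(np.rint(blended), 0, 255).astype(np.uint8)

p0 = np.full((4, 4), 100, dtype=np.uint8)
p1 = np.full((4, 4), 110, dtype=np.uint8)
print(combine_bi_prediction(p0, p1)[0, 0])   # 105
```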
For example, each of the two directions uses one prediction mode; one direction (the first direction) independently updates its candidate motion vector/candidate list based on the template matching approach provided in the embodiments of the present invention, so as to obtain the predicted motion vector of the first direction, while the other direction (the second direction) independently obtains its predicted motion vector directly based on the constructed candidate list (i.e. without template matching).
For example, each of the two directions uses one prediction mode; one direction (the first direction) independently updates its candidate motion vector/candidate list based on the template matching approach provided in the embodiments of the present invention, so as to obtain the predicted motion vector of the first direction, and the other direction (the second direction) also independently updates its candidate motion vector/candidate list based on the template matching approach provided in the embodiments of the present invention, so as to obtain the predicted motion vector of the second direction.
For example, each of the two directions uses one prediction mode; one direction (for example, the first direction) independently updates its candidate motion vector/candidate list based on the template matching approach provided in the embodiments of the present invention, so as to obtain the predicted motion vector of the first direction, while the other direction (for example, the second direction) uses the difference between the first-direction candidate motion vector before and after the update to update the candidate motion vector in the candidate list constructed for the second direction, and thereby obtains the predicted motion vector of the second direction. The realization process of the second direction prediction includes: calculating the difference between the first-direction candidate motion vector after the replacement and the first-direction candidate motion vector before the replacement; obtaining a new second-direction candidate motion vector according to the difference and a second-direction candidate motion vector in the second-direction candidate motion vector set; and replacing the original second-direction candidate motion vector in the candidate motion vector set with the new second-direction candidate motion vector, thereby updating the candidate motion vector in the candidate list constructed for the second direction and obtaining the predicted motion vector of the second direction.
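The coordination just described can be written compactly: the delta by which the first-direction candidate moved during template matching is applied to the second-direction candidate. The optional sign/scale factor below is an assumption added for the example, to cover references lying on opposite temporal sides of the current picture.

```python
def update_second_direction_mv(mv1_before, mv1_after, mv2_before, scale=1.0):
    """Apply the first-direction refinement delta to the second-direction candidate.

    mv1_before / mv1_after: first-direction candidate MV before and after template matching.
    mv2_before:             original second-direction candidate MV.
    scale:                  assumed factor (e.g. -1 for the opposite temporal direction).
    """
    delta = (mv1_after[0] - mv1_before[0], mv1_after[1] - mv1_before[1])
    return (mv2_before[0] + scale * delta[0], mv2_before[1] + scale * delta[1])

# Example: the forward candidate moved by (+1, -2); mirror it onto the backward candidate
print(update_second_direction_mv((4, 0), (5, -2), (-3, 1), scale=-1.0))   # (-4.0, 3.0)
```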
It can be seen that the hybrid prediction mode provided in the embodiments of the present invention can also guarantee that the optimal reference block of the current block is obtained during encoding and decoding, and improves the encoding and decoding efficiency while ensuring that the optimal reference block of the current block is obtained.
For the second type, a single prediction mode is used, which may be the Merge mode or the AMVP mode; a candidate list for bidirectional prediction is established based on this prediction mode, and the candidate list includes candidate motion vectors for the first direction prediction and candidate motion vectors for the second direction prediction. The processes by which the two directions obtain their predicted motion vectors may be the same or different, and may be independent of each other or coordinated with each other.
For example, one direction (the first direction) independently updates its first-direction candidate motion vector/candidate list based on the template matching approach provided in the embodiments of the present invention, so as to obtain the predicted motion vector of the first direction, while the other direction (the second direction) independently obtains its predicted motion vector directly based on the constructed candidate list (i.e. without template matching).
For example, one direction (the first direction) independently updates its first-direction candidate motion vector/candidate list based on the template matching approach provided in the embodiments of the present invention, so as to obtain the predicted motion vector of the first direction, and the other direction (the second direction) also independently updates its second-direction candidate motion vector/candidate list based on the template matching approach provided in the embodiments of the present invention, so as to obtain the predicted motion vector of the second direction.
For example, each of the two directions uses one prediction mode; one direction (for example, the first direction) independently updates its candidate motion vector/candidate list based on the template matching approach provided in the embodiments of the present invention, so as to obtain the predicted motion vector of the first direction, while the other direction (for example, the second direction) uses the difference between the first-direction candidate motion vector before and after the update to update the second-direction candidate motion vector, and thereby obtains the predicted motion vector of the second direction. The realization process of the second direction prediction includes: calculating the difference between the first-direction candidate motion vector after the replacement and the first-direction candidate motion vector before the replacement; obtaining a new second-direction candidate motion vector according to the difference and the second-direction candidate motion vector in the candidate motion vector set; and replacing the original second-direction candidate motion vector in the candidate motion vector set with the new second-direction candidate motion vector, thereby updating the candidate motion vector in the candidate list constructed for the second direction and obtaining the predicted motion vector of the second direction.
It can be seen that, in bidirectional prediction, the embodiments of the present invention can update the candidate motion vector of one direction based on the update result of the other direction, which greatly improves the efficiency of the encoding and decoding process.
Based on the first aspect, in a possible embodiment, the decoder side (or the encoder side) may select the first candidate motion vector (of the first direction) from the candidate motion vector set in a number of ways, including the following:
Mode one: when a fourth candidate motion vector in the candidate motion vector set is generated by the second direction prediction, the fourth candidate motion vector is scaled up or down according to a proportional relationship to obtain the first candidate motion vector. The proportional relationship includes the ratio of a first temporal difference to a second temporal difference, where the first temporal difference is the difference between the picture order count of the reference image frame determined by the first candidate motion vector and the picture order count of the image frame where the to-be-processed block is located, and the second temporal difference is the difference between the picture order count of the reference image frame determined by the fourth candidate motion vector and the picture order count of the image frame where the to-be-processed block is located. Specifically: when the fourth candidate motion vector selected from the candidate motion vector set is generated by the second direction prediction, the fourth candidate motion vector is mapped to the first candidate motion vector according to a first proportional relationship, where the first candidate motion vector and the fourth candidate motion vector form the first proportional relationship (the first proportional relationship is a scalar); the timing of the reference image frame determined by the first candidate motion vector and the timing of the image frame where the current block is located form the first temporal difference, the timing of the reference image frame determined by the fourth candidate motion vector and the timing of the image frame where the current block is located form the second temporal difference, and the first temporal difference and the second temporal difference form a second proportional relationship (the second proportional relationship is a scalar); the first proportional relationship is the same as the second proportional relationship.
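Mode one is essentially a scaling of the candidate MV by the ratio of the two picture-order-count distances; the following sketch shows that mapping, with the rounding rule chosen arbitrarily for the example.

```python
def scale_candidate_mv(mv, poc_current, poc_ref_target, poc_ref_source):
    """Scale mv by (POC distance to the target reference) / (POC distance to the source reference)."""
    td_target = poc_ref_target - poc_current    # first temporal difference
    td_source = poc_ref_source - poc_current    # second temporal difference
    ratio = td_target / td_source
    return (round(mv[0] * ratio), round(mv[1] * ratio))

# Example: candidate points 2 pictures backward; map it to a reference 4 pictures forward
print(scale_candidate_mv((6, -2), poc_current=8, poc_ref_target=12, poc_ref_source=6))  # (-12, 4)
```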
That is, if the candidate motion vector chosen for forward (backward) prediction was obtained by backward (or forward) prediction, it can be mapped into a candidate motion vector for forward (backward) prediction, and the mapped value is used as the chosen candidate motion vector value in the subsequent steps.
Mode two: when the first candidate motion vector is generated by the bidirectional prediction, the at least two first reference motion vectors are determined according to a fifth candidate motion vector, where the reference block determined by the first candidate motion vector is obtained by weighting a first-direction reference block determined according to the fifth candidate motion vector and a second-direction reference block determined according to a sixth candidate motion vector, the fifth candidate motion vector is generated by the first direction prediction, and the sixth candidate motion vector is generated by the second direction prediction.
That is, if the candidate motion vector chosen for forward (or backward) prediction was obtained by bidirectional prediction, the forward (or backward) prediction part of that candidate motion vector is used as the chosen candidate motion vector value in the subsequent steps.
If the chosen candidate motion vector was itself obtained by forward (backward) prediction, the candidate motion vector is used directly as the chosen candidate motion vector value in the subsequent steps.
Mode three: if the candidate motion vector chosen for forward (backward) prediction was itself obtained by forward (backward) prediction, the candidate motion vector is used directly as the chosen candidate motion vector value in the subsequent steps.
It should be noted that, in a possible embodiment, the second candidate motion vector (of the second direction) may also be selected from the candidate motion vector set in a number of ways; for details, refer to the above modes, which are not repeated here.
It can be seen that implementing the above embodiments of the present invention can improve the accuracy and fault tolerance of the candidate motion vector selection process, and thereby improve the accuracy and fault tolerance of the template matching process.
Based on the first aspect, in a possible embodiment, determining the at least two first reference motion vectors according to the first candidate motion vector in the candidate motion vector set includes: determining the reference block of the to-be-processed block in the first reference image according to the first candidate motion vector; and searching positions close to the determined reference block at a target precision to obtain at least two candidate reference blocks, where each candidate reference block corresponds to one first reference motion vector, and the target precision is one of 4-pixel precision, 2-pixel precision, full-pixel precision, half-pixel precision, 1/4-pixel precision, and 1/8-pixel precision. Implementing this embodiment of the present invention helps improve the search precision during template matching and thereby improves the accuracy of the template matching result.
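The search at a given target precision can be sketched as enumerating offsets on a grid whose step equals that precision; the quarter-pel unit and the square search window below are assumptions for the example (an eighth-pel step would simply need finer units).

```python
def search_offsets(precision_in_quarter_pel, search_range_in_quarter_pel):
    """Offsets (dx, dy) around the initial reference position, stepped at the target precision.

    precision_in_quarter_pel: 16 = 4-pel, 8 = 2-pel, 4 = full-pel, 2 = half-pel, 1 = quarter-pel.
    """
    step = precision_in_quarter_pel
    r = search_range_in_quarter_pel
    return [(dx, dy)
            for dy in range(-r, r + 1, step)
            for dx in range(-r, r + 1, step)
            if not (dx == 0 and dy == 0)]       # (0, 0) is the candidate position itself

# Example: half-pel search within +/- 1 pel of the start point -> 24 extra positions
print(len(search_offsets(2, 4)))
```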
Based on the first aspect, in a possible embodiment, the at least one neighbouring reconstructed block of the current block, the at least one neighbouring reconstructed block of the reference block, and the at least one neighbouring reconstructed block of the image block have the same shape and size. For example, the at least one neighbouring reconstructed block of the current block has positional relationship 1 with the current block (for example, adjacent to or close to it within a certain range, such as an adjacency relationship), the at least one neighbouring reconstructed block of the reference block has positional relationship 2 with the reference block (for example, adjacent to or close to it within a certain range, such as an adjacency relationship), and the at least one neighbouring reconstructed block of the image block has positional relationship 3 with that image block (for example, adjacent to or close to it within a certain range, such as an adjacency relationship); then positional relationship 1, positional relationship 2 and positional relationship 3 may be identical. In a possible embodiment, however, positional relationship 1, positional relationship 2 and positional relationship 3 may also differ.
In a second aspect, an embodiment of the present invention provides another predicted motion vector generation method, which can be used for bidirectional prediction in a hybrid prediction mode, the hybrid prediction mode including the Merge mode and the AMVP mode. The bidirectional prediction includes first direction prediction and second direction prediction, where the first direction prediction is prediction based on a first reference frame list and the second direction prediction is prediction based on a second reference frame list. The method includes: obtaining a first prediction mode and generating a first candidate motion vector set, the first candidate motion vector set being used to generate a first direction predicted motion vector in the first direction prediction; and obtaining a second prediction mode and generating a second candidate motion vector set, the second candidate motion vector set being used to generate a second direction predicted motion vector in the second direction prediction, where when the first prediction mode is the AMVP mode the second prediction mode is the Merge mode, or when the first prediction mode is the Merge mode the second prediction mode is the AMVP mode.
The hybrid prediction mode provided in the embodiments of the present invention is a novel encoding/decoding mode. Based on the hybrid prediction mode, it can also be guaranteed that the optimal reference block of the current block is obtained during encoding and decoding, and the encoding and decoding efficiency is improved while ensuring that the optimal reference block of the current block is obtained.
Based on the second aspect, in a possible embodiment, when the first prediction mode is the Merge mode, obtaining the first prediction mode and generating the first candidate motion vector set includes: generating the first candidate motion vector set from the candidate motion vectors used by the Merge mode in the first direction prediction.
Based on the second aspect, in a possible embodiment, when the first prediction mode is the AMVP mode, obtaining the first prediction mode and generating the first candidate motion vector set includes: generating the first candidate motion vector set from the candidate motion vectors used by the AMVP mode in the first direction prediction.
Based on the second aspect, in a possible embodiment, when the second prediction mode is the Merge mode, obtaining the second prediction mode and generating the second candidate motion vector set includes: generating the second candidate motion vector set from the candidate motion vectors used by the Merge mode in the second direction prediction.
Based on the second aspect, in a possible embodiment, when the second prediction mode is the AMVP mode, obtaining the second prediction mode and generating the second candidate motion vector set includes: generating the second candidate motion vector set from the candidate motion vectors used by the AMVP mode in the second direction prediction.
In a possible embodiment, in bidirectional prediction, both the encoder side and the decoder side use the hybrid prediction mode, and at least two pieces of identification information can then be obtained by decoding: one piece of identification information indicates that one direction uses the Merge mode, and the other indicates that the other direction uses the AMVP mode. Alternatively, one piece of identification information indicates that one direction uses the Merge mode and indicates the index value of the Merge candidate list, and the other piece of identification information indicates that the other direction uses the AMVP mode and indicates the index value of the AMVP candidate list. In bidirectional prediction, the decoder side therefore only needs the two pieces of identification information to quickly determine the prediction modes and quickly select the candidate motion vectors to be updated by means of template matching (or directly use the candidate motion vector indicated by the index value as the predicted motion vector to compute the prediction block), which improves decoding efficiency.
In a possible embodiment, in bidirectional prediction, both the encoder side and the decoder side use the hybrid prediction mode (that is, different directions respectively use the Merge mode and the AMVP mode), and at least two combinations {first identification information, first index value} and {second identification information, second index value} can then be obtained by decoding, where in {first identification information, first index value} the first identification information indicates the prediction mode of the first direction and the first index value indicates the index value of the candidate list of the first direction, and in {second identification information, second index value} the second identification information indicates the prediction mode of the second direction and the second index value indicates the index value of the candidate list of the second direction. Based on these two combinations, the decoder side can quickly determine the prediction modes of the different directions in bidirectional prediction and select the candidate motion vectors in the candidate lists to be updated by means of template matching (or directly use the candidate motion vector selected based on the index value as the predicted motion vector to compute the prediction block of the corresponding direction), which improves decoding efficiency.
Specifically, when the first identification information indicates that the first prediction mode is the Merge mode, the first identification information also indicates the index information of the Merge mode; or, when the first identification information indicates that the first prediction mode is the AMVP mode, the first identification information also indicates the index information of the AMVP mode.
Specifically, for a direction in the bidirectional prediction, the identification information (the first identification information or the second identification information) indicates the prediction mode used for decoding the current block; moreover, when the prediction mode indicated by the identification information is the AMVP mode, it also indicates the index value of the AMVP candidate list, and when the prediction mode indicated by the identification information is the Merge mode, the decoder side may use a preset Merge mode index value. The decoder side can therefore quickly determine the prediction mode and the candidate motion vector based on the identification information, and in the Merge mode quickly select the first candidate motion vector of the candidate list (or another pre-specified candidate motion vector) to be updated by means of template matching (or directly use the specified candidate motion vector as the predicted motion vector to compute the prediction block), which improves decoding efficiency.
Specifically, at the decoder side, before obtaining the second prediction mode and generating the second candidate motion vector set, the method further includes: after determining that the first prediction mode is the Merge mode, parsing the bitstream to obtain second identification information, where the second identification information indicates the index information of the AMVP mode.
Specifically, at the decoder side, before obtaining the second prediction mode and generating the second candidate motion vector set, the method further includes: after determining that the first prediction mode is the AMVP mode, parsing the bitstream to obtain second identification information, where the second identification information indicates the index information of the Merge mode.
Based on the second aspect, in a possible embodiment, after the decoder side determines that the first prediction mode is the AMVP mode, it parses the bitstream to obtain the reference frame index and motion vector difference information of the first direction prediction; alternatively, after determining that the second prediction mode is the AMVP mode, it parses the bitstream to obtain the reference frame index and motion vector difference information of the second direction prediction.
In addition, in a possible embodiment of the present invention, in bidirectional prediction the encoder side and the decoder side may use the template matching approach provided in the embodiments of the present invention to update the candidate list; for the detailed process, refer to the description of the first aspect, which is not repeated here.
In a third aspect, an embodiment of the present invention provides a device for generating a predicted motion vector. The device includes: a set generation module, configured to construct a candidate motion vector set of a to-be-processed block; a template matching module, configured to determine at least two first reference motion vectors according to a first candidate motion vector in the candidate motion vector set, where a first reference motion vector is used to determine a reference block of the to-be-processed block in a first reference image of the to-be-processed block, and further configured to separately calculate the pixel difference value or rate-distortion cost value between one or more first neighbouring reconstructed blocks of each of the at least two determined reference blocks and one or more second neighbouring reconstructed blocks of the to-be-processed block, where the first neighbouring reconstructed blocks and the second neighbouring reconstructed blocks have the same shape and size; and a predicted motion vector generation module, configured to obtain a first predicted motion vector of the to-be-processed block according to the first reference motion vector whose pixel difference value or rate-distortion cost value is the smallest among those corresponding to the at least two first reference motion vectors. Specifically, the modules in the device are configured to implement the method described in the first aspect.
In a fourth aspect, an embodiment of the present invention provides another device for generating a predicted motion vector, the device being configured to perform bidirectional prediction on a to-be-processed block, where the bidirectional prediction includes first direction prediction and second direction prediction, the first direction prediction being prediction based on a first reference frame list and the second direction prediction being prediction based on a second reference frame list. The device includes: a first set generation module, configured to obtain a first prediction mode and generate a first candidate motion vector set, the first candidate motion vector set being used to generate a first direction predicted motion vector in the first direction prediction; and a second set generation module, configured to obtain a second prediction mode and generate a second candidate motion vector set, the second candidate motion vector set being used to generate a second direction predicted motion vector in the second direction prediction, where when the first prediction mode is the AMVP mode the second prediction mode is the Merge mode, or when the first prediction mode is the Merge mode the second prediction mode is the AMVP mode. The modules in the device are specifically configured to implement the method described in the second aspect.
In a fifth aspect, an embodiment of the present invention provides a device for generating a predicted motion vector, which may be applied to the encoder side or to the decoder side. The device includes a processor and a memory, the processor being connected to the memory (for example, they are interconnected through a bus); in a possible embodiment, the device may further include a transceiver, the transceiver being connected to the processor and the memory and configured to receive/send data. The memory is configured to store program code and video data. The processor may be configured to read the program code stored in the memory and perform the method described in the first aspect.
In a sixth aspect, an embodiment of the present invention provides another device for generating a predicted motion vector, which may be applied to the encoder side or to the decoder side. The device includes a processor and a memory, the processor being connected to the memory (for example, they are interconnected through a bus); in a possible embodiment, the device may further include a transceiver, the transceiver being connected to the processor and the memory and configured to receive/send data. The memory is configured to store program code and video data. The processor may be configured to read the program code stored in the memory and perform the method described in the second aspect.
In a seventh aspect, an embodiment of the present invention provides a video coding and decoding system, which includes a source device and a destination device. The source device can communicate with the destination device. The source device generates encoded video data, and may therefore be referred to as a video encoding apparatus or a video encoder. The destination device can decode the encoded video data generated by the source device, and may therefore be referred to as a video decoding apparatus or a video decoder. The source device and the destination device may be instances of a video encoding/decoding apparatus or a video decoding/encoding apparatus. The methods described in the first aspect and/or the second aspect may be applied to the video encoding/decoding apparatus or the video decoding/encoding apparatus; that is, the video coding and decoding system may be used to implement the methods described in the first aspect and/or the second aspect.
In an eighth aspect, an embodiment of the present invention provides a computer-readable storage medium storing instructions which, when run on a computer, cause the computer to perform the method described in the first aspect.
In a ninth aspect, an embodiment of the present invention provides a computer-readable storage medium storing instructions which, when run on a computer, cause the computer to perform the method described in the second aspect.
In a tenth aspect, an embodiment of the present invention provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method described in the first aspect.
In an eleventh aspect, an embodiment of the present invention provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method described in the second aspect.
It can be seen that, in the embodiments of the present invention, the video coding and decoding system can use template matching to check whether reconstructed blocks neighbouring reference image blocks within a certain range of the reference picture of the current block (or even within the entire reference picture) match the reconstructed blocks neighbouring the current block well, and thereby update the candidate motion vectors of the candidate list constructed based on the Merge or AMVP mode. Based on the updated candidate list, the optimal reference image block of the current block can be obtained during encoding and decoding, and an accurate reconstructed block of the current block can then be constructed. In addition, the embodiments of the present invention further provide a hybrid prediction mode; based on the hybrid prediction mode, the optimal reference block of the current block can likewise be obtained during encoding and decoding, and coding and decoding efficiency is improved while the optimal reference block of the current block is ensured.
Brief Description of the Drawings
Fig. 1 is a schematic structural diagram of a coding/decoding system according to an embodiment of the present invention;
Fig. 2 is a schematic block diagram of a video encoding/decoding apparatus or electronic device according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of a terminal according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of an AMVP candidate list according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of a Merge candidate list according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of another AMVP candidate list according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of another Merge candidate list according to an embodiment of the present invention;
Fig. 8 is a schematic scenario diagram of a template matching mode according to an embodiment of the present invention;
Fig. 9 is a schematic scenario diagram of another template matching mode according to an embodiment of the present invention;
Fig. 10 is a schematic diagram of an encoding-decoding process in the Merge mode according to an embodiment of the present invention;
Fig. 11 is a schematic diagram of another encoding-decoding process in the Merge mode according to an embodiment of the present invention;
Fig. 12 is a schematic diagram of an encoding-decoding process in the AMVP mode according to an embodiment of the present invention;
Fig. 13 is a schematic diagram of another encoding-decoding process in the AMVP mode according to an embodiment of the present invention;
Figs. 14-24 are flow diagrams of predicted motion vector generation methods according to embodiments of the present invention;
Figs. 25-28 are schematic structural diagrams of devices according to embodiments of the present invention.
Description of Embodiments
The embodiments of the present invention are described below with reference to the accompanying drawings. The terms used in the embodiments of the present invention are intended only to explain specific embodiments of the present invention and are not intended to limit the present invention.
The system framework to which the embodiments of the present invention apply is introduced first. Referring to Fig. 1, Fig. 1 is a schematic block diagram of a video coding and decoding system 10 according to an embodiment of the present invention. As shown in Fig. 1, the video coding and decoding system 10 includes a source device 12 and a destination device 14. The source device 12 generates encoded video data and may therefore be referred to as a video encoding apparatus or video encoding device. The destination device 14 can decode the encoded video data generated by the source device 12 and may therefore be referred to as a video decoding apparatus or video decoding device. The source device 12 and the destination device 14 are examples of video encoding/decoding devices. They may include a wide range of apparatuses, including desktop computers, mobile computing devices, notebook (for example, laptop) computers, tablet computers, set-top boxes, handheld devices such as smartphones, televisions, cameras, display devices, digital media players, video game consoles, in-vehicle computers, or the like.
The destination device 14 may receive the encoded video data from the source device 12 via a channel 16. The channel 16 may include one or more media and/or devices capable of moving the encoded video data from the source device 12 to the destination device 14. In one example, the channel 16 may include one or more communication media that enable the source device 12 to transmit the encoded video data directly to the destination device 14 in real time. In this example, the source device 12 may modulate the encoded video data according to a communication standard (for example, a wireless communication protocol) and transmit the modulated video data to the destination device 14. The one or more communication media may include wireless and/or wired communication media, such as a radio frequency (RF) spectrum or one or more physical transmission lines. The one or more communication media may form part of a packet-based network (for example, a local area network, a wide area network, or a global network such as the Internet), and may include routers, switches, base stations, or other equipment that facilitates communication from the source device 12 to the destination device 14.
In another example, the channel 16 may include a storage medium that stores the encoded video data generated by the source device 12. In this example, the destination device 14 may access the storage medium via disk access or card access. The storage medium may include a variety of locally accessible data storage media, such as a Blu-ray disc, a DVD, a CD-ROM, flash memory, or other suitable digital storage media for storing encoded video data.
In another example, the channel 16 may include a file server or another intermediate storage device that stores the encoded video data generated by the source device 12. In this example, the destination device 14 may access, via streaming or download, the encoded video data stored at the file server or other intermediate storage device. The file server may be a type of server capable of storing encoded video data and transmitting that encoded video data to the destination device 14. Example file servers include web servers (for example, for a website), File Transfer Protocol (FTP) servers, network attached storage (NAS) devices, and local disk drives.
The destination device 14 may access the encoded video data via a standard data connection (for example, an Internet connection). Example types of data connections include wireless channels (for example, a Wi-Fi connection), wired connections (for example, DSL, a cable modem, or the like), or a combination of both that are suitable for accessing encoded video data stored on a file server. The transmission of the encoded video data from the file server may be streaming transmission, download transmission, or a combination of both.
The technology of the present invention is not limited to wireless application scenarios. For example, the technology can be applied to video coding and decoding supporting a variety of multimedia applications, such as over-the-air television broadcasting, cable television transmission, satellite television transmission, streaming video transmission (for example, via the Internet), encoding of video data stored on a data storage medium, decoding of video data stored on a data storage medium, or other applications. In some examples, the video coding and decoding system 10 may be configured to support one-way or two-way video transmission, so as to support applications such as video streaming, video playback, video broadcasting, and/or video telephony.
In the example of Fig. 1, the source device 12 includes a video source 18, a video encoder 20, and an output interface 22. In some examples, the output interface 22 may include a modulator/demodulator (modem) and/or a transmitter. The video source 18 may include a video capture device (for example, a video camera), a video archive containing previously captured video data, a video input interface for receiving video data from a video content provider, and/or a computer graphics system for generating video data, or a combination of the foregoing video data sources.
The video encoder 20 may encode the video data from the video source 18; specifically, the video encoder 20 includes a prediction module, a transform module, a quantization module, an entropy coding module, and the like. In some examples, the source device 12 transmits the encoded video data directly to the destination device 14 via the output interface 22. The encoded video data may also be stored on a storage medium or a file server for later access by the destination device 14 for decoding and/or playback.
In the example of Fig. 1, the destination device 14 includes an input interface 28, a video decoder 30, and a display device 32. In some examples, the input interface 28 includes a receiver and/or a modem. The input interface 28 may receive the encoded video data via the channel 16. The video decoder 30 is configured to decode the bitstream (video data) received by the input interface 28; specifically, the video decoder 30 includes an entropy decoding module, an inverse quantization module, an inverse transform module, a prediction compensation module, and the like. The display device 32 may be integrated with the destination device 14 or may be external to the destination device 14. In general, the display device 32 displays the decoded video data. The display device 32 may include a variety of display devices, such as a liquid crystal display (LCD), a plasma display, an organic light-emitting diode (OLED) display, or another type of display device.
The video encoder 20 and the video decoder 30 may operate according to a video compression standard (for example, the high efficiency video coding standard H.265) and may conform to the HEVC test model (HM). The normative text of H.265, ITU-T H.265 (V3) (04/2015), was released on April 29, 2015, and can be downloaded from http://handle.itu.int/11.1002/1000/12455; the entire content of that document is incorporated herein by reference.
Referring to Fig. 2, Fig. 2 is a schematic block diagram of a video encoding/decoding apparatus or electronic device 50 according to an embodiment of the present invention. The apparatus or electronic device 50 may incorporate a codec according to an embodiment of the present invention. Fig. 3 is a schematic structural diagram of an apparatus for video coding according to an embodiment of the present invention. The units in Fig. 2 and Fig. 3 are described below.
The electronic device 50 may be, for example, a mobile terminal or user equipment of a wireless communication system. It should be understood that the embodiments of the present invention may be implemented in any electronic device or apparatus that may need to encode and decode, or encode, or decode video images.
The apparatus 50 may include a housing 30 for incorporating and protecting the device. The apparatus 50 may also include a display 32 in the form of a liquid crystal display. In other embodiments of the present invention, the display may be any display technology suitable for displaying images or video. The apparatus 50 may also include a keypad 34. In other embodiments of the present invention, any suitable data or user interface mechanism may be used; for example, the user interface may be implemented as a virtual keyboard or a data entry system as part of a touch-sensitive display. The apparatus may include a microphone 36 or any suitable audio input, which may be a digital or analog signal input. The apparatus 50 may also include an audio output device, which in embodiments of the present invention may be any one of the following: an earphone 38, a loudspeaker, or an analog audio or digital audio output connection. The apparatus 50 may also include a battery 40; in other embodiments of the present invention, the device may be powered by any suitable mobile energy device, such as a solar cell, a fuel cell, or a clockwork generator. The apparatus may also include an infrared port 42 for short-range line-of-sight communication with other devices. In other embodiments, the apparatus 50 may also include any suitable short-range communication solution, such as a Bluetooth wireless connection or a USB/FireWire wired connection.
The apparatus 50 may include a controller 56 or processor for controlling the apparatus 50. The controller 56 may be coupled to a memory 58, which in embodiments of the present invention may store data in the form of images and audio data, and/or may store instructions to be executed on the controller 56. The controller 56 may also be coupled to a codec circuit 54 adapted to carry out encoding and decoding of audio and/or video data, or to assist in the encoding and decoding carried out by the controller 56.
The apparatus 50 may also include a card reader 48 and a smart card 46, for example a UICC and a UICC reader, for providing user information and adapted to provide authentication information for network authentication and authorization of the user.
The apparatus 50 may also include a radio interface circuit 52, which is connected to the controller and is suitable for generating wireless communication signals, for example for communication with a cellular communication network, a wireless communication system, or a wireless local area network. The apparatus 50 may also include an antenna 44, which is connected to the radio interface circuit 52 for transmitting radio frequency signals generated by the radio interface circuit 52 to other apparatus(es) and for receiving radio frequency signals from other apparatus(es).
In some embodiments of the present invention, the apparatus 50 includes a camera capable of recording or detecting individual frames, and the codec 54 or the controller receives and processes these frames. In some embodiments of the present invention, the apparatus may receive the video image data to be processed from another device prior to transmission and/or storage. In some embodiments of the present invention, the apparatus 50 may receive images via a wireless or wired connection for encoding/decoding. The methods described in the embodiments of the present invention are mainly used for the inter prediction in the video encoder 20 and the video decoder 30 and the corresponding encoding-decoding processes.
In the encoding and decoding involved in the embodiments of the present invention, the essence of inter prediction is to find, in a reference image, a block (a reference block) that is most similar to the current block of the current image (the current block is the image block currently to be encoded/decoded and may also be called the to-be-processed block). In order to obtain the reference block that best matches the current block, the advanced motion vector prediction (Advanced Motion Vector Prediction, AMVP) mode and the merge (Merge) mode of the inter prediction modes at the encoding and decoding ends are implemented in different ways.
In the AMVP mode, a motion vector (MV) is first predicted for the current block; this predicted motion vector is also called a motion vector predictor (Motion Vector Prediction, MVP). An MVP can be obtained directly from the motion vector of a spatially neighbouring block of the current block, of the temporal reference block corresponding to the current block, or of the temporal reference block corresponding to a neighbouring block of the current block. Because there are multiple neighbouring blocks, there are multiple MVPs, and each MVP is essentially a candidate motion vector (candidate MV). The AMVP mode builds these MVPs into a candidate list, herein referred to as the AMVP candidate list. After the AMVP candidate list is established, the encoding side selects an optimal MVP from it, determines a starting point for the search in the reference image according to the optimal MVP (an MVP is itself a candidate MV), then searches within a particular range near the starting point in a particular manner and computes rate-distortion cost values, and finally obtains an optimal MV. The optimal MV determines the position of the actual reference block (prediction block) in the reference image. The motion vector difference (motion vector difference, MVD) is obtained as the difference between the optimal MV and the optimal MVP, the index value of the optimal MVP in the AMVP candidate list is encoded, and the index of the reference image is encoded. The encoding side only needs to send the MVD, the index of the AMVP candidate list, and the index of the reference image to the decoding side in the bitstream, thereby achieving video data compression. The decoding side, on the one hand, parses the MVD, the candidate list index value, and the reference image index from the bitstream and, on the other hand, builds the same AMVP candidate list itself; it obtains the optimal MVP through the index value, obtains the optimal MV from the MVD and the optimal MVP, obtains the reference image from the reference image index, finds the actual reference block (prediction block) in the reference image using the optimal MV, and finally obtains the reconstructed block of the current block by performing motion compensation on the actual reference block (prediction block).
In the Merge mode, candidate motion vectors (candidate MVs) can likewise be obtained from the motion vectors of spatially neighbouring blocks of the current block, of the temporal reference block corresponding to the current block, or of the temporal reference blocks corresponding to neighbouring blocks of the current block. Because there are multiple neighbouring blocks, there are multiple candidate MVs, and the Merge mode constructs a candidate list based on them, herein referred to as the Merge candidate list (its length differs from that of the AMVP mode). In the Merge mode, the MV of a neighbouring block is used directly as the predicted motion vector of the current block, that is, the current block and the neighbouring block share one MV (so there is no MVD), and the reference image of the neighbouring block is used as the reference image of the current block. The Merge mode traverses all candidate MVs in the Merge candidate list, computes rate-distortion cost values, and finally selects the candidate MV with the smallest rate-distortion cost value as the optimal MV of the Merge mode; the index value of that optimal MV in the Merge candidate list is then encoded. The encoding side only needs to send the index of the Merge candidate list to the decoding side in the bitstream, thereby achieving video data compression. The decoding side, on the one hand, parses the index of the Merge candidate list from the bitstream and, on the other hand, builds the same Merge candidate list itself; it determines, through the index value, which candidate MV in the Merge candidate list is the optimal MV, uses the reference image of the neighbouring block as its own reference image, finds the actual reference block (prediction block) in the reference image using the optimal MV, and then obtains the reconstructed block of the current block by performing motion compensation on the actual reference block (prediction block).
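For illustration, the following Python sketch (with illustrative names; these are not actual H.265 syntax elements or any normative derivation) contrasts what the two modes signal for a block: AMVP codes an MVP index, an MVD, and a reference image index, while Merge codes only a candidate index.

```python
# Minimal sketch of the syntax conceptually signalled by each inter mode.
# All names and structures are illustrative assumptions.

def amvp_encode_side(optimal_mv, optimal_mvp, mvp_index, ref_index):
    # AMVP: the MVP is only a starting point, so the residual motion (MVD),
    # the chosen AMVP-list index, and the reference image index are coded.
    mvd = (optimal_mv[0] - optimal_mvp[0], optimal_mv[1] - optimal_mvp[1])
    return {"mvp_index": mvp_index, "mvd": mvd, "ref_index": ref_index}

def merge_encode_side(merge_index):
    # Merge: the neighbour's MV and reference image are reused directly,
    # so only the candidate-list index is coded (no MVD, no ref index).
    return {"merge_index": merge_index}

def amvp_decode_side(syntax, amvp_list):
    # Decoder reconstructs the optimal MV as MVP + MVD.
    mvp = amvp_list[syntax["mvp_index"]]
    mvd = syntax["mvd"]
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])
```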
Both the traditional AMVP mode and the traditional Merge mode obtain the reference image block directly based on the candidate motion vectors of neighbouring blocks. However, a candidate motion vector of a neighbouring block of the current block is not necessarily the optimal predicted motion vector; that is, the actual reference block obtained directly from such a candidate motion vector during the traditional AMVP or Merge process is not necessarily the optimal reference block of the current block. To overcome this technical defect of the traditional AMVP and Merge modes, embodiments of the present invention provide predicted motion vector generation methods that improve (update) the candidate list constructed by the traditional AMVP or Merge mode, so that the optimal reference block of the current block can be obtained during encoding and decoding based on the updated candidate list. In addition, embodiments of the present invention also provide hybrid prediction modes; based on a hybrid prediction mode, the optimal reference block of the current block can likewise be obtained during encoding and decoding.
To facilitate understanding of the technical solutions of the present invention, the following first describes unidirectional/bidirectional prediction involved in the embodiments of the present invention, the constructed candidate lists, the method of updating candidate motion vectors and candidate lists based on template matching, the encoding-decoding process based on template matching, and the way a candidate list is constructed and a candidate motion vector is selected based on decoded information.
First, inter-frame unidirectional prediction (unidirectional prediction for short) and inter-frame bidirectional prediction (bidirectional prediction for short) involved in the embodiments of the present invention are described.
Unidirectional prediction refers to determining a first predicted motion vector of the current block based on a reference image in a single direction, and thereby obtaining the prediction block of the current block in that single direction. Commonly, according to the relative relationship between the picture order count of the reference image frame and that of the current image frame, unidirectional prediction may accordingly be called forward prediction or backward prediction.
Bidirectional prediction includes first-direction prediction and second-direction prediction. First-direction prediction determines a first predicted motion vector of the current block based on a reference image in the first direction and thereby obtains the prediction block of the current block in the first direction, where the reference image in the first direction is one image in a first reference image frame set that includes a certain number of reference images. Second-direction prediction determines a second predicted motion vector of the current block based on a reference image in the second direction and thereby obtains the prediction block of the current block in the second direction, where the reference image in the second direction is one image in a second reference image frame set that includes a certain number of reference images. The prediction block obtained from the first predicted motion vector and the prediction block obtained from the second predicted motion vector are processed by a preset algorithm to finally obtain the reconstructed block of the current block; for example, the two prediction blocks are weighted-averaged to obtain the reconstructed block of the current block. Commonly, inter-frame bidirectional prediction may also be called forward-backward prediction, that is, it includes forward prediction and backward prediction; in that case, when the first-direction prediction is forward prediction, the second-direction prediction is backward prediction, and when the first-direction prediction is backward prediction, the second-direction prediction is forward prediction.
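As an illustration of the preset combining algorithm mentioned above, the following sketch simply averages the two directional prediction blocks; the plain rounding average and 8-bit samples are assumptions, since the actual weighting used by a codec may differ.

```python
import numpy as np

def combine_bi_prediction(pred_block_dir0: np.ndarray,
                          pred_block_dir1: np.ndarray) -> np.ndarray:
    # Combine the first-direction and second-direction prediction blocks of a
    # bi-predicted block by a rounded average (illustrative weighting only).
    assert pred_block_dir0.shape == pred_block_dir1.shape
    summed = pred_block_dir0.astype(np.int32) + pred_block_dir1.astype(np.int32)
    return ((summed + 1) // 2).astype(np.uint8)
```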
Second, the candidate lists involved in the embodiments of the present invention are described.
Referring to Fig. 4, Fig. 4 shows an AMVP candidate list used by the AMVP mode involved in the embodiments of the present invention. This AMVP candidate list can be applied in forward prediction or in backward prediction. Specifically, it can be applied to unidirectional prediction (forward or backward prediction), to the forward prediction in bidirectional prediction, or to the backward prediction in bidirectional prediction.
The AMVP candidate list includes a set of multiple MVPs (each MVP is also a candidate MV). An MVP can be obtained directly from the motion vector of a spatially neighbouring block of the current block, of the temporal reference block corresponding to the current block, or of the temporal reference block corresponding to a neighbouring block of the current block. As shown in (a) of Fig. 4, the AMVP candidate list includes MVP0, MVP1, ..., MVPn, where the position of each MVP in the candidate list corresponds to a specific candidate list index (specifically, an index of the AMVP candidate list); that is, each index indicates the candidate MV at the corresponding position in the list. For example, the candidate list indices corresponding to MVP0, MVP1, ..., MVPn in the figure are index 0, index 1, ..., index n, respectively.
In one application scenario, the length of the AMVP candidate list constructed based on the AMVP mode is 2. As shown in (b) of Fig. 4, the AMVP candidate list contains 2 MVPs (MVP0 and MVP1), where one MVP may be a candidate motion vector obtained from a spatially neighbouring block and, correspondingly, the other MVP may be a candidate motion vector obtained from a temporally neighbouring block. In addition, candidate list index value 0 may be defined to indicate MVP0 and index value 1 to indicate MVP1. Of course, the embodiments of the present invention do not limit the specific index values; MVP0 and MVP1 may also be indicated by other defined index values.
Referring to Fig. 5, Fig. 5 shows a Merge candidate list used by the Merge mode involved in the embodiments of the present invention. This Merge candidate list can be applied in forward prediction or in backward prediction. Specifically, it can be applied to unidirectional prediction (forward or backward prediction), to the forward prediction in bidirectional prediction, or to the backward prediction in bidirectional prediction.
The Merge candidate list includes a set of multiple candidate MVs. A candidate MV can be obtained directly from the motion vector of a spatially neighbouring block of the current block, of the temporal reference block corresponding to the current block, or of the temporal reference block corresponding to a neighbouring block of the current block. The Merge candidate list shown in (a) of Fig. 5 includes forward (and/or backward) candidate MV0, candidate MV1, ..., candidate MVn, where the position of each candidate MV in the candidate list corresponds to a specific candidate list index (specifically, an index of the Merge candidate list); that is, each index indicates the candidate MV at the corresponding position in the list. For example, the candidate list indices corresponding to candidate MV0, candidate MV1, ..., candidate MVn in the figure are index 0, index 1, ..., index n, respectively. In addition, in the embodiments of the present invention, candidate MV0, candidate MV1, ..., candidate MVn may also indicate the reference image used when constructing the current block (for example, the reference image of a neighbouring block adopted directly as the reference image of the current block).
In one application scenario, the length of the Merge candidate list constructed based on the Merge mode is 5. As shown in (b) of Fig. 5, the Merge candidate list contains 5 candidate MVs (candidate MV0, candidate MV1, candidate MV2, candidate MV3, candidate MV4), of which 4 candidate MVs may be candidate motion vectors obtained from spatially neighbouring blocks and the remaining 1 candidate MV is a candidate motion vector obtained from a temporally neighbouring block. In addition, candidate list index values 2, 3, 4, 5, and 6 may be defined to indicate candidate MV0, candidate MV1, candidate MV2, candidate MV3, and candidate MV4, respectively. Of course, the embodiments of the present invention do not limit the specific value of each index; each candidate MV may also be indicated by other defined index values.
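A minimal sketch of the two example lists above (AMVP list of length 2, Merge list of length 5), assuming the neighbour motion vectors are already available; the numeric values and helper names are purely illustrative.

```python
def build_amvp_list(spatial_mv, temporal_mv):
    # index 0 -> spatial MVP, index 1 -> temporal MVP (example mapping)
    return [spatial_mv, temporal_mv]

def build_merge_list(spatial_mvs, temporal_mv):
    # four spatial candidates followed by one temporal candidate
    return list(spatial_mvs[:4]) + [temporal_mv]

amvp_list = build_amvp_list((3, -1), (2, 0))
merge_list = build_merge_list([(3, -1), (3, 0), (2, -2), (4, -1)], (2, 0))
print(amvp_list[1])    # candidate selected by AMVP index value 1
print(merge_list[0])   # candidate selected by a Merge index
```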
Referring to Fig. 6, Fig. 6 shows another AMVP candidate list used by the AMVP mode involved in the embodiments of the present invention. This AMVP candidate list can be applied in bidirectional prediction, that is, applied at the same time to the forward prediction and the backward prediction in bidirectional prediction.
As shown in Fig. 6, the AMVP candidate list includes a set of MVPs for first-direction prediction (including MVP10, MVP11, ..., MVP1n) and a set of MVPs for second-direction prediction (including MVP20, MVP21, ..., MVP2n). Each MVP can be obtained directly from the motion vector of a spatially neighbouring block of the current block, of the temporal reference block corresponding to the current block, or of the temporal reference block corresponding to a neighbouring block of the current block. In this case, candidate list indices index 0, index 1, ..., index n may be used to indicate MVP10, MVP11, ..., MVP1n, and candidate list indices index 0, index 1, ..., index n may likewise be used to indicate MVP20, MVP21, ..., MVP2n. In specific embodiments, the index value used to indicate a first-direction MVP and the index value used to indicate a second-direction MVP may be the same or may be different. For example, index value 0 may be used to indicate both MVP10 and MVP20, or index value 0 and index value 1 may be used to indicate MVP10 and MVP20, respectively.
In the following examples of the present invention, when the first direction is forward (or backward), the candidate motion vectors used for the first-direction prediction in the AMVP candidate list may simply be called forward (or backward) candidate motion vectors; when the second direction is backward (or forward), the candidate motion vectors used for the second-direction prediction in the AMVP candidate list may simply be called backward (or forward) candidate motion vectors.
Referring to Fig. 7, Fig. 7 shows another Merge candidate list used by the Merge mode involved in the embodiments of the present invention. This Merge candidate list can be applied in bidirectional prediction, that is, applied at the same time to the forward prediction and the backward prediction in bidirectional prediction.
As shown in Fig. 7, the Merge candidate list includes a set of candidate MVs for first-direction prediction (including candidate MV10, candidate MV11, ..., candidate MV1n) and a set of candidate MVs for second-direction prediction (including candidate MV20, candidate MV21, ..., candidate MV2n). Each candidate MV can be obtained directly from the motion vector of a spatially neighbouring block of the current block, of the temporal reference block corresponding to the current block, or of the temporal reference block corresponding to a neighbouring block of the current block. In this case, candidate list indices index 0, index 1, ..., index n may be used to indicate candidate MV10, candidate MV11, ..., candidate MV1n, and candidate list indices index 0, index 1, ..., index n may likewise be used to indicate candidate MV20, candidate MV21, ..., candidate MV2n. In specific embodiments, the index value used to indicate a first-direction candidate MV and the index value used to indicate a second-direction candidate MV may be the same or may be different. For example, index value 0 may be used to indicate both candidate MV10 and candidate MV20, or index value 0 and index value 1 may be used to indicate candidate MV10 and candidate MV20, respectively.
In the following examples of the present invention, when the first direction is forward (or backward), the candidate motion vectors used for the first-direction prediction in the Merge candidate list may also simply be called forward (or backward) candidate motion vectors; when the second direction is backward (or forward), the candidate motion vectors used for the second-direction prediction in the Merge candidate list may also simply be called backward (or forward) candidate motion vectors.
Next, the template matching mode involved in the embodiments of the present invention is described.
To overcome the technical defect of the traditional AMVP or Merge mode, the template matching mode provided in the embodiments of the present invention can be used to update the candidate motion vectors used by the AMVP mode or the Merge mode, or even to update the candidate list constructed by the AMVP mode or the Merge mode. Based on the updated candidate motion vector/candidate list, the actual reference block obtained in the encoding-decoding process of the AMVP mode or the Merge mode is the best reference block of the current block, which guarantees the correctness of the obtained reconstructed block of the current block. The process of updating a candidate motion vector by template matching is described as follows. (1) In the candidate list constructed based on the AMVP mode or the Merge mode, select one candidate motion vector; the candidate motion vector determines a reference image block (reference block for short) of the current block in a reference image. (2) Determine a search range in the reference image determined by the candidate motion vector and, within the search range, determine at least one reference motion vector that differs from the candidate motion vector; each of the at least one reference motion vector corresponds to a reference image block (image block for short, so as to distinguish it from the reference block determined by the candidate motion vector) in the reference image indicated by the candidate motion vector. (3) Calculate the pixel difference value or rate-distortion (rate-distortion) cost value between at least one reconstructed block neighbouring each of the at least one image block and at least one reconstructed block neighbouring the current block, and calculate the pixel difference value or rate-distortion cost value between at least one reconstructed block neighbouring the reference block and the at least one reconstructed block neighbouring the current block. Here, the at least one reconstructed block neighbouring the current block, the at least one reconstructed block neighbouring the reference block, and the at least one reconstructed block neighbouring each image block are identical in shape and equal in size. If the at least one reconstructed block neighbouring the current block has positional relationship 1 with the current block (for example, adjacent or close to it within a certain range), the at least one reconstructed block neighbouring the reference block has positional relationship 2 with the reference block (for example, adjacent or close to it within a certain range), and the at least one reconstructed block neighbouring an image block has positional relationship 3 with that image block (for example, adjacent or close to it within a certain range), then positional relationship 1, positional relationship 2, and positional relationship 3 may be identical; in a possible embodiment, however, they may also differ. (4) From the pixel difference values or rate-distortion cost values obtained for the at least one image block and the pixel difference value or rate-distortion cost value obtained for the reference block, determine a minimum pixel difference value or rate-distortion cost value; the motion vector corresponding to this minimum value is either the candidate motion vector or one of the at least one reference motion vector. Specifically, the rate-distortion cost value may be obtained by calculating the pixel difference value between the reconstructed blocks, calculating the product of a preset coefficient and the number of bits required to obtain the motion vector information (for example, the number of coding/decoding bits consumed by the motion vector difference of the current block), and summing the two. (5) Use the motion vector corresponding to the minimum pixel difference value or rate-distortion cost value as the new candidate motion vector. Specifically, if that motion vector is one of the at least one reference motion vector, that reference motion vector is used as the new candidate motion vector, and the originally selected candidate motion vector is updated with the new candidate motion vector; if that motion vector is the original candidate motion vector itself, the candidate motion vector does not need to be updated. Steps (1) to (5) above thus constitute the process of updating a candidate motion vector by template matching. It should be understood that if, after steps (1) to (5), the obtained motion vector is one of the at least one reference motion vector, that reference motion vector is used as the new candidate motion vector and replaces the original candidate motion vector in the constructed candidate list, thereby updating the candidate list by template matching.
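A minimal sketch of the update of one candidate motion vector by template matching, under the following assumptions: the search range is a small square window around the candidate, the template comparison uses a sum of absolute differences, and the rate term is a lambda-weighted bit count. The helper callables get_template and mv_bits are hypothetical.

```python
import numpy as np

def sad(a, b):
    # pixel difference value between two neighbouring reconstructed templates
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def update_candidate(cand_mv, cur_template, get_template, mv_bits,
                     lam=10.0, search_range=1):
    # cur_template: neighbouring reconstructed blocks of the current block
    # get_template(mv): neighbouring reconstructed blocks of the block at mv
    # mv_bits(mv): bits needed to signal the motion information (assumption)
    best_mv, best_cost = cand_mv, None
    for dx in range(-search_range, search_range + 1):
        for dy in range(-search_range, search_range + 1):
            mv = (cand_mv[0] + dx, cand_mv[1] + dy)   # dx=dy=0 is the reference block itself
            # rate-distortion cost: template distortion + lambda * bits
            cost = sad(cur_template, get_template(mv)) + lam * mv_bits(mv)
            if best_cost is None or cost < best_cost:
                best_mv, best_cost = mv, cost
    return best_mv   # replaces cand_mv in the list only when it differs
```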
For example, referring to Fig. 8, Fig. 8 shows a schematic diagram of template matching. In this template matching process, a candidate motion vector is selected based on the candidate list constructed by the AMVP mode or the Merge mode; the candidate motion vector determines a reference block in a reference image, and the search range determined in combination with the candidate motion vector contains one reference motion vector (reference motion vector 1 in the figure), which determines image block 1 in the reference image. In the template matching, the reconstructed blocks neighbouring the current block are A1 and A2, the reconstructed blocks neighbouring the reference block are B1 and B2, and the reconstructed blocks neighbouring image block 1 are C1 and C2. {A1, A2}, {B1, B2}, and {C1, C2} are identical in shape and equal in size, and they have the same positional relationship with the current block, the reference block, and image block 1, respectively. The pixel difference value or rate-distortion cost value between {C1, C2} of image block 1 and {A1, A2} of the current block is calculated, and the pixel difference value or rate-distortion cost value between {B1, B2} of the reference block and {A1, A2} of the current block is calculated. The motion vector corresponding to the smaller of the two pixel difference values or rate-distortion cost values is used as the candidate motion vector; for example, if the motion vector corresponding to the minimum pixel difference value or rate-distortion cost value is reference motion vector 1, reference motion vector 1 is used as the new candidate motion vector and replaces the candidate motion vector in the candidate list. It should be understood that if the motion vector corresponding to the minimum pixel difference value or rate-distortion cost value is the candidate motion vector itself, the candidate motion vector in the candidate list does not need to be updated.
For another example, referring to Fig. 9, Fig. 9 shows a schematic diagram of another template matching. In this template matching process, a candidate motion vector is selected based on the candidate list constructed by the AMVP mode or the Merge mode; the candidate motion vector determines a reference block in a reference image, and the search range determined in combination with the candidate motion vector contains two reference motion vectors (reference motion vector 1 and reference motion vector 2 in the figure), which determine image block 1 and image block 2 in the reference image, respectively. In the template matching, the reconstructed blocks neighbouring the current block are A1 and A2, those neighbouring the reference block are B1 and B2, those neighbouring image block 1 are C1 and C2, and those neighbouring image block 2 are D1 and D2. {A1, A2}, {B1, B2}, {C1, C2}, and {D1, D2} are identical in shape and equal in size, and they have the same positional relationship with the current block, the reference block, image block 1, and image block 2, respectively. The pixel difference value or rate-distortion cost value between {C1, C2} of image block 1 and {A1, A2} of the current block is calculated, the pixel difference value or rate-distortion cost value between {D1, D2} of image block 2 and {A1, A2} of the current block is calculated, and the pixel difference value or rate-distortion cost value between {B1, B2} of the reference block and {A1, A2} of the current block is calculated. The motion vector corresponding to the smallest of these pixel difference values or rate-distortion cost values is selected as the candidate motion vector; for example, if the motion vector corresponding to the minimum pixel difference value or rate-distortion cost value is reference motion vector 1 or reference motion vector 2, that reference motion vector is used as the new candidate motion vector and replaces the candidate motion vector in the candidate list. It should be understood that if the motion vector corresponding to the minimum pixel difference value or rate-distortion cost value is the candidate motion vector itself, the candidate motion vector in the candidate list does not need to be updated.
It should be noted that, in specific embodiments of the present invention, updating the candidate list by template matching may mean updating one candidate motion vector in the candidate list, updating multiple candidate motion vectors in the candidate list, or updating all candidate motion vectors in the candidate list.
The following describes the encoding-decoding process, involved in the embodiments of the present invention, of obtaining the reconstructed block of the current block in the Merge mode based on template matching; the process can be divided into an encoding process and a decoding process.
Referring to Fig. 10, Fig. 10 shows a specific encoding process of the Merge mode in an embodiment of the present invention. As shown in Fig. 10, the encoding side constructs the candidate list of the Merge mode (the Merge candidate list), which includes candidate MV1, candidate MV2, and so on. Candidate MV2 in the candidate list is selected and updated using template matching (see the description above for the specific process) to obtain a reference motion vector, which replaces candidate MV2 in the candidate list as a new candidate motion vector, thereby updating the candidate list; the updated candidate list includes candidate MV1 and the reference motion vector. Then, for the updated candidate list, a predicted motion vector is obtained in the traditional way (here the predicted motion vector is the optimal MV); that is, the Merge mode traverses all candidate MVs in the candidate list (including candidate MV1 and the reference motion vector), computes rate-distortion cost values, finally selects the candidate MV with the smallest rate-distortion cost value as the optimal MV of the Merge mode, constructs the prediction block of the current block based on the optimal MV (in unidirectional prediction, the prediction block of the current block is the reconstructed block of the current block), and encodes the index value of the optimal MV. For example, if the optimal MV is the reference motion vector, the prediction block of the current block is constructed based on the reference motion vector and the index value corresponding to the reference motion vector is encoded. The encoding side sends the index value of the Merge candidate list to the decoding side in the bitstream.
At the decoding side, on the one hand, the decoding side constructs the Merge candidate list based on the same rule as the encoding side; the candidate list includes candidate MV1, candidate MV2, and so on. On the other hand, the decoding side parses the bitstream sent by the encoding side to obtain the index value of the Merge candidate list and updates the candidate motion vector indicated by the index value using template matching. For example, if the index value indicates candidate MV2 in the candidate list, candidate MV2 is updated using template matching (see the description above for the specific process) to obtain a reference motion vector. Then, in a possible embodiment, the reference motion vector replaces candidate MV2 in the candidate list as a new candidate motion vector, thereby updating the candidate list; the updated candidate list includes candidate MV1 and the reference motion vector. For the updated candidate list, the reference motion vector directly determined from the index value is used as the predicted motion vector of the current block (this predicted motion vector is the optimal MV), and the prediction block of the current block is constructed based on this predicted motion vector (optimal MV) (in unidirectional prediction, the prediction block of the current block is the reconstructed block of the current block). In another possible embodiment, after the reference motion vector is obtained by template matching, it may also be used directly as the predicted motion vector of the current block, and the prediction block of the current block is constructed based on that predicted motion vector.
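The decoder-side Merge flow just described can be summarized by the following sketch; the bitstream reader methods and helper names are illustrative assumptions, not an actual decoder API.

```python
def merge_decode(bitstream, build_merge_list, template_match_update):
    merge_index = bitstream.read_merge_index()   # parsed from the bitstream
    merge_list = build_merge_list()              # same rule as the encoding side
    # only the candidate indicated by the parsed index is refined
    refined_mv = template_match_update(merge_list[merge_index])
    merge_list[merge_index] = refined_mv         # optional candidate-list update
    return refined_mv                            # predicted MV of the current block
```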
It should be noted that, in specific embodiments of the present invention, updating the Merge candidate list at the encoding side using template matching may mean updating one candidate motion vector in the candidate list, updating multiple candidate motion vectors, or updating all candidate motion vectors in the candidate list. At the decoding side, however, only the candidate MV indicated by the index value of the Merge candidate list needs to be updated using template matching, which improves the decoding efficiency of the decoding side. Referring to Fig. 11, Fig. 11 shows another specific encoding process of the Merge mode in an embodiment of the present invention. As shown in Fig. 11, the encoding side updates multiple candidate MVs in the candidate list (including candidate MV1 and candidate MV2), determines, based on the updated candidate list, that reference motion vector 2 is the predicted motion vector (optimal MV) of the current block, constructs the prediction block of the current block based on reference motion vector 2, and encodes the candidate list index corresponding to reference motion vector 2. At the decoding side, only the candidate MV2 indicated by the received candidate list index needs to be updated using template matching to obtain reference motion vector 2, and the prediction block of the current block is constructed based on reference motion vector 2.
It should also be noted that, in possible embodiments of the present invention, if the predicted motion vector obtained by the encoding side based on the updated candidate list is not one of the updated candidate motion vectors, then after receiving the index of the candidate list, the decoding side can directly select the candidate motion vector in the candidate list according to the index and use it as the predicted motion vector of the decoding side, without updating that candidate motion vector again.
The following describes the encoding-decoding process, involved in the embodiments of the present invention, of obtaining the reconstructed block of the current block in the AMVP mode based on template matching; the process can be divided into an encoding process and a decoding process.
Referring to Fig. 12, Fig. 12 shows a specific encoding process of the AMVP mode in an embodiment of the present invention. As shown in Fig. 12, the encoding side constructs the candidate list of the AMVP mode (the AMVP candidate list), which includes MVP1, MVP2, and so on. MVP2 in the candidate list is selected and updated using template matching (see the description above for the specific process) to obtain a reference motion vector, which replaces MVP2 in the candidate list as a new candidate motion vector, thereby updating the candidate list; the updated candidate list includes MVP1 and the reference motion vector. Then, for the updated candidate list, a predicted motion vector is obtained in the traditional way (here the predicted motion vector is the optimal MVP): an optimal MVP is selected from the AMVP candidate list, a starting point for the search in the reference image is determined according to the optimal MVP, a search is then performed within a particular range near the starting point in a particular manner with rate-distortion cost values being computed, and finally an optimal MV is obtained. The optimal MV determines the position of the prediction block of the current block in the reference image (in unidirectional prediction, the prediction block of the current block is the reconstructed block of the current block). The motion vector difference MVD is obtained as the difference between the optimal MV and the optimal MVP, the index value of the optimal MVP in the AMVP candidate list is encoded, and the index of the reference image is encoded. For example, if the optimal MVP is the reference motion vector (that is, the reference motion vector is exactly the predicted motion vector), the prediction block of the current block is constructed based on the reference motion vector, the index value corresponding to the reference motion vector is encoded, the MVD is encoded, and the index of the reference image corresponding to the reference motion vector is encoded. The encoding side then sends the encoded index value of the AMVP candidate list, the MVD, and the index of the reference image to the decoding side in the bitstream.
At the decoding side, on the one hand, the decoding side constructs the AMVP candidate list based on the same rule as the encoding side; the candidate list includes MVP1, MVP2, and so on. On the other hand, the decoding side parses the bitstream sent by the encoding side to obtain the index value of the AMVP candidate list, the MVD, and the index of the reference image, and updates the candidate motion vector indicated by the index value using template matching. For example, if the index value indicates MVP2 in the candidate list, MVP2 is updated using template matching (see the description above for the specific process) to obtain a reference motion vector. Then, in a possible embodiment, the reference motion vector replaces MVP2 in the candidate list as a new candidate motion vector, thereby updating the candidate list; the updated candidate list includes MVP1 and the reference motion vector. For the updated candidate list, the reference motion vector directly determined from the index value is used as the optimal MVP of the current block (the optimal MVP is the predicted motion vector of the current block); the optimal MVP is then combined with the MVD to obtain the optimal MV of the current block, the reference image corresponding to the optimal MV is determined based on the index of the reference image, and the prediction block of the current block is constructed (in unidirectional prediction, the prediction block of the current block is the reconstructed block of the current block). In another possible embodiment, after the reference motion vector is obtained by template matching, it may also be used directly as the optimal MVP (predicted motion vector) of the current block, and the prediction block of the current block is finally constructed based on that predicted motion vector.
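Similarly, the decoder-side AMVP flow just described can be summarized as follows; the bitstream reader methods and helper names are again illustrative assumptions.

```python
def amvp_decode(bitstream, build_amvp_list, template_match_update):
    mvp_index = bitstream.read_mvp_index()
    mvd = bitstream.read_mvd()
    ref_index = bitstream.read_ref_index()
    amvp_list = build_amvp_list()                # same rule as the encoding side
    # refine only the MVP indicated by the parsed index
    mvp = template_match_update(amvp_list[mvp_index])
    optimal_mv = (mvp[0] + mvd[0], mvp[1] + mvd[1])   # MV = refined MVP + MVD
    return optimal_mv, ref_index                 # used to locate the prediction block
```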
Similarly, in specific embodiments of the present invention, updating the AMVP candidate list at the encoding side using template matching may mean updating one candidate motion vector in the candidate list, updating multiple candidate motion vectors, or updating all candidate motion vectors in the candidate list. At the decoding side, however, only the candidate MV indicated by the index value of the AMVP candidate list needs to be updated using template matching, which improves the decoding efficiency of the decoding side. Referring to Fig. 13, Fig. 13 shows another specific encoding process of the AMVP mode in an embodiment of the present invention. As shown in Fig. 13, the encoding side updates multiple MVPs in the candidate list (including MVP1 and MVP2), determines, based on the updated candidate list, that reference motion vector 2 is the predicted motion vector (optimal MVP) of the current block, finally constructs the prediction block of the current block based on reference motion vector 2, and encodes the candidate list index corresponding to reference motion vector 2, the MVD, and the reference image. At the decoding side, only the MVP2 indicated by the received candidate list index needs to be updated using template matching to obtain reference motion vector 2, and the prediction block of the current block is finally constructed based on reference motion vector 2, the MVD, and the reference image.
Similarly, in possible embodiments of the present invention, if the predicted motion vector obtained by the encoding side based on the updated candidate list is not one of the updated candidate motion vectors, then after receiving the index of the candidate list, the decoding side can directly select the candidate motion vector in the candidate list according to the index and use it as the predicted motion vector of the decoding side, without updating that candidate motion vector again.
The following describes how, in the decoding process (for example, the entropy decoding process), a candidate list is constructed based on decoded information and a candidate motion vector is selected based on decoded information.
In possible embodiments of the present invention, the decoding side can determine, according to the decoded identification information of the candidate list and/or the decoded index value of the candidate list, which prediction mode is used in the current encoding/decoding and/or which candidate motion vector in the candidate list is selected.
In a possible embodiment, the decoding side can obtain identification information (for example, a marker in the bitstream) and the index value of the candidate list by parsing the bitstream. The identification information indicates whether the candidate motion vector set is constructed based on the Merge mode or the AMVP mode; for example, a marker of 0 indicates the Merge mode and a marker of 1 indicates the AMVP mode. The index value of the candidate list indicates a specific candidate motion vector in the candidate list; for example, index value 0 indicates the first candidate motion vector in the candidate list, index value 1 indicates the second candidate motion vector in the candidate list, and so on. It should be understood that, in this case, if the combination {marker 0, index value 1} is obtained in the current decoding, the prediction mode used for decoding the current block is the Merge mode, and the second candidate MV in the Merge candidate list is selected in the subsequent decoding steps.
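A sketch of this {marker, index value} signalling, using the example mapping above (marker 0 for Merge, marker 1 for AMVP); the function and argument names are illustrative.

```python
def parse_mode_and_candidate(marker, index_value, merge_list, amvp_list):
    # marker selects the mode; the index value selects the candidate in that list
    if marker == 0:
        return "Merge", merge_list[index_value]
    return "AMVP", amvp_list[index_value]
```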
In a kind of possible embodiment, the identification information (such as marker in code stream code) can be both used to refer to Show prediction mode used by decoding current block, while can also be used to indicate based on candidate list constructed by the prediction mode Index value.Specifically, being also used to indicate when it is Merge mode that the identification information, which is used to indicate the prediction mode, The index information of Merge mode, or, being also used to when it is AVMP mode that the identification information, which is used to indicate the prediction mode, Indicate the index information of AMVP mode.For example, being used to indicate the candidate list of AMVP mode when the mark bit value is 0 or 1 First MVP or second MVP;When the mark bit value is 2,3,4,5,6, it is respectively used to instruction Merge mode First, second, third and fourth, five candidate MV of candidate list.
In a kind of possible embodiment, the identification information (such as marker in code stream code) is used to indicate decoding Prediction mode used by current block, also, when the prediction mode of identification information instruction is AMVP mode, it indicates simultaneously The index value of AMVP candidate list;When the prediction mode of identification information instruction is Merge mode, decoding end uses default Merge mode index value.For example, being used to indicate the candidate list of AMVP mode when the mark bit value is 0 or 1 First MVP or second MVP;When it is described mark bit value be it is non-zero and non-1 special value when (such as 2), then be used to indicate Merge mode, in this case, decoding end is directly chosen in the candidate list that Merge mode is established in subsequent decoding step First candidate MV (the candidate MV that can also be any other default index position).
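As an illustration of the example values given above, the following sketch maps one signaled value to a prediction mode and a candidate index. The concrete values and thresholds follow the examples in the text and are not normative; the function names are hypothetical.

```python
# Illustrative sketch: one signaled value jointly indicates the mode and the
# candidate index (0/1 -> AMVP MVP, 2..6 -> Merge candidate), as exemplified above.
def parse_mode_and_index(value):
    if value in (0, 1):
        return "AMVP", value            # first or second MVP of the AMVP list
    elif 2 <= value <= 6:
        return "Merge", value - 2       # first to fifth Merge candidate
    raise ValueError("value outside the example signaling range")

# Variant with a preset Merge index: any value other than 0/1 means Merge and
# the decoder uses a preset candidate position (here the first one).
def parse_mode_and_index_default(value):
    if value in (0, 1):
        return "AMVP", value
    return "Merge", 0
```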
In a possible embodiment, in bidirectional prediction, if both the encoding end and the decoding end use the hybrid prediction mode, at least two pieces of identification information can be decoded: one indicates that one direction uses the Merge mode and the other indicates that the other direction uses the AMVP mode. Alternatively, one piece of identification information indicates that one direction uses the Merge mode and indicates the index value of the Merge candidate list, while the other indicates that the other direction uses the AMVP mode and indicates the index value of the AMVP candidate list.
In a possible embodiment, in bidirectional prediction, if both the encoding end and the decoding end use the hybrid prediction mode (that is, the Merge mode and the AMVP mode are used for different directions), at least two combinations can be decoded: {first identification information, first index value} and {second identification information, second index value}. In {first identification information, first index value}, the first identification information indicates the prediction mode of the first direction and the first index value indicates the index value of the candidate list of the first direction; in {second identification information, second index value}, the second identification information indicates the prediction mode of the second direction and the second index value indicates the index value of the candidate list of the second direction.
In a possible embodiment, the decoder may also obtain the index value of the candidate list by parsing the bitstream, and this index value may indicate both the prediction mode used for decoding the current block and a specific candidate motion vector of the candidate list constructed based on that prediction mode. For example, when the index value is 0 or 1, it indicates the first or second MVP of the AMVP candidate list; when the index value is 2, 3, 4, 5 or 6, it indicates the first, second, third, fourth or fifth candidate MV of the Merge candidate list, respectively. In a possible embodiment, in bidirectional prediction, if both the encoding end and the decoding end use the hybrid prediction mode, at least two index values can be decoded: one indicates that one direction uses the Merge mode and indicates the specific candidate motion vector in the Merge candidate list, and the other indicates that the other direction uses the AMVP mode and indicates the specific candidate motion vector in the AMVP candidate list.
It should be noted that the above specific embodiments are only reference examples and do not limit the embodiments of the present invention.
Based on the unidirectional/bidirectional prediction described above, the constructed candidate lists, the method of updating candidate motion vectors and candidate lists by template matching, and the encoding/decoding processes based on template matching, the predicted motion vector generation method provided by the embodiments of the present invention is described below.
Referring to Figure 14, Figure 14 is a schematic flowchart of a predicted motion vector generation method provided in an embodiment of the present invention. The method includes but is not limited to the following steps:
Step S101: construct the candidate motion vector set of the current block. The candidate motion vector set may include multiple candidate motion vectors of the current block (a candidate motion vector may also be called a candidate predicted motion vector), and in a possible embodiment may further include the reference picture information corresponding to the multiple candidate motion vectors of the current block. Specifically, the candidate motion vector set is a Merge candidate list constructed based on the Merge mode, or an AMVP candidate list constructed based on the AMVP mode.
Specifically, for the encoding end, the encoder may construct the candidate motion vector set of the current block according to a preset rule (for example a conventional approach): for instance, the motion vectors of spatially neighbouring blocks of the current block, of the temporal reference block corresponding to the current block, or of the temporal reference blocks corresponding to the neighbouring blocks of the current block are taken as candidate motion vectors, and the candidate list is constructed from these candidate motion vectors based on the preset Merge mode or AMVP mode.
At the decoding end, the decoder may determine, from the decoded identification information or index value, which prediction mode is used for the current coding/decoding, and then construct the corresponding candidate list.
For example, the identification information indicates the prediction mode used for decoding the current block; when the prediction mode indicated by the identification information is the AMVP mode, it simultaneously indicates the index value of the AMVP candidate list, and when the prediction mode indicated by the identification information is the Merge mode, the decoder uses a preset index value for the Merge mode.
For example, the identification information may indicate both the prediction mode used for decoding the current block and the index value of the candidate list constructed based on that prediction mode.
For example, the decoder may obtain the identification information and the index value of the candidate list by parsing the bitstream; the identification information indicates whether the candidate motion vector set is constructed based on the Merge mode or the AMVP mode, and the index value of the candidate list indicates a specific candidate motion vector in the candidate list.
For example, the decoder may also obtain the index value of the candidate list by parsing the bitstream; this index value may indicate both the prediction mode used for decoding the current block and a specific candidate motion vector of the candidate list constructed based on that prediction mode.
The implementation of these and other possible embodiments may also refer to the related description above and is not detailed again here.
Step S102: choose a candidate motion vector in the candidate motion vector set, and determine at least one reference motion vector based on the candidate motion vector, where a reference motion vector is used to determine a reference block of the current block in the reference picture of the current block.
Specifically, determining the reference block of the current block in the reference picture according to the candidate motion vector includes: determining a search range in the reference picture in combination with the value of the candidate motion vector, and searching the positions near the determined reference block with a target precision to obtain candidate reference blocks, where each candidate reference block corresponds to one reference motion vector, and the target precision is one of 4-pixel precision, 2-pixel precision, integer-pixel precision, half-pixel precision, 1/4-pixel precision and 1/8-pixel precision.
Step S103: separately calculate the pixel difference value or rate-distortion cost value between at least one neighbouring reconstructed block of each determined reference block and at least one neighbouring reconstructed block of the current block. Specifically, this embodiment of the present invention updates the candidate motion vector by means of template matching; the specific implementation may refer to the related description above and is not repeated here.
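As a non-normative illustration of steps S102-S104, the following sketch searches around the position given by the candidate MV at a chosen precision, compares the template (neighbouring reconstructed samples) of each candidate reference block with the template of the current block, and keeps the vector with the smallest pixel difference. SAD is used here; a rate-distortion cost could be used instead. The helper get_template() and all names are assumptions.

```python
import numpy as np

# Illustrative sketch of the template-matching update of a candidate MV.
# get_template(ref_pic, mv) is assumed to return the left/above reconstructed
# samples of the block position addressed by mv in the reference picture.
def refine_candidate_mv(cand_mv, ref_pic, cur_template, get_template,
                        search_range=1, step=1.0):
    best_mv, best_cost = cand_mv, None
    offsets = np.arange(-search_range, search_range + step, step)  # e.g. -1, 0, 1
    for dy in offsets:
        for dx in offsets:
            mv = (cand_mv[0] + dx, cand_mv[1] + dy)
            ref_template = get_template(ref_pic, mv)
            cost = np.abs(cur_template.astype(np.int64)
                          - ref_template.astype(np.int64)).sum()   # SAD
            if best_cost is None or cost < best_cost:
                best_mv, best_cost = mv, cost
    return best_mv   # replaces the original candidate when it differs
```

A smaller `step` (for example 0.5 or 0.25) would correspond to the half-pixel or quarter-pixel target precisions listed above.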
Step S104: obtain the predicted motion vector of the current block according to the reference motion vector, among the at least one reference motion vector, with the smallest corresponding pixel difference value or rate-distortion cost value.
In a specific embodiment, the reference motion vector with the smallest corresponding pixel difference value or rate-distortion cost value may replace the candidate motion vector in the candidate motion vector set, and the predicted motion vector of the current block is obtained from the candidate motion vector set containing the replaced candidate motion vector. For the Merge mode, the predicted motion vector is the actual motion vector (actual MV) of the current block, and the encoder or decoder may obtain the prediction block of the current block from the predicted motion vector (in unidirectional prediction, the prediction block is the final reconstructed block; in bidirectional prediction, the reconstructed block of the current block is obtained from the prediction blocks using a preset algorithm). For the AMVP mode, the predicted motion vector is the optimal MVP of the current block, and the encoder or decoder may obtain the actual MV from the predicted motion vector and then obtain the prediction block of the current block (in unidirectional prediction, the prediction block is the final reconstructed block; in bidirectional prediction, the reconstructed block of the current block is obtained from the prediction blocks using a preset algorithm).
In a bidirectional prediction scenario of this embodiment of the present invention, after the candidate motion vector of the forward (or backward) prediction has been updated (for example by means of template matching), the candidate motion vector of the backward (or forward) prediction may also be updated directly based on the update information of the forward (or backward) prediction. The specific process includes: calculating the difference between the replaced candidate motion vector of the forward (or backward) candidate list and the candidate motion vector before replacement; combining this difference with the candidate motion vector of the backward (or forward) prediction to obtain a new candidate motion vector of the backward (or forward) prediction; and replacing the original candidate motion vector of the backward (or forward) prediction with the new one, so that the candidate motion vector of the backward (or forward) prediction is also updated.
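For illustration, the following sketch propagates a template-matching update from one prediction direction to the other without a second search. How the difference is combined (added directly, mirrored, or scaled) is an assumption of this sketch; the embodiment only states that the difference and the other direction's candidate are combined.

```python
# Illustrative sketch: update the other direction's candidate from the
# difference between the refined and the original candidate of one direction.
def propagate_update(old_fwd_mv, refined_fwd_mv, old_bwd_mv, scale=1.0):
    diff = (refined_fwd_mv[0] - old_fwd_mv[0],
            refined_fwd_mv[1] - old_fwd_mv[1])
    new_bwd_mv = (old_bwd_mv[0] + scale * diff[0],
                  old_bwd_mv[1] + scale * diff[1])
    return new_bwd_mv   # replaces the original backward (or forward) candidate
```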
It can be seen that, in this embodiment of the present invention, the video coding and decoding system can use template matching to check whether an image block within a certain range of the reference picture of the current block (or even within the whole reference picture) matches the current block better, and thereby update the candidate motion vectors of the candidate list constructed based on the Merge or AMVP mode. Based on the updated candidate list, an optimal reference block for the current block can be obtained during encoding and decoding, and an optimal reconstructed block is finally obtained.
Specific implementations of the predicted motion vector generation method provided by the embodiments of the present invention are described below in detail.
Referring to Figure 15, Figure 15 shows a predicted motion vector generation method provided in an embodiment of the present invention, described from the perspective of the encoder. The method may be used in unidirectional prediction (forward prediction or backward prediction) and includes but is not limited to the following steps:
Step S201: the encoder constructs an AMVP candidate list or a Merge candidate list.
Specifically, if the constructed candidate list is a candidate list for unidirectional prediction, refer to the related description of the Fig. 4 or Fig. 5 embodiment; if the constructed candidate list is a candidate list for bidirectional prediction, refer to the related description of the Fig. 6 or Fig. 7 embodiment. Details are not repeated here.
Step S202: the encoder chooses one candidate motion vector (that is, an MVP or a candidate MV) in the constructed AMVP candidate list or Merge candidate list.
Step S203: the encoder determines a search range in the reference picture in combination with the chosen candidate motion vector, the search range containing at least one motion vector value, and obtains at least one image block in the reference picture of the current block according to the at least one motion vector value in the search range.
Specifically, the positions near the reference block determined by the candidate motion vector are searched with a target precision to obtain the at least one image block, where each image block corresponds to one reference motion vector, and the target precision is one of 4-pixel precision, 2-pixel precision, integer-pixel precision, half-pixel precision, 1/4-pixel precision and 1/8-pixel precision.
Step S204: the encoder separately calculates the pixel difference value or rate-distortion cost value between at least one neighbouring reconstructed block of the current block and at least one neighbouring reconstructed block of each image block, and calculates the pixel difference value or rate-distortion cost value between at least one neighbouring reconstructed block of the current block and at least one neighbouring reconstructed block of the reference block corresponding to the candidate motion vector. Among these pixel difference values or rate-distortion cost values, the smallest one is selected, and the motion vector corresponding to the smallest pixel difference value or rate-distortion cost value is taken as the candidate motion vector of the current block.
Step S205: if the motion vector corresponding to the smallest pixel difference value or rate-distortion cost value differs from the candidate motion vector chosen in step S202, the encoder takes the motion vector corresponding to the smallest pixel difference value or rate-distortion cost value as a new candidate motion vector of the current block and updates the constructed AMVP candidate list or Merge candidate list. Specifically, the motion vector corresponding to the smallest pixel difference value or rate-distortion cost value replaces the corresponding candidate motion vector in the AMVP candidate list or Merge candidate list.
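The following sketch, offered only as an illustration, strings steps S202-S205 together: each chosen candidate is refined by template matching (as in the earlier sketch) and, if the refined vector differs, it replaces that candidate in the list; looping over several indices corresponds to repeating the steps for multiple or all candidates. Names are assumptions.

```python
# Illustrative sketch of encoder steps S202-S205.
def update_candidate_list(candidates, refine_candidate_mv, indices=None):
    updated = list(candidates)
    for i in (indices if indices is not None else range(len(candidates))):
        refined = refine_candidate_mv(updated[i])   # smallest template cost wins
        if refined != updated[i]:
            updated[i] = refined                     # step S205: replace in the list
    return updated
```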
It should be noted that the specific implementation of steps S201-S205 may refer to the related description of the Fig. 8-Fig. 9 embodiments and is not repeated here.
It should also be noted that this embodiment of the present invention may repeat steps S201-S205 so that multiple (or even all) candidate motion vectors in the AMVP candidate list or Merge candidate list are updated.
In a possible embodiment of the present invention, if the constructed list is an AMVP candidate list or Merge candidate list for bidirectional prediction, then during the update of the candidate list, both the forward candidate motion vector and the backward candidate motion vector indicated by the index value are updated.
Step S206: the encoder obtains the predicted motion vector of the current block based on the updated AMVP candidate list or Merge candidate list, then obtains the reconstructed block of the current block based on the predicted motion vector of the current block, and sends the bitstream to the decoder.
Specifically, if the prediction mode used by the encoder is the AMVP mode, the predicted motion vector (optimal MVP) is determined based on the updated AMVP candidate list, the reconstructed block of the current block is then obtained, and the encoder encodes the motion vector difference information (MVD) corresponding to the current block, the index of the reference picture corresponding to the reconstructed block, and the index value of the AMVP candidate list corresponding to the optimal MVP. It should be understood that the bitstream sent by the encoder to the decoder contains the MVD, the index value of the AMVP candidate list and the index of the reference picture.
Specifically, if the prediction mode used by the encoder is the Merge mode, the predicted motion vector (optimal MV) is determined based on the updated Merge candidate list, the reconstructed block of the current block is then obtained, and the encoder encodes the index value of the Merge candidate list corresponding to the optimal MV. It should be understood that the bitstream sent by the encoder to the decoder contains the index value of the Merge candidate list.
It should be noted that the implementation of steps S201-S206 of this embodiment of the present invention may also refer to the related description of the encoder in the Figure 10-Figure 11 embodiments or the Figure 12-Figure 13 embodiments, and is not repeated here.
It should also be noted that steps of the encoder that are not described in detail may also refer to the related description above. For brevity of the specification, they are not expanded one by one here.
Referring to Figure 16, Figure 16 shows a predicted motion vector generation method provided in an embodiment of the present invention, described from the perspective of the decoder; the decoder may correspond to the encoder in the Figure 15 embodiment. The method may be used in unidirectional prediction (forward prediction or backward prediction) and includes but is not limited to the following steps:
Step S301: the decoder constructs an AMVP candidate list or a Merge candidate list.
It should be understood that in this embodiment of the present invention the decoder uses a prediction mode consistent with the encoder. That is, if the encoder constructs the candidate list based on the AMVP mode, the decoder likewise constructs the candidate list based on the AMVP mode, and the list is established in the same way as at the encoder; if the encoder constructs the Merge candidate list based on the Merge mode, the decoder likewise constructs the Merge candidate list based on the Merge mode, and the list is established in the same way as at the encoder. Specifically, if the constructed candidate list is a candidate list for unidirectional prediction, refer to the related description of the Fig. 4 or Fig. 5 embodiment; if the constructed candidate list is a candidate list for bidirectional prediction, refer to the related description of the Fig. 6 or Fig. 7 embodiment. Details are not repeated here.
Step S302: the decoder parses the bitstream to obtain the index value of the AMVP candidate list or Merge candidate list, and chooses, based on this index value, the candidate motion vector indicated by the index value in the AMVP candidate list or Merge candidate list (that is, an MVP or a candidate MV).
Specifically, if both the encoder and the decoder use the AMVP mode, the decoder, while parsing the bitstream, decodes the index value of the AMVP candidate list, the index of the reference picture and the motion vector difference information (MVD), and chooses the candidate motion vector indicated by the index value in the AMVP candidate list based on the index value.
If both the encoder and the decoder use the Merge mode, the decoder, while decoding the bitstream, decodes the index value of the Merge candidate list and chooses the candidate motion vector indicated by the index value in the Merge candidate list based on the index value.
For example, when the decoded index value is 0 or 1, the value 0 or 1 indicates the first or second MVP of the AMVP candidate list; when the decoded index value is 2, 3, 4, 5 or 6, the value indicates the first, second, third, fourth or fifth candidate MV of the Merge candidate list, respectively.
Step S303: the decoder determines a search range in the reference picture in combination with the chosen candidate motion vector, the search range containing at least one motion vector value, and obtains at least one image block in the reference picture of the current block according to the at least one motion vector value in the search range.
Specifically, the positions near the reference block determined by the candidate motion vector are searched with a target precision to obtain the at least one image block, where each image block corresponds to one reference motion vector, and the target precision is one of 4-pixel precision, 2-pixel precision, integer-pixel precision, half-pixel precision, 1/4-pixel precision and 1/8-pixel precision.
Step S304: separately calculate the pixel difference value or rate-distortion cost value between at least one neighbouring reconstructed block of the current block and at least one neighbouring reconstructed block of each image block, and calculate the pixel difference value or rate-distortion cost value between at least one neighbouring reconstructed block of the current block and at least one neighbouring reconstructed block of the reference block corresponding to the candidate motion vector. Among these pixel difference values or rate-distortion cost values, the smallest one is selected, and the motion vector corresponding to the smallest pixel difference value or rate-distortion cost value is taken as the candidate motion vector of the current block.
Step S305: if the motion vector corresponding to the smallest pixel difference value or rate-distortion cost value differs from the candidate motion vector chosen in step S302, the decoder takes the motion vector corresponding to the smallest pixel difference value or rate-distortion cost value as a new candidate motion vector of the current block and updates the constructed AMVP candidate list or Merge candidate list. Specifically, the motion vector corresponding to the smallest pixel difference value or rate-distortion cost value replaces the corresponding candidate motion vector in the AMVP candidate list or Merge candidate list.
It should be noted that the specific implementation of steps S301-S305 may refer to the related description of the Fig. 8-Fig. 9 embodiments and is not repeated here.
It should also be noted that this embodiment of the present invention may repeat steps S301-S305 so that multiple (or even all) candidate motion vectors in the AMVP candidate list or Merge candidate list are updated.
In a possible embodiment of the present invention, if the constructed list is an AMVP candidate list or Merge candidate list for bidirectional prediction, then during the update of the candidate list, both the forward candidate motion vector and the backward candidate motion vector indicated by the index value are updated.
Step S306: based on the index value of the candidate list, the decoder takes the candidate motion vector indicated by the index value in the updated AMVP candidate list or Merge candidate list as the predicted motion vector, and then obtains the reconstructed block of the current block based on the predicted motion vector of the current block.
For example, this embodiment of the present invention uses the Merge mode, and the Merge index value obtained by the decoder is 0. The decoder constructs the Merge candidate list; after construction is complete, suppose that, under the Merge mode, the candidate motion vector corresponding to Merge candidate list index value 0 chosen for the current block is a unidirectional prediction candidate whose value is (3, 5). The reference image block (reference block) in the reference picture is obtained according to the chosen candidate motion vector. Centred on the reference block, a search with integer-pixel precision is performed within a range of 1 pixel around it, which yields several reference image blocks (image blocks) of the same size as the reference block. Using the left-neighbouring and/or above-neighbouring reconstructed blocks of the current block as the template, the template is matched against the left-neighbouring and/or above-neighbouring reconstructed blocks of the reference block and of each image block; the one with the smallest rate-distortion cost value is taken as the updated reference image block, and the motion vector corresponding to the updated reference image block (the new candidate MV) is (4, 5), that is, the predicted motion vector of the current block is (4, 5). Based on (4, 5), the current block is reconstructed using the updated reference image block, so that the reconstructed block of the current block is obtained.
As another example, this embodiment of the present invention uses the AMVP mode, and the decoder decodes the index value of the AMVP candidate list. The decoder constructs the AMVP candidate list; after construction is complete, suppose that, under the AMVP mode, the candidate motion vector corresponding to the chosen index value for the current block is a unidirectional prediction candidate whose value is (3, 5). The reference image block (reference block) in the reference picture is obtained according to the chosen candidate motion vector. Centred on the reference block, a search with integer-pixel precision is performed within a range of 1 pixel around it, which yields several reference image blocks (image blocks) of the same size as the reference block. Using the left-neighbouring and/or above-neighbouring reconstructed blocks of the current block as the template, the template is matched against the left-neighbouring and/or above-neighbouring reconstructed blocks of the reference block and of each image block; the one with the smallest rate-distortion cost value is taken as the updated reference image block, and the motion vector corresponding to the updated reference image block (the new MVP) is (4, 5), that is, the predicted motion vector of the current block is (4, 5). Based on (4, 5), the current block is reconstructed using the updated reference image block, so that the reconstructed block of the current block is obtained.
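The numeric example above can be summarised as the following short sketch (the values are the example's own; the search itself would be performed by a template-matching helper such as the earlier sketch):

```python
# Worked illustration: a candidate MV of (3, 5) is refined with an
# integer-pel search of +/-1 pixel around the reference block; the offset
# (1, 0) gives the smallest template cost, so the predicted MV becomes (4, 5).
cand_mv = (3, 5)
best_offset = (1, 0)                       # found by the template search
predicted_mv = (cand_mv[0] + best_offset[0], cand_mv[1] + best_offset[1])
assert predicted_mv == (4, 5)
```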
In a possible embodiment of the present invention, the decoder may also skip the update of the candidate list and directly use the motion vector corresponding to the smallest pixel difference value or rate-distortion cost value as the predicted motion vector, and then obtain the reconstructed block of the current block based on the predicted motion vector of the current block.
It should be noted that the implementation of steps S301-S306 of this embodiment of the present invention may also refer to the related description of the decoder in the Figure 10-Figure 11 embodiments or the Figure 12-Figure 13 embodiments, and is not repeated here.
It should also be noted that steps of the decoder that are not described in detail may refer to the description of the corresponding steps at the encoder and to the related description above. For brevity of the specification, they are not expanded one by one here.
Referring to Figure 17, Figure 17 introduces a hybrid prediction mode, which is applied in bidirectional prediction. The hybrid prediction mode means that the Merge mode and the AMVP mode are used simultaneously in the bidirectional encoding/decoding process; specifically, in the bidirectional prediction, the prediction in one direction is encoded/decoded using the Merge mode and the prediction in the other direction is encoded/decoded using the AMVP mode. It should be understood that when the prediction in one direction is called the forward prediction, the prediction in the other direction may correspondingly be called the backward prediction. A predicted motion vector generation method provided in an embodiment of the present invention is described below based on this hybrid prediction mode, from the perspective of the encoder. In this method, the process in which the prediction in one direction is encoded/decoded using the Merge mode to obtain the first prediction block of the current block includes steps S401-S404, the process in which the prediction in the other direction is encoded/decoded using the AMVP mode to obtain the second prediction block of the current block includes steps S405-S406, and finally, in step S407, the reconstructed block of the current block can be obtained from the first prediction block of the current block and the second prediction block of the current block through a preset algorithm. These steps are described in detail below.
Steps S401-S404 are described first. In steps S401-S404, the Merge mode updates the candidate list by template matching in the forward (or backward, the same below) prediction and then obtains the first prediction block, as follows:
Step S401: the encoder uses the Merge mode for the forward (or backward) prediction, and constructs a Merge candidate list based on the Merge mode; the Merge candidate list is used for the forward (or backward) prediction of the bidirectional prediction. Refer to the description of the Fig. 5 embodiment for details, which are not repeated here.
Step S402: the encoder chooses a candidate motion vector in the Merge candidate list.
In a specific application scenario, if the chosen candidate motion vector was obtained by the backward (or forward) prediction, the backward (or forward) candidate motion vector can be mapped to a candidate motion vector for the forward (or backward) prediction, which is then used as the chosen candidate motion vector value in the subsequent step S403. For example, the encoder chooses the candidate motion vector (-2, -2) in the Merge candidate list, and (-2, -2) was obtained by backward prediction; the picture order count (POC) of the current block in the image frame sequence is 4, the POC of the reference block of the backward prediction is 6, and the POC of the reference block of the forward prediction is 3. Then the POC difference between the current block and the reference block of the forward prediction (called the second POC difference) is 1 (4 - 3 = 1), and the POC difference between the current block and the reference block of the backward prediction (called the first POC difference) is -2 (4 - 6 = -2). Therefore, according to the ratio between the first POC difference and the second POC difference (-2 / 1 = -2), the backward-prediction vector (-2, -2) can be mapped to the forward-prediction candidate motion vector (1, 1), that is, (-2, -2) / -2 = (1, 1). The vector (1, 1) is then used as the chosen candidate motion vector value in the subsequent step S403.
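The POC-based mapping in the example above can be illustrated as follows. Division is shown in floating point for clarity; a codec would normally use integer scaling with rounding. The function name is an assumption.

```python
# Illustrative sketch: scale a candidate MV from one reference to another
# using the ratio of the POC differences, as in the example above.
def scale_mv(mv, poc_cur, poc_ref_src, poc_ref_dst):
    diff_src = poc_cur - poc_ref_src      # e.g. 4 - 6 = -2 (first POC difference)
    diff_dst = poc_cur - poc_ref_dst      # e.g. 4 - 3 =  1 (second POC difference)
    scale = diff_dst / diff_src           # 1 / -2
    return (mv[0] * scale, mv[1] * scale)

# Example from the text: backward MV (-2, -2) maps to forward MV (1.0, 1.0).
print(scale_mv((-2, -2), poc_cur=4, poc_ref_src=6, poc_ref_dst=3))
```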
In addition, in a possible embodiment, if the chosen candidate motion vector was obtained by the backward (or forward) prediction, the selection of that candidate motion vector may also be abandoned; that is, another candidate motion vector is chosen again in the Merge candidate list.
In a specific application scenario, if the chosen candidate motion vector was obtained by bidirectional prediction, the part of that candidate motion vector used for the forward (or backward) prediction is chosen and used as the chosen candidate motion vector value in the subsequent step S403. For example, in the forward (or backward) prediction, the candidate motion vector preselected by the encoder in the Merge candidate list includes the motion vector (1, 1) for the forward (backward) prediction and the motion vector (-2, -2) for the backward (forward) prediction; the encoder finally uses the forward (backward) motion vector (1, 1) as the chosen candidate motion vector value in the subsequent step S403.
In a specific application scenario, if the chosen candidate motion vector was itself obtained by the forward (backward) prediction, that candidate motion vector is directly used as the chosen candidate motion vector value in the subsequent step S403.
Step S403: the encoder updates the candidate motion vector by means of template matching.
The specific process includes: the encoder takes the chosen candidate motion vector of the Merge candidate list as input, the candidate motion vector corresponding to a reference block in the reference picture; a search range is determined in combination with the candidate motion vector, the search range containing at least one reference motion vector; image blocks in the reference picture of the forward (backward) prediction of the current block are obtained according to the reference motion vectors in the search range; the pixel difference value or rate-distortion cost value between at least one neighbouring reconstructed block of the current block and at least one neighbouring reconstructed block of each image block is separately calculated, and the pixel difference value or rate-distortion cost value between at least one neighbouring reconstructed block of the current block and at least one neighbouring reconstructed block of the reference block corresponding to the candidate motion vector is calculated. Among these pixel difference values or rate-distortion cost values, the smallest one is selected, and the motion vector corresponding to the smallest pixel difference value or rate-distortion cost value is taken as the candidate motion vector of the current block, so that the update of the candidate motion vector is achieved.
The specific implementation may also refer to the description of the Fig. 8-Fig. 9 embodiments and is not repeated here.
Step S404: the encoder may replace the corresponding candidate motion vector in the Merge candidate list with the motion vector corresponding to the smallest pixel difference value or rate-distortion cost value, thereby updating the Merge candidate list; the forward (backward) predicted motion vector of the current block (which may also be called the first predicted motion vector) is obtained based on the updated Merge candidate list, the forward (backward) prediction block (which may also be called the first prediction block) is obtained in combination with the forward (backward) predicted motion vector, and the index value, in the Merge candidate list, corresponding to the predicted motion vector is encoded. The specific process may refer to the related description of the encoder in the Figure 10-Figure 11 embodiments and is not repeated here.
Next, steps S405-S406 are described. In the backward (or forward, the same below) prediction, the AMVP mode directly uses the constructed candidate list to obtain the second prediction block, as follows:
Step S405: the encoder constructs an AMVP candidate list based on the AMVP mode. The AMVP candidate list is used for the backward (or forward) prediction of the bidirectional prediction. Refer to the description of the Fig. 6 embodiment for details, which are not repeated here.
Step S406: based on the AMVP candidate list, the encoder obtains the backward (forward) prediction block of the current block (which may also be called the second prediction block) using the conventional AMVP mode.
The specific process includes: the encoder directly selects an optimal MVP from the AMVP candidate list, determines the starting point of the search in the reference picture according to the optimal MVP, then searches in a particular manner within a particular range near the starting point and calculates rate-distortion cost values, and finally obtains an optimal MV; the optimal MV determines the position of the second prediction block. The motion vector difference MVD is obtained as the difference between the optimal MV and the optimal MVP, the index value of the optimal MVP in the AMVP candidate list is encoded, and the index of the reference picture is encoded.
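As a simple illustration of the conventional AMVP signalling in step S406 (not a complete motion search), the MVD is the difference between the best MV found by the search and the chosen MVP; names and the write() callback are assumptions.

```python
# Illustrative sketch of step S406's output side: MVD = best MV - best MVP,
# followed by encoding the MVP index, reference index and MVD.
def encode_amvp(best_mvp, best_mv, mvp_index, ref_index, write):
    mvd = (best_mv[0] - best_mvp[0], best_mv[1] - best_mvp[1])
    write(mvp_index)    # index of the chosen MVP in the AMVP list
    write(ref_index)    # index of the reference picture
    write(mvd)          # motion vector difference
    return mvd
```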
Finally, in step S407: the encoder obtains the reconstructed block and sends the bitstream to the decoder. Specifically, after obtaining the forward (backward) first prediction block and the backward (forward) second prediction block, the encoder processes the first prediction block and the second prediction block based on a preset algorithm to obtain the reconstructed block of the current block, for example by weighting or averaging the first prediction block and the second prediction block. Afterwards, the encoder sends the bitstream to the decoder; the bitstream correspondingly contains the index value of the Merge candidate list, the index value of the AMVP candidate list, the MVD, the index of the reference picture, and so on.
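For illustration, the combination of the two prediction blocks in step S407 can be sketched as a (weighted) average; the preset algorithm of the embodiment is not limited to this, and equal weights give a plain average.

```python
import numpy as np

# Illustrative sketch of step S407: combine the two prediction blocks.
def combine_predictions(pred_fwd, pred_bwd, w_fwd=0.5, w_bwd=0.5):
    pred = w_fwd * pred_fwd.astype(np.float64) + w_bwd * pred_bwd.astype(np.float64)
    return np.round(pred).astype(pred_fwd.dtype)
```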
It should be noted that there is no necessary order between steps S401-S404 and steps S405-S406; that is, S401-S404 may be performed before or after S405-S406, and S401-S404 and S405-S406 may also be performed simultaneously.
It should also be noted that, in another embodiment of the present invention, S401-S404 may instead be replaced by the AMVP mode updating the candidate list by template matching in the forward (backward) prediction and then obtaining the first prediction block, and S405-S406 may be replaced by the Merge mode directly using the constructed candidate list in the backward (forward) prediction to obtain the second prediction block. The specific implementation may refer to the description above and is not repeated here.
It should also be noted that steps of the encoder that are not described in detail may also refer to the related description above. For brevity of the specification, they are not expanded one by one here.
Referring to Figure 18, Figure 18 is a predicted motion vector generation method based on a hybrid prediction mode, applied in bidirectional prediction and described from the perspective of the decoder; the decoder may correspond to the encoder in Figure 17.
During entropy decoding, the decoder parses the bitstream to obtain the index value of the candidate list. It then judges, according to whether the index value is a particular value, whether to execute the following steps S501-S504 or S505-S506. In a specific embodiment, if the index value is the particular value, the index value indicates that the decoder uses the Merge mode for the forward (backward) prediction, and the decoder executes steps S501-S504; if the index value is not the particular value, the index value indicates that the AMVP mode is used, and the decoder additionally decodes the index of the reference picture and the motion vector difference information (MVD) of that prediction direction and then executes steps S505-S506. For example, in a possible application scenario, if the index value is 0 or 1, the index value indicates that the backward prediction uses the AMVP mode (specifically, it indicates the first or second MVP in the candidate list constructed by the AMVP mode); the decoder additionally decodes the index of the reference picture and the MVD of the backward prediction and then executes the subsequent steps. If the index value is the particular value 2, the index value indicates that the forward prediction uses the Merge mode (specifically, it indicates a particular candidate MV, for example the first candidate MV, in the candidate list constructed by the Merge mode) and the subsequent steps are then executed. The above example is only used to explain the solution of the present invention and is not limiting. These steps are described in detail below.
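For illustration, the dispatch described above may be sketched as follows; the concrete values (0/1 for AMVP, 2 for Merge with the first candidate) follow the example in the text and are not normative, and all names are assumptions.

```python
# Illustrative sketch: the parsed index value decides which branch the
# decoder takes for a given prediction direction.
def decode_direction(index_value, bitstream, decode_merge, decode_amvp):
    if index_value == 2:                       # particular value -> Merge branch
        return decode_merge(merge_cand_idx=0)  # steps S501-S504
    else:                                      # otherwise -> AMVP branch
        ref_idx = bitstream.read_ref_index()
        mvd = bitstream.read_mvd()
        return decode_amvp(index_value, ref_idx, mvd)   # steps S505-S506
```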
Steps S501-S504 are described first. In steps S501-S504, the Merge mode updates, by template matching, the candidate motion vector indicated by the index value in the candidate list of the forward (or backward, the same below) prediction and then obtains the first prediction block, as follows:
Step S501: when the decoded index value indicates that the forward (or backward) prediction uses the Merge mode, the decoder determines that the forward (or backward) prediction uses the Merge mode and then constructs the Merge candidate list. The specific construction of the list may be consistent with step S401 of the Figure 17 embodiment.
In a possible embodiment of the present invention, the decoder may also determine the prediction direction corresponding to the Merge mode or the AMVP mode from the decoded candidate motion vector value. For example, the decoder chooses the candidate motion vector corresponding to Merge candidate list index value 3 and finds that the chosen candidate motion vector is a forward-prediction candidate motion vector, and therefore determines that the forward prediction uses the Merge mode. As another example, the decoder chooses the candidate motion vector corresponding to AMVP candidate list index value 0 and finds that the chosen candidate motion vector is a backward-prediction candidate motion vector, and therefore determines that the backward prediction uses the AMVP mode.
Step S502: based on the decoded index value, the decoder chooses the candidate motion vector corresponding to the index value in the Merge candidate list. For example, if the index value is 2 and the value 2 indicates the first candidate MV in the Merge candidate list, the first candidate MV in the Merge candidate list is chosen.
Step S503: the decoder updates the candidate motion vector by means of template matching. Refer to the related description of step S403 of the Figure 17 embodiment for details, which are not repeated here.
Step S504: the decoder thus updates the chosen candidate MV in the Merge candidate list, takes the updated candidate MV as the forward (backward) predicted motion vector of the current block (which may also be called the first predicted motion vector), and obtains the forward (backward) prediction block (which may also be called the first prediction block) based on the forward (backward) predicted motion vector. The specific process may refer to the related description of the decoder in the Figure 10-Figure 11 embodiments and is not repeated here.
Steps S505-S506 are described below:
Step S505: when the decoded index value indicates that the backward (or forward) prediction uses the AMVP mode, the decoder determines that the backward (or forward) prediction uses the AMVP mode, continues decoding to obtain information such as the index of the reference picture and the MVD, and then constructs the AMVP candidate list. The specific construction of the list may be consistent with step S405 of the Figure 17 embodiment.
Step S506: based on the constructed AMVP candidate list and the decoded information such as the index of the reference picture and the MVD, the decoder decodes, using the conventional AMVP mode, the backward (forward) prediction block of the current block (which may also be called the second prediction block).
Finally, in step S507: after obtaining the forward (backward) first prediction block and the backward (forward) second prediction block, the decoder processes the first prediction block and the second prediction block based on a preset algorithm to obtain the reconstructed block of the current block, for example by weighting or averaging the first prediction block and the second prediction block.
It should be noted that there is no necessary order between steps S501-S504 and steps S505-S506; that is, S501-S504 may be performed before or after S505-S506, and S501-S504 and S505-S506 may also be performed simultaneously.
It should also be noted that, in another embodiment of the present invention, S501-S504 may correspondingly be replaced by the AMVP mode updating the candidate list by template matching in the forward (backward) prediction and then obtaining the first prediction block (for example when the index value indicates that the forward prediction uses the AMVP mode), and S505-S506 may be replaced by the Merge mode directly using the constructed candidate list in the backward (forward) prediction to obtain the second prediction block (for example when the index value indicates that the backward prediction uses the Merge mode). The specific implementation may refer to the description above and is not repeated here.
It should also be noted that steps of the decoder that are not described in detail may refer to the description of the corresponding steps at the encoder and to the related description above. For brevity of the specification, they are not expanded one by one here.
Referring to Figure 19, Figure 19 is a predicted motion vector generation method of another hybrid prediction mode provided in an embodiment of the present invention, described from the perspective of the encoder. The difference between this method embodiment and the Figure 17 embodiment includes that the forward prediction and the backward prediction use different prediction modes (the Merge mode and the AMVP mode), and that the candidate motion vector/candidate list is updated by template matching in both the forward prediction and the backward prediction. These steps are briefly described below.
Steps S601-S604 are described first. In steps S601-S604, the Merge mode updates the candidate list by template matching in the forward (or backward, the same below) prediction and then obtains the first prediction block, as follows:
Step S601: the encoder uses the Merge mode for the forward (or backward) prediction, and constructs a Merge candidate list based on the Merge mode; the Merge candidate list is used for the forward (or backward) prediction of the bidirectional prediction.
Step S602: the encoder chooses a candidate motion vector in the Merge candidate list.
The specific implementation may refer to the related description of step S402 of Figure 17 and is not repeated here.
Step S603: the encoder updates the candidate motion vector by means of template matching.
Step S604: based on the updated Merge candidate list, the encoder obtains the forward (backward) prediction block of the current block (which may also be called the first prediction block).
Steps S605-S608 are described below. In these steps, the AMVP mode updates the candidate list by template matching in the backward (or forward, the same below) prediction and then obtains the second prediction block, as follows:
Step S605: the encoder constructs an AMVP candidate list based on the AMVP mode. The AMVP candidate list is used for the backward (or forward) prediction of the bidirectional prediction.
Step S606: the encoder chooses a candidate motion vector in the AMVP candidate list.
In a specific application scenario, if the chosen candidate motion vector was obtained by the forward (or backward) prediction, the candidate motion vector can be mapped to a candidate motion vector for the backward (forward) prediction, which is then used as the chosen candidate motion vector value in the subsequent step S607. For example, the encoder chooses the forward candidate motion vector (-2, -2) in the AMVP candidate list, and (-2, -2) was obtained by forward prediction; the picture order count (POC) of the current block in the image frame sequence is 4, the POC of the reference block of the forward prediction is 2, and the POC of the reference block of the backward prediction is 5. Then the POC difference between the current block and the reference block of the backward prediction (which may be called the second POC difference) is -1 (4 - 5 = -1), and the POC difference between the current block and the reference block of the forward prediction (which may be called the first POC difference) is 2 (4 - 2 = 2). Therefore, according to the ratio between the first POC difference and the second POC difference (2 / -1 = -2), the forward-prediction vector (-2, -2) can be mapped to the backward-prediction candidate motion vector (1, 1), that is, (-2, -2) / -2 = (1, 1). The vector (1, 1) is then used as the chosen candidate motion vector value in the subsequent step S607.
In addition, in a possible embodiment, if the chosen candidate motion vector was obtained by the forward (or backward) prediction, the selection of that candidate motion vector may also be abandoned; that is, another candidate motion vector is chosen again in the AMVP candidate list.
In a specific application scenario of bidirectional prediction, if the candidate motion vector preselected for the backward (forward) prediction was obtained by bidirectional prediction, the part of that bidirectional candidate used for the backward (forward) prediction is finally chosen and used as the chosen candidate motion vector value in the subsequent step S607. For example, in the backward (forward) prediction, the candidate motion vector preselected by the encoder in the candidate list was obtained by bidirectional prediction and includes the motion vector (1, 1) for the forward (backward) prediction and the motion vector (-2, -2) for the backward (forward) prediction; the encoder finally uses the backward (forward) motion vector (-2, -2) as the chosen candidate motion vector value in the subsequent step S607.
In a specific application scenario, if the chosen candidate motion vector was itself obtained by the backward (forward) prediction, that candidate motion vector is directly used as the chosen candidate motion vector value in the subsequent step S607.
Step S607: the encoder updates the candidate motion vector by means of template matching.
The specific process includes: the encoder takes the chosen candidate motion vector of the AMVP candidate list as input, the candidate motion vector corresponding to a reference block in the reference picture; a search range is determined in combination with the candidate motion vector, the search range containing at least one reference motion vector; image blocks in the reference picture of the backward (forward) prediction of the current block are obtained according to the reference motion vectors in the search range; the pixel difference value or rate-distortion cost value between at least one neighbouring reconstructed block of the current block and at least one neighbouring reconstructed block of each image block is separately calculated, and the pixel difference value or rate-distortion cost value between at least one neighbouring reconstructed block of the current block and at least one neighbouring reconstructed block of the reference block corresponding to the candidate motion vector is calculated. Among these pixel difference values or rate-distortion cost values, the smallest one is selected, and the motion vector corresponding to the smallest pixel difference value or rate-distortion cost value is taken as the candidate motion vector of the current block, so that the update of the candidate motion vector is achieved.
The specific implementation may also refer to the description of the Fig. 8-Fig. 9 embodiments and is not repeated here.
Step S608: the encoder may replace the corresponding candidate motion vector in the AMVP candidate list with the motion vector corresponding to the smallest pixel difference value or rate-distortion cost value, thereby updating the AMVP candidate list; the backward (forward) predicted motion vector of the current block (which may also be called the second predicted motion vector) is obtained based on the updated AMVP candidate list, the backward (forward) prediction block (which may also be called the second prediction block) is obtained in combination with the backward (forward) predicted motion vector, the index value, in the AMVP candidate list, corresponding to the predicted motion vector is encoded, the index of the reference picture corresponding to the prediction block is encoded, and the MVD is encoded. The specific process may refer to the related description of the encoder in the Figure 12-Figure 13 embodiments and is not repeated here.
Finally, in step S609: the encoder obtains the reconstructed block and sends the bitstream to the decoder. Specifically, after obtaining the forward (backward) first prediction block and the backward (forward) second prediction block, the encoder processes the first prediction block and the second prediction block based on a preset algorithm to obtain the reconstructed block of the current block. Afterwards, the encoder sends the bitstream to the decoder; the bitstream correspondingly contains the index value of the Merge candidate list, the index value of the AMVP candidate list, the MVD, the index of the reference picture, and so on.
It should also be noted that steps of the encoder that are not described in detail may also refer to the related description above. For brevity of the specification, they are not expanded one by one here.
0, Figure 20 is that the predicted motion vector of another hybrid predicting mode provided in an embodiment of the present invention generates referring to fig. 2 Method is described from the angle of decoding end, and the decoding end of this method embodiment can be corresponding with the coding side of Figure 19 embodiment. This method embodiment and the difference of Figure 18 embodiment include that forward prediction and back forecast use different prediction modes (Merge mode and AMVP mode) is all made of template matching mode in forward prediction and back forecast and updates Candidate Motion arrow Amount/update candidate list.These steps are briefly described below.
Step S701-S704 is described first.In step S701-S704, Merge mode is in preceding (after or, similarly hereinafter) to pre- Candidate motion vector indicated by index value in candidate list is updated using template matching mode in survey, and then obtains the first prediction Block, specific as follows:
Step S701: (or rear) is decoded to prediction using in the case where Merge mode before the index value instruction decoded (or rear) uses Merge mode to prediction before end is determining, and then constructs Merge candidate list.
Step S702: decoding end is based on decoded information (marker and/or index value, with specific reference to be described above), The corresponding candidate motion vector of Selecting Index value in Merge candidate list.
In a concrete application scene, if being after based on the candidate motion vector that resulting index value is chosen is decoded (or preceding) to obtained from prediction, can by map be used for before (rear) to prediction candidate motion vector, as selected Candidate motion vector value be applied to subsequent step S703.Such as decoding end chooses Candidate Motion arrow in Merge candidate list Amount is (- 2, -2), is somebody's turn to do (- 2, -2) and is obtained based on back forecast, and the image sequence number that current block corresponds in image frame sequence is 4, the image sequence number that the reference block of back forecast corresponds in image frame sequence is 6, and the reference block of forward prediction corresponds to figure As the image sequence number in frame sequence be 3, then, the difference of injection time of the reference block of current block and forward prediction is (when referred to as second Sequence is poor) it is 1 (i.e. 4-3=1), the difference of injection time (referred to as the first difference of injection time) of the reference block of current block and back forecast is -2 (i.e. 4-6=-2), so, can be according to the proportionate relationship (- 2/1=-2) between the first difference of injection time and the second difference of injection time, after being used for Become the candidate motion vector (1,1) for forward prediction, i.e. (- 2, -2)/- 2=(1,1) to (- 2, -2) of prediction mapping.It will Described (1,1) are applied to subsequent step S703 as selected candidate motion vector value.
In addition, in a possible embodiment, if the candidate motion vector selected according to the decoded index value was obtained from backward (or forward) prediction, the selection of that candidate motion vector may instead be interrupted; that is, another candidate motion vector is selected again in the Merge candidate list.
In a specific application scenario, if the candidate motion vector selected according to the decoded index value was obtained from bi-directional prediction, then the part of that candidate motion vector used for forward (or backward) prediction is selected and used as the selected candidate motion vector in the subsequent step S703. For example, in the forward (or backward) prediction, the candidate motion vector preselected by the decoder in the Merge candidate list includes the motion vector (1, 1) for forward (or backward) prediction and the motion vector (-2, -2) for backward (or forward) prediction; the decoder then finally uses the forward (or backward) motion vector (1, 1) as the selected candidate motion vector in the subsequent step S703.
In a specific application scenario, if the candidate motion vector selected according to the decoded index value was itself obtained from forward (or backward) prediction, that candidate motion vector is directly used as the selected candidate motion vector in the subsequent step S703.
Step S703: the decoder updates the candidate motion vector in the template matching manner.
Step S704: the decoder thereby updates the candidate MV selected in the Merge candidate list; the updated candidate MV serves as the forward (or backward) predicted motion vector of the current block (which may also be called the first predicted motion vector), and the forward (or backward) prediction block (which may also be called the first prediction block) is obtained from the forward (or backward) predicted motion vector.
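The template matching update of steps S703/S704 can be pictured with the following self-contained sketch; the data layout (full-picture sample arrays), the helper names, the 1-pixel integer search range and the SAD cost are all assumptions used for illustration, not the normative procedure.

```python
import numpy as np

def template(pic, x, y, w, h, t=4):
    """Left and above reconstructed neighbours of the block at (x, y), thickness t."""
    above = pic[y - t:y, x:x + w]
    left = pic[y:y + h, x - t:x]
    return np.concatenate([above.ravel(), left.ravel()])

def refine_mv(cur_pic, ref_pic, x, y, w, h, mv, search=1):
    """Search around the reference block indicated by mv and keep the position
    whose template best matches the current block's template (smallest SAD)."""
    cur_t = template(cur_pic, x, y, w, h)
    best_mv, best_cost = mv, np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = (mv[0] + dx, mv[1] + dy)
            ref_t = template(ref_pic, x + cand[0], y + cand[1], w, h)
            cost = np.abs(cur_t.astype(int) - ref_t.astype(int)).sum()  # SAD cost
            if cost < best_cost:
                best_mv, best_cost = cand, cost
    return best_mv  # updated candidate MV (first predicted motion vector)
```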
Steps S705–S708 are described below. In steps S705–S708, AMVP mode is used for the backward (or forward, the same below) prediction; the candidate list is updated using the template matching mode, and the second prediction block is then obtained. The details are as follows:
Step S705: based on the decoded information (a flag and/or an index value; see the description above), the decoder determines that AMVP mode is used and constructs the AMVP candidate list based on AMVP mode. The AMVP candidate list is used for the backward (or forward) prediction in the bi-directional prediction.
Step S706: based on the decoded information (a flag and/or an index value; see the description above), the decoder selects a candidate motion vector in the AMVP candidate list.
In a specific application scenario, if the candidate motion vector selected according to the decoded index value was obtained from forward (or backward) prediction, it can be mapped into a candidate motion vector for backward (or forward) prediction, and the mapped value is used as the selected candidate motion vector in the subsequent step S707. For example, the decoder selects the forward candidate motion vector (-2, -2) in the AMVP candidate list, where (-2, -2) was obtained from forward prediction; the image sequence number of the current block in the image frame sequence is 4, the image sequence number of the reference block of the forward prediction is 2, and the image sequence number of the reference block of the backward prediction is 5. Then the temporal difference between the current block and the reference block of the backward prediction (which may be called the second temporal difference) is -1 (i.e. 4-5=-1), and the temporal difference between the current block and the reference block of the forward prediction (which may be called the first temporal difference) is 2 (i.e. 4-2=2). Therefore, according to the proportional relationship between the first temporal difference and the second temporal difference (2/-1=-2), the forward-prediction candidate (-2, -2) can be mapped into the backward-prediction candidate motion vector (1, 1), i.e. (-2, -2)/-2=(1, 1). The mapped (1, 1) is then used as the selected candidate motion vector in the subsequent step S707.
In addition, in a possible embodiment, if the candidate motion vector selected according to the decoded index value was obtained from forward (or backward) prediction, the selection of that candidate motion vector may instead be interrupted; that is, another candidate motion vector is selected again in the AMVP candidate list.
In a specific application scenario of bi-directional prediction, if the candidate motion vector preselected for the backward (or forward) prediction according to the decoded index value was obtained from bi-directional prediction, then the part of that bi-directional prediction used for the backward (or forward) prediction is finally selected and used as the selected candidate motion vector in the subsequent step S707. For example, for the backward (or forward) prediction, the candidate motion vector preselected by the decoder in the candidate list was obtained from bi-directional prediction and includes the motion vector (1, 1) for forward (or backward) prediction and the motion vector (-2, -2) for backward (or forward) prediction; the decoder then finally uses the backward (or forward) motion vector (-2, -2) as the selected candidate motion vector in the subsequent step S707.
In a specific application scenario, if the selected candidate motion vector was itself obtained from backward (or forward) prediction, that candidate motion vector is directly used as the selected candidate motion vector in the subsequent step S707.
Step S707: the decoder updates the candidate motion vector in the template matching manner.
Step S708: based on the decoded information such as the index value of the AMVP candidate list, the index of the reference picture and the MVD, the decoder obtains the backward (or forward) prediction block of the current block (which may also be called the second prediction block).
Finally, in step S709: after obtaining the forward (or backward) first prediction block and the backward (or forward) second prediction block, the decoder processes the first prediction block and the second prediction block based on a preset algorithm to obtain the reconstructed block of the current block.
It should also be noted that steps not described in detail on the decoder side may refer to the corresponding descriptions of the encoder side, and to the related descriptions above. For brevity of the specification, they are not expanded here one by one.
Referring to Figure 21, Figure 21 shows another predicted motion vector generation method provided by an embodiment of the present invention, described from the perspective of the encoder. This method can be used by the encoder in bi-directional prediction, and includes but is not limited to the following steps:
Step S801: the encoder constructs a Merge candidate list or an AMVP candidate list; reference may be made to the related description of the embodiment of Figure 6 or Figure 7, which is not repeated here.
After the Merge candidate list or the AMVP candidate list has been constructed, the candidate list includes a forward candidate motion vector used for forward prediction and a backward candidate motion vector used for backward prediction. The subsequent steps S802–S804 are executed for the forward (or backward) candidate motion vector, and steps S805–S807 are executed for the backward (or forward) candidate motion vector; the two processes are described separately below.
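Purely for illustration, a bi-directional candidate list entry of the kind described above might be modelled as follows (the class and field names are assumptions, not the patent's notation):

```python
from dataclasses import dataclass
from typing import Optional, Tuple, List

MV = Tuple[int, int]

@dataclass
class BiCandidate:
    mv_forward: Optional[MV]    # candidate MV for forward prediction (list 0)
    mv_backward: Optional[MV]   # candidate MV for backward prediction (list 1)
    ref_idx_forward: int = 0
    ref_idx_backward: int = 0

# A Merge or AMVP candidate list is then an indexed list of such entries;
# the index value signalled in the bitstream selects one entry.
candidate_list: List[BiCandidate] = [
    BiCandidate(mv_forward=(3, 5), mv_backward=(1, 4)),
    BiCandidate(mv_forward=(-2, -2), mv_backward=None),
]
```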
Steps S802–S804 are described first:
Step S802: in the constructed candidate list, the encoder selects the forward (or backward, the same below) candidate motion vector.
Step S803: the encoder updates this forward (or backward) candidate motion vector in the template matching manner.
Step S804: based on the forward (or backward) candidate motion vector of the current block in the updated candidate list, the encoder obtains the forward (or backward) prediction block (which may also be called the first prediction block).
Steps S805–S807 are described below:
Step S805: in the constructed candidate list, the encoder determines the backward (or forward, the same below) candidate motion vector.
Step S806: the encoder uses the forward (or backward) candidate motion vectors before and after the update to update the backward (or forward) candidate motion vector.
Specifically, the encoder calculates the difference between the replaced forward (or backward) candidate motion vector in the candidate list (i.e. the new forward (or backward) candidate motion vector obtained in step S803) and the forward (or backward) candidate motion vector before the replacement (i.e. the candidate motion vector selected in step S802); combines this difference with the backward (or forward) candidate motion vector selected in step S805 to obtain a new backward (or forward) candidate motion vector; and then replaces the backward (or forward) candidate motion vector determined in step S805 with the new backward (or forward) candidate motion vector, so that the candidate list is further updated.
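A minimal sketch of the update in step S806 follows (the function name propagate_refinement is an assumption): the refinement delta obtained for the forward (or backward) candidate by template matching is added to the backward (or forward) candidate of the same entry.

```python
def propagate_refinement(fwd_before, fwd_after, bwd_before):
    """fwd_before / fwd_after: forward candidate MV before and after template
    matching; bwd_before: backward candidate MV selected in step S805."""
    delta = (fwd_after[0] - fwd_before[0], fwd_after[1] - fwd_before[1])
    return (bwd_before[0] + delta[0], bwd_before[1] + delta[1])

# e.g. forward candidate refined from (3, 5) to (4, 5): delta (1, 0),
# backward candidate (1, 4) becomes (2, 4).
print(propagate_refinement((3, 5), (4, 5), (1, 4)))  # (2, 4)
```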
Step S807: based on the backward (or forward) candidate motion vector of the current block in the updated candidate list, the encoder obtains the backward (or forward) prediction block (which may also be called the second prediction block).
Step S808: after obtaining the forward (or backward) first prediction block and the backward (or forward) second prediction block, the encoder processes the first prediction block and the second prediction block based on a preset algorithm to obtain the reconstructed block of the current block, and then sends a bitstream to the decoder. It should be understood that if Merge mode coding is currently used, the bitstream includes the index value of the Merge candidate list; if AMVP mode coding is currently used, the bitstream includes the index value of the AMVP candidate list, the index value of the reference picture, and the MVD information.
It should also be noted that steps not described in detail on the encoder side may refer to the related descriptions above. For brevity of the specification, they are not expanded here one by one.
Referring to Figure 22, Figure 22 shows another predicted motion vector generation method provided by an embodiment of the present invention, described from the perspective of the decoder. This method can be used by the decoder in bi-directional prediction, and the decoder in this method may correspond to the encoder in Figure 21.
Step S901: the decoder parses the bitstream and, according to the index value of the candidate list in the bitstream, constructs a Merge candidate list or an AMVP candidate list. Reference may be made to the related description of the embodiment of Figure 6 or Figure 7, which is not repeated here.
After the Merge candidate list or the AMVP candidate list has been constructed, the candidate list includes a forward candidate motion vector used for forward prediction and a backward candidate motion vector used for backward prediction. The subsequent steps S902–S904 are executed for the forward (or backward) candidate motion vector, and steps S905–S907 are executed for the backward (or forward) candidate motion vector; the two processes are described separately below.
Steps S902–S904 are described first:
Step S902: combining the decoded index value, the decoder selects the forward (or backward, the same below) candidate motion vector in the constructed candidate list.
Step S903: the decoder updates this forward (or backward) candidate motion vector in the template matching manner.
Step S904: combining the decoded information, the decoder obtains the forward (or backward) prediction block (which may also be called the first prediction block) based on the forward (or backward) candidate motion vector of the current block in the updated candidate list.
Steps S905–S907 are described below:
Step S905: in the constructed candidate list, the decoder determines the backward (or forward, the same below) candidate motion vector.
Step S906: the decoder uses the forward (or backward) candidate motion vectors before and after the update to update the backward (or forward) candidate motion vector.
Specifically, the decoder calculates the difference between the replaced forward (or backward) candidate motion vector in the candidate list (i.e. the new forward (or backward) candidate motion vector obtained in step S903) and the forward (or backward) candidate motion vector before the replacement (i.e. the candidate motion vector selected in step S902); combines this difference with the backward (or forward) candidate motion vector selected in step S905 to obtain a new backward (or forward) candidate motion vector; and then replaces the backward (or forward) candidate motion vector determined in step S905 with the new backward (or forward) candidate motion vector, so that the candidate list is further updated.
In a possible embodiment, the decoder may calculate the difference between the replaced forward (or backward) candidate motion vector in the candidate list and the forward (or backward) candidate motion vector before the replacement; compute the sum of this difference and the backward (or forward) candidate motion vector in the candidate motion vector set to obtain a new backward (or forward) candidate motion vector; and replace the original backward (or forward) candidate motion vector in the candidate motion vector set with the new backward (or forward) candidate motion vector.
For example, in one application scenario, the decoded index value of the Merge candidate list is 0. The decoder constructs the Merge list; after the construction is completed, the current block selects, under Merge mode, the forward (or backward) candidate motion vector corresponding to index value 0 of the Merge candidate list to carry out the forward (or backward) prediction in the bi-directional prediction, and this forward (or backward) candidate motion vector is (3, 5). According to the selected (3, 5), the reference image block (i.e. the reference block) in the reference picture of the forward (or backward) prediction process is obtained. Centered on this forward (or backward) prediction reference image block, a search with integer pixel precision within a surrounding region of 1 pixel yields several new reference image blocks (i.e. image blocks) of the same size as the reference block. With the decoded left-neighbouring and/or above-neighbouring reconstructed blocks of the current block as the template, the forward (or backward) prediction reference block and each forward (or backward) prediction image block, together with their decoded left-neighbouring and/or above-neighbouring reconstructed blocks, are matched respectively, and the block with the smallest rate-distortion cost value is taken as the updated reference image block; the motion vector corresponding to the smallest rate-distortion cost value is (4, 5). The decoder takes the reference image block with the smallest rate-distortion cost value as the forward (or backward) prediction block of the current block, and sets the forward (or backward) predicted motion vector of the current block to (4, 5). In the candidate list, the backward (or forward) candidate motion vector corresponding to index value 0 is (1, 4); combining the forward (or backward) candidate motion vector (3, 5) before the replacement and the replaced predicted motion vector (4, 5), the difference (1, 0) is obtained, and the backward (or forward) candidate motion vector (1, 4), updated by combining the difference (1, 0), gives the backward (or forward) predicted motion vector (2, 4) corresponding to the prediction reference block.
For example, in another application scenario, the decoded index value of the Merge candidate list is 0. The decoder constructs the Merge list; after the construction is completed, the current block selects, under Merge mode, the forward (or backward) candidate motion vector corresponding to index value 0 of the Merge candidate list to carry out the forward (or backward) prediction in the bi-directional prediction. The image sequence number of the forward (or backward) reference picture is 3, the image sequence number of the current image is 4, and the image sequence number of the backward (or forward) reference picture is 5. The forward (or backward) candidate motion vector is (3, 5); according to the selected (3, 5), the reference image block (i.e. the reference block) in the reference picture of the forward (or backward) prediction process is obtained. Centered on this forward (or backward) prediction reference image block, a search with integer pixel precision within a surrounding region of 1 pixel yields several new reference image blocks (i.e. image blocks) of the same size as the reference block. With the decoded left-neighbouring and/or above-neighbouring reconstructed blocks of the current block as the template, the forward (or backward) prediction reference block and each forward (or backward) prediction image block, together with their decoded left-neighbouring and/or above-neighbouring reconstructed blocks, are matched respectively, and the block with the smallest rate-distortion cost value is taken as the updated reference image block; the motion vector corresponding to the smallest rate-distortion cost value is (4, 5). The decoder takes the reference image block with the smallest rate-distortion cost value as the forward (or backward) prediction block of the current block, and sets the forward (or backward) predicted motion vector of the current block to (4, 5). In the candidate list, the backward (or forward) candidate motion vector corresponding to index value 0 is (1, 4); combining the forward (or backward) candidate motion vector (3, 5) before the replacement and the replaced predicted motion vector (4, 5), the difference (-1, 0) is obtained by ratio mapping calculation, and the backward (or forward) candidate motion vector (1, 4), updated by combining the difference (-1, 0), gives the backward (or forward) predicted motion vector (0, 4) corresponding to the prediction reference block.
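The two worked examples above can be checked with the following small sketch (the helper name update_backward and its parameters are assumptions): the first call adds the refinement delta directly, the second first maps the delta by the ratio of the image sequence number differences.

```python
def update_backward(fwd_before, fwd_after, bwd_before,
                    poc_cur=None, poc_fwd=None, poc_bwd=None):
    """Update the backward candidate from the forward refinement; if POC values
    are given, map the delta by the ratio of the temporal differences first."""
    dx, dy = fwd_after[0] - fwd_before[0], fwd_after[1] - fwd_before[1]
    if poc_cur is not None:
        ratio = (poc_cur - poc_bwd) / (poc_cur - poc_fwd)  # e.g. (4-5)/(4-3) = -1
        dx, dy = dx * ratio, dy * ratio
    return (bwd_before[0] + dx, bwd_before[1] + dy)

print(update_backward((3, 5), (4, 5), (1, 4)))            # (2, 4): direct delta
print(update_backward((3, 5), (4, 5), (1, 4), 4, 3, 5))   # (0.0, 4.0): mapped delta
```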
In a possible embodiment, the decoder may calculate the difference between the replaced forward (or backward) candidate motion vector in the candidate list and the forward (or backward) candidate motion vector before the replacement; first multiply this difference by a predetermined coefficient, then add it to the backward (or forward) candidate motion vector in the candidate motion vector set to obtain a new backward (or forward) candidate motion vector; and replace the original backward (or forward) candidate motion vector in the candidate motion vector set with the new backward (or forward) candidate motion vector.
In a possible embodiment, the decoder may calculate the forward (or backward) difference between the replaced forward (or backward) candidate motion vector in the candidate list and the forward (or backward) candidate motion vector before the replacement; map this forward (or backward) difference into a backward (or forward) difference; then add the backward (or forward) difference to the backward (or forward) candidate motion vector in the candidate motion vector set to obtain a new backward (or forward) candidate motion vector; and replace the original backward (or forward) candidate motion vector in the candidate motion vector set with the new backward (or forward) candidate motion vector. For example, suppose the calculated forward (or backward) difference between the replaced forward (or backward) candidate motion vector in the candidate list and the forward (or backward) candidate motion vector before the replacement is (2, 4); the image sequence number of the current block in the image frame sequence is 4, the image sequence number of the reference block of the forward prediction is 2, and the image sequence number of the reference block of the backward prediction is 5. Then the temporal difference between the current block and the reference block of the backward prediction (which may be called the second temporal difference) is -1 (i.e. 4-5=-1), and the temporal difference between the current block and the reference block of the forward prediction (which may be called the first temporal difference) is 2 (i.e. 4-2=2). Therefore, according to the proportional relationship between the first temporal difference and the second temporal difference (2/-1=-2), the forward (or backward) difference (2, 4) can be mapped into the backward (or forward) difference (-1, -2); the backward (or forward) difference (-1, -2) is then added to the backward (or forward) candidate motion vector to obtain the new backward (or forward) candidate motion vector, which replaces the original backward (or forward) candidate motion vector in the candidate motion vector set.
In a possible embodiment, the decoder may calculate the forward (or backward) difference between the replaced forward (or backward) candidate motion vector in the candidate list and the forward (or backward) candidate motion vector before the replacement; map this forward (or backward) difference into a backward (or forward) difference and then multiply it by a predetermined coefficient; add the result to the backward (or forward) candidate motion vector in the candidate motion vector set to obtain a new backward (or forward) candidate motion vector; and replace the original backward (or forward) candidate motion vector in the candidate motion vector set with the new backward (or forward) candidate motion vector. In a possible implementation, if, in the process of mapping the forward (or backward) difference into the backward (or forward) difference, the proportional relationship between the resulting first temporal difference and second temporal difference is greater than 0, the predetermined coefficient is defined as 1, or as another preset positive value; if the proportional relationship is less than 0, the predetermined coefficient is defined as -1, or as another preset negative value.
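The last two embodiments can be illustrated together with the following hedged sketch (the function name and parameters are assumptions): the forward (or backward) difference is mapped by the ratio of the temporal differences and, optionally, multiplied by a predetermined coefficient whose sign follows the sign of that ratio.

```python
def map_difference(diff_fwd, poc_cur, poc_fwd, poc_bwd, use_coefficient=False):
    td_fwd = poc_cur - poc_fwd          # first temporal difference, e.g. 4 - 2 = 2
    td_bwd = poc_cur - poc_bwd          # second temporal difference, e.g. 4 - 5 = -1
    ratio = td_fwd / td_bwd             # e.g. 2 / -1 = -2
    diff_bwd = (diff_fwd[0] / ratio, diff_fwd[1] / ratio)
    if use_coefficient:
        coeff = 1 if ratio > 0 else -1  # predetermined coefficient, sign of the ratio
        diff_bwd = (diff_bwd[0] * coeff, diff_bwd[1] * coeff)
    return diff_bwd

print(map_difference((2, 4), 4, 2, 5))                        # (-1.0, -2.0)
print(map_difference((2, 4), 4, 2, 5, use_coefficient=True))  # (1.0, 2.0), ratio < 0 -> coeff -1
```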
Step S907: combining the decoded information, the decoder obtains the backward (or forward) prediction block (which may also be called the second prediction block) based on the backward (or forward) candidate motion vector of the current block in the updated candidate list.
Step S908: after obtaining the forward (or backward) first prediction block and the backward (or forward) second prediction block, the decoder processes the first prediction block and the second prediction block based on a preset algorithm to obtain the reconstructed block of the current block.
It should also be noted that steps not described in detail on the decoder side may refer to the corresponding descriptions of the encoder side, and to the related descriptions above. For brevity of the specification, they are not expanded here one by one.
Referring to Figure 23, Figure 23 shows another predicted motion vector generation method provided by an embodiment of the present invention, described from the perspective of the encoder. This method can be used by the encoder in bi-directional prediction. The difference between this method embodiment and the embodiment of Figure 21 is that, in this method, different prediction modes (i.e. a hybrid coding mode) are used for forward prediction and backward prediction respectively: when the forward (or backward) direction uses Merge mode, the backward (or forward) direction correspondingly uses AMVP mode. The details are described below:
Steps S1001–S1004 are described first:
Step S1001: the encoder uses Merge mode for the forward (or backward, the same below) prediction, and constructs a Merge candidate list based on Merge mode.
Step S1002: in the constructed Merge candidate list, the encoder selects the forward (or backward) candidate motion vector.
Step S1003: the encoder updates this forward (or backward) candidate motion vector in the template matching manner.
Step S1004: based on the forward (or backward) candidate motion vector of the current block in the updated Merge candidate list, the encoder obtains the forward (or backward) prediction block (which may also be called the first prediction block), and encodes the candidate list index corresponding to the first prediction block.
Steps S1005–S1007 are described below:
Step S1005: the encoder uses AMVP mode for the backward (or forward, the same below) prediction, and constructs an AMVP candidate list based on AMVP mode.
Step S1006: in the constructed AMVP candidate list, the encoder selects the backward (or forward, the same below) candidate motion vector.
Step S1007: the encoder uses the forward (or backward) candidate motion vectors before and after the update to update the backward (or forward) candidate motion vector.
Specifically, the encoder calculates the difference between the replaced forward (or backward) candidate motion vector in the Merge candidate list (i.e. the new forward (or backward) candidate motion vector obtained in step S1003) and the forward (or backward) candidate motion vector before the replacement (i.e. the candidate motion vector selected in step S1002); combines this difference with the backward (or forward) candidate motion vector selected in step S1006 to obtain a new backward (or forward) candidate motion vector; and then replaces the backward (or forward) candidate motion vector selected in step S1006 in the AMVP candidate list with the new backward (or forward) candidate motion vector, so that the AMVP candidate list is further updated.
Step S1008: based on the backward (or forward) candidate motion vector of the current block in the updated AMVP candidate list, the encoder obtains the backward (or forward) prediction block (which may also be called the second prediction block), encodes the candidate list index corresponding to the second prediction block, and encodes the index of the reference picture corresponding to the second prediction block and the MVD information.
Step S1009: after obtaining the forward (or backward) first prediction block and the backward (or forward) second prediction block, the encoder processes the first prediction block and the second prediction block based on a preset algorithm to obtain the reconstructed block of the current block, and then sends a bitstream to the decoder. It should be understood that the bitstream includes the index value of the Merge candidate list, the index value of the AMVP candidate list, the index value of the reference picture and the MVD information.
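For illustration only, the syntax elements that the hybrid mode of Figure 23 places in the bitstream might be grouped as follows (the field names are assumptions, not a normative syntax): a Merge index for the direction coded in Merge mode, plus an AMVP index, a reference picture index and an MVD for the direction coded in AMVP mode.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class HybridModeSyntax:
    merge_idx: int            # index value of the Merge candidate list
    amvp_idx: int             # index value of the AMVP candidate list
    ref_pic_idx: int          # index value of the reference picture (AMVP direction)
    mvd: Tuple[int, int]      # motion vector difference (AMVP direction)

syntax = HybridModeSyntax(merge_idx=0, amvp_idx=1, ref_pic_idx=0, mvd=(1, -1))
```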
It should also be noted that steps not described in detail on the encoder side may refer to the related descriptions above. For brevity of the specification, they are not expanded here one by one.
Referring to Figure 24, Figure 24 shows another predicted motion vector generation method provided by an embodiment of the present invention, described from the perspective of the decoder. This method can be used by the decoder in bi-directional prediction, and the decoder in this method may correspond to the encoder in Figure 23.
Steps S1101–S1104 are described first:
Step S1101: the decoder parses the bitstream and, according to the index value of the Merge candidate list in the bitstream, determines that Merge mode is used for the forward (or backward, the same below) prediction, and constructs a Merge candidate list based on Merge mode.
Step S1102: combining the decoded index value, the decoder selects the forward (or backward, the same below) candidate motion vector in the constructed Merge candidate list.
Step S1103: the decoder updates this forward (or backward) candidate motion vector in the template matching manner.
Step S1104: combining the decoded information, the decoder obtains the forward (or backward) prediction block (which may also be called the first prediction block) based on the forward (or backward) candidate motion vector of the current block in the updated candidate list.
Steps S1105–S1107 are described below:
Step S1105: the decoder parses the bitstream and, according to the index value of the AMVP candidate list in the bitstream, determines that AMVP mode is used for the backward (or forward, the same below) prediction, and constructs an AMVP candidate list based on AMVP mode.
Step S1106: combining the decoded index value, the decoder selects the backward (or forward) candidate motion vector in the constructed AMVP candidate list.
Step S1107: the decoder uses the forward (or backward) candidate motion vectors before and after the update to update the backward (or forward) candidate motion vector.
Specifically, the decoder calculates the difference between the replaced forward (or backward) candidate motion vector in the Merge candidate list (i.e. the new forward (or backward) candidate motion vector obtained in step S1103) and the forward (or backward) candidate motion vector before the replacement (i.e. the candidate motion vector selected in step S1102); combines this difference with the backward (or forward) candidate motion vector selected in step S1106 to obtain a new backward (or forward) candidate motion vector; and then replaces the backward (or forward) candidate motion vector determined in step S1106 with the new backward (or forward) candidate motion vector, so that the candidate list is further updated.
Step S1108: combining the decoded information, the decoder obtains the backward (or forward) prediction block (which may also be called the second prediction block) based on the backward (or forward) candidate motion vector of the current block in the updated AMVP candidate list.
Step S1109: after obtaining the forward (or backward) first prediction block and the backward (or forward) second prediction block, the decoder processes the first prediction block and the second prediction block based on a preset algorithm to obtain the reconstructed block of the current block.
It should also be noted that steps not described in detail on the decoder side may refer to the corresponding descriptions of the encoder side, and to the related descriptions above. For brevity of the specification, they are not expanded here one by one.
It can be seen that, in the embodiments of the present invention, the video coding and decoding system can verify, in the template matching manner, whether an image block within a certain range of the reference picture of the current block (or even within the entire reference picture) matches the current block better, and can update the candidate motion vectors of the candidate list constructed based on Merge or AMVP mode; based on the updated candidate list, obtaining the optimal reference block of the current block in the encoding and decoding process can be guaranteed. In addition, the embodiments of the present invention further provide a hybrid prediction mode; based on this hybrid prediction mode, obtaining the optimal reference block of the current block in the encoding and decoding process can likewise be guaranteed, and the encoding and decoding efficiency is improved while the optimal reference block of the current block is still obtained.
The coding and decoding systems and related methods of the embodiments of the present invention are described above; the related devices involved in the embodiments of the present invention are described further below.
Referring to Figure 25, an embodiment of the present invention provides a device 1200 for generating a predicted motion vector. The device 1200 can be applied to the encoder side or to the decoder side. The device 1200 includes a processor 1201 and a memory 1202, which are connected (for example, interconnected by a bus 1204). In a possible embodiment, the device 1200 may further include a transceiver 1203, connected to the processor 1201 and the memory 1202, for receiving/sending data.
The memory 1202 includes, but is not limited to, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM) or a compact disc read-only memory (CD-ROM), and is used to store program code and video data.
The processor 1201 may be one or more central processing units (CPU); in the case where the processor 1201 is one CPU, the CPU may be a single-core CPU or a multi-core CPU.
The processor 1201 is configured to read the program code stored in the memory 1202 and perform the following operations:
constructing a candidate motion vector set of a to-be-processed block based on a prediction mode, where the prediction mode is Merge mode or AMVP mode;
determining at least two first reference motion vectors according to a first candidate motion vector in the candidate motion vector set, where the first reference motion vectors are used to determine reference blocks of the to-be-processed block in a first reference picture of the to-be-processed block;
separately calculating a pixel difference value or a rate-distortion cost value between at least one first neighbouring reconstructed block of each of the at least two determined reference blocks and at least one second neighbouring reconstructed block of the to-be-processed block, where the at least one first neighbouring reconstructed block and the at least one second neighbouring reconstructed block are identical in shape and equal in size;
obtaining a first predicted motion vector of the to-be-processed block according to the first reference motion vector with the smallest corresponding pixel difference value or rate-distortion cost value among the at least two first reference motion vectors.
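A compact, hedged sketch of the four operations listed above is given below; the helper names, the use of SAD as the pixel difference value and the offset-based derivation of the first reference motion vectors are assumptions made for illustration, not the device's normative behaviour.

```python
def select_first_predicted_mv(first_candidate_mv, offsets, cur_template,
                              ref_template_at):
    """offsets: displacements used to derive the first reference motion vectors.
    cur_template: neighbouring reconstructed samples of the to-be-processed block.
    ref_template_at(mv): neighbouring reconstructed samples of the reference
    block that mv points to (same shape and size as cur_template)."""
    best_mv, best_cost = None, float("inf")
    for dx, dy in offsets:                                   # at least two reference MVs
        mv = (first_candidate_mv[0] + dx, first_candidate_mv[1] + dy)
        ref_template = ref_template_at(mv)
        cost = sum(abs(a - b) for a, b in zip(cur_template, ref_template))  # SAD
        if cost < best_cost:
            best_mv, best_cost = mv, cost
    return best_mv  # first predicted motion vector of the to-be-processed block
```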
It should be noted that, in specific embodiments, the processor 1201 can be used to execute the related methods described in the embodiments of Figure 10 to Figure 24 above; for brevity of the specification, details are not repeated here.
Referring to Figure 26, an embodiment of the present invention provides another device 1300 for generating a predicted motion vector. The device 1300 can be applied to the encoder side or to the decoder side. The device 1300 includes a processor 1301 and a memory 1302, which are connected (for example, interconnected by a bus 1304). In a possible embodiment, the device 1300 may further include a transceiver 1303, connected to the processor 1301 and the memory 1302, for receiving/sending data.
The memory 1302 includes, but is not limited to, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM) or a compact disc read-only memory (CD-ROM), and is used to store program code and video data.
The processor 1301 may be one or more central processing units (CPU); in the case where the processor 1301 is one CPU, the CPU may be a single-core CPU or a multi-core CPU.
The processor 1301 is configured to read the program code stored in the memory 1302 to perform bi-directional prediction on a to-be-processed block, where the bi-directional prediction includes first direction prediction and second direction prediction, the first direction prediction is prediction based on a first reference frame list, and the second direction prediction is prediction based on a second reference frame list. The processor 1301 specifically performs the following operations:
obtaining a first prediction mode and generating a first candidate motion vector set, where the first candidate motion vector set is used to generate a first direction predicted motion vector in the first direction prediction;
obtaining a second prediction mode and generating a second candidate motion vector set, where the second candidate motion vector set is used to generate a second direction predicted motion vector in the second direction prediction,
where, when the first prediction mode is AMVP mode, the second prediction mode is Merge mode, or, when the first prediction mode is Merge mode, the second prediction mode is AMVP mode.
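The mode-pairing constraint stated above can be summarised in a few lines (illustrative only; the function name is an assumption): the two directions of the bi-directional prediction always use complementary modes, so signalling one mode implies the other.

```python
def second_mode(first_mode: str) -> str:
    if first_mode == "AMVP":
        return "Merge"
    if first_mode == "Merge":
        return "AMVP"
    raise ValueError("first prediction mode must be 'AMVP' or 'Merge'")

assert second_mode("Merge") == "AMVP"
assert second_mode("AMVP") == "Merge"
```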
It should be noted that, in specific embodiments, the processor 1301 can be used to execute the related methods described in the embodiments of Figure 10 to Figure 24 above; for brevity of the specification, details are not repeated here.
Based on the same inventive concept, an embodiment of the present invention provides another device 1400 for generating a predicted motion vector. Referring to Figure 27, the device 1400 includes:
a set generation module 1401, configured to construct a candidate motion vector set of a to-be-processed block;
a template matching module 1402, configured to determine at least two first reference motion vectors according to a first candidate motion vector in the candidate motion vector set, where the first reference motion vectors are used to determine reference blocks of the to-be-processed block in a first reference picture of the to-be-processed block;
the template matching module 1402 is further configured to separately calculate a pixel difference value or a rate-distortion cost value between at least one first neighbouring reconstructed block of each of the at least two determined reference blocks and at least one second neighbouring reconstructed block of the to-be-processed block, where the at least one first neighbouring reconstructed block and the at least one second neighbouring reconstructed block are identical in shape and equal in size;
a predicted motion vector generation module 1403, configured to obtain a first predicted motion vector of the to-be-processed block according to the first reference motion vector with the smallest corresponding pixel difference value or rate-distortion cost value among the at least two first reference motion vectors.
It should be noted that, from the related descriptions of the embodiments of Figure 10 to Figure 24 above, those skilled in the art can learn the implementation methods of the modules included in the device 1400; for brevity of the specification, details are not repeated here.
Based on the same inventive concept, an embodiment of the present invention provides another device 1500 for generating a predicted motion vector. Referring to Figure 28, the device 1500 is configured to perform bi-directional prediction on a to-be-processed block, where the bi-directional prediction includes first direction prediction and second direction prediction, the first direction prediction is prediction based on a first reference frame list, and the second direction prediction is prediction based on a second reference frame list. The device 1500 specifically includes:
a first set generation module 1501, configured to obtain a first prediction mode and generate a first candidate motion vector set, where the first candidate motion vector set is used to generate a first direction predicted motion vector in the first direction prediction;
a second set generation module 1502, configured to obtain a second prediction mode and generate a second candidate motion vector set, where the second candidate motion vector set is used to generate a second direction predicted motion vector in the second direction prediction,
where, when the first prediction mode is AMVP mode, the second prediction mode is Merge mode, or, when the first prediction mode is Merge mode, the second prediction mode is AMVP mode.
It should be noted that, from the related descriptions of the embodiments of Figure 10 to Figure 24 above, those skilled in the art can learn the implementation methods of the modules included in the device 1500; for brevity of the specification, details are not repeated here.
Those of ordinary skill in the art may realize that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are implemented by hardware or by software depends on the specific application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each specific application, but such implementation should not be considered to go beyond the scope of the present application.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, devices and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated here.
In the above embodiments, the implementation may be realized wholly or partly by software, hardware, firmware or any combination thereof. When software is used, the implementation may be realized wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions; when the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present invention are generated wholly or partly. The computer may be a general-purpose computer, a special-purpose computer, a computer network or another programmable device. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center in a wired manner (such as coaxial cable, optical fiber or digital subscriber line) or a wireless manner (such as infrared or microwave). The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or a data center integrating one or more usable media. The usable medium may be a magnetic medium (such as a floppy disk, a hard disk or a magnetic tape), an optical medium (such as a DVD), or a semiconductor medium (such as a solid state disk).
In the above embodiments, the description of each embodiment has its own emphasis; for parts not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.

Claims (48)

1. A predicted motion vector generation method, characterized by comprising:
constructing a candidate motion vector set of a to-be-processed block;
determining at least two first reference motion vectors according to a first candidate motion vector in the candidate motion vector set, wherein the first reference motion vectors are used to determine reference blocks of the to-be-processed block in a first reference picture of the to-be-processed block;
separately calculating a pixel difference value or a rate-distortion cost value between one or more first neighbouring reconstructed blocks of each of the at least two determined reference blocks and one or more second neighbouring reconstructed blocks of the to-be-processed block, wherein the first neighbouring reconstructed blocks and the second neighbouring reconstructed blocks are identical in shape and equal in size;
obtaining a first predicted motion vector of the to-be-processed block according to the first reference motion vector with the smallest corresponding pixel difference value or rate-distortion cost value among the at least two first reference motion vectors.
2. The method according to claim 1, characterized in that the method is used for encoding the to-be-processed block, and the obtaining a predicted motion vector of the to-be-processed block according to the first reference motion vector with the smallest corresponding pixel difference value or rate-distortion cost value among the at least two first reference motion vectors comprises:
replacing the first candidate motion vector in the candidate motion vector set with the first reference motion vector with the smallest corresponding pixel difference value or rate-distortion cost value, to obtain an updated candidate motion vector set;
obtaining the first predicted motion vector of the to-be-processed block from the updated candidate motion vector set.
3. The method according to claim 2, characterized in that the obtaining the predicted motion vector of the to-be-processed block from the updated candidate motion vector set comprises:
selecting, from the updated candidate motion vector set, the first predicted motion vector of the to-be-processed block according to a rate-distortion cost value.
4. The method according to claim 1, characterized in that the method is used for decoding the to-be-processed block, and the obtaining a first predicted motion vector of the to-be-processed block according to the first reference motion vector with the smallest corresponding pixel difference value or rate-distortion cost value among the at least two first reference motion vectors comprises:
using the first reference motion vector with the smallest corresponding pixel difference value or rate-distortion cost value as the first predicted motion vector of the to-be-processed block.
5. The method according to claim 4, characterized in that, before the constructing the candidate motion vector set of the to-be-processed block, the method further comprises:
parsing a bitstream to obtain identification information, wherein the identification information is used to indicate whether the candidate motion vector set is a candidate motion vector set used by Merge mode or a candidate motion vector set used by AMVP mode;
correspondingly, the constructing the candidate motion vector set of the to-be-processed block comprises:
when the identification information indicates that the candidate motion vector set is a candidate motion vector set used by Merge mode, constructing the candidate motion vector set used by the to-be-processed block under Merge mode;
when the identification information indicates that the candidate motion vector set is a candidate motion vector set used by AMVP mode, constructing the candidate motion vector set used by the to-be-processed block under AMVP mode.
6. The method according to claim 4, characterized in that the method is used for decoding the to-be-processed block, and further comprises:
parsing the bitstream to obtain index information, and determining the first candidate motion vector according to the index information.
7. The method according to any one of claims 1 to 6, characterized in that the candidate motion vector set is used to perform bi-directional prediction on the to-be-processed block, the bi-directional prediction including first direction prediction and second direction prediction, wherein the first direction prediction is prediction based on a first reference frame list and the second direction prediction is prediction based on a second reference frame list.
8. The method according to claim 7, characterized in that the first reference frame list includes the first reference picture and the second reference frame list includes a second reference picture; and the method further comprises:
calculating a difference between the first reference motion vector with the smallest corresponding pixel difference value or rate-distortion cost value and the first candidate motion vector;
obtaining a third candidate motion vector according to the difference and a second candidate motion vector in the candidate motion vector set, wherein the second candidate motion vector is used to determine a reference block of the to-be-processed block in the second reference picture.
9. The method according to claim 8, characterized in that the method is used for encoding the to-be-processed block; and after the obtaining a third candidate motion vector, the method further comprises:
replacing the second candidate motion vector in the candidate motion vector set with the third candidate motion vector.
10. The method according to claim 8, characterized in that the method is used for decoding the to-be-processed block; and after the obtaining a third candidate motion vector, the method further comprises:
using the third candidate motion vector as a second predicted motion vector of the to-be-processed block.
11. The method according to claim 7, characterized in that the first reference frame list includes the first reference picture and the second reference frame list includes a second reference picture; and the method further comprises:
determining at least two second reference motion vectors according to a second candidate motion vector in the candidate motion vector set, wherein the second reference motion vectors are used to determine reference blocks of the to-be-processed block in the second reference picture;
separately calculating a pixel difference value or a rate-distortion cost value between one or more third neighbouring reconstructed blocks of each of the at least two determined reference blocks and one or more fourth neighbouring reconstructed blocks of the to-be-processed block, wherein the third neighbouring reconstructed blocks and the fourth neighbouring reconstructed blocks are identical in shape and equal in size;
obtaining a second predicted motion vector of the to-be-processed block according to the second reference motion vector with the smallest corresponding pixel difference value or rate-distortion cost value among the at least two second reference motion vectors.
12. The method according to claim 9, characterized in that the obtaining the predicted motion vector of the to-be-processed block according to the second reference motion vector with the smallest corresponding pixel difference value or rate-distortion cost value among the at least two second reference motion vectors comprises:
replacing the second candidate motion vector in the candidate motion vector set with the second reference motion vector with the smallest corresponding pixel difference value or rate-distortion cost value, to obtain a candidate motion vector set updated again;
obtaining the second predicted motion vector of the to-be-processed block from the candidate motion vector set updated again.
13. The method according to any one of claims 7 to 12, characterized in that, before the determining at least two first reference motion vectors according to the first candidate motion vector in the candidate motion vector set, the method comprises:
when a fourth candidate motion vector in the candidate motion vector set is generated by the second direction prediction, scaling up or down the fourth candidate motion vector according to a proportional relationship, to obtain the first candidate motion vector;
wherein the proportional relationship includes a ratio of a first temporal difference to a second temporal difference, the first temporal difference is the difference between the image sequence number of the reference image frame determined by the first candidate motion vector and the image sequence number of the image frame in which the to-be-processed block is located, and the second temporal difference is the difference between the image sequence number of the reference image frame determined by the fourth candidate motion vector and the image sequence number of the image frame in which the to-be-processed block is located.
14. The method according to any one of claims 7 to 12, characterized in that the determining at least two first reference motion vectors according to the first candidate motion vector in the candidate motion vector set comprises:
when the first candidate motion vector is generated by the bi-directional prediction, determining the at least two first reference motion vectors according to a fifth candidate motion vector, wherein the reference block determined by the first candidate motion vector is obtained by weighting a first direction reference block determined according to the fifth candidate motion vector and a second direction reference block determined according to a sixth candidate motion vector, the fifth candidate motion vector is generated by the first direction prediction, and the sixth candidate motion vector is generated by the second direction prediction.
15. The method according to any one of claims 1 to 14, characterized in that the determining at least two first reference motion vectors according to the first candidate motion vector in the candidate motion vector set comprises:
determining a reference block of the to-be-processed block in the first reference picture according to the first candidate motion vector;
searching positions close to the determined reference block with a target precision to obtain at least two candidate reference blocks, wherein each candidate reference block corresponds to one first reference motion vector, and the target precision includes one of 4-pixel precision, 2-pixel precision, integer pixel precision, half-pixel precision, 1/4-pixel precision and 1/8-pixel precision.
16. The method according to any one of claims 1 to 15, characterized by comprising: a first positional relationship exists between the at least one first neighbouring reconstructed block and the reference block determined by the first reference motion vector, a second positional relationship exists between the second neighbouring reconstructed block and the to-be-processed block, and the first positional relationship is identical to the second positional relationship.
17. a kind of generation method of predicted motion vector, which is characterized in that the method is bi-directional predicted for be processed piece, Described bi-directional predicted including first direction prediction and second direction prediction, the first direction prediction is to be based on the first reference frame list The prediction of table, the second direction prediction is the prediction based on the second reference frame lists, which comprises
Obtain the first prediction mode, generate the first set of candidate motion vectors and close, first set of candidate motion vectors share in First direction predicted motion vector is generated in the first direction prediction;
Obtain the second prediction mode, generate the second set of candidate motion vectors and close, second set of candidate motion vectors share in Second direction predicted motion vector is generated in the second direction prediction,
Wherein, when first prediction mode is AMVP mode, second prediction mode is Merge mode, alternatively, working as institute State the first prediction mode be Merge mode when, second prediction mode be AMVP mode.
18. according to the method for claim 17, which is characterized in that when first prediction mode is Merge mode, institute It states and obtains the first prediction mode, generate the first set of candidate motion vectors and close, comprising:
Using Merge mode candidate motion vector used in first direction prediction, first Candidate Motion is generated Set of vectors.
19. according to the method for claim 17, which is characterized in that when second prediction mode is Merge mode, institute It states and obtains the second prediction mode, generate the second set of candidate motion vectors and close, comprising:
Using Merge mode candidate prediction vector used in second direction prediction, second Candidate Motion is generated Set of vectors.
20. The method according to claim 17, characterized in that the method is used for decoding the to-be-processed block, and before obtaining the first prediction mode and generating the first candidate motion vector set, the method further comprises:
parsing a bitstream to obtain first identification information, wherein the first identification information is used to indicate whether the first prediction mode is the Merge mode or the AMVP mode.
21. The method according to claim 20, characterized in that the first identification information being used to indicate whether the first prediction mode is the Merge mode or the AMVP mode comprises: the first identification information is used to indicate that the first prediction mode is the Merge mode, or, the first identification information is used to indicate index information of the first candidate motion vector set generated based on the Merge mode, or, the first identification information is used to indicate that the first prediction mode is the AMVP mode, or, the first identification information is used to indicate index information of the first candidate motion vector set generated based on the AMVP mode.
22. The method according to claim 20, characterized by comprising: when the first identification information is used to indicate that the first prediction mode is the Merge mode, using index information of a preset Merge mode.
23. The method according to claim 22, characterized in that the index information of the preset Merge mode is the first candidate motion vector of the Merge mode.
24. The method according to claim 20, characterized in that, before obtaining the second prediction mode and generating the second candidate motion vector set, the method further comprises:
after determining that the first prediction mode is the Merge mode, parsing the bitstream to obtain second identification information, wherein the second identification information is used to indicate index information of the second prediction mode, and the second prediction mode is the AMVP mode.
25. The method according to claim 20, characterized in that, before obtaining the second prediction mode and generating the second candidate motion vector set, the method further comprises:
after determining that the first prediction mode is the AMVP mode, parsing the bitstream to obtain second identification information, wherein the second identification information is used to indicate index information of the second prediction mode, and the second prediction mode is the Merge mode.
26. The method according to any one of claims 17 to 25, characterized by comprising: after determining that the first prediction mode is the AMVP mode, parsing the bitstream to obtain a reference frame index and motion vector difference information of the first-direction prediction; or, after determining that the second prediction mode is the AMVP mode, parsing the bitstream to obtain a reference frame index and motion vector difference information of the second-direction prediction.
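A rough decode-side parsing flow implied by claims 20 through 26 is sketched below; the StubReader interface, the syntax element names, and the exact parsing order are assumptions for illustration only, not the bitstream syntax defined by the application.

```python
class StubReader:
    """Toy stand-in for a bitstream reader; the interface is an assumption."""
    def __init__(self, symbols):
        self.symbols = list(symbols)
    def read(self):
        return self.symbols.pop(0)

def parse_directional_modes(reader):
    """Sketch of the parsing order implied by claims 20-26 (assumed syntax)."""
    first_is_merge = bool(reader.read())                  # first identification information
    first_mode = "Merge" if first_is_merge else "AMVP"
    second_mode = "AMVP" if first_is_merge else "Merge"   # complementary mode
    parsed = {"first_mode": first_mode, "second_mode": second_mode}
    # Per claim 26, the AMVP-coded direction also carries a reference frame
    # index and motion vector difference information.
    amvp_dir = "second" if first_is_merge else "first"
    parsed[amvp_dir + "_ref_idx"] = reader.read()
    parsed[amvp_dir + "_mvd"] = (reader.read(), reader.read())
    return parsed

print(parse_directional_modes(StubReader([1, 2, -3, 5])))
# -> {'first_mode': 'Merge', 'second_mode': 'AMVP', 'second_ref_idx': 2, 'second_mvd': (-3, 5)}
```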
27. A device for generating a predicted motion vector, characterized in that the device comprises:
a set generation module, configured to construct a candidate motion vector set of a to-be-processed block;
a template matching module, configured to determine at least two first reference motion vectors according to a first candidate motion vector in the candidate motion vector set, wherein the first reference motion vectors are used to determine reference blocks of the to-be-processed block in a first reference picture of the to-be-processed block;
the template matching module is further configured to separately calculate pixel difference values or rate-distortion cost values between one or more first neighboring reconstructed blocks of each of the at least two determined reference blocks and one or more second neighboring reconstructed blocks of the to-be-processed block, wherein the first neighboring reconstructed blocks and the second neighboring reconstructed blocks have identical shapes and equal sizes;
a predicted motion vector generation module, configured to obtain a first predicted motion vector of the to-be-processed block according to the first reference motion vector, among the at least two first reference motion vectors, with the smallest corresponding pixel difference value or rate-distortion cost value.
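As an illustration of the template matching described for the device above, the sketch below computes a sum-of-absolute-differences (SAD) cost between the neighboring reconstructed samples (the "template") of each candidate reference block and the corresponding template of the to-be-processed block, and keeps the candidate with the smallest cost. The single-row template shape and the SAD metric are assumptions; a rate-distortion cost could be used instead.

```python
import numpy as np

def template_sad(ref_template: np.ndarray, cur_template: np.ndarray) -> int:
    """SAD between a reference-block template and the current-block template."""
    return int(np.abs(ref_template.astype(np.int64) - cur_template.astype(np.int64)).sum())

def best_reference_mv(candidates, cur_template):
    """candidates: list of (mv, ref_template) pairs; returns the mv with minimal SAD."""
    costs = [(template_sad(ref_tpl, cur_template), mv) for mv, ref_tpl in candidates]
    return min(costs)[1]

# Toy example: two candidate reference MVs with 1x4 "above" templates (assumed shape).
cur = np.array([[10, 12, 11, 13]])
cands = [((2, 0), np.array([[40, 42, 41, 43]])),   # poor match
         ((1, 0), np.array([[10, 13, 11, 12]]))]   # good match
print(best_reference_mv(cands, cur))  # -> (1, 0)
```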
28. The device according to claim 27, characterized in that the device is used for decoding the to-be-processed block; and the predicted motion vector generation module being configured to obtain the first predicted motion vector of the to-be-processed block according to the first reference motion vector, among the at least two first reference motion vectors, with the smallest corresponding pixel difference value or rate-distortion cost value comprises:
the predicted motion vector generation module is configured to use the first reference motion vector with the smallest corresponding pixel difference value or rate-distortion cost value as the first predicted motion vector of the to-be-processed block.
29. The device according to claim 26 or 27, characterized in that the device is used for decoding the to-be-processed block; and the device further comprises a parsing module;
the parsing module is configured to parse a bitstream to obtain identification information, wherein the identification information is used to indicate whether the candidate motion vector set is a candidate motion vector set used by the Merge mode or a candidate motion vector set used by the AMVP mode;
when the identification information indicates that the candidate motion vector set is a candidate motion vector set used by the Merge mode, the set generation module is configured to construct the candidate motion vector set used by the to-be-processed block in the Merge mode;
when the identification information indicates that the candidate motion vector set is a candidate motion vector set used by the AMVP mode, the set generation module is configured to construct the candidate motion vector set used by the to-be-processed block in the AMVP mode.
30. The device according to any one of claims 27 to 29, characterized in that the parsing module is further configured to parse the bitstream to obtain index information, and determine the first candidate motion vector according to the index information.
31. The device according to any one of claims 27 to 30, characterized in that the candidate motion vector set is used for bi-directional prediction of the to-be-processed block, the bi-directional prediction comprises first-direction prediction and second-direction prediction, the first-direction prediction is prediction based on a first reference frame list, and the second-direction prediction is prediction based on a second reference frame list.
32. The device according to claim 31, characterized in that the first reference frame list comprises the first reference picture, the second reference frame list comprises a second reference picture, and the predicted motion vector generation module is further configured to:
calculate a difference between the first candidate motion vector after replacement and the first candidate motion vector before replacement, wherein the first candidate motion vector is replaced by the first reference motion vector with the smallest corresponding pixel difference value or rate-distortion cost value;
obtain a third candidate motion vector according to the difference and a second candidate motion vector in the candidate motion vector set, wherein the second candidate motion vector is used to determine a reference block of the to-be-processed block in the second reference picture, and use the third candidate motion vector as a second predicted motion vector of the to-be-processed block.
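A minimal sketch of the step in claim 32: the difference between the first candidate motion vector after and before refinement is applied to the second candidate motion vector to derive the third candidate motion vector. Directly adding the difference (rather than, for example, mirroring it) is an assumption made only for illustration.

```python
def derive_third_candidate(refined_first_mv, original_first_mv, second_mv):
    """Transfer the first-direction refinement to the second direction.

    All motion vectors are (x, y) tuples in the same units.
    """
    diff = (refined_first_mv[0] - original_first_mv[0],
            refined_first_mv[1] - original_first_mv[1])
    # Assumed: the difference is simply added to the second candidate motion vector.
    return (second_mv[0] + diff[0], second_mv[1] + diff[1])

# The first-direction MV was refined from (8, -4) to (9, -3);
# apply the same (1, 1) shift to the second-direction candidate (-6, 2).
print(derive_third_candidate((9, -3), (8, -4), (-6, 2)))  # -> (-5, 3)
```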
33. The device according to claim 31, characterized in that the first reference frame list comprises the first reference picture, and the second reference frame list comprises a second reference picture;
the template matching module is further configured to determine at least two second reference motion vectors according to a second candidate motion vector in the candidate motion vector set, wherein the second reference motion vectors are used to determine reference blocks of the to-be-processed block in the second reference picture;
the template matching module is further configured to separately calculate pixel difference values or rate-distortion cost values between one or more third neighboring reconstructed blocks of each of the at least two determined reference blocks and one or more fourth neighboring reconstructed blocks of the to-be-processed block, wherein the third neighboring reconstructed blocks and the fourth neighboring reconstructed blocks have identical shapes and equal sizes;
the predicted motion vector generation module is further configured to obtain a second predicted motion vector of the to-be-processed block according to the second reference motion vector, among the at least two second reference motion vectors, with the smallest corresponding pixel difference value or rate-distortion cost value.
34. The device according to claim 33, characterized in that the predicted motion vector generation module obtaining a predicted motion vector of the to-be-processed block according to the second reference motion vector, among the at least two second reference motion vectors, with the smallest corresponding pixel difference value or rate-distortion cost value comprises:
the predicted motion vector generation module is configured to replace the second candidate motion vector in the candidate motion vector set with the second reference motion vector with the smallest corresponding pixel difference value or rate-distortion cost value, to obtain an updated candidate motion vector set;
obtaining the second predicted motion vector of the to-be-processed block from the updated candidate motion vector set.
35. The device according to any one of claims 31 to 34, characterized in that, before the template matching module determines the at least two first reference motion vectors according to the first candidate motion vector in the candidate motion vector set, the template matching module is configured to:
when a fourth candidate motion vector in the candidate motion vector set is generated by the second-direction prediction, scale the fourth candidate motion vector up or down according to a proportional relationship to obtain the first candidate motion vector;
wherein the proportional relationship comprises a ratio of a first temporal difference to a second temporal difference, the first temporal difference is the difference between the picture order count of the reference picture frame determined by the first candidate motion vector and the picture order count of the picture frame containing the to-be-processed block, and the second temporal difference is the difference between the picture order count of the reference picture frame determined by the fourth candidate motion vector and the picture order count of the picture frame containing the to-be-processed block.
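The scaling in claim 35 can be illustrated as below: the fourth candidate motion vector, which points into the second reference frame list, is stretched or shrunk by the ratio of the two temporal (picture order count) differences. The floating-point arithmetic and rounding here are simplifications; real codecs typically use fixed-point scaling.

```python
def scale_candidate_mv(mv, poc_cur, poc_ref_first, poc_ref_fourth):
    """Scale a second-direction candidate MV to obtain the first candidate MV.

    mv:              (x, y) fourth candidate motion vector.
    poc_cur:         picture order count of the frame containing the current block.
    poc_ref_first:   POC of the reference frame targeted by the first candidate MV.
    poc_ref_fourth:  POC of the reference frame targeted by the fourth candidate MV.
    """
    td_first = poc_ref_first - poc_cur     # first temporal difference
    td_fourth = poc_ref_fourth - poc_cur   # second temporal difference
    ratio = td_first / td_fourth
    return (round(mv[0] * ratio), round(mv[1] * ratio))

# A backward MV of (6, -2) toward POC 20 is mapped to a forward reference at POC 12
# for a current picture at POC 16: the ratio is (12-16)/(20-16) = -1, so the MV flips.
print(scale_candidate_mv((6, -2), 16, 12, 20))  # -> (-6, 2)
```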
36. The device according to any one of claims 31 to 34, characterized in that the template matching module determining the at least two first reference motion vectors according to the first candidate motion vector in the candidate motion vector set comprises:
the template matching module is configured to, when the first candidate motion vector is generated by the bi-directional prediction, determine the at least two first reference motion vectors according to a fifth candidate motion vector, wherein the reference block determined by the first candidate motion vector is obtained by weighting a first-direction reference block determined according to the fifth candidate motion vector and a second-direction reference block determined according to a sixth candidate motion vector, the fifth candidate motion vector is generated by the first-direction prediction, and the sixth candidate motion vector is generated by the second-direction prediction.
37. The device according to any one of claims 27 to 36, characterized in that the template matching module determining the at least two first reference motion vectors according to the first candidate motion vector in the candidate motion vector set comprises:
the template matching module is configured to determine the reference block of the to-be-processed block in the first reference picture according to the first candidate motion vector;
and to search, at a target precision, positions near the determined reference block to obtain at least two candidate reference blocks, wherein each candidate reference block corresponds to one first reference motion vector, and the target precision is one of 4-pixel precision, 2-pixel precision, integer-pixel precision, half-pixel precision, 1/4-pixel precision, or 1/8-pixel precision.
38. The device according to any one of claims 27 to 37, characterized in that a first positional relationship exists between the at least one first neighboring reconstructed block and the reference block determined by the first reference motion vector, a second positional relationship exists between the second neighboring block and the to-be-processed block, and the first positional relationship is the same as the second positional relationship.
39. A device for generating a predicted motion vector, characterized in that the device is used for bi-directional prediction of a to-be-processed block, the bi-directional prediction comprises first-direction prediction and second-direction prediction, the first-direction prediction is prediction based on a first reference frame list, the second-direction prediction is prediction based on a second reference frame list, and the device comprises:
a first set generation module, configured to obtain a first prediction mode and generate a first candidate motion vector set, wherein the first candidate motion vector set is used to generate a first-direction predicted motion vector in the first-direction prediction;
a second set generation module, configured to obtain a second prediction mode and generate a second candidate motion vector set, wherein the second candidate motion vector set is used to generate a second-direction predicted motion vector in the second-direction prediction,
wherein, when the first prediction mode is the AMVP mode, the second prediction mode is the Merge mode, or, when the first prediction mode is the Merge mode, the second prediction mode is the AMVP mode.
40. The device according to claim 39, characterized in that, when the first prediction mode is the Merge mode, the first set generation module obtaining the first prediction mode and generating the first candidate motion vector set comprises:
the first set generation module is configured to generate the first candidate motion vector set by using the candidate motion vectors used by the Merge mode in the first-direction prediction.
41. The device according to claim 39, characterized in that, when the second prediction mode is the Merge mode, the second set generation module being configured to obtain the second prediction mode and generate the second candidate motion vector set comprises:
the second set generation module is configured to generate the second candidate motion vector set by using the candidate prediction vectors used by the Merge mode in the second-direction prediction.
42. The device according to claim 39, characterized in that the device is used for decoding the to-be-processed block; the device further comprises a parsing module, and the parsing module is configured to parse a bitstream to obtain first identification information, wherein the first identification information is used to indicate whether the first prediction mode is the Merge mode or the AMVP mode.
43. The device according to claim 42, characterized in that the first identification information being used to indicate whether the first prediction mode is the Merge mode or the AMVP mode comprises: the first identification information is used to indicate that the first prediction mode is the Merge mode, or, the first identification information is used to indicate index information of the first candidate motion vector set generated based on the Merge mode, or, the first identification information is used to indicate that the first prediction mode is the AMVP mode, or, the first identification information is used to indicate index information of the first candidate motion vector set generated based on the AMVP mode.
44. The device according to claim 42, characterized in that the parsing module is further configured to, when the first identification information is used to indicate that the first prediction mode is the Merge mode, use index information of a preset Merge mode.
45. The device according to claim 44, characterized in that the index information of the preset Merge mode is the first candidate motion vector of the Merge mode.
46. The device according to claim 42, characterized in that the parsing module is further configured to, after determining that the first prediction mode is the Merge mode, parse the bitstream to obtain second identification information, wherein the second identification information is used to indicate index information of the second prediction mode, and the second prediction mode is the AMVP mode.
47. The device according to claim 42, characterized in that the parsing module is further configured to, after determining that the first prediction mode is the AMVP mode, parse the bitstream to obtain second identification information, wherein the second identification information is used to indicate index information of the second prediction mode, and the second prediction mode is the Merge mode.
48. The device according to any one of claims 39 to 47, characterized in that the parsing module is further configured to, after determining that the first prediction mode is the AMVP mode, parse the bitstream to obtain a reference frame index and motion vector difference information of the first-direction prediction; or, after determining that the second prediction mode is the AMVP mode, parse the bitstream to obtain a reference frame index and motion vector difference information of the second-direction prediction.
CN201810188482.6A 2018-03-07 2018-03-07 Predicted motion vector generation method and relevant device Pending CN110248188A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810188482.6A CN110248188A (en) 2018-03-07 2018-03-07 Predicted motion vector generation method and relevant device
PCT/CN2019/070629 WO2019169949A1 (en) 2018-03-07 2019-01-07 Method for generating predicted motion vector and related apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810188482.6A CN110248188A (en) 2018-03-07 2018-03-07 Predicted motion vector generation method and relevant device

Publications (1)

Publication Number Publication Date
CN110248188A true CN110248188A (en) 2019-09-17

Family

ID=67845540

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810188482.6A Pending CN110248188A (en) 2018-03-07 2018-03-07 Predicted motion vector generation method and relevant device

Country Status (2)

Country Link
CN (1) CN110248188A (en)
WO (1) WO2019169949A1 (en)


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104363449B (en) * 2014-10-31 2017-10-10 华为技术有限公司 Image prediction method and relevant apparatus
CN107113424B (en) * 2014-11-18 2019-11-22 联发科技股份有限公司 With the Video coding and coding/decoding method of the block of inter-frame forecast mode coding

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101686393A (en) * 2008-09-28 2010-03-31 华为技术有限公司 Fast-motion searching method and fast-motion searching device applied to template matching
US20120106645A1 (en) * 2009-06-26 2012-05-03 Huawei Technologies Co., Ltd Method, apparatus and device for obtaining motion information of video images and template
WO2017048008A1 (en) * 2015-09-17 2017-03-23 엘지전자 주식회사 Inter-prediction method and apparatus in video coding system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JIANLE CHEN ET AL: "Algorithm Description of Joint Exploration Test Model 4", 《JOINT VIDEO EXPLORATION TEAM (JVET) OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11 4TH MEETING: CHENGDU, CN, JVET-D1001》 *
XU CHEN ET AL: "Decoder-Side Motion Vector Refinement Based on Bilateral Template Matching", 《JOINT VIDEO EXPLORATION TEAM (JVET)OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 114TH MEETING: CHENGDU, CN》 *
YONGBING LIN ET AL: "Enhanced Template Matching in FRUC Mode", 《JOINT VIDEO EXPLORATION TEAM (JVET)OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11 5TH MEETING: GENEVA, CH,JVET-E0035》 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111163322A (en) * 2020-01-08 2020-05-15 绍兴文理学院 Encoding and decoding method for mapping index based on historical motion vector
WO2022061573A1 (en) * 2020-09-23 2022-03-31 深圳市大疆创新科技有限公司 Motion search method, video coding device, and computer-readable storage medium
CN112702601A (en) * 2020-12-17 2021-04-23 北京达佳互联信息技术有限公司 Method and apparatus for determining motion vector for inter prediction
CN112702601B (en) * 2020-12-17 2023-03-10 北京达佳互联信息技术有限公司 Method and apparatus for determining motion vector for inter prediction
WO2023131045A1 (en) * 2022-01-05 2023-07-13 维沃移动通信有限公司 Inter-frame prediction method and device, and readable storage medium

Also Published As

Publication number Publication date
WO2019169949A1 (en) 2019-09-12

Similar Documents

Publication Publication Date Title
CN109479143B (en) Image coding and decoding method and device for inter-frame prediction
KR102343668B1 (en) Video encoding method, decoding method and terminal
CN104054350B (en) Method and device for video encoding
TWI568271B (en) Improved inter-layer prediction for extended spatial scalability in video coding
CN110248188A (en) Predicted motion vector generation method and relevant device
CN103297771B (en) Multiple view video coding using the disparity estimation based on depth information
CN109963155A (en) Prediction technique, device and the codec of the motion information of image block
CN104396244A (en) An apparatus, a method and a computer program for video coding and decoding
CN103999468A (en) Method for video coding and an apparatus
CN110213590A (en) Time-domain motion vector acquisition, inter-prediction, Video coding method and apparatus
JP7279154B2 (en) Motion vector prediction method and apparatus based on affine motion model
CN104205819A (en) Method and apparatus for video coding
CN105075265A (en) Disparity vector derivation in 3D video coding for skip and direct modes
US11412210B2 (en) Inter prediction method and apparatus for video coding
CN110545425B (en) Inter-frame prediction method, terminal equipment and computer storage medium
AU2023214364B2 (en) Video decoding method, and video decoder
CN110166778A (en) Video encoding/decoding method, Video Decoder and electronic equipment
CN109996080A (en) Prediction technique, device and the codec of image
CN101682787A (en) Spatially enhanced transform coding
CN103416065A (en) Methods, apparatuses and computer programs for video coding
CN109905714A (en) Inter-frame prediction method, device and terminal device
CN106105190A (en) The senior residual prediction of simplification for 3D HEVC
CN109922340A (en) Image coding/decoding method, device, system and storage medium
CN109756737A (en) Image prediction method and apparatus
CN109756739A (en) Image prediction method and apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190917