CN110024394A - Method and apparatus for encoding/decoding an image, and recording medium storing a bitstream - Google Patents
- Publication number
- CN110024394A (Application No. CN201780073517.5A)
- Authority
- CN
- China
- Prior art keywords
- block
- current
- prediction
- prediction block
- sub
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 103
- 230000033001 motion Effects 0.000 claims description 852
- 239000013598 vector Substances 0.000 claims description 377
- 238000005070 sampling Methods 0.000 claims description 97
- 238000009795 derivation Methods 0.000 claims description 19
- 238000013139 quantization Methods 0.000 description 58
- 230000009466 transformation Effects 0.000 description 46
- 238000010586 diagram Methods 0.000 description 39
- 238000012545 processing Methods 0.000 description 22
- 230000002123 temporal effect Effects 0.000 description 22
- 239000011159 matrix material Substances 0.000 description 20
- 230000008859 change Effects 0.000 description 13
- 238000005192 partition Methods 0.000 description 12
- 230000003044 adaptive effect Effects 0.000 description 10
- 230000006978 adaptation Effects 0.000 description 7
- 238000010276 construction Methods 0.000 description 7
- 238000001914 filtration Methods 0.000 description 6
- 230000006870 function Effects 0.000 description 5
- 230000004048 modification Effects 0.000 description 5
- 238000012986 modification Methods 0.000 description 5
- 230000008520 organization Effects 0.000 description 5
- 230000002457 bidirectional effect Effects 0.000 description 4
- 230000006835 compression Effects 0.000 description 4
- 238000007906 compression Methods 0.000 description 4
- 239000000470 constituent Substances 0.000 description 4
- 230000009471 action Effects 0.000 description 3
- 230000005540 biological transmission Effects 0.000 description 3
- 238000005286 illumination Methods 0.000 description 3
- 230000015654 memory Effects 0.000 description 3
- 230000008054 signal transmission Effects 0.000 description 3
- 230000008901 benefit Effects 0.000 description 2
- 230000002708 enhancing effect Effects 0.000 description 2
- 238000002360 preparation method Methods 0.000 description 2
- 238000012163 sequencing technique Methods 0.000 description 2
- 230000000712 assembly Effects 0.000 description 1
- 238000000429 assembly Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 239000000284 extract Substances 0.000 description 1
- 230000007774 longterm Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 230000008569 process Effects 0.000 description 1
- 238000011002 quantification Methods 0.000 description 1
- 230000008439 repair process Effects 0.000 description 1
- 230000004044 response Effects 0.000 description 1
- 238000000638 solvent extraction Methods 0.000 description 1
- 238000010408 sweeping Methods 0.000 description 1
- 238000013519 translation Methods 0.000 description 1
Classifications
- H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION; H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
- H04N19/109—Selection of coding mode or of prediction mode among a plurality of temporal predictive coding modes
- H04N19/117—Filters, e.g. for pre-processing or post-processing
- H04N19/119—Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
- H04N19/124—Quantisation
- H04N19/13—Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
- H04N19/14—Coding unit complexity, e.g. amount of activity or edge presence estimation
- H04N19/176—Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
- H04N19/51—Motion estimation or motion compensation
- H04N19/513—Processing of motion vectors
- H04N19/593—Predictive coding involving spatial prediction techniques
- H04N19/82—Details of filtering operations specially adapted for video compression, involving filtering within a prediction loop
- H04N19/86—Pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
- H04N19/91—Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
- H04N19/96—Tree coding, e.g. quad-tree coding
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The present invention relates to a method of encoding and decoding an image. A method of decoding an image may comprise the following steps: generating a first prediction block of a current block using motion information of the current block; determining, among the motion information of neighbouring sub-blocks, at least one piece of motion information usable for generating a second prediction block of a current sub-block; generating at least one second prediction block of the current sub-block using the at least one piece of determined motion information; and generating a final prediction block based on a weighted sum of the first prediction block of the current block and the at least one second prediction block of the current sub-block.
Description
Technical field
The present invention relates to a method and apparatus for encoding/decoding an image and to a recording medium storing a bitstream. More particularly, it relates to a method and apparatus for encoding/decoding an image using overlapped block motion compensation.
Background technique
Recently, demand for high-resolution, high-quality images, such as high-definition (HD) images and ultra-high-definition (UHD) images, has increased in various application fields. However, higher-resolution, higher-quality image data carries a larger data volume than conventional image data. Therefore, when image data is transmitted over a medium such as a conventional wired or wireless broadband network, or stored on a conventional storage medium, transmission and storage costs increase. To solve these problems, which arise as the resolution and quality of image data improve, efficient image encoding/decoding techniques are needed.
Image compression technology includes various techniques: inter-prediction, which predicts pixel values in the current picture from a previous or subsequent picture; intra-prediction, which predicts pixel values in the current picture using pixel information within the current picture; entropy coding, which assigns short codes to frequently occurring values and long codes to rarely occurring values; and so on. With such compression technology, image data can be compressed efficiently and then transmitted or stored.
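The entropy-coding idea above, short codes for frequent values and long codes for rare ones, can be illustrated with a toy sketch (the code construction here is illustrative only; real codecs use Huffman coding or context-adaptive arithmetic coding such as CABAC):

```python
from collections import Counter

def assign_codes(symbols):
    """Toy variable-length code: the more frequent a value, the shorter its code.
    Illustration only -- real codecs use Huffman or arithmetic coding (CABAC)."""
    ranked = [sym for sym, _ in Counter(symbols).most_common()]
    # Unary-style codewords: rank zeros followed by a terminating one.
    return {sym: "0" * rank + "1" for rank, sym in enumerate(ranked)}

codes = assign_codes("aaaabbbc")
print(codes["a"], codes["c"])  # 1 001  (frequent 'a' gets the shortest code)
```

The total bit count shrinks because the shortest codeword is spent on the value that occurs most often.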
A shortcoming of conventional image encoding/decoding methods and apparatuses is that computational complexity increases while computing the weighted sum for overlapped block motion compensation and deriving the motion information of neighbouring blocks.
Summary of the invention
Technical problem
Accordingly, the present invention has been made keeping in mind the above problems occurring in the prior art, and an object of the present invention is to provide a method and apparatus for performing overlapped block motion compensation while reducing the computational complexity of computing the weighted sum and deriving the motion information of neighbouring blocks.
Solution
In order to achieve the above object, the present invention provides a method of decoding an image, the method comprising: generating a first prediction block of a current block using motion information of the current block; determining, among the motion information of at least one neighbouring sub-block of a current sub-block, motion information for generating a second prediction block; generating at least one second prediction block of the current sub-block using the determined motion information; and generating a final prediction block based on a weighted sum of the first prediction block of the current block and the at least one second prediction block of the current sub-block.
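The decoding steps above can be sketched in Python (a minimal illustration under assumed weights; the patent leaves the exact weight factors and sub-block geometry open):

```python
import numpy as np

def obmc_blend(p1, p2_list, w_cur=0.75):
    """Overlapped-block blending: fold each neighbour-motion prediction (p2)
    into the current block's prediction (p1) by a weighted sum.
    w_cur = 0.75 is an illustrative weight, not a value fixed by the patent."""
    final = p1.astype(np.float64)
    for p2 in p2_list:
        final = w_cur * final + (1.0 - w_cur) * np.asarray(p2, dtype=np.float64)
    return np.rint(final).astype(p1.dtype)

p1 = np.full((4, 4), 100, dtype=np.uint8)  # prediction from the current block's motion
p2 = np.full((4, 4), 80, dtype=np.uint8)   # prediction from a neighbouring sub-block's motion
print(obmc_blend(p1, [p2])[0, 0])  # 0.75*100 + 0.25*80 = 95
```

Each second prediction block pulls the final prediction toward what the neighbour's motion would have produced, smoothing discontinuities at sub-block boundaries.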
In the image decoding method, in the step of determining the motion information for generating the second prediction block, the motion information may be determined based on at least one of the magnitude and the direction of the motion vector of the neighbouring sub-block of the current sub-block.

In the image decoding method, in the step of determining the motion information for generating the second prediction block, the motion information may be determined based on the picture order count (POC) of the reference picture of the neighbouring sub-block and the POC of the reference picture of the current block.

In the image decoding method, in the step of determining the motion information for generating the second prediction block, the motion information of the neighbouring sub-block may be determined as motion information usable for generating the second prediction block only when the POC of the reference picture of the neighbouring sub-block is equal to the POC of the reference picture of the current block.
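The POC-equality condition above can be sketched as a simple eligibility filter (the field names such as `ref_poc` and `mv` are hypothetical, not from the patent):

```python
def usable_motion(neighbours, current_ref_poc):
    """Keep only neighbouring sub-blocks whose reference-picture POC equals
    the current block's reference-picture POC, so their motion can be reused
    directly without scaling. Field names are hypothetical."""
    return [n for n in neighbours if n["ref_poc"] == current_ref_poc]

neighbours = [
    {"mv": (1, 0), "ref_poc": 8},  # same reference picture -> usable
    {"mv": (0, 2), "ref_poc": 4},  # different reference picture -> skipped
]
print(len(usable_motion(neighbours, 8)))  # 1
```

Restricting the candidate set this way avoids the motion-vector scaling that would otherwise be needed, which is one source of the complexity reduction the invention targets.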
In the image decoding method, the current sub-block may have at least one of a square shape and a rectangular shape.

In the image decoding method, in the step of generating the at least one second prediction block, the at least one second prediction block may be generated using the motion information of at least one neighbouring sub-block of the current sub-block only when the current block has neither a motion vector derivation mode nor an affine motion compensation mode.

In the image decoding method, in the step of generating the final prediction block, when the current sub-block is included in a boundary region of the current block, the final prediction block may be generated by obtaining a weighted sum of each sample in the partial rows or partial columns of the first prediction block adjacent to the boundary and each sample in the partial rows or partial columns of the second prediction block adjacent to the boundary.
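A sketch of blending only the partial rows adjacent to the boundary, leaving inner samples untouched (the two-row choice and the 3/4, 7/8 weights are assumptions echoing common OBMC practice, not values fixed by the patent):

```python
import numpy as np

def blend_boundary_rows(p1, p2, n_rows=2, weights=(0.75, 0.875)):
    """Blend only the n_rows rows of the first prediction block adjacent to
    the block boundary with the co-located rows of the second prediction
    block; inner samples keep the first prediction unchanged."""
    out = p1.astype(np.float64)
    for r in range(n_rows):
        w = weights[r]  # weight given to the first-prediction sample in row r
        out[r, :] = w * p1[r, :] + (1.0 - w) * p2[r, :]
    return out

p1 = np.full((4, 4), 100.0)
p2 = np.full((4, 4), 80.0)
out = blend_boundary_rows(p1, p2)
print(out[0, 0], out[1, 0], out[3, 0])  # 95.0 97.5 100.0
```

The weight assigned to the neighbour's prediction decays with distance from the boundary, so the correction is strongest exactly where blocking artifacts appear.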
In the image decoding method, the samples in the partial rows or partial columns of the first prediction block adjacent to the boundary and the samples in the partial rows or partial columns of the second prediction block adjacent to the boundary may be determined based on at least one of the block size of the current sub-block, the magnitude and direction of the motion vector of the current sub-block, the inter-prediction indicator of the current block, and the POC of the reference picture of the current block.

In the image decoding method, in the step of generating the final prediction block, the weighted sum of the first prediction block and the second prediction block may be obtained by applying different weight factors to the samples in the first prediction block and the second prediction block according to at least one of the magnitude and the direction of the motion vector of the current sub-block.
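Choosing a different weight factor from the motion-vector magnitude might look like this (the threshold and the weight values are hypothetical, shown only to make the idea concrete):

```python
import math

def neighbour_weight(mv, small_mv_weight=0.25, large_mv_weight=0.125):
    """Pick the weight applied to the second (neighbour-motion) prediction
    from the neighbouring motion vector's magnitude: a large, likely less
    correlated MV contributes less. Threshold and weights are hypothetical."""
    magnitude = math.hypot(mv[0], mv[1])
    return small_mv_weight if magnitude <= 4.0 else large_mv_weight

print(neighbour_weight((1, 1)))  # 0.25
print(neighbour_weight((8, 6)))  # 0.125
```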
The present invention also provides a method of encoding an image, the method comprising: generating a first prediction block of a current block using motion information of the current block; determining, among the motion information of at least one neighbouring sub-block of a current sub-block, motion information for generating a second prediction block; generating at least one second prediction block of the current sub-block using the determined motion information; and generating a final prediction block based on a weighted sum of the first prediction block of the current block and the at least one second prediction block of the current sub-block.

In the image encoding method, in the step of determining the motion information for generating the second prediction block, the motion information may be determined based on at least one of the magnitude and the direction of the motion vector of the neighbouring sub-block.

In the image encoding method, in the step of determining the motion information for generating the second prediction block, the motion information may be determined based on the POC of the reference picture of the neighbouring sub-block and the POC of the reference picture of the current block.

In the image encoding method, in the step of determining the motion information for generating the second prediction block, the motion information of the neighbouring sub-block may be determined as motion information usable for generating the second prediction block only when the POC of the reference picture of the neighbouring sub-block is equal to the POC of the reference picture of the current block.

In the image encoding method, the current sub-block may have at least one of a square shape and a rectangular shape.

In the image encoding method, in the step of generating the at least one second prediction block, the at least one second prediction block may be generated using the motion information of at least one neighbouring sub-block only when the current block has neither a motion vector derivation mode nor an affine motion compensation mode.

In the image encoding method, in the step of generating the final prediction block, when the current sub-block is included in a boundary region of the current block, the final prediction block may be generated based on a weighted sum of the samples in the partial rows or partial columns of the first prediction block adjacent to the boundary and the samples in the partial rows or partial columns of the second prediction block adjacent to the boundary.

In the image encoding method, the samples in the partial rows or partial columns of the first prediction block adjacent to the boundary and the samples in the partial rows or partial columns of the second prediction block adjacent to the boundary may be determined based on at least one of the block size of the current sub-block, the magnitude and direction of the motion vector of the current sub-block, the inter-prediction indicator of the current block, and the POC of the reference picture of the current block.

In the image encoding method, in the step of generating the final prediction block, the weighted sum may be obtained by applying different weight values to the samples in the first prediction block and the second prediction block according to at least one of the magnitude and the direction of the motion vector of the current sub-block.
The present invention further provides a recording medium storing a bitstream generated by an image encoding method, the image encoding method comprising: generating a first prediction block of a current block using motion information of the current block; determining, among the motion information of at least one neighbouring sub-block of a current sub-block, motion information for generating a second prediction block; generating at least one second prediction block of the current sub-block using the determined motion information; and generating a final prediction block based on a weighted sum of the first prediction block of the current block and the at least one second prediction block of the current sub-block.
Advantageous effects
According to the present invention, a method and apparatus for encoding/decoding an image with improved compression efficiency can be provided.
According to the present invention, image encoding/decoding efficiency can be improved.
According to the present invention, the computational complexity of an image encoder and an image decoder can be reduced.
Detailed description of the invention
Fig. 1 is a block diagram showing the configuration of an encoding apparatus according to an embodiment of the present invention;
Fig. 2 is a block diagram showing the configuration of a decoding apparatus according to an embodiment of the present invention;
Fig. 3 is a diagram schematically showing a partition structure of an image used when encoding or decoding the image;
Fig. 4 is a diagram showing an embodiment of an inter-prediction process;
Fig. 5 is a flowchart showing an image encoding method according to an embodiment of the present invention;
Fig. 6 is a flowchart showing an image decoding method according to an embodiment of the present invention;
Fig. 7 is a flowchart showing an image encoding method according to another embodiment of the present invention;
Fig. 8 is a flowchart showing an image decoding method according to another embodiment of the present invention;
Fig. 9 is a diagram showing an example of deriving spatial motion vector candidates of a current block;
Fig. 10 is a diagram showing an example of deriving a temporal motion vector candidate of a current block;
Fig. 11 is a diagram showing an example in which spatial merge candidates are added to a merge candidate list;
Fig. 12 is a diagram showing an example in which a temporal merge candidate is added to a merge candidate list;
Fig. 13 is a diagram showing an example of performing overlapped block motion compensation on a sub-block basis;
Fig. 14 is a diagram showing an example of performing overlapped block motion compensation using the motion information of sub-blocks of a co-located block;
Fig. 15 is a diagram showing an example of performing overlapped block motion compensation using the motion information of blocks adjacent to the boundary of a reference block;
Fig. 16 is a diagram showing an example of performing overlapped block motion compensation on a sub-block-group basis;
Fig. 17 is a diagram showing an example of the number of pieces of motion information used for overlapped block motion compensation;
Fig. 18 and Fig. 19 are diagrams showing the order of deriving motion information for generating a second prediction block;
Fig. 20 is a diagram showing an example of determining whether the motion information of a neighbouring sub-block is usable for generating a second prediction block by comparing the POC of the reference picture of the current sub-block with the POC of the reference picture of the neighbouring sub-block;
Fig. 21 is a diagram showing an embodiment in which weight factors are used when computing the weighted sum of a first prediction block and a second prediction block;
Fig. 22 is a diagram showing an embodiment in which different weight factors are applied to samples according to their positions within a block when computing the weighted sum of a first prediction block and a second prediction block;
Fig. 23 is a diagram showing an embodiment in which the weighted sum of a first prediction block and a second prediction block is computed cumulatively in a predetermined order during overlapped block motion compensation;
Fig. 24 is a diagram showing an embodiment of computing the weighted sum of a first prediction block and a second prediction block during overlapped block motion compensation;
Fig. 25 is a flowchart showing an image decoding method according to another embodiment of the present invention.
Specific embodiment
A variety of modifications may be made to the present invention, and there are various embodiments of the present invention, examples of which will now be provided with reference to the drawings and described in detail. However, the present invention is not limited thereto, and the exemplary embodiments should be construed as including all modifications, equivalents, or substitutes within the technical concept and scope of the present invention. Like reference numerals refer to the same or similar functions throughout. In the drawings, the shapes and sizes of elements may be exaggerated for clarity. In the following detailed description of the invention, reference is made to the accompanying drawings, which show, by way of illustration, specific embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the disclosure. It should be understood that the various embodiments of the disclosure, although different, are not necessarily mutually exclusive. For example, a particular feature, structure, or characteristic described herein in connection with one embodiment may be implemented in other embodiments without departing from the spirit and scope of the disclosure. Moreover, it should be understood that the position or arrangement of individual elements within each disclosed embodiment may be modified without departing from the spirit and scope of the disclosure. The following detailed description is therefore not to be taken in a limiting sense, and the scope of the disclosure is defined only by the appended claims, appropriately interpreted, along with the full range of equivalents to which the claims are entitled.
The terms "first", "second", etc. used in this description may be used to describe various components, but the components are not to be construed as being limited by these terms. The terms are used only to distinguish one component from another. For example, without departing from the scope of the present invention, the "first" component may be named the "second" component, and the "second" component may likewise be named the "first" component. The term "and/or" includes a combination of multiple items or any one of the multiple items.
It will be understood that, in this specification, when an element is referred to simply as being "connected to" or "coupled to" another element, rather than "directly connected to" or "directly coupled to" it, the element may be directly connected or coupled to the other element, or connected or coupled to the other element with intervening elements therebetween. In contrast, it will be understood that when an element is referred to as being "directly coupled" or "directly connected" to another element, there are no intervening elements present.
Furthermore, the constituent parts shown in the embodiments of the present invention are shown independently so as to present characteristic functions different from each other. This does not mean that each constituent part is formed as a separate unit of hardware or software. In other words, each constituent part is listed separately for convenience. Accordingly, at least two of the constituent parts may be combined to form one constituent part, or one constituent part may be divided into a plurality of constituent parts, each of which performs a function. An embodiment in which the constituent parts are combined and an embodiment in which one constituent part is divided are both included within the scope of the present invention, provided they do not depart from the essence of the invention.
The terms used in this specification are intended merely to describe particular embodiments and are not intended to limit the invention. An expression used in the singular encompasses the plural unless it has a clearly different meaning in the context. In this specification, it is to be understood that terms such as "including" and "having" are intended to indicate the presence of the features, numbers, steps, actions, elements, parts, or combinations thereof disclosed in the specification, and are not intended to preclude the possibility that one or more other features, numbers, steps, actions, elements, parts, or combinations thereof may be present or added. In other words, when a particular element is described as being "included", elements other than that element are not excluded; additional elements may be included in the embodiments of the present invention or within the scope of the present invention.
In addition, some constituent parts may not be indispensable constituent parts that perform essential functions of the present invention, but may instead be optional constituent parts that merely improve its performance. The present invention may be implemented by including only the constituent parts that are indispensable for implementing the essence of the invention, excluding those used only for improving performance. A structure that includes only the indispensable constituent parts and excludes the optional constituent parts used only for improving performance is likewise included within the scope of the present invention.
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings. In describing exemplary embodiments of the present invention, well-known functions or constructions will not be described in detail, since they could unnecessarily obscure the present invention. The same constituent elements in the drawings are denoted by the same reference numerals, and repeated descriptions of like elements are omitted.
In addition, hereinafter, an image may mean a picture constituting a video, or may mean the video itself. For example, "encoding or decoding an image, or both" may mean "encoding or decoding a video, or both", and may also mean "encoding or decoding one image among the images of a video, or both". Here, a picture and an image may have the same meaning.
Description of terms
Encoder: means an apparatus performing encoding.
Decoder: means an apparatus performing decoding.
Block: is an M × N array of samples. Here, M and N are positive integers, and a block may mean a two-dimensional array of samples. A block may refer to a unit. A current block may mean an encoding target block, which becomes the target when encoding, or a decoding target block, which becomes the target when decoding. In addition, the current block may be at least one of a coding block, a prediction block, a residual block, and a transform block.
Sample: is the basic unit constituting a block. A sample may be expressed as a value from 0 to 2^Bd − 1 according to the bit depth (Bd). In the present invention, a sample is used with the same meaning as a pixel.
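As a small illustration of the definition above (a sketch for the reader, not part of the disclosed apparatus): for a bit depth Bd, a sample takes integer values in the range [0, 2^Bd − 1].

```python
def sample_range(bit_depth):
    """Return the (min, max) value a sample may take for bit depth Bd."""
    return 0, (1 << bit_depth) - 1

# 8-bit video: samples span 0..255; 10-bit video: 0..1023.
assert sample_range(8) == (0, 255)
assert sample_range(10) == (0, 1023)
```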
Unit: refers to a unit of encoding and decoding. When encoding and decoding an image, a unit may be a region generated by partitioning a single image. In addition, a unit may mean a sub-divided unit when a single image is partitioned into sub-divided units during encoding or decoding. When encoding and decoding an image, predetermined processing may be performed for each unit. A single unit may be partitioned into sub-units smaller in size than the unit. Depending on its function, a unit may mean a block, a macroblock, a coding tree unit, a coding tree block, a coding unit, a coding block, a prediction unit, a prediction block, a residual unit, a residual block, a transform unit, a transform block, and the like. In addition, in order to distinguish a unit from a block, a unit may include a luma component block, a chroma component block associated with the luma component block, and a syntax element for each color component block. A unit may have various sizes and shapes; in particular, the shape of the unit may be a two-dimensional geometric figure, such as a rectangular shape, a square shape, a trapezoidal shape, a triangular shape, a pentagonal shape, and the like. In addition, unit information may include at least one of a unit type (indicating a coding unit, a prediction unit, a transform unit, etc.), a unit size, a unit depth, an order in which units are encoded and decoded, and the like.
Coding tree unit: is composed of a single coding tree block of the luma component Y and two coding tree blocks associated with the chroma components Cb and Cr. In addition, a coding tree unit may mean the blocks together with a syntax element for each block. Each coding tree unit may be partitioned by using at least one of a quad-tree partitioning method and a binary-tree partitioning method to construct lower-level units, such as a coding unit, a prediction unit, a transform unit, and the like. "Coding tree unit" is used as a term designating a block of pixels that serves as a processing unit when encoding/decoding an image that is an input image.
Coding tree block: is used as a term designating any one of a Y coding tree block, a Cb coding tree block, and a Cr coding tree block.
Neighbor block: means a block adjacent to the current block. The block adjacent to the current block may mean a block in contact with a boundary of the current block, or a block located within a predetermined distance from the current block. A neighbor block may mean a block adjacent to a vertex of the current block. Here, the block adjacent to a vertex of the current block may mean a block vertically adjacent to a neighbor block that is horizontally adjacent to the current block, or a block horizontally adjacent to a neighbor block that is vertically adjacent to the current block.
Reconstructed neighbor block: means a neighbor block that is adjacent to the current block and has already been encoded or decoded temporally/spatially. Here, a reconstructed neighbor block may also mean a reconstructed neighbor unit. A reconstructed spatial neighbor block may be a block within the current picture that has already been reconstructed through encoding or decoding, or both. A reconstructed temporal neighbor block is a block within a reference picture located at the same position as the current block of the current picture, or a neighbor block of that block.
Unit depth: means the degree to which a unit is partitioned. In a tree structure, the root node may be the highest node and a leaf node may be the lowest node. In addition, when a unit is represented as a tree structure, the level at which the unit is present may mean the unit depth.
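The relation between unit depth and unit size can be sketched as follows, assuming quad-tree partitioning in which each partitioning step halves the width and height (the 64×64 starting size is an illustrative assumption, not fixed by the text):

```python
def unit_size_at_depth(root_size, depth):
    """In a quad-tree, each partitioning step halves width and height,
    so a unit at the given depth has size root_size >> depth."""
    return root_size >> depth

# A 64x64 root unit (depth 0) yields 32x32 units at depth 1
# and 8x8 units at depth 3.
assert unit_size_at_depth(64, 0) == 64
assert unit_size_at_depth(64, 1) == 32
assert unit_size_at_depth(64, 3) == 8
```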
Bitstream: means a stream of bits including encoded image information.
Parameter set: corresponds to header information within the structure of the bitstream. At least one of a video parameter set, a sequence parameter set, a picture parameter set, and an adaptation parameter set may be included in a parameter set. In addition, a parameter set may include a slice header and tile header information.
Parsing: may mean determining the value of a syntax element by performing entropy decoding, or may mean the entropy decoding itself.
Symbol: may mean at least one of a syntax element, a coding parameter, and a transform coefficient value of an encoding/decoding target unit. In addition, a symbol may mean an entropy encoding target or an entropy decoding result.
Prediction unit: means a basic unit when performing prediction such as inter prediction, intra prediction, inter compensation, intra compensation, and motion compensation. A single prediction unit may be partitioned into a plurality of partitions having smaller sizes, or may be partitioned into lower-level prediction units.
Prediction unit partition: means a shape obtained by partitioning a prediction unit.
Reference picture list: means a list including one or more reference pictures used for inter-picture prediction or motion compensation. LC (List Combined), L0 (List 0), L1 (List 1), L2 (List 2), L3 (List 3), and the like are types of reference picture lists. One or more reference picture lists may be used for inter-picture prediction.
Inter-picture prediction indicator: may mean the inter-picture prediction direction (uni-directional prediction, bi-directional prediction, etc.) of the current block. Alternatively, the inter-picture prediction indicator may mean the number of reference pictures used to generate the prediction block of the current block. Further alternatively, the inter-picture prediction indicator may mean the number of prediction blocks used when performing inter-picture prediction or motion compensation for the current block.
Reference picture index: means an index indicating a specific reference picture in a reference picture list.
Reference picture: may mean a picture referenced by a specific block for inter-picture prediction or motion compensation.
Motion vector: is a two-dimensional vector used for inter-picture prediction or motion compensation, and may mean an offset between a reference picture and an encoding/decoding target picture. For example, (mvX, mvY) may represent a motion vector, where mvX may represent the horizontal component and mvY may represent the vertical component.
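The role of a motion vector as a two-dimensional offset can be sketched as follows. This is a deliberately simplified model (integer-valued vectors, a list-of-lists "picture", no interpolation or boundary handling), not the disclosed apparatus:

```python
def fetch_prediction_block(ref_picture, x, y, width, height, mv):
    """Copy a width x height block from ref_picture at the current block's
    position (x, y) displaced by the motion vector (mvX, mvY)."""
    mv_x, mv_y = mv
    return [row[x + mv_x : x + mv_x + width]
            for row in ref_picture[y + mv_y : y + mv_y + height]]

# Reference picture where sample (r, c) holds the value r*10 + c.
ref = [[r * 10 + c for c in range(8)] for r in range(8)]
# Current 2x2 block at (4, 4) with motion vector (-2, -1):
# the prediction is fetched from position (2, 3) of the reference.
pred = fetch_prediction_block(ref, 4, 4, 2, 2, (-2, -1))
assert pred == [[32, 33], [42, 43]]
```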
Motion vector candidate: may mean a block that becomes a prediction candidate when predicting a motion vector, or may mean the motion vector of that block. Motion vector candidates may be listed in a motion vector candidate list.
Motion vector candidate list: may mean a list composed of motion vector candidates.
Motion vector candidate index: may mean an indicator indicating a motion vector candidate in the motion vector candidate list. The motion vector candidate index is also referred to as the index of a motion vector predictor.
Motion information: may mean information including a motion vector, a reference picture index, an inter-picture prediction indicator, and at least any one of the following items: reference picture list information, a reference picture, a motion vector candidate, a motion vector candidate index, a merge candidate, and a merge index.
Merge candidate list: means a list composed of merge candidates.
Merge candidate: means a spatial merge candidate, a temporal merge candidate, a combined merge candidate, a combined bi-prediction merge candidate, a zero merge candidate, and the like. A merge candidate may have motion information such as an inter-picture prediction indicator, a reference picture index for each list, and a motion vector.
Merge index: means information indicating a merge candidate in the merge candidate list. The merge index may indicate the block, among reconstructed blocks spatially and/or temporally adjacent to the current block, from which the merge candidate was derived. The merge index may indicate at least one piece of the motion information possessed by the merge candidate.
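The relationship between the terms above can be sketched as a toy merge candidate list builder: collect motion information from neighbor blocks, prune duplicates, and pad with zero merge candidates. This is a simplified illustration under assumed data shapes (a candidate as a ((mvX, mvY), ref_idx) pair); the actual candidate ordering and pruning rules are defined elsewhere in the specification:

```python
def build_merge_candidate_list(neighbor_motion, max_candidates=5):
    """Collect motion information from reconstructed neighbor blocks,
    prune duplicates, then pad with zero merge candidates."""
    candidates = []
    for cand in neighbor_motion:
        if cand is not None and cand not in candidates:
            candidates.append(cand)
        if len(candidates) == max_candidates:
            return candidates
    while len(candidates) < max_candidates:
        candidates.append(((0, 0), 0))  # zero merge candidate
    return candidates

# Two identical spatial neighbors, one unavailable, one distinct neighbor.
spatial = [((1, 2), 0), ((1, 2), 0), None, ((-3, 0), 1)]
cl = build_merge_candidate_list(spatial)
assert cl[0] == ((1, 2), 0) and cl[1] == ((-3, 0), 1)
assert len(cl) == 5 and cl[-1] == ((0, 0), 0)
```

A merge index signaled in the bitstream would then simply select one entry of such a list, so the decoder reuses that candidate's motion information for the current block.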
Transform unit: means a basic unit when performing encoding/decoding of a residual signal, such as transform, inverse transform, quantization, inverse quantization, and transform coefficient encoding/decoding. A single transform unit may be partitioned into a plurality of transform units having smaller sizes.
Scaling: means a process of multiplying a transform coefficient level by a factor. A transform coefficient may be generated by scaling a transform coefficient level. Scaling may also be referred to as dequantization (inverse quantization).
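Scaling as defined above, multiplying each transform coefficient level by a factor, can be sketched in a few lines (the factor value 6 below is an arbitrary illustration, not a normative step size):

```python
def scale(levels, factor):
    """Dequantization by scaling: multiply each transform coefficient
    level by the scaling factor to recover approximate coefficients."""
    return [level * factor for level in levels]

quantized_levels = [4, -2, 1, 0]
assert scale(quantized_levels, 6) == [24, -12, 6, 0]
```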
Quantization parameter: may mean a value used when generating a transform coefficient level for a transform coefficient during quantization. The quantization parameter may also mean a value used when generating a transform coefficient by scaling a transform coefficient level during dequantization. The quantization parameter may be a value mapped to a quantization step size.
Delta quantization parameter: means the difference between the quantization parameter of an encoding/decoding target unit and a predicted quantization parameter.
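The delta quantization parameter definition amounts to a simple predict-and-signal scheme, sketched below (the QP values are illustrative only):

```python
def encode_delta_qp(qp, predicted_qp):
    """Delta QP placed in the bitstream: current QP minus predicted QP."""
    return qp - predicted_qp

def decode_qp(predicted_qp, delta_qp):
    """Decoder recovers the unit's QP from the prediction plus the delta."""
    return predicted_qp + delta_qp

# Encoder uses QP 30 while the prediction is 26: only delta 4 is signaled.
delta = encode_delta_qp(30, 26)
assert delta == 4
assert decode_qp(26, delta) == 30
```

Signaling only the small delta rather than the full parameter is what makes this worthwhile: a short code suffices when the prediction is usually close.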
Scan: means a method of ordering the coefficients within a block or matrix. For example, an operation of changing a two-dimensional array of coefficients into a one-dimensional array may be referred to as a scan, and an operation of changing a one-dimensional array of coefficients into a two-dimensional array may be referred to as a scan or an inverse scan.
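The scan/inverse-scan pair can be sketched with the simplest possible ordering, a raster scan. Real codecs use zig-zag, horizontal, vertical, or diagonal orders; this is illustrative only:

```python
def raster_scan(block):
    """Scan: change a 2-D block of coefficients into a 1-D list."""
    return [c for row in block for c in row]

def inverse_raster_scan(coeffs, width):
    """Inverse scan: change a 1-D coefficient list back into a 2-D block."""
    return [coeffs[i:i + width] for i in range(0, len(coeffs), width)]

block = [[9, 8], [7, 6]]
assert raster_scan(block) == [9, 8, 7, 6]
assert inverse_raster_scan([9, 8, 7, 6], 2) == block
```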
Transform coefficient: may mean a coefficient value generated after performing a transform in the encoder. A transform coefficient may also mean a coefficient value generated after at least one of entropy decoding and dequantization is performed in the decoder. A quantized level, or a quantized transform coefficient level, obtained by quantizing a transform coefficient or a residual signal may also fall within the meaning of a transform coefficient.
Quantized level: means a value generated by quantizing a transform coefficient or a residual signal in the encoder. Alternatively, the quantized level may mean a value that is the target of dequantization before undergoing dequantization in the decoder. Similarly, a quantized transform coefficient level, which is the result of transform and quantization, may also fall within the meaning of a quantized level.
Non-zero transform coefficient: means a transform coefficient whose value is not 0, or a transform coefficient level whose value is not 0.
Quantization matrix: means a matrix used in a quantization process or a dequantization process in order to improve the subjective or objective quality of an image. A quantization matrix is also referred to as a scaling list.
Quantization matrix coefficient: means each element within a quantization matrix. A quantization matrix coefficient is also referred to as a matrix coefficient.
Default matrix: means a predetermined quantization matrix defined in advance in the encoder or the decoder.
Non-default matrix: means a quantization matrix that is not defined in advance in the encoder or the decoder but is signaled by a user.
Fig. 1 is a block diagram showing the construction of an encoding apparatus according to an embodiment to which the present invention is applied.
The encoding apparatus 100 may be an encoder, a video encoding apparatus, or an image encoding apparatus. A video may include one or more images. The encoding apparatus 100 may sequentially encode the one or more images.
Referring to Fig. 1, the encoding apparatus 100 may include a motion prediction unit 111, a motion compensation unit 112, an intra prediction unit 120, a switch 115, a subtractor 125, a transform unit 130, a quantization unit 140, an entropy encoding unit 150, an inverse quantization unit 160, an inverse transform unit 170, an adder 175, a filter unit 180, and a reference picture buffer 190.
The encoding apparatus 100 may perform encoding on an input image by using an intra mode or an inter mode, or both. In addition, the encoding apparatus 100 may generate a bitstream by encoding the input image, and may output the generated bitstream. The generated bitstream may be stored in a computer-readable recording medium, or may be streamed over a wired/wireless transmission medium. When the intra mode is used as the prediction mode, the switch 115 may switch to intra. Alternatively, when the inter mode is used as the prediction mode, the switch 115 may switch to inter. Here, the intra mode may mean an intra prediction mode, and the inter mode may mean an inter prediction mode. The encoding apparatus 100 may generate a prediction block for an input block of the input image. In addition, after the prediction block is generated, the encoding apparatus 100 may encode the residual between the input block and the prediction block. The input image may be referred to as the current image that is the current encoding target. The input block may be referred to as the current block that is the current encoding target, or as the encoding target block.
When the prediction mode is the intra mode, the intra prediction unit 120 may use the pixel values of already encoded/decoded blocks adjacent to the current block as reference pixels. The intra prediction unit 120 may perform spatial prediction by using the reference pixels, and may generate prediction samples for the input block through the spatial prediction. Here, intra prediction may mean prediction within a frame.
When the prediction mode is the inter mode, the motion prediction unit 111 may search a reference picture for the region that best matches the input block when performing motion prediction, and may derive a motion vector by using the found region. The reference picture may be stored in the reference picture buffer 190.
The motion compensation unit 112 may generate a prediction block by performing motion compensation using the motion vector. Here, inter prediction may mean prediction, or motion compensation, between frames.
When the value of the motion vector is not an integer, the motion prediction unit 111 and the motion compensation unit 112 may generate the prediction block by applying an interpolation filter to a partial region of the reference picture. In order to perform inter-picture prediction or motion compensation on a coding unit, it may be determined which mode among a skip mode, a merge mode, an advanced motion vector prediction (AMVP) mode, and a current picture reference mode is used for the motion prediction and motion compensation of the prediction units included in the corresponding coding unit. Inter-picture prediction or motion compensation may then be performed differently according to the determined mode.
The subtractor 125 may generate a residual block by using the residual between the input block and the prediction block. The residual block may be referred to as a residual signal. The residual signal may mean the difference between the original signal and the prediction signal. In addition, the residual signal may be a signal generated by transforming or quantizing, or transforming and quantizing, the difference between the original signal and the prediction signal. The residual block may be the residual signal of a block unit.
The transform unit 130 may generate transform coefficients by performing a transform on the residual block, and may output the generated transform coefficients. Here, a transform coefficient may be a coefficient value generated by performing the transform on the residual block. When a transform skip mode is applied, the transform unit 130 may skip the transform of the residual block.
A quantized level may be generated by applying quantization to the transform coefficients or to the residual signal. Hereinafter, in the embodiments, the quantized level is also referred to as a transform coefficient.
The quantization unit 140 may generate a quantized level by quantizing the transform coefficients or the residual signal according to a quantization parameter, and may output the generated quantized level. Here, the quantization unit 140 may quantize the transform coefficients by using a quantization matrix.
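The quantization step performed by the quantization unit 140, and its inverse, can be sketched as a scalar quantizer: divide each transform coefficient by a step size and round. The fixed step size below is an illustrative assumption, not the normative mapping from the quantization parameter:

```python
def quantize(coefficients, step):
    """Quantize transform coefficients into quantized levels by
    dividing by the step size and rounding to the nearest integer."""
    return [int(round(c / step)) for c in coefficients]

def dequantize(levels, step):
    """Dequantization: scale the quantized levels back by the step size."""
    return [lvl * step for lvl in levels]

coeffs = [23, -11, 4, 0]
levels = quantize(coeffs, 6)
assert levels == [4, -2, 1, 0]
# Reconstruction is only approximate: quantization is lossy.
assert dequantize(levels, 6) == [24, -12, 6, 0]
```

The round trip shows where the coding loss comes from: 23 becomes 24 and −11 becomes −12 after quantization and scaling, which is exactly the error the decoder must live with.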
The entropy encoding unit 150 may generate a bitstream by performing entropy encoding, according to a probability distribution, on the values calculated by the quantization unit 140 or on the coding parameter values calculated when performing encoding, and may output the generated bitstream. The entropy encoding unit 150 may perform entropy encoding on pixel information of the image and on information for decoding the image. For example, the information for decoding the image may include syntax elements.
When entropy encoding is applied, symbols may be represented by allocating a smaller number of bits to symbols having a high probability of occurrence and a larger number of bits to symbols having a low probability of occurrence, so that the size of the bitstream for the symbols to be encoded may be reduced. The entropy encoding unit 150 may use an encoding method for entropy encoding such as exponential Golomb coding, context-adaptive variable length coding (CAVLC), context-adaptive binary arithmetic coding (CABAC), and the like. For example, the entropy encoding unit 150 may perform entropy encoding by using a variable length coding (VLC) table. In addition, the entropy encoding unit 150 may derive a binarization method for a target symbol and a probability model for a target symbol/bin, and may perform arithmetic coding by using the derived binarization method and context model.
In order to encode the transform coefficient levels, the entropy encoding unit 150 may change the coefficients of the two-dimensional block form into a one-dimensional vector form by using a transform coefficient scan method.
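The bit-allocation principle above — shorter codewords for more probable values — is visible in the exponential Golomb code named as one of the usable methods. A minimal order-0 encoder sketch for unsigned values:

```python
def exp_golomb_order0(value):
    """Order-0 exp-Golomb code for an unsigned value v: write
    floor(log2(v+1)) zero bits, then v+1 in binary."""
    code = bin(value + 1)[2:]          # binary of v+1, without '0b' prefix
    return "0" * (len(code) - 1) + code

# Small (typically frequent) values get short codewords,
# large (typically rare) values get longer ones.
assert exp_golomb_order0(0) == "1"
assert exp_golomb_order0(1) == "010"
assert exp_golomb_order0(4) == "00101"
```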
The coding parameters may include not only information, such as syntax elements, that is encoded in the encoder and signaled to the decoder (flags, indices, etc.), but also information derived when performing encoding or decoding. A coding parameter may mean information required when encoding or decoding an image. For example, the coding parameters may include the value or combined form of at least one of: unit/block size, unit/block depth, unit/block partition information, unit/block partition structure, whether quad-tree-form partitioning is performed, whether binary-tree-form partitioning is performed, the partition direction of the binary-tree form (horizontal direction or vertical direction), the partition form of the binary-tree form (symmetric partitioning or asymmetric partitioning), intra prediction mode/direction, reference sample filtering method, prediction block filtering method, prediction block filter tap, prediction block filter coefficient, inter prediction mode, motion information, motion vector, reference picture index, inter prediction angle, inter prediction indicator, reference picture list, reference picture, motion vector predictor candidate, motion vector candidate list, whether merge mode is used, merge candidate, merge candidate list, whether skip mode is used, interpolation filter type, interpolation filter tap, interpolation filter coefficient, motion vector size, motion vector representation precision, transform type, transform size, information on whether a first (primary) transform is used, information on whether a secondary transform is used, first transform index, secondary transform index, information on whether a residual signal is present, coded block pattern, coded block flag (CBF), quantization parameter, quantization matrix, whether an in-loop filter is applied, in-loop filter coefficient, in-loop filter tap, in-loop filter shape/form, whether a deblocking filter is applied, deblocking filter coefficient, deblocking filter tap, deblocking filter strength, deblocking filter shape/form, whether a sample adaptive offset is applied, sample adaptive offset value, sample adaptive offset category, sample adaptive offset type, whether an adaptive in-loop filter is applied, adaptive in-loop filter coefficient, adaptive in-loop filter tap, adaptive in-loop filter shape/form, binarization/debinarization method, context model determination method, context model update method, whether a normal mode is performed, whether a bypass mode is performed, context bin, bypass bin, transform coefficient, transform coefficient level, transform coefficient level scan method, image display/output order, slice identification information, slice type, slice partition information, tile identification information, tile type, tile partition information, picture type, bit depth, and information on a luma signal or a chroma signal.
Here, signaling a flag or an index may mean that the corresponding flag or index is entropy-encoded by the encoder and included in the bitstream, and may mean that the corresponding flag or index is entropy-decoded from the bitstream by the decoder.
When the encoding apparatus 100 performs encoding through inter prediction, the encoded current image may be used as a reference picture for another image processed subsequently. Accordingly, the encoding apparatus 100 may reconstruct or decode the encoded current image, and may store the reconstructed or decoded image as a reference picture.
A quantized level may be dequantized in the inverse quantization unit 160, and may be inverse-transformed in the inverse transform unit 170. The dequantized or inverse-transformed coefficient, or the coefficient subjected to both dequantization and inverse transform, may be added to the prediction block by the adder 175. A reconstructed block may be generated by adding the dequantized or inverse-transformed coefficient, or the coefficient subjected to both dequantization and inverse transform, to the prediction block. Here, the dequantized or inverse-transformed coefficient, or the coefficient subjected to both, may mean a coefficient on which at least one of dequantization and inverse transform has been performed, and may mean a reconstructed residual block.
The reconstructed block may pass through the filter unit 180. The filter unit 180 may apply at least one of a deblocking filter, a sample adaptive offset (SAO), and an adaptive loop filter (ALF) to the reconstructed block or the reconstructed image. The filter unit 180 may be referred to as an in-loop filter.
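The reconstruction performed by the adder 175 — reconstructed residual plus prediction, sample by sample — can be sketched as follows (clipping the result to the valid sample range is an illustrative assumption, added so the toy example stays well-formed):

```python
def reconstruct_block(prediction, residual, bit_depth=8):
    """Add the reconstructed residual block to the prediction block,
    clipping each sample to the valid range [0, 2^Bd - 1]."""
    max_val = (1 << bit_depth) - 1
    return [[max(0, min(max_val, p + r)) for p, r in zip(prow, rrow)]
            for prow, rrow in zip(prediction, residual)]

pred = [[250, 100], [0, 30]]
resid = [[10, -5], [-7, 3]]
# 250 + 10 overflows and is clipped to 255; 0 - 7 is clipped to 0.
assert reconstruct_block(pred, resid) == [[255, 95], [0, 33]]
```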
The deblocking filter may remove block distortion generated at boundaries between blocks. In order to determine whether to apply the deblocking filter, whether to apply the deblocking filter to the current block may be determined based on the pixels included in several rows or columns within the block. When the deblocking filter is applied to a block, different filters may be applied according to the required deblocking filtering strength.
In order to compensate for an encoding error, an appropriate offset value may be added to a pixel value by using a sample adaptive offset. The sample adaptive offset may correct, on a per-pixel basis, the offset between the deblocked image and the original image. A method of applying an offset in consideration of edge information of each pixel may be used, or the following method may be used: partitioning the pixels of the image into a predetermined number of regions, determining a region to which an offset is to be applied, and applying the offset to the determined region.
The adaptive loop filter may perform filtering based on a comparison between the filtered reconstructed image and the original image. The pixels included in the image may be partitioned into predetermined groups, the filter to be applied to each group may be determined, and filtering may be performed differently for each group. Information on whether to apply the ALF may be signaled per coding unit (CU), and the shape and coefficients of the ALF to be applied may vary from block to block.
The reconstructed block or the reconstructed image that has passed through the filter unit 180 may be stored in the reference picture buffer 190.
Fig. 2 is a block diagram showing the construction of a decoding apparatus according to an embodiment to which the present invention is applied.
The decoding apparatus 200 may be a decoder, a video decoding apparatus, or an image decoding apparatus.
Referring to Fig. 2, the decoding apparatus 200 may include an entropy decoding unit 210, an inverse quantization unit 220, an inverse transform unit 230, an intra prediction unit 240, a motion compensation unit 250, an adder 255, a filter unit 260, and a reference picture buffer 270.
The decoding apparatus 200 may receive the bitstream output from the encoding apparatus 100. The decoding apparatus 200 may receive a bitstream stored in a computer-readable recording medium, or may receive a bitstream streamed over a wired/wireless transmission medium. The decoding apparatus 200 may decode the bitstream by using the intra mode or the inter mode. In addition, the decoding apparatus 200 may generate a reconstructed image or a decoded image through the decoding, and may output the reconstructed image or the decoded image.
When the prediction mode used for decoding is the intra mode, the switch may be switched to intra. Alternatively, when the prediction mode used for decoding is the inter mode, the switch may be switched to inter.
The decoding apparatus 200 may obtain a reconstructed residual block by decoding the input bitstream, and may generate a prediction block. When the reconstructed residual block and the prediction block are obtained, the decoding apparatus 200 may generate the reconstructed block that becomes the decoding target by adding the reconstructed residual block and the prediction block. The decoding target block may be referred to as the current block.
The entropy decoding unit 210 may generate symbols by performing entropy decoding on the bitstream according to a probability distribution. The generated symbols may include symbols in the form of quantized levels. Here, the entropy decoding method may be the inverse process of the entropy encoding method described above.
In order to decode the transform coefficient levels, the entropy decoding unit 210 may change the coefficients of the one-dimensional vector form into a two-dimensional block form by using a transform coefficient scan method.
The grade of quantization can be in inverse quantization unit 220 by inverse quantization, or can be in inverse transformation block 230 by inversion
It changes.The grade of quantization can be both carry out inverse quantization or inverse transformation or carry out inverse quantization and inverse transformation as a result, and can
It is generated as reconstructive residual error block.Here, inverse quantization unit 220 can be to the classes of applications quantization matrix of quantization.
When the intra mode is used, the intra prediction unit 240 may generate a prediction block by performing spatial prediction that uses pixel values of already decoded blocks adjacent to the decoding target block.

When the inter mode is used, the motion compensation unit 250 may generate a prediction block by performing motion compensation that uses a motion vector and a reference picture stored in the reference picture buffer 270.
The adder 255 may generate a reconstructed block by adding the reconstructed residual block and the prediction block. The filter unit 260 may apply at least one of a deblocking filter, a sample adaptive offset (SAO), and an adaptive loop filter to the reconstructed block or the reconstructed image. The filter unit 260 may output the reconstructed image. The reconstructed block or the reconstructed image may be stored in the reference picture buffer 270 and used in performing inter prediction.
Fig. 3 is a diagram schematically showing a partition structure of an image when encoding and decoding the image. Fig. 3 schematically shows an example in which a single unit is partitioned into a plurality of lower units.

In order to efficiently partition an image, a coding unit (CU) may be used in encoding and decoding. The coding unit may be used as a basic unit of image encoding/decoding. In addition, the coding unit may be used as a unit for distinguishing between the intra mode and the inter mode in image encoding/decoding. The coding unit may be a basic unit for prediction, transform, quantization, inverse transform, inverse quantization, or encoding/decoding of transform coefficients.
Referring to Fig. 3, an image 300 is sequentially partitioned in units of a largest coding unit (LCU), and a partition structure is determined in units of the LCU. Here, the LCU may be used with the same meaning as a coding tree unit (CTU). Partitioning a unit may mean partitioning a block associated with the unit. Block partition information may include information on the depth of a unit. The depth information may indicate the number of times the unit is partitioned, the degree to which it is partitioned, or both. A single unit may be partitioned hierarchically based on a tree structure with depth information. Each partitioned lower unit may have depth information. The depth information may be information indicating the size of a CU, and may be stored for each CU.
The partition structure may mean the distribution of coding units (CUs) within an LCU 310. Such a distribution may be determined according to whether or not a single CU is partitioned into a plurality of (a positive integer equal to or greater than 2, such as 2, 4, 8, 16, etc.) CUs. The horizontal size and the vertical size of a CU generated by the partitioning may be, respectively, half the horizontal size and half the vertical size of the CU before the partitioning, or may be smaller than the horizontal size and the vertical size before the partitioning according to the number of times of partitioning. A CU may be recursively partitioned into a plurality of CUs. The partitioning of a CU may be performed recursively up to a predefined depth or a predefined size. For example, the depth of the LCU may be 0, and the depth of the smallest coding unit (SCU) may be a predefined maximum depth. Here, the LCU may be a coding unit having the maximum coding unit size, and the SCU may be a coding unit having the minimum coding unit size, as described above. Partitioning starts from the LCU 310, and the depth of a CU increases by 1 each time the horizontal size or the vertical size, or both, of the CU are reduced by the partitioning.
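Because each split described above halves both dimensions, the depth of a CU follows directly from the ratio of the LCU size to the CU size. A minimal sketch of that arithmetic (the function name is illustrative, not from the specification):

```python
import math

def cu_depth(lcu_size: int, cu_size: int) -> int:
    # Each partitioning halves both dimensions, so
    # depth = log2(LCU size / CU size).
    return int(math.log2(lcu_size // cu_size))
```

For a 64×64 LCU, this gives depth 0 for a 64×64 CU and depth 3 for an 8×8 SCU, matching the example of Fig. 3.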
In addition, information on whether or not a CU is partitioned may be expressed through partition information of the CU. The partition information may be 1-bit information. All CUs except the SCU may include partition information. For example, when the value of the partition information is a first value, the CU may not be partitioned, and when the value of the partition information is a second value, the CU may be partitioned.
Referring to Fig. 3, an LCU having a depth of 0 may be a 64×64 block. 0 may be the minimum depth. An SCU having a depth of 3 may be an 8×8 block. 3 may be the maximum depth. CUs of a 32×32 block and a 16×16 block may be expressed as depth 1 and depth 2, respectively.
For example, when a single coding unit is partitioned into four coding units, the horizontal size and the vertical size of each of the four partitioned coding units may be half the horizontal size and half the vertical size of the CU before partitioning. In one embodiment, when a coding unit having a 32×32 size is partitioned into four coding units, each of the four partitioned coding units may have a 16×16 size. When a single coding unit is partitioned into four coding units, the coding unit may be said to be partitioned in a quad-tree form.
For example, when a single coding unit is partitioned into two coding units, the horizontal size or the vertical size of each of the two coding units may be half the horizontal size or half the vertical size of the coding unit before partitioning. For example, when a coding unit having a 32×32 size is partitioned in the vertical direction, each of the two partitioned coding units may have a 16×32 size. When a single coding unit is partitioned into two coding units, the coding unit may be said to be partitioned in a binary-tree form. The LCU 320 of Fig. 3 is an example of an LCU to which both quad-tree-form partitioning and binary-tree-form partitioning are applied.
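The size arithmetic of the two partitioning forms above can be sketched as follows; this models only the child-block sizes, not the signalling, and the mode names are illustrative:

```python
def split_sizes(w, h, mode):
    """Child block sizes for the partitioning forms described above.

    mode: 'quad'    -> quad-tree: four (w/2, h/2) children
          'bin_ver' -> binary, vertical split: two (w/2, h) children
          'bin_hor' -> binary, horizontal split: two (w, h/2) children
    """
    if mode == "quad":
        return [(w // 2, h // 2)] * 4
    if mode == "bin_ver":
        return [(w // 2, h)] * 2
    if mode == "bin_hor":
        return [(w, h // 2)] * 2
    raise ValueError(mode)
```

For a 32×32 coding unit, `split_sizes(32, 32, "quad")` yields four 16×16 children and `split_sizes(32, 32, "bin_ver")` yields two 16×32 children, matching the examples above.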
Fig. 4 is a diagram showing an embodiment of an inter-picture prediction process.

In Fig. 4, a rectangle may represent a picture. In Fig. 4, an arrow indicates a prediction direction. Pictures may be classified, according to their coding type, into an intra picture (I picture), a predictive picture (P picture), and a bi-predictive picture (B picture).

An I picture may be encoded by intra prediction without inter-picture prediction. A P picture may be encoded via inter-picture prediction by using a reference picture that is present in one direction (i.e., forward or backward) with respect to the current block. A B picture may be encoded via inter-picture prediction by using reference pictures that are present in both directions (i.e., forward and backward) with respect to the current block. When inter-picture prediction is used, the encoder may perform inter-picture prediction or motion compensation, and the decoder may perform the corresponding motion compensation.
Hereinafter, an embodiment of inter-picture prediction will be described in detail.

Inter-picture prediction or motion compensation may be performed by using a reference picture and motion information.

The motion information of the current block may be derived by each of the encoding apparatus 100 and the decoding apparatus 200 during inter-picture prediction. The motion information of the current block may be derived by using the motion information of a reconstructed neighboring block, the motion information of a co-located block (also referred to as a col block), and/or the motion information of a block adjacent to the co-located block. The co-located block means a block located at the spatially same position as the current block within a previously reconstructed co-located picture (also referred to as a col picture). The co-located picture may be one picture among one or more reference pictures included in a reference picture list.
The method of deriving the motion information of the current block may vary depending on the prediction mode of the current block. For example, prediction modes used for inter-picture prediction may include an AMVP mode, a merge mode, a skip mode, a current picture reference mode, and the like. The merge mode may be referred to as a motion merge mode.
For example, when the AMVP mode is used as the prediction mode, at least one of the motion vector of a reconstructed neighboring block, the motion vector of the co-located block, the motion vector of a block adjacent to the co-located block, and the (0, 0) motion vector may be determined as a motion vector candidate of the current block, and a motion vector candidate list may be generated by using the motion vector candidates. The motion vector candidate of the current block may be derived by using the generated motion vector candidate list. The motion information of the current block may be determined based on the derived motion vector candidate. The motion vector of the co-located block or the motion vector of a block adjacent to the co-located block may be referred to as a temporal motion vector candidate, and the motion vector of a reconstructed neighboring block may be referred to as a spatial motion vector candidate.
The encoding apparatus 100 may calculate a motion vector difference (MVD) between the motion vector of the current block and a motion vector candidate, and may entropy-encode the motion vector difference (MVD). In addition, the encoding apparatus 100 may entropy-encode a motion vector candidate index and generate a bitstream. The motion vector candidate index may indicate the optimum motion vector candidate among the motion vector candidates included in the motion vector candidate list. The decoding apparatus may entropy-decode the motion vector candidate index included in the bitstream, and may select the motion vector candidate of the decoding target block among the motion vector candidates included in the motion vector candidate list by using the entropy-decoded motion vector candidate index. In addition, the decoding apparatus 200 may add the entropy-decoded MVD and the motion vector candidate extracted through entropy decoding, thereby deriving the motion vector of the decoding target block.
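The decoder-side reconstruction described above reduces to a componentwise addition of the indexed predictor and the decoded MVD. A minimal sketch (the function name and the `(x, y)` tuple layout are assumptions for illustration):

```python
def reconstruct_mv(mvp_list, candidate_index, mvd):
    # mv = selected motion vector candidate + motion vector difference,
    # applied componentwise to the (x, y) vectors.
    px, py = mvp_list[candidate_index]
    dx, dy = mvd
    return (px + dx, py + dy)
```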
The bitstream may include a reference picture index indicating a reference picture. The reference picture index may be entropy-encoded by the encoding apparatus 100 and then signaled to the decoding apparatus 200 as a bitstream. The decoding apparatus 200 may generate the prediction block of the decoding target block based on the derived motion vector and the reference picture index information.
Another example of the method of deriving the motion information of the current block may be the merge mode. The merge mode may mean a method of merging the motion of a plurality of blocks. The merge mode may mean a mode in which the motion information of the current block is derived from the motion information of a neighboring block. When the merge mode is applied, a merge candidate list may be generated by using the motion information of a reconstructed neighboring block and/or the motion information of the co-located block. The motion information may include at least one of a motion vector, a reference picture index, and an inter-picture prediction indicator. The prediction indicator may indicate uni-direction (L0 prediction or L1 prediction) or bi-direction (L0 prediction and L1 prediction).
The merge candidate list may be a list in which motion information is stored. The motion information included in the merge candidate list may be at least one of: the motion information of a neighboring block adjacent to the current block (a spatial merge candidate), the motion information of the co-located block of the current block included in the reference picture (a temporal merge candidate), new motion information generated by combining pieces of motion information already present in the merge candidate list (a combined merge candidate), and a zero merge candidate.
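The composition of the merge candidate list described above can be sketched as follows. This is a simplified illustration under stated assumptions: combined bi-predictive candidates are omitted, the maximum list size of 5 and the `(mv, ref_idx)` tuple layout are assumptions, and the duplicate pruning shown is a common practical refinement rather than something this passage specifies:

```python
def build_merge_list(spatial, temporal, max_cands=5):
    """Assemble a merge candidate list: spatial candidates first,
    then the temporal candidate, then zero-motion padding.
    Each candidate is ((mv_x, mv_y), ref_idx); None = unavailable."""
    cands = []
    for c in spatial:
        if c is not None and c not in cands:   # skip unavailable/duplicate
            cands.append(c)
        if len(cands) == max_cands:
            return cands
    if temporal is not None and len(cands) < max_cands:
        cands.append(temporal)
    while len(cands) < max_cands:
        cands.append(((0, 0), 0))              # zero merge candidate
    return cands
```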
The encoding apparatus 100 may generate a bitstream by performing entropy encoding on at least one of a merge flag and a merge index, and may signal the bitstream to the decoding apparatus 200. The merge flag may be information indicating whether the merge mode is performed for each block, and the merge index may be information indicating which of the neighboring blocks of the current block is the merge target block. For example, the neighboring blocks of the current block may include a left neighboring block on the left side of the current block, an upper neighboring block above the current block, and a temporal neighboring block temporally adjacent to the current block.
The skip mode may be a mode in which the motion information of a neighboring block is applied to the current block as it is. When the skip mode is applied, the encoding apparatus 100 may entropy-encode information on which block's motion information is to be used as the motion information of the current block to generate a bitstream, and may signal the bitstream to the decoding apparatus 200. The encoding apparatus 100 may not signal, to the decoding apparatus 200, at least one syntax element among the motion vector difference information, the coded block flag, and the transform coefficient level.
The current picture reference mode may mean a prediction mode in which a previously reconstructed region within the current picture to which the current block belongs is used for prediction. Here, a vector may be used to specify the previously reconstructed region. Whether the current block is encoded in the current picture reference mode may be encoded by using the reference picture index of the current block. A flag or an index indicating whether the current block is a block encoded in the current picture reference mode may be signaled, or may be derived based on the reference picture index of the current block. When the current block is encoded in the current picture reference mode, the current picture may be added to the reference picture list for the current block so as to be located at a fixed position or an arbitrary position in the reference picture list. The fixed position may be, for example, the position indicated by reference picture index 0, or the last position, in the list. When the current picture is added to the reference picture list so as to be located at an arbitrary position, a reference picture index indicating the arbitrary position may be signaled.
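The reference-list manipulation described above — placing the current picture at a fixed position (such as index 0) or at a signaled arbitrary position — is simple list insertion. A sketch under the assumption that pictures are represented by opaque identifiers:

```python
def add_current_picture(ref_list, current, position=0):
    # Insert the current picture into the reference picture list at the
    # given position; position=0 models the fixed "index 0" case, while
    # a signaled index models the arbitrary-position case.
    out = list(ref_list)
    out.insert(position, current)
    return out
```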
Based on the above description, an image encoding method and an image decoding method according to embodiments of the present invention will be described in detail below.
Fig. 5 is a flowchart showing an image encoding method according to an embodiment of the present invention, and Fig. 6 is a flowchart showing an image decoding method according to an embodiment of the present invention.
Referring to Fig. 5, the encoding apparatus may derive motion vector candidates (step S501), and may generate a motion vector candidate list based on the derived motion vector candidates (step S502). After the motion vector candidate list is generated, a motion vector may be determined based on the generated motion vector candidate list (step S503), and motion compensation may be performed based on the determined motion vector (step S504). Thereafter, the encoding apparatus may encode information associated with motion compensation (step S505).
Referring to Fig. 6, the decoding apparatus may entropy-decode the information associated with motion compensation received from the encoding apparatus (step S601), and may derive motion vector candidates (step S602). The decoding apparatus may generate a motion vector candidate list based on the derived motion vector candidates (step S603), and may determine a motion vector by using the generated motion vector candidate list (step S604). Thereafter, the decoding apparatus may perform motion compensation by using the determined motion vector (step S605).
Fig. 7 is a flowchart showing an image encoding method according to another embodiment of the present invention, and Fig. 8 is a flowchart showing an image decoding method according to another embodiment of the present invention.
Referring to Fig. 7, the encoding apparatus may derive merge candidates (step S701), and may generate a merge candidate list based on the derived merge candidates. After the merge candidate list is generated, the encoding apparatus may determine motion information by using the generated merge candidate list (step S702), and may perform motion compensation on the current block by using the determined motion information (step S703). Thereafter, the encoding apparatus may entropy-encode information associated with motion compensation (step S704).
Referring to Fig. 8, the decoding apparatus may entropy-decode the information associated with motion compensation received from the encoding apparatus (S801), may derive merge candidates (S802), and may generate a merge candidate list based on the derived merge candidates. After the merge candidate list is generated, the decoding apparatus may determine the motion information of the current block by using the generated merge candidate list (S803). Thereafter, the decoding apparatus may perform motion compensation by using the motion information (S804).
Fig. 5 and Fig. 6 show an example in which the AMVP mode shown in Fig. 4 is applied, and Fig. 7 and Fig. 8 show an example in which the merge mode shown in Fig. 4 is applied.

Hereinafter, each step shown in Fig. 5 and Fig. 6 will be described, and then each step shown in Fig. 7 and Fig. 8 will be described. However, the motion compensation steps corresponding to S504, S605, S703, and S804 and the entropy encoding/decoding steps corresponding to S505, S601, S704, and S801 will be described together.
Hereinafter, each step shown in Fig. 5 and Fig. 6 will be described in detail.

First, the step of deriving the motion vector candidates (S501, S602) will be described in detail.
The motion vector candidates of the current block may include one of a spatial motion vector candidate and a temporal motion vector candidate, or may include both a spatial motion vector candidate and a temporal motion vector candidate.

The spatial motion vector of the current block may be derived from a reconstructed block adjacent to the current block. For example, the motion vector of a reconstructed block adjacent to the current block may be determined as the spatial motion vector candidate of the current block.
Fig. 9 is a diagram showing an example of deriving the spatial motion vector candidates of the current block.

Referring to Fig. 9, the spatial motion vector candidates of the current block may be derived from neighboring blocks adjacent to the current block X. The neighboring blocks adjacent to the current block X may include at least one of a block B1 adjacent to the top of the current block, a block A1 adjacent to the left of the current block, a block B0 adjacent to the top-right corner of the current block, a block B2 adjacent to the top-left corner of the current block, and a block A0 adjacent to the bottom-left corner of the current block. The neighboring blocks adjacent to the current block may have a square shape or a non-square shape. When a neighboring block among the plurality of neighboring blocks adjacent to the current block has a motion vector, the motion vector of that neighboring block may be determined as a spatial motion vector candidate of the current block. Whether a neighboring block has a motion vector, or whether the motion vector of a neighboring block can be used as a spatial motion vector candidate of the current block, may be determined based on whether the neighboring block exists or whether the neighboring block was encoded through inter prediction. The determination of whether a specific neighboring block has a motion vector, or whether the motion vector of a neighboring block can be used as a spatial motion vector candidate of the current block, may be performed in a predetermined order. For example, as shown in Fig. 9, the availability determination of motion vectors may be performed in the order of the blocks A0, A1, B0, B1, and B2.
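The availability scan just described can be sketched as follows. The candidate cap of two is an assumption for illustration (it matches common AMVP designs but is not stated in this passage):

```python
def spatial_mv_candidates(neighbors, max_cands=2):
    """Scan neighboring blocks in the order described above
    (A0, A1, B0, B1, B2). `neighbors` maps block name -> motion vector,
    or None when the block is absent or not inter-predicted."""
    cands = []
    for name in ("A0", "A1", "B0", "B1", "B2"):
        mv = neighbors.get(name)
        if mv is not None:          # block exists and has a motion vector
            cands.append(mv)
            if len(cands) == max_cands:
                break
    return cands
```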
When the reference picture of the current block and the reference picture of a neighboring block having a motion vector are different from each other, the motion vector of the neighboring block may be scaled, and the scaled motion vector may then be used as the spatial motion vector candidate of the current block. The scaling of the motion vector may be performed based on at least one of the distance between the current picture and the reference picture of the current block and the distance between the current picture and the reference picture of the neighboring block. Here, the spatial motion vector candidate of the current block may be derived by scaling the motion vector of the neighboring block according to the ratio of the distance between the current picture and the reference picture of the current block to the distance between the current picture and the reference picture of the neighboring block.

However, when the reference picture index of the current block and the reference picture index of the neighboring block having a motion vector are different, the scaled motion vector of the neighboring block may be determined as the spatial motion vector candidate of the current block. Even in this case, the scaling may be performed based on at least one of the distance between the current picture and the reference picture of the current block and the distance between the current picture and the reference picture of the neighboring block.
As for the scaling, the motion vector of the neighboring block may be scaled based on the reference picture indicated by a reference picture index having a predefined value, and the scaled motion vector may be determined as the spatial motion vector candidate of the current block. The predefined value may be zero or a positive integer. For example, the spatial motion vector candidate of the current block may be derived by scaling the motion vector of the neighboring block according to the ratio of the distance between the current picture and the reference picture of the current block indicated by the reference picture index having the predefined value to the distance between the current picture and the reference picture of the neighboring block having the predefined value.

Alternatively, the spatial motion vector candidate of the current block may be derived based on at least one of the coding parameters of the current block.
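The distance-ratio scaling described above can be sketched as follows. This is plain floating-point arithmetic under the assumption that each "distance" is a picture-order difference between a picture and its reference; real codecs perform an equivalent fixed-point computation:

```python
def scale_mv(mv, dist_current, dist_neighbor):
    # Scale the neighbor's motion vector by the ratio of the current
    # block's picture distance to the neighbor's picture distance.
    ratio = dist_current / dist_neighbor
    return (round(mv[0] * ratio), round(mv[1] * ratio))
```

For example, a neighbor vector of (8, -4) whose reference is twice as far away as the current block's reference is scaled to (4, -2).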
The temporal motion vector candidate of the current block may be derived from a reconstructed block included in the co-located picture of the current picture. The co-located picture is a picture that has been encoded/decoded before the current picture, and may differ from the current picture in temporal order.
Fig. 10 is a diagram showing an example of deriving the temporal motion vector candidate of the current block.

Referring to Fig. 10, the temporal motion vector candidate of the current block may be derived from a block at a position outside the block spatially co-located with the current block X within the co-located picture (also referred to as the col picture) of the current picture, or from a block at a position inside the block spatially co-located with the current block X. Here, the temporal motion vector candidate may mean the motion vector of the co-located block of the current block. For example, the temporal motion vector candidate of the current block X may be derived from a block H adjacent to the bottom-left corner of a block C spatially co-located with the current block X, or from a block C3 including the center position of the block C. The block H, the block C3, and the like used for deriving the temporal motion vector candidate of the current block may be referred to as co-located blocks.

Alternatively, the temporal motion vector candidate may be derived based on at least one of a coding parameter, the co-located picture, the co-located block, a prediction list utilization flag, and a reference picture index.
When the distance between the current picture including the current block and the reference picture of the current block differs from the distance between the co-located picture including the co-located block and the reference picture of the co-located block, the temporal motion vector candidate of the current block may be obtained by scaling the motion vector of the co-located block. Here, the scaling may be performed based on at least one of the distance between the current picture and the reference picture of the current block and the distance between the co-located picture and the reference picture of the co-located block. For example, the temporal motion vector candidate of the current block may be derived by scaling the motion vector of the co-located block according to the ratio of the distance between the current picture and the reference picture of the current block to the distance between the co-located picture and the reference picture of the co-located block.
Next, the step of generating the motion vector candidate list based on the derived motion vector candidates (S502, S603) will be described.

The step of generating the motion vector candidate list may include a process of adding a motion vector candidate to the motion vector candidate list or removing a motion vector candidate from the motion vector candidate list, and a process of adding a combined motion vector candidate to the motion vector candidate list.
First, the process of adding a derived motion vector candidate to the motion vector candidate list or removing a derived motion vector candidate from the motion vector candidate list will be described. The encoding apparatus and the decoding apparatus may add the derived motion vector candidates to the motion vector candidate list in the order in which the motion vector candidates were derived.

It is assumed that a motion vector candidate list mvpListLX means a motion vector candidate list corresponding to the reference picture lists L0, L1, L2, and L3. That is, a motion vector candidate list corresponding to the reference picture list L0 may be expressed as mvpListL0.
In addition to the spatial motion vector candidates and the temporal motion vector candidates, a motion vector having a predetermined value may be added to the motion vector candidate list. For example, when the number of motion vector candidates in the motion vector candidate list is less than the maximum number of motion vector candidates that can be included in the motion vector candidate list, a motion vector having a value of 0 may be added to the motion vector candidate list.
Next, the process of adding a combined motion vector candidate to the motion vector candidate list will be described.

When the number of motion vector candidates in the motion vector candidate list is less than the maximum number of motion vector candidates that can be included in the motion vector candidate list, one or more of the motion vector candidates in the motion vector candidate list may be combined to generate one or more combined motion vector candidates, and the generated combined motion vector candidates may be added to the motion vector candidate list. For example, at least one among the spatial motion vector candidates, the temporal motion vector candidates, and the zero motion vector candidates included in the motion vector candidate list may be used to generate a combined motion vector candidate, and the generated combined motion vector candidate may be added to the motion vector candidate list.

Alternatively, a combined motion vector candidate may be generated based on at least one of the coding parameters, and the combined motion vector candidate generated based on at least one of the coding parameters may be added to the motion vector candidate list.
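The list construction described in this and the preceding paragraphs — derived candidates added in derivation order, then zero-vector padding up to the maximum list size — can be sketched as follows. The maximum of two candidates is an assumption, and combined candidates are omitted for brevity:

```python
def fill_mvp_list(spatial, temporal, max_cands=2):
    """Build mvpListLX: spatial then temporal candidates in derivation
    order, padded with the zero motion vector up to max_cands."""
    mvp_list = []
    for mv in spatial + temporal:
        if len(mvp_list) < max_cands:
            mvp_list.append(mv)
    while len(mvp_list) < max_cands:
        mvp_list.append((0, 0))     # zero motion vector candidate
    return mvp_list
```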
Next, the step of selecting the predicted motion vector of the current block from the motion vector candidate list (S503, S604) will be described.

Among the motion vector candidates included in the motion vector candidate list, the motion vector candidate indicated by the motion vector candidate index may be determined as the predicted motion vector of the current block.

The encoding apparatus may calculate the difference between the motion vector of the current block and the predicted motion vector to generate a motion vector difference. The decoding apparatus may generate the motion vector of the current block by adding the predicted motion vector and the motion vector difference.
The step of performing motion compensation (S504, S605) shown in Fig. 5 and Fig. 6, the step of entropy encoding/decoding the information associated with motion compensation (S505, S601), the step of performing motion compensation (S703, S804) shown in Fig. 7 and Fig. 8, and the step of entropy encoding/decoding (S704, S801) will be described together later.

Hereinafter, each step shown in Fig. 7 and Fig. 8 will be described in detail.
First, the step of deriving merge candidates (S701, S802) will be described.

The merge candidates of the current block may include at least one of a spatial merge candidate, a temporal merge candidate, and an additional merge candidate. Here, the statement "deriving a spatial merge candidate" means the process of deriving a spatial merge candidate and adding the derived merge candidate to the merge candidate list.
Referring to Fig. 9, the spatial merge candidates of the current block may be derived from the neighboring blocks adjacent to the current block X. The neighboring blocks adjacent to the current block X may include at least one of the block B1 adjacent to the top of the current block, the block A1 adjacent to the left of the current block, the block B0 adjacent to the top-right corner of the current block, the block B2 adjacent to the top-left corner of the current block, and the block A0 adjacent to the bottom-left corner of the current block.

To derive the spatial merge candidates of the current block, it is determined whether each neighboring block adjacent to the current block can be used for deriving a spatial merge candidate of the current block. This determination may be made for the neighboring blocks according to a predetermined priority order. For example, in the example of Fig. 9, the availability of spatial merge candidates may be determined in the order of the blocks A1, B1, B0, A0, and B2. The spatial merge candidates determined based on the availability determination order may be sequentially added to the merge candidate list of the current block.
Fig. 11 is a diagram showing an example of a process of adding spatial merge candidates to the merge candidate list.

Referring to Fig. 11, four spatial merge candidates are derived from the four neighboring blocks A1, B0, A0, and B2, and the derived spatial merge candidates may be sequentially added to the merge candidate list.

Alternatively, the spatial merge candidates may be derived based on at least one of the coding parameters.

Here, the motion information of a spatial merge candidate may include three or more pieces of motion information, wherein the three or more pieces of motion information include L2 motion information and L3 motion information in addition to L0 motion information and L1 motion information. Here, there may be at least one reference picture list, including, for example, L0, L1, L2, and L3.
Next, a method of deriving the temporal merge candidate of the current block will be described.

The temporal merge candidate of the current block may be derived from a reconstructed block included in the co-located picture of the current picture. The co-located picture may be a picture that has been encoded/decoded before the current picture, and may differ from the current picture in temporal order.

The statement "deriving a temporal merge candidate" means the process of deriving a temporal merge candidate and adding the derived temporal merge candidate to the merge candidate list.
Referring to Fig. 10, the temporal merge candidate of the current block may be derived from a block at a position outside the block spatially co-located with the current block X within the co-located picture (also referred to as the col picture) of the current picture, or from a block at a position inside the block spatially co-located with the current block X within the co-located picture of the current picture. The term "temporal merge candidate" may mean the motion information of the co-located block. For example, the temporal merge candidate of the current block X may be derived from the block H adjacent to the bottom-left corner of the block C spatially co-located with the current block X, or from the block C3 at the center position of the block C. The block H, the block C3, and the like used for deriving the temporal merge candidate of the current block may be referred to as co-located blocks.
When the temporal merge candidate of the current block can be derived from the block H located at a position outside the block C, the block H is set as the co-located block of the current block. In this case, the temporal merge candidate of the current block may be derived based on the motion information of the block H. On the contrary, when the temporal merge candidate of the current block cannot be derived from the block H, the block C3 located at a position inside the block C may be set as the co-located block of the current block. In this case, the temporal merge candidate of the current block may be derived based on the motion vector of the block C3. When no temporal merge candidate of the current block can be derived from either the block H or the block C3 (for example, when both the block H and the block C3 are intra-coded blocks), the temporal merge candidate of the current block may not be derived at all, or may be derived from a block other than the blocks H and C3.
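The H-then-C3 fallback above can be sketched as follows. This is a simplified illustration, not the normative derivation; `None` stands in for "no motion information available" (for example, an intra-coded block), and the helper name is hypothetical:

```python
def derive_temporal_candidate(mv_H, mv_C3):
    """Pick the co-located motion vector: prefer block H (outside the
    lower-right corner of block C), fall back to the inner block C3."""
    if mv_H is not None:        # H is usable: H becomes the co-located block
        return mv_H, "H"
    if mv_C3 is not None:       # H unavailable (e.g. intra): fall back to C3
        return mv_C3, "C3"
    return None, None           # neither usable: no temporal candidate
```

When both blocks are intra-coded, the sketch returns no candidate, matching the case described in the text.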
Optionally, for example, multiple temporal merge candidates of the current block may be derived from multiple blocks included in the co-located picture. That is, multiple temporal merge candidates of the current block may be derived from the blocks H, C3, and the like.
Figure 12 is a diagram illustrating an example of a process of adding a temporal merge candidate to a merge candidate list.
Referring to Figure 12, when one temporal merge candidate is derived from the co-located block located at the position H1, the derived temporal merge candidate may be added to the merge candidate list.
When the distance between the current picture including the current block and the reference picture of the current block differs from the distance between the co-located picture including the co-located block and the reference picture of the co-located block, the motion vector of the temporal merge candidate of the current block may be obtained by scaling the motion vector of the co-located block. Here, the scaling of the motion vector may be performed based on at least one of the distance between the current picture and the reference picture of the current block and the distance between the co-located picture and the reference picture of the co-located block. For example, the motion vector of the temporal merge candidate of the current block may be derived by scaling the motion vector of the co-located block according to the ratio of the distance between the current picture and the reference picture of the current block to the distance between the co-located picture and the reference picture of the co-located block.
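The distance-ratio scaling described above can be sketched as follows. This is an illustrative simplification (real codecs use fixed-point arithmetic and clipping); picture distances are expressed as picture order count (POC) differences, and the function name is hypothetical:

```python
def scale_temporal_mv(mv, poc_cur, poc_ref_cur, poc_col, poc_ref_col):
    """Scale the co-located motion vector by the ratio of the current
    picture's reference distance to the co-located picture's."""
    tb = poc_cur - poc_ref_cur   # distance: current picture -> its reference
    td = poc_col - poc_ref_col   # distance: co-located picture -> its reference
    if td == 0 or tb == td:
        return mv                # equal distances: no scaling needed
    return tuple(round(c * tb / td) for c in mv)
```

For example, a co-located vector spanning a distance of 4 is halved when the current block's reference distance is 2.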
In addition, at least one of the temporal merge candidate, the co-located picture, the co-located block, the prediction list utilization flag, and the reference picture index may be derived based on at least one of the coding parameters of the current block, a neighboring block, or the co-located block.
The merge candidate list may be generated by deriving at least one of the spatial merge candidates and the temporal merge candidates and sequentially adding the derived merge candidates to the merge candidate list in the order of derivation.
Next, a method of deriving additional merge candidates of the current block will be described.
The term "additional merge candidate" may mean at least one of a modified spatial merge candidate, a modified temporal merge candidate, a combined merge candidate, and a predetermined merge candidate having a predetermined motion information value. Here, the statement "deriving an additional merge candidate" may mean the process of deriving an additional merge candidate and adding the derived additional merge candidate to the merge candidate list.
A modified spatial merge candidate may mean a merge candidate obtained by modifying at least one item of the motion information of a derived spatial merge candidate.
A modified temporal merge candidate may mean a merge candidate obtained by modifying at least one item of the motion information of a derived temporal merge candidate.
A combined merge candidate may mean a merge candidate obtained by combining the motion information of at least one of a spatial merge candidate, a temporal merge candidate, a modified spatial merge candidate, a modified temporal merge candidate, a combined merge candidate, and a predetermined merge candidate having a predetermined motion information value, wherein the spatial merge candidate, the temporal merge candidate, the modified spatial merge candidate, the modified temporal merge candidate, the combined merge candidate, and the predetermined merge candidate having a predetermined motion information value are all included in the merge candidate list.
Optionally, a combined merge candidate may mean a merge candidate derived by combining the motion information of at least one of the following merge candidates: a spatial merge candidate and a temporal merge candidate that are not included in the merge candidate list but are derived from a block from which at least one of a spatial merge candidate and a temporal merge candidate can be derived; a modified spatial merge candidate and a modified temporal merge candidate derived based on the spatial merge candidate and the temporal merge candidate derived from that block; a combined merge candidate; and a predetermined merge candidate having a predetermined motion information value.
Optionally, a combined merge candidate may be derived using motion information obtained by performing entropy decoding on the bitstream in the decoder. In this case, the motion information used for deriving the combined merge candidate may be entropy-encoded into the bitstream in the encoder.
A combined merge candidate may mean a combined bi-predictive merge candidate. A combined bi-predictive merge candidate is a merge candidate using bi-directional prediction, and may be a merge candidate having L0 motion information and L1 motion information.
The merge candidate having a predetermined motion information value may be a zero merge candidate having a motion vector of (0, 0). The merge candidate having a predetermined motion information value may be set so that it has the same value in the encoding device and the decoding device.
At least one of the modified spatial merge candidate, the modified temporal merge candidate, the combined merge candidate, and the merge candidate having a predetermined motion information value may be derived or generated based on at least one of the coding parameters of the current block, a neighboring block, or the co-located block. In addition, at least one of the modified spatial merge candidate, the modified temporal merge candidate, the combined merge candidate, and the merge candidate having a predetermined motion information value may be added to the merge candidate list based on at least one of the coding parameters of the current block, the neighboring block, or the co-located block.
The size of the merge candidate list may be determined based on the coding parameters of the current block, a neighboring block, or the co-located block, and may be changed according to the coding parameters.
Next, the step of determining the motion information of the current block using the generated merge candidate list (S702, S803) will be described.
The encoder may select, through motion estimation, the merge candidate to be used for the motion compensation of the current block from the merge candidate list, and may encode a merge candidate index merge_idx indicating the determined merge candidate into the bitstream.
In order to generate the prediction block of the current block, the encoder may select a merge candidate from the merge candidate list using the merge candidate index, and determine the motion information of the current block. Then, the encoder may perform motion compensation based on the determined motion information to generate the prediction block of the current block.
The decoder may decode the merge candidate index in the received bitstream, and determine the merge candidate indicated by the merge candidate index among the merge candidates included in the merge candidate list. The determined merge candidate may be determined as the motion information of the current block. The determined motion information is used for the motion compensation of the current block. Here, the term "motion compensation" may have the same meaning as inter prediction.
Next, the step of performing motion compensation using the motion vector or the motion information (S504, S605, S703, S804) will be described.
The encoding device and the decoding device may calculate the motion vector of the current block by using the predicted motion vector and the motion vector difference. After calculating the motion vector, the encoding device and the decoding device may perform inter prediction or motion compensation using the calculated motion vector (S504, S605).
The encoding device and the decoding device may perform inter prediction or motion compensation using the determined motion information (S703, S804). Here, the current block may have the motion information of the determined merge candidate.
According to the prediction direction of the current block, the current block may have one (minimum) to N (maximum) motion vectors. One (minimum) to N (maximum) prediction blocks may be generated using the one to N motion vectors, and the final prediction block may be selected from among the generated prediction blocks.
For example, when the current block has one motion vector, the prediction block generated using that motion vector (or motion information) is determined as the final prediction block of the current block.
In addition, when the current block has multiple motion vectors (or multiple pieces of motion information), multiple prediction blocks are generated using the multiple motion vectors (or the multiple pieces of motion information), and the final prediction block of the current block is determined based on the weighted sum of the multiple prediction blocks. The multiple reference pictures respectively including the multiple prediction blocks indicated by the multiple motion vectors (or the multiple pieces of motion information) may be listed in different reference picture lists or in one reference picture list.
For example, multiple prediction blocks of the current block may be generated based on at least one of a spatial motion vector candidate, a temporal motion vector candidate, a motion vector having a predetermined value, and a combined motion vector candidate, and the final prediction block of the current block may then be determined based on the weighted sum of the multiple prediction blocks.
Optionally, for example, multiple prediction blocks of the current block may be generated based on the motion vector candidates indicated by a predetermined motion vector candidate index, and the final prediction block of the current block may then be determined based on the weighted sum of the multiple prediction blocks. In addition, multiple prediction blocks may be generated based on the motion vector candidates indicated by indices within a predetermined motion vector candidate index range, and the final prediction block of the current block may then be determined based on the weighted sum of the multiple prediction blocks.
The weight factor for each prediction block may be equal, namely 1/N (where N is the number of generated prediction blocks). For example, when two prediction blocks are generated, the weight factor for each prediction block is 1/2. Similarly, when three prediction blocks are generated, the weight factor for each prediction block is 1/3. When four prediction blocks are generated, the weight factor for each prediction block may be 1/4. Optionally, the final prediction block of the current block may be determined by applying a different weight factor to each prediction block.
The weight factors for the prediction blocks need not be fixed but may be variable. The weight factors for the prediction blocks may be unequal, that is, different from each other. For example, when two prediction blocks are generated, the weight factors for the two prediction blocks may be equal, such as (1/2, 1/2), or may be unequal, such as (1/3, 2/3), (1/4, 3/4), (2/5, 3/5), or (3/8, 5/8). A weight factor may be a positive real value or a negative real value. That is, the value of a weight factor may include a negative real value, such as (-1/2, 3/2), (-1/3, 4/3), or (-1/4, 5/4).
In order to apply variable weight factors, one or more pieces of weight factor information for the current block may be signaled through the bitstream. The weight factor information may be signaled per prediction block, or may be signaled per reference picture. Optionally, multiple prediction blocks may share one piece of weight factor information.
The encoding device and the decoding device may determine whether to use the predicted motion vector (or the predicted motion information) based on the prediction block list utilization flag. For example, for each reference picture list, when the prediction block list utilization flag has a first value of one (1), the encoding device and the decoding device may perform inter prediction or motion compensation on the current block using the predicted motion vector of the current block. However, when the prediction block list utilization flag has a second value of zero (0), the encoding device and the decoding device may not perform inter prediction or motion compensation on the current block using the predicted motion vector of the current block. The first value and the second value of the prediction block list utilization flag may, conversely, be set to 0 and 1, respectively. Expressions 1 to 3 below are examples of a method of generating the final prediction block of the current block when the inter prediction indicator of the current block is PRED_BI, PRED_TRI, or PRED_QUAD and the prediction direction of each reference picture list is unidirectional.
[Expression 1]
P_BI = (WF_L0*P_L0 + OFFSET_L0 + WF_L1*P_L1 + OFFSET_L1 + RF) >> 1
[Expression 2]
P_TRI = (WF_L0*P_L0 + OFFSET_L0 + WF_L1*P_L1 + OFFSET_L1 + WF_L2*P_L2 + OFFSET_L2 + RF)/3
[Expression 3]
P_QUAD = (WF_L0*P_L0 + OFFSET_L0 + WF_L1*P_L1 + OFFSET_L1 + WF_L2*P_L2 + OFFSET_L2 + WF_L3*P_L3 + OFFSET_L3 + RF) >> 2
In Expressions 1 to 3, each of P_BI, P_TRI, and P_QUAD denotes the final prediction block of the current block, and LX (X = 0, 1, 2, 3) denotes a reference picture list. WF_LX denotes the weight factor of the prediction block generated using the LX reference picture list. OFFSET_LX denotes the offset value for the prediction block generated using the LX reference picture list. P_LX denotes the prediction block generated using the motion vector (or motion information) of the current block for the LX reference picture list. RF means the rounding factor, and may be set to 0, a positive integer, or a negative integer. The LX reference picture list may include at least one of the following reference pictures: a long-term reference picture, a reference picture not subjected to the deblocking filter, a reference picture not subjected to the sample adaptive offset (SAO), a reference picture not subjected to the adaptive loop filter, a reference picture subjected to only the deblocking filter and the adaptive offset, a reference picture subjected to only the deblocking filter and the adaptive loop filter, a reference picture subjected to the sample adaptive offset and the adaptive loop filter, and a reference picture subjected to all of the deblocking filter, the sample adaptive offset, and the adaptive loop filter. In this case, the LX reference picture list may be at least one of the L2 reference picture list and the L3 reference picture list.
Even when there are multiple prediction directions for a given reference picture list, the final prediction block of the current block may be obtained based on the weighted sum of the prediction blocks. In this case, the weight factors for the multiple prediction blocks derived using one reference picture list may be equal, or may be different from each other.
At least one of the weight factor WF_LX and the offset OFFSET_LX of the multiple prediction blocks may be a coding parameter to be entropy-encoded/decoded. Optionally, for example, the weight factor and the offset may be derived from a previously encoded/decoded neighboring block adjacent to the current block. Here, the neighboring block adjacent to the current block may include at least one of a block used for deriving the spatial motion vector candidate of the current block and a block used for deriving the temporal motion vector candidate of the current block.
Still optionally, for example, the weight factor and the offset may be determined based on the display order (picture order count (POC)) of the current picture and the POC of each reference picture. In this case, as the distance between the current picture and the reference picture increases, the value of the weight factor or the offset may decrease. That is, when the current picture and the reference picture are closer to each other, a larger value may be set as the weight factor or the offset. For example, when the distance between the POC of the current picture and the POC of the L0 reference picture is 2, the value of the weight factor applied to the prediction block generated using the L0 reference picture may be set to 1/3. Meanwhile, when the difference between the POC of the current picture and the POC of the L0 reference picture is 1, the value of the weight factor applied to the prediction block generated using the L0 reference picture may be set to 2/3. As described above, the weight factor or the offset may be inversely proportional to the difference between the display order (POC) of the current picture and the display order (POC) of the reference picture. Optionally, the weight factor or the offset may be directly proportional to the difference between the display order (POC) of the current picture and the display order (POC) of the reference picture.
Optionally, for example, at least one of the weight factor and the offset may be entropy-encoded/decoded based on at least one coding parameter. In addition, the weighted sum of the prediction blocks may be calculated based on at least one coding parameter.
The weighted sum of the multiple prediction blocks may be applied only to a partial region of the prediction blocks. The partial region may be a region adjacent to the boundary of each prediction block. In order to apply the weighted sum only to the partial region as described above, the weighted sum may be calculated per sub-block within each prediction block.
Within a block having the block size indicated by region information, inter prediction or motion compensation may be performed for the sub-blocks smaller than that block by using the same prediction block or the same final prediction block.
Within a block having the block depth indicated by region information, inter prediction or motion compensation may be performed for the sub-blocks having a block depth deeper than the depth of that block by using the same prediction block or the same final prediction block.
In addition, when the weighted sum of the prediction blocks is calculated through motion vector prediction, the weighted sum may be calculated using at least one of the motion vector candidates included in the motion vector candidate list, and the calculation result may be used as the final prediction block of the current block.
For example, prediction blocks may be generated using only spatial motion vector candidates, the weighted sum of the prediction blocks may be calculated, and the calculated weighted sum may be used as the final prediction block of the current block.
For example, prediction blocks may be generated using spatial motion vector candidates and temporal motion vector candidates, the weighted sum of the prediction blocks may be calculated, and the calculated weighted sum may be used as the final prediction block of the current block.
For example, prediction blocks may be generated using only combined motion vector candidates, the weighted sum of the prediction blocks may be calculated, and the calculated weighted sum may be used as the final prediction block of the current block.
For example, prediction blocks may be generated using only the motion vector candidates indicated by a particular index, the weighted sum of the prediction blocks may be calculated, and the calculated weighted sum may be used as the final prediction block of the current block.
For example, prediction blocks may be generated using only the motion vector candidates indicated by indices within a predetermined index range, the weighted sum of the prediction blocks may be calculated, and the calculated weighted sum may be used as the final prediction block of the current block.
When calculating the weighted sum of prediction block using merging patterns, it can be used the merging for merging and including in candidate list candidate
At least one of merge candidate to calculate the weighted sum, and calculated result can be used as to the final prediction of current block
Block.
For example, candidate can be merged to generate prediction block using only space, the weighted sum of prediction block can be calculated, and can will count
The weighted sum of calculating is used as the final prediction block of current block.
For example, space can be used to merge merging candidate of candidate and time to generate prediction block, the weighting of prediction block can be calculated
With, and calculated weighted sum can be used as to the final prediction block of current block.
For example, candidate can be merged to generate prediction block using only combination, the weighted sum of prediction block can produce, and can will count
The weighted sum of calculating is used as the final prediction block of current block.
For example, prediction block can be generated using only the merging candidate indicated by particular index, the weighting of prediction block can produce
With, and calculated weighted sum can be used as to the final prediction block of current block.
For example, prediction block can be generated using only the merging candidate indicated by the index within the scope of predetermined index, can count
The weighted sum of prediction block is calculated, and calculated weighted sum can be used as to the final prediction block of current block.
In the encoder and the decoder, motion compensation may be performed using the motion vector or the motion information of the current block. In this case, at least one prediction block may be used to determine the final prediction block that is the result of the motion compensation. Here, the current block may mean at least one of the current coding block and the current prediction block.
The final prediction block of the current block may be generated by performing overlapped block motion compensation on the boundary region of the current block.
The boundary region of the current block may be a region within the current block that is adjacent to the boundary between the current block and a neighboring block of the current block. The boundary region of the current block may include at least one of an upper boundary region, a left boundary region, a lower boundary region, a right boundary region, an upper-right corner region, a lower-right corner region, an upper-left corner region, and a lower-left corner region. The boundary region of the current block may be a region corresponding to a part of the prediction block of the current block.
Overlapped block motion compensation may mean the process of performing motion compensation by calculating the weighted sum of the prediction block corresponding to the boundary region of the current block and a prediction block generated using the motion information of an encoded/decoded block adjacent to the current block.
The weighted sum may be calculated per sub-block by dividing the current block into multiple sub-blocks. That is, the motion compensation of the current block may be performed per sub-block using the motion information of the encoded/decoded sub-blocks adjacent to the current block. A sub-block may mean a lower-level block of the current block.
In addition, when calculating the weighted sum, a first prediction block generated using the motion information of the current block and a second prediction block generated using the motion information of an adjacent sub-block spatially adjacent to the current block may be used for each sub-block of the current block. In this case, the statement "using motion information" means "deriving motion information". The first prediction block may mean the prediction block generated using the motion information of the encoding/decoding target sub-block in the current block. The second prediction block may be the prediction block generated using the motion information of an adjacent sub-block spatially adjacent to the encoding/decoding target sub-block in the current block.
The final prediction block of the current block may be generated using the weighted sum of the first prediction block and the second prediction block. That is, overlapped block motion compensation finds the final prediction block of the current block using the motion information of the current block together with the motion information of another block.
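The per-sub-block blend of the first prediction block (own motion) and the second prediction block (a neighbor's motion) can be sketched as follows. The 3:1 weighting and the rounding scheme are illustrative assumptions only; the text does not fix the OBMC weights:

```python
def obmc_blend(p_own, p_neigh, w_own=3, w_neigh=1):
    """Blend the sub-block predicted with its own motion (first prediction
    block) with the one predicted using a neighbor's motion (second block)."""
    total = w_own + w_neigh
    # integer weighted average with rounding, row by row
    return [[(w_own * a + w_neigh * b + total // 2) // total
             for a, b in zip(ra, rb)] for ra, rb in zip(p_own, p_neigh)]
```

Samples nearer the block boundary would typically use a larger neighbor weight, which this sketch leaves as a caller choice.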
In addition, when at least one of the advanced motion vector prediction (AMVP) mode, the merge mode, the affine motion compensation mode, the decoder-side motion vector derivation (DMVD) mode, the adaptive motion vector resolution mode, the local illumination compensation mode, and the bi-directional optical flow mode is used for the current block, the current block may be divided into multiple sub-blocks, and overlapped block motion compensation may be performed per sub-block.
When the merge mode is used for the motion compensation, overlapped block motion compensation may be performed on at least one of the advanced temporal motion vector predictor (ATMVP) candidate and the spatial-temporal motion vector predictor (STMVP) candidate.
The details of the overlapped block motion compensation will be described later with reference to Figures 13 to 24.
Next, the process of performing entropy encoding/entropy decoding on the information associated with motion compensation (S505, S601, S704, S801) will be described.
The encoding device may entropy-encode the information associated with motion compensation into the bitstream, and the decoder may decode the information associated with motion compensation included in the bitstream. The information associated with motion compensation that is the target of the entropy encoding or entropy decoding may include at least one of the following items: the inter prediction indicator inter_pred_idc; the reference picture indices ref_idx_l0, ref_idx_l1, ref_idx_l2, and ref_idx_l3; the motion vector candidate indices mvp_l0_idx, mvp_l1_idx, mvp_l2_idx, and mvp_l3_idx; the motion vector difference; the skip mode use/non-use information cu_skip_flag; the merge mode use/non-use information merge_flag; the merge index information merge_index; the weight factors wf_l0, wf_l1, wf_l2, and wf_l3; and the offset values offset_l0, offset_l1, offset_l2, and offset_l3.
The inter prediction indicator may mean, when the current block is encoded/decoded by inter prediction, the prediction direction of the inter prediction, the number of prediction directions, or both the prediction direction and the number of prediction directions of the inter prediction. For example, the inter prediction indicator may indicate uni-directional prediction or multi-directional prediction (such as bi-directional prediction, three-directional prediction, and four-directional prediction). The inter prediction indicator may indicate the number of reference pictures used for generating the prediction block of the current block. Optionally, one reference picture may be used for multi-directional prediction. In this case, M reference pictures are used to perform N-directional prediction (where N > M). The inter prediction indicator may also mean the number of prediction blocks used for the inter prediction or the motion compensation of the current block.
According to the number of prediction directions of the current block, the inter prediction indicator may indicate one direction PRED_LX, two directions PRED_BI, three directions PRED_TRI, four directions PRED_QUAD, or more directions.
The prediction list utilization flag for a particular reference picture list indicates whether a prediction block is generated using that reference picture list.
For example, when the prediction list utilization flag for a particular reference picture list has a first value of one (1), it means that a prediction block is generated using that reference picture list. When the prediction list utilization flag has a second value of zero (0), it means that no prediction block is generated using that reference picture list. Here, the first value and the second value of the prediction list utilization flag may, conversely, be set to 0 and 1, respectively.
That is, when the prediction list utilization flag for a particular reference picture list has the first value, the prediction block of the current block may be generated using the motion information corresponding to that reference picture list.
The reference picture index may indicate the particular reference picture that is present in a reference picture list and is referred to by the current block. For each reference picture list, one or more reference picture indices may be entropy-encoded/decoded. Motion compensation may be performed on the current block using the one or more reference picture indices.
The motion vector candidate index indicates the motion vector candidate for the current block among the motion vector candidates included in the motion vector candidate list prepared for each reference picture list or each reference picture index. At least one motion vector candidate index may be entropy-encoded/decoded for each motion vector candidate list. Motion compensation may be performed on the current block using the at least one motion vector candidate index.
The motion vector difference indicates the difference between the current motion vector and the predicted motion vector. For each motion vector candidate list generated for each reference picture list or each reference picture index of the current block, one or more motion vector differences may be entropy-encoded/decoded. Motion compensation may be performed on the current block using the one or more motion vector differences.
Regarding the skip mode use/non-use information cu_skip_flag: when the skip mode use/non-use information cu_skip_flag has a first value of one (1), the skip mode may be used. On the contrary, when the skip mode use/non-use information cu_skip_flag has a second value of zero (0), the skip mode may not be used. Motion compensation may be performed on the current block using the skip mode according to the skip mode use/non-use information.
Regarding the merge mode use/non-use information merge_flag: when the merge mode use/non-use information merge_flag has a first value of one (1), the merge mode may be used. On the contrary, when the merge mode use/non-use information merge_flag has a second value of zero (0), the merge mode may not be used. Motion compensation may be performed on the current block using the merge mode according to the merge mode use/non-use information.
The merge index information merge_index may mean information indicating a merge candidate within a merge candidate list. Alternatively, the merge index information may mean information about a merge index.
In addition, the merge index information may indicate, among the reconstructed blocks spatially or temporally adjacent to the current block, the reconstructed block used to derive a merge candidate.
The merge index information may also indicate one or more pieces of motion information possessed by a merge candidate. For example, when the merge index information has a first value of zero (0), it indicates the first merge candidate, listed as the first entry in the merge candidate list; when it has a second value of one (1), it indicates the second merge candidate, listed as the second entry; and when it has a third value of two (2), it indicates the third merge candidate, listed as the third entry. Similarly, when the merge index information has any value from the fourth value to the N-th value, it indicates the merge candidate listed at the corresponding position, in order of that value, in the merge candidate list. Here, N may be zero (0) or a positive integer.
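The value-to-entry mapping described above can be sketched minimally; the function name is an illustrative assumption:

```python
def select_merge_candidate(merge_candidate_list, merge_index):
    """merge_index 0 selects the first listed candidate, 1 the second,
    and so on, mirroring the mapping described above."""
    return merge_candidate_list[merge_index]
```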
Motion compensation may be performed on the current block using the merge mode on the basis of the merge index information.
When two or more prediction blocks are generated during motion compensation of the current block, the final prediction block of the current block may be determined on the basis of a weighted sum of the prediction blocks. When the weighted sum is calculated, a weight factor, an offset, or both may be applied to each prediction block. A weighted-sum factor used to calculate the weighted sum (such as a weight factor or an offset) may be entropy-encoded/decoded in a quantity corresponding to at least one of the following: the reference picture list, the reference picture, the motion vector candidate index, the motion vector difference, the motion vector, the skip-mode use/non-use information, the merge-mode use/non-use information, and the merge index information. In addition, the weighted-sum factor of each prediction block may be entropy-encoded/decoded on the basis of the inter-prediction indicator. The weighted-sum factor may include at least one of a weight factor and an offset.
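The per-block application of a weight factor and an offset can be sketched as follows over one-dimensional lists of samples; the function name and data layout are illustrative assumptions:

```python
def final_prediction(pred_blocks, weights, offsets):
    """Final prediction sample = sum over prediction blocks of
    (weight * sample + offset). pred_blocks is a list of equally
    sized 1-D sample lists; weights/offsets hold one value per block."""
    num_samples = len(pred_blocks[0])
    final = []
    for i in range(num_samples):
        acc = 0.0
        for block, w, o in zip(pred_blocks, weights, offsets):
            acc += w * block[i] + o
        final.append(acc)
    return final
```

With two prediction blocks and weights of 0.5 each, this reduces to the ordinary bi-prediction average.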
Information associated with motion compensation may be entropy-encoded/decoded block by block, or may be entropy-encoded/decoded in units of a higher level. For example, the information associated with motion compensation may be entropy-encoded/decoded block by block (for example, by CTU, by CU, or by PU). Alternatively, it may be entropy-encoded/decoded in units of a higher level, such as a video parameter set, a sequence parameter set, a picture parameter set, an adaptation parameter set, or a slice header.
Information associated with motion compensation may be entropy-encoded/decoded on the basis of a motion compensation information difference, where the motion compensation information difference indicates the difference between the information associated with motion compensation and a predicted value of that information.
Instead of entropy-encoding/decoding the information associated with the motion compensation of the current block, information associated with the motion compensation of a previously encoded/decoded block adjacent to the current block may be used as the information associated with the motion compensation of the current block.
At least one piece of information associated with motion compensation may be derived on the basis of at least one coding parameter.
A bitstream may be decoded on the basis of at least one coding parameter to generate at least one piece of information associated with motion compensation. Conversely, at least one piece of information associated with motion compensation may be entropy-encoded into a bitstream on the basis of at least one coding parameter.
The at least one piece of information associated with motion compensation may include at least one of: a motion vector, a motion vector candidate, a motion vector candidate index, a motion vector difference, a motion vector predictor, the skip-mode use/non-use information skip_flag, the merge-mode use/non-use information merge_flag, the merge index information merge_index, motion vector resolution information, overlapped block motion compensation information, local illumination compensation information, affine motion compensation information, decoder-side motion vector derivation information, and bidirectional optical flow information. Here, decoder-side motion vector derivation may mean pattern-matched motion vector derivation.
The motion vector resolution information may be information indicating which specific resolution is used for at least one of the motion vector and the motion vector difference. Here, resolution may mean precision. The specific resolution may be set to at least any one of a 16-pel unit, an 8-pel unit, a 4-pel unit, an integer-pel unit, a 1/2-pel unit, a 1/4-pel unit, a 1/8-pel unit, a 1/16-pel unit, a 1/32-pel unit, and a 1/64-pel unit.
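Rounding a motion vector component to a chosen resolution can be sketched as follows; storing components in 1/64-pel units (the finest resolution listed above) is an illustrative assumption, as is the function name:

```python
def quantize_mv_component(component, step):
    """Round one motion vector component, stored in 1/64-pel units, to the
    nearest multiple of `step` (also in 1/64-pel units, e.g. step=64 for
    integer-pel resolution, step=16 for 1/4-pel resolution)."""
    return round(component / step) * step
```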
The overlapped block motion compensation information may be information indicating whether, during motion compensation of the current block, the motion vectors of neighboring blocks spatially adjacent to the current block are additionally used to calculate a weighted sum of the prediction block of the current block.
The local illumination compensation information may be information indicating whether at least one of a weight factor and an offset is applied when the prediction block of the current block is generated. Here, at least one of the weight factor and the offset may be a value calculated on the basis of a reference block.
The affine motion compensation information may be information indicating whether an affine motion model is used for motion compensation of the current block. Here, the affine motion model may be a model that partitions one block into multiple sub-blocks using multiple parameters and calculates the motion vectors of the sub-blocks from representative motion vectors.
The decoder-side motion vector derivation information may be information indicating whether the motion vector required for motion compensation is derived by the decoder and then used in the decoder. According to the decoder-side motion vector derivation information, information associated with the motion vector may not be entropy-encoded/decoded. When the decoder-side motion vector derivation information indicates that the motion vector is derived by the decoder and then used in the decoder, information associated with the merge mode may be entropy-encoded/decoded. That is, the decoder-side motion vector derivation information may indicate whether the merge mode is used in the decoder.
The bidirectional optical flow information may be information indicating whether the motion vector is modified on a per-pixel or per-sub-block basis and whether the modified motion vector is then used for motion compensation. According to the bidirectional optical flow information, the motion vector may not be entropy-encoded/decoded on a per-pixel or per-sub-block basis. Here, modification of the motion vector means converting the value of a block-based motion vector into the value of a pixel-based or sub-block-based motion vector.
Motion compensation may be performed on the current block on the basis of at least one piece of information associated with motion compensation, and the at least one piece of information associated with motion compensation may be entropy-encoded/decoded.
When the information associated with motion compensation is entropy-encoded/decoded, a binarization method may be used, such as a truncated Rice binarization method, a K-th-order Exp-Golomb binarization method, a limited K-th-order Exp-Golomb binarization method, a fixed-length binarization method, a unary binarization method, or a truncated unary binarization method.
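Of the binarization methods listed above, truncated unary is the simplest to illustrate; this is a minimal sketch, with the function name as an assumption:

```python
def truncated_unary(value, c_max):
    """Truncated unary binarization: `value` is coded as that many '1' bits
    followed by a terminating '0', except that the terminator is omitted
    when value equals the largest codable value c_max."""
    assert 0 <= value <= c_max
    return "1" * value + ("0" if value < c_max else "")
```

Omitting the terminator for the maximum value saves one bit whenever the decoder already knows the upper bound.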
When the information associated with motion compensation is entropy-encoded/decoded, a context model may be determined on the basis of at least one of the following: information associated with the motion information of a neighboring block adjacent to the current block, or region information of the neighboring block; previously encoded/decoded information associated with motion compensation, or previously encoded/decoded region information; information about the depth of the current block; and information about the size of the current block.
Alternatively, when the information associated with motion compensation is entropy-encoded/decoded, the entropy encoding/decoding may be performed by using at least one of the following as a predicted value of the information associated with the motion compensation of the current block: information associated with the motion compensation of a neighboring block, previously encoded/decoded information associated with motion compensation, information about the depth of the current block, and information about the size of the current block.
Hereinafter, overlapped block motion compensation will be described in detail with reference to Figure 13 to Figure 24.
Figure 13 is a diagram showing an example of performing overlapped block motion compensation on a per-sub-block basis.
Referring to Figure 13, the shaded blocks are regions on which overlapped block motion compensation is to be performed. The shaded blocks may include sub-blocks located at the boundary of the current block or sub-blocks within the current block. The current block may be the region delimited by the heavy solid line.
The arrows indicate that the motion information of an adjacent sub-block is used for the motion compensation of the current sub-block. Here, the region at the tail of an arrow may mean (1) an adjacent sub-block adjacent to the current block or (2) an adjacent sub-block within the current block that is adjacent to the current sub-block. The region at the head of an arrow may mean the current sub-block within the current block.
For each shaded block, a weighted sum of a first prediction block and a second prediction block may be calculated. The motion information of the current sub-block in the current block is used as the motion information for generating the first prediction block. At least one of the motion information of an adjacent sub-block adjacent to the current block and the motion information of an adjacent sub-block within the current block that is adjacent to the current sub-block, or both, may be used as the motion information for generating the second prediction block.
In addition, in order to improve coding efficiency, the motion information for generating the second prediction block may include the motion information of at least one of the above block, left block, below block, right block, above-right block, below-right block, above-left block, and below-left block of the current sub-block in the current block. The adjacent sub-blocks available for generating the second prediction block may be determined according to the position of the current sub-block.
For example, when the current sub-block is located at the upper boundary of the current block, at least one of the adjacent sub-blocks located at the above, above-right, and above-left sides of the current sub-block may be used. When the current sub-block is located at the left boundary of the current block, at least one of the adjacent sub-blocks located at the left, above-left, and below-left sides of the current sub-block may be used.
Here, the blocks located at the above, left, below, right, above-right, below-right, above-left, and below-left sides of the current sub-block may be referred to as the above adjacent sub-block, left adjacent sub-block, below adjacent sub-block, right adjacent sub-block, above-right adjacent sub-block, below-right adjacent sub-block, above-left adjacent sub-block, and below-left adjacent sub-block, respectively.
Meanwhile, in order to reduce computational complexity, the motion information used for generating the second prediction block may vary according to the magnitude of the motion vector of the adjacent sub-block adjacent to the current block, or of the adjacent sub-block within the current block that is adjacent to the current sub-block.
For example, when the adjacent sub-block is a bi-directionally predicted sub-block, the magnitudes of the L0-direction and L1-direction motion vectors are compared, and only the motion information of the direction with the larger magnitude may be used for generating the second prediction block.
Alternatively, for example, the sum of the absolute values of the x-component and the y-component of the L0-direction motion vector and the corresponding sum for the L1-direction motion vector are calculated. Then, only motion vectors whose sum is equal to or greater than a predetermined value may be used for generating the second prediction block. Here, the predetermined value may be zero (0) or a positive integer. The predetermined value may be determined on the basis of information signaled from the encoder to the decoder. Alternatively, the predetermined value may not be signaled but may be a value set identically in the encoder and the decoder.
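The magnitude test just described can be sketched as a simple predicate; the function name is an illustrative assumption:

```python
def passes_magnitude_test(mv, threshold):
    """A neighbouring motion vector contributes to the second prediction
    block only when |x| + |y| is equal to or greater than the predetermined
    value (the threshold), per the rule above."""
    return abs(mv[0]) + abs(mv[1]) >= threshold
```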
In addition, in order to reduce computational complexity, the motion information used for generating the second prediction block may vary according to the magnitude and direction of the motion vector of the current sub-block.
For example, the absolute values of the x-component and the y-component of the motion vector of the current sub-block may be compared. When the absolute value of the x-component is larger, the motion information of at least one of the left sub-block and the right sub-block of the current sub-block may be used for generating the second prediction block.
Alternatively, for example, the absolute values of the x-component and the y-component of the motion vector of the current sub-block may be compared. When the absolute value of the y-component is larger, the motion information of at least one of the above sub-block and the below sub-block of the current sub-block may be used for generating the second prediction block.
Alternatively, for example, when the absolute value of the x-component of the motion vector of the current sub-block is equal to or greater than a predetermined value, the motion information of at least one of the left sub-block and the right sub-block of the current sub-block may be used for generating the second prediction block. Here, the predetermined value may be zero (0) or a positive integer. The predetermined value may be determined on the basis of information signaled from the encoder to the decoder, or may be set identically in the encoder and the decoder.
Still alternatively, for example, when the absolute value of the y-component of the motion vector of the current sub-block is equal to or greater than a predetermined value, the motion information of at least one of the above sub-block and the below sub-block of the current sub-block may be used for generating the second prediction block. Here, the predetermined value may be zero (0) or a positive integer. The predetermined value may be determined on the basis of information signaled from the encoder to the decoder, or may be set identically in the encoder and the decoder.
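The direction-based selection above can be sketched as follows; the function name and the tie-breaking choice (ties go to the vertical pair) are illustrative assumptions not fixed by the text:

```python
def obmc_neighbor_sides(mv):
    """Select which neighbouring sub-blocks supply motion information for
    the second prediction block from the dominant component of the current
    sub-block's motion vector."""
    if abs(mv[0]) > abs(mv[1]):
        return ("left", "right")   # horizontal motion dominates
    return ("above", "below")      # vertical motion dominates (or tie)
```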
Here, a sub-block may have a size of N × M, where N and M are positive integers. N and M may be equal or unequal. For example, the size of a sub-block may be 4 × 4 or 8 × 8. Information on the size of the sub-block may be entropy-encoded/decoded at the sequence level.
The size of the sub-block may be set on the basis of the size of the current block. For example, when the size of the current block is K samples or fewer, the size of the sub-block may be 4 × 4. Meanwhile, when the size of the current block is greater than K samples, the size of the sub-block may be 8 × 8. Here, K is a positive integer, for example 256.
Here, the information on the size of the sub-block may be entropy-encoded/decoded in units of at least any one of a sequence, a picture, a slice, a tile, a CTU, a CU, and a PU. In addition, the size of the sub-block may be a predetermined value preset in the encoder and the decoder.
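The size rule in the example above can be sketched directly; the function name is an illustrative assumption:

```python
def subblock_size(block_width, block_height, k=256):
    """Sub-block size selection from the example above: 4x4 when the current
    block contains K samples or fewer, 8x8 otherwise (K = 256 here)."""
    return (4, 4) if block_width * block_height <= k else (8, 8)
```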
A sub-block may have a square shape or a rectangular shape. For example, when the current block has a square shape or a rectangular shape, the sub-block may have a square shape. Alternatively, when the current block has a rectangular shape, the sub-block may have a rectangular shape.
Here, information on the shape of the sub-block may be entropy-encoded/decoded in units of at least one of a sequence, a picture, a slice, a tile, a CTU, a CU, and a PU. In addition, the shape of the sub-block may be a predetermined shape preset in the encoder and the decoder.
Figure 14 is a diagram showing an example of performing overlapped block motion compensation using the motion information of a sub-block of a co-located block. In order to improve coding efficiency, the motion information of a co-located block, which is spatially located at the same position as the current block within a co-located picture or a reference picture, may be used for generating the second prediction block.
Referring to Figure 14, the motion information of a sub-block in the co-located block that is temporally adjacent to the current block may be used to perform overlapped block motion compensation on the current sub-block. The region at the tail of an arrow may be a sub-block in the co-located block. The region at the head of an arrow may be the current sub-block in the current block.
In addition, the motion information of at least one of a sub-block in the co-located block within the co-located picture, an adjacent sub-block spatially adjacent to the current block, and an adjacent sub-block within the current block spatially adjacent to the current sub-block may be used for generating the second prediction block.
Figure 15 is a diagram showing an example of performing overlapped block motion compensation using the motion information of blocks adjacent to the boundary of a reference block. In order to improve coding efficiency, a reference block in a reference picture may be identified by using at least one of the motion vector and the reference picture index of the current block, and the motion information of neighboring blocks adjacent to the boundary of the identified reference block may be used for generating the second prediction block. Here, the neighboring blocks may include encoded/decoded blocks adjacent to the sub-blocks located at the right boundary or the left boundary of the reference block.
Referring to Figure 15, the motion information of the encoded/decoded blocks adjacent to the lower boundary or the right boundary of the reference block may be used to perform overlapped block motion compensation on the current sub-block.
In addition, at least one of the motion information of the encoded/decoded blocks adjacent to the lower boundary or the right boundary of the reference block, the motion information of the adjacent sub-blocks spatially adjacent to the current block, and the motion information of the adjacent sub-blocks within the current block spatially adjacent to the current sub-block may be used for generating the second prediction block.
In order to improve coding efficiency, the motion information of at least one of the multiple merge candidates included in a merge candidate list may be used for generating the second prediction block. Here, the merge candidate list may be the list used in the merge mode among the multiple inter prediction modes.
For example, a spatial merge candidate in the merge candidate list may be used as the motion information for generating the second prediction block.
Alternatively, for example, a temporal merge candidate in the merge candidate list may be used as the motion information for generating the second prediction block.
Still alternatively, for example, a combined merge candidate in the merge candidate list may be used as the motion information for generating the second prediction block.
Alternatively, in order to improve coding efficiency, at least one of the multiple motion vector candidates included in a motion vector candidate list may be used as the motion vector for generating the second prediction block. Here, the motion vector candidate list may be the list used in the AMVP mode among the multiple inter prediction modes.
For example, a spatial motion vector candidate in the motion vector candidate list may be used as the motion information for generating the second prediction block.
Alternatively, for example, a temporal motion vector candidate in the motion vector candidate list may be used as the motion information for generating the second prediction block.
When at least one of a merge candidate and a motion vector candidate is used as the motion information required for generating the second prediction block, the region to which overlapped block motion compensation is applied may be set differently. The region to which overlapped block motion compensation is applied may be the region of the block adjacent to the boundary (that is, the sub-blocks located at the boundary of the block) or the region of the block not adjacent to the boundary (that is, the sub-blocks not located at the boundary of the block).
When overlapped block motion compensation is applied to the region of the block not adjacent to the boundary, at least one of a merge candidate and a motion vector candidate may be used as the motion information required for generating the second prediction block.
For example, overlapped block motion compensation may be performed on the region of the block not adjacent to the boundary by using a spatial merge candidate or a spatial motion vector candidate as the motion information.
Alternatively, for example, overlapped block motion compensation may be performed on the region of the block not adjacent to the boundary by using a temporal merge candidate or a temporal motion vector candidate as the motion information.
Still alternatively, for example, overlapped block motion compensation may be performed on the region of the block adjacent to the lower boundary or the right boundary by using a spatial merge candidate or a spatial motion vector candidate as the motion information.
Still alternatively, for example, overlapped block motion compensation may be performed on the region of the block adjacent to the lower boundary or the right boundary by using a temporal merge candidate or a temporal motion vector candidate as the motion information.
In addition, in order to improve coding efficiency, motion information derived from a specific block in the merge candidate list or the motion vector candidate list may be used for overlapped block motion compensation of a specific region.
For example, when the motion information of the above-right neighboring block of the current block is included in the merge candidate list or the motion vector candidate list, that motion information may be used for overlapped block motion compensation of the right boundary region of the current block.
Alternatively, for example, when the motion information of the below-left neighboring block of the current block is included in the merge candidate list or the motion vector candidate list, that motion information may be used for overlapped block motion compensation of the lower boundary region of the current block.
Figure 16 is a diagram showing an example of performing overlapped block motion compensation on a per-sub-block-group basis. In order to reduce computational complexity, sub-block-based overlapped block motion compensation may be performed in units of a sub-block set including one or more sub-blocks. The unit of a sub-block set may mean the unit of a sub-block group.
Referring to Figure 16, the shaded regions delimited by lines may be referred to as sub-block groups. The arrows mean that the motion information of adjacent sub-blocks may be used for the motion compensation of the current sub-block group. The region at the tail of an arrow may be (1) an adjacent sub-block adjacent to the current block, (2) an adjacent sub-block group adjacent to the current block, or (3) an adjacent sub-block within the current block that is adjacent to the current sub-block group. In addition, the region at the head of an arrow may mean the current sub-block group within the current block.
For each sub-block group, a weighted sum of a first prediction block and a second prediction block may be calculated. The motion information of the current sub-block group in the current block is used as the motion information for generating the first prediction block. Here, the motion information of the current sub-block group in the current block may be any one of the average, median, minimum, maximum, and weighted sum of the motion information of the sub-blocks in the current sub-block group. At least one of the motion information of an adjacent sub-block adjacent to the current block, the motion information of an adjacent sub-block group adjacent to the current block, and the motion information of an adjacent sub-block within the current block that is adjacent to the current sub-block may be used as the motion information for generating the second prediction block. Here, the motion information of the adjacent sub-block group adjacent to the current block may be any one of the average, median, minimum, maximum, and weighted sum of the motion information of the sub-blocks included in the adjacent sub-block group.
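Aggregating the member sub-blocks' motion vectors into a single representative value, as described above, can be sketched component-wise; the function name and tuple representation are illustrative assumptions:

```python
import statistics

def group_motion_vector(member_mvs, mode="average"):
    """Representative motion vector of a sub-block group, computed
    component-wise (average, median, min, or max) over the member
    sub-blocks' motion vectors."""
    xs = [mv[0] for mv in member_mvs]
    ys = [mv[1] for mv in member_mvs]
    reducers = {
        "average": lambda v: sum(v) / len(v),
        "median": statistics.median,
        "min": min,
        "max": max,
    }
    reduce_fn = reducers[mode]
    return (reduce_fn(xs), reduce_fn(ys))
```

A weighted sum would work the same way, with per-sub-block weights replacing the uniform reducers shown here.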
Here, the current block may include one or more sub-block groups. The horizontal size of a sub-block group may be equal to or less than the horizontal size of the current block. In addition, the vertical size of a sub-block group may be equal to or less than the vertical size of the current block. In addition, overlapped block motion compensation may be performed on at least one of the multiple sub-blocks located at the upper boundary and the left boundary of the current block.
Since the blocks adjacent to the lower boundary and the right boundary of the current block have not yet been encoded/decoded, overlapped block motion compensation may not be performed on at least one of the multiple sub-blocks located at the lower boundary and the right boundary of the current block. Alternatively, since the blocks adjacent to the lower boundary and the right boundary of the current block have not yet been encoded/decoded, overlapped block motion compensation may be performed on at least any one of the multiple sub-blocks located at the lower boundary and the right boundary of the current block by using at least one of the motion information of the above block, the left block, the above-left block, the below-left block, and the above-right block of the current sub-block.
In addition, when the current block is to be predicted in the merge mode and has at least one of an improved motion vector prediction candidate and a spatial-temporal motion vector prediction candidate, overlapped block motion compensation may not be performed on at least one of the multiple sub-blocks located at the lower boundary and the right boundary of the current block.
In addition, when the current block is to be predicted in the decoder-side motion vector derivation mode or the affine motion compensation mode, overlapped block motion compensation may not be performed on at least one of the multiple sub-blocks located at the lower boundary and the right boundary of the current block.
In addition, overlapped block motion compensation may be performed on at least one of the color components of the current block. The color components may include at least one of a luma component and a chroma component.
Alternatively, overlapped block motion compensation may be performed according to the inter prediction indicator of the current block. That is, overlapped block motion compensation may be performed when the current block is to be predicted by uni-directional prediction, bi-directional prediction, three-directional prediction, and/or four-directional prediction. Alternatively, overlapped block motion compensation may be performed only when the current block is uni-directionally predicted. Still alternatively, overlapped block motion compensation may be performed only when the current block is bi-directionally predicted.
Figure 17 is a diagram showing an example of multiple pieces of motion information used for overlapped block motion compensation.
The maximum number of pieces of motion information used for generating second prediction blocks may be K. That is, at most K second prediction blocks may be generated and used for overlapped block motion compensation. Here, K may be zero (0) or a positive integer, for example 1, 2, 3, or 4.
For example, when the second prediction block is generated using the motion information of the adjacent sub-blocks adjacent to the current block, at most two pieces of motion information may be derived from at least one of the above block and the left block. When the second prediction block is generated on the basis of the motion information of the adjacent sub-blocks within the current block that are adjacent to the current sub-block, at most four pieces of motion information may be derived from at least one of the above block, left block, right block, above-left block, above-right block, below-left block, and below-right block of the current sub-block. Here, the statement "deriving motion information" may mean the processing of generating the second prediction block using the derived motion information and then performing overlapped block motion compensation using the generated second prediction block.
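The weighted sum of the first prediction block and the up-to-K second prediction blocks can be sketched as follows over one-dimensional sample lists; the function name, data layout, and weight convention are illustrative assumptions:

```python
def obmc_blend(first_pred, second_preds, weights):
    """Weighted sum of the first prediction block (current motion
    information) and up to K second prediction blocks (derived motion
    information). weights[0] applies to the first block, weights[1:] to
    the second blocks; for a true average they should sum to 1."""
    blended = []
    for i in range(len(first_pred)):
        acc = weights[0] * first_pred[i]
        for k, second in enumerate(second_preds):
            acc += weights[k + 1] * second[i]
        blended.append(acc)
    return blended
```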
Referring to Fig.1 7, in order to improve code efficiency, when for the multiple sub-blocks for being located at coboundary and left margin in current block
At least one of execute motion compensation when, most three motion informations can be derived for generate the second prediction block.Namely
It says, the motion information for generating the second prediction block can be derived based on 3 connections.
It, can be from adjacent with current block for example, when executing motion compensation for the sub-block for being located at coboundary in current block
At least one of top contiguous block, upper left side contiguous block and upper right side contiguous block among adjacent sub-blocks derive motion information.
It, can be from adjacent with current block for example, when executing motion compensation for the sub-block for being located at left margin in current block
At least one of left side contiguous block, upper left side contiguous block and lower left contiguous block among adjacent sub-blocks derive motion information.
In addition, when motion compensation is performed for the sub-block located at the above-left corner of the current block, motion information may be derived from at least one of the above neighboring block, the left neighboring block, and the above-left neighboring block among the neighboring sub-blocks adjacent to the current block.
In addition, when motion compensation is performed for the sub-block located at the above-right corner of the current block, motion information may be derived from at least one of the above neighboring block, the above-left neighboring block, and the above-right neighboring block among the neighboring sub-blocks adjacent to the current block.
Meanwhile, when motion compensation is performed for the sub-block located at the below-left corner of the current block, motion information may be derived from at least one of the left neighboring block, the above-left neighboring block, and the below-left neighboring block among the neighboring sub-blocks adjacent to the current block.
Optionally, in order to improve coding efficiency, when motion compensation is performed for at least one of the multiple sub-blocks that are located at neither the upper boundary nor the left boundary of the current block, up to eight pieces of motion information may be derived for generating the second prediction block. That is, the motion information for generating the second prediction block may be derived on the basis of 8-connectivity.
For example, for such a sub-block in the current block, motion information may be derived from at least one of the above, left, below, right, above-left, below-left, below-right, and above-right neighboring sub-blocks that are included in the current block and adjacent to the current sub-block.
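The two connectivity rules above can be sketched as a small selection routine. This is an illustrative sketch, not part of the disclosed syntax; the function name, the boolean flags, and the direction labels are hypothetical, and the above-right and below-left corner cases described earlier are folded into the two boundary flags for brevity.

```python
def candidate_neighbors(at_top, at_left):
    """Neighbor directions from which motion information may be derived
    for a sub-block, by its position in the current block: boundary
    sub-blocks use 3-connectivity, interior sub-blocks 8-connectivity."""
    if at_top and at_left:  # above-left corner sub-block
        return ["above", "left", "above-left"]
    if at_top:              # upper-boundary sub-block
        return ["above", "above-left", "above-right"]
    if at_left:             # left-boundary sub-block
        return ["left", "above-left", "below-left"]
    # interior sub-block: all eight neighbors
    return ["above", "left", "below", "right",
            "above-left", "below-left", "below-right", "above-right"]
```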
In addition, the motion information for generating the second prediction block may be derived from the co-located block within the co-located picture. In addition, the motion information for generating the second prediction block may be derived from the encoded/decoded blocks adjacent to the lower boundary and the right boundary of the reference block within the reference picture.
In addition, in order to improve coding efficiency, the number of pieces of motion information for generating the second prediction block may be determined according to the magnitude and direction of the motion vector.
For example, when the sum of the absolute values of the x-component and the y-component of the motion vector is equal to or greater than J, up to L pieces of motion information may be used. Conversely, when the sum of the absolute values of the x-component and the y-component of the motion vector is less than J, up to P pieces of motion information may be used. In this case, J, L, and P are zero or positive integers. L and P are preferably different values; however, L and P may be equal to each other.
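The magnitude-based rule above can be sketched as follows, under the assumption that a motion vector is a simple (x, y) tuple; the function name and parameter names are hypothetical.

```python
def max_motion_info_count(mv, J, L, P):
    """Up to L pieces of motion information when |mvx| + |mvy| >= J,
    otherwise up to P pieces (J, L, P are zero or positive integers)."""
    return L if abs(mv[0]) + abs(mv[1]) >= J else P
```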
In addition, when the current block is to be predicted in the merge mode and at least one of an advanced temporal motion vector prediction candidate and a spatial-temporal motion vector prediction candidate is used, up to K pieces of motion information may be used for generating the second prediction block. Here, K may be zero or a positive integer, for example, 4.
In addition, when the current block is to be predicted in a decoder-side motion vector derivation mode, up to K pieces of motion information may be used for generating the second prediction block. Here, K may be zero or a positive integer, for example, 4.
In addition, when the current block is to be predicted in an affine motion compensation mode, up to K pieces of motion information may be used for generating the second prediction block. Here, K may be zero or a positive integer, for example, 4.
FIG. 18 and FIG. 19 are diagrams showing the order in which the motion information for generating the second prediction block is derived. The motion information for generating the second prediction block may be derived according to a predetermined order preset in the encoder and the decoder.
Referring to FIG. 18, motion information may be derived from the neighboring blocks of the current block in the order of the above block, the left block, the below block, and the right block.
Referring to FIG. 19, in order to improve coding efficiency, the order in which the motion information for generating the second prediction block is derived may be determined on the basis of the position of the current sub-block.
For example, when motion information is derived for a current sub-block located at the upper boundary of the current block, motion information may be derived from the neighboring sub-blocks adjacent to the current block in the order of (1) the above neighboring block, (2) the above-left neighboring block, and (3) the above-right neighboring block.
In addition, when motion information is derived for a current sub-block located at the left boundary of the current block, motion information may be derived from the neighboring sub-blocks adjacent to the current block in the order of (1) the left neighboring block, (2) the above-left neighboring block, and (3) the below-left neighboring block.
In addition, when motion information is derived for the current sub-block located at the above-left corner of the current block, motion information may be derived from the neighboring sub-blocks adjacent to the current block in the order of (1) the above neighboring block, (2) the left neighboring block, and (3) the above-left neighboring block.
In addition, when motion information is derived for the current sub-block located at the above-right corner of the current block, motion information may be derived from the neighboring sub-blocks adjacent to the current block in the order of (1) the above neighboring block, (2) the above-left neighboring block, and (3) the above-right neighboring block.
In addition, when motion information is derived for the current sub-block located at the below-left corner of the current block, motion information may be derived from the neighboring sub-blocks adjacent to the current block in the order of (1) the left neighboring block, (2) the above-left neighboring block, and (3) the below-left neighboring block.
As in the example of FIG. 19, the motion information of the current sub-block in the current block may be derived in the order of (1) the above, (2) the left, (3) the below, (4) the right, (5) the above-left, (6) the below-left, (7) the below-right, and (8) the above-right neighboring sub-blocks adjacent to the current sub-block. Optionally, the motion information may be derived in an order different from the order shown in FIG. 19.
On the other hand, the motion information of the co-located block within the co-located picture may be derived after the motion information of the neighboring sub-blocks spatially adjacent to the current sub-block has been derived. Optionally, the motion information of the co-located block within the co-located picture may be derived before the motion information of the neighboring sub-blocks spatially adjacent to the current sub-block is derived.
In addition, the motion information of the encoded/decoded blocks located at the lower boundary and the right boundary of the reference block within the reference picture may be derived after the motion information of the neighboring sub-blocks spatially adjacent to the current sub-block has been derived. Optionally, it may be derived before the motion information of the neighboring sub-blocks spatially adjacent to the current sub-block is derived.
Only when a predetermined condition is satisfied may the motion information of a neighboring sub-block adjacent to the current block, or of a neighboring sub-block adjacent to the current sub-block, be derived as the motion information for generating the second prediction block.
For example, when at least one of the neighboring sub-blocks adjacent to the current block, or of the neighboring sub-blocks adjacent to the current sub-block within the current block, exists, the motion information of the existing neighboring sub-block may be derived as the motion information for generating the second prediction block.
Still optionally, for example, when at least one of the neighboring sub-blocks adjacent to the current block, or of the neighboring sub-blocks adjacent to the current sub-block within the current block, has been predicted in an inter prediction mode, the motion information of the inter-predicted neighboring sub-block may be derived as the motion information for generating the second prediction block. Meanwhile, when a neighboring sub-block adjacent to the current block, or a neighboring sub-block adjacent to the current sub-block within the current block, has been predicted in an intra prediction mode, the motion information of that neighboring sub-block cannot be derived as the motion information for generating the second prediction block, because a sub-block predicted in an intra prediction mode has no motion information.
In addition, when the inter prediction indicator of at least one of the neighboring sub-blocks adjacent to the current block, or of the neighboring sub-blocks adjacent to the current sub-block within the current block, indicates none of L0 prediction, L1 prediction, L2 prediction, L3 prediction, uni-directional prediction, bi-directional prediction, three-directional prediction, and four-directional prediction, the motion information for generating the second prediction block may not be derived.
In addition, when the inter prediction indicator for generating the second prediction block is different from the inter prediction indicator for generating the first prediction block, the motion information for generating the second prediction block may be derived.
In addition, when the motion vector for generating the second prediction block is different from the motion vector for generating the first prediction block, the motion information needed to generate the second prediction block may be derived.
In addition, when the reference picture index for generating the second prediction block is different from the reference picture index for generating the first prediction block, the motion information needed to generate the second prediction block may be derived.
In addition, when at least one of the motion vector and the reference picture index for generating the second prediction block is different from at least one of the motion vector and the reference picture index for generating the first prediction block, the motion information needed to generate the second prediction block may be derived.
In addition, in order to reduce computational complexity, in the case where the inter prediction indicator for generating the first prediction block indicates uni-directional prediction, when at least one of the motion vectors and the reference picture indices of the L0 prediction direction and the L1 prediction direction for generating the second prediction block is different from at least one of the motion vector and the reference picture index for generating the first prediction block, the motion information needed to generate the second prediction block may be derived.
In addition, in order to reduce computational complexity, on the basis of the inter prediction indicator for generating the first prediction block, in the case where the inter prediction indicator indicates bi-directional prediction, when at least one of the motion vector and reference picture index sets of the L0 prediction direction and the L1 prediction direction for generating the second prediction block is different from at least one of the motion vector and reference picture index sets of the L0 prediction direction and the L1 prediction direction for generating the first prediction block, the motion information needed to generate the second prediction block may be derived.
In addition, in order to reduce computational complexity, when at least one piece of motion information for generating the second prediction block is different from at least one piece of motion information for generating the first prediction block, the motion information needed to generate the second prediction block may be derived.
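The difference conditions above amount to a simple predicate: motion information is derived for the second prediction block only when it would produce a prediction different from the first. A minimal sketch, with hypothetical names and plain tuples standing in for motion vectors:

```python
def should_derive_second(mv_first, ref_idx_first, mv_second, ref_idx_second):
    """Derive motion information for the second prediction block only when
    its motion vector or reference picture index differs from the one
    used for the first prediction block."""
    return mv_second != mv_first or ref_idx_second != ref_idx_first
```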
FIG. 20 is a diagram showing an example of determining whether the motion information of a specific neighboring sub-block can be used as the motion information for generating the second prediction block by comparing the POC of the reference picture of the current sub-block with the POC of the reference picture of the specific neighboring sub-block.
Referring to FIG. 20, in order to reduce computational complexity, when the POC of the reference picture of the current sub-block is equal to the POC of the reference picture of a neighboring sub-block, the motion information of that neighboring sub-block may be used for generating the second prediction block of the current sub-block.
In addition, in order to reduce computational complexity, as in the example of FIG. 20, when the POC of the reference picture for generating the second prediction block is different from the POC of the reference picture for generating the first prediction block, the motion information needed to generate the second prediction block may be derived.
Specifically, when the POC of the reference picture for generating the second prediction block is different from the POC of the reference picture for generating the first prediction block, the motion vector for generating the second prediction block may be derived by scaling the motion vector for generating the first prediction block on the basis of the POCs of the reference pictures.
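The POC-based scaling step can be sketched as linear scaling by temporal distance. This is a simplified model; the names are hypothetical, and a real codec would typically use fixed-point arithmetic with clipping rather than floating-point rounding.

```python
def scale_mv(mv, poc_cur, poc_ref_first, poc_ref_second):
    """Scale the first prediction block's motion vector to the second
    prediction block's reference picture by the ratio of temporal
    distances (current POC minus reference POC)."""
    td = poc_cur - poc_ref_first   # distance to the first block's reference
    tb = poc_cur - poc_ref_second  # distance to the second block's reference
    if td == 0:
        return mv
    return (round(mv[0] * tb / td), round(mv[1] * tb / td))
```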
FIG. 21 is a diagram showing an example of using weight factors when the weighted sum of the first prediction block and the second prediction block is calculated.
When the weighted sum of the first prediction block and the second prediction block is calculated, a different weight factor may be applied to a sample according to the position of the sample within the block. In addition, the weighted sum may be calculated for the samples located at the same position in the first prediction block and the second prediction block. In this case, when the weighted sum is computed to produce the final prediction block, at least one of a weight factor and an offset may be used in the calculation.
Here, the weight factor may be a negative value less than zero or a positive value greater than zero. The offset may be zero, a negative value less than zero, or a positive value greater than zero.
When the weighted sum of the first prediction block and the second prediction block is calculated, the same weight factor may be applied to all the samples within each prediction block.
Referring to FIG. 21, for example, the weight factors {3/4, 7/8, 15/16, 31/32} may be applied to the respective rows or columns of the first prediction block, and the weight factors {1/4, 1/8, 1/16, 1/32} may be applied to the respective rows or columns of the second prediction block. In this case, the same weight factor may be applied to the samples within the same row or the same column.
The value of the weight factor applied to the second prediction block increases as the distance from the boundary of the current sub-block decreases. In addition, a weight factor may be applied to all the samples within the sub-block.
In FIG. 21, (a), (b), (c), and (d) respectively show the cases where the second prediction block is generated by using the motion information of the above neighboring block, the motion information of the below neighboring block, the motion information of the left neighboring block, and the motion information of the right neighboring block. Here, the above, below, left, and right second prediction blocks may mean the second prediction blocks generated on the basis of the motion information of the above neighboring block, the below neighboring block, the left neighboring block, and the right neighboring block, respectively.
FIG. 22 is a diagram showing an embodiment in which, when the weighted sum of the first prediction block and the second prediction block is calculated, different weight factors are applied to the samples according to their positions within the block. In order to improve coding efficiency, when the weighted sum of the first prediction block and the second prediction block is calculated, the weight factor may vary according to the position of the sample within the block. That is, the weighted sum may be calculated by using weight factors that differ according to the positions of the samples spatially adjacent to the current sub-block. In addition, the weighted sum may be calculated for the samples located at the same position in the first prediction block and the second prediction block.
Referring to FIG. 22, in the first prediction block, the weight factors {1/2, 3/4, 7/8, 15/16, 31/32, 63/64, 127/128, 255/256, 511/512, 1023/1024} may be applied to the respective samples according to their positions, and in the second prediction block, the weight factors {1/2, 1/4, 1/8, 1/16, 1/32, 1/64, 1/128, 1/256, 1/512, 1/1024} may be applied to the respective samples according to their positions. Here, the weight factor used in at least one of the above second prediction block, the left second prediction block, the below second prediction block, and the right second prediction block may be greater than the weight factor used in at least one of the above-left second prediction block, the below-left second prediction block, the below-right second prediction block, and the above-right second prediction block.
In addition, the weight factor used in at least one of the above second prediction block, the left second prediction block, the below second prediction block, and the right second prediction block may be equal to the weight factor used in at least one of the above-left second prediction block, the below-left second prediction block, the below-right second prediction block, and the above-right second prediction block.
In addition, the weight factors of all the samples in the second prediction block generated by using the motion information of the co-located block within the co-located picture may be equal.
In addition, the weight factor of a sample in the second prediction block generated by using the motion information of the co-located block within the co-located picture may be equal to the weight factor of the corresponding sample in the first prediction block.
In addition, the weight factors of all the samples in the second prediction block generated by using the motion information of the encoded/decoded blocks adjacent to the lower boundary and the right boundary of the reference block within the reference picture may be equal.
In addition, the weight factor of a sample in the second prediction block generated by using the motion information of the encoded/decoded blocks adjacent to the lower boundary and the right boundary of the reference block within the reference picture may be equal to the weight factor of the corresponding sample in the first prediction block.
In order to reduce computational complexity, the weight factor may vary according to the magnitude of the motion vector of a neighboring sub-block adjacent to the current block, or of a neighboring sub-block adjacent to the current sub-block within the current block.
For example, when the sum of the absolute values of the x-component and the y-component of the motion vector of the neighboring sub-block is equal to or greater than a predetermined value, {1/2, 3/4, 7/8, 15/16} may be used as the weight factors of the current sub-block. Conversely, when the sum of the absolute values of the x-component and the y-component of the motion vector of the neighboring sub-block is less than the predetermined value, {7/8, 15/16, 31/32, 63/64} may be used as the weight factors of the current sub-block. In this case, the predetermined value may be zero or a positive integer.
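The magnitude-dependent choice of weight factors can be sketched as below; the threshold and the function name are hypothetical, and the two weight sets are the example values from the text (stronger smoothing for larger neighbor motion).

```python
def weights_for_neighbor_mv(mv, threshold):
    """Return the per-row/column weight factors for the current
    sub-block based on |mvx| + |mvy| of the neighboring sub-block."""
    if abs(mv[0]) + abs(mv[1]) >= threshold:
        return (1/2, 3/4, 7/8, 15/16)
    return (7/8, 15/16, 31/32, 63/64)
```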
In addition, in order to reduce computational complexity, the weight factor may vary according to the magnitude or direction of the motion vector of the current sub-block.
For example, when the absolute value of the x-component of the motion vector of the current sub-block is equal to or greater than a predetermined value, {1/2, 3/4, 7/8, 15/16} may be used as the weight factors for the left and right neighboring sub-blocks. Conversely, when the absolute value of the x-component of the motion vector of the current sub-block is less than the predetermined value, {7/8, 15/16, 31/32, 63/64} may be used as the weight factors for the left and right neighboring sub-blocks. In this case, the predetermined value may be zero or a positive integer.
For example, when the absolute value of the y-component of the motion vector of the current sub-block is equal to or greater than a predetermined value, {1/2, 3/4, 7/8, 15/16} may be used as the weight factors for the above and below neighboring sub-blocks. Conversely, when the absolute value of the y-component of the motion vector of the current sub-block is less than the predetermined value, {7/8, 15/16, 31/32, 63/64} may be used as the weight factors for the above and below neighboring sub-blocks. In this case, the predetermined value may be zero or a positive integer.
For example, when the sum of the absolute values of the x-component and the y-component of the motion vector of the current sub-block is equal to or greater than a predetermined value, {1/2, 3/4, 7/8, 15/16} may be used as the weight factors of the current sub-block. Conversely, when the sum of the absolute values of the x-component and the y-component of the motion vector of the current sub-block is less than the predetermined value, {7/8, 15/16, 31/32, 63/64} may be used as the weight factors of the current sub-block. In this case, the predetermined value may be zero or a positive integer.
The weighted sum need not be calculated for all the samples in a sub-block; instead, the weighted sum may be calculated only for some samples within the K rows/columns adjacent to each block boundary. In this case, K may be zero or a positive integer, for example, 1 or 2.
In addition, when the size of the current block is less than N×M, the weighted sum may be calculated for the samples within the K rows/columns adjacent to each block boundary. In addition, when the current block is partitioned into sub-blocks and motion compensation is performed on a per-sub-block basis, the weighted sum may be calculated for the samples within the K rows/columns adjacent to each block boundary. Here, K may be zero or a positive integer, for example, 1 or 2. In addition, N and M may be positive integers. For example, N and M may each be 4 or greater, or 8 or greater. N and M may be equal or unequal.
Optionally, the weighted sum may be calculated for the samples within the K rows/columns adjacent to each block boundary according to the type of the color component of the current block. In this case, K may be zero or a positive integer, for example, 1 or 2. When the current block is a luma component block, the weighted sum may be calculated for the samples within the two rows/columns adjacent to each block boundary. On the other hand, when the current block is a chroma component block, the weighted sum may be calculated for the samples within the one row/column adjacent to each block boundary.
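The component-dependent choice of K can be sketched trivially; the function name and the string labels are hypothetical stand-ins for whatever component signaling the codec uses.

```python
def blend_width_for_component(component):
    """Two boundary rows/columns are blended for a luma block,
    one for a chroma block (the example values above)."""
    return 2 if component == "luma" else 1
```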
In addition, when the current block is to be predicted in the merge mode and at least one of an advanced temporal motion vector prediction candidate and a spatial-temporal motion vector prediction candidate is used, the weighted sum may be calculated only for the samples within the K rows/columns adjacent to each block boundary.
In addition, when the current block is to be predicted in the decoder-side motion vector derivation mode, the weighted sum may be calculated for the samples within the K rows/columns adjacent to each block boundary. In addition, when the current block is to be predicted in the affine motion compensation mode, the weighted sum may be calculated for the samples within the K rows/columns adjacent to each block boundary. In these cases, K may be zero or a positive integer, for example, 1 or 2.
Meanwhile, in order to reduce computational complexity, the weighted sum may be calculated for the samples within the K rows/columns adjacent to each block boundary according to the size of the sub-block of the current block.
For example, when the sub-block of the current block has a 4×4 size, the weighted sum may be calculated for the samples within one, two, three, or four rows/columns adjacent to each block boundary. Optionally, when the sub-block of the current block has an 8×8 size, the weighted sum may be calculated for the samples within one, two, three, four, five, six, seven, or eight rows/columns adjacent to each block boundary. In this case, K may be zero or a positive integer. The maximum value of K may correspond to the number of rows or columns included in the sub-block.
In addition, in order to reduce computational complexity, the weighted sum may be calculated for the samples within one or two rows/columns adjacent to each block boundary in the sub-block.
In addition, in order to reduce computational complexity, the weighted sum may be calculated for the samples within the K rows/columns adjacent to each block boundary according to the number of pieces of motion information used for generating the second prediction block. Here, K may be zero or a positive integer.
For example, when the number of pieces of motion information is less than a predetermined value, the weighted sum may be calculated for the samples within the two rows/columns adjacent to each block boundary.
In addition, when the number of pieces of motion information is equal to or greater than the predetermined value, the weighted sum may be calculated for the samples within the one row/column adjacent to each block boundary.
In addition, in order to reduce computational complexity, the weighted sum may be calculated for the samples within the K rows/columns adjacent to each block boundary according to the inter prediction indicator of the current block. K may be zero or a positive integer.
For example, when the inter prediction indicator indicates uni-directional prediction, the weighted sum may be calculated for the samples within the two rows/columns adjacent to each block boundary. Meanwhile, when the inter prediction indicator indicates bi-directional prediction, the weighted sum may be calculated for the samples within the one row/column adjacent to each block boundary.
In addition, in order to reduce computational complexity, the weighted sum may be calculated for the samples within the K rows/columns adjacent to each block boundary according to the POC of the reference picture of the current block. Here, K may be zero or a positive integer.
For example, when the difference between the POC of the current picture and the POC of the reference picture is less than a predetermined value, the weighted sum may be calculated for the samples within the two rows/columns adjacent to each block boundary. Conversely, when the difference between the POC of the current picture and the POC of the reference picture is equal to or greater than the predetermined value, the weighted sum may be calculated for the samples within the one row/column adjacent to each block boundary.
In addition, in order to reduce computational complexity, the weighted sum may be calculated for the samples within the K rows/columns adjacent to each block boundary according to the magnitude of the motion vector of a neighboring sub-block adjacent to the current block, or of a neighboring sub-block adjacent to the current sub-block within the current block. Here, K may be zero or a positive integer.
For example, when the sum of the absolute values of the x-component and the y-component of the motion vector of the neighboring sub-block is equal to or greater than a predetermined value, the weighted sum may be calculated for the samples within the two rows/columns adjacent to each block boundary. Conversely, when the sum of the absolute values of the x-component and the y-component of the motion vector of the neighboring sub-block is less than the predetermined value, the weighted sum may be calculated for the samples within the one row/column adjacent to each block boundary. In this case, the predetermined value may be zero or a positive integer.
In addition, in order to reduce computational complexity, the weighted sum may be calculated for the samples within the K rows/columns adjacent to each block boundary according to the magnitude or direction of the motion vector of the current sub-block. Here, K may be zero or a positive integer.
For example, when the absolute value of the x-component of the motion vector of the current sub-block is equal to or greater than a predetermined value, the weighted sum may be calculated for the samples within the two rows/columns adjacent to each of the left boundary and the right boundary. Conversely, when the absolute value of the x-component of the motion vector of the current sub-block is less than the predetermined value, the weighted sum may be calculated for the samples within the one row/column adjacent to each of the left boundary and the right boundary. In this case, the predetermined value may be zero or a positive integer.
For example, when the absolute value of the y-component of the motion vector of the current sub-block is equal to or greater than a predetermined value, the weighted sum may be calculated for the samples within the two rows/columns adjacent to each of the upper boundary and the lower boundary. Conversely, when the absolute value of the y-component of the motion vector of the current sub-block is less than the predetermined value, the weighted sum may be calculated for the samples within the one row/column adjacent to the upper boundary and the lower boundary. In this case, the predetermined value may be zero or a positive integer.
For example, when the sum of the absolute values of the x-component and the y-component of the motion vector is equal to or greater than a predetermined value, the weighted sum may be calculated for the samples within the two rows/columns adjacent to each block boundary. Conversely, when the sum of the absolute values of the x-component and the y-component of the motion vector is less than the predetermined value, the weighted sum may be calculated for the samples within the one row/column adjacent to each block boundary. In this case, the predetermined value may be zero or a positive integer.
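Most of the complexity-reduction rules above share the same shape: compare some quantity to a predetermined value and blend two or one boundary rows/columns accordingly. A sketch of the motion-vector-magnitude variant, with hypothetical names:

```python
def blend_width_for_mv(mv, threshold):
    """Two rows/columns adjacent to each block boundary when
    |mvx| + |mvy| >= threshold, otherwise one."""
    return 2 if abs(mv[0]) + abs(mv[1]) >= threshold else 1
```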
FIG. 23 is a diagram showing an embodiment in which, during overlapped block motion compensation, the weighted sums of the first prediction block and the second prediction blocks are calculated successively and cumulatively according to a predetermined order. The weighted sums of the first prediction block and the second prediction blocks may be accumulated according to a predetermined order preset in the encoder and the decoder.
Referring to FIG. 23, motion information may be derived from the neighboring sub-blocks in the order of the above block, the left block, the below block, and the right block adjacent to the current sub-block; the second prediction blocks may be generated sequentially by using the motion information derived in that order; and the weighted sums of the first prediction block and the second prediction blocks may be calculated. When the weighted sums are calculated according to the predetermined order, they may be accumulated in that order, and the final prediction block of the current block may thereby be derived.
As in the example of FIG. 23, the weighted sum of the first prediction block and the second prediction block generated by using the motion information of the above block may be calculated, so that a first weighted-sum result block may be produced. Then, the weighted sum of the first weighted-sum result block and the second prediction block generated by using the motion information of the left block may be calculated, so that a second weighted-sum result block may be produced. Then, the weighted sum of the second weighted-sum result block and the second prediction block generated by using the motion information of the below block may be calculated, so that a third weighted-sum result block may be produced. Finally, the weighted sum of the third weighted-sum result block and the second prediction block generated by using the motion information of the right block may be calculated, so that the final prediction block may be produced.
On the other hand, the order in which the motion information for generating the second prediction blocks is derived may be different from the order in which the second prediction blocks are used for calculating the weighted sums with the first prediction block.
Figure 24 is a diagram showing an embodiment in which the weighted sum of the first prediction block and the second prediction blocks is calculated during overlapped block motion compensation. To improve coding efficiency, instead of calculating the weighted sum sequentially and cumulatively, the weighted sum of the first prediction block and the second prediction blocks generated using the motion information of at least one of the upper, left, lower, and right blocks may be calculated without regard to the order in which the second prediction blocks were generated.
In this case, the weight factors for the second prediction blocks generated using the motion information of at least one of the upper, left, lower, and right blocks may be equal to each other. Optionally, the weight factor for each second prediction block and the weight factor for the first prediction block may be equal.
Referring to Figure 24, memory spaces corresponding to the total number of the first prediction block and the second prediction blocks may be prepared, and when generating the final prediction block, the weighted sum of the first prediction block and each second prediction block may be calculated while applying an equal weight factor to all second prediction blocks.
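The equal-weight, order-independent variant can be sketched as below. The plain averaging of all blocks with one shared weight is an illustrative assumption consistent with the equal-weight-factor case described above.

```python
import numpy as np

def obmc_equal_weight(first_pred, second_preds):
    """Combine the first prediction block with every available second
    prediction block in one pass, using the same weight factor for the
    first prediction block and each second prediction block. Because
    every block gets the same weight, the order of the second prediction
    blocks no longer matters."""
    blocks = [first_pred] + list(second_preds)
    w = 1.0 / len(blocks)  # one equal weight per block (assumption)
    return sum(w * b.astype(np.float64) for b in blocks)
```

With no second prediction blocks the function simply returns the first prediction block, so the two branches of the decision (OBMC applied or not) can share one code path.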
In addition, the weighted sum of the first prediction block and a second prediction block may be calculated even for a second prediction block generated using the motion information of the co-located sub-block in the co-located picture.
When the size of the current block is K samples or fewer, information for determining whether to perform overlapped block motion compensation on the current block may be entropy-encoded/entropy-decoded. Here, K may be a positive integer, for example 256.
When the size of the current block is greater than K samples, or when the current block is predicted with a specific inter-prediction mode (for example, merge mode or advanced motion vector prediction mode), the information for determining whether to perform overlapped block motion compensation on the current block may not be entropy-encoded/entropy-decoded; instead, overlapped block motion compensation may be performed by default.
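The signaling decision above can be summarized in a short sketch. The threshold K = 256 comes from the text; the mode names are illustrative strings, and treating "flag not signaled" as "OBMC on by default" follows the paragraph above.

```python
def obmc_flag_signaled(block_size_in_samples, inter_mode=None, K=256):
    """Return True when the OBMC on/off flag is entropy-coded for this
    block. The flag is coded only for blocks of K samples or fewer that
    are not in a mode where OBMC is performed by default."""
    if block_size_in_samples > K:
        return False  # larger blocks: OBMC by default, no flag coded
    if inter_mode in ("merge", "amvp"):
        return False  # these modes: OBMC by default, no flag coded
    return True
```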
When performing motion prediction, the encoder may perform prediction after subtracting the second prediction block from the original signal of the boundary region of the current block. In this case, when the second prediction block is subtracted from the original signal, a weighted sum of the second prediction block and the original signal may be calculated.
For a current block on which overlapped block motion compensation is not performed, enhanced multiple transform (EMT), in which discrete cosine transforms (DCT) and discrete sine transforms (DST) are applied as the vertical/horizontal transforms, may not be applied. That is, enhanced multiple transform may be applied only to a current block on which overlapped block motion compensation has been performed.
Figure 25 is a flowchart showing a picture decoding method according to an embodiment of the present invention.
Referring to Figure 25, the motion information of the current block may be used to generate the first prediction block of the current block (step S2510).
Next, motion information usable for generating a second prediction block may be determined from among the motion information of at least one adjacent sub-block of the current sub-block (step S2520).
In this case, the motion information usable for generating the second prediction block may be determined based on at least one of the magnitude and direction of the motion vector of the adjacent sub-block.
In step S2520 of determining the motion information usable for generating the second prediction block, the determination may be based on the picture order count (POC) of the reference picture of the adjacent sub-block and the POC of the reference picture of the current block. Specifically, only when the POC of the reference picture of the adjacent sub-block is equal to the POC of the reference picture of the current block may the motion information of the adjacent sub-block be determined as motion information usable for generating the second prediction block.
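The POC-equality check above amounts to a simple filter over the candidate neighbors. The dictionary representation of a neighbor's motion information (`mv`, `ref_poc` keys) is an assumed encoding for illustration only.

```python
def select_motion_info(neighbors, current_ref_poc):
    """Keep only the neighbor motion information whose reference-picture
    POC equals the current block's reference-picture POC, per the rule
    above. Each neighbor is a dict with 'mv' and 'ref_poc' keys
    (an assumed representation)."""
    return [n for n in neighbors if n["ref_poc"] == current_ref_poc]
```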
The current sub-block may have a square shape or a non-square shape.
The motion information determined in step S2520 may be used to generate at least one second prediction block (step S2530).
Only when the current block has neither a motion vector derivation mode nor an affine motion compensation mode may the motion information of the at least one adjacent sub-block be used to generate the at least one second prediction block.
Next, the final prediction block may be generated based on the weighted sum of the first prediction block of the current block and the at least one second prediction block of the current sub-block (step S2540).
When the current sub-block is included in the boundary region of the current block, the final prediction block may be generated by obtaining the weighted sum of the samples in several rows or columns of the first prediction block adjacent to the boundary and the samples in several rows or columns of the second prediction block adjacent to the boundary.
Here, the several rows or columns of samples of the first prediction block adjacent to the boundary and the several rows or columns of samples of the second prediction block adjacent to the boundary may be determined based on at least one of the block size of the current sub-block, the magnitude and direction of the motion vector of the current sub-block, the inter-prediction indicator of the current block, and the POC of the reference picture of the current block.
In step S2540 of generating the final prediction block, the weighted sum may be calculated while applying different weight factors to the samples in the first prediction block and the second prediction block according to at least one of the magnitude and direction of the motion vector of the current sub-block.
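The boundary-region blending with per-row weight factors can be sketched as below. The specific weights (3/4, 7/8, 15/16, 31/32, fading toward the block interior) and the choice of four rows at the upper boundary are illustrative assumptions, not values fixed by the text.

```python
import numpy as np

def blend_boundary_rows(first_pred, second_pred,
                        weights=(3/4, 7/8, 15/16, 31/32)):
    """Blend the rows of the first prediction block nearest the upper
    boundary with the co-located rows of the second prediction block,
    applying a different weight factor to each row. Rows beyond the
    blended region keep the first prediction block's samples."""
    out = first_pred.astype(np.float64).copy()
    for r, w in enumerate(weights):
        out[r] = w * first_pred[r] + (1.0 - w) * second_pred[r]
    return out
```

The same pattern applies to columns at a left boundary by blending along the other axis.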
Each step of the picture decoding method of Figure 25 may be similarly applied to the corresponding step of the image encoding method of the present invention.
A bitstream generated by performing the image encoding method according to the present invention may be recorded in a recording medium.
The above embodiments may be performed in the same manner in the encoder and the decoder.
The order in which the above embodiments are applied may differ between the encoder and the decoder, or it may be the same in the encoder and the decoder.
The above embodiments may be performed for each of the luma and chroma signals, or may be performed identically for the luma and chroma signals.
The block shape to which the above embodiments of the present invention are applied may be a square shape or a non-square shape.
The above embodiments of the present invention may be applied according to the size of at least one of a coding block, a prediction block, a transform block, a block, a current block, a coding unit, a prediction unit, a transform unit, a unit, and a current unit. Here, the size may be defined as a minimum size or a maximum size, or both, for the above embodiments to be applied, or may be defined as a fixed size to which the above embodiments are applied. In addition, in the above embodiments, a first embodiment may be applied at a first size and a second embodiment may be applied at a second size. In other words, the above embodiments may be applied in combination according to size. In addition, the above embodiments may be applied when the size is equal to or greater than a minimum size and equal to or less than a maximum size. In other words, the above embodiments may be applied when the block size falls within a particular range.
For example, the above embodiments may be applied when the size of the current block is 8×8 or greater. For example, the above embodiments may be applied when the size of the current block is 4×4 or greater. For example, the above embodiments may be applied when the size of the current block is 16×16 or greater. For example, the above embodiments may be applied when the size of the current block is equal to or greater than 16×16 and equal to or less than 64×64.
The above embodiments of the present invention may be applied according to the temporal layer. To identify the temporal layer to which the above embodiments can be applied, a corresponding identifier may be signaled, and the above embodiments may be applied to the specific temporal layer identified by that identifier. Here, the identifier may be defined as the lowest layer or the highest layer, or both, to which the above embodiments can be applied, or may be defined as indicating a specific layer to which the embodiments are applied. In addition, a fixed temporal layer to which the embodiments are applied may be defined.
For example, the above embodiments may be applied when the temporal layer of the current image is the lowest layer. For example, the above embodiments may be applied when the temporal layer identifier of the current image is 1. For example, the above embodiments may be applied when the temporal layer of the current image is the highest layer.
A slice type to which the above embodiments of the present invention are applied may be defined, and the above embodiments may be applied according to the corresponding slice type.
In the above-described embodiments, the methods are described based on flowcharts as a series of steps or units, but the present invention is not limited to the order of the steps; some steps may be performed simultaneously with other steps, or in a different order. Those of ordinary skill in the art will understand that the steps in the flowcharts are not mutually exclusive, and that other steps may be added to a flowchart, or some steps may be deleted from it, without affecting the scope of the present invention.
The embodiments include examples of various aspects. Not all possible combinations of the various aspects can be described, but those skilled in the art will recognize other combinations. Accordingly, the present invention includes all alternatives, modifications, and changes falling within the scope of the claims.
The embodiments of the present invention may be implemented in the form of program instructions that can be executed by various computer components and recorded on a computer-readable recording medium. The computer-readable recording medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded in the computer-readable recording medium may be specially designed and configured for the present invention, or may be well known to those of ordinary skill in the computer software arts. Examples of computer-readable recording media include magnetic recording media (such as hard disks, floppy disks, and magnetic tapes); optical data storage media (such as CD-ROMs or DVD-ROMs); magneto-optical media (such as floptical disks); and hardware devices specially constructed to store and execute program instructions (such as read-only memory (ROM), random-access memory (RAM), and flash memory). Examples of program instructions include not only machine language code produced by a compiler, but also higher-level language code that can be executed by a computer using an interpreter. The hardware devices may be configured to operate as one or more software modules in order to carry out the processing according to the present invention, and vice versa.
Although the present invention has been described in terms of specific items (such as detailed elements), limited embodiments, and drawings, these are provided merely to assist a more general understanding of the invention, and the present invention is not limited to the above embodiments. Those skilled in the art will appreciate that various modifications and changes can be made from the above description.
Therefore, the spirit of the present invention shall not be limited to the above-described embodiments, and the entire scope of the appended claims and their equivalents falls within the spirit and scope of the present invention.
Industrial applicability
The present invention can be used in an apparatus for encoding and/or decoding an image.
Claims (19)
1. A method of decoding an image, the method comprising:
generating a first prediction block of a current block using motion information of the current block;
determining, from among motion information of at least one adjacent sub-block of a current sub-block, motion information usable for generating a second prediction block;
generating at least one second prediction block of the current sub-block using the determined motion information; and
generating a final prediction block based on a weighted sum of the first prediction block of the current block and the at least one second prediction block of the current sub-block.
2. The method according to claim 1, wherein the determining of the motion information usable for generating the second prediction block comprises:
determining the motion information usable for generating the second prediction block based on at least one of a magnitude and a direction of a motion vector of the adjacent sub-block of the current sub-block.
3. The method according to claim 1, wherein the determining of the motion information usable for generating the second prediction block comprises:
determining the motion information usable for generating the second prediction block based on a picture order count (POC) of a reference picture of the adjacent sub-block and a POC of a reference picture of the current block.
4. The method according to claim 3, wherein, in the determining of the motion information usable for generating the second prediction block, the motion information of the adjacent sub-block is determined as the motion information usable for generating the second prediction block only when the POC of the reference picture of the adjacent sub-block is equal to the POC of the reference picture of the current block.
5. The method according to claim 1, wherein the current sub-block has at least one of a square shape and a rectangular shape.
6. The method according to claim 1, wherein the generating of the at least one second prediction block comprises:
generating the at least one second prediction block using the motion information of the at least one adjacent sub-block of the current sub-block only when the current block has neither a motion vector derivation mode nor an affine motion compensation mode.
7. The method according to claim 1, wherein the generating of the final prediction block comprises:
when the current sub-block is included in a boundary region of the current block, generating the final prediction block by obtaining a weighted sum of samples in several rows or columns of the first prediction block adjacent to the boundary and samples in several rows or columns of the second prediction block adjacent to the boundary.
8. The method according to claim 7, wherein the samples in the several rows or columns of the first prediction block adjacent to the boundary and the samples in the several rows or columns of the second prediction block adjacent to the boundary are determined based on at least one of a block size of the current sub-block, a magnitude and a direction of a motion vector of the current sub-block, an inter-prediction indicator of the current block, and a POC of a reference picture of the current block.
9. The method according to claim 1, wherein the generating of the final prediction block comprises:
obtaining the weighted sum of the first prediction block and the second prediction block by applying different weight factors to samples in the first prediction block and the second prediction block according to at least one of a magnitude and a direction of a motion vector of the current sub-block.
10. A method of encoding an image, the method comprising:
generating a first prediction block of a current block using motion information of the current block;
determining, from among motion information of at least one adjacent sub-block of a current sub-block, motion information usable for generating a second prediction block;
generating at least one second prediction block of the current sub-block using the determined motion information; and
generating a final prediction block based on a weighted sum of the first prediction block of the current block and the at least one second prediction block of the current sub-block.
11. The method according to claim 10, wherein the determining of the motion information usable for generating the second prediction block comprises:
determining the motion information usable for generating the second prediction block based on at least one of a magnitude and a direction of a motion vector of the adjacent sub-block.
12. The method according to claim 10, wherein the determining of the motion information usable for generating the second prediction block comprises:
determining the motion information usable for generating the second prediction block based on a POC of a reference picture of the adjacent sub-block and a POC of a reference picture of the current block.
13. The method according to claim 12, wherein the determining of the motion information usable for generating the second prediction block comprises:
determining the motion information of the adjacent sub-block as the motion information usable for generating the second prediction block only when the POC of the reference picture of the adjacent sub-block is equal to the POC of the reference picture of the current block.
14. The method according to claim 10, wherein the current sub-block has at least one of a square shape and a rectangular shape.
15. The method according to claim 10, wherein the generating of the at least one second prediction block comprises:
generating the at least one second prediction block using the motion information of the at least one adjacent sub-block only when the current block has neither a motion vector derivation mode nor an affine motion compensation mode.
16. The method according to claim 10, wherein the generating of the final prediction block comprises:
when the current sub-block is included in a boundary region of the current block, generating the final prediction block based on a weighted sum of samples in several rows or columns of the first prediction block adjacent to the boundary and samples in several rows or columns of the second prediction block adjacent to the boundary.
17. The method according to claim 16, wherein the samples in the several rows or columns of the first prediction block adjacent to the boundary and the samples in the several rows or columns of the second prediction block adjacent to the boundary are determined based on at least one of a block size of the current sub-block, a magnitude and a direction of a motion vector of the current sub-block, an inter-prediction indicator of the current block, and a POC of a reference picture of the current block.
18. The method according to claim 10, wherein the generating of the final prediction block comprises:
obtaining the weighted sum by applying different weight values to samples in the first prediction block and the second prediction block according to at least one of a magnitude and a direction of a motion vector of the current sub-block.
19. A recording medium storing a bitstream generated by an image encoding method, the image encoding method comprising:
generating a first prediction block of a current block using motion information of the current block;
determining, from among motion information of at least one adjacent sub-block of a current sub-block, motion information usable for generating a second prediction block;
generating at least one second prediction block of the current sub-block using the determined motion information; and
generating a final prediction block based on a weighted sum of the first prediction block of the current block and the at least one second prediction block of the current sub-block.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311025877.1A CN116866594A (en) | 2016-11-28 | 2017-11-28 | Method and apparatus for encoding/decoding image and recording medium storing bit stream |
CN202311021525.9A CN116886929A (en) | 2016-11-28 | 2017-11-28 | Method and apparatus for encoding/decoding image and recording medium storing bit stream |
CN202311020975.6A CN116886928A (en) | 2016-11-28 | 2017-11-28 | Method and apparatus for encoding/decoding image and recording medium storing bit stream |
CN202311023493.6A CN116886930A (en) | 2016-11-28 | 2017-11-28 | Method and apparatus for encoding/decoding image and recording medium storing bit stream |
CN202311024704.8A CN116866593A (en) | 2016-11-28 | 2017-11-28 | Method and apparatus for encoding/decoding image and recording medium storing bit stream |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2016-0159507 | 2016-11-28 | ||
KR20160159507 | 2016-11-28 | ||
PCT/KR2017/013672 WO2018097692A2 (en) | 2016-11-28 | 2017-11-28 | Method and apparatus for encoding/decoding image, and recording medium in which bit stream is stored |
Related Child Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311023493.6A Division CN116886930A (en) | 2016-11-28 | 2017-11-28 | Method and apparatus for encoding/decoding image and recording medium storing bit stream |
CN202311020975.6A Division CN116886928A (en) | 2016-11-28 | 2017-11-28 | Method and apparatus for encoding/decoding image and recording medium storing bit stream |
CN202311024704.8A Division CN116866593A (en) | 2016-11-28 | 2017-11-28 | Method and apparatus for encoding/decoding image and recording medium storing bit stream |
CN202311025877.1A Division CN116866594A (en) | 2016-11-28 | 2017-11-28 | Method and apparatus for encoding/decoding image and recording medium storing bit stream |
CN202311021525.9A Division CN116886929A (en) | 2016-11-28 | 2017-11-28 | Method and apparatus for encoding/decoding image and recording medium storing bit stream |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110024394A true CN110024394A (en) | 2019-07-16 |
CN110024394B CN110024394B (en) | 2023-09-01 |
Family
ID=62195247
Family Applications (6)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201780073517.5A Active CN110024394B (en) | 2016-11-28 | 2017-11-28 | Method and apparatus for encoding/decoding image and recording medium storing bit stream |
CN202311024704.8A Pending CN116866593A (en) | 2016-11-28 | 2017-11-28 | Method and apparatus for encoding/decoding image and recording medium storing bit stream |
CN202311025877.1A Pending CN116866594A (en) | 2016-11-28 | 2017-11-28 | Method and apparatus for encoding/decoding image and recording medium storing bit stream |
CN202311021525.9A Pending CN116886929A (en) | 2016-11-28 | 2017-11-28 | Method and apparatus for encoding/decoding image and recording medium storing bit stream |
CN202311020975.6A Pending CN116886928A (en) | 2016-11-28 | 2017-11-28 | Method and apparatus for encoding/decoding image and recording medium storing bit stream |
CN202311023493.6A Pending CN116886930A (en) | 2016-11-28 | 2017-11-28 | Method and apparatus for encoding/decoding image and recording medium storing bit stream |
Family Applications After (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311024704.8A Pending CN116866593A (en) | 2016-11-28 | 2017-11-28 | Method and apparatus for encoding/decoding image and recording medium storing bit stream |
CN202311025877.1A Pending CN116866594A (en) | 2016-11-28 | 2017-11-28 | Method and apparatus for encoding/decoding image and recording medium storing bit stream |
CN202311021525.9A Pending CN116886929A (en) | 2016-11-28 | 2017-11-28 | Method and apparatus for encoding/decoding image and recording medium storing bit stream |
CN202311020975.6A Pending CN116886928A (en) | 2016-11-28 | 2017-11-28 | Method and apparatus for encoding/decoding image and recording medium storing bit stream |
CN202311023493.6A Pending CN116886930A (en) | 2016-11-28 | 2017-11-28 | Method and apparatus for encoding/decoding image and recording medium storing bit stream |
Country Status (3)
Country | Link |
---|---|
KR (3) | KR102328179B1 (en) |
CN (6) | CN110024394B (en) |
WO (1) | WO2018097692A2 (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112468817A (en) * | 2019-09-06 | 2021-03-09 | 杭州海康威视数字技术股份有限公司 | Encoding and decoding method, device and equipment |
CN113099240A (en) * | 2019-12-23 | 2021-07-09 | 杭州海康威视数字技术股份有限公司 | Encoding and decoding method, device and equipment |
CN113242427A (en) * | 2021-04-14 | 2021-08-10 | 中南大学 | Rapid method and device based on adaptive motion vector precision in VVC (variable valve timing) |
CN114982228A (en) * | 2020-10-16 | 2022-08-30 | Oppo广东移动通信有限公司 | Inter-frame prediction method, encoder, decoder, and computer storage medium |
Families Citing this family (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019234598A1 (en) | 2018-06-05 | 2019-12-12 | Beijing Bytedance Network Technology Co., Ltd. | Interaction between ibc and stmvp |
WO2019244117A1 (en) | 2018-06-21 | 2019-12-26 | Beijing Bytedance Network Technology Co., Ltd. | Unified constrains for the merge affine mode and the non-merge affine mode |
TWI729422B (en) | 2018-06-21 | 2021-06-01 | 大陸商北京字節跳動網絡技術有限公司 | Sub-block mv inheritance between color components |
WO2020004990A1 (en) * | 2018-06-27 | 2020-01-02 | 엘지전자 주식회사 | Method for processing image on basis of inter-prediction mode and device therefor |
CN116708815A (en) | 2018-08-09 | 2023-09-05 | Lg电子株式会社 | Encoding device, decoding device, and data transmitting device |
CN110876057B (en) * | 2018-08-29 | 2023-04-18 | 华为技术有限公司 | Inter-frame prediction method and device |
US10834417B2 (en) * | 2018-09-21 | 2020-11-10 | Tencent America LLC | Method and apparatus for video coding |
GB2591906B (en) | 2018-09-24 | 2023-03-08 | Beijing Bytedance Network Tech Co Ltd | Bi-prediction with weights in video coding and decoding |
WO2020089823A1 (en) * | 2018-10-31 | 2020-05-07 | Beijing Bytedance Network Technology Co., Ltd. | Overlapped block motion compensation with adaptive sub-block size |
WO2020094150A1 (en) | 2018-11-10 | 2020-05-14 | Beijing Bytedance Network Technology Co., Ltd. | Rounding in current picture referencing |
CN113615186B (en) * | 2018-12-21 | 2024-05-10 | Vid拓展公司 | Symmetric motion vector difference coding |
JP7491929B2 (en) | 2018-12-21 | 2024-05-28 | サムスン エレクトロニクス カンパニー リミテッド | Video encoding device and video decoding device using triangular prediction mode, and video encoding method and video decoding method using the same |
US11394999B2 (en) * | 2019-03-11 | 2022-07-19 | Alibaba Group Holding Limited | Method, device, and system for determining prediction weight for merge mode |
US11394993B2 (en) * | 2019-03-13 | 2022-07-19 | Tencent America LLC | Method and apparatus for affine inter prediction with small subblocks |
CN114788286A (en) * | 2019-11-26 | 2022-07-22 | 韩国电子通信研究院 | Image encoding/decoding method and apparatus, and recording medium storing bit stream |
CN113542768B (en) * | 2021-05-18 | 2022-08-09 | 浙江大华技术股份有限公司 | Motion search method, motion search device and computer-readable storage medium |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20130002243A (en) * | 2011-06-28 | 2013-01-07 | 주식회사 케이티 | Methods of inter prediction using overlapped block and appratuses using the same |
WO2013051899A2 (en) * | 2011-10-05 | 2013-04-11 | 한국전자통신연구원 | Scalable video encoding and decoding method and apparatus using same |
CN103299642A (en) * | 2011-01-07 | 2013-09-11 | Lg电子株式会社 | Method for encoding and decoding image information and device using same |
CN103444181A (en) * | 2011-04-12 | 2013-12-11 | 松下电器产业株式会社 | Motion-video encoding method, motion-video encoding apparatus, motion-video decoding method, motion-video decoding apparatus, and motion-video encoding/decoding apparatus |
CN103828373A (en) * | 2011-10-05 | 2014-05-28 | 松下电器产业株式会社 | Image encoding method, image encoding device, image decoding method, image decoding device, and image encoding/decoding device |
KR20140096130A (en) * | 2011-11-18 | 2014-08-04 | 퀄컴 인코포레이티드 | Adaptive overlapped block motion compensation |
CN104137549A (en) * | 2012-01-18 | 2014-11-05 | 韩国电子通信研究院 | Method and device for encoding and decoding image |
US20150085933A1 (en) * | 2012-04-30 | 2015-03-26 | Humax Co., Ltd. | Method and apparatus for encoding multi-view images, and method and apparatus for decoding multi-view images |
KR20150079742A (en) * | 2012-12-28 | 2015-07-08 | 니폰 덴신 덴와 가부시끼가이샤 | Video coding device and method, video decoding device and method, and programs therefor |
CN105075260A (en) * | 2013-02-25 | 2015-11-18 | Lg电子株式会社 | Method for encoding video of multi-layer structure supporting scalability and method for decoding same and apparatus therefor |
WO2015195942A1 (en) * | 2014-06-19 | 2015-12-23 | Vid Scale, Inc. | Methods and systems for intra block copy coding with block vector derivation |
CN105580365A (en) * | 2013-09-26 | 2016-05-11 | 高通股份有限公司 | Sub-prediction unit (pu) based temporal motion vector prediction in hevc and sub-pu design in 3d-hevc |
US20160323573A1 (en) * | 2013-12-19 | 2016-11-03 | Sharp Kabushiki Kaisha | Image decoding device, image coding device, and residual prediction device |
US20170019680A1 (en) * | 2014-03-06 | 2017-01-19 | Samsung Electronics Co., Ltd. | Inter-layer video decoding method and apparatus therefor performing sub-block-based prediction, and inter-layer video encoding method and apparatus therefor performing sub-block-based prediction |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101553850B1 (en) * | 2008-10-21 | 2015-09-17 | 에스케이 텔레콤주식회사 | / Video encoding/decoding apparatus and method and apparatus of adaptive overlapped block motion compensation using adaptive weights |
US8837592B2 (en) * | 2010-04-14 | 2014-09-16 | Mediatek Inc. | Method for performing local motion vector derivation during video coding of a coding unit, and associated apparatus |
US9426465B2 (en) * | 2013-08-20 | 2016-08-23 | Qualcomm Incorporated | Sub-PU level advanced residual prediction |
-
2017
- 2017-11-28 CN CN201780073517.5A patent/CN110024394B/en active Active
- 2017-11-28 CN CN202311024704.8A patent/CN116866593A/en active Pending
- 2017-11-28 KR KR1020170160140A patent/KR102328179B1/en active IP Right Grant
- 2017-11-28 WO PCT/KR2017/013672 patent/WO2018097692A2/en active Application Filing
- 2017-11-28 CN CN202311025877.1A patent/CN116866594A/en active Pending
- 2017-11-28 CN CN202311021525.9A patent/CN116886929A/en active Pending
- 2017-11-28 CN CN202311020975.6A patent/CN116886928A/en active Pending
- 2017-11-28 CN CN202311023493.6A patent/CN116886930A/en active Pending
- 2021
- 2021-11-12 KR KR1020210155947A patent/KR20210137982A/en not_active IP Right Cessation
- 2023
- 2023-03-09 KR KR1020230031291A patent/KR20230042673A/en not_active Application Discontinuation
Patent Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103299642A (en) * | 2011-01-07 | 2013-09-11 | LG Electronics Inc. | Method for encoding and decoding image information and device using same |
CN103444181A (en) * | 2011-04-12 | 2013-12-11 | Panasonic Corporation | Motion-video encoding method, motion-video encoding apparatus, motion-video decoding method, motion-video decoding apparatus, and motion-video encoding/decoding apparatus |
KR20130002243A (en) * | 2011-06-28 | 2013-01-07 | KT Corporation | Methods of inter prediction using overlapped blocks and apparatuses using the same |
WO2013051899A2 (en) * | 2011-10-05 | 2013-04-11 | Electronics and Telecommunications Research Institute | Scalable video encoding and decoding method and apparatus using same |
CN103828373A (en) * | 2011-10-05 | 2014-05-28 | Panasonic Corporation | Image encoding method, image encoding device, image decoding method, image decoding device, and image encoding/decoding device |
KR20140096130A (en) * | 2011-11-18 | 2014-08-04 | Qualcomm Incorporated | Adaptive overlapped block motion compensation |
CN104137549A (en) * | 2012-01-18 | 2014-11-05 | Electronics and Telecommunications Research Institute | Method and device for encoding and decoding image |
US20150085933A1 (en) * | 2012-04-30 | 2015-03-26 | Humax Co., Ltd. | Method and apparatus for encoding multi-view images, and method and apparatus for decoding multi-view images |
KR20150079742A (en) * | 2012-12-28 | 2015-07-08 | Nippon Telegraph and Telephone Corporation | Video coding device and method, video decoding device and method, and programs therefor |
CN105075260A (en) * | 2013-02-25 | 2015-11-18 | LG Electronics Inc. | Method for encoding video of multi-layer structure supporting scalability and method for decoding same and apparatus therefor |
CN105580365A (en) * | 2013-09-26 | 2016-05-11 | Qualcomm Inc. | Sub-prediction unit (PU) based temporal motion vector prediction in HEVC and sub-PU design in 3D-HEVC |
US20160323573A1 (en) * | 2013-12-19 | 2016-11-03 | Sharp Kabushiki Kaisha | Image decoding device, image coding device, and residual prediction device |
US20170019680A1 (en) * | 2014-03-06 | 2017-01-19 | Samsung Electronics Co., Ltd. | Inter-layer video decoding method and apparatus therefor performing sub-block-based prediction, and inter-layer video encoding method and apparatus therefor performing sub-block-based prediction |
WO2015195942A1 (en) * | 2014-06-19 | 2015-12-23 | Vid Scale, Inc. | Methods and systems for intra block copy coding with block vector derivation |
Non-Patent Citations (5)
Title |
---|
CHUN-CHI CHEN: "CE2: Report of OBMC with Motion Merging", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11 * |
CHUN-CHI CHEN: "CE2: Report of OBMC with Motion Merging", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 22 July 2011 (2011-07-22) * |
JIANLE CHEN: "Algorithm Description of Joint Exploration Test Model 3", Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 3rd Meeting: Geneva, CH, 26 May – 1 June 2016 * |
PEISONG CHEN: "Overlapped block motion compensation in TMuC", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11 * |
PEISONG CHEN: "Overlapped block motion compensation in TMuC", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 15 October 2010 (2010-10-15) * |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112468817A (en) * | 2019-09-06 | 2021-03-09 | Hangzhou Hikvision Digital Technology Co., Ltd. | Encoding and decoding method, apparatus, and device |
CN113709486A (en) * | 2019-09-06 | 2021-11-26 | Hangzhou Hikvision Digital Technology Co., Ltd. | Encoding and decoding method, apparatus, and device |
CN113709487A (en) * | 2019-09-06 | 2021-11-26 | Hangzhou Hikvision Digital Technology Co., Ltd. | Encoding and decoding method, apparatus, and device |
CN112468817B (en) * | 2019-09-06 | 2022-07-29 | Hangzhou Hikvision Digital Technology Co., Ltd. | Encoding and decoding method, apparatus, and device |
CN113709486B (en) * | 2019-09-06 | 2022-12-23 | Hangzhou Hikvision Digital Technology Co., Ltd. | Encoding and decoding method, apparatus, and device |
CN113709487B (en) * | 2019-09-06 | 2022-12-23 | Hangzhou Hikvision Digital Technology Co., Ltd. | Encoding and decoding method, apparatus, and device |
CN113099240A (en) * | 2019-12-23 | 2021-07-09 | Hangzhou Hikvision Digital Technology Co., Ltd. | Encoding and decoding method, apparatus, and device |
CN113099240B (en) * | 2019-12-23 | 2022-05-31 | Hangzhou Hikvision Digital Technology Co., Ltd. | Encoding and decoding method, apparatus, and device |
CN114982228A (en) * | 2020-10-16 | 2022-08-30 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Inter-frame prediction method, encoder, decoder, and computer storage medium |
CN113242427A (en) * | 2021-04-14 | 2021-08-10 | Central South University | Fast method and device based on adaptive motion vector precision in VVC (Versatile Video Coding) |
CN113242427B (en) * | 2021-04-14 | 2024-03-12 | Central South University | Fast method and device based on adaptive motion vector precision in VVC |
Also Published As
Publication number | Publication date |
---|---|
KR102328179B1 (en) | 2021-11-18 |
KR20180061041A (en) | 2018-06-07 |
CN116886929A (en) | 2023-10-13 |
KR20230042673A (en) | 2023-03-29 |
WO2018097692A3 (en) | 2018-07-26 |
CN110024394B (en) | 2023-09-01 |
WO2018097692A2 (en) | 2018-05-31 |
KR20210137982A (en) | 2021-11-18 |
CN116886928A (en) | 2023-10-13 |
CN116866594A (en) | 2023-10-10 |
CN116866593A (en) | 2023-10-10 |
CN116886930A (en) | 2023-10-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110024394A (en) | Method and apparatus for encoding/decoding an image, and recording medium storing a bitstream | |
CN109196864B (en) | Image encoding/decoding method and recording medium therefor | |
CN109792515A (en) | Image encoding/decoding method and apparatus, and recording medium storing a bitstream | |
CN109644276A (en) | Image encoding/decoding method | |
CN113273213B (en) | Image encoding/decoding method and apparatus, and recording medium storing a bitstream | |
CN109417617A (en) | Intra prediction method and apparatus | |
CN109804627A (en) | Image encoding/decoding method and apparatus | |
CN110463201A (en) | Prediction method and apparatus using a reference block | |
CN110024402A (en) | Image encoding/decoding method and apparatus, and recording medium storing a bitstream | |
CN112740697B (en) | Image encoding/decoding method and apparatus, and recording medium storing a bitstream | |
CN109804626A (en) | Method and apparatus for encoding and decoding an image, and recording medium storing a bitstream | |
CN109479141A (en) | Image encoding/decoding method and recording medium therefor | |
CN109997363A (en) | Image encoding/decoding method and apparatus, and recording medium storing a bitstream | |
CN109479129A (en) | Image encoding/decoding method and apparatus, and recording medium storing a bitstream | |
CN110024399A (en) | Method and apparatus for encoding/decoding an image, and recording medium storing a bitstream | |
CN109417636A (en) | Method and apparatus for transform-based image encoding/decoding | |
CN109417629A (en) | Image encoding/decoding method and recording medium therefor | |
CN117156155A (en) | Image encoding/decoding method, storage medium, and transmission method | |
CN109479138A (en) | Image encoding/decoding method and apparatus | |
CN110476425A (en) | Block-shape-based prediction method and apparatus | |
CN109314785A (en) | Method and apparatus for deriving motion prediction information | |
CN108353166A (en) | Method and apparatus for encoding/decoding an image | |
CN110089113A (en) | Image encoding/decoding method and apparatus, and recording medium storing a bitstream | |
CN109952762A (en) | Video encoding/decoding method and apparatus, and recording medium storing a bitstream | |
CN110024386A (en) | Method and apparatus for encoding/decoding an image, and recording medium storing a bitstream | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||