CN110476425A - Prediction method and device based on block form - Google Patents
Prediction method and device based on block form Download PDF Info
- Publication number
- CN110476425A (application CN201880020454.1A)
- Authority
- CN
- China
- Prior art keywords
- block
- prediction
- target block
- partition block
- coding
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 133
- 238000005070 sampling Methods 0.000 claims description 190
- 238000000605 extraction Methods 0.000 abstract description 2
- 230000033001 locomotion Effects 0.000 description 340
- 238000012545 processing Methods 0.000 description 191
- 239000013598 vector Substances 0.000 description 180
- 230000009466 transformation Effects 0.000 description 159
- 238000013139 quantization Methods 0.000 description 102
- 238000003860 storage Methods 0.000 description 26
- 238000001914 filtration Methods 0.000 description 22
- 230000008569 process Effects 0.000 description 22
- 238000010586 diagram Methods 0.000 description 20
- 239000011159 matrix material Substances 0.000 description 20
- 238000009795 derivation Methods 0.000 description 18
- 238000005192 partition Methods 0.000 description 16
- 230000005540 biological transmission Effects 0.000 description 13
- 238000004891 communication Methods 0.000 description 13
- 230000006870 function Effects 0.000 description 10
- 230000008859 change Effects 0.000 description 9
- 230000003044 adaptive effect Effects 0.000 description 7
- 239000000523 sample Substances 0.000 description 7
- 238000005516 engineering process Methods 0.000 description 6
- 239000013074 reference sample Substances 0.000 description 6
- 238000009826 distribution Methods 0.000 description 5
- 230000008520 organization Effects 0.000 description 5
- 230000008054 signal transmission Effects 0.000 description 5
- 238000010276 construction Methods 0.000 description 4
- 230000004048 modification Effects 0.000 description 3
- 238000012986 modification Methods 0.000 description 3
- 238000000638 solvent extraction Methods 0.000 description 3
- 230000009471 action Effects 0.000 description 2
- 238000012937 correction Methods 0.000 description 2
- 230000007717 exclusion Effects 0.000 description 2
- 238000007689 inspection Methods 0.000 description 2
- 238000004519 manufacturing process Methods 0.000 description 2
- 238000005457 optimization Methods 0.000 description 2
- 239000004065 semiconductor Substances 0.000 description 2
- 238000013316 zoning Methods 0.000 description 2
- 230000004075 alteration Effects 0.000 description 1
- 230000001174 ascending effect Effects 0.000 description 1
- 230000000712 assembly Effects 0.000 description 1
- 238000000429 assembly Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 230000006835 compression Effects 0.000 description 1
- 238000007906 compression Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 238000002474 experimental method Methods 0.000 description 1
- 238000009432 framing Methods 0.000 description 1
- 238000003384 imaging method Methods 0.000 description 1
- 238000009434 installation Methods 0.000 description 1
- 230000007246 mechanism Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 238000003825 pressing Methods 0.000 description 1
- 238000012913 prioritisation Methods 0.000 description 1
- 238000011002 quantification Methods 0.000 description 1
- 230000003252 repetitive effect Effects 0.000 description 1
- 230000003362 replicative effect Effects 0.000 description 1
- 238000010187 selection method Methods 0.000 description 1
- 238000012546 transfer Methods 0.000 description 1
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
- H04N19/11—Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
- H04N19/119—Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
- H04N19/124—Quantisation
- H04N19/129—Scanning of coding units, e.g. zig-zag scan of transform coefficients or flexible macroblock ordering [FMO]
- H04N19/132—Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
- H04N19/157—Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
- H04N19/176—Coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
- H04N19/61—Transform coding in combination with predictive coding
- H04N19/70—Syntax aspects related to video coding, e.g. related to compression standards
- H04N19/593—Predictive coding involving spatial prediction techniques
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The present invention relates to a decoding method, a decoding apparatus, an encoding method, and an encoding apparatus for video, in which, when the video is encoded and decoded, multiple partition blocks are generated by dividing the block to be predicted. A prediction mode is derived for at least some of the multiple partition blocks, and prediction may be performed on the multiple partition blocks based on the derived prediction mode. When prediction is performed for a partition block, information associated with the block to be predicted may be used, and information associated with other partition blocks predicted before that partition block may also be used.
Description
Technical field
The following embodiments relate generally to a video decoding method and apparatus and a video encoding method and apparatus and, more particularly, to a method and apparatus for performing prediction based on the shape of a block in the encoding and decoding of video.
This application claims the benefit of Korean Patent Application No. 10-2017-0036257, filed on March 22, 2017, Korean Patent Application No. 10-2017-0155097, filed on November 20, 2017, and Korean Patent Application No. 10-2018-0033424, filed on March 22, 2018, which are hereby incorporated by reference in their entirety into this application.
Background Art
With the continued development of the information and communications industry, broadcast services supporting high-definition (HD) resolution have become popular worldwide. Through this popularization, a large number of users have become accustomed to high-resolution, high-quality images and/or video.
To satisfy user demand for high quality, many organizations have accelerated the development of next-generation imaging devices. In addition to increased user interest in high-definition TV (HDTV) and full high-definition (FHD) TV, interest in UHD TV, whose resolution is four times or more that of FHD TV, has also increased. With this increased interest, image encoding/decoding technology for images with higher resolution and higher quality is continually required.
Image encoding/decoding apparatuses and methods may use inter-prediction techniques, intra-prediction techniques, entropy coding, and the like in order to perform encoding/decoding on high-resolution, high-quality images. An inter-prediction technique may be a technique for predicting the values of pixels included in the current picture using temporally preceding and/or temporally subsequent pictures. An intra-prediction technique may be a technique for predicting the values of pixels included in the current picture using information about pixels in the current picture. Entropy coding may be a technique for assigning short codewords to frequently occurring symbols and long codewords to rarely occurring symbols.
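The intra-prediction idea described above — predicting pixels of the current picture from other, already-reconstructed pixels of the same picture — can be illustrated with a minimal sketch of the widely used "DC" mode, in which a block is predicted as the average of its neighboring reconstructed samples. This is a generic illustration, not the specific method of the present disclosure, and all names are hypothetical.

```python
def dc_intra_predict(top_row, left_col):
    """Predict a block with the 'DC' intra mode: every predicted
    pixel gets the average of the reconstructed samples located
    above and to the left of the block."""
    neighbors = list(top_row) + list(left_col)
    dc = sum(neighbors) // len(neighbors)  # integer average, as codecs typically use
    h, w = len(left_col), len(top_row)
    return [[dc] * w for _ in range(h)]

# A 4x4 block predicted from 4 top and 4 left reconstructed samples.
pred = dc_intra_predict([100, 102, 104, 106], [98, 98, 100, 100])
print(pred[0])  # every predicted pixel is 101
```

The same structure extends to angular modes, where each predicted pixel copies a neighbor along a direction instead of the average.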
Various prediction techniques have been developed to improve the efficiency and accuracy of intra prediction and/or inter prediction. For example, a block may be divided so that it can be predicted effectively, and prediction may be performed on each of the blocks generated by the division. Prediction efficiency may vary considerably depending on whether the block is divided.
Summary of the invention
Technical problem
Embodiments are intended to provide an encoding apparatus and method and a decoding apparatus and method that divide a block based on its size and/or shape and derive a prediction mode for each partition block generated by the division.
Embodiments are intended to provide an encoding apparatus and method and a decoding apparatus and method that perform prediction on each partition block according to the derived prediction mode.
Solution
According to one aspect, an encoding method is provided, including: generating multiple partition blocks by dividing a target block; deriving a prediction mode for at least some of the multiple partition blocks; and performing prediction on the multiple partition blocks based on the derived prediction mode.
According to another aspect, a decoding method is provided, including: generating multiple partition blocks by dividing a target block; deriving a prediction mode for at least some of the multiple partition blocks; and performing prediction on the multiple partition blocks based on the derived prediction mode.
Whether to divide the target block may be determined based on information related to the target block.
Whether to divide the target block, and which type of division to use, may be determined based on a block division indicator.
The target block may be divided based on the size of the target block.
The target block may be divided based on the shape of the target block.
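The size- and shape-based division just described can be sketched as follows. The thresholds and the choice of vertical versus horizontal split are illustrative assumptions, not rules taken from the present disclosure.

```python
def divide_target_block(width, height, min_side=4):
    """Divide a target block into partition blocks based on its size
    and shape: an elongated block is split across its longer side, a
    large square block is split both ways, and a block already at the
    minimum size is left undivided. Returns (width, height) pairs."""
    if width <= min_side and height <= min_side:
        return [(width, height)]               # too small to divide
    if width > height:                          # wide block: vertical split
        return [(width // 2, height)] * 2
    if height > width:                          # tall block: horizontal split
        return [(width, height // 2)] * 2
    return [(width // 2, height // 2)] * 4      # square block: quad split

print(divide_target_block(8, 4))  # a wide 8x4 block -> two 4x4 partition blocks
```

In a real codec the decision would additionally consult the signaled block division indicator rather than geometry alone.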
The prediction mode may be derived for a specific partition block among the multiple partition blocks.
The specific partition block may be the block located at a specific position among the multiple partition blocks.
The prediction mode derived for the specific partition block may be used for the remaining partition blocks other than the specific partition block.
A prediction mode determined by combining the prediction mode derived for the specific partition block with an additional prediction mode may be used for the remaining partition blocks other than the specific partition block.
A most probable mode (MPM) list may be used for the derivation of the prediction mode.
The MPM list may include multiple MPM lists.
The MPM candidate modes in the multiple MPM lists may not overlap each other.
An MPM list may be configured for a specific unit.
The specific unit may be the target block.
The MPM list configured for the multiple partition blocks may be based on one or more reference blocks of the target block.
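MPM lists of the kind mentioned above are conventionally built from the intra modes of neighboring reference blocks, padded with default modes so that the candidates remain distinct. A minimal sketch under that conventional scheme follows; the candidate order and the default modes are assumptions, not the exact construction of the present disclosure.

```python
PLANAR, DC = 0, 1  # conventional non-angular intra modes

def build_mpm_list(left_mode, above_mode, size=3):
    """Collect most-probable intra modes from the left and above
    reference blocks, then pad with default modes (planar, DC, the
    first angular mode), keeping candidates distinct so that the
    MPM candidates do not overlap."""
    mpm = []
    for mode in (left_mode, above_mode, PLANAR, DC, 2):
        if mode is not None and mode not in mpm:
            mpm.append(mode)
        if len(mpm) == size:
            break
    return mpm

# The left block used angular mode 10, the above block mode 26.
print(build_mpm_list(10, 26))  # -> [10, 26, 0]
```

A mode found in the list can then be signaled with a short index instead of its full mode number, which is the compression benefit of MPM coding.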
The prediction mode derived for a first block among the multiple partition blocks may be used to predict a second block among the multiple partition blocks.
Reconstructed pixels of the first block may be used as reference samples for predicting the second block.
The reference samples used for predicting the multiple partition blocks may be reconstructed pixels adjacent to the target block.
The prediction mode may be derived for the bottommost block or the rightmost block among the multiple partition blocks.
Reconstructed pixels adjacent to the top of the target block may be used as reference pixels for predicting the bottommost block.
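The two sources of reference samples described above — reconstructed pixels adjacent to the target block itself, and reconstructed pixels of a previously predicted partition block — can be sketched with a small helper. The row geometry here is an illustrative assumption for horizontally stacked partition blocks, not the disclosure's exact rule.

```python
def reference_row_for(partition_index, target_top_row, prev_reconstructed):
    """Pick the reference row for a partition block: the first
    partition predicted reuses the reconstructed row adjacent to the
    top of the target block; later partitions may reuse the
    reconstructed bottom row of the previously predicted partition."""
    if partition_index == 0 or not prev_reconstructed:
        return target_top_row          # pixels adjacent to the target block
    return prev_reconstructed[-1]      # bottom row of the previous partition block

top = [50, 52, 54, 56]                           # reconstructed row above the target block
first = [[50, 52, 54, 56], [51, 53, 55, 57]]     # a reconstructed 2x4 partition block
print(reference_row_for(1, top, first))          # [51, 53, 55, 57]
```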
Prediction may be performed on the multiple partition blocks in a predefined order.
The predefined order may be: an order from the bottommost block to the topmost block; an order from the rightmost block to the leftmost block; an order that first selects the bottommost block and thereafter sequentially selects the blocks in the range from the topmost block to the second-from-bottom block; or an order that first selects the rightmost block and thereafter sequentially selects the blocks in the range from the leftmost block to the second-from-right block.
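The predefined processing orders listed above can be made concrete with a small helper. Here index 0 denotes the topmost (or leftmost) partition block; the names and encoding of the orders are illustrative assumptions.

```python
def prediction_order(n, order="bottom_to_top"):
    """Return the processing order of n partition blocks, indexed
    0 (topmost/leftmost) to n-1 (bottommost/rightmost)."""
    if order == "bottom_to_top":  # also models right-to-left for vertical splits
        return list(range(n - 1, -1, -1))
    if order == "bottom_first_then_top_down":
        # bottommost block first, then topmost down to second-from-bottom
        return [n - 1] + list(range(n - 1))
    raise ValueError(f"unknown order: {order}")

print(prediction_order(4))                                # [3, 2, 1, 0]
print(prediction_order(4, "bottom_first_then_top_down"))  # [3, 0, 1, 2]
```

Processing the bottommost block first is what lets its prediction use the reconstructed row above the target block, after which the remaining blocks can reference it.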
According to a further aspect, a decoding method is provided, including: deriving a prediction mode; generating multiple partition blocks by dividing a target block; and performing prediction on the multiple partition blocks based on the derived prediction mode.
Advantageous Effects
An encoding apparatus and method and a decoding apparatus and method are provided that divide a block based on its size and/or shape and derive a prediction mode for each partition block generated by the division.
An encoding apparatus and method and a decoding apparatus and method are provided that perform prediction on each partition block according to the derived prediction mode.
Description of Drawings
Fig. 1 is a block diagram showing the configuration of an embodiment of an encoding apparatus to which the present disclosure is applied;
Fig. 2 is a block diagram showing the configuration of an embodiment of a decoding apparatus to which the present disclosure is applied;
Fig. 3 is a diagram schematically showing the partition structure of an image when the image is encoded and decoded;
Fig. 4 is a diagram showing the forms of prediction units (PUs) that a coding unit (CU) can include;
Fig. 5 is a diagram showing the forms of transform units (TUs) that can be included in a CU;
Fig. 6 is a diagram for explaining an embodiment of an intra-prediction process;
Fig. 7 is a diagram for explaining the positions of reference samples used during intra prediction;
Fig. 8 is a diagram for explaining an embodiment of an inter-prediction process;
Fig. 9 shows spatial candidates according to an embodiment;
Fig. 10 shows the order in which motion information of spatial candidates is added to a merge list according to an embodiment;
Fig. 11 shows a transform and quantization process according to an example;
Fig. 12 is a configuration diagram of an encoding apparatus according to an embodiment;
Fig. 13 is a configuration diagram of a decoding apparatus according to an embodiment;
Fig. 14 is a flowchart of a prediction method according to an embodiment;
Fig. 15 is a flowchart of a block division method according to an embodiment;
Fig. 16 shows an 8 × 4 target block according to an example;
Fig. 17 shows 4 × 4 partition blocks according to an example;
Fig. 18 shows a 4 × 16 target block according to an example;
Fig. 19 shows 8 × 4 partition blocks according to an example;
Fig. 20 shows 4 × 4 partition blocks according to an example;
Fig. 21 is a flowchart of a method for deriving the prediction modes of partition blocks according to an example;
Fig. 22 shows prediction for partition blocks according to an example;
Fig. 23 shows prediction for a partition block using a reconstructed partition block according to an example;
Fig. 24 shows prediction for partition blocks using external reference pixels according to an example;
Fig. 25 shows prediction for four partition blocks according to an example;
Fig. 26 shows prediction for a first block after prediction has been performed on a fourth block according to an example;
Fig. 27 shows prediction for a second block according to an example;
Fig. 28 shows prediction for a third block according to an example;
Fig. 29 is a flowchart of a prediction method according to an embodiment;
Fig. 30 shows derivation of the prediction mode of a target block according to an example;
Fig. 31 is a flowchart showing a target-block prediction method and a bitstream generation method according to an embodiment;
Fig. 32 is a flowchart showing a target-block prediction method using a bitstream according to an embodiment;
Fig. 33 shows division of an upper-layer block according to an example;
Fig. 34 shows division of a target block according to an example;
Fig. 35 is a signal flow diagram showing an image encoding and decoding method according to an embodiment.
Best Mode
The present invention may be variously changed and may have various embodiments, and specific embodiments will be described in detail below with reference to the accompanying drawings. However, it should be understood that these embodiments are not intended to limit the invention to the specific disclosed forms; they include all changes, equivalents, or modifications included within the spirit and scope of the invention.
The following exemplary embodiments will be described in detail with reference to the accompanying drawings illustrating specific embodiments. These embodiments are described so that a person having ordinary skill in the art to which this disclosure pertains can easily practice them. It should be noted that the various embodiments differ from each other, but need not be mutually exclusive. For example, specific shapes, structures, and characteristics described herein in relation to one embodiment may be implemented in other embodiments without departing from the spirit and scope of the disclosure. Further, it should be understood that the position or arrangement of individual components within each disclosed embodiment may be changed without departing from the spirit and scope of the embodiments. Therefore, the following detailed description is not intended to limit the scope of the disclosure, and the scope of the exemplary embodiments is limited only by the appended claims and their equivalents, as long as they are appropriately described.
In the drawings, like reference numerals are used to designate the same or similar functions in various respects. The shapes, sizes, and the like of components in the drawings may be exaggerated for clarity of description.
Terms such as "first" and "second" may be used to describe various components, but the components are not limited by the terms. The terms are used only to distinguish one component from another. For example, a first component may be referred to as a second component without departing from the scope of this specification. Similarly, a second component may be referred to as a first component. The term "and/or" may include any one of, or any combination of, multiple associated described items.
It will be understood that when a component is referred to as being "connected" or "coupled" to another component, the two components may be directly connected or coupled to each other, or an intervening component may be present between them. It will be understood that when a component is referred to as being "directly connected or coupled", no intervening component is present between the two components.
In addition, the components described in the embodiments are shown independently in order to indicate different characteristic functions, but this does not mean that each component is formed of a separate piece of hardware or software. That is, the components are independently arranged and included for convenience of description. For example, at least two of the components may be integrated into a single component. Conversely, one component may be divided into multiple components. An embodiment in which components are integrated, or an embodiment in which some components are separated, is included in the scope of this specification, as long as it does not depart from the essence of this specification.
Moreover, it should be noted that, in the exemplary embodiments, a statement that a component "comprises" a specific component means that additional components may be included within the scope of the practice or the technical spirit of the exemplary embodiments, but does not exclude the presence of components other than the specific component.
The terms used in this specification are merely used to describe specific embodiments and are not intended to limit the present invention. A singular expression includes a plural expression unless the context clearly indicates otherwise. In this specification, it should be understood that terms such as "include" or "have" are merely intended to indicate the presence of features, numbers, steps, operations, components, parts, or combinations thereof, and are not intended to exclude the possibility that one or more other features, numbers, steps, operations, components, parts, or combinations thereof will be present or added.
Embodiments will be described in detail below with reference to the accompanying drawings so that a person having ordinary skill in the art can easily practice the embodiments. In the following description of the embodiments, detailed descriptions of known functions or configurations that are considered to obscure the gist of this specification will be omitted. Further, the same reference numerals are used to designate the same components throughout the drawings, and repeated descriptions of the same components will be omitted.
Hereinafter, "image" may denote a single picture constituting part of a video, or may denote the video itself. For example, "encoding and/or decoding of an image" may denote "encoding and/or decoding of a video", and may also denote "encoding and/or decoding of any one of the images constituting the video".
Hereinafter, the terms "video" and "motion picture" may have the same meaning and may be used interchangeably.
Hereinafter, a target image may be an encoding target image, which is the target to be encoded, and/or a decoding target image, which is the target to be decoded. Further, the target image may be an input image that is input to an encoding apparatus or an input image that is input to a decoding apparatus.
Hereinafter, the terms "image", "picture", "frame", and "screen" may have the same meaning and may be used interchangeably.
Hereinafter, a target block may be an encoding target block (i.e., the target to be encoded) and/or a decoding target block (i.e., the target to be decoded). Further, the target block may be a current block, i.e., the target currently being encoded and/or decoded. Here, the terms "target block" and "current block" may have the same meaning and may be used interchangeably.
Hereinafter, the terms "block" and "unit" may have the same meaning and may be used interchangeably. Alternatively, "block" may denote a specific unit.
Hereinafter, the terms "region" and "segment" may be used interchangeably.
Hereinafter, a specific signal may be a signal indicating a specific block. For example, an original signal may be a signal indicating a target block. A prediction signal may be a signal indicating a prediction block. A residual signal may be a signal indicating a residual block.
In the following examples, specific information, data, flags, elements, and attributes may have their respective values. A value of "0" corresponding to each of the information, data, flags, elements, and attributes may indicate logical false or a first predefined value. In other words, the value "0", false, logical false, and the first predefined value may be used interchangeably. A value of "1" corresponding to each of the information, data, flags, elements, and attributes may indicate logical true or a second predefined value. In other words, the value "1", true, logical true, and the second predefined value may be used interchangeably.
When a variable such as i or j is used to indicate a row, a column, or an index, the value of i may be an integer equal to or greater than 0, or an integer equal to or greater than 1. In other words, in the embodiments, each of rows, columns, and indices may be counted from 0 or counted from 1.
In the following, the terms to be used in the embodiments will be described.
Encoder: an encoder denotes a device that performs encoding.
Decoder: a decoder denotes a device that performs decoding.
Unit: a "unit" may denote a unit of image encoding and decoding. The terms "unit" and "block" may be used with the same meaning and may be used interchangeably.
- A "unit" may be an M × N array of samples, where M and N are each positive integers. The term "unit" may generally denote a two-dimensional (2D) array of samples.
- In the encoding and decoding of an image, a "unit" may be a region generated by partitioning one image. A single image may be partitioned into multiple units. Alternatively, one image may be partitioned into sub-parts, and a unit may denote each partitioned sub-part when encoding or decoding is performed on that sub-part.
- In the encoding and decoding of an image, predefined processing may be performed on each unit depending on the type of the unit. Depending on its function, a unit type may be classified as a macroblock unit, a coding unit (CU), a prediction unit (PU), a residual unit, a transform unit (TU), etc. Alternatively, depending on its function, a unit may denote a block, a macroblock, a coding tree unit (CTU), a coding tree block, a coding unit, a coding block, a prediction unit, a prediction block, a residual unit, a residual block, a transform unit, a transform block, etc.
- The term "unit" may denote information including a luminance (luma) component block, a chrominance (chroma) component block corresponding to the luma component block, and syntax elements for each of the blocks, so that a unit is distinguished from a block.
- The size and shape of a unit may be variously implemented. A unit may have any of various sizes and shapes. In particular, the shape of a unit may include not only a square but also any geometric shape that can be represented in two dimensions (2D), such as a rectangle, a trapezoid, a triangle, and a pentagon.
- Further, unit information may include one or more of the type of the unit (indicating a coding unit, a prediction unit, a residual unit, or a transform unit), the size of the unit, the depth of the unit, the order of encoding and decoding of the unit, etc.
- One unit may be partitioned into sub-units, each having a size smaller than that of the relevant unit.
Unit depth: the unit depth may denote the degree to which a unit has been partitioned. Further, the unit depth may indicate the level at which the corresponding unit exists when the unit is represented in a tree structure.
- Unit partition information may include a unit depth indicating the depth of the unit. The unit depth may indicate the number of times the unit has been partitioned and/or the degree to which the unit has been partitioned.
- In a tree structure, the depth of the root node may be considered to be the smallest, and the depth of a leaf node the largest.
- A single unit may be hierarchically partitioned into multiple sub-units, with the multiple sub-units having depth information based on the tree structure. In other words, a unit and the sub-units generated by partitioning the unit may correspond to a node and the child nodes of that node, respectively. Each partitioned sub-unit may have a unit depth. Since the unit depth indicates the number of times the unit has been partitioned and/or the degree to which the unit has been partitioned, the partition information of a sub-unit may include information about the size of the sub-unit.
- In a tree structure, the topmost node may correspond to the initial node before partitioning. The topmost node may be referred to as the "root node". Further, the root node may have the smallest depth value. Here, the depth of the topmost node may be level "0".
- A node having a depth of level "1" may denote a unit generated when the initial unit is partitioned once. A node having a depth of level "2" may denote a unit generated when the initial unit is partitioned twice.
- A leaf node having a depth of level "n" may denote a unit generated when the initial unit has been partitioned n times.
- A leaf node may be the bottommost node, which cannot be partitioned further. The depth of a leaf node may be the maximum level. For example, the predefined value of the maximum level may be 3.
Sample: a sample may be the base unit constituting a block. A sample may be represented by values from 0 to 2^Bd - 1 depending on the bit depth (Bd).
- A sample may be a pixel or a pixel value.
Hereinafter, the terms "pixel" and "sample" may be used with the same meaning and may be used interchangeably.
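The sample range stated above (0 to 2^Bd - 1) follows directly from the bit depth; this small Python helper only illustrates that formula:

```python
# Sketch: the representable sample range for a given bit depth Bd,
# per the 0 .. 2^Bd - 1 statement in the text.

def sample_range(bit_depth):
    return 0, (1 << bit_depth) - 1

lo8, hi8 = sample_range(8)      # 8-bit video: 0..255
lo10, hi10 = sample_range(10)   # 10-bit video: 0..1023
```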
Coding tree unit (CTU): a CTU may be composed of a single luma component (Y) coding tree block and two chroma component (Cb, Cr) coding tree blocks related to the luma component coding tree block. Further, a CTU may denote information including the above blocks and syntax elements for each of the blocks.
- Each coding tree unit (CTU) may be partitioned using one or more partitioning methods, such as a quad tree and a binary tree, so as to configure sub-units, such as a coding unit, a prediction unit, and a transform unit.
- "CTU" may be used as a term designating a pixel block that is a processing unit in image decoding and encoding processes, as in the case where an input image is partitioned.
Coding tree block (CTB): "CTB" may be used as a term designating any one of a Y coding tree block, a Cb coding tree block, and a Cr coding tree block.
Neighbor block: a neighbor block denotes a block adjacent to the target block. A block adjacent to the target block may denote a block whose boundary touches the target block, or a block located within a predetermined distance from the target block. A neighbor block may denote a block adjacent to a vertex of the target block. Here, a block adjacent to a vertex of the target block may denote a block vertically adjacent to a neighbor block that is horizontally adjacent to the target block, or a block horizontally adjacent to a neighbor block that is vertically adjacent to the target block. A neighbor block may be a reconstructed neighbor block.
Prediction unit: a prediction unit may be the base unit for prediction, such as inter prediction, intra prediction, inter compensation, intra compensation, and motion compensation.
- A single prediction unit may be divided into multiple partitions or sub-prediction-units having smaller sizes. The multiple partitions may also be base units in performing prediction or compensation. A partition generated by dividing a prediction unit may itself be a prediction unit.
Prediction unit partition: a prediction unit partition may be the shape into which a prediction unit is divided.
Reconstructed neighboring unit: a reconstructed neighboring unit may be a unit that has already been decoded and reconstructed around the target unit.
- A reconstructed neighboring unit may be a unit spatially adjacent to the target unit or temporally adjacent to the target unit.
- A reconstructed spatially neighboring unit may be a unit that is included in the current picture and has already been reconstructed through encoding and/or decoding.
- A reconstructed temporally neighboring unit may be a unit that is included in a reference image and has already been reconstructed through encoding and/or decoding. The location of the reconstructed temporally neighboring unit in the reference image may be identical to the location of the target unit in the current picture, or may correspond to the location of the target unit in the current picture.
Parameter set: a parameter set may be header information in the structure of a bitstream. For example, parameter sets may include a sequence parameter set, a picture parameter set, an adaptation parameter set, etc.
Rate-distortion optimization: the encoding apparatus may use rate-distortion optimization so as to provide high coding efficiency by utilizing combinations of the following items: the size of a coding unit (CU), a prediction mode, the size of a prediction unit (PU), motion information, and the size of a transform unit (TU).
- A rate-distortion optimization scheme may calculate the rate-distortion cost of each combination so as to select the optimal combination from among the combinations. The rate-distortion cost may be calculated using Equation 1 below. Generally, the combination that minimizes the rate-distortion cost may be selected as the optimal combination under the rate-distortion optimization scheme.
[Equation 1]
D + λ * R
- D may denote distortion. D may be the mean of the squares of the differences between original transform coefficients and reconstructed transform coefficients in a transform unit (i.e., the mean squared error).
- R may denote the rate, which may denote a bit rate using related context information.
- λ denotes a Lagrange multiplier. R may include not only coding parameter information, such as the prediction mode, motion information, and a coded block flag, but also bits generated by encoding the transform coefficients.
- The encoding apparatus may perform processes such as inter prediction and/or intra prediction, transform, quantization, entropy encoding, dequantization (inverse quantization), and inverse transform so as to calculate precise D and R. These processes may greatly increase the complexity of the encoding apparatus.
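As a sketch of how Equation 1 might be applied, the following Python fragment evaluates D + λ·R for a few hypothetical candidate codings and picks the minimum-cost one; the candidate names and numbers are invented for illustration:

```python
# Sketch of rate-distortion selection per Equation 1 (cost = D + lambda * R):
# among candidate codings, pick the one minimizing the Lagrangian cost.

def rd_cost(distortion, rate, lam):
    return distortion + lam * rate

def choose_best(candidates, lam):
    """candidates: iterable of (name, D, R); returns the minimum-cost name."""
    return min(candidates, key=lambda c: rd_cost(c[1], c[2], lam))[0]

# Hypothetical mode candidates: (name, distortion D, bits R)
modes = [("intra", 100.0, 40), ("inter", 60.0, 90), ("skip", 180.0, 2)]
best_lo_lambda = choose_best(modes, lam=0.1)   # small lambda: distortion dominates
best_hi_lambda = choose_best(modes, lam=5.0)   # large lambda: rate dominates
```

Note how the Lagrange multiplier λ trades rate against distortion: a small λ favors the low-distortion "inter" candidate, while a large λ favors the cheap "skip" candidate.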
Bitstream: a bitstream may denote a stream of bits including encoded image information.
Parameter set: a parameter set may be header information in the structure of a bitstream.
- A parameter set may include at least one of a video parameter set (VPS), a sequence parameter set (SPS), a picture parameter set (PPS), and an adaptation parameter set (APS). Further, a parameter set may include information about a slice header and information about a tile header.
Parsing: parsing may be the decision on the value of a syntax element made by performing entropy decoding on a bitstream. Alternatively, the term "parsing" may denote such entropy decoding itself.
Symbol: a symbol may be at least one of the syntax elements, coding parameters, and transform coefficients of an encoding target unit and/or a decoding target unit. Further, a symbol may be the target of entropy encoding or the result of entropy decoding.
Reference picture: a reference picture may be an image that is referred to by a unit so as to perform inter prediction or motion compensation. Alternatively, a reference picture may be an image including a reference unit that is referred to by a target unit so as to perform inter prediction or motion compensation.
Hereinafter, the terms "reference picture" and "reference image" may be used with the same meaning and may be used interchangeably.
Reference picture list: a reference picture list may be a list including one or more reference pictures used for inter prediction or motion compensation.
- The types of reference picture lists may include a combined list (LC), list 0 (L0), list 1 (L1), list 2 (L2), list 3 (L3), etc.
- One or more reference picture lists may be used for inter prediction.
Inter-prediction indicator: an inter-prediction indicator may indicate the inter-prediction direction of a target unit. Inter prediction may be one of unidirectional prediction and bidirectional prediction. Alternatively, the inter-prediction indicator may denote the number of reference pictures used to generate the prediction unit of the target unit. Alternatively, the inter-prediction indicator may denote the number of prediction blocks used for inter prediction or motion compensation of the target unit.
Reference picture index: a reference picture index may be an index indicating a specific reference picture in a reference picture list.
Motion vector (MV): a motion vector may be a 2D vector used for inter prediction or motion compensation. A motion vector may denote an offset between an encoding/decoding target image and a reference image.
- For example, an MV may be represented in a form such as (mv_x, mv_y), where mv_x may indicate the horizontal component and mv_y may indicate the vertical component.
Search range: the search range may be a 2D region in which a search for an MV is performed during inter prediction. For example, the size of the search range may be M × N, where M and N are each positive integers.
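A minimal sketch of searching such a range for the best MV, using exhaustive SAD-based block matching: the frame contents, block size, and function names are invented for illustration, and real encoders use far more elaborate search strategies than a full search:

```python
# Sketch: full-search block matching inside a search range, returning the
# motion vector (dx, dy) that minimizes the sum of absolute differences (SAD).

def sad(ref, cur, rx, ry, bx, by, bs):
    return sum(abs(ref[ry + j][rx + i] - cur[by + j][bx + i])
               for j in range(bs) for i in range(bs))

def full_search(ref, cur, bx, by, bs, search):
    best, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            rx, ry = bx + dx, by + dy
            if 0 <= rx and 0 <= ry and rx + bs <= len(ref[0]) and ry + bs <= len(ref):
                cost = sad(ref, cur, rx, ry, bx, by, bs)
                if cost < best_cost:
                    best_cost, best = cost, (dx, dy)
    return best

# Reference frame with a bright 2x2 patch at (3, 2); the current frame has the
# same patch at (2, 2), so the best MV for that block should be (+1, 0).
ref = [[0] * 8 for _ in range(8)]
cur = [[0] * 8 for _ in range(8)]
for j in range(2):
    for i in range(2):
        ref[2 + j][3 + i] = 200
        cur[2 + j][2 + i] = 200
mv = full_search(ref, cur, bx=2, by=2, bs=2, search=2)
```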
Motion vector candidate: a motion vector candidate may be a block that is a prediction candidate when a motion vector is predicted, or the motion vector of such a block.
- A motion vector candidate may be included in a motion vector candidate list.
Motion vector candidate list: a motion vector candidate list may be a list configured using one or more motion vector candidates.
Motion vector candidate index: a motion vector candidate index may be an indicator that indicates a motion vector candidate in the motion vector candidate list. Alternatively, a motion vector candidate index may be the index of a motion vector predictor.
Motion information: motion information may be information including at least one of a reference picture list, a reference picture, a motion vector candidate, a motion vector candidate index, a merge candidate, and a merge index, as well as a motion vector, a reference picture index, and an inter-prediction indicator.
Merge candidate list: a merge candidate list may be a list configured using merge candidates.
Merge candidate: a merge candidate may be a spatial merge candidate, a temporal merge candidate, a combined merge candidate, a combined bi-prediction merge candidate, a zero merge candidate, etc. A merge candidate may include motion information, such as an inter-prediction indicator, a reference picture index for each list, and a motion vector.
Merge index: a merge index may be an indicator that indicates a merge candidate in a merge candidate list.
- A merge index may indicate the reconstructed unit used to derive a merge candidate, among reconstructed units spatially adjacent to the target unit and reconstructed units temporally adjacent to the target unit.
- A merge index may indicate at least one of the pieces of motion information of a merge candidate.
Transform unit: a transform unit may be the base unit of residual signal encoding and/or residual signal decoding, such as transform, inverse transform, quantization, dequantization, transform coefficient encoding, and transform coefficient decoding. A single transform unit may be divided into multiple transform units having smaller sizes.
Scaling: scaling may denote a process of multiplying a transform coefficient level by a factor.
- A transform coefficient may be generated as a result of scaling the transform coefficient level. Scaling may also be referred to as "dequantization".
Quantization parameter (QP): a quantization parameter may be a value used to generate a transform coefficient level for a transform coefficient in quantization. Alternatively, a quantization parameter may be a value used to generate a transform coefficient by scaling the transform coefficient level in dequantization. Alternatively, a quantization parameter may be a value mapped to a quantization step size.
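The QP-to-step mapping and the quantize/scale round trip described here can be sketched as follows; the step formula (doubling every 6 QP values, in the style of HEVC-like codecs) is an assumption used for illustration, not a definition taken from this document:

```python
# Sketch: QP -> step size, quantization (coefficient -> level), and
# dequantization/"scaling" (level -> reconstructed coefficient).

def q_step(qp):
    return 2 ** (qp / 6.0)           # assumed: step size doubles every 6 QP

def quantize(coeff, qp):
    return int(round(coeff / q_step(qp)))   # transform coefficient -> level

def dequantize(level, qp):
    return level * q_step(qp)        # level scaled back to a coefficient

level = quantize(100.0, qp=12)       # step = 4.0 -> level 25
recon = dequantize(level, qp=12)     # reconstructs 100.0 exactly here
```

Larger QP values give larger steps, hence coarser levels and more distortion but fewer bits.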
Delta quantization parameter: a delta quantization parameter is the difference between the quantization parameter of the encoding/decoding target unit and a predicted quantization parameter.
Scan: scanning may denote a method of ordering the coefficients in a unit, a block, or a matrix. For example, a method of arranging a 2D array in the form of a one-dimensional (1D) array may be referred to as a "scan". Alternatively, a method of arranging a 1D array in the form of a 2D array may also be referred to as a "scan" or an "inverse scan".
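The 2D-to-1D scan and its inverse can be sketched with a simple diagonal scan order for a 4×4 block; real codecs define several scan patterns, so this particular order is only illustrative:

```python
# Sketch: a diagonal "scan" (2D block -> 1D array) and the matching
# "inverse scan" (1D array -> 2D block) for an n x n coefficient block.

def diag_scan_order(n):
    """Anti-diagonal scan positions for an n x n block."""
    return [(r, c) for s in range(2 * n - 1)
            for r in range(n) for c in range(n) if r + c == s]

def scan(block):
    return [block[r][c] for r, c in diag_scan_order(len(block))]

def inverse_scan(coeffs, n):
    block = [[0] * n for _ in range(n)]
    for (r, c), v in zip(diag_scan_order(n), coeffs):
        block[r][c] = v
    return block

block = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]
coeffs = scan(block)            # 1D order: 1, 2, 5, 3, 6, 9, ...
restored = inverse_scan(coeffs, 4)
```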
Transform coefficient: a transform coefficient may be a coefficient value generated when the encoding apparatus performs a transform. Alternatively, a transform coefficient may be a coefficient value generated when the decoding apparatus performs at least one of entropy decoding and dequantization.
- A quantized level or a quantized transform coefficient level, generated by applying quantization to a transform coefficient or a residual signal, may also be included in the meaning of the term "transform coefficient".
Quantized level: a quantized level may be a value generated when the encoding apparatus performs quantization on a transform coefficient or a residual signal. Alternatively, a quantized level may be a value that is the target of dequantization when the decoding apparatus performs dequantization.
- A quantized transform coefficient level, resulting from transform and quantization, may also be included in the meaning of "quantized level".
Non-zero transform coefficient: a non-zero transform coefficient may be a transform coefficient having a value other than 0, or a transform coefficient level having a value other than 0. Alternatively, a non-zero transform coefficient may be a transform coefficient whose magnitude is not 0, or a transform coefficient level whose magnitude is not 0.
Quantization matrix: a quantization matrix may be a matrix used in a quantization or dequantization process in order to improve the subjective or objective image quality of an image. A quantization matrix may also be referred to as a "scaling list".
Quantization matrix coefficient: a quantization matrix coefficient may be each element in a quantization matrix. A quantization matrix coefficient may also be referred to as a "matrix coefficient".
Default matrix: a default matrix may be a quantization matrix predefined by the encoding apparatus and the decoding apparatus.
Non-default matrix: a non-default matrix may be a quantization matrix that is not predefined by the encoding apparatus and the decoding apparatus. A non-default matrix may be signaled by the encoding apparatus to the decoding apparatus.
Fig. 1 is a block diagram showing the configuration of an embodiment of an encoding apparatus to which the present disclosure is applied.
An encoding apparatus 100 may be an encoder, a video encoding apparatus, or an image encoding apparatus. A video may include one or more images (pictures). The encoding apparatus 100 may sequentially encode the one or more images of the video.
Referring to Fig. 1, the encoding apparatus 100 includes an inter-prediction unit 110, an intra-prediction unit 120, a switch 115, a subtractor 125, a transform unit 130, a quantization unit 140, an entropy encoding unit 150, a dequantization (inverse quantization) unit 160, an inverse transform unit 170, an adder 175, a filter unit 180, and a reference picture buffer 190.
The encoding apparatus 100 may perform encoding on a target image using an intra mode and/or an inter mode.
Further, the encoding apparatus 100 may generate a bitstream including information about the encoding by encoding the target image, and may output the generated bitstream. The generated bitstream may be stored in a computer-readable storage medium and may be streamed through a wireless/wired transmission medium.
When the intra mode is used as the prediction mode, the switch 115 may switch to the intra mode. When the inter mode is used as the prediction mode, the switch 115 may switch to the inter mode.
The encoding apparatus 100 may generate a prediction block of a target block. Further, after the prediction block has been generated, the encoding apparatus 100 may encode the residual between the target block and the prediction block.
When the prediction mode is the intra mode, the intra-prediction unit 120 may use the pixels of previously encoded/decoded neighbor blocks around the target block as reference samples. The intra-prediction unit 120 may perform spatial prediction on the target block using the reference samples, and may generate prediction samples for the target block via the spatial prediction.
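One simple form of the spatial prediction described here is DC prediction, where the prediction block is filled with the mean of the reference samples above and to the left of the target block. The rounding convention and the sample values below are illustrative assumptions, not details from this embodiment:

```python
# Sketch: DC intra prediction from reconstructed reference samples.

def dc_predict(above, left, size):
    """Fill a size x size prediction block with the rounded mean of the refs."""
    refs = list(above) + list(left)
    dc = (sum(refs) + len(refs) // 2) // len(refs)   # assumed rounding
    return [[dc] * size for _ in range(size)]

# Reference samples from the neighbor blocks above and to the left (made up)
pred = dc_predict(above=[100, 102, 98, 100], left=[101, 99, 100, 100], size=4)
```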
The inter-prediction unit 110 may include a motion prediction unit and a motion compensation unit.
When the prediction mode is the inter mode, the motion prediction unit may search a reference image for the region that best matches the target block in a motion prediction process, and may derive a motion vector for the target block and the found region based on the found region.
The reference image may be stored in the reference picture buffer 190. More specifically, the reference image may be stored in the reference picture buffer 190 when the encoding and/or decoding of the reference image has been processed.
The motion compensation unit may generate a prediction block for the target block by performing motion compensation using the motion vector. Here, the motion vector may be a two-dimensional (2D) vector used for inter prediction. Further, the motion vector may indicate an offset between the target image and the reference image.
When the motion vector has a non-integer value, the motion prediction unit and the motion compensation unit may generate the prediction block by applying an interpolation filter to a partial region of the reference image. In order to perform inter prediction or motion compensation, it may be determined, on a CU basis, which mode among a skip mode, a merge mode, an advanced motion vector prediction (AMVP) mode, and a current picture reference mode corresponds to the method for predicting and compensating for the motion of a PU included in the CU, and inter prediction or motion compensation may be performed according to the determined mode.
The subtractor 125 may generate a residual block, which is the difference between the target block and the prediction block. A residual block may also be referred to as a "residual signal".
A residual signal may be the difference between an original signal and a prediction signal. Alternatively, a residual signal may be a signal generated by transforming or quantizing the difference between the original signal and the prediction signal, or a signal generated by transforming and quantizing that difference. A residual block may be the residual signal for a block unit.
The transform unit 130 may generate transform coefficients by transforming the residual block, and may output the generated transform coefficients. Here, a transform coefficient may be a coefficient value generated by transforming the residual block.
When a transform skip mode is used, the transform unit 130 may omit the transform of the residual block.
Quantized transform coefficient levels or quantized levels may be generated by applying quantization to the transform coefficients. Hereinafter, in the embodiments, each of the quantized transform coefficient level and the quantized level may also be referred to as a "transform coefficient".
The quantization unit 140 may generate quantized transform coefficient levels or quantized levels by quantizing the transform coefficients according to a quantization parameter, and may output the quantized transform coefficient levels or quantized levels thus generated. In this case, the quantization unit 140 may quantize the transform coefficients using a quantization matrix.
The entropy encoding unit 150 may generate a bitstream by performing probability-distribution-based entropy encoding based on the values calculated by the quantization unit 140 and/or on coding parameter values calculated in the encoding process, and may output the generated bitstream.
The entropy encoding unit 150 may also perform entropy encoding on information about the pixels of the image and on information required to decode the image. For example, the information required to decode the image may include syntax elements, etc.
A coding parameter may be information required for encoding and/or decoding. A coding parameter may include information that is encoded by the encoding apparatus 100 and transmitted from the encoding apparatus 100 to a decoding apparatus, and may also include information that can be derived in the encoding or decoding process. For example, the information transmitted to the decoding apparatus may include syntax elements.
For example, a coding parameter may include values or statistics, such as a prediction mode, a motion vector, a reference picture index, a coded block pattern, the presence or absence of a residual signal, a transform coefficient, a quantized transform coefficient, a quantization parameter, a block size, and block partition information. The prediction mode may be an intra-prediction mode or an inter-prediction mode.
A residual signal may denote the difference between the original signal and a prediction signal. Alternatively, a residual signal may be a signal generated by transforming the difference between the original signal and the prediction signal. Alternatively, a residual signal may be a signal generated by transforming and quantizing the difference between the original signal and the prediction signal.
When entropy encoding is applied, fewer bits may be assigned to symbols that occur more frequently, and more bits may be assigned to symbols that occur rarely. Since symbols are represented by means of this assignment, the size of the bit string for the target symbols to be encoded may be reduced. Therefore, the compression performance of video encoding may be improved through entropy encoding.
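The bit-assignment idea above can be sketched with a toy variable-length code; the code table below is invented for illustration and is not CAVLC or CABAC:

```python
# Sketch: assigning shorter codewords to more frequent symbols shrinks the
# bit string, compared with a fixed-length (here 2-bit) code.

codes = {"a": "0", "b": "10", "c": "110", "d": "111"}  # frequent -> short

def encode(symbols):
    return "".join(codes[s] for s in symbols)

# 'a' occurs most often in this message, so the VLC beats the fixed code
msg = "aaaaabbbcd"
vlc_bits = len(encode(msg))    # 5*1 + 3*2 + 3 + 3 = 17 bits
fixed_bits = 2 * len(msg)      # 20 bits with a fixed 2-bit code
```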
Further, for entropy encoding, the entropy encoding unit 150 may use an encoding method such as exponential Golomb, context-adaptive variable-length coding (CAVLC), or context-adaptive binary arithmetic coding (CABAC). For example, the entropy encoding unit 150 may perform entropy encoding using a variable-length coding/code (VLC) table. For example, the entropy encoding unit 150 may derive a binarization method for a target symbol. Further, the entropy encoding unit 150 may derive a probability model for a target symbol/bin. The entropy encoding unit 150 may perform arithmetic coding using the derived binarization method, probability model, and context model.
The entropy encoding unit 150 may transform coefficients in the form of a 2D block into the form of a 1D vector via a transform coefficient scanning method so as to encode the transform coefficient levels.
A coding parameter may include not only information (or a flag or an index), such as a syntax element, that is encoded by the encoding apparatus and signaled by the encoding apparatus to the decoding apparatus, but also information that can be derived in the encoding or decoding process. Further, a coding parameter may include information required to encode or decode an image. For example, a coding parameter may include at least one of the following items, or combinations thereof: the size of a unit/block; the depth of a unit/block; the partition information of a unit/block; the partition structure of a unit/block; information indicating whether a unit/block is partitioned in a quad-tree structure; information indicating whether a unit/block is partitioned in a binary-tree structure; the partitioning direction of a binary-tree structure (horizontal or vertical); the partitioning form of a binary-tree structure (symmetric or asymmetric partitioning); a prediction scheme (intra prediction or inter prediction); an intra-prediction mode/direction; a reference sample filtering method; a prediction block filtering method; a prediction block boundary filtering method; a filter tap for filtering; a filter coefficient for filtering; an inter-prediction mode; motion information; a motion vector; a reference picture index; an inter-prediction direction; an inter-prediction indicator; a reference picture list; a reference picture; a motion vector predictor; a motion vector prediction candidate; a motion vector candidate list; information indicating whether a merge mode is used; a merge candidate; a merge candidate list; information about whether a skip mode is used; the type of an interpolation filter; the taps of an interpolation filter; the filter coefficients of an interpolation filter; the magnitude of a motion vector; the accuracy of motion vector representation; a transform type; a transform size; information indicating whether a primary transform is used; information indicating whether an additional (secondary) transform is used; a primary transform index; a secondary transform index; information indicating the presence or absence of a residual signal; a coded block pattern; a coded block flag; a quantization parameter; a quantization matrix; information about an in-loop filter; information indicating whether an in-loop filter is applied; the coefficients of an in-loop filter; the taps of an in-loop filter; the shape/form of an in-loop filter; information indicating whether a deblocking filter is applied; the coefficients of a deblocking filter; the taps of a deblocking filter; the strength of a deblocking filter; the shape/form of a deblocking filter; information indicating whether an adaptive sample offset is applied; the value of an adaptive sample offset; the category of an adaptive sample offset; the type of an adaptive sample offset; information indicating whether an adaptive loop filter is applied; the coefficients of an adaptive loop filter; the taps of an adaptive loop filter; the shape/form of an adaptive loop filter; a binarization/inverse-binarization method; a context model; a context model determination method; a context model update method; information indicating whether a regular mode is performed; information indicating whether a bypass mode is performed; a context bin; a bypass bin; a transform coefficient; a transform coefficient level; a transform coefficient level scanning method; an image display/output order; slice identification information; a slice type; slice partition information; tile identification information; a tile type; tile partition information; a picture type; a bit depth; information about a luma signal; and information about a chroma signal.
Here, signaling a flag or an index may mean that the encoding apparatus 100 includes, in the bitstream, an entropy-encoded flag or an entropy-encoded index generated by performing entropy encoding on the flag or index, and that a decoding apparatus 200 obtains the flag or index by performing entropy decoding on the entropy-encoded flag or entropy-encoded index extracted from the bitstream.
Since the encoding apparatus 100 performs encoding via inter prediction, the encoded target image may be used as a reference image for additional image(s) to be processed subsequently. Therefore, the encoding apparatus 100 may reconstruct or decode the encoded target image, and may store the reconstructed or decoded image as a reference image in the reference picture buffer 190. For the decoding, dequantization and inverse transform of the encoded target image may be performed.
A quantized level may be inversely quantized by the dequantization unit 160, and may be inversely transformed by the inverse transform unit 170. The inversely quantized and/or inversely transformed coefficients may be added to the prediction block by the adder 175. When the inversely quantized and/or inversely transformed coefficients are added to the prediction block, a reconstructed block may be generated. Here, the inversely quantized and/or inversely transformed coefficients may denote coefficients on which one or more of dequantization and inverse transform have been performed, and may also denote a reconstructed residual block.
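The reconstruction step can be sketched as adding the reconstructed residual to the prediction block; clipping the sum to the valid sample range is a common codec convention assumed here for illustration, not a detail stated in this passage:

```python
# Sketch: reconstructed block = prediction block + reconstructed residual,
# with each sum clipped to the valid range for the bit depth.

def reconstruct(pred, residual, bit_depth=8):
    hi = (1 << bit_depth) - 1
    return [[max(0, min(hi, p + r)) for p, r in zip(prow, rrow)]
            for prow, rrow in zip(pred, residual)]

pred = [[100, 120], [130, 250]]
residual = [[-5, 10], [0, 20]]   # the last value would overflow without clipping
recon = reconstruct(pred, residual)
```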
The reconstructed block may be filtered by the filter unit 180. The filter unit 180 may apply one or more of a deblocking filter, a sample adaptive offset (SAO) filter, and an adaptive loop filter (ALF) to the reconstructed block or a reconstructed picture. The filter unit 180 may also be referred to as an "in-loop filter".
The deblocking filter may eliminate block distortion occurring at the boundaries between blocks. In order to determine whether to apply the deblocking filter, the number of columns or rows that are included in a block and that contain the pixels on which the determination of whether to apply the deblocking filter to the target block is based may be decided. When the deblocking filter is applied to the target block, the filter that is applied may differ depending on the required strength of deblocking filtering. In other words, among different filters, a filter decided in consideration of the strength of deblocking filtering may be applied to the target block.
The SAO filter may add an appropriate offset to pixel values so as to compensate for coding error. The SAO filter may perform, on the deblocked image, a per-pixel correction that uses an offset corresponding to the difference between the original image and the deblocked image. A method of partitioning the pixels included in the image into a certain number of regions, determining which of the partitioned regions the offset is to be applied to, and applying the offset to the determined region may be used, as may a method of applying the offset in consideration of the edge information of each pixel.
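The region-based SAO correction described above can be sketched as a band offset. This is an illustrative Python sketch; the division into 32 equal bands with four signaled offsets mirrors common SAO designs (e.g. HEVC) and is an assumption, not a detail stated in this passage:

```python
def sao_band_offset(samples, start_band, offsets, bit_depth=8):
    """Apply a band-offset correction: the sample range is split into
    32 equal bands; consecutive bands starting at start_band each
    receive one signaled offset, other samples pass unchanged."""
    shift = bit_depth - 5          # 2**bit_depth / 32 values per band
    max_val = (1 << bit_depth) - 1
    out = []
    for s in samples:
        band = s >> shift
        if start_band <= band < start_band + len(offsets):
            s = min(max(s + offsets[band - start_band], 0), max_val)
        out.append(s)
    return out

# Bands 12..15 cover sample values 96..127 for 8-bit video
corrected = sao_band_offset([90, 100, 110, 200], 12, [2, -3, 1, 0])
```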
The ALF may perform filtering based on values obtained by comparing the reconstructed image with the original image. After the pixels included in the image have been divided into a predetermined number of groups, the filter to be applied to each group may be determined, and filtering may be performed differently for each group. Information related to whether to apply the adaptive loop filter may be signaled for each CU. The shape and the filter coefficients of the ALF applied to each block may differ from block to block.
The reconstructed block or the reconstructed picture filtered by the filter unit 180 may be stored in the reference picture buffer 190. A reconstructed block filtered by the filter unit 180 may be part of a reference picture. In other words, a reference picture may be a reconstructed picture composed of reconstructed blocks filtered by the filter unit 180. The stored reference picture may then be used for inter prediction.
Fig. 2 is a block diagram showing the configuration of an embodiment of a decoding device to which the present disclosure is applied.
The decoding device 200 may be a decoder, a video decoding device, or an image decoding device.
Referring to Fig. 2, the decoding device 200 may include an entropy decoding unit 210, a dequantization (inverse quantization) unit 220, an inverse transformation unit 230, an intra prediction unit 240, an inter prediction unit 250, an adder 255, a filter unit 260, and a reference picture buffer 270.
The decoding device 200 may receive a bitstream output from the encoding device 100. The decoding device 200 may receive a bitstream stored in a computer-readable storage medium, and may receive a bitstream streamed over a wired/wireless transmission medium. The decoding device 200 may decode the bitstream in an intra mode and/or an inter mode. Further, the decoding device 200 may generate a reconstructed image or a decoded image via decoding, and may output the reconstructed or decoded image.
For example, switching to the intra mode or the inter mode may be performed by a switch based on the prediction mode used for decoding. When the prediction mode used for decoding is the intra mode, the switch may be operated to switch to the intra mode. When the prediction mode used for decoding is the inter mode, the switch may be operated to switch to the inter mode.
The decoding device 200 may obtain a reconstructed residual block by decoding the input bitstream, and may generate a prediction block. When the reconstructed residual block and the prediction block have been obtained, the decoding device 200 may generate a reconstructed block, which is the target to be decoded, by adding the reconstructed residual block to the prediction block.
The entropy decoding unit 210 may generate symbols by performing entropy decoding on the bitstream based on the probability distribution of the bitstream. The generated symbols may include quantized-level symbols. Here, the entropy decoding method may be similar to the entropy encoding method described above; that is, the entropy decoding method may be the inverse of the entropy encoding method described above.
The quantized coefficients may be inversely quantized by the dequantization unit 220. The dequantization unit 220 may generate dequantized coefficients by performing dequantization on the quantized coefficients. Further, the dequantized coefficients may be inversely transformed by the inverse transformation unit 230. The inverse transformation unit 230 may generate a reconstructed residual block by performing an inverse transform on the dequantized coefficients. As a result of performing dequantization and the inverse transform on the quantized coefficients, the reconstructed residual block may be generated. Here, the dequantization unit 220 may apply a quantization matrix to the quantized coefficients when generating the reconstructed residual block.
When the intra mode is used, the intra prediction unit 240 may generate a prediction block by performing spatial prediction that uses the pixel values of previously decoded neighboring blocks around the target block.
The inter prediction unit 250 may include a motion compensation unit. Alternatively, the inter prediction unit 250 may be designated as a "motion compensation unit".
When the inter mode is used, the motion compensation unit 250 may generate a prediction block by performing motion compensation that uses a motion vector and a reference image stored in the reference picture buffer 270.
The motion compensation unit may apply an interpolation filter to a partial region of the reference image when the motion vector has a value other than an integer, and may generate a prediction block using the reference image to which the interpolation filter has been applied. In order to perform motion compensation, the motion compensation unit may determine, on a CU basis, which of a skip mode, a merge mode, an advanced motion vector prediction (AMVP) mode, and a current-picture reference mode corresponds to the motion compensation method used for the PUs included in the CU, and may perform motion compensation according to the determined mode.
The reconstructed residual block and the prediction block may be added to each other by the adder 255. The adder 255 may generate a reconstructed block by adding the reconstructed residual block to the prediction block.
The reconstructed block may be filtered by the filter unit 260. The filter unit 260 may apply at least one of a deblocking filter, an SAO filter, and an ALF to the reconstructed block or to a reconstructed picture.
The reconstructed block filtered by the filter unit 260 may be stored in the reference picture buffer 270. A reconstructed block filtered by the filter unit 260 may be part of a reference picture. In other words, a reference picture may be an image composed of reconstructed blocks filtered by the filter unit 260. The stored reference picture may then be used for inter prediction.
Fig. 3 is a diagram schematically showing the partition structure of an image when the image is encoded and decoded.
Fig. 3 may schematically show an example in which a single unit is partitioned into multiple sub-units.
In order to partition an image efficiently, a coding unit (CU) may be used in encoding and decoding. The term "unit" may be used to jointly designate 1) a block including image samples and 2) a syntax element. For example, the "partitioning of a unit" may denote the "partitioning of the block corresponding to the unit".
A CU may be used as a base unit for image encoding/decoding. A CU may be used as a unit to which one mode selected from the intra mode and the inter mode is applied in image encoding/decoding. In other words, in image encoding/decoding, which of the intra mode and the inter mode is to be applied may be determined for each CU.
Further, a CU may be a base unit for the prediction, transform, quantization, inverse transform, dequantization, and encoding/decoding of transform coefficients.
Referring to Fig. 3, an image 300 may be sequentially partitioned into units corresponding to a largest coding unit (LCU), and the partition structure of the image 300 may be determined according to the LCU. Here, "LCU" may be used with the same meaning as "coding tree unit (CTU)".
Partitioning a unit may denote partitioning the block corresponding to the unit. Block partition information may include depth information about the depth of the unit. The depth information may indicate the number of times the unit is partitioned and/or the degree to which the unit is partitioned. A single unit may be hierarchically partitioned into sub-units having depth information based on a tree structure. Each partitioned sub-unit may have depth information. The depth information may be information indicating the size of a CU. Depth information may be stored for each CU; in other words, each CU may have depth information.
The partition structure may denote the distribution of coding units (CUs) within an LCU 310 for efficiently encoding the image. This distribution may be determined according to whether a single CU is to be partitioned into multiple CUs. The number of CUs generated by partitioning may be a positive integer of 2 or more, such as 2, 3, 4, 8, or 16. Depending on the number of CUs generated by partitioning, the horizontal size and the vertical size of each CU generated by the partitioning may be less than the horizontal size and the vertical size of the CU before being partitioned.
Each partitioned CU may be recursively partitioned into four CUs in the same way. Via the recursive partitioning, at least one of the horizontal size and the vertical size of each partitioned CU may be reduced compared to at least one of the horizontal size and the vertical size of the CU before being partitioned.
The partitioning of a CU may be recursively performed up to a predefined depth or down to a predefined size. For example, the depth of the LCU may be 0, and the depth of the smallest coding unit (SCU) may be a predefined maximum depth. Here, as described above, the LCU may be the CU having the maximum coding unit size, and the SCU may be the CU having the minimum coding unit size.
Partitioning may start at the LCU 310, and the depth of a CU may increase by 1 whenever the horizontal size and/or the vertical size of the CU is reduced by partitioning.
For example, at each depth, a CU that is not partitioned may have a size of 2N×2N. Further, when a CU is partitioned, the CU having the size of 2N×2N may be partitioned into four CUs each having a size of N×N. The value of N is halved each time the depth increases by 1.
Referring to Fig. 3, an LCU having a depth of 0 may have 64×64 pixels, i.e. a 64×64 block; 0 may be the minimum depth. An SCU having a depth of 3 may have 8×8 pixels, i.e. an 8×8 block; 3 may be the maximum depth. Here, the CU having a 64×64 block, i.e. the LCU, may be represented by depth 0. A CU having a 32×32 block may be represented by depth 1. A CU having a 16×16 block may be represented by depth 2. The CU having an 8×8 block, i.e. the SCU, may be represented by depth 3.
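The relationship between depth and CU size described above can be sketched as follows; this is a minimal illustration, and the helper name is not taken from the document:

```python
def cu_size(lcu_size, depth):
    """Each increase of 1 in depth halves the CU's width and height."""
    return lcu_size >> depth

# For a 64x64 LCU, depths 0..3 give 64, 32, 16, and 8 (the SCU)
sizes = [cu_size(64, d) for d in range(4)]
```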
Information about whether the corresponding CU is partitioned may be represented by the partition information of the CU. The partition information may be 1-bit information. All CUs except the SCU may include partition information. For example, the value of the partition information of a CU that is not partitioned may be 0, and the value of the partition information of a CU that is partitioned may be 1.
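The 1-bit partition information can be read recursively to recover the partition structure. The following Python sketch consumes one split flag per non-SCU CU in pre-order (flag 1 = split into four quadrants, 0 = leaf); the function name and the flag ordering are assumptions for illustration:

```python
def parse_quadtree(flags, x, y, size, min_size, leaves):
    """Consume one split flag per CU in pre-order; a flag of 1 splits
    the CU into four quadrants, while 0 (or reaching the SCU size,
    which carries no flag) makes the CU a leaf."""
    if size > min_size and flags.pop(0) == 1:
        half = size // 2
        for dy in (0, half):
            for dx in (0, half):
                parse_quadtree(flags, x + dx, y + dy, half, min_size, leaves)
    else:
        leaves.append((x, y, size))
    return leaves

# 64x64 LCU: split once, then split only the top-left 32x32 again
leaves = parse_quadtree([1, 1, 0, 0, 0, 0, 0, 0, 0], 0, 0, 64, 8, [])
```

The flag list above yields four 16×16 leaves in the top-left quadrant and three 32×32 leaves elsewhere, seven leaf CUs in total.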
For example, when a single CU is partitioned into four CUs, the horizontal size and the vertical size of each of the four CUs generated by the partitioning may be half of the horizontal size and the vertical size of the CU before being partitioned. When a CU having a 32×32 size is partitioned into four CUs, the size of each of the four partitioned CUs may be 16×16. When a single CU is partitioned into four CUs, the CU may be considered to have been partitioned in a quadtree structure.
For example, when a single CU is partitioned into two CUs, the horizontal size or the vertical size of each of the two CUs generated by the partitioning may be half of the horizontal size or the vertical size of the CU before being partitioned. When a CU having a 32×32 size is vertically partitioned into two CUs, the size of each of the two partitioned CUs may be 16×32. When a single CU is partitioned into two CUs, the CU may be considered to have been partitioned in a binary-tree structure.
In addition to quadtree partitioning, binary-tree partitioning may be applied to the LCU 310 of Fig. 3.
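The child-block sizes produced by one quadtree or binary-tree partitioning step, as described above, can be sketched as follows; the mode names are illustrative, not taken from the document:

```python
def split_cu(width, height, mode):
    """Return the child block sizes for one partitioning step."""
    if mode == "quad":       # four children, both dimensions halved
        return [(width // 2, height // 2)] * 4
    if mode == "bin_vert":   # vertical split: width halved
        return [(width // 2, height)] * 2
    if mode == "bin_horz":   # horizontal split: height halved
        return [(width, height // 2)] * 2
    raise ValueError(mode)

quad = split_cu(32, 32, "quad")      # four 16x16 CUs
vert = split_cu(32, 32, "bin_vert")  # two 16x32 CUs
```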
Fig. 4 is a diagram showing the forms of prediction units (PUs) that a coding unit (CU) can include.
Among the CUs partitioned from an LCU, a CU that is no longer partitioned may be divided into one or more prediction units (PUs). Such division may also be referred to as "partitioning".
A PU may be a base unit for prediction. A PU may be encoded and decoded in any one of a skip mode, an inter mode, and an intra mode. A PU may be partitioned into various shapes depending on the mode. For example, the target block described above with reference to Fig. 1 and the target block described above with reference to Fig. 2 may each be a PU.
In the skip mode, no partition may be present within the CU. In the skip mode, a 2N×2N mode 410 may be supported without partitioning, where in the 2N×2N mode the size of the PU and the size of the CU are identical to each other.
In the inter mode, eight types of partition shapes may be present within the CU. For example, in the inter mode, the 2N×2N mode 410, a 2N×N mode 415, an N×2N mode 420, an N×N mode 425, a 2N×nU mode 430, a 2N×nD mode 435, an nL×2N mode 440, and an nR×2N mode 445 may be supported.
In the intra mode, the 2N×2N mode 410, the N×N mode 425, the 2N×N mode, and the N×2N mode may be supported.
In the 2N×2N mode 410, a PU having a size of 2N×2N may be encoded. The PU having the size of 2N×2N may denote a PU whose size is identical to the size of the CU. For example, the PU having the size of 2N×2N may have a size of 64×64, 32×32, 16×16, or 8×8.
In the N×N mode 425, a PU having a size of N×N may be encoded.
For example, in intra prediction, when the size of a PU is 8×8, the four partitioned PUs may be encoded. The size of each partitioned PU may be 4×4.
When a PU is encoded in the intra mode, the PU may be encoded using any one of multiple intra prediction modes. For example, HEVC technology provides 35 intra prediction modes, and the PU may be encoded in any one of the 35 intra prediction modes.
Which of the 2N×2N mode 410 and the N×N mode 425 is to be used to encode the PU may be determined based on rate-distortion cost.
The encoding device 100 may perform an encoding operation on a PU having a size of 2N×2N. Here, the encoding operation may be the operation of encoding the PU in each of the multiple intra prediction modes that the encoding device 100 can use. Through the encoding operation, the best intra prediction mode for the PU having the size of 2N×2N may be derived. The best intra prediction mode may be the intra prediction mode that yields the minimum rate-distortion cost when the PU having the size of 2N×2N is encoded, among the multiple intra prediction modes that the encoding device 100 can use.
Further, the encoding device 100 may sequentially perform an encoding operation on each PU obtained by N×N partitioning. Here, the encoding operation may be the operation of encoding a PU in each of the multiple intra prediction modes that the encoding device 100 can use. Through the encoding operation, the best intra prediction mode for the PU having the size of N×N may be derived. The best intra prediction mode may be the intra prediction mode that yields the minimum rate-distortion cost when the PU having the size of N×N is encoded, among the multiple intra prediction modes that the encoding device 100 can use.
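The selection of the best intra prediction mode by minimum rate-distortion cost can be sketched as follows. The cost model D + λ·R and the toy distortion/rate values are assumptions for illustration; a real encoder would measure them by actually encoding the PU in each mode:

```python
def best_intra_mode(candidates, distortion_of, bits_of, lam):
    """Pick the mode minimizing the rate-distortion cost D + lambda * R."""
    best_mode, best_cost = None, float("inf")
    for mode in candidates:
        cost = distortion_of(mode) + lam * bits_of(mode)
        if cost < best_cost:
            best_mode, best_cost = mode, cost
    return best_mode, best_cost

# Toy stand-ins for measured distortion (SSE) and rate (bits) per mode
dist = {0: 120.0, 1: 90.0, 26: 40.0}.get
bits = {0: 2, 1: 3, 26: 6}.get
mode, cost = best_intra_mode([0, 1, 26], dist, bits, lam=5.0)
```

With these toy numbers the vertical mode 26 wins (cost 40 + 5·6 = 70), even though it spends more bits than the non-directional modes.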
Fig. 5 is a diagram showing the forms of transform units (TUs) that can be included in a CU.
A transform unit (TU) may be the base unit within a CU for processes such as transform, quantization, inverse transform, dequantization, entropy encoding, and entropy decoding. A TU may have a square or rectangular shape.
Among the CUs partitioned from the LCU, a CU that is no longer partitioned into CUs may be partitioned into one or more TUs. Here, the partition structure of the TUs may be a quadtree structure. For example, as shown in Fig. 5, a single CU 510 may be partitioned one or more times according to the quadtree structure. By such partitioning, the single CU 510 may be composed of TUs of various sizes.
In the encoding device 100, a coding tree unit (CTU) having a size of 64×64 may be partitioned into multiple smaller CUs according to a recursive quadtree structure. A single CU may be partitioned into four CUs having identical sizes. CUs may be recursively partitioned, and each CU may have a quadtree structure.
A CU may have a given depth. When a CU is partitioned, the CUs generated by the partitioning may have a depth that is increased by 1 from the depth of the partitioned CU.
For example, the depth of a CU may have a value ranging from 0 to 3, and, depending on the depth of the CU, the size of the CU may range from 64×64 down to 8×8.
Through the recursive partitioning of CUs, the optimal partitioning method that yields the minimum rate-distortion cost may be selected.
Fig. 6 is a diagram for explaining an embodiment of an intra prediction process.
The arrows radially extending from the center of the figure in Fig. 6 indicate the prediction directions of the intra prediction modes. Further, the numbers appearing near the arrows indicate examples of mode values assigned to the intra prediction modes or to the prediction directions of the intra prediction modes.
Intra encoding and/or decoding may be performed using reference samples of blocks neighboring the target block. The neighboring blocks may be neighboring reconstructed blocks. For example, intra encoding and/or decoding may be performed using the values of the reference samples included in each neighboring reconstructed block or the coding parameters of the neighboring reconstructed blocks.
The encoding device 100 and/or the decoding device 200 may generate a prediction block for the target block by performing intra prediction based on information about the samples in the target image. When intra prediction is performed, the encoding device 100 and/or the decoding device 200 may perform directional prediction and/or non-directional prediction based on at least one reconstructed reference sample.
A prediction block may be a block generated as a result of performing intra prediction. A prediction block may correspond to at least one of a CU, a PU, and a TU.
The unit of a prediction block may have a size corresponding to at least one of a CU, a PU, and a TU. A prediction block may have a square shape with a size of 2N×2N or N×N. The size N×N may include 4×4, 8×8, 16×16, 32×32, 64×64, and the like.
Alternatively, a prediction block may be a rectangular block having a size of M×N, such as 2×8, 4×8, 2×16, 4×16, or 8×16.
Intra prediction may be performed in consideration of the intra prediction mode of the target block. The number of intra prediction modes that the target block can have may be a predefined fixed value, or may be a value determined differently according to the attributes of the prediction block. For example, the attributes of the prediction block may include the size of the prediction block, the type of the prediction block, and the like.
The number of intra prediction modes may be fixed at 35 regardless of, for example, the size of the prediction block. Alternatively, the number of intra prediction modes may be, for example, 3, 5, 9, 17, 34, 35, or 36.
An intra prediction mode may be a non-directional mode or a directional mode. For example, as shown in Fig. 6, the intra prediction modes may include two non-directional modes and 33 directional modes.
The non-directional modes include a DC mode and a planar mode. For example, the value of the DC mode may be 1, and the value of the planar mode may be 0.
A directional mode may be a mode having a specific direction or a specific angle. Among the multiple intra prediction modes, the remaining modes other than the DC mode and the planar mode may be directional modes.
Each intra prediction mode may be represented by at least one of a mode number, a mode value, and a mode angle. The number of intra prediction modes may be M, where the value of M may be 1 or more. In other words, the number of intra prediction modes may be M, which includes the number of non-directional modes and the number of directional modes.
The number of intra prediction modes may be fixed at M regardless of the size of the block. Alternatively, the number of intra prediction modes may differ according to the size of the block and/or the type of color component. For example, the number of prediction modes may differ according to whether the color component is a luma signal or a chroma signal. For example, the larger the size of the block, the greater the number of intra prediction modes may be. Alternatively, the number of intra prediction modes corresponding to a luma component block may be greater than the number of intra prediction modes corresponding to a chroma component block.
For example, in the vertical mode having a mode value of 26, prediction may be performed in the vertical direction based on the pixel values of the reference samples.
Even in directional modes other than the above-described modes, the encoding device 100 and the decoding device 200 may perform intra prediction on the target unit using reference samples according to the angles corresponding to the directional modes.
Intra prediction modes located on the right side of the vertical mode may be referred to as "vertical-right modes". Intra prediction modes located below the horizontal mode may be referred to as "horizontal-below modes". For example, in Fig. 6, the intra prediction modes having mode values of 27, 28, 29, 30, 31, 32, 33, and 34 may be vertical-right modes 613. The intra prediction modes having mode values of 2, 3, 4, 5, 6, 7, 8, and 9 may be horizontal-below modes 616.
The number of intra prediction modes described above and the mode value of each intra prediction mode are merely exemplary. The number of intra prediction modes described above and the mode value of each intra prediction mode may be defined differently according to the embodiment, the implementation, and/or the requirements.
In order to perform intra prediction on the target block, a step of checking whether the samples included in the reconstructed neighboring blocks can be used as reference samples of the target block may be performed. When, among the samples in the neighboring blocks, there is a sample that cannot be used as a reference sample of the target block, a value generated via interpolation and/or copying that uses at least one sample value among the samples included in the reconstructed neighboring blocks may replace the sample value of the sample that cannot be used as a reference sample. When the value generated via copying and/or interpolation replaces the sample value of the existing sample, that sample may be used as a reference sample of the target block.
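The substitution of unavailable reference samples by copying from available ones can be sketched as follows. The scan order and the fallback to the mid-level value when no reference sample is available at all are assumptions for illustration:

```python
def fill_reference_samples(refs, bit_depth=8):
    """Replace unavailable reference samples (None) by copying the
    nearest previously scanned available sample; samples before the
    first available one copy it, and if nothing is available at all,
    the mid-level value (e.g. 128 for 8-bit) is used."""
    out = list(refs)
    last = next((s for s in out if s is not None), 1 << (bit_depth - 1))
    for i, s in enumerate(out):
        if s is None:
            out[i] = last
        else:
            last = s
    return out

filled = fill_reference_samples([None, None, 100, 104, None, 110])
```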
In intra prediction, a filter may be applied to at least one of the reference samples and the prediction samples based on at least one of the intra prediction mode and the size of the target block.
When the intra prediction mode is the planar mode, the sample value of a prediction target sample may be generated, depending on the position of the prediction target sample within the prediction block, using a weighted sum of the above reference sample of the target block, the left reference sample of the target block, the above-right reference sample of the target block, and the below-left reference sample of the target block.
When the intra prediction mode is the DC mode, the average of the reference samples above the target block and the reference samples to the left of the target block may be used when the prediction block of the target block is generated.
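The DC-mode prediction just described can be sketched as follows; the rounded integer average is an assumption for illustration:

```python
def dc_prediction(top_refs, left_refs, n):
    """Fill an n x n prediction block with the average of the top
    and left reference samples (DC intra mode)."""
    refs = top_refs[:n] + left_refs[:n]
    dc = (sum(refs) + len(refs) // 2) // len(refs)  # rounded average
    return [[dc] * n for _ in range(n)]

pred = dc_prediction([100, 102, 104, 106], [98, 98, 100, 100], 4)
```

Every sample of the 4×4 prediction block receives the same DC value, here 101.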
When the intra prediction mode is a directional mode, the prediction block may be generated using the above reference samples, the left reference samples, the above-right reference samples, and/or the below-left reference samples of the target block.
In order to generate the above-described prediction samples, real-number-based interpolation may be performed.
The intra prediction mode of the target block may be predicted from the intra prediction mode of a neighboring block adjacent to the target block, and the information used for the prediction may be entropy-encoded/decoded.
For example, when the intra prediction modes of the target block and a neighboring block are identical to each other, a predefined flag may be used to signal that the intra prediction modes of the target block and the neighboring block are identical.
For example, an indicator indicating, among the intra prediction modes of multiple neighboring blocks, the intra prediction mode that is identical to the intra prediction mode of the target block may be signaled.
When the intra prediction modes of the target block and a neighboring block are different from each other, the intra prediction mode information of the target block may be entropy-encoded/decoded based on the intra prediction mode of the neighboring block.
Fig. 7 is a diagram for explaining the positions of the reference samples used in an intra prediction process.
Fig. 7 shows the positions of the reference samples used for intra prediction of the target block. Referring to Fig. 7, the reconstructed reference samples used for intra prediction of the target block may include below-left reference samples 731, left reference samples 733, an above-left corner reference sample 735, above reference samples 737, and above-right reference samples 739.
For example, the left reference samples 733 may denote reconstructed reference pixels adjacent to the left side of the target block. The above reference samples 737 may denote reconstructed reference pixels adjacent to the top of the target block. The above-left corner reference sample 735 may denote a reconstructed reference pixel located at the above-left corner of the target block. The below-left reference samples 731 may denote, among the samples located on the same line as the left sample line composed of the left reference samples 733, the reference samples located below the left sample line. The above-right reference samples 739 may denote, among the samples located on the same line as the above sample line composed of the above reference samples 737, the reference samples located to the right of the above sample line.
When the size of the target block is N×N, the numbers of below-left reference samples 731, left reference samples 733, above reference samples 737, and above-right reference samples 739 may each be N.
A prediction block may be generated by performing intra prediction on the target block. Generating the prediction block may include determining the values of the pixels in the prediction block. The sizes of the target block and the prediction block may be identical.
The reference samples used for intra prediction of the target block may vary according to the intra prediction mode of the target block. The direction of the intra prediction mode may indicate a dependence between the reference samples and the pixels of the prediction block. For example, the value of a specified reference sample may be used as the value of one or more specified pixels in the prediction block. In this case, the specified reference sample and the one or more specified pixels in the prediction block may be the sample and pixels located on a straight line running in the direction of the intra prediction mode. In other words, the value of the specified reference sample may be copied as the value of a pixel located in the direction opposite the direction of the intra prediction mode. Alternatively, the value of a pixel in the prediction block may be the value of the reference sample located in the direction of the intra prediction mode relative to the position of that pixel.
In one example, when the intra prediction mode of the target block is the vertical mode having a mode value of 26, the above reference samples 737 may be used for intra prediction. When the intra prediction mode is the vertical mode, the value of a pixel in the prediction block may be the value of the reference sample located vertically above the position of the pixel. Therefore, the above reference samples 737 adjacent to the top of the target block may be used for intra prediction. Further, the values of the pixels in each row of the prediction block may be identical to the values of the above reference samples 737.
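The vertical-mode copying just described can be sketched as follows; this is a minimal illustration of the mode, with names of the author's devices omitted:

```python
def vertical_prediction(top_refs, n):
    """Vertical intra mode: every row of the prediction block copies
    the above reference samples, so each column is constant."""
    return [list(top_refs[:n]) for _ in range(n)]

pred = vertical_prediction([10, 20, 30, 40], 4)
```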
In one example, when the mode value of the intra prediction mode of the target block is 18, at least some of the left reference samples 733, the above-left corner reference sample 735, and at least some of the above reference samples 737 may be used for intra prediction. When the mode value of the intra prediction mode is 18, the value of a pixel in the prediction block may be the value of the reference sample diagonally located above and to the left of that pixel.
The number of reference samples used to determine the pixel value of one pixel in the prediction block may be 1, or may be 2 or more.
As described above, the pixel value of a pixel in the prediction block may be determined according to the position of the pixel and the position of the reference sample indicated by the direction of the intra prediction mode. When the position of the pixel and the position of the reference sample indicated by the direction of the intra prediction mode are integer positions, the value of the single reference sample indicated by the integer position may be used to determine the pixel value of the pixel in the prediction block.
When the position of the pixel and the position of the reference sample indicated by the direction of the intra prediction mode are not integer positions, an interpolated reference sample may be generated based on the two reference samples closest to the position of the reference sample. The value of the interpolated reference sample may be used to determine the pixel value of the pixel in the prediction block. In other words, when the position of the pixel in the prediction block and the position of the reference sample indicated by the direction of the intra prediction mode indicate a location between two reference samples, an interpolated value based on the values of the two samples may be generated.
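The interpolation between the two nearest reference samples at a fractional position can be sketched as follows. Linear (two-tap) interpolation is an assumption for illustration; an actual codec may use a different interpolation filter:

```python
def interpolated_reference(refs, pos):
    """Linearly interpolate between the two reference samples nearest
    to a fractional position pos (e.g. the position projected along
    a directional intra prediction mode)."""
    i = int(pos)
    frac = pos - i
    if frac == 0:             # integer position: use the sample directly
        return float(refs[i])
    return (1 - frac) * refs[i] + frac * refs[i + 1]

# Position 1.25 lies a quarter of the way from refs[1] to refs[2]
value = interpolated_reference([100, 104, 112], 1.25)
```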
The prediction block generated via prediction may differ from the original target block. In other words, there may be a prediction error, which is the difference between the target block and the prediction block, and there may also be prediction errors between the pixels of the target block and the pixels of the prediction block.
Hereinafter, the terms "difference", "error", and "residual" are used with the same meaning and may be used interchangeably.
For example, in the case of directional intra prediction, the longer the distance between a pixel of the prediction block and a reference sample, the larger the prediction error that may occur. Such prediction errors may cause discontinuities between the generated prediction block and neighboring blocks.
To reduce the prediction error, a filtering operation on the prediction block may be used. The filtering operation may be configured to adaptively apply a filter to regions of the prediction block considered to have large prediction errors. For example, a region considered to have a large prediction error may be the boundary of the prediction block. Further, the regions of the prediction block considered to have large prediction errors may differ depending on the intra-prediction mode, and the characteristics of the filter may also differ depending on the intra-prediction mode.
Fig. 8 is a diagram for explaining an embodiment of an inter-prediction procedure.
The rectangles shown in Fig. 8 may represent images (or pictures). Further, in Fig. 8, the arrows may represent prediction directions. That is, each image may be encoded and/or decoded according to its prediction direction.
Images may be classified, according to coding type, into intra pictures (I pictures), uni-prediction pictures or predictive-coded pictures (P pictures), and bi-prediction pictures or bi-predictive-coded pictures (B pictures). Each picture may be encoded according to its coding type.
When a target image that is the target of encoding is an I picture, the target image may be encoded using the data contained in the image itself, without inter prediction that refers to other images. For example, an I picture may be encoded only via intra prediction.
When the target image is a P picture, the target image may be encoded via inter prediction using a reference picture present in one direction. Here, the one direction may be the forward direction or the backward direction.
When the target image is a B picture, the image may be encoded via inter prediction using reference pictures present in both directions, or may be encoded via inter prediction using a reference picture present in one of the forward and backward directions. Here, the two directions may be the forward direction and the backward direction.
A P picture and a B picture that are encoded and/or decoded using reference pictures may be regarded as images for which inter prediction is used.
Hereinafter, inter prediction in an inter mode according to an embodiment will be described in detail.
Inter prediction may be performed using motion information.
In an inter mode, the encoding device 100 may perform inter prediction and/or motion compensation on a target block. The decoding device 200 may perform, on the target block, inter prediction and/or motion compensation corresponding to the inter prediction and/or motion compensation performed by the encoding device 100.
The motion information of the target block may be derived individually by each of the encoding device 100 and the decoding device 200 during inter prediction. The motion information may be derived using the motion information of a reconstructed neighboring block, the motion information of a co-located block (col block), and/or the motion information of a block adjacent to the col block. The col block may be a block in a previously reconstructed co-located picture (col picture). The position of the col block in the col picture may correspond to the position of the target block in the target image. The col picture may be any one of the one or more reference pictures included in a reference picture list.
For example, the encoding device 100 or the decoding device 200 may perform prediction and/or motion compensation by using the motion information of a spatial candidate and/or a temporal candidate as the motion information of the target block. The target block may denote a PU and/or a PU partition.
A spatial candidate may be a reconstructed block that is spatially adjacent to the target block.
A temporal candidate may be a reconstructed block corresponding to the target block in the previously reconstructed co-located picture (col picture).
In inter prediction, the encoding device 100 and the decoding device 200 may improve encoding efficiency and decoding efficiency by using the motion information of spatial candidates and/or temporal candidates. The motion information of a spatial candidate may be referred to as "spatial motion information". The motion information of a temporal candidate may be referred to as "temporal motion information".
Hereinafter, the motion information of a spatial candidate may be the motion information of the PU that includes the spatial candidate. The motion information of a temporal candidate may be the motion information of the PU that includes the temporal candidate. The motion information of a candidate block may be the motion information of the PU that includes the candidate block.
Inter prediction may be performed using a reference picture.
The reference picture may be at least one of a picture preceding the target picture and a picture following the target picture. The reference picture may be an image used for the prediction of the target block.
In inter prediction, a region in the reference picture may be specified using a reference picture index (or refIdx) indicating the reference picture, together with a motion vector, which will be described later. Here, the region specified in the reference picture may denote a reference block.
Inter prediction may select a reference picture, and may also select, from the reference picture, a reference block corresponding to the target block. Further, inter prediction may generate a prediction block for the target block using the selected reference block.
The motion information may be derived during inter prediction by each of the encoding device 100 and the decoding device 200.
A spatial candidate may be a block that 1) is present in the target picture, 2) has previously been reconstructed via encoding and/or decoding, and 3) is adjacent to the target block or located at a corner of the target block. Here, a "block located at a corner of the target block" may be a block vertically adjacent to a neighboring block that is horizontally adjacent to the target block, or a block horizontally adjacent to a neighboring block that is vertically adjacent to the target block. Further, "block located at a corner of the target block" may have the same meaning as "block adjacent to a corner of the target block". The meaning of "block located at a corner of the target block" may be included in the meaning of "block adjacent to the target block".
For example, a spatial candidate may be a reconstructed block to the left of the target block, a reconstructed block above the target block, a reconstructed block at the bottom-left corner of the target block, a reconstructed block at the top-right corner of the target block, or a reconstructed block at the top-left corner of the target block.
Each of the encoding device 100 and the decoding device 200 may identify a block present in the col picture at a position spatially corresponding to the target block. Here, the position of the target block in the target picture and the position of the identified block in the col picture may correspond to each other.
Each of the encoding device 100 and the decoding device 200 may determine, as the temporal candidate, a col block present at a predefined relative position with respect to the identified block. The predefined relative position may be a position inside and/or outside the identified block.
For example, the col block may include a first col block and a second col block. When the coordinates of the identified block are (xP, yP) and the size of the identified block is represented by (nPSW, nPSH), the first col block may be the block located at coordinates (xP + nPSW, yP + nPSH). The second col block may be the block located at coordinates (xP + (nPSW >> 1), yP + (nPSH >> 1)). The second col block may be selectively used when the first col block is unavailable.
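Under the coordinate convention just given, the positions of the two candidate col blocks can be sketched as below; the function name is illustrative only.

```python
def col_block_positions(xP, yP, nPSW, nPSH):
    """Positions used to locate the first and second col blocks for an
    identified block at (xP, yP) of size (nPSW, nPSH)."""
    first = (xP + nPSW, yP + nPSH)                 # outside the bottom-right corner
    second = (xP + (nPSW >> 1), yP + (nPSH >> 1))  # centre of the identified block
    return first, second
```

As stated above, the second position would only be consulted when the block at the first position is unavailable.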
The motion vector of the target block may be determined based on the motion vector of the col block. Each of the encoding device 100 and the decoding device 200 may scale the motion vector of the col block. The scaled motion vector of the col block may be used as the motion vector of the target block. Further, the motion vector of the temporal candidate's motion information stored in a list may be a scaled motion vector.
The ratio of the motion vector of the target block to the motion vector of the col block may be identical to the ratio of a first distance to a second distance. The first distance may be the distance between the reference picture and the target picture of the target block. The second distance may be the distance between the reference picture and the col picture of the col block.
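The scaling relation just described can be sketched as follows. This is a simplified illustration assuming floating-point arithmetic (a real codec would typically use fixed-point scaling), with hypothetical names.

```python
def scale_col_motion_vector(mv_col, first_distance, second_distance):
    """Scale the col block's motion vector so that the ratio of the
    target block's MV to the col block's MV equals the ratio of the
    first distance (reference picture to target picture) to the second
    distance (reference picture to col picture)."""
    scale = first_distance / second_distance
    return (mv_col[0] * scale, mv_col[1] * scale)
```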
The scheme for deriving the motion information may change depending on the inter-prediction mode of the target block. For example, as inter-prediction modes applied for inter prediction, there may be an advanced motion vector predictor (AMVP) mode, a merge mode, a skip mode, a current picture reference mode, etc. The merge mode may also be referred to as a "motion merge mode". Each of these modes will be described in detail below.
1) AMVP mode
When the AMVP mode is used, the encoding device 100 may search for a similar block in the neighborhood of the target block. The encoding device 100 may obtain a prediction block by performing prediction on the target block using the motion information of the found similar block. The encoding device 100 may encode a residual block, which is the difference between the target block and the prediction block.
1-1) Creating a list of predicted motion vector candidates
When the AMVP mode is used as the prediction mode, each of the encoding device 100 and the decoding device 200 may create a list of predicted motion vector candidates using the motion vectors of spatial candidates, the motion vectors of temporal candidates, and a zero vector. The predicted motion vector candidate list may include one or more predicted motion vector candidates. At least one of the motion vector of a spatial candidate, the motion vector of a temporal candidate, and the zero vector may be determined and used as a predicted motion vector candidate.
Hereinafter, the terms "predicted motion vector (candidate)" and "motion vector (candidate)" are used with the same meaning and may be used interchangeably.
Spatial motion candidates may include reconstructed spatial neighboring blocks. In other words, the motion vectors of reconstructed neighboring blocks may be referred to as "spatial prediction motion vector candidates".
Temporal motion candidates may include the col block and blocks adjacent to the col block. In other words, the motion vector of the col block or the motion vector of a block adjacent to the col block may be referred to as a "temporal prediction motion vector candidate".
The zero vector may be the (0, 0) motion vector.
A predicted motion vector candidate may be a motion vector predictor used for predicting a motion vector. Further, in the encoding device 100, each predicted motion vector candidate may be an initial search position for a motion vector.
1-2) Searching for a motion vector using the list of predicted motion vector candidates
The encoding device 100 may determine, within a search range, the motion vector that will be used to encode the target block, using the list of predicted motion vector candidates. Further, the encoding device 100 may determine, among the predicted motion vector candidates present in the predicted motion vector candidate list, the predicted motion vector candidate to be used as the predicted motion vector of the target block.
The motion vector that will be used to encode the target block may be the motion vector that can be encoded at minimum cost.
Further, the encoding device 100 may determine whether to encode the target block using the AMVP mode.
1-3) Transmission of inter-prediction information
The encoding device 100 may generate a bitstream that includes the inter-prediction information required for inter prediction. The decoding device 200 may perform inter prediction on the target block using the inter-prediction information of the bitstream.
The inter-prediction information may include 1) mode information indicating whether the AMVP mode is used, 2) a predicted motion vector index, 3) a motion vector difference (MVD), 4) a reference direction, and 5) a reference picture index.
Further, the inter-prediction information may include a residual signal.
When the mode information indicates that the AMVP mode is used, the decoding device 200 may obtain the predicted motion vector index, the MVD, the reference direction, and the reference picture index from the bitstream via entropy decoding.
The predicted motion vector index may indicate, among the predicted motion vector candidates included in the predicted motion vector candidate list, the predicted motion vector candidate that will be used for the prediction of the target block.
1-4) Inter prediction in the AMVP mode using inter-prediction information
The decoding device 200 may derive predicted motion vector candidates using the predicted motion vector candidate list, and may determine the motion information of the target block based on the derived predicted motion vector candidates.
The decoding device 200 may determine the motion vector candidate for the target block, among the predicted motion vector candidates included in the predicted motion vector candidate list, using the predicted motion vector index. The decoding device 200 may select, among the predicted motion vector candidates included in the predicted motion vector candidate list, the predicted motion vector candidate indicated by the predicted motion vector index as the predicted motion vector of the target block.
The motion vector that will actually be used for inter prediction of the target block may not match the predicted motion vector. The MVD may be used to indicate the difference between the motion vector that will actually be used for inter prediction of the target block and the predicted motion vector. The encoding device 100 may derive a predicted motion vector that is similar to the motion vector that will actually be used for inter prediction of the target block, so as to use an MVD that is as small as possible.
The MVD may be the difference between the motion vector of the target block and the predicted motion vector. The encoding device 100 may calculate the MVD, and may entropy-encode the MVD.
The MVD may be transmitted from the encoding device 100 to the decoding device 200 through the bitstream. The decoding device 200 may decode the received MVD. The decoding device 200 may derive the motion vector of the target block by summing the decoded MVD and the predicted motion vector. In other words, the motion vector of the target block derived by the decoding device 200 may be the sum of the entropy-decoded MVD and the motion vector candidate.
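The derivation just described, in which the decoder recovers the motion vector as the sum of the decoded MVD and the predictor selected by the index, can be sketched as below; the function and parameter names are illustrative only.

```python
def derive_amvp_motion_vector(candidate_list, mvp_index, mvd):
    """Recover the target block's motion vector from the signalled
    predicted motion vector index and motion vector difference (MVD)."""
    mvp = candidate_list[mvp_index]          # predictor chosen by the index
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])
```

Only the index and the (typically small) MVD need to be signalled, which is the bit-saving effect described below.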
The reference direction may indicate the reference picture list that will be used for the prediction of the target block. For example, the reference direction may indicate one of reference picture list L0 and reference picture list L1.
The reference direction merely indicates the reference picture list that will be used for the prediction of the target block, and does not mean that the directions of the reference pictures are restricted to the forward direction or the backward direction. In other words, each of reference picture list L0 and reference picture list L1 may include pictures in the forward direction and/or the backward direction.
The reference direction being unidirectional may mean that a single reference picture list is used. The reference direction being bidirectional may mean that two reference picture lists are used. In other words, the reference direction may indicate one of the following cases: the case where only reference picture list L0 is used, the case where only reference picture list L1 is used, and the case where two reference picture lists are used.
The reference picture index may indicate, among the reference pictures in a reference picture list, the reference picture that will be used for the prediction of the target block. The reference picture index may be entropy-encoded by the encoding device 100. The entropy-encoded reference picture index may be signaled by the encoding device 100 to the decoding device 200 through the bitstream.
When two reference picture lists are used for the prediction of the target block, a single reference picture index and a single motion vector may be used for each of the reference picture lists. Further, when two reference picture lists are used for the prediction of the target block, two prediction blocks may be specified for the target block. For example, the (final) prediction block of the target block may be generated using the average or the weighted sum of the two prediction blocks for the target block.
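The combination of the two prediction blocks into the final prediction can be sketched as below. A simple per-pixel weighted average with default equal weights is assumed, and the names are illustrative.

```python
def combine_bi_prediction(pred_l0, pred_l1, w0=0.5, w1=0.5):
    """Generate the final prediction block of the target block as the
    (weighted) per-pixel average of the two prediction blocks, one from
    each reference picture list."""
    return [[w0 * a + w1 * b for a, b in zip(row0, row1)]
            for row0, row1 in zip(pred_l0, pred_l1)]
```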
The motion vector of the target block may be derived from the predicted motion vector index, the MVD, the reference direction, and the reference picture index.
The decoding device 200 may generate the prediction block for the target block based on the derived motion vector and the reference picture index. For example, the prediction block may be the reference block indicated by the derived motion vector in the reference picture indicated by the reference picture index.
Since the predicted motion vector index and the MVD are encoded, rather than the motion vector of the target block itself, the number of bits transmitted from the encoding device 100 to the decoding device 200 may be reduced, and encoding efficiency may be improved.
The motion information of a reconstructed neighboring block may be used for the target block. In a specific inter-prediction mode, the encoding device 100 may not separately encode the actual motion information of the target block. Rather than encoding the motion information of the target block, additional information may be encoded that enables the motion information of the target block to be derived using the motion information of the reconstructed neighboring block. Since this additional information is encoded, the number of bits transmitted to the decoding device 200 may be reduced, and encoding efficiency may be improved.
For example, as inter-prediction modes in which the motion information of the target block is not directly encoded, there may be a skip mode and/or a merge mode. Here, each of the encoding device 100 and the decoding device 200 may use an indicator and/or an index that indicates, among reconstructed neighboring units, the unit whose motion information will be used as the motion information of the target unit.
2) Merge mode
As a scheme for deriving the motion information of the target block, there is merging. The term "merging" may mean merging the motion of multiple blocks. "Merging" may mean that the motion information of one block is also applied to other blocks. In other words, the merge mode may be a mode in which the motion information of the target block is derived from the motion information of a neighboring block.
When the merge mode is used, the encoding device 100 may predict the motion information of the target block using the motion information of a spatial candidate and/or the motion information of a temporal candidate. The spatial candidates may include reconstructed spatial neighboring blocks that are spatially adjacent to the target block. The spatially adjacent blocks may include a left adjacent block and an above adjacent block. The temporal candidates may include the col block. The terms "spatial candidate" and "spatial merge candidate" are used with the same meaning and may be used interchangeably. The terms "temporal candidate" and "temporal merge candidate" are used with the same meaning and may be used interchangeably.
The encoding device 100 may obtain a prediction block via prediction. The encoding device 100 may encode a residual block, which is the difference between the target block and the prediction block.
2-1) Creating a merge candidate list
When the merge mode is used, each of the encoding device 100 and the decoding device 200 may create a merge candidate list using the motion information of spatial candidates and/or the motion information of temporal candidates. The motion information may include 1) a motion vector, 2) a reference picture index, and 3) a reference direction. The reference direction may be unidirectional or bidirectional.
The merge candidate list may include merge candidates. A merge candidate may be a piece of motion information. In other words, the merge candidate list may be a list in which multiple pieces of motion information are stored.
A merge candidate may be a piece of motion information of a temporal candidate and/or a spatial candidate. Further, the merge candidate list may include a new merge candidate generated by combining merge candidates already present in the merge candidate list. In other words, the merge candidate list may include new motion information generated by combining multiple pieces of motion information previously present in the merge candidate list.
Further, the merge candidate list may include the motion information of a zero vector. The zero vector may also be referred to as a "zero merge candidate".
In other words, the pieces of motion information in the merge candidate list may each be at least one of the following: 1) the motion information of a spatial candidate, 2) the motion information of a temporal candidate, 3) motion information generated by combining multiple pieces of motion information previously present in the merge candidate list, and 4) a zero vector.
The motion information may include 1) a motion vector, 2) a reference picture index, and 3) a reference direction. The reference direction may also be referred to as an "inter-prediction indicator". The reference direction may be unidirectional or bidirectional. A unidirectional reference direction may indicate L0 prediction or L1 prediction.
The merge candidate list may be created before the prediction in the merge mode is performed.
The number of merge candidates in the merge candidate list may be predefined. Each of the encoding device 100 and the decoding device 200 may add merge candidates to the merge candidate list according to a predefined scheme or a predefined priority, so that the merge candidate list has the predefined number of merge candidates. The merge candidate list of the encoding device 100 and the merge candidate list of the decoding device 200 may be made identical to each other using the predefined scheme and the predefined priority.
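A minimal sketch of filling the merge candidate list to a predefined size is given below. The specific priority order (spatial candidates before temporal candidates), the duplicate check, and the zero-vector padding are assumptions chosen to mirror the "predefined scheme or predefined priority" described above, not a definitive specification.

```python
def build_merge_candidate_list(spatial_candidates, temporal_candidates,
                               max_candidates):
    """Add candidates in a fixed priority order, skip unavailable (None)
    and duplicate entries, then pad with zero merge candidates so that
    the list always holds exactly max_candidates entries."""
    merge_list = []
    for cand in spatial_candidates + temporal_candidates:
        if cand is not None and cand not in merge_list:
            merge_list.append(cand)
        if len(merge_list) == max_candidates:
            return merge_list
    while len(merge_list) < max_candidates:
        merge_list.append((0, 0))  # zero merge candidate
    return merge_list
```

Because the encoder and the decoder would run the same deterministic procedure on the same reconstructed data, their lists stay identical, which is what makes signalling a bare merge index sufficient.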
Merging may be applied on a CU or PU basis. When merging is performed on a CU or PU basis, the encoding device 100 may transmit a bitstream including predefined information to the decoding device 200. For example, the predefined information may include 1) information indicating whether merging is performed for each block partition, and 2) information about the block, among the blocks that are spatial candidates and/or temporal candidates for the target block, with which merging will be performed.
2-2) Searching for a motion vector using the merge candidate list
The encoding device 100 may determine the merge candidate that will be used to encode the target block. For example, the encoding device 100 may perform prediction on the target block using the merge candidates in the merge candidate list, and may generate a residual block for each merge candidate. The encoding device 100 may encode the target block using the merge candidate that yields minimum cost in the prediction and in the encoding of the residual block.
Further, the encoding device 100 may determine whether to encode the target block using the merge mode.
2-3) Transmission of inter-prediction information
The encoding device 100 may generate a bitstream that includes the inter-prediction information required for inter prediction. The encoding device 100 may generate entropy-encoded inter-prediction information by performing entropy encoding on the inter-prediction information, and may transmit a bitstream including the entropy-encoded inter-prediction information to the decoding device 200. The entropy-encoded inter-prediction information may be signaled by the encoding device 100 to the decoding device 200 through the bitstream.
The decoding device 200 may perform inter prediction on the target block using the inter-prediction information of the bitstream.
The inter-prediction information may include 1) mode information indicating whether the merge mode is used and 2) a merge index.
Further, the inter-prediction information may include a residual signal.
The decoding device 200 may obtain the merge index from the bitstream only when the mode information indicates that the merge mode is used.
The mode information may be a merge flag. The unit of the mode information may be a block. The information about a block may include the mode information, and the mode information may indicate whether the merge mode is applied to the block.
The merge index may indicate, among the merge candidates included in the merge candidate list, the merge candidate that will be used for the prediction of the target block. Alternatively, the merge index may indicate, among neighboring blocks spatially or temporally adjacent to the target block, the block with which the target block is to be merged.
2-4) Inter prediction in the merge mode using inter-prediction information
The decoding device 200 may perform prediction on the target block using the merge candidate indicated by the merge index among the merge candidates included in the merge candidate list.
The motion vector of the target block may be specified by the motion vector, the reference picture index, and the reference direction of the merge candidate indicated by the merge index.
3) Skip mode
The skip mode may be a mode in which the motion information of a spatial candidate or the motion information of a temporal candidate is applied to the target block without change. Further, the skip mode may be a mode in which a residual signal is not used. In other words, when the skip mode is used, the reconstructed block may be the prediction block.
The difference between the merge mode and the skip mode is whether a residual signal is transmitted or used. That is, the skip mode may be similar to the merge mode, except that the residual signal is not transmitted or used.
When the skip mode is used, the encoding device 100 may transmit, to the decoding device 200 through the bitstream, information about the block, among the blocks that are spatial candidates or temporal candidates, whose motion information will be used as the motion information of the target block. The encoding device 100 may generate entropy-encoded information by performing entropy encoding on this information, and may signal the entropy-encoded information to the decoding device 200 through the bitstream.
Further, when the skip mode is used, the encoding device 100 may not transmit other syntax information (such as the MVD) to the decoding device 200. For example, when the skip mode is used, the encoding device 100 may not signal, to the decoding device 200, syntax elements related to at least one of the MVD, the coded block flag, and the transform coefficient level.
3-1) Creating a merge candidate list
The skip mode may also use a merge candidate list. In other words, a merge candidate list may be used in both the merge mode and the skip mode. In this respect, the merge candidate list may also be referred to as a "skip candidate list" or a "merge/skip candidate list".
Alternatively, the skip mode may use an additional candidate list different from that of the merge mode. In this case, in the following description, "merge candidate list" and "merge candidate" may be replaced with "skip candidate list" and "skip candidate", respectively.
The merge candidate list may be created before the prediction in the skip mode is performed.
3-2) Searching for a motion vector using the merge candidate list
The encoding device 100 may determine the merge candidate that will be used to encode the target block. For example, the encoding device 100 may perform prediction on the target block using the merge candidates in the merge candidate list. The encoding device 100 may encode the target block using the merge candidate that yields minimum cost in the prediction.
Further, the encoding device 100 may determine whether to encode the target block using the skip mode.
3-3) Transmission of inter-prediction information
The encoding device 100 may generate a bitstream that includes the inter-prediction information required for inter prediction. The decoding device 200 may perform inter prediction on the target block using the inter-prediction information of the bitstream.
The inter-prediction information may include 1) mode information indicating whether the skip mode is used and 2) a skip index.
The skip index may be identical to the merge index described above.
When the skip mode is used, the target block may be encoded without using a residual signal. The inter-prediction information may not include a residual signal. Alternatively, the bitstream may not include a residual signal.
The decoding device 200 may obtain the skip index from the bitstream only when the mode information indicates that the skip mode is used. As described above, the merge index and the skip index may be identical to each other. The decoding device 200 may obtain the skip index from the bitstream only when the mode information indicates that the merge mode or the skip mode is used.
The skip index may indicate, among the merge candidates included in the merge candidate list, the merge candidate that will be used for the prediction of the target block.
3-4) Inter prediction in the skip mode using inter-prediction information
The decoding device 200 may perform prediction on the target block using the merge candidate indicated by the skip index among the merge candidates included in the merge candidate list.
The motion vector of the target block may be specified by the motion vector, the reference picture index, and the reference direction of the merge candidate indicated by the skip index.
4) Current picture reference mode
The current picture reference mode may denote a prediction mode that uses a previously reconstructed region in the current picture to which the target block belongs.
A vector for specifying the previously reconstructed region may be defined. Whether the target block is encoded in the current picture reference mode may be determined using the reference picture index of the target block.
A flag or an index indicating whether the target block is a block encoded in the current picture reference mode may be signaled by the encoding device 100 to the decoding device 200. Alternatively, whether the target block is a block encoded in the current picture reference mode may be inferred from the reference picture index of the target block.
When the target block is encoded in the current picture reference mode, the current picture may be added at a fixed position or at an arbitrary position in the reference picture list for the target block.
For example, the fixed position may be the position at which the reference picture index is 0, or the last position.
When the current picture is added at an arbitrary position in the reference picture list, an additional reference picture index indicating such an arbitrary position may be signaled by the encoding device 100 to the decoding device 200.
In the AMVP mode, the merge mode, and the skip mode described above, an index into a list may be used to specify, among the pieces of motion information in the list, the motion information that will be used for the prediction of the target block.
To improve encoding efficiency, the encoding device 100 may signal only the index of the element that yields minimum cost in the inter prediction of the target block, among the elements in the list. The encoding device 100 may encode the index, and may signal the encoded index.
Therefore, it must be possible for the encoding device 100 and the decoding device 200 to derive the above-described lists (that is, the predicted motion vector candidate list and the merge candidate list) based on identical data using an identical scheme. Here, the identical data may include a reconstructed picture and a reconstructed block. Further, in order to specify an element using an index, the order of the elements in the list must be fixed.
Fig. 9 illustrates spatial candidates according to an embodiment.
In Fig. 9, the positions of the spatial candidates are shown.
The large block in the center of the figure may denote the target block. The five small blocks may denote the spatial candidates.
The coordinates of the target block may be (xP, yP), and the size of the target block may be expressed as (nPSW, nPSH).
Spatial candidate A0 may be a block adjacent to the bottom-left corner of the target block. A0 may be the block occupying the pixel located at coordinates (xP - 1, yP + nPSH + 1).
Spatial candidate A1 may be a block adjacent to the left side of the target block. A1 may be the lowest block among the blocks adjacent to the left side of the target block. Alternatively, A1 may be the block adjacent to the top of A0. A1 may be the block occupying the pixel located at coordinates (xP - 1, yP + nPSH).
Spatial candidate B0 may be a block adjacent to the top-right corner of the target block. B0 may be the block occupying the pixel located at coordinates (xP + nPSW + 1, yP - 1).
Spatial candidate B1 may be a block adjacent to the top of the target block. B1 may be the rightmost block among the blocks adjacent to the top of the target block. Alternatively, B1 may be the block adjacent to the left of B0. B1 may be the block occupying the pixel located at coordinates (xP + nPSW, yP - 1).
Spatial candidate B2 may be a block adjacent to the top-left corner of the target block. B2 may be the block occupying the pixel located at coordinates (xP - 1, yP - 1).
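The five candidate positions just listed can be computed directly from the target block's coordinates and size. The following sketch simply encodes the coordinates given above; the function name and dictionary representation are illustrative.

```python
def spatial_candidate_positions(xP, yP, nPSW, nPSH):
    """Pixel coordinates occupied by the five spatial candidates of Fig. 9.

    (xP, yP) is the top-left coordinate of the target block;
    (nPSW, nPSH) is its width and height.
    """
    return {
        "A0": (xP - 1, yP + nPSH + 1),  # below the bottom-left corner
        "A1": (xP - 1, yP + nPSH),      # lowest block adjacent to the left side
        "B0": (xP + nPSW + 1, yP - 1),  # beyond the top-right corner
        "B1": (xP + nPSW, yP - 1),      # rightmost block adjacent to the top
        "B2": (xP - 1, yP - 1),         # at the top-left corner
    }
```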
Determining the availability of spatial candidates and temporal candidates
In order to include the motion information of a spatial candidate or the motion information of a temporal candidate in a list, it must be determined whether the motion information of the spatial candidate or the motion information of the temporal candidate is available.
Hereinafter, candidate blocks may include spatial candidates and temporal candidates.
For example, the operation of determining whether the motion information of a spatial candidate or the motion information of a temporal candidate is available may be performed by sequentially applying the following steps 1) to 4).
Step 1) When the PU containing the candidate block is located outside the boundary of the picture, the availability of the candidate block may be set to "false". The statement "the availability is set to false" may have the same meaning as "set to unavailable".
Step 2) When the PU containing the candidate block is located outside the boundary of the slice, the availability of the candidate block may be set to "false". When the target block and the candidate block are located in different slices, the availability of the candidate block may be set to "false".
Step 3) When the PU containing the candidate block is located outside the boundary of the tile, the availability of the candidate block may be set to "false". When the target block and the candidate block are located in different tiles, the availability of the candidate block may be set to "false".
Step 4) When the prediction mode of the PU containing the candidate block is an intra-prediction mode, the availability of the candidate block may be set to "false". When the PU containing the candidate block does not use inter prediction, the availability of the candidate block may be set to "false".
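The four availability checks above can be sketched as a sequential test. The dictionary field names below are illustrative assumptions used only to model the candidate and target PUs; the patent does not prescribe a data layout.

```python
def is_candidate_available(cand, target):
    """Sequentially apply availability steps 1)-4) for a candidate block.

    `cand` and `target` are dicts describing the PU containing each block.
    Returns False as soon as one step sets the availability to "false".
    """
    if cand.get("outside_picture", False):   # step 1): outside the picture boundary
        return False
    if cand["slice"] != target["slice"]:     # step 2): located in different slices
        return False
    if cand["tile"] != target["tile"]:       # step 3): located in different tiles
        return False
    if cand["pred_mode"] == "intra":         # step 4): PU does not use inter prediction
        return False
    return True
```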
Fig. 10 illustrates an order of adding the motion information of spatial candidates to a merge list according to an embodiment.
As shown in Fig. 10, when pieces of motion information of spatial candidates are added to the merge list, the order A1, B1, B0, A0, B2 may be used. That is, the pieces of motion information of the available spatial candidates may be added to the merge list in the order A1, B1, B0, A0, B2.
Method for deriving a merge list in merge mode and skip mode
As described above, the maximum number of merge candidates in the merge list may be set. The set maximum number may be denoted by "N". The set number may be transmitted from the encoding device 100 to the decoding device 200. The slice header of a slice may include N. In other words, the maximum number of merge candidates in the merge list for a target block of the slice may be set via the slice header. For example, the value of N may basically be 5.
Pieces of motion information (that is, merge candidates) may be added to the merge list in the order of the following steps 1) to 4).
Step 1) Among the spatial candidates, the available spatial candidates may be added to the merge list. The pieces of motion information of the available spatial candidates may be added to the merge list in the order shown in Fig. 10. Here, when a piece of motion information of an available spatial candidate overlaps another piece of motion information already present in the merge list, that motion information may not be added to the merge list. The operation of checking whether a piece of motion information overlaps other motion information present in the list may be referred to as an "overlap check".
The maximum number of pieces of motion information added may be N.
Step 2) When the number of pieces of motion information in the merge list is less than N and a temporal candidate is available, the motion information of the temporal candidate may be added to the merge list. Here, when the motion information of the available temporal candidate overlaps other motion information already present in the merge list, the motion information of the temporal candidate may not be added to the merge list.
Step 3) When the number of pieces of motion information in the merge list is less than N and the type of the target slice is "B", combined motion information generated by combined bi-prediction (bi-directional prediction) may be added to the merge list.
The target slice may be the slice that includes the target block.
The combined motion information may be a combination of L0 motion information and L1 motion information. L0 motion information may be motion information that refers only to reference picture list L0. L1 motion information may be motion information that refers only to reference picture list L1.
One or more pieces of L0 motion information may be present in the merge list. Further, one or more pieces of L1 motion information may be present in the merge list.
The combined motion information may include one or more pieces of combined motion information. When generating the combined motion information, the piece of L0 motion information and the piece of L1 motion information, among the one or more pieces of L0 motion information and the one or more pieces of L1 motion information, that will be used for the generation may be predefined. One or more pieces of combined motion information may be generated in a predefined order via bi-prediction that combines a pair of different pieces of motion information used in the merge list. One piece of motion information in the pair of different pieces of motion information may be L0 motion information, and the other piece of motion information in the pair may be L1 motion information.
For example, the combined motion information added with the highest priority may be the combination of the L0 motion information having merge index 0 and the L1 motion information having merge index 1. When the motion information having merge index 0 is not L0 motion information, or when the motion information having merge index 1 is not L1 motion information, the combined motion information may be neither generated nor added. Next, the combined motion information added with the next priority may be the combination of the L0 motion information having merge index 1 and the L1 motion information having merge index 0. The subsequent particular combinations may conform to other combinations used in the video encoding/decoding field.
Here, when the combined motion information overlaps other motion information already present in the merge list, the combined motion information may not be added to the merge list.
Step 4) When the number of pieces of motion information in the merge list is less than N, zero-vector motion information may be added to the merge list.
Zero-vector motion information may be motion information whose motion vector is a zero vector.
The number of pieces of zero-vector motion information may be one or more. The reference picture indices of the one or more pieces of zero-vector motion information may differ from each other. For example, the value of the reference picture index of the first piece of zero-vector motion information may be 0. The value of the reference picture index of the second piece of zero-vector motion information may be 1.
The number of pieces of zero-vector motion information may be identical to the number of reference pictures in the reference picture list.
The reference direction of zero-vector motion information may be bi-directional. Both motion vectors may be zero vectors. The number of pieces of zero-vector motion information may be the smaller of the number of reference pictures in reference picture list L0 and the number of reference pictures in reference picture list L1. Alternatively, when the number of reference pictures in reference picture list L0 and the number of reference pictures in reference picture list L1 differ from each other, a uni-directional reference direction may be used for the reference picture index that is applicable to only a single reference picture list.
The encoding device 100 and/or the decoding device 200 may then add the zero-vector motion information to the merge list while changing the reference picture index.
When the zero-vector motion information overlaps other motion information already present in the merge list, the zero-vector motion information may not be added to the merge list.
The order of the above-described steps 1) to 4) is merely exemplary and may be changed. Further, some of the above steps may be omitted according to predefined conditions.
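The merge-list derivation described above can be sketched as follows. This is a simplified illustration, assuming motion information is modeled as hashable tuples; the combined bi-predictive candidates of step 3) are omitted for brevity, and names are illustrative rather than from the patent.

```python
def build_merge_list(spatial, temporal, N=5, num_ref_pics=1):
    """Sketch of merge-list derivation steps 1), 2), and 4).

    `spatial` holds the available spatial candidates' motion information,
    already in the Fig. 10 order (A1, B1, B0, A0, B2); `temporal` is the
    temporal candidate's motion information, or None if unavailable.
    """
    merge_list = []

    def add(mi):
        # overlap check: skip motion information already present in the list
        if len(merge_list) < N and mi is not None and mi not in merge_list:
            merge_list.append(mi)

    for mi in spatial:          # step 1): available spatial candidates
        add(mi)
    add(temporal)               # step 2): temporal candidate
    # step 3): combined bi-predictive candidates (B slices) - omitted here
    ref_idx = 0                 # step 4): zero-vector motion information,
    while len(merge_list) < N and ref_idx < num_ref_pics:
        add(((0, 0), ref_idx))  # changing the reference picture index each time
        ref_idx += 1
    return merge_list
```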
Method for deriving a prediction motion vector candidate list in AMVP mode
The maximum number of prediction motion vector candidates in the prediction motion vector candidate list may be predefined. The predefined maximum number may be denoted by N. For example, the predefined maximum number may be 2.
Pieces of motion information (that is, prediction motion vector candidates) may be added to the prediction motion vector candidate list in the order of the following steps 1) to 3).
Step 1) The available spatial candidates among the spatial candidates may be added to the prediction motion vector candidate list. The spatial candidates may include a first spatial candidate and a second spatial candidate.
The first spatial candidate may be one of A0, A1, scaled A0, and scaled A1. The second spatial candidate may be one of B0, B1, B2, scaled B0, scaled B1, and scaled B2.
The pieces of motion information of the available spatial candidates may be added to the prediction motion vector candidate list in the order of the first spatial candidate and the second spatial candidate. In this case, when the motion information of an available spatial candidate overlaps other motion information already present in the prediction motion vector candidate list, the motion information of the available spatial candidate may not be added to the prediction motion vector candidate list. In other words, when the value of N is 2, if the motion information of the second spatial candidate is identical to the motion information of the first spatial candidate, the motion information of the second spatial candidate may not be added to the prediction motion vector candidate list.
The maximum number of pieces of motion information added may be N.
Step 2) When the number of pieces of motion information in the prediction motion vector candidate list is less than N and a temporal candidate is available, the motion information of the temporal candidate may be added to the prediction motion vector candidate list. In this case, when the motion information of the available temporal candidate overlaps other motion information already present in the prediction motion vector candidate list, the motion information of the available temporal candidate may not be added to the prediction motion vector candidate list.
Step 3) When the number of pieces of motion information in the prediction motion vector candidate list is less than N, zero-vector motion information may be added to the prediction motion vector candidate list.
The zero-vector motion information may include one or more pieces of zero-vector motion information. The reference picture indices of the one or more pieces of zero-vector motion information may differ from each other.
The encoding device 100 and/or the decoding device 200 may sequentially add the pieces of zero-vector motion information to the prediction motion vector candidate list while changing the reference picture index.
When zero-vector motion information overlaps other motion information already present in the prediction motion vector candidate list, the zero-vector motion information may not be added to the prediction motion vector candidate list.
The description of zero-vector motion information made above in connection with the merge list may also be applied to this zero-vector motion information. A repeated description thereof will be omitted.
The order of the above-described steps 1) to 3) is merely exemplary and may be changed. Further, some of the steps may be omitted according to predefined conditions.
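The AMVP candidate-list derivation above can likewise be sketched as a short sequence of guarded additions. Each argument below is a candidate motion vector (a tuple) or None when unavailable; the names and the tuple representation are illustrative assumptions, and zero-vector padding is simplified.

```python
def build_amvp_list(first_spatial, second_spatial, temporal, N=2):
    """Sketch of prediction-motion-vector candidate list steps 1)-3)."""
    candidates = []

    def add(mv):
        # overlap check: drop a candidate identical to one already in the list
        if len(candidates) < N and mv is not None and mv not in candidates:
            candidates.append(mv)

    add(first_spatial)          # step 1): first spatial candidate, then
    add(second_spatial)         #          the second spatial candidate
    add(temporal)               # step 2): temporal candidate
    while len(candidates) < N:  # step 3): pad with zero-vector candidates
        candidates.append((0, 0))
    return candidates
```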
Fig. 11 illustrates transform and quantization processing according to an example.
As shown in Fig. 11, quantized levels may be generated by performing transform and/or quantization processing on a residual signal.
The residual signal may be generated as the difference between the original block and the prediction block. Here, the prediction block may be a block generated via intra prediction or inter prediction.
The transform may include at least one of a primary transform and a secondary transform. Transform coefficients may be generated by performing the primary transform on the residual signal, and secondary transform coefficients may be generated by performing the secondary transform on the transform coefficients.
The primary transform may be performed using at least one of multiple predefined transform methods. For example, the multiple predefined transform methods may include a discrete cosine transform (DCT), a discrete sine transform (DST), a Karhunen-Loeve transform (KLT), and the like.
The secondary transform may be performed on the transform coefficients generated by performing the primary transform.
The transform method(s) applied to the primary transform and/or the secondary transform may be determined based on at least one of the coding parameters for the target block and/or a neighboring block. Alternatively, transform information indicating the transform method may be signaled by the encoding device 100 to the decoding device 200.
The quantized levels may be generated by performing quantization on the result generated via the primary transform and/or the secondary transform, or on the residual signal.
The quantized levels may be scanned based on at least one of up-right diagonal scanning, vertical scanning, and horizontal scanning, according to at least one of the intra-prediction mode, the block size, and the block form.
For example, the coefficients of the block may be changed into 1D vector form by scanning the coefficients using up-right diagonal scanning. Alternatively, according to the size of the intra block and/or the intra-prediction mode, vertical scanning, which scans the 2D block-format coefficients along the column direction, or horizontal scanning, which scans the 2D block-format coefficients along the row direction, may be used instead of up-right diagonal scanning.
The scanned quantized levels may be entropy-encoded, and the bitstream may include the entropy-encoded quantized levels.
The decoding device 200 may generate the quantized levels via entropy decoding of the bitstream. The quantized levels may be arranged in 2D block form via inverse scanning. Here, as the inverse scanning method, at least one of up-right diagonal scanning, vertical scanning, and horizontal scanning may be performed.
Dequantization may be performed on the quantized levels. A secondary inverse transform may be performed on the result generated by the dequantization, depending on whether the secondary inverse transform is to be performed. Further, a primary inverse transform may be performed on the result generated by performing the secondary inverse transform, depending on whether the primary inverse transform is to be performed. A reconstructed residual signal may be generated by performing the primary inverse transform on the result generated via the secondary inverse transform.
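The up-right diagonal scan mentioned above can be sketched as follows: each anti-diagonal of the coefficient block is traversed from its bottom-left element toward its top-right element. This is a simplified reading of "up-right diagonal scanning", not the exact scan-order tables of any particular codec.

```python
def up_right_diagonal_scan(block):
    """Flatten a 2D coefficient block into a 1D vector by up-right
    diagonal scanning. `block` is a list of rows.
    """
    h, w = len(block), len(block[0])
    out = []
    for d in range(h + w - 1):           # anti-diagonal index d = row + col
        r = min(d, h - 1)                # start at the bottom-left element
        while r >= 0 and d - r < w:
            out.append(block[r][d - r])  # move up and to the right
            r -= 1
    return out
```

Vertical scanning (column by column) and horizontal scanning (row by row) would replace the diagonal traversal with a plain column- or row-major walk.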
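The quantization/dequantization round trip in Fig. 11 can be illustrated with plain scalar quantization. This is a generic sketch under the assumption of a uniform quantization step; the transforms and the actual QP-to-step mapping are omitted, and dequantization only approximates the original coefficients.

```python
def quantize(coeffs, step):
    """Map transform coefficients to quantized levels (simplified sketch)."""
    return [round(c / step) for c in coeffs]

def dequantize(levels, step):
    """Dequantization: scale the quantized levels back to coefficients.

    The result approximates, but generally does not equal, the input
    to quantize(); the difference is the quantization error.
    """
    return [l * step for l in levels]
```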
Fig. 12 is a configuration diagram of an encoding device according to an embodiment.
The encoding device 1200 may correspond to the encoding device 100 described above.
The encoding device 1200 may include a processing unit 1210, memory 1230, a user interface (UI) input device 1250, a UI output device 1260, and storage 1240, which communicate with one another through a bus 1290. The encoding device 1200 may further include a communication unit 1220 connected to a network 1299.
The processing unit 1210 may be a central processing unit (CPU) or a semiconductor device for running processing instructions stored in the memory 1230 or the storage 1240. The processing unit 1210 may be at least one hardware processor.
The processing unit 1210 may generate and process signals, data, or information that are input to the encoding device 1200, output from the encoding device 1200, or used inside the encoding device 1200, and may perform checking, comparison, and determination related to the signals, data, or information. In other words, in an embodiment, the generation and processing of data or information, and the checking, comparison, and determination related to the data or information, may be performed by the processing unit 1210.
The processing unit 1210 may include the inter-prediction unit 110, the intra-prediction unit 120, the switch 115, the subtractor 125, the transform unit 130, the quantization unit 140, the entropy encoding unit 150, the dequantization unit 160, the inverse transform unit 170, the adder 175, the filter unit 180, and the reference picture buffer 190.
At least some of the inter-prediction unit 110, the intra-prediction unit 120, the switch 115, the subtractor 125, the transform unit 130, the quantization unit 140, the entropy encoding unit 150, the dequantization unit 160, the inverse transform unit 170, the adder 175, the filter unit 180, and the reference picture buffer 190 may be program modules, and may communicate with an external device or system. The program modules may be included in the encoding device 1200 in the form of an operating system, an application program module, or other program modules.
The program modules may be physically stored in various types of well-known storage devices. Further, at least some of the program modules may alternatively be stored in a remote storage device capable of communicating with the encoding device 1200.
The program modules may include, but are not limited to, routines, subroutines, programs, objects, components, and data structures for executing functions or operations according to the embodiment or for implementing abstract data types according to the embodiment.
The program modules may be implemented using instructions or code that are run by at least one processor of the encoding device 1200.
The processing unit 1210 may run the instructions or code in the inter-prediction unit 110, the intra-prediction unit 120, the switch 115, the subtractor 125, the transform unit 130, the quantization unit 140, the entropy encoding unit 150, the dequantization unit 160, the inverse transform unit 170, the adder 175, the filter unit 180, and the reference picture buffer 190.
A storage unit may denote the memory 1230 and/or the storage 1240. Each of the memory 1230 and the storage 1240 may be any of various types of volatile or non-volatile storage media. For example, the memory 1230 may include at least one of read-only memory (ROM) 1231 and random access memory (RAM) 1232.
The storage unit may store data or information for the operation of the encoding device 1200. In an embodiment, the data or information of the encoding device 1200 may be stored in the storage unit.
For example, the storage unit may store pictures, blocks, lists, motion information, inter-prediction information, bitstreams, and the like.
The encoding device 1200 may be implemented in a computer system including a computer-readable storage medium.
The storage medium may store at least one module required for the operation of the encoding device 1200. The memory 1230 may store at least one module, and may be configured such that the at least one module is run by the processing unit 1210.
Functions related to the communication of the data or information of the encoding device 1200 may be performed through the communication unit 1220.
For example, the communication unit 1220 may transmit a bitstream to a decoding device 1300, which will be described later.
Fig. 13 is a configuration diagram of a decoding device according to an embodiment.
The decoding device 1300 may correspond to the decoding device 200 described above.
The decoding device 1300 may include a processing unit 1310, memory 1330, a user interface (UI) input device 1350, a UI output device 1360, and storage 1340, which communicate with one another through a bus 1390. The decoding device 1300 may further include a communication unit 1320 connected to a network 1399.
The processing unit 1310 may be a central processing unit (CPU) or a semiconductor device for running processing instructions stored in the memory 1330 or the storage 1340. The processing unit 1310 may be at least one hardware processor.
The processing unit 1310 may generate and process signals, data, or information that are input to the decoding device 1300, output from the decoding device 1300, or used inside the decoding device 1300, and may perform checking, comparison, and determination related to the signals, data, or information. In other words, in an embodiment, the generation and processing of data or information, and the checking, comparison, and determination related to the data or information, may be performed by the processing unit 1310.
The processing unit 1310 may include the entropy decoding unit 210, the dequantization unit 220, the inverse transform unit 230, the intra-prediction unit 240, the inter-prediction unit 250, the adder 255, the filter unit 260, and the reference picture buffer 270.
At least some of the entropy decoding unit 210, the dequantization unit 220, the inverse transform unit 230, the intra-prediction unit 240, the inter-prediction unit 250, the adder 255, the filter unit 260, and the reference picture buffer 270 of the decoding device 1300 may be program modules, and may communicate with an external device or system. The program modules may be included in the decoding device 1300 in the form of an operating system, an application program module, or other program modules.
The program modules may be physically stored in various types of well-known storage devices. Further, at least some of the program modules may alternatively be stored in a remote storage device capable of communicating with the decoding device 1300.
The program modules may include, but are not limited to, routines, subroutines, programs, objects, components, and data structures for executing functions or operations according to the embodiment or for implementing abstract data types according to the embodiment.
The program modules may be implemented using instructions or code that are run by at least one processor of the decoding device 1300.
The processing unit 1310 may run the instructions or code in the entropy decoding unit 210, the dequantization unit 220, the inverse transform unit 230, the intra-prediction unit 240, the inter-prediction unit 250, the adder 255, the filter unit 260, and the reference picture buffer 270.
A storage unit may denote the memory 1330 and/or the storage 1340. Each of the memory 1330 and the storage 1340 may be any of various types of volatile or non-volatile storage media. For example, the memory 1330 may include at least one of ROM 1331 and RAM 1332.
The storage unit may store data or information for the operation of the decoding device 1300. In an embodiment, the data or information of the decoding device 1300 may be stored in the storage unit.
For example, the storage unit may store pictures, blocks, lists, motion information, inter-prediction information, bitstreams, and the like.
The decoding device 1300 may be implemented in a computer system including a computer-readable storage medium.
The storage medium may store at least one module required for the operation of the decoding device 1300. The memory 1330 may store at least one module, and may be configured such that the at least one module is run by the processing unit 1310.
Functions related to the communication of the data or information of the decoding device 1300 may be performed through the communication unit 1320.
For example, the communication unit 1320 may receive a bitstream from the encoding device 1200.
Fig. 14 is a flowchart of a prediction method according to an embodiment.
The prediction method may be performed by the encoding device 1200 and/or the decoding device 1300.
For example, the encoding device 1200 may perform the prediction method according to the embodiment in order to compare the efficiencies of multiple prediction schemes for the target block and/or multiple partitioned blocks, and may also perform the prediction method according to the present embodiment in order to generate the reconstructed block of the target block.
In an embodiment, the target block may be at least one of a CTU, a CU, a PU, a TU, a block having specific dimensions, and a block having a size falling within a predefined range.
For example, the decoding device 1300 may perform the prediction method according to the embodiment in order to generate the reconstructed block of the target block.
Hereinafter, the processing unit may correspond to the processing unit 1210 of the encoding device 1200 and/or the processing unit 1310 of the decoding device 1300.
At step 1410, the processing unit may generate multiple partitioned blocks by dividing the target block.
The processing unit may generate the multiple partitioned blocks by dividing the target block using coding parameters related to the target block.
In an embodiment, the processing unit may generate the multiple partitioned blocks by dividing the target block based on one or more of the size of the target block and the shape of the target block.
For example, the target block may include multiple partitioned blocks. The multiple partitioned blocks may also be referred to as "multiple sub-blocks".
An example of step 1410 will be described in detail below with reference to Fig. 15.
In an embodiment, whether to perform step 1410, that is, whether to generate multiple partitioned blocks by dividing the target block, may be determined based on information related to the target block. The processing unit may determine, based on the information related to the target block, whether to apply division to the target block.
In an embodiment, the information related to the target block may include at least one of the coding parameters of the target block, scene-related information of the target picture including the target block, information about the slice including the target block, the quantization parameter (QP) of the target block, the coded block flag (CBF) of the target block, the size of the target block, the depth of the target block, the shape of the target block, the entropy coding scheme of the target block, the division information of a reference block for the target block, the temporal layer level of the target block, and a block division indicator (flag).
The reference block may include one or more of a block spatially adjacent to the target block and a block temporally adjacent to the target block.
1) in embodiment, processing unit can determine whether to answer object block according to the scene-related information of target picture
With division.For example, whether the parameter sets (PPS) of target picture may include indicating block in target picture by divided letter
Breath.By PPS, whether the block in instruction target picture by divided information can be encoded and/or decoded.Optionally,
By PPS, it can recognize that the block being set such that in picture will not by divided picture or the block being set such that in picture
Divided picture.
For example, when non-square object block is included in the block being set such that in picture in divided picture,
Non-square object block can be divided into square block by processing unit.
2) in embodiment, processing unit can determine whether to draw object block application based on the information about specific picture
Point.For example, specific picture can be the picture before target picture.
For example, processing unit can be according to whether apply division to the block in the picture before target picture to determine
Whether object block application is divided.
For example, processing unit can when non-square block is divided into square block in the picture before target picture
Non-square block in target picture is divided into square block.
3) in embodiment, processing unit can determine whether to divide object block application based on the information about band.
Band may include object block.Optionally, band may include reference block.
For example, processing unit can determine whether to divide object block application according to the type of band.Type of strip can wrap
Include I band, B band and P band.
For example, processing unit can be by the non-square of target picture when non-square object block is included in I band
Object block is divided into square block.
For example, processing unit can divide non-square block when non-square block is included in P band or B band
It is square block.
4) in embodiment, processing unit can determine whether to draw object block application based on the information about additional band
Point.
For example, additional band can be band before or after including the respective strap of object block.Additional band can
To be the band for including reference block for object block.
For example, processing unit can determine whether to divide object block application according to the type of additional band.Additional band
Type may include I band, B band and P band.
For example, the non-square object block of target picture can be divided and is positive by processing unit when additional band is I band
Square block.
For example, non-square object block can be divided into pros by processing unit when additional band is P band or B band
Shape block.
5) In an embodiment, the processing unit may determine whether to apply division to the target block based on the quantization parameter of the target block.
For example, when the quantization parameter of a non-square target block falls within a specific range, the processing unit may divide the non-square target block into square blocks.
6) In an embodiment, the processing unit may determine whether to apply division to the target block based on the coded block flag (CBF) of the target block.
For example, when the value of the CBF of a non-square target block is equal to or corresponds to a specific value, the processing unit may divide the non-square target block into square blocks.
7) In an embodiment, the processing unit may determine whether to apply division to the target block based on the size of the target block.
For example, when the size of a non-square target block 1) is equal to a specific size or 2) falls within a specific range, the processing unit may divide the non-square target block into square blocks.
For example, when the sum of the width and height of a non-square target block 1) is equal to a specific value, 2) is equal to or greater than a specific value, 3) is less than or equal to a specific value, or 4) falls within a specific range, the processing unit may divide the non-square target block into square blocks. For example, the specific value may be 16.
8) In an embodiment, the processing unit may determine whether to apply division to the target block based on the depth of the target block.
For example, when the depth of a non-square target block 1) is equal to a specific depth or 2) falls within a specific range, the processing unit may divide the non-square target block into square blocks.
9) In an embodiment, the processing unit may determine whether to apply division to the target block based on the shape of the target block.
For example, when the ratio of the width to the height of a non-square target block 1) is equal to a specific value or 2) falls within a specific range, the processing unit may divide the non-square target block into square blocks.
10) In an embodiment, the processing unit may determine whether to apply division to the target block based on a block division indicator (flag).
The block division indicator may be an indicator of whether the target block is to be divided. In addition, the block division indicator may indicate the type of division applied to the target block.
The type of division may include the direction of division. The direction of division may be vertical or horizontal.
The type of division may include the number of sub-blocks generated by the division.
In an embodiment, indicators may include information explicitly signaled from the encoding device 1200 to the decoding device 1300 through a bitstream. In an embodiment, the indicators may include the block division indicator.
When the block division indicator is used, the decoding device 1300 may directly determine, based on the block division indicator provided by the encoding device 1200, whether to divide the target block and which type of division to apply.
The block division indicator may be selective (or optional). When the block division indicator is not used, the processing unit may determine whether to divide the target block and which type of division to use based on conditions on the information related to the target block. Accordingly, whether to divide the target block can be determined without signaling additional information.
For example, when the block division indicator indicates that the target block is to be divided, the processing unit may divide a non-square target block into square blocks.
The block division indicator may be encoded and/or decoded for at least one unit among a sequence parameter set (SPS), a picture parameter set (PPS), a slice header, a tile, a coding tree unit (CTU), a coding unit (CU), a prediction unit (PU), and a transform unit (TU). In other words, the unit for which the block division indicator is provided may be at least one of the SPS, PPS, slice header, tile, CTU, CU, PU, and TU. A block division indicator provided for a specific unit may be commonly applied to one or more target blocks included in the specific unit.
11) In an embodiment, the processing unit may determine whether to apply division to the target block based on division information of a reference block.
The reference block may be a spatially adjacent block and/or a temporally adjacent block.
For example, the division information may be at least one of quadtree division information, binary tree division information, and quadtree-plus-binary-tree (QTBT) information.
For example, when the division information of the reference block indicates that the target block is to be divided, the processing unit may divide a non-square target block into square blocks.
12) In an embodiment, the processing unit may determine whether to apply division to the target block based on the temporal layer level of the target block.
For example, when the temporal layer level of a non-square target block 1) is equal to a specific value or 2) falls within a specific range, the processing unit may divide the non-square target block into square blocks.
13) Further, in an embodiment, the information related to the target block may also include the above-described information used to encode and/or decode the target block.
In the above-described embodiments 1) to 13), the specific value, the specific range, and/or the specific unit may be set by the encoding device 1200 or the decoding device 1300. When the specific value, the specific range, and the specific unit are set by the encoding device 1200, the set specific value, the set specific range, and/or the set specific unit may be signaled from the encoding device 1200 to the decoding device 1300 through a bitstream.
Alternatively, the specific value, the specific range, and/or the specific unit may be derived from additional coding parameters. When a coding parameter is shared between the encoding device 1200 and the decoding device 1300 through a bitstream, or when the coding parameter can be identically derived by the encoding device 1200 and the decoding device 1300 using a predefined derivation scheme, the specific value, the specific range, and/or the specific unit need not be signaled from the encoding device 1200 to the decoding device 1300.
In the above-described embodiments 1) to 13), the criterion of determining whether to divide the target block based on the shape of the target block is only an example. In the above-described embodiments 1) to 13), the operation of determining whether to divide the target block may be combined with the other criteria described in the embodiments (such as the size of the target block).
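As an illustration of how such criteria may be combined, the following sketch joins the size criterion of embodiment 7) (using the example value 16) with the non-square shape criterion of embodiment 9). The function name, the width × height notation, and the threshold parameter are illustrative assumptions, not part of the disclosure.

```python
def should_divide(width: int, height: int, max_dim_sum: int = 16) -> bool:
    """Illustrative combination of criteria 7) and 9): divide only a
    non-square block whose width + height does not exceed a specific
    value (16, per the example in embodiment 7))."""
    is_non_square = width != height
    return is_non_square and (width + height) <= max_dim_sum
```

Under these assumptions an 8×4 block would be divided, while an 8×8 square block or a 16×8 block (width + height = 24) would not.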
At step 1420, the processing unit may derive prediction modes for at least some of the multiple sub-blocks.
In an embodiment, the prediction may be intra prediction or inter prediction.
An example of step 1420 will be described in detail below with reference to Figure 21.
At step 1430, the processing unit may perform prediction on the multiple sub-blocks based on the derived prediction modes.
In an embodiment, the processing unit may perform prediction on at least some of the multiple sub-blocks using the derived prediction modes. The processing unit may perform prediction on the remaining sub-blocks among the multiple sub-blocks using prediction modes generated based on the derived prediction modes.
The prediction performed on the sub-blocks will be described below with reference to Figures 22, 23, 24, 25, 26, 27, and 28.
Figure 15 is a flowchart of a block division method according to an embodiment.
The block division method according to the present embodiment may correspond to step 1410 described above. Step 1410 may include at least one of steps 1510 and 1520.
At step 1410, the processing unit may generate multiple sub-blocks by dividing the target block based on one or more of the size of the target block and the shape of the target block.
The size of the target block may indicate the width and/or height of the target block.
The shape of the target block may indicate whether the target block has a square shape. The shape of the target block may indicate whether the target block has a square shape or a non-square shape. The shape of the target block may be the ratio of the width to the height of the target block.
The processing unit may generate multiple sub-blocks by dividing the target block using at least one of the method at step 1510 and the method at step 1520.
At step 1510, the processing unit may generate multiple sub-blocks by dividing the target block based on the width or height of the target block.
In an embodiment, the processing unit may divide the target block when the width and height of the target block differ from each other.
In an embodiment, the processing unit may divide the larger of the width and height of the target block at least once.
In an embodiment, the processing unit may divide the target block such that the widths and heights of the sub-blocks are identical to each other. Alternatively, the width and height of a sub-block generated by the division may be equal to or greater than the smaller of the width and height of the target block.
Examples of the target block and of dividing the target block based on the size of the target block will be described later with reference to Figures 16, 17, 18, 19, and 20.
In an embodiment, the processing unit may divide the target block when the size of the target block is less than a specific size and the height and width of the target block differ from each other.
In an embodiment, the processing unit may divide the target block when the sum of the width and height of the target block is less than a specific value and the width and height of the target block differ from each other.
In an embodiment, the processing unit may divide the target block when the size of the target block falls within a specific range and the width and height of the target block differ from each other.
At step 1520, the processing unit may generate multiple sub-blocks by dividing the target block based on the shape of the target block.
When the target block has a square shape, the processing unit may skip dividing the target block.
When the target block has a non-square shape, the processing unit may divide the target block into square shapes. The operation of dividing a block into square shapes will be described later with reference to Figures 16, 17, 18, and 20.
As described above in connection with steps 1510 and 1520, the processing unit may determine whether to divide the target block using only the size and shape of the target block, without using information that directly indicates whether the target block is to be divided. Accordingly, information indicating whether a block is divided need not be signaled from the encoding device 1200 to the decoding device 1300, and whether a block is divided may be derived based on the size and/or shape of the target block.
Figure 16 illustrates an 8×4 target block according to an example.
In Figure 17, the division of the target block is explained.
Figure 17 illustrates 4×4 sub-blocks according to an example.
The size of each of the first sub-block and the second sub-block may be 4×4.
As illustrated in Figure 17, when the width of the target block is greater than the height of the target block, the width of the target block illustrated in Figure 16 may be divided vertically, and two sub-blocks may thereby be derived.
Figure 18 illustrates a 4×16 target block according to an example.
In Figures 19 and 20, the division of the target block is explained.
Figure 19 illustrates 8×4 sub-blocks according to an example.
The size of each of the first sub-block and the second sub-block may be 8×4.
Figure 20 illustrates 4×4 sub-blocks according to an example.
The size of each of the first sub-block, the second sub-block, the third sub-block, and the fourth sub-block may be 4×4.
As illustrated in Figures 19 and 20, when the height of the target block is greater than the width of the target block, the height of the target block illustrated in Figure 18 may be divided horizontally, and two sub-blocks or four sub-blocks may thereby be derived.
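One possible reading of the figure examples is that a non-square block is divided into square sub-blocks by cutting the larger dimension into pieces whose side equals the smaller dimension, so that 8×4 yields two 4×4 blocks (Figure 17) and 4×16 yields four 4×4 blocks (Figure 20). The sketch below assumes a width × height notation and power-of-two dimensions; it is an illustration, not the disclosed procedure.

```python
def split_into_squares(width: int, height: int):
    """Divide a non-square block into square sub-blocks: the square side
    is the smaller dimension, and the larger dimension determines the
    number of sub-blocks (assumes the larger dimension is a multiple
    of the smaller one, as with power-of-two block sizes)."""
    if width == height:
        return [(width, height)]          # already square: no division
    side = min(width, height)             # side of each square sub-block
    count = max(width, height) // side    # sub-blocks along the larger dimension
    return [(side, side)] * count
```

With these assumptions, `split_into_squares(8, 4)` yields two 4×4 sub-blocks and `split_into_squares(4, 16)` yields four 4×4 sub-blocks.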
Figure 21 is a flowchart of a method for deriving prediction modes of sub-blocks according to an example.
The prediction mode derivation method according to the embodiment may correspond to step 1420 described above. Step 1420 may include at least one of steps 2110, 2120, and 2130.
For the multiple sub-blocks generated by dividing the target block, 1) prediction modes may be derived respectively for the multiple sub-blocks, 2) a prediction mode may be derived for a specific sub-block among the multiple sub-blocks, and 3) a common prediction mode may be derived for all of the multiple sub-blocks.
At least one of steps 2110, 2120, and 2130 may be performed depending on the target for which a prediction mode is derived.
At step 2110, the processing unit may derive a respective prediction mode for each of the multiple sub-blocks.
The processing unit may use the prediction mode derivation methods described above in the embodiments to derive the respective prediction modes for the multiple sub-blocks.
At step 2120, the processing unit may derive a prediction mode for a specific sub-block among the multiple sub-blocks.
The specific sub-block may be a block located at a specific position among the multiple sub-blocks.
For example, the specific sub-block may be one or more of the topmost block, the bottommost block, the leftmost block, the rightmost block, the n-th block from the top, the n-th block from the bottom, the n-th block from the left, and the n-th block from the right among the multiple sub-blocks. Here, n may be an integer that is equal to or greater than 1 and is less than or equal to the number of sub-blocks.
In an embodiment, the processing unit may derive the prediction mode for the specific sub-block among the multiple sub-blocks using the prediction mode derivation methods described in the previous embodiments.
In an embodiment, the derived prediction mode may be used for the remaining sub-blocks, other than the specific sub-block, among the multiple sub-blocks. The processing unit may use the derived prediction mode for the remaining sub-blocks, other than the specific sub-block, among the multiple sub-blocks.
In an embodiment, a combination of the derived prediction mode and an additional prediction mode may be used for the remaining sub-blocks, other than the specific sub-block, among the multiple sub-blocks. The processing unit may use a prediction mode determined by combining the derived prediction mode with an additional prediction mode for the remaining blocks, other than the specific sub-block, among the multiple sub-blocks.
For example, the additional prediction mode may be determined using a coding parameter related to each of the remaining blocks. The processing unit may determine the additional prediction mode using the coding parameter related to a remaining block, and may determine the prediction mode for the remaining block using a combination of the above-described prediction mode derived for the specific block and the additional prediction mode.
For example, a combination of prediction modes may be a prediction mode that indicates a direction between the directions of the multiple prediction modes. A combination of prediction modes may be a prediction mode selected from among the prediction modes according to a certain priority. A prediction mode determined using a combination of prediction modes may differ from each of the prediction modes used for the combination.
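A "direction between the directions" of two angular modes can be pictured as follows, assuming angular intra modes are numbered along increasing angle (as in HEVC's modes 2 to 34, where 10 is horizontal, 26 is vertical, and 18 is diagonal). The numbering and the midpoint rule are illustrative assumptions, not the disclosed combination method.

```python
def combine_modes(mode_a: int, mode_b: int) -> int:
    """Illustrative combination of two angular intra modes: return the
    mode whose direction lies midway between them, assuming mode numbers
    increase monotonically with prediction angle (HEVC-style numbering)."""
    return (mode_a + mode_b) // 2
```

Under this numbering, combining the horizontal mode (10) with the vertical mode (26) yields the diagonal mode (18), which differs from both modes used for the combination, as the text notes.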
At step 2130, the processing unit may derive a common prediction mode for all of the multiple sub-blocks. In other words, a single common prediction mode for the multiple sub-blocks may be derived.
For example, the processing unit may derive the common prediction mode for all of the multiple sub-blocks using a coding parameter common to all of the multiple sub-blocks.
Derivation of a Prediction Mode Using a Most Probable Mode (MPM)
In the above-described derivation of the prediction modes of sub-blocks, the processing unit may use a most probable mode (MPM).
In order to use an MPM, the processing unit may configure an MPM list.
The MPM list may include one or more MPM candidate modes. The number of the one or more MPM candidate modes may be N. N may be a positive integer.
In an embodiment, the processing unit may set the value of N depending on the size and/or shape of the target block. Alternatively, the processing unit may set the value of N depending on the size, shape, and/or number of the sub-blocks.
Each of the one or more MPM candidate modes may be one of predefined intra prediction modes.
The processing unit may configure the one or more MPM candidate modes in the MPM list based on one or more prediction modes of one or more reference blocks for the target block. A reference block may be a block at a predefined position, or may be a block adjacent to the target block. For example, the one or more reference blocks may be a block adjacent to the top of the target block and a block adjacent to the left of the target block.
The one or more MPM candidate modes may be one or more prediction modes determined based on the prediction modes of the reference blocks. The processing unit may determine one or more prediction modes, specified with reference to the prediction modes of the one or more reference blocks, as the one or more MPM candidate modes. In other words, each of the one or more MPM candidate modes may be a prediction mode having a high probability of being the prediction mode of the target block. Such probabilities may be computed through experiments or the like. For example, it is known that, owing to the local correlation between a reference block and the target block, the probability that the prediction mode of the reference block will be used as the prediction mode of the target block is very high. Therefore, the prediction mode of the reference block may be included among the one or more MPM candidate modes.
In an embodiment, the number of MPM lists may be one or more, and may be plural. For example, the number of MPM lists may be M. M may be a positive integer. The processing unit may configure the respective multiple MPM lists using different methods.
For example, the processing unit may configure a first MPM list, a second MPM list, and a third MPM list.
The MPM candidate modes in the one or more MPM lists may differ from each other. Alternatively, the MPM candidate modes in the one or more MPM lists may not overlap each other. For example, when a specific intra prediction mode is included in one MPM list, the multiple MPM lists may be configured such that the specific intra prediction mode is not included in the other MPM lists.
An MPM list indicator may be used to specify, among the one or more MPM lists, the MPM list that includes the prediction mode to be used for encoding and/or decoding the target block. In other words, the MPM list indicated by the MPM list indicator among the one or more MPM lists may be specified, and the processing unit may use any one of the one or more MPM candidate modes included in the specified MPM list for predicting the target block.
The MPM list indicator may be signaled from the encoding device 1200 to the decoding device 1300 through a bitstream.
When the MPM list indicator is used, the decoding device 1300 may directly determine, based on the MPM list indicator provided by the encoding device 1200, which MPM list among the one or more MPM lists contains the MPM candidate mode to be used for predicting the target block.
In an embodiment, an MPM use indicator may indicate whether the prediction mode is to be determined using an MPM list.
The MPM use indicator may indicate whether the prediction mode of the target block is present among the one or more MPM candidate modes in the configured MPM list.
When the MPM use indicator indicates that the prediction mode of the target block is present among the one or more MPM candidate modes, the processing unit may determine the prediction mode of the target block from among the one or more MPM candidate modes using an index indicator.
The index indicator may indicate the MPM candidate mode, among the one or more MPM candidate modes in the MPM list, to be used for predicting the target block. The processing unit may determine the MPM candidate mode indicated by the index indicator, among the one or more MPM candidate modes in the MPM list, to be the prediction mode of the target block. The index indicator may also be referred to as an "MPM index".
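The interaction between the MPM list, the MPM use indicator, and the MPM index can be sketched as below. The candidate construction (left/above modes first, then default padding) and the HEVC-style mode numbers (0 planar, 1 DC, 10 horizontal, 26 vertical) are assumptions for illustration only.

```python
PLANAR, DC, HOR, VER = 0, 1, 10, 26

def build_mpm_list(left_mode, above_mode, n=3):
    """Sketch: configure N MPM candidates from the prediction modes of the
    left and above reference blocks, padding with default modes; the
    padding order is an illustrative assumption."""
    candidates = []
    for m in (left_mode, above_mode, PLANAR, DC, VER, HOR):
        if m is not None and m not in candidates:
            candidates.append(m)
        if len(candidates) == n:
            break
    return candidates

def decode_mode(mpm_list, mpm_used, mpm_index=None, mode_indicator=None):
    """If the MPM use indicator is set, the MPM index selects a candidate
    from the list; otherwise the prediction mode indicator gives the mode
    directly."""
    return mpm_list[mpm_index] if mpm_used else mode_indicator
```

For example, with the left block in mode 10 and the above block in mode 26, the list would be [10, 26, 0], and an MPM index of 1 would select mode 26.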
When an MPM list is indicated by the MPM list indicator among the one or more MPM lists, the index indicator may be used to indicate which of the one or more MPM candidate modes in the MPM list indicated by the MPM list indicator is to be used for predicting the target block. In other words, the prediction mode of the target block may be specified by the MPM list indicator and the index indicator.
When the MPM use indicator indicates that the prediction mode of the target block is not present among the one or more MPM candidate modes in the MPM list, the processing unit may determine the prediction mode of the target block using a prediction mode indicator that indicates the prediction mode of the target block. The prediction mode indicator may indicate the prediction mode of the target block.
The prediction mode indicator may indicate one of the prediction modes that are not included in the MPM list (or in the one or more MPM lists). In other words, the one or more prediction modes that are not included in the MPM list or in the one or more MPM lists may be configured in the form of a prediction mode list according to a predefined order, and the prediction mode indicator may indicate one of the one or more prediction modes in the prediction mode list.
The one or more prediction modes in the prediction mode list may be sorted in ascending or descending order. Here, the sorting criterion may be the number of each prediction mode.
When there are multiple MPM lists, a separate MPM use indicator may be used for each of the multiple MPM lists. Alternatively, when there are multiple MPM lists, MPM use indicators may be present for only some of the multiple MPM lists.
For example, the n-th MPM use indicator for the n-th MPM list may indicate whether the prediction mode of the target block is present in the n-th MPM list.
First, the processing unit may determine, using the first MPM use indicator, whether the prediction mode of the target block is present in the first MPM list. If it is determined that the prediction mode of the target block is present in the first MPM list, the processing unit may derive the MPM candidate mode indicated by the index indicator in the first MPM list as the prediction mode of the target block. If it is determined that the prediction mode of the target block is not present in the first MPM list, the processing unit may determine, using the second MPM use indicator, whether the prediction mode of the target block is present in the second MPM list.
The processing unit may determine, using the n-th MPM use indicator, whether the prediction mode of the target block is present in the n-th MPM list. If it is determined that the prediction mode of the target block is present in the n-th MPM list, the processing unit may determine, using the index indicator, the MPM candidate mode in the n-th MPM list that indicates the prediction mode of the target block. If it is determined that the prediction mode of the target block is not present in the n-th MPM list, the processing unit may determine, using the subsequent (n+1)-th MPM use indicator, whether the prediction mode of the target block is present in the (n+1)-th MPM list.
When one MPM use indicator indicates that the prediction mode of the target block is present in the corresponding MPM list, the MPM use indicators subsequent to that MPM use indicator need not be signaled.
The MPM use indicators, the index indicator, and/or the prediction mode indicator may be signaled from the encoding device 1200 to the decoding device 1300 through a bitstream.
When the MPM use indicators, the index indicator, and/or the prediction mode indicator are used, the decoding device 1300 may directly determine, based on the MPM use indicators, the index indicator, and/or the prediction mode indicator provided by the encoding device 1200, which MPM candidate mode or which prediction mode, among 1) the MPM candidate modes included in the one or more MPM lists and 2) the one or more prediction modes not included in the one or more MPM lists, is to be used for predicting the target block.
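The sequential check over multiple MPM lists described above can be sketched as a simple loop: each use flag says whether the mode is in the corresponding list, the first set flag stops the walk, and when no flag is set the prediction mode indicator supplies the mode. The function and parameter names are illustrative.

```python
def decode_with_multiple_mpm_lists(mpm_lists, use_flags, index, fallback_mode):
    """Walk the MPM lists in order. The n-th use flag indicates whether
    the target block's mode is in the n-th list; once a flag is set, the
    MPM index selects the candidate (later flags need not be signaled).
    If no flag is set, the prediction mode indicator (fallback_mode)
    gives the mode directly."""
    for mpm_list, used in zip(mpm_lists, use_flags):
        if used:
            return mpm_list[index]
    return fallback_mode
```

For instance, with lists [[0, 1], [10, 26]] and use flags [False, True], an index of 1 would yield mode 26; with both flags unset, the fallback prediction mode would be used.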
Each MPM list may be configured for a specific unit.
In an embodiment, the specific unit may be a block having a specified size, or the target block.
When the specific unit is divided, the processing unit may use the configured MPM list for predicting the multiple sub-blocks generated by the division.
In an embodiment, when the size of the target block is equal to or corresponds to the specified size, the processing unit may configure an MPM list for the target block. When the target block is divided into multiple sub-blocks, the processing unit may derive the prediction mode of each of the multiple sub-blocks using the MPM list configured for the target block.
For example, when the size of the target block is 8×8 and the sub-blocks are four 4×4 blocks, an MPM list may be configured for the 8×8 block, and the MPM list may be used for each of the four 4×4 blocks.
In an embodiment, when an MPM list is configured, the processing unit may configure the MPM list based on the block having the specified size for each of the sub-blocks included in the block having the specified size. In other words, the MPM list generated for the block having the specified size may be commonly used for the sub-blocks.
For example, when the size of the target block is the specified size, the prediction modes of one or more reference blocks for the target block (rather than for the sub-blocks) may be used to configure the MPM list of each sub-block in the target block.
For example, when the size of the target block is 8×8 and the sub-blocks are four 4×4 blocks, the processing unit may configure the MPM list of each of the four sub-blocks based on the one or more reference blocks for the target block. In this case, because the prediction modes of the reference blocks for the target block have already been obtained, the processing unit may configure the MPM lists for the four sub-blocks in parallel.
Figure 22 illustrates prediction of a sub-block according to an example.
In Figure 22, the first block may be a specific block among the sub-blocks. For example, the first block may be the block, among the sub-blocks, on which prediction is performed first.
When predicting the first block, the processing unit may derive the prediction mode of the first block.
As illustrated in Figure 22, when predicting the first block, the processing unit may use reference samples adjacent to the first block. Alternatively, the reference samples may be pixels adjacent to the first block. The reference samples may be pixels in reconstructed blocks adjacent to the first block.
Figure 23 illustrates prediction of a sub-block using a reconstructed sub-block according to an example.
In Figure 23, the second block may be a specific block among the sub-blocks. For example, the second block may be 1) the block, among the sub-blocks, on which prediction is performed second, 2) the block on which prediction is performed last, 3) the block on which prediction is performed next after the prediction of the first block, 4) a block on which prediction is performed after the prediction of the first block, or 5) a block on which prediction is performed after the prediction of at least one sub-block.
As described above, when performing prediction on the second block, the processing unit may use the prediction mode derived for the first block.
As illustrated in Figure 23, when predicting the second block, the processing unit may use reference samples adjacent to the second block.
The reference samples may be pixels in reconstructed blocks adjacent to the second block. The reference samples may include reconstructed pixels in the reconstructed block of the first block.
Alternatively, the reference samples may include reconstructed pixels in the reconstructed blocks of additional sub-blocks predicted before the prediction of the second block. In other words, when performing prediction on the second block, additional sub-blocks, among the multiple sub-blocks, that were predicted before the second block may be used.
Figure 24 illustrates prediction of a sub-block using external reference pixels for the sub-block according to an example.
The processing unit may use pixels external to the multiple sub-blocks as the reference samples in the prediction of a sub-block. In other words, when performing prediction on a sub-block, the processing unit may exclude the internal pixels of the multiple sub-blocks from the reference samples.
A pixel excluded from the reference samples may be replaced with 1) the nearest pixel located in the same direction as the direction of the excluded pixel, or 2) a pixel that is adjacent to the target block and is located in the same direction as the direction of the excluded pixel.
In an embodiment, in the prediction of the multiple sub-blocks, the reference samples used for prediction may be reconstructed pixels adjacent to the target block (rather than to each sub-block).
For example, as illustrated in Figure 24, in the prediction of the second block, the processing unit may exclude the pixels in the reconstructed block of the first block from the reference samples, and may use reconstructed pixels adjacent to the target block as the reference samples.
For example, when performing prediction on each of the multiple sub-blocks generated by dividing the target block, the processing unit may use reconstructed pixels adjacent to the target block as the reference samples. By determining the reference samples in this way, the values of all reference samples to be used for predicting the multiple sub-blocks can be set before the multiple sub-blocks are predicted. Therefore, before predicting the multiple sub-blocks, the processing unit may set the values of all reference samples to be used for predicting the multiple sub-blocks, and may then perform prediction on the multiple sub-blocks in parallel.
Figure 25 illustrates prediction of four sub-blocks according to an example.
As illustrated in Figure 25, a first block, a second block, a third block, and a fourth block may be generated by dividing the target block.
As described above, the processing unit may derive the prediction mode of a specific sub-block among the multiple sub-blocks.
In Figure 25, as an example, a prediction mode may be derived for the fourth block, which is the bottommost block. The derived prediction mode may be used for the remaining blocks, namely the first block, the second block, and the third block.
The processing unit may first perform prediction on the specific sub-block, among the multiple sub-blocks, for which the prediction mode has been derived. Next, the processing unit may perform prediction on the remaining blocks, other than the specific sub-block, among the multiple sub-blocks using the derived prediction mode.
In the prediction of the specific sub-block for which the prediction mode has been derived, the processing unit may use reconstructed pixels neighboring the specific sub-block and/or the target block as the reference samples.
According to Figure 25, when the fourth block is predicted, reconstructed pixels adjacent to the top of the fourth block may not be present. Therefore, when predicting the fourth block, the processing unit may use reconstructed pixels adjacent to the top of the target block as reference pixels.
The processing unit may perform prediction on the multiple sub-blocks in a predefined order. The predefined order may differ from the conventional order for blocks generated by partitioning. For example, the predefined order may be 1) an order from the bottommost block to the topmost block, or 2) an order from the rightmost block to the leftmost block. Alternatively, the predefined order may be 3) an order in which the bottommost block is selected first and thereafter the blocks ranging from the topmost block to the second-from-bottom block are selected in sequence, or 4) an order in which the rightmost block is selected first and thereafter the blocks ranging from the leftmost block to the second-from-right block are selected in sequence.
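The candidate orders above can be written out as index sequences. The following is a minimal sketch, assuming a horizontal split into sub-blocks indexed 0 (topmost) through N-1 (bottommost); the function name is illustrative and not taken from the embodiment:

```python
def prediction_order(num_subblocks, scheme):
    """Candidate prediction orders for a horizontal split, with sub-blocks
    indexed 0 (topmost) .. num_subblocks - 1 (bottommost).

    scheme 1: from the bottommost block to the topmost block
    scheme 3: the bottommost block first, then the blocks from the topmost
              block down to the second-from-bottom block, in sequence
    (Schemes 2 and 4 are the same patterns for a vertical split, with
    "rightmost"/"leftmost" in place of "bottommost"/"topmost".)
    """
    last = num_subblocks - 1
    if scheme == 1:
        return list(range(last, -1, -1))
    if scheme == 3:
        return [last] + list(range(last))
    raise ValueError("unsupported scheme")
```

For the four sub-blocks of Figure 25, scheme 3 yields the order fourth block, first block, second block, third block, which matches the sequence of predictions shown in Figures 25 through 28.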
Alternatively, the predefined order may be arbitrarily set by the encoding device 1200 and/or the decoding device 1300. When the predefined order is set by the encoding device 1200, the set predefined order may be signaled from the encoding device 1200 to the decoding device 1300.
A prediction order indicator may indicate the order in which the multiple sub-blocks are predicted. The encoding device 1200 may set the value of the prediction order indicator. The prediction order indicator may be signaled from the encoding device 1200 to the decoding device 1300 through a bitstream.
Alternatively, the predefined order may be individually derived by the encoding device 1200 and/or the decoding device 1300 based on the same predefined scheme. The processing unit may derive the predefined order using coding parameters related to the target block, or the like.
Figure 26 illustrates prediction of the first block performed after prediction of the fourth block, according to an example.
As described above, the predefined order may be used to perform prediction on the multiple sub-blocks.
The predefined order shown in Figure 26 may be specified such that prediction is first performed on the bottommost block among the multiple sub-blocks, and thereafter prediction is performed in sequence, from top to bottom, on the blocks ranging from the topmost block to the second-from-bottom block.
In the predefined order according to the embodiment, the term "bottom" may be replaced with the term "rightmost", and the term "top" may be replaced with the term "leftmost".
Since prediction is performed on the multiple sub-blocks in the predefined order, as shown in Figure 26, additional reference samples may be used compared to the configuration in which prediction is performed on the multiple sub-blocks in the existing order. Therefore, compared to conventional intra prediction, intra prediction in additional directions that use the additionally available reference samples may be applied.
When performing prediction on each of the multiple sub-blocks, the processing unit may use, as reference samples, pixels present in the reconstructed blocks of the sub-blocks predicted before the corresponding sub-block is predicted.
For example, as shown in Figure 26, the processing unit may use pixels present in the reconstructed block of the previously predicted fourth block as reference samples when performing prediction on the first block.
Compared to the case where prediction of the sub-blocks is performed only in the plain order, through the use of such a predefined order and the use of pixels present in the reconstructed blocks of previously predicted sub-blocks, reference samples may be provided in more directions. For example, as shown in Figure 26, reference samples located below the first block are provided for the first block.
In an embodiment, in the prediction of a sub-block, the processing unit may perform intra prediction that uses reference samples adjacent to the bottom of the sub-block and intra prediction that uses reference samples adjacent to the right of the sub-block.
For example, a reference sample adjacent to the bottom of the sub-block may be copied to positions in the prediction block located above, above-left and/or above-right of the reference sample. A reference sample adjacent to the right of the sub-block may be copied to positions in the prediction block located to the left, above-left and/or below-left of the reference sample.
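For the purely vertical and horizontal directions, the copying described above can be sketched as below. This is an illustrative NumPy sketch under stated assumptions: only the straight upward and leftward copies are shown, and the angular (above-left, above-right, below-left) cases are omitted.

```python
import numpy as np

def predict_from_bottom(bottom_refs, height):
    """Copy each reference sample adjacent to the bottom of the sub-block
    straight upward through its column of the prediction block."""
    return np.tile(np.asarray(bottom_refs), (height, 1))

def predict_from_right(right_refs, width):
    """Copy each reference sample adjacent to the right of the sub-block
    leftward through its row of the prediction block."""
    return np.tile(np.asarray(right_refs).reshape(-1, 1), (1, width))
```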
Figure 27 illustrates prediction of the second block according to an example.
Prediction of the second block may be performed after prediction of the fourth block and prediction of the first block. Therefore, as described above, the reference samples used to predict the second block may include pixels in the reconstructed block of the fourth block and pixels in the reconstructed block of the first block.
In other words, when performing prediction on a specific sub-block among the multiple sub-blocks, the processing unit may use pixels in the reconstructed blocks of the other sub-blocks as reference pixels. Here, the other sub-blocks may be blocks, among the multiple sub-blocks, on which prediction was performed before prediction is performed on the specific sub-block.
Alternatively, when prediction is performed on the third block earlier than on the second block, the reference samples shown in Figure 27 may also be used to predict the third block.
Figure 28 illustrates prediction of the third block according to an example.
Prediction of the third block may be performed last, after prediction of the fourth block, prediction of the first block and prediction of the second block.
Figure 28 shows the reference samples available in the prediction of the third block.
In an embodiment, the type of the reference samples to be used for predicting a specific sub-block among the sub-blocks may be selected from among multiple reference sample types.
For example, in the prediction of the third block, the processing unit may use one of the reference samples shown in Figure 25, the reference samples shown in Figure 26, the reference samples shown in Figure 27, and the reference samples shown in Figure 28.
When performing prediction on the specific sub-block, the processing unit may use reference samples corresponding to one of the multiple reference sample types.
The multiple reference sample types may include a first reference sample type, a second reference sample type and a third reference sample type.
The reference samples of the first reference sample type may be reconstructed pixels adjacent to the target block. In other words, the reference samples of the first reference sample type may be the reference samples shown in Figure 25.
The reference samples of the second reference sample type may be the reference samples of the first reference sample type and pixels in the reconstructed blocks of previously predicted sub-blocks. In other words, the reference samples of the second reference sample type may be the reference samples shown in Figure 26 or Figure 27.
In an embodiment, the pixels in the reconstructed blocks of previously predicted sub-blocks may be used only in directions not covered by the reconstructed pixels adjacent to the target block. In other words, the reconstructed pixels adjacent to the target block may be used in preference to the pixels in the reconstructed blocks of the sub-blocks (e.g., the reference samples shown in Figure 26).
Alternatively, in an embodiment, the pixels in the reconstructed blocks of previously predicted sub-blocks may replace at least some of the reconstructed pixels adjacent to the target block (e.g., the reference samples shown in Figure 27). In other words, the pixels in the reconstructed blocks of the sub-blocks may be used in preference to the reconstructed pixels adjacent to the target block. That is, since the pixels in the reconstructed blocks of previously predicted sub-blocks are closer to the specific sub-block than the reconstructed samples adjacent to the target block, the pixels in the reconstructed blocks of previously predicted sub-blocks (rather than the reconstructed pixels adjacent to the target block) may be used for predicting the specific sub-block, and the reconstructed pixels adjacent to the target block may be used only in directions not covered by the pixels in the reconstructed blocks of previously predicted sub-blocks.
The reference samples of the third reference sample type may be reconstructed pixels adjacent to the specific sub-block. In other words, the reference samples of the third reference sample type may be the reference samples shown in Figure 28.
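The three reference sample types can be sketched as a simple selection, with the candidate pixel sets passed in by the caller; the function and parameter names here are illustrative, not taken from the embodiment:

```python
def select_reference_samples(ref_type, target_neighbors,
                             prior_subblock_pixels, subblock_neighbors):
    """Pick reference samples by type.

    Type 1: reconstructed pixels adjacent to the target block (Fig. 25).
    Type 2: the type-1 samples plus pixels from the reconstructed blocks
            of previously predicted sub-blocks (Fig. 26 or Fig. 27).
    Type 3: reconstructed pixels adjacent to the sub-block itself (Fig. 28).
    """
    if ref_type == 1:
        return list(target_neighbors)
    if ref_type == 2:
        return list(target_neighbors) + list(prior_subblock_pixels)
    if ref_type == 3:
        return list(subblock_neighbors)
    raise ValueError("unknown reference sample type")
```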
In an embodiment, the processing unit may determine the reference samples to be used for predicting the sub-block using information related to the target block or the sub-block.
In an embodiment, the processing unit may determine the reference samples to be used for predicting the sub-block based on a reference sample indicator.
The reference sample indicator may be an indicator that indicates the reference samples to be used for predicting a block. The reference sample indicator may indicate the reference sample type, among the multiple reference sample types, that is to be used for predicting the block.
The processing unit may set the value of the reference sample indicator.
The reference sample indicator may be signaled from the encoding device 1200 to the decoding device 1300 through a bitstream. Alternatively, the reference sample indicator may be set, at least in part, using coding parameters related to the target block or the sub-block.
When the reference sample indicator is used, the decoding device 1300 may directly determine the reference samples to be used for predicting the sub-block using the reference sample indicator provided by the encoding device 1200.
Filtering of reference samples
Before the above-described prediction is performed, the processing unit may perform filtering on the reference samples, and may determine whether to perform filtering on the reference samples.
In an embodiment, the processing unit may determine whether to perform filtering on the reference samples based on the size and/or shape of the target block.
In an embodiment, the processing unit may determine whether to perform filtering on the reference samples based on the size and/or shape of each sub-block.
In an embodiment, the processing unit may determine whether to perform filtering on the reference samples depending on whether a reconstructed block adjacent to the target block is used as a reference block for the sub-block.
In an embodiment, the processing unit may determine whether to perform filtering on the reference samples depending on whether prediction is performed on the sub-blocks in parallel.
Alternatively, in an embodiment, the processing unit may determine whether to perform filtering on the reference samples depending on whether a specific function, operation or process described in the embodiments is performed.
Alternatively, in an embodiment, the processing unit may determine whether to perform filtering on the reference samples based on coding parameters related to the target block or coding parameters related to each sub-block.
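As one hypothetical instance of the size-based criterion: the embodiment only states that size and/or shape may be the basis for the decision, so both the rule and the threshold below are assumptions made for illustration.

```python
def should_filter_reference_samples(block_width, block_height, min_side=8):
    """Hypothetical rule: filter the reference samples only for blocks
    whose sides are both at least min_side, leaving small blocks
    unfiltered. The rule and the min_side threshold are illustrative."""
    return block_width >= min_side and block_height >= min_side
```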
Figure 29 is a flowchart of a prediction method according to an embodiment.
The embodiment described above with reference to Figure 14 was described under the assumption that multiple sub-blocks are generated by partitioning the target block at step 1410, and that a prediction mode is derived for at least some of the multiple sub-blocks at step 1420.
In the present embodiment, the multiple sub-blocks may be generated by partitioning the target block after the prediction mode has been derived.
At step 2910, the processing unit may derive a prediction mode.
For example, the derived prediction mode may be the prediction mode of the target block. The processing unit may derive the prediction mode based on the above-described scheme for deriving the prediction mode of the target block.
For example, when the target block is partitioned, the derived prediction mode may be a prediction mode for predicting the multiple sub-blocks generated by partitioning the target block. In other words, the derived prediction mode may be the prediction mode used when the target block is partitioned.
In an embodiment, the derived prediction mode may include multiple prediction modes.
For example, the multiple derived prediction modes may be used to predict the multiple sub-blocks generated by partitioning the target block.
The descriptions related to the derivation of the prediction modes of the sub-blocks in the above-described embodiments may also be applied to the derivation of the prediction mode in the present embodiment. For example, an MPM may be used for the derivation of the prediction mode. A repeated description thereof will be omitted here.
At step 2920, the processing unit may generate multiple sub-blocks by partitioning the target block.
The descriptions related to the partitioning of the target block, made above with reference to step 1410, may also be applied to step 2920. A repeated description thereof will be omitted here.
At step 2930, the processing unit may perform prediction on at least some of the multiple sub-blocks using the derived prediction mode.
The descriptions related to the prediction of at least some of the multiple sub-blocks, made above with reference to step 1430 and the like, may also be applied to step 2930. However, the descriptions made with reference to steps 1420 and 1430 were made under the assumption that a prediction mode is derived for a specific sub-block among the multiple sub-blocks, and that the prediction mode derived for the specific sub-block, or a prediction mode determined based on the derived prediction mode, is used for the remaining blocks other than the specific sub-block. It may be understood from such descriptions that a prediction mode is derived at step 2910, and that the prediction mode derived at step 2910, or a prediction mode determined based on the derived prediction mode, is used for the multiple sub-blocks. A repeated description thereof will be omitted here.
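The derive-then-split flow of steps 2910 through 2930 can be sketched with caller-supplied stand-ins for the derivation, splitting and prediction routines; all three callables are assumptions made for illustration:

```python
def predict_with_derived_mode(target_block, derive_mode, split, predict):
    """Fig. 29 flow: derive the prediction mode before the split
    (step 2910), split the target block (step 2920), then apply the
    derived mode to every resulting sub-block (step 2930)."""
    mode = derive_mode(target_block)                  # step 2910
    sub_blocks = split(target_block)                  # step 2920
    return [predict(sb, mode) for sb in sub_blocks]   # step 2930
```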
Figure 30 illustrates the derivation of the prediction mode of the target block according to an example.
As described in the embodiment above with reference to Figure 29, at step 2910, the prediction mode of the target block may be derived. When the prediction mode of the target block has been derived, prediction of the first block and the second block may be performed using the derived prediction mode in a manner similar to that described above with reference to Figures 22, 23 and 24.
Alternatively, at step 2910, multiple prediction modes may be derived for the target block. The multiple derived prediction modes may be used for the prediction of each sub-block.
The processing unit may determine, according to a scheme that uses coding parameters related to the target block or the like, the prediction mode, among the multiple derived prediction modes, that is to be used for predicting a sub-block, and the sub-block for which each prediction mode is to be used.
Figure 31 is a flowchart illustrating a target block prediction method and a bitstream generation method according to an embodiment.
The target block prediction method and the bitstream generation method according to the present embodiment may be performed by the encoding device 1200. The embodiment may be a part of a target block encoding method or a video encoding method.
At step 3110, the processing unit 1210 may partition a block and derive a prediction mode.
Step 3110 may correspond to steps 1410 and 1420 described above with reference to Figure 14. Step 3110 may correspond to steps 2910 and 2920 described above with reference to Figure 29.
At step 3120, the processing unit 1210 may perform prediction using the derived prediction mode.
Step 3120 may correspond to step 1430 described above with reference to Figure 14. Step 3120 may correspond to step 2930 described above with reference to Figure 29.
At step 3130, the processing unit 1210 may generate prediction information. The prediction information may be generated at least in part at step 3110 or 3120.
The prediction information may be information for the above-described block partitioning and prediction mode derivation. For example, the prediction information may include the above-described indicators.
At step 3140, the processing unit 1210 may generate a bitstream.
The bitstream may include information about the encoded target block. For example, the information about the encoded target block may include the transformed and quantized coefficients of the target block and/or the sub-blocks, and the coding parameters of the target block and/or the sub-blocks. The bitstream may include the prediction information.
The processing unit 1210 may perform entropy encoding on the prediction information, and may generate a bitstream that includes the entropy-encoded prediction information.
The processing unit 1210 may store the generated bitstream in the storage 1240. Alternatively, the communication unit 1220 may transmit the bitstream to the decoding device 1300.
Figure 32 is a flowchart illustrating a target block prediction method using a bitstream according to an embodiment.
The target block prediction method using a bitstream according to the present embodiment may be performed by the decoding device 1300. The embodiment may be a part of a target block decoding method or a video decoding method.
At step 3210, the communication unit 1320 may acquire a bitstream. The communication unit 1320 may receive the bitstream from the encoding device 1200.
The bitstream may include information about the encoded target block. For example, the information about the encoded target block may include the transformed and quantized coefficients of the target block and/or the sub-blocks, and the coding parameters of the target block and/or the sub-blocks. The bitstream may include prediction information.
The processing unit 1310 may store the acquired bitstream in the storage 1340.
At step 3220, the processing unit 1310 may acquire the prediction information from the bitstream.
The processing unit 1310 may acquire the prediction information by performing entropy decoding on the entropy-encoded prediction information of the bitstream.
At step 3230, the processing unit 1310 may perform block partitioning and derive a prediction mode using the prediction information.
Step 3230 may correspond to steps 1410 and 1420 described above with reference to Figure 14. Step 3230 may correspond to steps 2910 and 2920 described above with reference to Figure 29.
At step 3240, the processing unit 1310 may perform prediction using the derived prediction mode.
Step 3240 may correspond to step 1430 described above with reference to Figure 14. Step 3240 may correspond to step 2930 described above with reference to Figure 29.
Partitioning a block using a split indicator
In the above-described embodiments, the target block has been described as being partitioned based on the size and/or shape of the target block.
In block partitioning, a split indicator may be used. The split indicator of a block may indicate whether two or more sub-blocks are to be generated by splitting the block, and whether each of the generated sub-blocks is to be used as a unit of encoding and decoding when the block is encoded and decoded.
The descriptions related to block partitioning and block prediction in the foregoing embodiments may also be applied to the embodiments below.
In an embodiment, the block split indicator may be a binary tree split indicator, which indicates whether the block is to be split in the form of a binary tree. For example, the name of the binary tree split indicator may be "binarytree_flag" or "BTsplitFlag".
Alternatively, the block split indicator may be a quadtree indicator, which indicates whether the block is to be split in the form of a quadtree.
In an embodiment, among the values of the split indicator, a first predefined value may indicate that the block will not be split, and a second predefined value may indicate that the block will be split.
When the block split indicator has the first predefined value, the processing unit may not split the block.
In an embodiment, when the block split indicator is present and has the first predefined value, the block may not be split even if the block has a shape and form to which splitting would otherwise be applied.
In an embodiment, when the block split indicator has the second predefined value, the processing unit may generate sub-blocks by splitting the block, and may perform encoding and/or decoding on the sub-blocks. In addition, when the block split indicator has the second predefined value, the processing unit may generate sub-blocks by splitting the block, and may split a specific sub-block again depending on the form and/or shape into which the sub-blocks are split.
In an embodiment, the split indicator of the target block may indicate whether the target block itself is to be split. In addition, the split indicator of an upper-layer block of the target block may indicate whether the upper-layer block is to be split. When the split indicator of the upper-layer block indicates that the upper-layer block is to be split into multiple blocks, the processing unit may split the upper-layer block into multiple blocks that include the target block. That is, the target block described in the above embodiments may also be regarded as a block generated through splitting based on a split indicator or the like.
Figure 33 illustrates the splitting of an upper-layer block according to an example.
For example, when the split indicator of the upper-layer block has the second predefined value and the split indicator of the target block has the first predefined value, the upper-layer block may be split into multiple blocks that include the target block, and each of the multiple blocks including the target block may be the target or the unit of specific processing in encoding and/or decoding.
Figure 34 illustrates the splitting of the target block according to an example.
For example, when the split indicator of the upper-layer block has the second predefined value and the target block generated by splitting the upper-layer block has a size and/or shape to which splitting is applied, the target block may be split again into multiple sub-blocks. Each of the multiple sub-blocks may be the target or the unit of specific processing in encoding and/or decoding.
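The interplay of the two indicators can be sketched as below, covering only the indicator-driven case (the shape-driven re-split of Figure 34 is omitted). Encoding the first and second predefined values as 0 and 1 is an assumption; the text only requires two distinct values.

```python
def partition_levels(upper_flag, target_flag, split_value=1):
    """Return (upper_is_split, target_is_split). The upper-layer block is
    split into blocks including the target block when its indicator has
    the second predefined value; the target block is split further only
    when its own indicator also has that value."""
    upper_is_split = upper_flag == split_value
    target_is_split = upper_is_split and target_flag == split_value
    return upper_is_split, target_is_split
```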
Block partitioning for block transform and the like
In the foregoing embodiments, descriptions have been made under the assumption that each of the sub-blocks is a unit of prediction. In the embodiments, a sub-block may also be the unit of processing other than prediction in encoding and/or decoding.
In the examples below, embodiments in which the target block or a sub-block is used as the unit of the processing of prediction, transform, quantization, inverse transform and dequantization (inverse quantization) in the encoding and/or decoding of a block will be described.
Figure 35 is a signal flow diagram illustrating an image encoding and decoding method according to an embodiment.
At step 3510, the processing unit 1210 of the encoding device 1200 may perform prediction related to the target block.
At step 3520, the processing unit 1210 of the encoding device 1200 may perform a transform related to the target block based on the prediction.
At step 3530, the processing unit 1210 of the encoding device 1200 may generate a bitstream that includes the result of the transform.
At step 3540, the communication unit 1220 of the encoding device 1200 may transmit the bitstream to the communication unit 1320 of the decoding device 1300.
At step 3550, the processing unit 1310 of the decoding device 1300 may extract the result of the transform.
At step 3560, the processing unit 1310 of the decoding device 1300 may perform an inverse transform related to the target block.
At step 3570, the processing unit 1310 of the decoding device 1300 may perform prediction related to the target block.
The detailed functions of steps 3510, 3520, 3530, 3540, 3550, 3560 and 3570, and the prediction, transform and inverse transform of steps 3510, 3520, 3560 and 3570, will be described in more detail below.
The case where each sub-block is the unit of the transform and the inverse transform
At step 3510, the processing unit 1210 may generate a residual block of the target block by performing prediction on the target block.
At step 3520, the processing unit 1210 may perform the transform on a sub-block basis. In other words, the processing unit 1210 may generate residual sub-blocks by partitioning the residual block, and may generate the coefficients of the residual sub-blocks by performing the transform on each of the residual sub-blocks.
Hereinafter, the transform is understood to include quantization in addition to the transform itself. The inverse transform is understood to include dequantization, and is understood to be performed after dequantization is performed. In addition, the coefficients are understood to denote transformed and quantized coefficients.
The result of the transform at step 3530 may include the coefficients of the multiple sub-blocks.
At step 3560, the processing unit 1310 may perform the inverse transform on a sub-block basis using the coefficients of each sub-block. In other words, the processing unit 1310 may generate reconstructed residual sub-blocks by performing the inverse transform on the coefficients of the sub-blocks.
The reconstructed residual sub-blocks of the multiple sub-blocks may constitute the reconstructed residual block of the target block. Alternatively, the reconstructed residual block of the target block may include the multiple reconstructed residual sub-blocks.
At step 3570, the processing unit 1310 may generate a prediction block of the target block by performing prediction on the target block, and may generate a reconstructed block by adding the prediction block and the reconstructed residual block. The reconstructed block may include reconstructed samples.
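A sketch of per-sub-block transform and inverse transform follows, with a 2-point orthonormal butterfly standing in for the transform-plus-quantization of the embodiment (the stand-in is lossless, so stacking the inverse-transformed residual sub-blocks recovers the residual block of the target block exactly, mirroring steps 3520 and 3560):

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)  # orthonormal 2x2

def split_rows(residual_block, n):
    """Partition the residual block into n horizontal residual sub-blocks."""
    return np.split(residual_block, n, axis=0)

def transform_sub(residual_sub):
    """Stand-in forward transform for one 2-row residual sub-block."""
    return H @ residual_sub

def inverse_sub(coeffs):
    """Stand-in inverse transform (H is orthonormal, so H.T inverts it)."""
    return H.T @ coeffs
```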
In an embodiment, a TU split indicator may indicate whether the transform and the inverse transform are to be performed on a sub-block basis. For example, the name of the TU split indicator may be "TUsplitFlag".
The TU split indicator may be signaled through a bitstream.
In an embodiment, when the split indicator of the upper-layer block has the second predefined value, the target block may be split into multiple sub-blocks in the transform and inverse transform steps, and whether the transform and the inverse transform are to be performed on each sub-block may be signaled through the TU split indicator.
In an embodiment, when the split indicator of the upper-layer block has the second predefined value and the split indicator of the target block has the first predefined value, the target block may be split into multiple sub-blocks in the transform and inverse transform steps, and whether the transform and the inverse transform are to be performed on each sub-block may be signaled through the TU split indicator.
In an embodiment, when the shape and/or form of the target block is a shape and/or form to which splitting is applied, the target block may be split into multiple sub-blocks in the transform and inverse transform steps, and whether the transform and the inverse transform are to be performed on each sub-block may be signaled through the TU split indicator.
Here, the shape and/or form to which splitting is applied may be a shape and/or form for which the splitting of the target block is described as being performed in the other foregoing embodiments, for example, a non-square shape. Alternatively, the shape and/or form to which splitting is applied may denote a state and/or case in which the splitting of the target block is described as being performed in the other foregoing embodiments. The same applies hereinafter.
In an embodiment, when the split indicator of the upper-layer block has the second predefined value, the target block may be split into multiple square sub-blocks, and the transform and the inverse transform may be performed on each sub-block in the transform and inverse transform steps.
In an embodiment, when the split indicator of the upper-layer block has the second predefined value and the split indicator of the target block has the first predefined value, the target block may be split into multiple square sub-blocks, and the transform and the inverse transform may be performed on each sub-block in the transform and inverse transform steps.
In an embodiment, when the shape and/or form of the target block is a shape and/or form to which splitting is applied, the target block may be split into multiple sub-blocks, and the transform and the inverse transform may be performed on each sub-block in the transform and inverse transform steps.
The case where each sub-block is the unit of the transform, the inverse transform and prediction
At step 3510, the processing unit 1210 may perform prediction on a sub-block basis. In other words, the processing unit 1210 may generate a residual sub-block for each sub-block by performing prediction on the sub-block.
In an embodiment, the prediction may be intra prediction.
At step 3520, the processing unit 1210 may perform the transform on a sub-block basis. In other words, the processing unit 1210 may generate the coefficients of the residual sub-blocks by performing the transform on the residual sub-blocks.
The result of the transform at step 3530 may include the coefficients of the multiple sub-blocks.
At step 3560, the processing unit 1310 may perform the inverse transform on a sub-block basis. In other words, the processing unit 1310 may generate reconstructed residual sub-blocks by performing the inverse transform on the coefficients of the residual sub-blocks.
At step 3570, the processing unit 1310 may perform prediction on a sub-block basis. In other words, the processing unit 1310 may generate a sub-block prediction block by performing prediction on each sub-block, and may generate a reconstructed sub-block by adding the sub-block prediction block and the reconstructed residual sub-block.
The reconstructed sub-blocks of the multiple sub-blocks may constitute the reconstructed block of the target block. Alternatively, the reconstructed block of the target block may include the reconstructed sub-blocks. The reconstructed block may include reconstructed samples.
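The per-sub-block predict/transform/reconstruct loop of steps 3510 through 3570 can be sketched as below; predict, fwd and inv are caller-supplied stand-ins (in the test an identity "transform" is used, so reconstruction is exact):

```python
import numpy as np

def reconstruct_per_subblock(sub_blocks, predict, fwd, inv):
    """Each sub-block is the unit of prediction, transform and inverse
    transform: predict a sub-block, transform its residual, inverse-
    transform the coefficients, and add the prediction back, so that
    later sub-blocks can reference earlier reconstructions."""
    reconstructed = []
    for sb in sub_blocks:
        pred = predict(sb, reconstructed)   # may use prior reconstructions
        coeffs = fwd(sb - pred)             # per-sub-block transform
        recon_residual = inv(coeffs)        # per-sub-block inverse transform
        reconstructed.append(pred + recon_residual)
    return reconstructed
```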
In an embodiment, a PU split indicator may indicate whether prediction, the transform and the inverse transform are to be performed on a sub-block basis. For example, the name of the PU split indicator may be "Intra_PU_SplitFlag".
The PU split indicator may be signaled through a bitstream.
In an embodiment, when the split indicator of the upper-layer block has the second predefined value, the target block may be split into multiple sub-blocks in the prediction step, and whether prediction, the transform and the inverse transform are to be performed on each sub-block may be signaled through the PU split indicator.
In an embodiment, when the split indicator of the upper-layer block has the second predefined value and the split indicator of the current block has the first predefined value, the target block may be split into multiple sub-blocks in the prediction step, and whether prediction, the transform and the inverse transform are to be performed on each sub-block may be signaled through the PU split indicator.
In an embodiment, when the shape and/or form of the target block is a shape and/or form to which splitting is applied, the target block may be split into multiple sub-blocks in the prediction step, and whether prediction, the transform and the inverse transform are to be performed on each sub-block may be signaled through the PU split indicator.
In an embodiment, when the split indicator of the upper-layer block has the second predefined value, the target block may be split into multiple partition blocks in the prediction step, and prediction, transform, and inverse transform may be performed on each partition block.
In an embodiment, when the split indicator of the upper-layer block has the second predefined value and the split indicator of the target block has the first predefined value, the target block may be split into multiple square partition blocks in the prediction step, and prediction, transform, and inverse transform may be performed on each partition block.
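Splitting a non-square target block into square partition blocks, as described here, can be sketched as below. The helper name and the 8x16 example block are assumptions for illustration.

```python
import numpy as np

def split_into_squares(block):
    """Split a rectangular block into square partition blocks along its
    longer dimension (hypothetical helper)."""
    h, w = block.shape
    side = min(h, w)
    if w > h:  # wide block: take squares left to right
        return [block[:, i:i + side] for i in range(0, w, side)]
    return [block[i:i + side, :] for i in range(0, h, side)]

squares = split_into_squares(np.zeros((8, 16)))  # 8x16 -> two 8x8 squares
print(len(squares), squares[0].shape)            # 2 (8, 8)
```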
In an embodiment, when the shape and/or form of the target block is a shape and/or form to which splitting is to be applied, the target block may be split into multiple partition blocks in the prediction step, and prediction, transform, and inverse transform may be performed on each partition block.
The case where each partition block is the unit of prediction and the target block is the unit of transform and inverse transform
In step 3510, the processing unit 1210 may perform prediction based on the partition blocks. In other words, the processing unit 1210 may generate a residual partition block for each partition block by performing prediction on the partition block.
The residual partition blocks of the multiple partition blocks may constitute the residual block of the target block.
In step 3520, the processing unit 1210 may perform the transform based on the target block. For example, the processing unit 1210 may configure the residual block of the target block using the residual partition blocks of the multiple partition blocks. Alternatively, the residual block of the target block may include the residual partition blocks of the multiple partition blocks.
The processing unit 1210 may generate the coefficients of the target block by performing the transform on the residual block of the target block.
The result of the transform in step 3530 may include the coefficients of the target block.
In step 3560, the processing unit 1310 may perform the inverse transform based on the target block using the coefficients of the target block. In other words, the processing unit 1310 may generate a reconstructed residual block by performing the inverse transform on the coefficients of the target block.
The reconstructed residual block may be composed of the reconstructed residual partition blocks of the multiple partition blocks. Alternatively, the processing unit 1310 may generate multiple reconstructed residual partition blocks by splitting the reconstructed residual block. Alternatively, the reconstructed residual block may include the multiple reconstructed residual partition blocks.
In step 3570, the processing unit 1310 may perform prediction based on the partition blocks.
In other words, the processing unit 1310 may generate a partition prediction block for each partition block by performing prediction on the partition block, and may generate a reconstructed partition block by adding the partition prediction block to the reconstructed residual partition block.
When prediction is performed on the partition blocks, different prediction modes may be applied to the multiple partition blocks.
The reconstructed partition blocks of the multiple partition blocks may constitute the reconstructed block of the target block. Alternatively, the reconstructed block of the target block may include the multiple reconstructed partition blocks.
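On the decoder side, the reconstructed residual of the whole target block is divided into residual partitions, and each partition may then be predicted with its own mode before reconstruction. The sketch below uses two trivial stand-in prediction modes and assumed names; real codecs use angular/DC/planar intra modes.

```python
import numpy as np

# Two trivial stand-in intra modes (hypothetical, for illustration only)
PRED_MODES = {
    "dc_128": lambda shape: np.full(shape, 128.0),
    "dc_64":  lambda shape: np.full(shape, 64.0),
}

def reconstruct_partitions(recon_residual, modes, axis=1):
    """Split the target block's reconstructed residual into partition
    residuals, predict each partition with its own mode, and add
    prediction + residual to obtain the reconstructed partitions."""
    residual_parts = np.split(recon_residual, len(modes), axis=axis)
    return [PRED_MODES[m](r.shape) + r for m, r in zip(modes, residual_parts)]

residual = np.zeros((4, 4))
residual[:, :2] = 5                    # left-half residual = 5
parts = reconstruct_partitions(residual, ["dc_128", "dc_64"])
recon = np.concatenate(parts, axis=1)  # reassemble the reconstructed block
print(recon[0, 0], recon[0, 3])        # 133.0 64.0
```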
In an embodiment, the TU-merge PU split indicator may indicate whether prediction is to be performed based on the partition blocks and whether the inverse transform is to be performed based on the target block. For example, the name of the TU-merge PU split indicator may be "TU_Merge_PU_splitFlag".
The TU-merge PU split indicator may be signaled through a bitstream.
In an embodiment, when the split indicator of the upper-layer block has the second predefined value, whether the transform and inverse transform are to be performed on the target block and whether prediction is to be performed on each partition block may be signaled by the TU-merge PU split indicator.
In an embodiment, when the split indicator of the upper-layer block has the second predefined value and the split indicator of the target block has the first predefined value, whether the transform and inverse transform are to be performed on the target block and whether prediction is to be performed on each partition block may be signaled by the TU-merge PU split indicator.
In an embodiment, when the shape and/or form of the target block is a shape and/or form to which splitting is to be applied, whether the transform and inverse transform are to be performed on the target block and whether prediction is to be performed on each partition block may be signaled by the TU-merge PU split indicator.
In an embodiment, when the split indicator of the upper-layer block has the second predefined value, prediction may be performed on each partition block, and the transform may be performed on the target block after prediction has been performed on the partition blocks. Further, when the split indicator of the upper-layer block has the second predefined value, the inverse transform may be performed on the target block, prediction may be performed on each partition block after the inverse transform has been performed on the target block, and reconstructed samples for the target block may be generated.
In an embodiment, when the split indicator of the upper-layer block has the second predefined value and the split indicator of the target block has the first predefined value, prediction may be performed on each partition block, and the transform may be performed on the target block after prediction has been performed on the partition blocks. Further, when the split indicator of the upper-layer block has the second predefined value and the split indicator of the target block has the first predefined value, the inverse transform may be performed on the target block, prediction may be performed on each partition block after the inverse transform has been performed on the target block, and reconstructed samples for the target block may be generated.
In an embodiment, when the shape and/or form of the target block is a shape and/or form to which splitting is to be applied, prediction may be performed on each partition block, and the transform may be performed on the target block after prediction has been performed on the partition blocks. Further, when the shape and/or form of the target block is a shape and/or form to which splitting is to be applied, the inverse transform may be performed on the target block, prediction may be performed on each partition block after the inverse transform has been performed on the target block, and reconstructed samples for the target block may be generated.
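The two flags discussed above select different granularities for prediction versus transform/inverse transform. The flag names come from the text; the decision encoding below is an assumed sketch, not the patent's normative parsing.

```python
def coding_units(tu_merge_pu_split_flag, intra_pu_split_flag):
    """Return (prediction unit, transform unit) granularities implied by
    the two flags described in the text (hypothetical encoding):
      - TU_Merge_PU_splitFlag: predict per partition block, transform the
        whole target block
      - Intra_PU_SplitFlag:    predict and transform per partition block
      - neither:               predict and transform the whole target block
    """
    if tu_merge_pu_split_flag:
        return ("partition", "target")
    if intra_pu_split_flag:
        return ("partition", "partition")
    return ("target", "target")

print(coding_units(True, False))   # ('partition', 'target')
print(coding_units(False, True))   # ('partition', 'partition')
print(coding_units(False, False))  # ('target', 'target')
```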
In the above-described embodiments, although the methods have been described based on flowcharts as a series of steps or units, the present disclosure is not limited to the sequence of the steps, and some steps may be performed in a sequence different from that described or simultaneously with other steps. Further, those skilled in the art will understand that the steps shown in the flowcharts are not exclusive, that other steps may also be included, and that one or more steps in the flowcharts may be deleted without departing from the scope of the disclosure.
The above-described embodiments according to the present disclosure may be implemented as programs that can be executed by various computer devices, and may be recorded on computer-readable storage media. The computer-readable storage media may include program instructions, data files, and data structures, either alone or in combination. The program instructions recorded on the storage media may be specially designed or configured for the present disclosure, or may be known to or available to those of ordinary skill in the computer software field. Examples of the computer storage media include all types of hardware devices specially configured to record and execute program instructions, such as magnetic media (such as hard disks, floppy disks, and magnetic tape), optical media (such as compact disc (CD)-ROM and digital versatile discs (DVDs)), and magneto-optical media (such as floptical disks, ROM, RAM, and flash memory). Examples of the program instructions include machine code (such as code created by a compiler) and high-level language code that can be executed by a computer using an interpreter. The hardware devices may be configured to operate as one or more software modules in order to perform the operations of the present disclosure, and vice versa.
Although, as described above, the present disclosure has been described based on specific details, such as detailed components and a limited number of embodiments and drawings, these specific details are provided only to facilitate understanding of the disclosure; the disclosure is not limited to these embodiments, and those skilled in the art may practice various changes and modifications based on the above description.
Therefore, it should be understood that the spirit of the present embodiments is not limited to the above-described embodiments, and that the appended claims, their equivalents, and modifications thereof fall within the scope of the disclosure.
Claims (20)
1. An encoding method, comprising:
generating multiple partition blocks by splitting a target block;
deriving a prediction mode for at least some of the multiple partition blocks; and
performing prediction on the multiple partition blocks based on the derived prediction mode.
2. A decoding method, comprising:
generating multiple partition blocks by splitting a target block;
deriving a prediction mode for at least some of the multiple partition blocks; and
performing prediction on the multiple partition blocks based on the derived prediction mode.
3. The decoding method of claim 2, wherein whether the target block is to be split is determined based on information related to the target block.
4. The decoding method of claim 2, wherein whether the target block is to be split, and which type of split is to be used, are determined based on a block split indicator.
5. The decoding method of claim 2, wherein the target block is split based on the size of the target block.
6. The decoding method of claim 2, wherein the target block is split based on the shape of the target block.
7. The decoding method of claim 2, wherein:
the prediction mode is derived for a specific partition block among the multiple partition blocks, and
the specific partition block is a block located at a specific position among the multiple partition blocks.
8. The decoding method of claim 7, wherein the prediction mode derived for the specific partition block is used for the remaining partition blocks, other than the specific partition block, among the multiple partition blocks.
9. The decoding method of claim 7, wherein a prediction mode determined by combining the prediction mode derived for the specific partition block with an additional prediction mode is used for the remaining partition blocks, other than the specific partition block, among the multiple partition blocks.
10. The decoding method of claim 2, wherein a most probable mode (MPM) list is used for the derivation of the prediction mode.
11. The decoding method of claim 10, wherein:
the MPM list includes multiple MPM lists, and
the MPM candidate modes in the multiple MPM lists do not overlap each other.
12. The decoding method of claim 10, wherein:
the MPM list is configured for a specific unit, and
the specific unit is the target block.
13. The decoding method of claim 10, wherein the MPM list for the multiple partition blocks is configured based on one or more reference blocks for the target block.
14. The decoding method of claim 2, wherein a prediction mode derived for a first block among the multiple partition blocks is used for prediction of a second block among the multiple partition blocks.
15. The decoding method of claim 14, wherein reconstructed pixels of the first block are used as reference samples for performing prediction on the second block.
16. The decoding method of claim 14, wherein the reference samples used for prediction of the multiple partition blocks are reconstructed pixels adjacent to the target block.
17. The decoding method of claim 2, wherein the prediction mode is derived for a lowermost block or a rightmost block among the multiple partition blocks.
18. The decoding method of claim 17, wherein reconstructed pixels adjacent to the top of the target block are used as reference pixels for performing prediction on the lowermost block.
19. The decoding method of claim 2, wherein:
prediction is performed on the multiple partition blocks in a predefined order, and
the predefined order is an order from the lowermost block to the uppermost block, an order from the rightmost block to the leftmost block, an order in which the lowermost block is selected first and thereafter the blocks in the range from the uppermost block to the second-lowermost block are sequentially selected, or an order in which the rightmost block is selected first and thereafter the blocks in the range from the leftmost block to the second-rightmost block are sequentially selected.
20. A decoding method, comprising:
deriving a prediction mode;
generating multiple partition blocks by splitting a target block; and
performing prediction on the multiple partition blocks based on the derived prediction mode.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311481738.XA CN117255196A (en) | 2017-03-22 | 2018-03-22 | Prediction method and device based on block form |
CN202311484265.9A CN117255198A (en) | 2017-03-22 | 2018-03-22 | Prediction method and device based on block form |
CN202311482423.7A CN117255197A (en) | 2017-03-22 | 2018-03-22 | Prediction method and device based on block form |
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR20170036257 | 2017-03-22 | ||
KR10-2017-0036257 | 2017-03-22 | ||
KR10-2017-0155097 | 2017-11-20 | ||
KR20170155097 | 2017-11-20 | ||
PCT/KR2018/003392 WO2018174617A1 (en) | 2017-03-22 | 2018-03-22 | Block form-based prediction method and device |
Related Child Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311481738.XA Division CN117255196A (en) | 2017-03-22 | 2018-03-22 | Prediction method and device based on block form |
CN202311482423.7A Division CN117255197A (en) | 2017-03-22 | 2018-03-22 | Prediction method and device based on block form |
CN202311484265.9A Division CN117255198A (en) | 2017-03-22 | 2018-03-22 | Prediction method and device based on block form |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110476425A true CN110476425A (en) | 2019-11-19 |
CN110476425B CN110476425B (en) | 2023-11-28 |
Family
ID=63863951
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311482423.7A Pending CN117255197A (en) | 2017-03-22 | 2018-03-22 | Prediction method and device based on block form |
CN201880020454.1A Active CN110476425B (en) | 2017-03-22 | 2018-03-22 | Prediction method and device based on block form |
CN202311484265.9A Pending CN117255198A (en) | 2017-03-22 | 2018-03-22 | Prediction method and device based on block form |
CN202311481738.XA Pending CN117255196A (en) | 2017-03-22 | 2018-03-22 | Prediction method and device based on block form |
Country Status (2)
Country | Link |
---|---|
KR (3) | KR102310730B1 (en) |
CN (4) | CN117255197A (en) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20200034639A (en) * | 2018-09-21 | 2020-03-31 | Electronics and Telecommunications Research Institute | Method and apparatus for encoding/decoding image, recording medium for storing bitstream
WO2020084473A1 (en) | 2018-10-22 | 2020-04-30 | Beijing Bytedance Network Technology Co., Ltd. | Multi- iteration motion vector refinement |
KR20240007302A (en) | 2018-11-12 | 2024-01-16 | 베이징 바이트댄스 네트워크 테크놀로지 컴퍼니, 리미티드 | Simplification of combined inter-intra prediction |
WO2020103852A1 (en) | 2018-11-20 | 2020-05-28 | Beijing Bytedance Network Technology Co., Ltd. | Difference calculation based on patial position |
CN111327899A (en) * | 2018-12-16 | 2020-06-23 | 华为技术有限公司 | Video decoder and corresponding method |
US11533506B2 (en) | 2019-02-08 | 2022-12-20 | Tencent America LLC | Method and apparatus for video coding |
CN113545065B (en) | 2019-03-06 | 2023-12-12 | 北京字节跳动网络技术有限公司 | Use of converted uni-directional prediction candidates |
KR20200113173A (en) * | 2019-03-20 | 2020-10-06 | Hyundai Motor Company | Method and Apparatus for Intra Prediction Based on Deriving Prediction Mode
US20220377335A1 (en) * | 2019-09-21 | 2022-11-24 | Lg Electronics Inc. | Transform-based image coding method and device therefor |
WO2022114752A1 (en) * | 2020-11-24 | 2022-06-02 | Hyundai Motor Company | Block splitting structure for efficient prediction and transformation, and method and apparatus for video encoding and decoding using block splitting structure
WO2023075120A1 (en) * | 2021-10-25 | 2023-05-04 | Hyundai Motor Company | Video coding method and apparatus using various block partitioning structures
WO2023085600A1 (en) * | 2021-11-10 | 2023-05-19 | Hyundai Motor Company | Method and device for video encoding using implicit arbitrary block division and predictions arising therefrom
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110243225A1 (en) * | 2010-04-05 | 2011-10-06 | Samsung Electronics Co., Ltd. | Determining intra prediction mode of image coding unit and image decoding unit |
WO2012096095A1 (en) * | 2011-01-12 | 2012-07-19 | NTT Docomo, Inc. | Image predict coding method, image predict coding device, image predict coding program, image predict decoding method, image predict decoding device, and image predict decoding program
US20130003855A1 (en) * | 2010-01-12 | 2013-01-03 | Lg Electronics Inc. | Processing method and device for video signals |
US20130177079A1 (en) * | 2010-09-27 | 2013-07-11 | Lg Electronics Inc. | Method for partitioning block and decoding device |
CN103250416A (en) * | 2010-12-06 | 2013-08-14 | SK Telecom Co., Ltd. | Method and device for encoding/decoding image by inter prediction using random block
KR20140008503A (en) * | 2012-07-10 | 2014-01-21 | Electronics and Telecommunications Research Institute | Method and apparatus for image encoding/decoding
CN103765892A (en) * | 2011-06-28 | 2014-04-30 | 三星电子株式会社 | Method and apparatus for coding video and method and apparatus for decoding video, accompanied with intra prediction |
AU2015202844A1 (en) * | 2011-01-12 | 2015-06-11 | Ntt Docomo, Inc. | Image predict coding method, image predict coding device, image predict coding program, image predict decoding method, image predict decoding device, and image predict decoding program |
CN104918045A (en) * | 2011-11-23 | 2015-09-16 | Humax Holdings Co., Ltd. | Method for encoding/decoding of video using common merging candidate set of asymmetric partitions
KR20160143588A (en) * | 2015-06-05 | 2016-12-14 | Intellectual Discovery Co., Ltd. | Method and apparatus for encoding/decoding for intra prediction mode
US20170041616A1 (en) * | 2015-08-03 | 2017-02-09 | Arris Enterprises Llc | Intra prediction mode selection in video coding |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DK3468197T3 (en) * | 2011-09-09 | 2022-05-16 | Lg Electronics Inc | Method for inter-prediction and arrangement thereof |
2018
- 2018-03-22 CN CN202311482423.7A patent/CN117255197A/en active Pending
- 2018-03-22 KR KR1020180033424A patent/KR102310730B1/en active IP Right Grant
- 2018-03-22 CN CN201880020454.1A patent/CN110476425B/en active Active
- 2018-03-22 CN CN202311484265.9A patent/CN117255198A/en active Pending
- 2018-03-22 CN CN202311481738.XA patent/CN117255196A/en active Pending

2021
- 2021-10-01 KR KR1020210130679A patent/KR102504643B1/en active IP Right Grant

2023
- 2023-02-23 KR KR1020230024208A patent/KR20230035277A/en not_active Application Discontinuation
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115336276A (en) * | 2020-03-31 | 2022-11-11 | KDDI Corporation | Image decoding device, image decoding method, and program
CN115336276B (en) * | 2020-03-31 | 2024-06-04 | KDDI Corporation | Image decoding device, image decoding method, and program
Also Published As
Publication number | Publication date |
---|---|
KR20230035277A (en) | 2023-03-13 |
KR102310730B1 (en) | 2021-10-12 |
KR20210122761A (en) | 2021-10-12 |
CN117255197A (en) | 2023-12-19 |
CN117255198A (en) | 2023-12-19 |
KR102504643B1 (en) | 2023-03-02 |
CN117255196A (en) | 2023-12-19 |
KR20180107762A (en) | 2018-10-02 |
CN110476425B (en) | 2023-11-28 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||