CN110495175A - Image processing method for processing motion information for parallel processing, and method and apparatus for decoding and encoding an image using the image processing method - Google Patents
Image processing method for processing motion information for parallel processing, and method and apparatus for decoding and encoding an image using the image processing method
- Publication number
- CN110495175A (application number CN201880023458.5A)
- Authority
- CN
- China
- Prior art keywords
- block
- information
- motion information
- prediction
- current block
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION; H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/137—Motion inside a coding unit, e.g. average field, frame or block difference
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
- H04N19/119—Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
- H04N19/172—Adaptive coding characterised by the coding unit, the unit being an image region, the region being a picture, frame or field
- H04N19/176—Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
- H04N19/436—Implementation details or hardware specially adapted for video compression or decompression, using parallelised computational arrangements
- H04N19/44—Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
- H04N19/513—Processing of motion vectors
- H04N19/52—Processing of motion vectors by encoding, by predictive encoding
- H04N19/70—Syntax aspects related to video coding, e.g. related to compression standards
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computing Systems (AREA)
- Theoretical Computer Science (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
According to one embodiment of the present invention, a method of decoding an image includes the following steps: identifying the parallel motion information prediction unit index of a current block whose motion information is to be decoded; obtaining motion information of at least one neighboring block, among the neighboring blocks of the current block, that does not belong to the previously indexed parallel motion information prediction unit; and processing motion information prediction decoding for the current block based on the obtained motion information.
Description
Technical field
The present invention relates to an image processing method, to methods for decoding and encoding an image using the image processing method, and to an apparatus for those methods. More particularly, the present invention relates to an image processing method for processing motion information for parallel processing, to methods for decoding and encoding an image using the image processing method, and to an apparatus for those methods.
Background art
Digital video technology can be incorporated into a wide range of digital video devices, including, for example, digital televisions, digital direct broadcast systems, wireless broadcast systems, personal digital assistants (PDAs), laptop or desktop computers, digital cameras, digital recorders, video game devices, video game consoles, mobile phones, and satellite radio telephones. Digital video devices can implement video compression techniques (such as MPEG-2, MPEG-4, ITU-T H.264/MPEG-4 Part 10 Advanced Video Coding (AVC), or H.265/High Efficiency Video Coding (HEVC)) to transmit and receive digital video information more efficiently. Video compression techniques perform spatial prediction and temporal prediction to eliminate or reduce the redundancy inherent in video sequences.
Such image compression techniques include an inter prediction technique that predicts pixel values in the current picture from a previous or subsequent picture, an intra prediction technique that predicts pixel values in the current picture using pixel information within the current picture, and an entropy coding technique that assigns short codes to values with a high frequency of occurrence and long codes to values with a low frequency of occurrence. Using these compression techniques, image data can be compressed effectively and then transmitted or stored.
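The entropy-coding principle mentioned above (short codes for frequent values, long codes for rare ones) can be illustrated with a Huffman-style sketch. HEVC itself uses more elaborate entropy coders (CABAC); the symbols and counts below are invented purely for illustration.

```python
import heapq
from itertools import count

def huffman_code_lengths(freqs):
    """Build Huffman code lengths: frequent symbols end up with shorter codes."""
    tiebreak = count()  # keeps heap comparisons well-defined for equal frequencies
    heap = [(f, next(tiebreak), {sym: 0}) for sym, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)  # two least frequent subtrees
        f2, _, b = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**a, **b}.items()}  # one level deeper
        heapq.heappush(heap, (f1 + f2, next(tiebreak), merged))
    return heap[0][2]

# Hypothetical symbol frequencies for illustration only.
lengths = huffman_code_lengths({'zero': 60, 'small': 25, 'large': 10, 'rare': 5})
# The most frequent symbol receives the shortest code length.
assert lengths['zero'] <= lengths['small'] <= lengths['large'] <= lengths['rare']
```

This only demonstrates the variable-length-code idea the paragraph describes, not any codec's actual entropy coder.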
In order to cope cost-effectively with the various resolutions, frame rates, and the like required by these applications, a video decoding apparatus is needed that can easily be controlled according to the performance and functions required by each application.
For example, in an image compression method, a picture is partitioned into multiple blocks, each of a predetermined size, and coding is performed per block. In addition, inter prediction and intra prediction techniques that remove redundancy between pictures are used to increase compression efficiency.
In this case, a residual signal is generated using intra prediction and inter prediction. The residual signal is used because coding with a residual signal reduces the amount of data, which raises the compression ratio, and the better the prediction, the smaller the residual signal becomes.
An intra prediction method predicts the data of the current block using the pixels surrounding the current block. The difference between the actual values and the predicted values is called the residual signal block. In the case of HEVC, the number of prediction modes increases from the 9 modes used in the existing H.264/AVC to 35 modes, so intra prediction is performed with correspondingly finer subdivision.
In the case of an inter prediction method, the current block is compared with blocks in neighboring pictures to find the most similar block. The position information (Vx, Vy) of the found block is called a motion vector. The difference between the pixel values of the current block and those of the prediction block indicated by the motion vector is called the residual signal block (motion-compensated residual block).
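The relation between motion vector, prediction block, and residual described above can be sketched numerically; the frame contents, block position, and motion vector below are invented purely for illustration.

```python
import numpy as np

def motion_compensated_residual(cur, ref, block_xy, mv, size):
    """Residual = current block minus the reference block displaced by (Vx, Vy)."""
    x, y = block_xy
    vx, vy = mv
    cur_block = cur[y:y + size, x:x + size]
    pred_block = ref[y + vy:y + vy + size, x + vx:x + vx + size]
    return cur_block - pred_block

# Toy frames: the "current" frame is the reference shifted right by one pixel,
# as if the whole scene moved; an interior block avoids the wrap-around edge.
ref = np.arange(64, dtype=np.int16).reshape(8, 8)
cur = np.roll(ref, shift=(0, 1), axis=(0, 1))
res = motion_compensated_residual(cur, ref, block_xy=(2, 2), mv=(-1, 0), size=4)
# A perfect motion match leaves a zero residual, so almost nothing remains to code.
```

A motion vector that exactly captures the displacement makes the residual block all zeros, which is why good motion estimation shrinks the data to be coded.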
In this way, although intra prediction and inter prediction have been subdivided further so that the amount of residual signal data decreases, the amount of computation for processing video has greatly increased.
In particular, the complexity of determining the partition structure within a picture for image encoding and decoding has increased, the existing block partitioning methods cause difficulties in pipeline implementations, and the block sizes produced by the existing partitioning operations may be unsuitable for encoding high-resolution images.
For example, in parallel processing of motion information for inter prediction, the current block can be processed only after the coding or decoding of its left, upper, and upper-left neighboring blocks has been completed. Therefore, when a parallel pipeline is implemented, processing must wait until the block modes and partition sizes of the neighboring blocks handled in another pipeline have been determined for the current block, which causes pipeline stalls.
To solve this problem, a merge method that merges the motion information of some prediction units within an arbitrary block size has been proposed in HEVC. However, when the motion information of the first prediction unit in an arbitrary block is computed, the same problem of requiring the motion information of neighboring blocks occurs, so coding efficiency is significantly reduced.
Summary of the invention
Technical problem
The present invention has been made to solve the above problems, and an object of the present invention is to provide a method and an apparatus for decoding and encoding an image that improve coding efficiency by processing motion information prediction decoding and encoding for high-resolution images in parallel.
Solution
To achieve the above object, a method of decoding motion information according to an embodiment of the present invention includes: identifying the parallel motion information prediction unit index of a current block whose motion information is to be decoded; obtaining motion information of at least one neighboring block among the remaining neighboring blocks of the current block, excluding the blocks belonging to the previously identified parallel motion information prediction unit; and processing motion information prediction decoding for the current block based on the obtained motion information.
To achieve the above object, a method of encoding motion information according to an embodiment of the present invention includes: identifying the parallel motion information prediction unit index of a current block whose motion information is to be encoded; obtaining motion information of at least one neighboring block among the remaining neighboring blocks of the current block, excluding the blocks belonging to the previously identified parallel motion information prediction unit; and processing motion information prediction encoding for the current block based on the obtained motion information.
In addition, to achieve the above object, the method according to an embodiment of the present invention may be implemented as a program executed by a computer, and as a non-volatile recording medium that stores the program and can be read by a computer.
Advantageous effects
According to an embodiment of the present invention, blocks to be decoded can be grouped sequentially into predetermined parallel motion information prediction units.
According to an embodiment of the present invention, motion information decoding can be performed using the motion information of the remaining neighboring blocks of the current block, excluding the blocks belonging to a previous parallel motion information prediction unit.
Therefore, pipeline processing can be executed independently for each parallel motion information prediction unit, and pipeline stalls can be prevented in advance, improving encoding and decoding efficiency.
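Under one plausible reading of the exclusion rule above (similar in spirit to HEVC's parallel merge estimation region), a neighboring block is usable only if it lies outside the current block's parallel motion information prediction unit, so blocks inside one unit never depend on each other. The grid geometry and helper names below are assumptions for illustration, not the patent's actual definition.

```python
def usable_neighbors(cur_xy, neighbor_xys, unit_size):
    """Keep only neighbor blocks lying outside the current block's parallel
    motion information prediction unit (assumed here to be a square grid)."""
    def unit_index(xy):
        x, y = xy
        return (x // unit_size, y // unit_size)
    cur_unit = unit_index(cur_xy)
    return [xy for xy in neighbor_xys if unit_index(xy) != cur_unit]

# Current block at (33, 33) with hypothetical 32x32 parallel units: the left
# neighbor at (31, 33) and the upper neighbor at (33, 31) fall in different
# units and remain candidates; (32, 32) shares the unit and is excluded.
cands = usable_neighbors((33, 33), [(31, 33), (33, 31), (32, 32)], unit_size=32)
```

Because excluded neighbors are never consulted, all blocks of one unit can be processed in the same pipeline stage without waiting on each other.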
Brief description of the drawings
Fig. 1 is a block diagram showing the configuration of an image encoding apparatus according to an embodiment of the present invention.
Figs. 2 to 5 are diagrams showing a first embodiment of a method of partitioning a picture into block units and processing the image.
Fig. 6 is a block diagram showing an embodiment of a method of performing inter prediction in the image encoding apparatus.
Fig. 7 is a block diagram showing the configuration of an image decoding apparatus according to an embodiment of the present invention.
Fig. 8 is a block diagram showing an embodiment of a method of performing inter prediction in the image decoding apparatus.
Fig. 9 is a diagram showing a second embodiment of a method of partitioning a picture into block units and processing the image.
Fig. 10 is a diagram showing a third embodiment of a method of partitioning a picture into block units and processing the image.
Fig. 11 is a diagram showing an embodiment of a method of partitioning a coding unit using a binary tree structure to construct transform units.
Fig. 12 is a diagram showing a fourth embodiment of a method of partitioning a picture into block units and processing the image.
Figs. 13 and 14 are diagrams showing still another embodiment of a method of partitioning a picture into block units and processing the image.
Figs. 15 and 16 are diagrams showing an embodiment of a method of determining the partition structure of a transform unit by performing rate-distortion optimization (RDO).
Figs. 17 and 18 are flowcharts showing an image processing method for processing motion information for parallel processing according to an embodiment of the present invention.
Figs. 19 to 22 are diagrams showing a motion information processing method according to an embodiment of the present invention for each of several cases.
Detailed description of embodiments
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings. In the following description of the embodiments of the present invention, detailed descriptions of known functions and configurations incorporated herein will be omitted when they may obscure the subject matter of the present disclosure.
It will be understood that when an element is referred to as being "connected" or "coupled" to another element, the element can be directly connected or coupled to the other element, or intervening elements may be present between the two elements. In addition, a description of "including" a specific configuration in the present invention does not exclude configurations other than that configuration; it means that additional configurations can be included in the scope of practice of the present invention or within its technical scope.
The terms first, second, and so on may be used to describe various components, but the components should not be limited by these terms. The terms are used only to distinguish one component from another. For example, a first component may be referred to as a second component, and similarly, a second component may be referred to as a first component, without departing from the scope of the present invention.
In addition, the component units shown in the embodiments of the present invention are shown separately in order to present different characteristic functions; this does not mean that each component unit is configured as a separate piece of hardware or a separate software unit. That is, each component unit is listed individually for ease of description, and at least two component units may be combined into one component unit, or one component unit may be divided into multiple component units, to perform their functions. Integrated embodiments and separated embodiments of these component units are also included in the scope of the present invention, unless they depart from the essence of the present invention.
In addition, some components are not essential components performing essential functions of the present invention but are merely optional components for improving performance. The present invention can be implemented with only the components essential to its essence, excluding the components used for performance improvement, and a structure including only the essential components, without the optional components for performance improvement, is also included in the scope of the present invention.
Fig. 1 is a block diagram showing the configuration of an image encoding apparatus according to an embodiment of the present invention. The image encoding apparatus 10 includes a picture division module 110, a transform module 120, a quantization module 130, a scan module 131, an entropy coding module 140, an intra prediction module 150, an inter prediction module 160, an inverse quantization module 135, an inverse transform module 125, a post-processing module 170, a picture storage module 180, a subtractor 190, and an adder 195.
Referring to Fig. 1, the picture division module 110 analyzes the input video signal, partitions each picture into coding units to determine the prediction mode, and determines the prediction unit size for each coding unit. In addition, the picture division module 110 sends the prediction unit to be encoded to the intra prediction module 150 or the inter prediction module 160 according to the prediction mode (or prediction method). The picture division module 110 also sends the prediction unit to be encoded to the subtractor 190.
Here, a picture of an image is composed of multiple slices, and a slice can be partitioned into multiple coding tree units (CTUs), which are the basic units of picture partitioning. A coding tree unit can be partitioned into one or at least two coding units (CUs), which are the basic units of inter prediction or intra prediction. Here, the maximum sizes of the coding tree unit and of the coding unit may differ from each other, and signaling information about the maximum size may be sent to the decoding apparatus 20. This is described in more detail later with reference to Fig. 17.
A coding unit (CU) can be partitioned into one or at least two prediction units (PUs), which are the basic units of prediction. In this case, the encoding apparatus 10 determines either inter prediction or intra prediction as the prediction method for each coding unit (CU) obtained from the partitioning operation, but prediction blocks may be generated differently for each prediction unit (PU). In addition, a coding unit (CU) can be partitioned into one or two or more transform units (TUs), where a transform unit (TU) is the basic unit on which a transform is performed on the residual block. In this case, the picture division module 110 can send image data to the subtractor 190 in units of the blocks obtained from the partitioning operation as described above (for example, prediction units (PUs) or transform units (TUs)).
Referring to Fig. 2, a coding tree unit (CTU) having a maximum size of 256 × 256 pixels is partitioned, using a quadtree structure, into four coding units (CUs), each having a square shape. Each of the four square coding units (CUs) can be further partitioned using the quadtree structure, and the depth of a coding unit (CU) takes an integer value from 0 to 3.
A coding unit (CU) can be partitioned into one or at least two prediction units (PUs) according to the prediction mode. In the case of the intra prediction mode, when the size of a coding unit (CU) is 2N × 2N, a prediction unit (PU) has the 2N × 2N size shown in Fig. 3(a) or the N × N size shown in Fig. 3(b). In the case of the inter prediction mode, when the size of a coding unit (CU) is 2N × 2N, a prediction unit (PU) has any one of the following sizes: 2N × 2N shown in Fig. 4(a), 2N × N shown in Fig. 4(b), N × 2N shown in Fig. 4(c), N × N shown in Fig. 4(d), 2N × nU shown in Fig. 4(e), 2N × nD shown in Fig. 4(f), nL × 2N shown in Fig. 4(g), and nR × 2N shown in Fig. 4(h).
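The eight inter PU geometries listed above can be enumerated as width × height pairs, assuming the HEVC convention that the asymmetric offsets nU/nD/nL/nR equal N/2; this sketch only tabulates sizes, not positions.

```python
def inter_pu_sizes(n):
    """All inter PU partitions of a 2N x 2N coding unit, as (w, h) lists,
    one list per partition mode; n is N, so the CU is 2n x 2n."""
    two_n, half = 2 * n, n // 2  # half = n/2, the assumed asymmetric offset
    return {
        '2Nx2N': [(two_n, two_n)],
        '2NxN':  [(two_n, n)] * 2,
        'Nx2N':  [(n, two_n)] * 2,
        'NxN':   [(n, n)] * 4,
        '2NxnU': [(two_n, half), (two_n, two_n - half)],  # thin strip on top
        '2NxnD': [(two_n, two_n - half), (two_n, half)],  # thin strip at bottom
        'nLx2N': [(half, two_n), (two_n - half, two_n)],  # thin strip on left
        'nRx2N': [(two_n - half, two_n), (half, two_n)],  # thin strip on right
    }

parts = inter_pu_sizes(16)  # a 32x32 CU
# Every mode's partitions tile the full CU area exactly.
assert all(sum(w * h for w, h in v) == 32 * 32 for v in parts.values())
```

The asymmetric modes let a motion boundary that does not bisect the CU still be captured with only two PUs.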
Referring to Fig. 5, a coding unit (CU) can be partitioned, using the quadtree structure, into four transform units (TUs), each having a square shape. Each of the four square transform units (TUs) can be further partitioned using the quadtree structure, and the depth of a transform unit (TU) obtained from the quadtree partitioning operation can take an integer value from 0 to 3.
Here, when a coding unit (CU) is in the inter prediction mode, the prediction units (PUs) and transform units (TUs) obtained by partitioning the corresponding coding unit (CU) can have partition structures independent of each other. When a coding unit (CU) is in the intra prediction mode, the size of a transform unit (TU) obtained by partitioning the coding unit (CU) cannot be larger than the size of the prediction unit (PU). In addition, a transform unit (TU) obtained from the partitioning operation as described above can have a maximum size of 64 × 64 pixels.
The transform module 120 transforms the residual block, which is the residual signal between the original block of the input prediction unit (PU) and the prediction block generated by the intra prediction module 150 or the inter prediction module 160; the transform module 120 can perform the transform using the transform unit (TU) as the basic unit. In the transform process, different transform matrices can be determined according to the prediction mode (intra or inter), and since the residual signal of intra prediction has directionality that depends on the intra prediction mode, the transform matrix can be adaptively determined according to the intra prediction mode.
The basic unit of the transform can be transformed by two (horizontal and vertical) one-dimensional transform matrices. For example, in the case of inter prediction, one predetermined transform matrix may be used. In the case of intra prediction, when the intra prediction mode is horizontal, the probability that the residual block has directionality in the vertical direction is high; therefore, a DCT-based integer matrix is applied in the vertical direction and a DST-based or KLT-based integer matrix is applied in the horizontal direction. When the intra prediction mode is vertical, a DST-based or KLT-based integer matrix is applied in the vertical direction and a DCT-based integer matrix is applied in the horizontal direction. In addition, in the case of the DC mode, a DCT-based integer matrix is applied in both directions.
In the case of intra prediction, the transform matrix can also be adaptively determined based on the size of the transform unit (TU).
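The mode-dependent choice of one-dimensional transforms described above can be condensed into a small lookup; the mode labels are simplified names rather than actual HEVC mode indices, and 'DST' stands in for the DST-or-KLT alternative.

```python
def intra_transform_pair(mode):
    """Pick (vertical, horizontal) 1-D integer transforms for an intra
    residual, per the directionality rule above: a horizontal prediction
    mode leaves vertical structure in the residual, and vice versa."""
    if mode == 'horizontal':
        return ('DCT', 'DST')   # DCT vertically, DST/KLT horizontally
    if mode == 'vertical':
        return ('DST', 'DCT')   # DST/KLT vertically, DCT horizontally
    if mode == 'DC':
        return ('DCT', 'DCT')   # DCT in both directions
    return ('DCT', 'DCT')       # fallback for modes the text does not cover

assert intra_transform_pair('horizontal') == ('DCT', 'DST')
```

The table form makes the symmetry explicit: the DST-like transform always sits on the axis perpendicular to the prediction direction.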
The quantization module 130 determines a quantization step size for quantizing the coefficients of the residual block transformed by the transform matrix; the quantization step size can be determined for each quantization unit of a predetermined size or larger. The size of a quantization unit can be 8 × 8 or 16 × 16, and the quantization module 130 quantizes the coefficients of the transform block using a quantization matrix determined according to the quantization step size and the prediction mode.
In addition, the quantization module 130 may use the quantization step size of a quantization unit adjacent to the current quantization unit as the quantization step size predictor of the current quantization unit.

The quantization module 130 searches, in the order of the left quantization unit, the upper quantization unit, and the upper-left quantization unit of the current quantization unit, for one or two valid quantization step sizes, and generates the quantization step size predictor of the current quantization unit using the valid quantization step size(s).
For example, the quantization module 130 may determine the first valid quantization step size found in the above order as the quantization step size predictor, or may determine the average of the two valid quantization step sizes found in the above order as the quantization step size predictor; when only one quantization step size is valid, the quantization module 130 determines that quantization step size as the quantization step size predictor.
When the quantization step size predictor is determined, the quantization module 130 transmits the difference between the quantization step size of the current quantization unit and the quantization step size predictor to the entropy coding unit 140.
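The predictor search and differential signaling described above can be sketched as follows. This is a minimal sketch assuming the "average of the first two valid step sizes" variant from the text; `None` stands in for an invalid (absent) neighbor.

```python
def predict_q_step(left, above, above_left):
    """Scan left -> above -> above-left for up to two valid step sizes.

    One valid neighbor: use it directly; two: use their average;
    none: no predictor is available (returns None).
    """
    valid = [q for q in (left, above, above_left) if q is not None][:2]
    if not valid:
        return None
    return sum(valid) / len(valid)

def q_step_residual(current_step, left, above, above_left):
    """Difference that would be sent to the entropy coder."""
    pred = predict_q_step(left, above, above_left)
    return current_step - pred if pred is not None else current_step

print(q_step_residual(30, 26, 28, None))  # 30 - avg(26, 28) = 3.0
```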
Meanwhile, the left coding unit, the upper coding unit, and the upper-left coding unit of the current coding unit may not exist, whereas a coding unit preceding in coding order may exist within the maximum coding unit.

Therefore, the quantization step sizes of the quantization units adjacent to the current coding unit within the maximum coding unit and of the quantization unit immediately preceding in coding order may serve as candidates.
In this case, priority may be set in the following order: 1) the left quantization unit of the current coding unit, 2) the upper quantization unit of the current coding unit, 3) the upper-left quantization unit of the current coding unit, and 4) the quantization unit immediately preceding in coding order. This order may be changed, and the upper-left quantization unit may be omitted.
In addition, the quantized transform block is sent to the inverse quantization module 135 and the scanning unit 131.
The scanning unit 131 scans the coefficients of the quantized transform block and transforms them into one-dimensional quantized coefficients. In this case, since the coefficient distribution of the transform block after quantization may depend on the intra prediction mode, the scan method may be determined according to the intra prediction mode.
In addition, the coefficient scan method may be determined according to the size of the basic unit of the transform, and the scan pattern may vary according to the directional intra prediction mode. In this case, the quantized coefficients may be scanned in the reverse direction.
When the quantized coefficients are divided into multiple subsets, the same scan pattern may be applied to the quantized coefficients within each subset, and a zigzag scan or diagonal scan may be applied between subsets.
It is preferable to apply the scan pattern from the main subset, which includes the DC coefficient, to the remaining subsets in the forward direction, but the scan pattern may also be applied in the opposite direction.
Furthermore, the scan pattern between subsets may be set in the same manner as the scan pattern of the quantized coefficients within a subset, and the scan pattern between subsets may be determined according to the intra prediction mode.
In addition, the encoding device 10 may be configured such that information indicating the position of the last non-zero quantized coefficient in the transform unit (TU) and the position of the last non-zero quantized coefficient in each subset is included in the bitstream and transmitted to the decoding device 20.
The inverse quantization module 135 performs inverse quantization on the quantized coefficients described above, and the inverse transform module 125 performs an inverse transform on each transform unit (TU), reconstructing the transform coefficients obtained from the inverse quantization operation into a residual block in the spatial domain.
The adder 195 may generate a reconstructed block by adding the residual block reconstructed by the inverse transform module 125 to the prediction block received from the intra prediction module 150 or the inter prediction module 160.
In addition, the post-processing module 170 performs a deblocking filtering process for removing blocking artifacts generated in the reconstructed picture, a sample adaptive offset (SAO) application process for compensating the difference from the original image on a per-pixel basis, and an adaptive loop filtering (ALF) process for compensating the difference from the original image on a per-coding-unit basis.
The deblocking filtering process may be applied to the boundaries of prediction units (PUs) or transform units (TUs) of a predetermined size or larger.
For example, the deblocking filtering process may include: determining the boundary to be filtered, determining the boundary filtering strength to be applied to that boundary, determining whether the deblocking filter is applied, and, when it is determined that the deblocking filter is to be applied, selecting the filter to be applied to the boundary.
In addition, whether the deblocking filter is applied is determined by the following factors: i) whether the boundary filtering strength is greater than 0, and ii) whether a value indicating the degree of variation of the pixel values at the boundary of the two blocks (the P block and the Q block) adjacent to the boundary to be filtered is less than a first reference value determined by the quantization parameter.
At least two filters are preferred. When the absolute difference between the two pixels located at the block boundary is greater than or equal to a second reference value, a filter that performs relatively weak filtering is selected.

The second reference value is determined by the quantization parameter and the boundary filtering strength.
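The two decisions above (whether to filter, and which filter) can be sketched directly from the stated conditions. The threshold values are inputs here; how they are derived from the quantization parameter is left abstract.

```python
def apply_deblocking(bs, activity, first_ref):
    """Filter only if boundary strength > 0 and the pixel-variation measure
    across the P/Q boundary is below the QP-derived first reference value."""
    return bs > 0 and activity < first_ref

def choose_filter(p0, q0, second_ref):
    """Select the weaker filter when the step across the boundary is large,
    since a large step likely marks a real edge rather than a blocking artifact."""
    return "weak" if abs(p0 - q0) >= second_ref else "strong"

print(apply_deblocking(2, 3, 10), choose_filter(100, 140, 30))  # True weak
```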
Sampling point self adaptation skew (SAO) (SAO) is to reduce pixel and original in the image for applying de-blocking filter using processing
Distortion between beginning pixel.It can be determined whether to execute sampling point self adaptation skew (SAO) (SAO) application processing based on each picture or band.
A picture or slice may be partitioned into multiple offset regions, and an offset type may be determined for each offset region. The offset types include a predetermined number (for example, four) of edge offset types and two band offset types.
For example, when the offset type is an edge offset type, the edge type to which each pixel belongs is determined so that the corresponding offset is applied. The edge type is determined based on the distribution of the values of the two pixels adjacent to the current pixel.
In the adaptive loop filtering (ALF) process, filtering may be performed based on a value obtained by comparing the image reconstructed through the deblocking filtering process or the adaptive offset application process with the original image.
The picture memory module 180 receives post-processed image data from the post-processing module 170, and reconstructs and stores images on a per-picture basis. A picture may be a frame-based image or a field-based image.
The inter prediction module 160 may perform motion estimation using at least one reference picture stored in the picture memory module 180, and may determine a motion vector and a reference picture index indicating the reference picture.
In this case, according to the determined reference picture index and motion vector, the prediction block corresponding to the prediction unit to be encoded is selected from the reference picture used for motion estimation among the multiple reference pictures stored in the picture memory module 180.
The intra prediction module 150 may perform intra prediction coding using the reconstructed pixel values within the picture containing the current prediction unit.
The intra prediction module 150 receives the current prediction unit to be prediction-coded, and performs intra prediction by selecting one of a predetermined number of intra prediction modes according to the size of the current block.
The intra prediction module 150 may adaptively filter the reference pixels to generate the intra prediction block, and, when reference pixels are unavailable, may generate reference pixels using the available reference pixels.
The entropy coding module 140 may perform entropy coding on the quantized coefficients quantized by the quantization module 130, the intra prediction information received from the intra prediction module 150, the motion information received from the inter prediction module 160, and the like.
Fig. 6 is a block diagram of an embodiment of a configuration for performing inter prediction in the encoding device 10. The inter prediction encoder shown in Fig. 6 includes a motion information determination module 161, a motion information coding mode determination module 162, a motion information coding module 163, a prediction block generation module 164, a residual block generation module 165, a residual block coding module 166, and a multiplexer 167.
Referring to Fig. 6, the motion information determination module 161 determines the motion information of the current block, where the motion information includes a reference picture index and a motion vector, and the reference picture index indicates any one of the previously coded and reconstructed pictures.
The motion information may include a reference picture index indicating any one of the reference pictures belonging to list 0 (L0) when the current block is unidirectionally inter-prediction coded, and may include a reference picture index indicating one of the reference pictures of list 0 (L0) and a reference picture index indicating one of the reference pictures of list 1 (L1) when the current block is bidirectionally prediction coded.
In addition, when the current block is bidirectionally prediction coded, the motion information may include an index indicating one or two of the pictures of a combined list (LC) generated by combining list 0 and list 1.
The motion vector indicates the position of the prediction block in the picture indicated by each reference picture index, and the motion vector may be in pixel units (integer units) or sub-pixel units.
For example, the motion vector may have a resolution of 1/2, 1/4, 1/8, or 1/16 pixel. When the motion vector is not in integer units, the prediction block may be generated from pixels in integer units.
The motion information coding mode determination module 162 may determine the coding mode for the motion information of the current block as any one of skip mode, merge mode, and AMVP mode.
Skip mode is applied when there is a skip candidate having the same motion information as the motion information of the current block and the residual signal is 0. Skip mode is applied when the current block, which is the prediction unit (PU), has the same size as the coding unit.
Merge mode is applied when there is a merge candidate having the same motion information as the motion information of the current block. Merge mode is applied when the current block differs in size from the coding unit (CU), or when the sizes are the same and a residual signal exists. In addition, the merge candidates and the skip candidates may be the same.
AMVP mode is applied when neither skip mode nor merge mode applies, and the AMVP candidate having the motion vector most similar to the motion vector of the current block may be selected as the AMVP predictor.
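The three-way choice above reduces to a simple decision rule. This sketch follows the stated conditions only; a real encoder would additionally weigh rate-distortion cost, which is omitted here.

```python
def choose_mode(has_identical_candidate, residual_is_zero, pu_equals_cu):
    """Encoder-side choice among skip / merge / AMVP per the stated conditions."""
    if has_identical_candidate and residual_is_zero and pu_equals_cu:
        return "skip"    # identical candidate, zero residual, PU == CU
    if has_identical_candidate:
        return "merge"   # identical candidate, but residual exists or PU != CU
    return "amvp"        # no identical candidate: signal MV differentially

print(choose_mode(True, True, True))  # skip
```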
The motion information coding module 163 may code the motion information according to the method determined by the motion information coding mode determination module 162.
For example, the motion information coding module 163 performs a merge motion vector coding process when the motion information coding mode is skip mode or merge mode, and performs an AMVP coding process when the motion information coding mode is AMVP mode.
The prediction block generation module 164 generates a prediction block using the motion information of the current block, and, when the motion vector is in integer units, generates the prediction block of the current block by copying the block at the position indicated by the motion vector in the picture indicated by the reference picture index.
In addition, when the motion vector is not in integer units, the prediction block generation module 164 may generate the pixels of the prediction block from integer-unit pixels in the picture indicated by the reference picture index.
In this case, the prediction pixels are generated using an 8-tap interpolation filter for luma pixels and a 4-tap interpolation filter for chroma pixels.
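An 8-tap luma interpolation can be sketched as below. The coefficients shown are the well-known HEVC half-pel filter, used here purely as an illustrative tap set; the invention's actual filter coefficients are not specified in this passage. Edge samples are clamped, one common boundary convention.

```python
HALF_PEL_TAPS = [-1, 4, -11, 40, 40, -11, 4, -1]  # taps sum to 64

def interp_half_pel(samples, x):
    """Half-pel value between samples[x] and samples[x+1] using an 8-tap
    filter over samples[x-3 .. x+4]; out-of-range indices are clamped."""
    n = len(samples)
    acc = 0
    for k, tap in enumerate(HALF_PEL_TAPS):
        idx = min(max(x - 3 + k, 0), n - 1)  # clamp at the picture edge
        acc += tap * samples[idx]
    return (acc + 32) >> 6                   # round and normalize by 64

# A linear ramp interpolates to the exact midpoint:
print(interp_half_pel([0, 20, 40, 60, 80, 100, 120, 140], 3))  # 70
```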
The residual block generation module 165 generates a residual block using the current block and the prediction block of the current block. When the size of the current block is 2N × 2N, the residual block generation module 165 generates the residual block using the current block and the 2N × 2N prediction block corresponding to the current block.
In addition, when the size of the current block used for prediction is 2N × N or N × 2N, a prediction block is obtained for each of the two 2N × N blocks constituting the 2N × 2N block, and the two 2N × N prediction blocks may then be used to generate a final prediction block of size 2N × 2N.
In addition, the 2N × 2N prediction block may be used to generate a 2N × 2N residual block. To resolve the discontinuity at the boundary between the two 2N × N prediction blocks, overlap smoothing may be applied to the boundary pixels.
The residual block coding module 166 divides the residual block into one or more transform units (TUs), and each transform unit (TU) may be transform coded, quantized, and entropy coded.
The residual block coding module 166 may transform the residual block generated by the inter prediction method using an integer-based transform matrix, and the transform matrix may be an integer-based DCT matrix.
In addition, the residual block coding module 166 uses a quantization matrix to quantize the coefficients of the residual block transformed by the transform matrix, and the quantization matrix may be determined by the quantization parameter.
The quantization parameter is determined for each coding unit (CU) of a predetermined size or larger. When the current coding unit (CU) is smaller than the predetermined size, only the quantization parameter of the first coding unit (CU) in coding order among the coding units of the predetermined size or smaller is coded; the quantization parameters of the remaining coding units (CUs) are the same as that of the first coding unit (CU) and are therefore not coded.
In addition, the coefficients of the transform block may be coded using the quantization matrix determined according to the quantization parameter and the prediction mode.
The quantization parameter determined for each coding unit (CU) of the predetermined size or larger may be predictively coded using the quantization parameter of a coding unit (CU) adjacent to the current coding unit (CU).
The quantization parameter predictor of the current coding unit (CU) is generated by searching for one or two valid quantization parameters in the order of the left coding unit (CU) and the upper coding unit (CU) of the current coding unit (CU).
For example, the first valid quantization parameter found in the above order may be determined as the quantization parameter predictor. Alternatively, the search may be performed in the order of the left coding unit (CU) and the coding unit (CU) immediately preceding in coding order, with the first valid quantization parameter found being determined as the quantization parameter predictor.
The quantized coefficients of the transform block are scanned and transformed into one-dimensional quantized coefficients, and the scan method may be set differently according to the entropy coding mode.
For example, the coefficients quantized by inter prediction coding may be scanned in a predetermined manner (a zigzag scan, or a raster scan along the diagonal) when coded according to CABAC, and scanned in a different manner when coded according to CAVLC.
For example, in the inter case the scan method may be determined according to the zigzag pattern, while in the intra case the scan method may be determined according to the intra prediction mode; the coefficient scan method may also be determined differently according to the size of the basic unit of the transform.
In addition, the scan pattern may vary according to the directional intra prediction mode, and the quantized coefficients may be scanned in the reverse direction.
The multiplexer 167 multiplexes the motion information coded by the motion information coding module 163 and the residual signal coded by the residual block coding module 166.
The motion information may differ according to the coding mode; for example, in the case of skip or merge, the motion information may include only an index indicating the predictor, whereas in the case of AMVP, the motion information may include the reference picture index of the current block, the differential motion vector, and the AMVP index.
Hereinafter, an embodiment of the operation of the intra prediction module 150 shown in Fig. 1 will be described in detail.
First, the intra prediction module 150 receives the prediction mode information and the size of the prediction unit (PU) from the picture division module 110, and reads reference pixels from the picture memory module 180 to determine the intra prediction mode of the prediction unit (PU).
The intra prediction module 150 determines whether reference pixels need to be generated by checking whether any reference pixels are unavailable, and may use the reference pixels to determine the intra prediction mode of the current block.
When the current block is located at the upper boundary of the current picture, the pixels adjacent to the upper side of the current block are not defined. When the current block is located at the left boundary of the current picture, the pixels adjacent to the left side of the current block are not defined. In these cases, the pixels may be determined to be unavailable pixels.
In addition, when the current block is located at a slice boundary such that the pixels adjacent to the upper side or left side of the slice are not previously coded and reconstructed pixels, these pixels may also be determined to be unavailable pixels.
When there are no pixels adjacent to the left side or upper side of the current block, or no previously coded and reconstructed pixels, the intra prediction mode of the current block may be determined using only the available pixels.
In addition, the available reference pixels of the current block may be used to generate reference pixels at unavailable positions. For example, when the pixels of the upper block are unavailable, some or all of the left pixels may be used to generate the upper pixels, and vice versa.
That is, a reference pixel may be generated by copying the available reference pixel nearest to the unavailable position along a predetermined direction, or, when there is no available reference pixel in the predetermined direction, a reference pixel may be generated by copying the available reference pixel at the nearest position in the opposite direction.
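The padding rule above can be sketched over a one-dimensional array of reference samples, with `None` marking unavailable positions. Scanning backward first and then forward is one concrete choice of "predetermined direction", and the mid-level fallback of 128 (for 8-bit samples when nothing is available) is an assumption for illustration.

```python
def pad_reference_pixels(ref):
    """Fill unavailable (None) reference samples from the nearest available one.

    First copies the nearest available sample in the predetermined (backward)
    direction; if none exists there, copies the nearest one in the opposite
    (forward) direction; if nothing is available at all, uses mid-level 128.
    """
    out = list(ref)
    n = len(out)
    for i in range(n):
        if out[i] is None:
            j = i - 1
            while j >= 0 and out[j] is None:   # nearest in backward direction
                j -= 1
            if j >= 0:
                out[i] = out[j]
            else:
                k = i + 1
                while k < n and ref[k] is None:  # fall back to forward direction
                    k += 1
                out[i] = ref[k] if k < n else 128
    return out

print(pad_reference_pixels([None, None, 10, 20]))  # [10, 10, 10, 20]
```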
In addition, even when the upper or left pixels of the current block exist, the reference pixels may be determined to be unavailable according to the coding mode of the block to which those pixels belong.
For example, when the block to which the reference pixels adjacent to the upper side of the current block belong is a block that was inter coded and then reconstructed, those pixels may be determined to be unavailable pixels.
In this case, if a block adjacent to the current block has been intra coded, the pixels belonging to that reconstructed block may be used to generate available reference pixels, and information indicating that the encoding device 10 determines the available reference pixels according to the coding mode is transmitted to the decoding device 20.
The intra prediction module 150 determines the intra prediction mode of the current block using the reference pixels, and the number of intra prediction modes allowed for the current block may vary according to the size of the block.
For example, when the size of the current block is 8 × 8, 16 × 16, or 32 × 32, there may be 34 intra prediction modes, and when the size of the current block is 4 × 4, there may be 17 intra prediction modes.
The 34 or 17 intra prediction modes may be configured as at least one non-directional mode and multiple directional modes.
The at least one non-directional mode may be DC mode and/or planar mode. When both DC mode and planar mode are included as non-directional modes, there may be 35 intra prediction modes regardless of the size of the current block.

In that case, two non-directional modes (DC mode and planar mode) and 33 directional modes may be included.
In the case of planar mode, the prediction block of the current block is generated using at least one pixel value located at the lower right of the current block (or a predicted value of that pixel value, hereinafter referred to as the first reference value) and the reference pixels.
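A planar-style prediction using the first reference value can be sketched as a blend of the top reference row, the left reference column, and the bottom-right value. The specific weighting below is a hypothetical illustration of the idea, not the invention's exact formula.

```python
def planar_predict(top, left, first_ref):
    """Sketch of planar-style prediction for an n x n block.

    Each pixel blends a horizontal interpolation (left reference toward the
    first reference value) with a vertical one (top reference toward the
    first reference value), with rounding. Weighting scheme is illustrative.
    """
    n = len(top)
    pred = [[0] * n for _ in range(n)]
    for y in range(n):
        for x in range(n):
            h = left[y] * (n - 1 - x) + first_ref * (x + 1)  # left -> bottom-right
            v = top[x] * (n - 1 - y) + first_ref * (y + 1)   # top  -> bottom-right
            pred[y][x] = (h + v + n) // (2 * n)
    return pred

print(planar_predict([50, 50], [50, 50], 50))  # [[50, 50], [50, 50]]
```

A flat neighborhood reproduces a flat block exactly, which is the sanity check any planar-style predictor should pass.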
The configuration of an image decoding apparatus according to an embodiment of the present invention can be derived from the configuration of the image encoding apparatus 10 described with reference to Figs. 1 to 6. For example, an image can be decoded by performing the inverse of the image encoding method described above with reference to Figs. 1 to 6.
Fig. 7 is a block diagram showing the configuration of a video decoding apparatus according to an embodiment of the present invention. The decoding device 20 includes an entropy decoding module 210, an inverse quantization/inverse transform module 220, an adder 270, a deblocking filter 250, a picture memory module 260, an intra prediction module 230, a motion compensated prediction module 240, and an intra/inter selection switch 280.
The entropy decoding module 210 receives the bitstream coded by the image encoding apparatus 10 and decodes it so that the bitstream is separated into an intra prediction mode index, motion information, a quantized coefficient sequence, and the like, and transmits the decoded motion information to the motion compensated prediction module 240.
In addition, the entropy decoding module 210 transmits the intra prediction mode index to the intra prediction module 230 and the inverse quantization/inverse transform module 220, and transmits the quantized coefficient sequence to the inverse quantization/inverse transform module 220.
The inverse quantization/inverse transform module 220 transforms the quantized coefficient sequence into a two-dimensional array of inverse-quantized coefficients, and may select one of several scan patterns for this transformation; for example, the scan pattern may be selected based on the intra prediction mode and the prediction mode (e.g., intra prediction or inter prediction) of the current block.
The inverse quantization/inverse transform module 220 applies a quantization matrix selected from multiple quantization matrices to the two-dimensional array of inverse-quantized coefficients to reconstruct the quantized coefficients.
In addition, mutually different quantization matrices may be selected according to the size of the current block to be reconstructed, and, for blocks of the same size, the quantization matrix may be selected based on at least one of the intra prediction mode and the prediction mode of the current block.
The inverse quantization/inverse transform module 220 inversely transforms the reconstructed quantized coefficients to reconstruct the residual block, and the inverse transform process may be performed using the transform unit (TU) as the basic unit.
The adder 270 reconstructs an image block by adding the residual block reconstructed by the inverse quantization/inverse transform module 220 to the prediction block generated by the intra prediction module 230 or the motion compensated prediction module 240.
The deblocking filter 250 may perform a deblocking filtering process on the reconstructed image generated by the adder 270, in order to reduce blocking artifacts caused by the image loss of the quantization process.
The picture memory module 260 is a frame memory that stores the locally decoded image on which the deblocking filter 250 has performed the deblocking filtering process.
The intra prediction module 230 reconstructs the intra prediction mode of the current block based on the intra prediction mode index received from the entropy decoding module 210, and generates a prediction block according to the reconstructed intra prediction mode.
The motion compensated prediction module 240 generates the prediction block of the current block from the picture stored in the picture memory module 260 based on the motion vector information, and, when fractional-precision motion compensation is applied, applies a selected interpolation filter to generate the prediction block.
The intra/inter selection switch 280 may provide the adder 270 with the prediction block generated in either the intra prediction module 230 or the motion compensated prediction module 240, based on the coding mode.
Fig. 8 is a block diagram showing an embodiment of a configuration for performing inter prediction in the image decoding apparatus 20. The inter prediction decoder includes a demultiplexer 241, a motion information coding mode determination module 242, a merge mode motion information decoding module 243, an AMVP mode motion information decoding module 244, a prediction block generation module 245, a residual block decoding module 246, and a reconstructed block generation module 247. Here, the merge mode motion information decoding module 243 and the AMVP mode motion information decoding module 244 may be included in a motion information decoding module 248.
Referring to Fig. 8, the demultiplexer 241 demultiplexes the coded motion information and the coded residual signal from the received bitstream, transmits the demultiplexed motion information to the motion information coding mode determination module 242, and transmits the demultiplexed residual signal to the residual block decoding module 246.
The motion information coding mode determination module 242 determines the motion information coding mode of the current block; when the skip flag of the received bitstream has a value of 1, the motion information coding mode determination module 242 determines that the motion information coding mode of the current block was coded in skip coding mode.
When the skip flag of the received bitstream has a value of 0 and the motion information received from the demultiplexer 241 has only a merge index, the motion information coding mode determination module 242 determines that the motion information coding mode of the current block was coded in merge mode.
In addition, when the skip flag of the received bitstream has a value of 0 and the motion information received from the demultiplexer 241 includes a reference picture index, a differential motion vector, and an AMVP index, the motion information coding mode determination module 242 determines that the motion information coding mode of the current block was coded in AMVP mode.
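The decoder-side determination above follows directly from the skip flag and which syntax fields are present. This sketch uses hypothetical field names (`merge_index`, `ref_idx`, `mvd`, `amvp_index`) to stand for the elements named in the text.

```python
def parse_motion_mode(skip_flag, motion_fields):
    """Determine the motion-information coding mode from the skip flag and
    the set of motion-information fields present in the bitstream."""
    if skip_flag == 1:
        return "skip"
    if set(motion_fields) == {"merge_index"}:
        return "merge"   # only a merge index was signaled
    if {"ref_idx", "mvd", "amvp_index"} <= set(motion_fields):
        return "amvp"    # reference index + MV difference + AMVP index present
    return "unknown"

print(parse_motion_mode(0, ["merge_index"]))  # merge
```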
When the motion information coding mode determination module 242 determines that the motion information coding mode of the current block is skip mode or merge mode, the merge mode motion information decoding module 243 is activated. When the motion information coding mode determination module 242 determines that the motion information coding mode of the current block is AMVP mode, the AMVP mode motion information decoding module 244 is activated.
The prediction block generation module 245 generates the prediction block of the current block using the motion information reconstructed by the merge mode motion information decoding module 243 or the AMVP mode motion information decoding module 244.
When the motion vector is in integer units, the prediction block of the current block may be generated by copying the block at the position indicated by the motion vector in the picture indicated by the reference picture index.
In addition, when the motion vector is not in integer units, the pixels of the prediction block are generated from integer-unit pixels in the picture indicated by the reference picture index. In this case, the prediction pixels may be generated using an 8-tap interpolation filter for luma pixels and a 4-tap interpolation filter for chroma pixels.
The residual block decoding module 246 entropy decodes the residual signal, and inversely scans the entropy-decoded coefficients to generate a two-dimensional quantized coefficient block. The inverse scan method may vary according to the entropy decoding method.
For example, when decoding is performed based on CABAC, the inverse scan may be applied according to a diagonal raster inverse scan method, and when decoding is performed based on CAVLC, the inverse scan may be applied according to a zigzag inverse scan method. In addition, the inverse scan method may be determined differently according to the size of the prediction block.
The residual block decoding module 246 may inversely quantize the generated coefficient block using an inverse quantization matrix, and reconstructs the quantization parameter to derive the quantization matrix. Here, the quantization step size may be reconstructed for each coding unit equal to or larger than a predetermined size.
The residual block decoding module 246 reconstructs the residual block by inversely transforming the inverse-quantized coefficient block.

The reconstructed block generation module 247 generates a reconstructed block by adding the prediction block generated by the prediction block generation module 245 to the residual block generated by the residual block decoding module 246.
Hereinafter, an embodiment of the process of reconstructing the current block through intra prediction will be described, referring again to Fig. 7.
First, the intra prediction mode of the current block is decoded from the received bitstream. For this purpose, the entropy decoding module 210 reconstructs the first intra prediction mode index of the current block with reference to one of multiple intra prediction mode tables.
Multiple intra prediction mode tables are to be encoded the shared table of equipment 10 and decoding device 20, and according to being directed to and work as
The distribution of preceding piece of multiple pieces adjacent of intra prediction mode and any one table selected can be applied.
For example, when current block left side block intra prediction mode and current block upper block intra prediction mode each other
When identical, the first intra prediction mode table of current block is rebuild using the first intra prediction mode table, otherwise, can apply second
Intra prediction mode table indexes to rebuild the first intra prediction mode of current block.
As another example, in a case where the intra prediction modes of the upper block and the left block of the current block are both directional intra prediction modes, when the direction of the intra prediction mode of the upper block and the direction of the intra prediction mode of the left block are within a predetermined angle, the first intra prediction mode index of the current block is reconstructed by applying the first intra prediction mode table, and when the direction of the intra prediction mode of the upper block and the direction of the intra prediction mode of the left block are beyond the predetermined angle, the first intra prediction mode index of the current block may be reconstructed by applying the second intra prediction mode table.
The entropy decoding module 210 transmits the reconstructed first intra prediction mode index of the current block to the intra prediction module 230.
When the index has the minimum value (that is, 0), the intra prediction module 230 having received the first intra prediction mode index determines the most probable mode of the current block as the intra prediction mode of the current block.
In addition, when the index has a value other than zero, the intra prediction module 230 compares the index indicated by the most probable mode of the current block with the first intra prediction mode index. As a result of the comparison, when the first intra prediction mode index is not smaller than the index indicated by the most probable mode of the current block, the intra prediction module 230 determines, as the intra prediction mode of the current block, the intra prediction mode corresponding to a second intra prediction mode index obtained by adding 1 to the first intra prediction mode index; otherwise, the intra prediction module 230 determines the intra prediction mode corresponding to the first intra prediction mode index as the intra prediction mode of the current block.
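The index comparison described above can be sketched as follows. This directly mirrors the rule stated in the text (index 0 selects the most probable mode; otherwise the index is incremented by one when it is not smaller than the MPM index); it is an illustration, not normative decoder code.

```python
def reconstruct_intra_mode(first_idx, mpm_idx):
    """Map the decoded first intra prediction mode index to an actual mode."""
    if first_idx == 0:           # minimum value: the MPM itself is the mode
        return mpm_idx
    if first_idx >= mpm_idx:     # skip the slot occupied by the MPM
        return first_idx + 1     # the "second intra prediction mode index"
    return first_idx
```

The increment undoes the gap the encoder created by removing the MPM from the remaining-mode list, so every mode stays addressable with one fewer code word.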
The allowable intra prediction modes for the current block may be configured with at least one non-directional mode and a plurality of directional modes.
The at least one non-directional mode may be a DC mode and/or a planar mode. In addition, the DC mode or the planar mode may be adaptively included in the allowable intra prediction mode set.
To this end, information specifying the non-directional mode included in the allowable intra prediction mode set may be included in the picture header or the slice header.
Next, the intra prediction module 230 reads reference pixels from the picture storage module 260 and determines whether an unavailable reference pixel exists, in order to generate an intra prediction block.
This determination may be made, using the decoded intra prediction mode of the current block, according to whether there are reference pixels to be used for generating the intra prediction block.
Next, when reference pixels need to be generated, the intra prediction module 230 may generate the reference pixels at the unavailable positions by using previously reconstructed reference pixels.
The definition of an unavailable reference pixel and the method of generating the reference pixels may be the same as the operation in the intra prediction module 150 of FIG. 1. However, the reference pixels used for generating the intra prediction block may be selectively reconstructed according to the decoded intra prediction mode of the current block.
In addition, the intra prediction module 230 determines whether to apply filtering to the reference pixels in order to generate the prediction block. That is, whether to apply filtering to the reference pixels in order to generate the intra prediction block of the current block is determined based on the size of the current prediction block and the decoded intra prediction mode.
As the size of a block increases, the blocking effect increases. Therefore, as the block size increases, the number of prediction modes for which the reference pixels are filtered may increase. However, when the block is larger than a predetermined size, the block is regarded as a flat region, and thus the reference pixels may not be filtered, in order to reduce complexity.
When it is determined that the reference pixels need filtering, the intra prediction module 230 filters the reference pixels using a filter.
At least two filters may be adaptively applied according to the degree of difference between the reference pixels. The filter coefficients of each filter are preferably symmetric.
In addition, the two or more filters described above may be adaptively applied according to the size of the current block. When the filters are applied, a filter with a narrow bandwidth may be applied to blocks of a small size, and a filter with a wide bandwidth may be applied to blocks of a large size.
In the case of the DC mode, since the prediction block is generated as the average value of the reference pixels, there is no need to apply a filter. In the vertical mode, in which the image has correlation in the vertical direction, there is no need to apply a filter to the reference pixels, and likewise, in the horizontal mode, in which the image has correlation in the horizontal direction, there is no need to apply a filter to the reference pixels.
Since whether to apply filtering is related to the intra prediction mode of the current block, the reference pixels may be adaptively filtered based on the size of the prediction block of the current block and the intra prediction mode.
Next, the intra prediction module 230 generates the prediction block using the reference pixels or the filtered reference pixels according to the reconstructed intra prediction mode. Since the generation of the prediction block is the same as the operation of the encoding apparatus 10, a detailed description thereof will be omitted.
The intra prediction module 230 determines whether to filter the generated prediction block, and whether to perform the filtering may be determined according to the intra prediction mode of the current block, based on information included in the slice header or the coding unit header.
When it is determined that the generated prediction block is to be filtered, the intra prediction module 230 filters the pixels at specific positions of the generated prediction block by using the available reference pixels adjacent to the current block, thereby generating new pixels.
For example, in the DC mode, among the prediction pixels, the prediction pixels adjacent to the reference pixels may be filtered using the reference pixels adjacent to those prediction pixels.
Accordingly, the prediction pixels are filtered using one or two reference pixels depending on the position of the prediction pixel, and the filtering of the prediction pixels in the DC mode may be applied to prediction blocks of all sizes.
In addition, in the vertical mode, among the prediction pixels of the prediction block, the prediction pixels adjacent to the left reference pixels may be changed using reference pixels other than the top pixels used to generate the prediction block.
Similarly, in the horizontal mode, among the generated prediction pixels, the prediction pixels adjacent to the top reference pixels may be changed using reference pixels other than the left pixels used to generate the prediction block.
The current block may be reconstructed by using the prediction block of the current block reconstructed in this manner and the decoded residual block of the current block.
FIG. 9 is a diagram illustrating a second embodiment of a method of partitioning a picture into block units and processing an image.
Referring to FIG. 9, a coding tree unit (CTU) having a maximum size of 256×256 pixels is partitioned, using a quad-tree structure, into four coding units (CU) each having a square shape.
At least one of the coding units obtained from the quad-tree partitioning may be further partitioned, using a binary-tree structure, into two coding units (CU) each having a rectangular shape.
In addition, at least one of the coding units obtained from the quad-tree partitioning may be further partitioned, using a quad-tree structure, into four coding units (CU) each having a square shape.
At least one of the coding units obtained from the binary-tree partitioning may be further partitioned, using a binary-tree structure, into two coding units (CU) each having a square or rectangular shape.
In addition, at least one of the coding units obtained from the quad-tree partitioning may be further partitioned, using a quad-tree structure or a binary-tree structure, into coding units (CU) each having a square or rectangular shape.
A coding block (CB) obtained from the binary-tree partitioning and constructed as described above may be used for prediction and transform without being further partitioned. That is, as shown in FIG. 9, the sizes of the prediction unit (PU) and the transform unit (TU) belonging to the coding block CB may be the same as the size of the coding block (CB).
As described above, a coding unit obtained from the quad-tree partitioning may be partitioned into one or at least two prediction units (PU) using the methods described with reference to FIG. 3 and FIG. 4.
A coding unit obtained from the quad-tree partitioning as described above may be partitioned into one or at least two transform units (TU) by using the method described with reference to FIG. 5, and the transform unit (TU) obtained from the partitioning may have a maximum size of 64×64 pixels.
The syntax structure used to partition and process an image on a per-block basis may indicate the partition information using flags. For example, whether a coding unit (CU) is partitioned may be indicated using split_cu_flag, and the depth of a coding unit (CU) partitioned by the binary tree may be indicated using binary_depth. Whether a coding unit (CU) is partitioned using the binary-tree structure may be indicated by a separate binary_split_flag.
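The role of split_cu_flag can be sketched with the small parser below, which consumes a pre-order stream of flags and yields the resulting leaf CUs. The 64×64 root and 8×8 minimum size are assumptions for illustration; binary_split_flag and binary_depth are omitted for brevity.

```python
def quadtree_leaves(flags, x=0, y=0, size=64, min_size=8):
    """Decode a pre-order stream of split_cu_flag values (1 = quad split,
    0 = leaf) into a list of (x, y, size) leaf coding units."""
    if size > min_size and next(flags):
        half = size // 2
        leaves = []
        for dy in (0, half):          # four quadrants, raster order
            for dx in (0, half):
                leaves += quadtree_leaves(flags, x + dx, y + dy, half, min_size)
        return leaves
    return [(x, y, size)]
```

Because the flag stream is consumed in the same depth-first order in which the encoder emitted it, encoder and decoder reconstruct identical partition trees without any explicit coordinates in the bitstream.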
The methods described above with reference to FIG. 1 to FIG. 8 are applied to the blocks (for example, the coding units (CU), prediction units (PU), and transform units (TU)) obtained by performing partitioning using the method described above with reference to FIG. 9, so that encoding and decoding of the image can be performed.
Hereinafter, with reference to FIG. 10 to FIG. 15, another embodiment of a method of partitioning a coding unit (CU) into one or at least two transform units (TU) will be described.
According to an embodiment of the present invention, a binary-tree structure may be used to partition a coding unit (CU) into transform units (TU), where the transform unit is the basic unit in which the residual block is transformed.
Referring to FIG. 10, at least one of the rectangular coding blocks CB0 and CB1, which are obtained from binary-tree partitioning and have a size of N×2N or 2N×N, is further partitioned, using the binary-tree structure, into square transform units TU0 and TU1 having a size of N×N.
As described above, the block-based image encoding method may perform the prediction, transform, quantization, and entropy coding steps.
In the prediction step, a prediction signal is generated by referring to the current coding block and an existing coded image or a surrounding image, and accordingly a differential signal with respect to the current block can be calculated.
In addition, in the transform step, a transform is performed using various transform functions with the differential signal as input. The transformed signal is classified into DC coefficients and AC coefficients, whereby energy compaction is obtained and coding efficiency is improved.
Further, in the quantization step, quantization is performed with the transform coefficients as input, and entropy coding is then performed on the quantized signal, so that the image can be encoded.
In addition, the image decoding method proceeds in the reverse order of the encoding process described above, and an image quality distortion phenomenon may occur in the quantization step.
In order to reduce the image quality distortion while improving the coding efficiency, the size or shape of the transform unit (TU) and the type of transform function to be applied may be varied according to the distribution of the differential signal input in the transform step and the characteristics of the image.
For example, when a block similar to the current block is found through a block-based motion estimation process in the prediction step, a cost measurement method such as the sum of absolute differences (SAD) or the mean squared error (MSE) is used, and the differential signal may be generated in various distributions according to the characteristics of the image.
Therefore, efficient coding can be performed by selectively determining the size or shape of the transform unit (TU) based on the various distributions of the differential signal.
For example, when a differential signal is generated in a certain coding block CBx, the coding block CBx is partitioned into two transform units (TU) using the binary-tree structure. Since the DC value is generally referred to as the average value of the input signal, when the differential signal is received as the input of the transform process, the DC value can be represented effectively by partitioning the coding block CBx into two transform units (TU).
Referring to FIG. 11, a square coding unit CU0 having a size of 2N×2N is partitioned, using the binary-tree structure, into rectangular transform units TU0 and TU1 having a size of N×2N or 2N×N.
According to another embodiment of the present invention, as described above, the step of partitioning a coding unit (CU) using the binary-tree structure may be repeated two or more times to obtain a plurality of transform units (TU).
Referring to FIG. 12, a rectangular coding block CB1 having a size of N×2N is partitioned using the binary-tree structure, the blocks of size N×N obtained from the partitioning are further partitioned using the binary-tree structure so that rectangular blocks of size N/2×N or N×N/2 are established, and the blocks of size N/2×N or N×N/2 are then further partitioned, using the binary-tree structure, into square transform units TU1, TU2, TU4, and TU5 having a size of N/2×N/2.
Referring to FIG. 13, a square coding block CB0 having a size of 2N×2N is partitioned using the binary-tree structure, the block of size N×2N obtained from the partitioning is further partitioned using the binary-tree structure so that a square block of size N×N is constructed, and the block of size N×N may then be further partitioned, using the binary-tree structure, into rectangular transform units TU1 and TU2 having a size of N/2×N.
Referring to FIG. 14, a rectangular coding block CB0 having a size of 2N×N is partitioned using the binary-tree structure, and the block of size N×N obtained from the partitioning is further partitioned using the quad-tree structure to obtain square transform units TU1, TU2, TU3, and TU4 having a size of N/2×N/2.
The methods described with reference to FIG. 1 to FIG. 8 are applied to the blocks (for example, the coding units (CU), prediction units (PU), and transform units (TU)) obtained by performing partitioning using the methods described with reference to FIG. 10 to FIG. 14, so that encoding and decoding can be performed on the image.
Hereinafter, an embodiment of a method by which the encoding apparatus 10 according to the present invention determines a block partition structure will be described.
The picture partitioning module 110 provided in the image encoding apparatus 10 performs rate-distortion optimization (RDO) according to a predetermined order, and determines the partition structures of the coding units (CU), prediction units (PU), and transform units (TU) to be partitioned as described above.
For example, in order to determine the block partition structure, the picture partitioning module 110 performs rate-distortion optimized quantization (RDO-Q) to determine the optimal block partition structure according to the bit rate and the distortion.
Referring to FIG. 15, when the coding unit (CU) has a form with a pixel size of 2N×2N, RDO is performed in the order of the transform unit (TU) partition structures of the 2N×2N pixel size shown in (a), the N×N pixel size shown in (b), the N×2N pixel size shown in (c), and the 2N×N pixel size shown in (d), thereby determining the optimal partition structure of the transform unit (TU).
Referring to FIG. 16, when the coding unit (CU) has a form with a pixel size of N×2N or 2N×N, RDO is performed in the order of the transform unit (TU) partition structures of the N×2N (or 2N×N) pixel size shown in (a), the N×N pixel size shown in (b), the N/2×N (or N×N/2) and N×N pixel sizes shown in (c), the N/2×N/2, N/2×N, and N×N pixel sizes shown in (d), and the N/2×N pixel size shown in (e), thereby determining the optimal partition structure of the transform unit (TU).
In the above description, the block partitioning method of the present invention has been described through an example in which the block partition structure is determined by performing rate-distortion optimization (RDO). However, the picture partitioning module 110 may determine the block partition structure using the sum of absolute differences (SAD) or the mean squared error (MSE), so as to maintain efficiency while reducing complexity.
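The SAD and MSE cost measures referred to above can be sketched as follows; the function names are illustrative, and the blocks are plain nested lists of pixel values.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized pixel blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def mse(block_a, block_b):
    """Mean squared error between two equally sized pixel blocks."""
    n = sum(len(row) for row in block_a)
    return sum((a - b) ** 2
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b)) / n
```

SAD needs only additions and absolute values, which is why it is the cheaper choice when complexity must be reduced, while MSE weights large differences more heavily.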
Hereinafter, the image processing method according to an embodiment of the present invention and its encoding and decoding methods will be described in more detail.
FIG. 17 and FIG. 18 are flowcharts illustrating an image processing method for processing motion information for parallel processing according to an embodiment of the present invention. FIG. 19 to FIG. 22 are diagrams illustrating methods of processing motion information for respective cases.
As described above, when the motion compensation module 160 of the encoding apparatus 10 and the motion information decoding module 248 of the decoding apparatus 20 process the encoding and decoding of motion information according to the merge mode and the AMVP mode, respectively, the processes such as motion compensation, mode decision, and entropy coding may be implemented as a parallel pipeline system in hardware modules.
Therefore, when the mode decision of the previous block is completed later than the processing time of the current block, the motion compensation processing of the current block cannot proceed, which leads to the occurrence of a pipeline stall.
In order to solve this problem, according to an embodiment of the present invention, the encoding apparatus 10 or the decoding apparatus 20 sequentially groups a plurality of blocks into parallel motion information prediction units according to a predetermined arbitrary size (S101), obtains, from the neighboring blocks, the motion information of at least one available block among the remaining blocks excluding the blocks belonging to the parallel motion information prediction unit preceding that of the current block (S103), constructs a motion information prediction list based on the obtained motion information (S105), and performs the motion information prediction decoding processing for each mode based on the motion information prediction list (S107). However, when no available block exists in step S103, the motion information prediction list in step S105 is constructed using a predetermined zero motion vector (zero MV) or the motion vector at the co-located position of the previous frame (co-located MV).
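The steps S101 to S107 above can be sketched as follows. This is a simplified model in which each neighboring block carries the index of the PMU it belongs to; candidates from the immediately preceding PMU are excluded, and the zero MV or co-located MV serves as the fallback when no candidate survives. The data layout is an assumption for illustration, not the patent's normative representation.

```python
def build_mvp_list(neighbors, current_pmu, col_mv=None):
    """Build the MVP candidate list for a block in PMU(current_pmu).

    neighbors: list of (pmu_index, motion_vector) pairs for the neighboring
    blocks. Blocks from the immediately preceding PMU are excluded (S103);
    when nothing remains, fall back to the co-located MV of the previous
    frame if given, else the zero MV (S105).
    """
    candidates = [mv for pmu, mv in neighbors if pmu != current_pmu - 1]
    if not candidates:
        candidates = [col_mv if col_mv is not None else (0, 0)]
    return candidates
```

For a block in PMU(3), a neighbor recorded as belonging to PMU(2) is skipped, so the current block never has to wait for that neighbor's pipeline stage to finish.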
More specifically, the blocks to be encoded and decoded may be sequentially grouped, according to the encoding and decoding order, into parallel motion information prediction units (PMU) of a predetermined size.
In a case where the n-th parallel motion information prediction unit to which the current block to be decoded belongs is PMU(n), when an encoding or decoding process that requires the motion vector information of the neighboring blocks (such as AMVP or merge) is performed in the motion compensation module 160 or the motion information decoding module 248, the blocks included in PMU(n-1), which has been encoded or decoded before the current block, may be excluded from the neighboring blocks.
That is, in the motion compensation module 160 or the motion information decoding module 248, when the motion vector prediction (MVP) candidate list of the current block is constructed, the motion information of the blocks included in PMU(n-1), which was encoded/decoded before PMU(n) to which the current block belongs, may be excluded. Therefore, the MVP candidate list corresponding to the current block may be determined from the motion information of the blocks among the neighboring blocks that are not included in PMU(n-1).
However, since the number of motion vectors of the neighboring blocks that can be used may be reduced, as shown in FIG. 19, the motion information of additional neighboring blocks F to I, together with the existing neighboring blocks A to E, may be used to construct the motion vector prediction (MVP) candidate list, in order to prevent a decrease in coding efficiency.
Accordingly, at least one of the motion vectors of the available blocks among the neighboring blocks that are not included in PMU(n-1) (for example, blocks of the same frame/tile/slice) may be determined as the motion vector prediction (MVP) candidate list.
In addition, referring to FIG. 18, when the motion information is decoded using a neighboring block at a predetermined position, the motion compensation module 160 or the motion information decoding module 248 identifies the neighboring block to be used for the motion estimation of the current block belonging to PMU(n) (S201); when the neighboring block is included in PMU(n-1) (S203), obtains alternative information for the motion information of the neighboring block included in PMU(n-1) (S205); and performs the motion estimation decoding processing of the current block using the alternative information, without the motion information of the neighboring block included in PMU(n-1) (S207).
Here, even though the actual motion vector information at the corresponding position is unknown, the alternative information may be calculated as a predetermined default value, or the alternative information may be a value derived from the neighboring blocks included in PMU(n). For example, the alternative information for the neighboring blocks included in PMU(n-1) may be constructed using the zero motion vector (zero MV) or the motion vector at the co-located position of the previous frame (co-located MV).
For such processing, the encoding apparatus 10 may include the size information of the PMU, in the form of syntax, in the sequence header information, the picture header information, or the slice header information for transmission. The size information of the PMU may also be explicitly or implicitly signaled as a value related to the CTU size and the minimum and maximum sizes of the CU (or PU).
In addition, the encoding apparatus 10 may sequentially group the PMUs according to the predetermined size and assign a parallel motion information prediction unit index to each unit so that the indices are prepared, and the assigned index information may be explicitly transmitted, or the index information may be derived in the decoding apparatus 20.
For example, the size of the PMU may always be equal to the minimum PU size, or the size of the PMU may be equal to the size of the PU or CU currently being encoded/decoded. In this case, the PMU size information may not be explicitly transmitted, and the index may be identified through the sequential grouping operation.
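The implicit index derivation described here can be sketched in one line: when the PMU size is fixed, the decoder can recover the PMU index from the coding order alone, with no signaling. The parameterization below is an illustrative assumption.

```python
def pmu_index(block_order, blocks_per_pmu):
    """Derive the PMU index of a block from its position in coding order
    when blocks are grouped sequentially into fixed-size PMUs."""
    return block_order // blocks_per_pmu
```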
Referring to FIG. 20, the thick lines indicate CTUs, the solid lines therein indicate PUs, and the red dotted lines indicate PMUs.
According to an embodiment of the present invention, among the motion vectors neighboring the current PU, the motion vector at the lower-left end may be a motion vector included in the immediately preceding PMU(n-1) according to the encoding and decoding order. Therefore, when motion vector prediction encoding and decoding are performed in the encoding apparatus 10 and the decoding apparatus 20, the motion vectors of the neighboring blocks included in the immediately preceding PMU(n-1) may be excluded when the motion vector list is constructed. As the alternative information for the blocks included in the PMU(n-1) of the immediately preceding index, the zero motion vector (zero MV) or the motion vector at the co-located position of the previous frame (co-located MV) may be used to construct the motion vector candidate list.
In addition, referring to FIG. 21, since the neighboring blocks at the upper right are included in the immediately preceding PMU(n-1), they are excluded from the candidate motion vectors in the motion vector prediction encoding and decoding of the current block. Alternatively, the zero motion vector (zero MV) or the motion vector at the co-located position of the previous frame (co-located MV) may be used as a candidate motion vector.
Referring to FIG. 22, a situation in which all the neighboring blocks on the left are included in the previous PMU(n-1) is described. As shown in FIG. 22, the encoding apparatus 10 and the decoding apparatus 20 exclude the motion vectors of all the left neighboring blocks from the candidate motion vectors in the motion vector prediction encoding and decoding process, and instead, the motion vectors of the upper (upper-left and upper-right) neighboring blocks may be used to construct the candidate list for the motion vector prediction.
With this construction, the dependency between the parallel motion information prediction units can be cut, and the motion information decoding processing can be performed without waiting for the pipeline processing of the preceding blocks among the neighboring blocks to be completed, whereby a pipeline stall can be prevented in advance and the encoding and decoding efficiency can be improved accordingly.
The above-described method according to the present invention may be stored in a computer-readable recording medium. The computer-readable recording medium may be a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like, and may also be implemented in the form of a carrier wave (for example, transmission over the Internet).
The computer-readable recording medium may be distributed over network-connected computer systems so that the computer-readable code can be stored and executed in a distributed manner. In addition, the functional programs, codes, and code segments for implementing the above method can be easily inferred by programmers in the technical field to which the present invention pertains.
Although exemplary embodiments of the present invention have been illustrated and described above, the present invention is not limited to the foregoing specific embodiments, and those skilled in the art may make various modifications to the present invention without departing from the gist of the invention defined in the claims. Such modifications should not be understood as departing from the technical concept or perspective of the present invention.
Claims (13)
1. A method of decoding motion information, the method comprising:
identifying a parallel motion information prediction unit index of a current block whose motion information is to be decoded;
obtaining motion information of at least one neighboring block among remaining blocks obtained by excluding, from neighboring blocks of the current block, blocks belonging to a parallel motion information prediction unit of a previous index; and
processing motion information prediction decoding for the current block based on the obtained motion information.
2. The method of claim 1, wherein the obtaining comprises:
constructing a motion information prediction list using the motion information of the remaining blocks obtained by excluding, from the neighboring blocks of the current block, the blocks belonging to the parallel motion information prediction unit of the previous index.
3. The method of claim 2, wherein the obtaining comprises:
substituting, with alternative information, motion information in the motion information prediction list at positions corresponding to the blocks belonging to the parallel motion information prediction unit of the previous index.
4. The method of claim 3, wherein the alternative information comprises a zero motion vector (zero MV).
5. The method of claim 3, wherein the alternative information comprises motion vector information of a previous frame at a co-located position (co-located MV).
6. The method of claim 1, further comprising:
receiving size information corresponding to the parallel motion information prediction unit.
7. The method of claim 6, wherein the size information is included in header information and signaled.
8. The method of claim 6, wherein the size information is related to at least one of a coding tree unit, a coding unit, and a prediction unit.
9. The method of claim 6, further comprising:
performing sequential grouping on blocks to be decoded according to the obtained size information of the parallel motion information prediction unit, to identify the parallel motion information prediction unit index.
10. A method of encoding motion information, the method comprising:
identifying a parallel motion information prediction unit index of a current block whose motion information is to be encoded;
obtaining motion information of at least one neighboring block among remaining blocks obtained by excluding, from neighboring blocks of the current block, blocks belonging to a parallel motion information prediction unit of a previous index; and
processing motion information prediction encoding for the current block based on the obtained motion information.
11. The method of claim 10, further comprising:
sequentially grouping a picture to be encoded according to parallel motion information prediction units of a predetermined size and preparing indices; and
signaling information about the prepared indices.
12. The method of claim 10, wherein the previous index is an index corresponding to a position immediately preceding, according to an encoding order, the parallel motion information prediction unit index to which the current block belongs.
13. An apparatus for decoding an image, the apparatus comprising:
a motion information decoding module configured to identify a parallel motion information prediction unit index of a current block whose motion information is to be decoded, obtain motion information of at least one neighboring block among remaining blocks obtained by excluding, from neighboring blocks of the current block, blocks belonging to a parallel motion information prediction unit of a previous index, and process motion information prediction decoding for the current block based on the obtained motion information.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211233595.6A CN115426500A (en) | 2017-03-31 | 2018-02-27 | Image decoding method and image encoding method |
CN202211233516.1A CN115604484A (en) | 2017-03-31 | 2018-02-27 | Image decoding method and image encoding method |
CN202211233463.3A CN115426499A (en) | 2017-03-31 | 2018-02-27 | Image decoding method and image encoding method |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020170042271A KR20180111378A (en) | 2017-03-31 | 2017-03-31 | A method of video processing providing independent properties between coding tree units and coding units, a method and appratus for decoding and encoding video using the processing. |
KR10-2017-0042271 | 2017-03-31 | ||
PCT/KR2018/002417 WO2018182185A1 (en) | 2017-03-31 | 2018-02-27 | Image processing method for processing motion information for parallel processing, method for decoding and encoding using same, and apparatus for same |
Related Child Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211233595.6A Division CN115426500A (en) | 2017-03-31 | 2018-02-27 | Image decoding method and image encoding method |
CN202211233463.3A Division CN115426499A (en) | 2017-03-31 | 2018-02-27 | Image decoding method and image encoding method |
CN202211233516.1A Division CN115604484A (en) | 2017-03-31 | 2018-02-27 | Image decoding method and image encoding method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110495175A true CN110495175A (en) | 2019-11-22 |
CN110495175B CN110495175B (en) | 2022-10-18 |
Family
ID=63677777
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201880023458.5A Active CN110495175B (en) | 2017-03-31 | 2018-02-27 | Image decoding method and image encoding method |
CN202211233463.3A Pending CN115426499A (en) | 2017-03-31 | 2018-02-27 | Image decoding method and image encoding method |
CN202211233595.6A Pending CN115426500A (en) | 2017-03-31 | 2018-02-27 | Image decoding method and image encoding method |
CN202211233516.1A Pending CN115604484A (en) | 2017-03-31 | 2018-02-27 | Image decoding method and image encoding method |
Family Applications After (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211233463.3A Pending CN115426499A (en) | 2017-03-31 | 2018-02-27 | Image decoding method and image encoding method |
CN202211233595.6A Pending CN115426500A (en) | 2017-03-31 | 2018-02-27 | Image decoding method and image encoding method |
CN202211233516.1A Pending CN115604484A (en) | 2017-03-31 | 2018-02-27 | Image decoding method and image encoding method |
Country Status (3)
Country | Link |
---|---|
KR (5) | KR20180111378A (en) |
CN (4) | CN110495175B (en) |
WO (1) | WO2018182185A1 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102597461B1 (en) * | 2019-01-28 | 2023-11-03 | Apple Inc. | Method for encoding/decoding video signal and apparatus therefor |
CN117376551B (en) * | 2023-12-04 | 2024-02-23 | Taobao (China) Software Co., Ltd. | Video coding acceleration method and electronic equipment |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1910933A (en) * | 2004-02-25 | 2007-02-07 | Sony Corporation | Image information encoding device and image information encoding method |
CN101455087A (en) * | 2006-05-24 | 2009-06-10 | Panasonic Corporation | Image coding device, image coding method, and image coding integrated circuit |
CN102651814A (en) * | 2011-02-25 | 2012-08-29 | Huawei Technologies Co., Ltd. | Video decoding method, video encoding method and terminal |
US20130272421A1 (en) * | 2010-12-21 | 2013-10-17 | Nec Corporation | Motion estimation device, motion estimation method, motion estimation program and video image encoding device |
CN103404149A (en) * | 2011-03-11 | 2013-11-20 | Sony Corporation | Image processing device and method |
WO2014141899A1 (en) * | 2013-03-12 | 2014-09-18 | Sony Corporation | Image processing device and method |
US20140307784A1 (en) * | 2011-11-08 | 2014-10-16 | Kt Corporation | Method and apparatus for encoding image, and method and apparatus for decoding image |
CN107079159A (en) * | 2014-10-17 | 2017-08-18 | Samsung Electronics Co., Ltd. | Method and apparatus for parallel video decoding based on a multi-core system |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107071458B (en) * | 2010-12-14 | 2020-01-03 | M&K Holdings Inc. | Apparatus for encoding moving picture |
CN102986224B (en) * | 2010-12-21 | 2017-05-24 | Intel Corporation | System and method for enhanced DMVD processing |
MX365013B (en) * | 2011-08-29 | 2019-05-20 | Ibex Pt Holdings Co Ltd | Method for generating prediction block in AMVP mode |
KR101197176B1 (en) * | 2011-09-23 | 2012-11-05 | KT Corporation | Methods of derivation of merge candidate block and apparatuses for using the same |
KR20140081682A (en) * | 2012-12-14 | 2014-07-01 | Electronics and Telecommunications Research Institute | Method and apparatus for image encoding/decoding |
2017
- 2017-03-31 KR KR1020170042271A patent/KR20180111378A/en active Application Filing

2018
- 2018-02-27 CN CN201880023458.5A patent/CN110495175B/en active Active
- 2018-02-27 CN CN202211233463.3A patent/CN115426499A/en active Pending
- 2018-02-27 CN CN202211233595.6A patent/CN115426500A/en active Pending
- 2018-02-27 WO PCT/KR2018/002417 patent/WO2018182185A1/en active Application Filing
- 2018-02-27 CN CN202211233516.1A patent/CN115604484A/en active Pending

2022
- 2022-01-04 KR KR1020220001140A patent/KR102437729B1/en active IP Right Grant
- 2022-08-24 KR KR1020220106371A patent/KR102510696B1/en active IP Right Grant

2023
- 2023-03-13 KR KR1020230032247A patent/KR102657392B1/en active IP Right Grant

2024
- 2024-04-09 KR KR1020240047784A patent/KR20240052921A/en active Application Filing
Non-Patent Citations (1)
Title |
---|
Zheng Feiyang: "Research on mode decision transcoding algorithms for HEVC and parallel implementation of the filtering module", China Master's Theses Full-text Database (Electronic Journal) *
Also Published As
Publication number | Publication date |
---|---|
KR20240052921A (en) | 2024-04-23 |
KR20180111378A (en) | 2018-10-11 |
CN115426500A (en) | 2022-12-02 |
CN115426499A (en) | 2022-12-02 |
KR20220120539A (en) | 2022-08-30 |
CN115604484A (en) | 2023-01-13 |
KR20220005101A (en) | 2022-01-12 |
CN110495175B (en) | 2022-10-18 |
KR102657392B1 (en) | 2024-04-15 |
KR102510696B1 (en) | 2023-03-16 |
KR20230038687A (en) | 2023-03-21 |
KR102437729B1 (en) | 2022-08-29 |
WO2018182185A1 (en) | 2018-10-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103703781B (en) | Video coding using adaptive motion vector resolution | |
CN104378637B (en) | Video signal decoding method | |
CN101385347B (en) | Method of and apparatus for video intraprediction encoding/decoding | |
CN103891293B (en) | Method and apparatus for adaptive loop filtering of chroma components | |
CN104378639B (en) | Method of decoding a video signal | |
CN107071456B (en) | Method for decoding a video signal | |
CN109845269A (en) | Image processing method, and image decoding and encoding method using the same | |
CN110100436A (en) | Coding video data using derived chroma mode | |
CN108141588A (en) | Inter-frame prediction method and device in an image coding system | |
CN110495173A (en) | Image processing method for performing processing of coding tree units and coding units, image decoding and encoding method using the same, and apparatus therefor | |
CN109314790A (en) | Image processing method, and image decoding and encoding method using the same | |
CN105723707A (en) | Color residual prediction for video coding | |
CN109644281A (en) | Method and apparatus for processing a video signal | |
CN108293113A (en) | Modeling-based picture decoding method and device in an image coding system | |
CN105379270A (en) | Inter-color component residual prediction | |
KR20110115987A (en) | Video coding and decoding method and apparatus | |
CN109417640A (en) | Method and apparatus for processing a video signal | |
CN110050467A (en) | Video signal decoding method and device therefor | |
CN104041045A (en) | Secondary boundary filtering for video coding | |
CN103918263A (en) | Device and methods for scanning rectangular-shaped transforms in video coding | |
CN110495175A (en) | Image processing method for processing motion information for parallel processing, method for decoding and encoding using the same, and apparatus therefor | |
CN112565767B (en) | Video decoding method, video encoding method and related equipment | |
KR101659343B1 (en) | Method and apparatus for processing moving image | |
JP2024023525A (en) | Image coding method and image decoding method | |
CN110495171 (en) | Image processing method providing improved arithmetic coding, method for decoding and encoding an image using the same, and apparatus therefor | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||