CN106851315A - Method of decoding a video signal - Google Patents
- Publication number: CN106851315A (application CN201610847470.0A)
- Authority: CN (China)
- Prior art keywords: pixel, block, prediction, adjacent, sample
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H04N19/593: predictive coding involving spatial prediction techniques
- H04N7/24: systems for the transmission of television signals using pulse code modulation
- H04N19/159: prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
- H04N19/105: selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
- H04N19/11: selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
- H04N19/117: filters, e.g. for pre-processing or post-processing
- H04N19/13: adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
- H04N19/132: sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
- H04N19/176: adaptive coding characterised by the coding unit, the unit being an image region, e.g. a block or macroblock
- H04N19/18: adaptive coding characterised by the coding unit, the unit being a set of transform coefficients
- H04N19/182: adaptive coding characterised by the coding unit, the unit being a pixel
- H04N19/184: adaptive coding characterised by the coding unit, the unit being bits, e.g. of the compressed video stream
- H04N19/44: decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
- H04N19/50: coding or decoding of digital video signals using predictive coding
- H04N19/61: transform coding in combination with predictive coding
- H04N19/91: entropy coding, e.g. variable length coding [VLC] or arithmetic coding
- H04N19/124: quantisation
- H04N19/192: the adaptation method, adaptation tool or adaptation type being iterative or recursive
- H04N19/48: compressed domain processing techniques other than decoding, e.g. modification of transform coefficients, variable length coding [VLC] data or run-length data
Landscapes
- Engineering & Computer Science
- Multimedia
- Signal Processing
- Compression Or Coding Systems Of Tv Signals
- Compression, Expansion, Code Conversion, And Decoders
Abstract
The present invention relates to a method of decoding a video signal. The method includes: obtaining residual samples for a current block from a bitstream; performing intra prediction for the current block based on neighboring samples adjacent to the current block; and obtaining reconstructed samples for the current block by adding the residual samples to prediction samples obtained by performing the intra prediction. The prediction samples are obtained based on upper neighboring samples adjacent to the current block and a variation related to the neighboring samples, where the variation represents the difference between the upper-left neighboring sample adjacent to the current block and a left neighboring sample adjacent to the current block.
Description
This application is a divisional application of Chinese patent application No. 201280035395.8, entitled "Method and apparatus for intra prediction within a display screen", with an international filing date of May 14, 2012.
Technical field
The present invention relates to video processing technology and, more particularly, to a method of decoding a video signal.
Background technology
Recently, demand for high-resolution, high-quality images has increased in various application fields. As images gain higher resolution and higher quality, the amount of information about them also increases. Consequently, when video data is transmitted over media such as existing wired and wireless broadband lines or stored in conventional storage media, transmission and storage costs rise.

Therefore, strong video compression techniques can be used to effectively transmit, store or reproduce images with superior resolution and superior quality.
Content of the invention
Technical problem
An aspect of the present invention is to provide a method of performing effective intra prediction on a texture with directionality, in view of variations of the reference pixels of neighboring blocks.

Another aspect of the present invention is to provide a method of performing planar prediction in view of variations in the pixel values of blocks adjacent to a prediction block when intra prediction is performed.

Still another aspect of the present invention is to provide a method of generating a reference pixel based on an intra-mode neighboring block at the position of an inter-mode neighboring pixel, and using the reference pixel for intra prediction, when constrained intra prediction (CIP) is used.

Yet another aspect of the present invention is to provide a method of generating a reference pixel in view of variation in pixel value when the reference pixel is generated based on an intra-mode neighboring block at the position of an inter-mode neighboring pixel.
Technical scheme
An embodiment of the present invention provides an intra prediction method for an encoder. The method includes: generating reference pixels for intra prediction with respect to an input prediction unit, determining an intra mode for the prediction unit, generating a prediction block based on the reference pixels and the intra mode, and generating a residual block from the prediction unit and the prediction block. Here, at least one of the reference pixels and the pixels of the prediction block is predicted based on a base pixel, and the pixel value of the predicted pixel is the sum of the pixel value of the base pixel and a pixel value variation from the base pixel to the generated pixel.
A reference pixel of a neighboring block disposed at the top-left corner of the prediction block may be set as a first base pixel. A value obtained by applying to the first base pixel both the pixel value variation from the first base pixel to the lowest pixel among the reference pixels of a neighboring block disposed on the left boundary of the prediction block, and the pixel value variation from the first base pixel to the rightmost pixel among the reference pixels of a neighboring block disposed on the top boundary of the prediction block, may be set as the pixel value of a second base pixel, which is the diagonal pixel at the bottom-right corner of the prediction block. The pixel values of the diagonal pixels of the prediction block may be predicted from the first base pixel and the second base pixel.
Here, the non-diagonal pixels of the prediction block are predicted by interpolation or extrapolation using the diagonal pixels and the pixels of the neighboring blocks on the top boundary and/or the left boundary of the prediction block.
Further, the reference pixel of the neighboring block disposed at the top-left corner of the prediction block may be set as the base pixel, and a value obtained by applying to the base pixel both the pixel value variation from the base pixel to a neighboring pixel located in the same row as a prediction target pixel, among the reference pixels of the neighboring block disposed on the left boundary of the prediction block, and the pixel value variation from the base pixel to a neighboring pixel located in the same column as the prediction target pixel, among the reference pixels of the neighboring block disposed on the top boundary of the prediction block, may be predicted as the pixel value of the prediction target pixel.
In addition, among the pixels of the neighboring blocks disposed on the left boundary or the top boundary of the prediction block, the pixel located in the same row or column as the prediction target pixel may be set as the base pixel, and a value obtained by applying to the base pixel the pixel value variation from the base pixel to the prediction pixel may be predicted as the pixel value of the prediction target pixel.
Here, the prediction target pixel may be a diagonal pixel of the prediction block, and the non-diagonal pixels of the prediction block may be predicted by interpolation using the diagonal pixels and the pixels of the neighboring blocks.
The intra prediction method may further include: generating, when a block adjacent to the prediction unit is an inter-mode block, a reference pixel disposed on the boundary between the inter-mode block and the prediction unit. Here, among the pixels of an intra-mode block disposed on the left side or the lower side of the reference pixel, the pixel disposed on the boundary of the prediction unit may be set as a first base pixel; among the pixels of an intra-mode block disposed on the right side or the upper side of the reference pixel, the pixel disposed on the boundary of the prediction unit may be set as a second base pixel; and the reference pixel may be generated based on the distance from the first base pixel to the reference pixel and the distance from the second base pixel to the reference pixel.

Here, the pixel value of the first base pixel may be an average pixel value of the pixels disposed on the boundary of the prediction unit, among the pixels of the intra-mode block to which the first base pixel belongs, and the pixel value of the second base pixel may be an average pixel value of the pixels disposed on the boundary of the prediction unit, among the pixels of the intra-mode block to which the second base pixel belongs. Further, when an intra-mode block is disposed only on the left side or the lower side of the reference pixel, the pixel value of the first base pixel may be the pixel value of the reference pixel, and when an intra-mode block is disposed only on the right side or the upper side of the reference pixel, the pixel value of the second base pixel may be the pixel value of the reference pixel.
Another embodiment of the present invention provides an intra prediction method for a decoder. The method includes: entropy-decoding a received bitstream, generating reference pixels for intra prediction of a prediction unit, generating a prediction block from the reference pixels based on a prediction mode for the prediction unit, and reconstructing a picture from the prediction block and a residual block obtained by the entropy decoding. Here, at least one of the reference pixels and the pixels of the prediction block is predicted based on a base pixel, and the pixel value of the predicted pixel is the sum of the pixel value of the base pixel and a pixel value variation from the base pixel to the generated pixel.
A reference pixel of a neighboring block disposed at the top-left corner of the prediction block may be set as a first base pixel. A value obtained by applying to the base pixel both the pixel value variation from the first base pixel to the lowest pixel among the reference pixels of a neighboring block disposed on the left boundary of the prediction block, and the pixel value variation from the first base pixel to the rightmost pixel among the reference pixels of a neighboring block disposed on the top boundary of the prediction block, may be set as the pixel value of a second base pixel, which is the diagonal pixel at the bottom-right corner of the prediction block. The pixel values of the diagonal pixels of the prediction block may be predicted from the first base pixel and the second base pixel.
Here, the non-diagonal pixels of the prediction block may be predicted by interpolation or extrapolation using the diagonal pixels and the pixels of the neighboring blocks on the top boundary and/or the left boundary of the prediction block.
The reference pixel of the neighboring block disposed at the top-left corner of the prediction block may be set as the base pixel, and a value obtained by applying to the base pixel both the pixel value variation from the base pixel to a neighboring pixel located in the same row as the prediction target pixel, among the reference pixels of the neighboring block disposed on the left boundary of the prediction block, and the pixel value variation from the base pixel to a neighboring pixel located in the same column as the prediction target pixel, among the reference pixels of the neighboring block disposed on the top boundary of the prediction block, may be predicted as the pixel value of the prediction target pixel.
Further, among the pixels of the neighboring blocks disposed on the left boundary or the top boundary of the prediction block, the pixel located in the same row or column as the prediction target pixel may be set as the base pixel, and a value obtained by applying to the base pixel the pixel value variation from the base pixel to the prediction pixel may be predicted as the pixel value of the prediction target pixel.
Here, the prediction target pixel may be a diagonal pixel of the prediction block, and the non-diagonal pixels of the prediction block may be predicted by interpolation using the diagonal pixels and the pixels of the neighboring blocks.
The intra prediction may further include: generating, when a block adjacent to the prediction unit is an inter-mode block, a reference pixel disposed on the boundary between the inter-mode block and the prediction unit. Here, among the pixels of an intra-mode block disposed on the left side or the lower side of the reference pixel, the pixel disposed on the boundary of the prediction unit may be set as a first base pixel; among the pixels of an intra-mode block disposed on the right side or the upper side of the reference pixel, the pixel disposed on the boundary of the prediction unit may be set as a second base pixel; and the reference pixel may be generated based on the distance from the first base pixel to the reference pixel and the distance from the second base pixel to the reference pixel.
Here, the pixel value of the first base pixel may be an average pixel value of the pixels disposed on the boundary of the prediction unit, among the pixels of the intra-mode block to which the first base pixel belongs, and the pixel value of the second base pixel may be an average pixel value of the pixels disposed on the boundary of the prediction unit, among the pixels of the intra-mode block to which the second base pixel belongs. Further, when an intra-mode block is disposed only on the left side or the lower side of the reference pixel, the pixel value of the first base pixel may be the pixel value of the reference pixel, and when an intra-mode block is disposed only on the right side or the upper side of the reference pixel, the pixel value of the second base pixel may be the pixel value of the reference pixel.
The decoder may obtain, by the entropy decoding, an instruction to generate the pixels of the prediction block based on the base pixel. The decoder may also obtain, by the entropy decoding, an instruction to generate the reference pixels based on the base pixel.
An embodiment of the present invention provides a method of decoding a video signal. The method includes: obtaining residual samples for a current block from a bitstream; performing intra prediction for the current block based on neighboring samples adjacent to the current block; and obtaining reconstructed samples for the current block by adding the residual samples to prediction samples obtained by performing the intra prediction. Here, a prediction sample is obtained based on an upper neighboring sample adjacent to the current block and a variation related to the neighboring samples, where the variation represents the difference between the upper-left neighboring sample adjacent to the current block and a left neighboring sample adjacent to the current block.
Beneficial effect
As described above, according to the present invention, intra prediction of a texture with directionality can be effectively achieved in consideration of the variations of the reference pixels of neighboring blocks.

Further, planar prediction can be performed in consideration of the variations in the pixel values of the blocks adjacent to the prediction block, thereby improving prediction efficiency.

In addition, when constrained intra prediction (CIP) is used, a reference pixel is generated, in consideration of the variation in pixel value, based on an intra-mode neighboring block at the position of an inter-mode neighboring pixel, and is used for intra prediction, thereby improving prediction efficiency.
Brief description of the drawings
Fig. 1 is a block diagram showing the configuration of a video encoder according to an exemplary embodiment of the present invention.
Fig. 2 is a block diagram schematically showing the configuration of an intra prediction module according to an exemplary embodiment of the present invention.
Fig. 3 is a block diagram showing the configuration of a video decoder according to an exemplary embodiment of the present invention.
Fig. 4 schematically shows a planar prediction method.
Fig. 5 schematically shows an alternative planar prediction method.
Fig. 6 schematically shows predicting the diagonal pixels of a current prediction block first.
Fig. 7 schematically shows a method of obtaining the other pixel values in the prediction block based on the diagonal pixels.
Fig. 8 schematically shows a method of predicting a pixel value in view of a reference pixel value and a variation relative to the reference pixel.
Fig. 9 schematically shows a method of obtaining the diagonal pixels of a prediction block first and then obtaining the pixel values of the remaining pixels.
Fig. 10 schematically shows obtaining the diagonal pixels first and then obtaining pixels other than the diagonal pixels by the same method used for the diagonal pixels.
Fig. 11 schematically shows a CIP method.
Fig. 12 schematically shows an alternative CIP method.
Fig. 13 schematically shows that a system according to the present invention performs CIP in view of variations in pixel value.
Fig. 14 is a flowchart schematically showing the operation of an encoder in a system according to the present invention.
Fig. 15 shows prediction directions of intra prediction modes.
Fig. 16 is a flowchart schematically showing the operation of a decoder in a system according to the present invention.
Specific embodiment
Although the elements shown in the figures are illustrated separately to describe the different features and functions of the video encoder/decoder, such a configuration does not mean that each element is constituted by separate hardware or software components. That is, the elements are arranged separately for convenience of description, and at least two elements may be combined into a single element, or a single element may be divided into a plurality of elements to perform their functions. Note that embodiments in which some elements are merged into one combined element and/or an element is divided into multiple separate elements are included in the scope of the present invention without departing from the essence of the invention.

Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings. Like reference numerals in the drawings refer to like elements, and redundant descriptions of like elements will be omitted herein.
Fig. 1 is a block diagram showing the configuration of a video encoder according to an exemplary embodiment of the present invention. Referring to Fig. 1, the video encoder includes a picture partitioning module 110, an inter prediction module 120, an intra prediction module 125, a transform module 130, a quantization module 135, a dequantization module 140, an inverse transform module 145, a deblocking filter 150, a memory 160, a rearrangement module 165, and an entropy encoding module 170.

The picture partitioning module 110 may receive an input of a current picture and divide the picture into at least one coding unit. A coding unit is a unit of encoding performed by the video encoder and may also be referred to as a CU. A coding unit may be recursively subdivided based on depth in a quadtree structure. A coding unit having a maximum size is referred to as a largest coding unit (LCU), and a coding unit having a minimum size is referred to as a smallest coding unit (SCU). A coding unit may have a size of 8 × 8, 16 × 16, 32 × 32 or 64 × 64. The picture partitioning module 110 may partition or divide a coding unit to generate prediction units and transform units. A prediction unit may also be referred to as a PU, and a transform unit may also be referred to as a TU.
In an inter prediction mode, the inter prediction module 120 may perform motion estimation (ME) and motion compensation (MC). The inter prediction module 120 generates a prediction block based on information on at least one picture among the pictures preceding and following the current picture, which may be referred to as inter prediction.
The inter prediction module 120 is provided with a partitioned prediction target block and with at least one reference block stored in the memory 160. The inter prediction module 120 performs motion estimation using the prediction target block and the reference block. The inter prediction module 120 generates motion information, including a motion vector (MV), a reference block index, and a prediction mode, as a result of the motion estimation. Further, the inter prediction module 120 performs motion compensation using the motion information and the reference block. Here, the inter prediction module 120 generates from the reference block, and outputs, a prediction block corresponding to the input block.
The motion information is entropy-coded to form a compressed bitstream, which is transmitted from the video encoder to a video decoder.
In an intra prediction mode, the intra prediction module 125 may generate a prediction block based on information on pixels within the current picture. Intra prediction is also referred to as intra-frame prediction. In the intra prediction mode, a prediction target block and a reconstructed block reconstructed through encoding and decoding are input to the intra prediction module 125. Here, the reconstructed block is a picture that has not yet been subjected to the deblocking filter. The reconstructed block may be a previous prediction block.
Fig. 2 is a block diagram schematically illustrating a configuration of the intra prediction module according to an exemplary embodiment of the present invention. Referring to Fig. 2, the intra prediction module includes a reference pixel generation module 210, an intra prediction mode determination module 220, and a prediction block generation module 230.
The reference pixel generation module 210 generates reference pixels needed for intra prediction. Pixels in the rightmost vertical line of a left block adjacent to the prediction target block and pixels in the lowest horizontal line of an upper block adjacent to the prediction target block are used to generate the reference pixels. For example, when the prediction target block has a size of N, 2N pixels in each of the left and upper directions are used as reference pixels. The reference pixels may be used as they are, or via adaptive intra smoothing (AIS) filtering. When the reference pixels are subjected to AIS filtering, information on the AIS filtering is signaled.
The intra prediction mode determination module 220 receives inputs of the prediction target block and the reconstructed block. The intra prediction mode determination module 220 selects, from among a plurality of prediction modes, a mode that minimizes the amount of information to be coded, using the input picture, and outputs information on the prediction mode. Here, a preset cost function and a Hadamard transform may be used.
The prediction block generation module 230 receives inputs of the reference pixels and the information on the prediction mode. The prediction block generation module 230 spatially predicts and compensates the pixel values of the prediction target block using the pixel values of the reference pixels and the information on the prediction mode, thereby generating a prediction block.
The information on the prediction mode is entropy-coded together with the video data to form a compressed bitstream, which is transmitted from the video encoder to the video decoder. The video decoder uses the information on the prediction mode when generating an intra prediction block.
Referring back to Fig. 1, a differential block is generated from the difference between the prediction target block and the prediction block generated in the inter or intra prediction mode, and is input to the transform module 130. The transform module 130 transforms the differential block in transform units to generate transform coefficients.
A transform block with a transform unit has a quadtree structure within maximum and minimum sizes, and is thus not limited to a predetermined size. Each transform block has a flag indicating whether the current block is partitioned into sub-blocks; when the flag is 1, the current transform block may be split into four sub-blocks. A discrete cosine transform (DCT) may be used for the transform.
The quantization module 135 may quantize the values transformed by the transform module 130. A quantization coefficient may change based on the block or the importance of the picture. The quantized transform coefficients may be provided to the rearrangement module 165 and the dequantization module 140.
The rearrangement module 165 may change the two-dimensional (2D) block of transform coefficients into a one-dimensional (1D) vector of transform coefficients by scanning, so as to enhance efficiency in entropy coding. The rearrangement module 165 may change the scanning order based on stochastic statistics, so as to enhance entropy coding efficiency.
The entropy coding module 170 entropy-codes the values obtained by the rearrangement module 165, and the coded values are formed into a compressed bitstream, which is stored or transmitted through a network abstraction layer (NAL).
The dequantization module 140 receives and dequantizes the transform coefficients quantized by the quantization module 135, and the inverse transform module 145 inverse-transforms the transform coefficients, thereby generating a reconstructed differential block. The reconstructed differential block is merged with the prediction block generated by the inter prediction module 120 or the intra prediction module 125 to generate a reconstructed block. The reconstructed block is provided to the intra prediction module 125 and the deblocking filter 150.
The deblocking filter 150 filters the reconstructed block to remove distortion on boundaries between blocks that occurs in encoding and decoding, and provides the filtered result to an adaptive loop filter (ALF) 155.
The ALF 155 performs filtering to minimize an error between the prediction target block and the final reconstructed block. The ALF 155 performs the filtering based on a value obtained by comparing the reconstructed block filtered by the deblocking filter 150 with the current prediction target block, and filter coefficient information on the ALF 155 is loaded into a slice header and transmitted from the encoder to the decoder.
The memory 160 may store the final reconstructed block obtained through the ALF 155, and the stored (final) reconstructed block may be provided to the inter prediction module 120 to perform inter prediction.
Fig. 3 is a block diagram illustrating a configuration of a video decoder according to an exemplary embodiment of the present invention. Referring to Fig. 3, the video decoder includes an entropy decoding module 310, a rearrangement module 315, a dequantization module 320, an inverse transform module 325, an inter prediction module 330, an intra prediction module 335, a deblocking filter 340, an ALF 345, and a memory 350.
The entropy decoding module 310 receives a compressed bitstream from an NAL. The entropy decoding module 310 entropy-decodes the received bitstream, and also entropy-decodes the prediction mode and motion vector information if the bitstream includes them. The entropy-decoded transform coefficients or differential signals are provided to the rearrangement module 315. The rearrangement module 315 inverse-scans the transform coefficients or differential signals to generate a 2D block of transform coefficients.
The dequantization module 320 receives and dequantizes the entropy-decoded and rearranged transform coefficients. The inverse transform module 325 inverse-transforms the dequantized transform coefficients to generate a differential block.
The differential block may be merged with the prediction block generated by the inter prediction module 330 or the intra prediction module 335 to generate a reconstructed block. The reconstructed block is provided to the intra prediction module 335 and the deblocking filter 340. The inter prediction module 330 and the intra prediction module 335 may perform the same operations as the inter prediction module 120 and the intra prediction module 125 of the video encoder.
The deblocking filter 340 filters the reconstructed block to remove distortion on boundaries between blocks that occurs in encoding and decoding, and provides the filtered result to the ALF 345. The ALF 345 performs filtering to minimize an error between the prediction target block and the final reconstructed block. The memory 350 may store the final reconstructed block obtained through the ALF 345, and the stored (final) reconstructed block may be provided to the inter prediction module 330 to perform inter prediction.
Meanwhile, in a region where a change of texture is not notable, for example, a monotonous background such as the sky or the ocean, planar intra prediction is used to further enhance coding efficiency.
Intra prediction is classified into directional prediction, DC prediction, and planar prediction, wherein planar prediction may be an extended concept of DC prediction. Although planar prediction may be broadly included in DC prediction, planar prediction may cover prediction methods that DC prediction cannot deal with. For example, DC prediction is preferable for a uniform texture, whereas planar prediction is effective for block prediction in pixel values having directionality.
This specification describes a method for improving planar prediction efficiency with respect to a texture having directionality, using changes in the pixel values of the reference pixels of neighboring blocks.
Fig. 4 schematically illustrates a planar prediction method.
Referring to Fig. 4(A), a pixel value 425 of a pixel in the bottom-right corner of a current block 420 is predicted. The pixel value 425 of the pixel in the bottom-right corner of the current block may be predicted as a DC value.
Referring to Fig. 4(B), pixel values of the pixels located on the right boundary of the current block and pixel values of the pixels located on the bottom boundary of the current block are predicted. For example, a pixel value 445 located on the right boundary of the current block may be predicted by linear interpolation of the DC value 425 and a pixel value 450 of the upper block. Further, a pixel value 435 located on the bottom boundary of the current block may be predicted by linear interpolation of the DC value 425 and a pixel value 430 of the left block.
Referring to Fig. 4(C), the pixel values of the remaining pixels, other than the pixel in the bottom-right corner, the pixels on the right boundary, and the pixels on the bottom boundary of the current block, may be predicted by bilinear interpolation using the pixel values of the upper and left blocks and the already-predicted pixel values in the current block. For example, a pixel value 475 in the current block may be predicted by interpolation using a pixel value 460 of the upper block, a pixel value 455 of the left block, the predicted pixel value 445 on the right boundary of the current block, and the predicted pixel value 435 on the bottom boundary of the current block.
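The three stages of Fig. 4(A) through 4(C) can be sketched as follows. This is a minimal illustration: the function name, the exact interpolation weights, and the integer rounding are our assumptions, not the patent's normative arithmetic.

```python
def planar_predict(top, left, n):
    """Sketch of the Fig. 4 planar prediction for an n x n block.

    top  : n reference pixels of the upper block (the row above the block)
    left : n reference pixels of the left block (the column to the left)
    """
    pred = [[0] * n for _ in range(n)]
    # Fig. 4(A): the bottom-right pixel is predicted as a DC value.
    dc = (sum(top) + sum(left) + n) // (2 * n)
    pred[n - 1][n - 1] = dc
    # Fig. 4(B): right column and bottom row by linear interpolation
    # between the facing reference pixel and the DC corner.
    for i in range(n - 1):
        w = i + 1
        pred[i][n - 1] = ((n - w) * top[n - 1] + w * dc) // n   # right column
        pred[n - 1][i] = ((n - w) * left[n - 1] + w * dc) // n  # bottom row
    # Fig. 4(C): interior pixels by bilinear interpolation of the upper/left
    # references and the predicted right/bottom boundary pixels.
    for i in range(n - 1):
        for j in range(n - 1):
            h = (n - 1 - j) * left[i] + (j + 1) * pred[i][n - 1]
            v = (n - 1 - i) * top[j] + (i + 1) * pred[n - 1][j]
            pred[i][j] = (h + v + n) // (2 * n)
    return pred
```

On a flat region (constant references) the sketch reproduces the references exactly, which matches the stated use of planar prediction for monotonous backgrounds.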
Referring to Fig. 4(D), the prediction samples (predicted samples) obtained via the foregoing process may be refined. For example, a pixel value X 485 in the current block may be refined using an upper sample value T 480 and a left sample value L 490. Specifically, X' refined from X may be obtained by X' = {(X << 1) + L + T + 1} >> 2. Here, "x << y" indicates that a two's complement integer representation of x is arithmetically shifted to the left by y binary digits, and "x >> y" indicates that the two's complement integer representation of x is arithmetically shifted to the right by y binary digits.
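The refinement of Fig. 4(D) maps directly to integer arithmetic; a minimal sketch (the function name is ours):

```python
def refine_sample(x, left, top):
    """Fig. 4(D) refinement: X' = ((X << 1) + L + T + 1) >> 2,
    i.e. a rounded weighted average of X (weight 2) and L, T (weight 1)."""
    return ((x << 1) + left + top + 1) >> 2
```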
Fig. 5 schematically illustrates an alternative planar prediction method.
In the method for Fig. 5, the pixel value of the pixel diagonally positioned in current pixel is predicted first, and using pre-
The pixel value measured predicts the pixel value of the rest of pixels in current block.For the ease of description, below, constitute the pixel of the block
Central is referred to as diagonal pixel from the pixel for diagonally positioning left to bottom right.
Referring to Fig. 5(A), the pixel values of diagonal pixels 540 of a current block 510 are predicted using a pixel value 520 of an upper reference block and a pixel value 530 of a left reference block. For example, the pixel value of a diagonal pixel P in the current block may be obtained, using the pixel value of a pixel AboveRef located on the boundary between the current block and the upper block among the pixels of the upper block and the pixel value of a pixel LeftRef located on the boundary between the current block and the left block among the pixels of the left block, by P = (LeftRef + AboveRef + 1) >> 1.
Referring to Fig. 5(B), the pixel values of the pixels other than the diagonal pixels 540 in the current block 510 may be obtained by linear interpolation using the pixel values obtained in Fig. 5(A) and the pixel values of the pixels of the upper and left blocks on the boundaries. For example, P1 may be obtained, using the pixel AboveRef of the upper block and the obtained diagonal pixel P, by P1 = (AboveRef*d2 + P*d1)/(d1 + d2). Further, P2 may be obtained by P2 = (LeftRef*d3 + P*d4)/(d3 + d4).
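The two stages of Fig. 5 can be sketched as below. The distance convention (d measured from the reference line versus from the diagonal) and the floor rounding are assumptions of this sketch.

```python
def fig5_predict(above_ref, left_ref, n):
    """Sketch of the Fig. 5 method: the top-left-to-bottom-right diagonal is
    predicted first as P = (LeftRef + AboveRef + 1) >> 1, then the remaining
    pixels follow P1 = (AboveRef*d2 + P*d1)/(d1 + d2) and its left-block
    counterpart. above_ref[j] / left_ref[i] are the boundary pixels of the
    upper / left blocks (0-based arrays)."""
    pred = [[0] * n for _ in range(n)]
    for i in range(n):
        pred[i][i] = (left_ref[i] + above_ref[i] + 1) >> 1
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if j > i:   # above the diagonal: between AboveRef[j] and P[j][j]
                d1, d2 = i + 1, j - i
                pred[i][j] = (above_ref[j] * d2 + pred[j][j] * d1) // (d1 + d2)
            else:       # below the diagonal: between LeftRef[i] and P[i][i]
                d1, d2 = j + 1, i - j
                pred[i][j] = (left_ref[i] * d2 + pred[i][i] * d1) // (d1 + d2)
    return pred
```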
Meanwhile, the planar prediction methods shown in Fig. 4 and Fig. 5 are effective for a non-directional uniform texture, but these methods may have reduced prediction efficiency in the case of a texture having directionality, for example, luminance pixels whose brightness substantially changes in one direction, such as the horizontal direction, while hardly changing in another direction, such as the vertical direction.
Accordingly, planar intra prediction considering changes in pixel value may be needed. Planar intra prediction according to the present invention selects or predicts a base pixel value and applies changes in pixel value between the base pixel and a target pixel to the base pixel value, thereby predicting the pixel value of the target pixel.
Hereinafter, examples of the invention will be described with reference to the drawings.
Example 1
Fig. 6 schematically illustrates that the diagonal pixels Pii of a current prediction block are predicted first. Although Fig. 6 illustrates an 8×8 prediction block for convenience of description, the present invention may also be applied to an N×N prediction block, without being limited to the 8×8 prediction block.
In Example 1 shown in Fig. 6, the diagonal pixels of the current prediction block are first predicted based on the reference pixels (Ri0 and/or R0j, with 0 ≤ i, j ≤ 8 in the case of an 8×8 prediction block) of the reference blocks adjacent to the current prediction block.
That is, after the diagonal pixels Pii are obtained, the other pixel values in the prediction block may be obtained by interpolation or extrapolation using the reference pixel values (Rij) of the neighboring blocks and the Pii.
Fig. 7 schematically illustrates a method of obtaining the other pixel values in the prediction block based on the diagonal pixels.
In the present invention, planar prediction is performed in consideration of changes in pixel value. For example, as shown in Fig. 7(A), when the reference pixel values increase both in the x direction (rightward) and in the y direction (downward), the pixel values in the prediction block are also more likely to increase toward the bottom-right. In this case, the pixel value of P88 in the bottom-right corner of the prediction block may be predicted first, and the other pixels may be predicted based on the pixel value of P88.
To predict the value of P88, the pixel value of the reference pixel R00 in the top-left corner of the current prediction block is defined as the pixel value of a base pixel, and the change in pixel value from the base pixel to the prediction target pixel P88 in the prediction block may be applied to the pixel value of the base pixel. For example, the pixel value of the target pixel P88 may be obtained by Equation 1. For convenience of description, the Rij and Pij illustrated in the drawings and the specification are presented as Ri,j and Pi,j.
[Equation 1]
When P88 is obtained, the other diagonal pixels Pii may be obtained by Equation 2.
[Equation 2]
Here, since the present example illustrates an 8×8 prediction block, i may be 1, 2, ..., 8. Although Example 1 illustrates an 8×8 prediction block for convenience of description, in an N×N prediction block, Pii may be obtained as Pii = R00 + (i/N)P88.
As shown in Fig. 7(B), when the reference pixel values decrease both in the x direction (rightward) and in the y direction (downward), the pixel value of P88 in the bottom-right corner of the prediction block may likewise be obtained in consideration of the change toward decreasing pixel values, and the other pixel values may be predicted based on the pixel value of P88. In this case, P88 may be obtained by Equation 3.
[Equation 3]
When P88 is obtained, the other diagonal pixels in the prediction block may be obtained by Equation 4.
[Equation 4]
Here, i may be 1, 2, ..., 8.
When the reference pixel values increase in the top-right direction, as shown in Fig. 7(C), unlike in Fig. 7(A) and Fig. 7(B), the diagonal pixels positioned from the bottom-left to the top-right of the prediction block are obtained first, based on the change in pixel value. For example, the pixel value of P81 in the bottom-left corner of the prediction block is obtained, and the remaining pixel values may be predicted based on the pixel value of P81. In this case, P81 may be obtained by Equation 5.
[Equation 5]
When P81 is obtained, the remaining diagonal pixels (bottom-left to top-right) in the prediction block may be obtained by Equation 6.
[Equation 6]
Here, i may be 1, 2, ..., 8.
Also, as shown in Fig. 7(D), when the reference pixel values increase in the bottom-left direction, the diagonal pixels positioned from the bottom-left to the top-right of the prediction block may be obtained first, based on the change in pixel value. For example, the pixel value of P81 in the bottom-left corner of the prediction block is obtained, and the remaining pixel values may be predicted based on the pixel value of P81. In this case, P81 is obtained by Equation 7.
[Equation 7]
When P81 is obtained, the remaining diagonal pixels (bottom-left to top-right) in the prediction block may be obtained by Equation 8.
[Equation 8]
Here, i may be 1, 2, ..., 8.
In view of computational burden, an approximation of the square-root computation used to obtain the diagonal pixels may be considered, as in Equation 9.
[Equation 9]
Subsequently, the other pixel values in the prediction block may be obtained by interpolation or extrapolation using the predicted values of the diagonal pixels, the upper reference pixel values, and the left reference pixel values.
In Fig. 7(A) and Fig. 7(B), the pixels Pij in the prediction block may be obtained by interpolation using the diagonal pixels Pii and the reference pixels R of the neighboring blocks. Here, the interpolation shown in Equation 10 may be used.
[Equation 10]
Pi,j = (R0,j × d2 + Pi,i × d1) / (d1 + d2)
or
Pi,j = (Ri,0 × d2 + Pi,i × d1) / (d1 + d2)
Here, d1 is the distance from the pixel R0j or Rj0 of the neighboring block used for the interpolation to the prediction target pixel Pij, and d2 is the distance from the diagonal pixel Pii used for the interpolation to the prediction target pixel Pij.
Further, in Fig. 7(C) and Fig. 7(D), the pixels obtained by interpolation among the pixels in the prediction block may be obtained by Equation 11.
[Equation 11]
Pi,j = (Ri,0 × d2 + Pi,i × d1) / (d1 + d2)
or
Pi,j = (Ri,0 × d2 + Pi,9-i × d1) / (d1 + d2)
Here, i + j < 9, and d1 is the distance from the pixel R0j or Rj0 of the neighboring block used for the interpolation to the prediction target pixel Pij, and d2 is the distance from the diagonal pixel Pii used for the interpolation to the prediction target pixel Pij. Here, although Equation 11 is used for the interpolation to obtain the pixels Pij of the prediction block, various interpolation methods may be used in the present invention, without being limited thereto.
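The distance-weighted interpolation of Equations 10 and 11 reduces to one helper; floor division and the function name are our assumptions.

```python
def interpolate_pixel(ref, diag, d1, d2):
    """Equation 10/11 sketch: Pij = (ref*d2 + diag*d1) // (d1 + d2),
    where d1 is the distance from the boundary reference pixel to the
    target and d2 the distance from the diagonal pixel to the target."""
    return (ref * d2 + diag * d1) // (d1 + d2)
```

Note that the weights are swapped relative to the distances: the reference pixel is weighted by d2 and the diagonal pixel by d1, so the nearer anchor contributes more.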
Meanwhile, in Fig. 7(C) and Fig. 7(D), there exist, among the pixels of the prediction block, pixels Pe obtained by extrapolation. Here, the extrapolation shown in Equation 12 may be used to obtain the pixels in the prediction block.
[Equation 12]
or
In this case, i + j > 9 and P is the diagonal pixel used for the extrapolation. Further, as described above, d1 and d2 are the distance from the reference pixel to the prediction target pixel Pij and the distance from the pixel Pii to the prediction target pixel Pij, respectively.
Example 2
Fig. 8 schematically illustrates another method of predicting a pixel value in consideration of a base pixel value and a change relative to the base pixel. Although Fig. 8 illustrates an 8×8 prediction block for convenience of description, the present invention may also be applied to an N×N prediction block, without being limited to the 8×8 prediction block.
Fig. 8 illustrates the reference pixel R00 located at the top-left corner of the prediction block as the base pixel. In Example 2, a prediction target pixel Pij is obtained by applying the vertical and horizontal changes relative to the reference pixel to the base pixel value.
For example, the target pixel Pij is obtained by Equation 13.
[Equation 13]
Pij = R00 + Δx + Δy
Here, in the case of an 8×8 prediction block, Δy = Ri0 − R00, Δx = R0j − R00, and 1 ≤ i, j ≤ 8.
For example, referring to Fig. 8, the pixel P33 is obtained by P33 = R00 + Δx + Δy according to Equation 13. Here, Δx and Δy are the pixel value changes in the x and y directions from the base pixel R00 to P33.
Alternatively, referring to Fig. 8, the pixel P76 is obtained by P76 = R00 + Δx' + Δy' according to Equation 13. Here, Δx' and Δy' are the pixel value changes in the x and y directions from the base pixel R00 to P76.
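Equation 13 applied to a whole block can be sketched as follows; the 0-based array convention is ours.

```python
def predict_block_eq13(top_ref, left_ref, r00, n):
    """Example 2 sketch (Equation 13): Pij = R00 + Δx + Δy with
    Δx = R0j − R00 and Δy = Ri0 − R00.
    top_ref[j] holds R0,(j+1) and left_ref[i] holds R(i+1),0."""
    return [[r00 + (top_ref[j] - r00) + (left_ref[i] - r00)
             for j in range(n)]
            for i in range(n)]
```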
Example 3
Fig. 9 schematically illustrates another method of obtaining the diagonal pixels of the prediction block first and then obtaining the pixel values of the remaining pixels.
Although Fig. 5 illustrates that the diagonal pixels are obtained from the average of the two pixels in the horizontal/vertical directions of the blocks adjacent to the current prediction block, Example 3 shown in Fig. 9 obtains the diagonal pixels in consideration of changes.
Referring to Fig. 9(A), the diagonal pixels of the prediction block are predicted using the pixel values of the neighboring blocks located on the upper and/or left boundaries of the prediction block. For example, the diagonal pixels Pii are predicted by Equation 14.
[Equation 14]
Pi,i = R0,i + Δy
or
Pi,i = Ri,0 + Δx
For example, referring to Fig. 9(A), P33 may be predicted by P33 = R03 + Δy or P33 = R30 + Δx according to Equation 14. Δx and Δy are the pixel value changes in the x direction from the base pixel R30 to P33 and in the y direction from the base pixel R03 to P33, respectively.
Referring to Fig. 9(B), the pixels Pij other than the diagonal pixels in the current block may be predicted by linear interpolation using the reference pixels R00, R10 to R80, and R01 to R08 of the neighboring blocks on the upper and left boundaries of the current block and the predicted values of the diagonal pixels.
For example, the pixel value Pij may be predicted by Equation 15.
[Equation 15]
or
d1 is the distance from the pixel R0j or Ri0 of the neighboring block used for the interpolation to the prediction target pixel Pij, and d2 is the distance from the diagonal pixel Pii used for the interpolation to the prediction target pixel Pij.
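Equation 14 can be sketched as below. The choice of Δ (taken by analogy with Example 2, i.e. Δy = Ri0 − R00 and Δx = R0i − R00) is an assumption of this sketch; under it, the two alternatives coincide.

```python
def diagonal_pixel_eq14(r0i, ri0, r00, vertical=True):
    """Equation 14 sketch: Pii = R0,i + Δy or Pii = Ri,0 + Δx,
    with Δy = Ri0 − R00 and Δx = R0i − R00 assumed as in Example 2.
    Both branches then reduce to R0i + Ri0 − R00."""
    if vertical:
        return r0i + (ri0 - r00)   # R0,i + Δy
    return ri0 + (r0i - r00)       # Ri,0 + Δx
```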
Example 4
Figure 10 schematically illustrates a method of obtaining the diagonal pixels first and obtaining the pixels other than the diagonal pixels in the same manner as used for the diagonal pixels.
In Figure 10, the diagonal pixels may be predicted in the same manner as illustrated in Fig. 9. Thus, referring to Figure 10(A), the diagonal pixel P33 of the current prediction block may be predicted by P33 = R03 + Δy or P33 = R30 + Δx.
Subsequently, the pixels Pij other than the diagonal pixels in the current block may be predicted by linear interpolation using the reference pixels R00, R10 to R80, and R01 to R08 of the neighboring blocks on the upper and left boundaries of the current block and the predicted values of the diagonal pixels.
Here, the same method as used for obtaining the diagonal pixels may be employed. For example, a pixel Pij may be predicted by Equation 16.
[Equation 16]
Pij = R0j + Δy
or
Pij = Ri0 + Δx
Here, in the case of an 8×8 prediction block, Δy = Ri0 − R00, Δx = R0j − R00, and 1 ≤ i, j ≤ 8.
For example, referring to Figure 10, P37 may be obtained by P37 = R07 + Δy or P37 = R70 + Δx according to Equation 16.
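Under the stated definitions of Δx and Δy, the two forms of Equation 16 agree for every pixel of the block; a small numeric check under hypothetical reference values (the ramps below are illustrative only):

```python
# Hypothetical boundary references; real values come from the
# reconstructed neighboring blocks.
R00 = 100
R0 = {j: 100 + 2 * j for j in range(1, 9)}   # R0j, upper reference row
Rr = {i: 100 + 3 * i for i in range(1, 9)}   # Ri0, left reference column

# Equation 16 in its two forms, for every pixel of the 8x8 block.
P_a = {(i, j): R0[j] + (Rr[i] - R00) for i in range(1, 9) for j in range(1, 9)}
P_b = {(i, j): Rr[i] + (R0[j] - R00) for i in range(1, 9) for j in range(1, 9)}
assert P_a == P_b   # both forms reduce to R0j + Ri0 - R00
```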
Meanwhile, accumulation over a long time of small errors caused by the integer arithmetic performed by the encoder or decoder may cause a serious error. In addition, when a transmission error occurs in a block adjacent to a current block, a mismatch or error spreading occurs between the encoder and the decoder. For example, when an error occurs in a neighboring block, the pixel values on a boundary of the neighboring block change. In this case, when the decoder uses a pixel with a changed pixel value as a reference pixel, the error spreads to the current block. Thus, a tool to prevent such a problem is needed, for example, an encoding tool such as constrained intra prediction (CIP).
Figure 11 schematically illustrates a CIP method.
In the method of Figure 11, if any one inter-prediction-mode block is adjacent to a current macroblock T, only a DC intra prediction mode is used and the DC prediction value is fixed to 128.
Here, the pixel values of the blocks predicted in the inter prediction mode among the neighboring blocks are not used as reference pixel values. Thus, in this method, the DC prediction mode is compulsorily used, excluding even available information, for example, neighboring intra-prediction-mode pixels.
Figure 12 schematically illustrates an alternative CIP method.
In the method of Figure 12, the pixel values of the blocks predicted in the intra prediction mode among the neighboring blocks are used as reference pixel values, and the pixel values of the blocks predicted in the inter prediction mode are derived using the neighboring intra-prediction-mode blocks. Thus, not only the DC mode but also other intra prediction modes may be used.
Referring to Figure 12, among the blocks adjacent to the current prediction block T, the pixel values 1210, 1220, and 1230 of the blocks A, B, D, E, F, H, and I predicted in the inter prediction mode are derived using the pixels of the blocks predicted in the intra prediction mode.
For example, when intra-prediction-mode predicted pixels are present on both the right and left sides of a target inter prediction sample, the pixel value PT of the block predicted in the inter prediction mode is obtained by Equation 17.
[Equation 17]
PT = (PLB + PRA + 1) >> 1
Here, PT is the target inter prediction sample, PLB is a left or lower intra prediction sample, and PRA is a right or upper intra prediction sample. Further, when an intra prediction sample is present on only one side of the target inter prediction sample, the pixel value PT of the block predicted in the inter prediction mode is obtained by Equation 18.
[Equation 18]
PT = PRA or PT = PLB
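Equations 17 and 18 can be sketched as one helper; the function name and the None-based availability convention are ours.

```python
def cip_reference_sample(p_lb=None, p_ra=None):
    """Equations 17/18 sketch: derive an inter-coded neighbor's sample from
    the surrounding intra samples. p_lb / p_ra are the left-or-lower and
    right-or-upper intra prediction samples (None when unavailable)."""
    if p_lb is not None and p_ra is not None:
        return (p_lb + p_ra + 1) >> 1   # Equation 17: rounded average
    if p_ra is not None:
        return p_ra                      # Equation 18: copy available side
    return p_lb                          # Equation 18
```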
The method of Figure 12 makes more appropriate use of the intra prediction mode than the method of Figure 11, but it uses the average of the available intra-prediction-mode pixel values, or an available intra-prediction-mode pixel value itself, as the pixel value of the neighboring block predicted in the inter prediction mode, without considering the change in pixel value.
Accordingly, a CIP method that considers the change in pixel value is needed.
Example 5
Figure 13 schematically illustrates that the system according to the present invention performs CIP in consideration of changes in pixel value.
The method of Figure 13, which uses the pixel value changes of both pixels used for interpolation, achieves a more accurate prediction of the target pixel value than the method of Figure 12, which uses the average of the two pixel values as the pixel value to be obtained. For example, a target pixel PT among the pixel values 1310, 1320, and 1330 to be obtained may be obtained by Equation 19.
[Equation 19]
PT = (PLB × d2 + PRA × d1) / (d1 + d2)
Here, PT is the target prediction sample, PLB is the left or lower intra prediction sample, and PRA is the right or upper intra prediction sample. Further, as shown in Figure 13, d1 is the distance from PLB to PT, and d2 is the distance from PRA to PT.
For example, referring to Figure 13, PT1 may be obtained by (PLB1 × d21 + PRA1 × d11)/(d11 + d21), and PT2 may be obtained by (PLB2 × d22 + PRA2 × d12)/(d12 + d22).
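Equation 19 can be sketched as below; the rounding offset is an assumption of this sketch (the example above uses plain division).

```python
def cip_reference_sample_weighted(p_lb, p_ra, d1, d2):
    """Equation 19 sketch: PT = (PLB*d2 + PRA*d1)/(d1 + d2), where d1 is
    the distance PLB -> PT and d2 the distance PRA -> PT, so the nearer
    intra sample gets the larger weight."""
    return (p_lb * d2 + p_ra * d1 + (d1 + d2) // 2) // (d1 + d2)
```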
If the intra prediction samples to be used for the interpolation are present on only one side among the right and left sides, or among the upper and lower sides, of the target prediction sample PT, then PT = PLB or PT = PRA. Further, if there is no block predicted in the intra prediction mode adjacent to the target prediction block T, a pixel value at the same position as in the previous picture may be copied for use as a reference pixel value.
Average values of the intra pixels on the boundary may be used as the PLB or PRA values. For example, in Figure 13, when PT is located in the lower pixel row 1320 of the E block or the D block, the average value of the four lowest pixels of the intra-prediction-mode C block may be used as PRA, and the average value of the eight rightmost pixels of the G block may be used as PLB. In this case, the reference point of d1 is the top pixel among the rightmost pixels of the G block, and the reference point of d2 is the leftmost pixel among the lowest pixels of the C block.
Further, since the linear interpolation provides a smoothing effect on the boundary pixels, adaptive intra smoothing (AIS) may be turned off. Here, in the DC prediction mode, filtering on the boundary pixels of the prediction block may be turned on.
Figure 14 is a flowchart schematically illustrating an operation of the encoder in the system according to the present invention.
Referring to Figure 14, a new prediction unit of a current picture is input (S1410). The prediction unit (PU) may be a basic unit for intra prediction and inter prediction. The prediction unit may be a block smaller than a coding unit (CU), and may be rectangular, not necessarily square. Intra prediction of the prediction unit is basically performed by a 2N×2N or N×N block.
Subsequently, the reference pixels needed for intra prediction are derived (S1420). Pixels in the rightmost vertical line of a left block adjacent to the current prediction block and pixels in the lowest horizontal line of an upper block adjacent to the current prediction block are used to generate the reference pixels. When the prediction block has a size of N, the 2N pixels of each of the left and upper blocks are used as reference pixels.
The pixels in the rightmost vertical line of the left block adjacent to the current prediction block and the pixels in the lowermost horizontal line of the upper block adjacent to the current prediction block may be used as reference pixels as they are, or after smoothing. When smoothing is applied, smoothing information may also be signaled to the decoder. For example, an AIS filter may be employed for the smoothing, with filter coefficients [1, 2, 1] or [1, 1, 4, 1, 1]. Of these two sets of coefficients, the latter can provide a sharper boundary. As mentioned above, information including whether a filter is used, the filter type and the filter coefficients may be signaled to the decoder.
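A minimal sketch of such an AIS-style smoothing of a reference line, assuming integer arithmetic with rounding and leaving the line ends unfiltered (an assumption; this excerpt does not specify edge handling):

```python
def ais_smooth(ref, coeffs=(1, 2, 1)):
    """Apply a symmetric smoothing filter to a reference pixel line.
    Pixels too close to the ends for a full filter window are left unfiltered."""
    total = sum(coeffs)
    half = len(coeffs) // 2
    out = list(ref)
    for i in range(half, len(ref) - half):
        acc = sum(c * ref[i - half + k] for k, c in enumerate(coeffs))
        out[i] = (acc + total // 2) // total  # integer rounding
    return out

smoothed3 = ais_smooth([10, 20, 40, 20, 10])                   # 3-tap [1,2,1]
smoothed5 = ais_smooth([10, 20, 40, 20, 10], (1, 1, 4, 1, 1))  # 5-tap [1,1,4,1,1]
```

Note how the 5-tap filter with the large center weight keeps the peak closer to its original value, matching the remark that it preserves a sharper boundary.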
Meanwhile, when CIP is used to generate the reference pixels, the CIP_flag value is set to 1. When CIP is applied, only pixels of adjacent blocks encoded in an intra prediction mode are used as reference pixels, and pixels of adjacent blocks encoded in an inter prediction mode are not used as reference pixels. In this case, as shown in Figure 13, pixels (target prediction samples) corresponding in position to the pixels of the adjacent blocks encoded in the inter prediction mode are generated as reference pixels by interpolating the neighboring reference pixels encoded in the intra prediction mode, or the neighboring reference pixels encoded in the intra prediction mode are copied and used as reference pixels corresponding to the positions of the pixels of the adjacent blocks encoded in the inter prediction mode.
For example, when intra-prediction-mode pixels are present both on the right and left sides, or both on the upper and lower sides, of a target inter prediction sample, the target prediction sample P_T located in a block predicted in the inter prediction mode can be obtained by Equation 11. Further, when an intra prediction sample is present on only one side of the target prediction sample, the target prediction sample P_T located at a block position predicted in the inter prediction mode can be obtained by Equation 12. In Equation 11 and/or Equation 12, the average values of the corresponding intra-prediction-mode pixels may be used as the values of P_LB and P_RA. If there is no adjacent block predicted in an intra prediction mode, a pixel value at the same position in the previous picture may be copied for use as the reference pixel value.
Since linear interpolation provides a smoothing effect on boundary pixels, it may be effective to turn AIS off when CIP is used.
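Equations 11 and 12 are not reproduced in this excerpt; the following sketch assumes they amount to distance-weighted linear interpolation between P_LB and P_RA across the inter-coded gap, with integer rounding:

```python
def fill_cip_references(p_lb, p_ra, gap_len):
    """Fill gap_len reference positions covered by an inter-coded neighbor by
    linearly interpolating between the bounding intra-coded averages P_LB and
    P_RA, weighting by distance (assumed form of Equations 11/12)."""
    total = gap_len + 1
    return [(p_lb * (total - d) + p_ra * d + total // 2) // total
            for d in range(1, gap_len + 1)]

filled = fill_cip_references(p_lb=40, p_ra=80, gap_len=3)
```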
An intra prediction mode is then determined (S1430).
The intra prediction mode is determined per prediction unit (PU), with an optimal prediction mode determined in view of the relation between the required bit rate and the amount of distortion.
For example, when rate-distortion optimization (RDO) is on, a mode that minimizes the cost J = R + rD (where R is the bit rate, D is the amount of distortion, and r is a Lagrange variable) may be selected. This requires complete local decoding, in which case complexity may increase.
When RDO is off, a prediction mode that minimizes the mean absolute difference (MAD) obtained by subjecting the prediction error to a Hadamard transform may be selected.
Table 1 shows the number of prediction modes for the luma component according to the size of the prediction unit block.
[Table 1]
Block size | Number of prediction modes |
---|---|
4×4 | 17 |
8×8 | 34 |
16×16 | 34 |
32×32 | 34 |
64×64 | 3 |
Figure 15 illustrates prediction directions of the intra prediction modes. Referring to Figure 15, mode number 0 is a vertical mode, in which prediction is performed in the vertical direction using a pixel value of an adjacent block. Mode number 1 is a horizontal mode, in which prediction is performed in the horizontal direction using a pixel value of an adjacent block. Mode number 2 is a DC mode, in which a prediction block is generated using an average pixel value of the current prediction target block (for example, a luma value in the case of luma pixels and a chroma value in the case of chroma pixels). In the other modes shown in Figure 15, prediction is performed using pixel values of adjacent blocks according to the corresponding angles.
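A sketch of the three non-angular modes described above (vertical, horizontal and DC), using the mode numbering of Figure 15; the DC rounding convention is an assumption:

```python
def predict_block(mode, top, left, n):
    """Generate an n x n prediction block for mode 0 (vertical), 1 (horizontal)
    or 2 (DC), per the mode numbering described for Figure 15."""
    if mode == 0:  # vertical: copy the upper reference line downwards
        return [[top[x] for x in range(n)] for _ in range(n)]
    if mode == 1:  # horizontal: copy the left reference line rightwards
        return [[left[y] for _ in range(n)] for y in range(n)]
    if mode == 2:  # DC: mean of the 2n nearest reference pixels, rounded
        dc = (sum(top[:n]) + sum(left[:n]) + n) // (2 * n)
        return [[dc] * n for _ in range(n)]
    raise ValueError("only modes 0-2 are sketched here")

top_ref, left_ref = [1, 2, 3, 4], [5, 6, 7, 8]
vert = predict_block(0, top_ref, left_ref, 4)
horiz = predict_block(1, top_ref, left_ref, 4)
dc = predict_block(2, top_ref, left_ref, 4)
```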
In the DC mode, the topmost prediction pixels and the leftmost prediction pixels may be filtered in order to enhance prediction efficiency. Here, the filtering strength may become higher for smaller blocks. The other internal pixels of the current prediction block may not be filtered.
Meanwhile, a planar mode that reflects directionality may be used instead of the DC mode. In the planar mode, a Planar_flag value among the information transmitted from the encoder to the decoder is set to 1. When the planar mode is used, the DC mode is not used. Accordingly, when the DC mode is used instead of the planar mode, Planar_flag is set to 0.
When the planar mode is used, the same prediction methods as described above with reference to Figures 6 to 10 may be used. Here, the decoder may perform the RDO operation described above in order to select the optimal method. If necessary, two or more of the foregoing methods may be used together. The encoder signals to the decoder which of the prediction methods of the planar mode illustrated in Figures 6 to 10 it has selected.
As regards reference pixels for a chroma component, the unified directional intra (UDI) prediction of the luma block may be employed as it is under mode number 4, which is referred to as a DM mode. Under mode number 0, a prediction block is generated using a linear relationship between luma and chroma, which is referred to as a linear model (LM) mode. Mode number 1 is a vertical mode, in which prediction is performed in the vertical direction, and corresponds to mode number 0 of luma. Mode number 2 is a horizontal mode, in which prediction is performed in the horizontal direction, and corresponds to mode number 1 of luma. Mode number 3 is a DC mode, in which a prediction block is generated using an average chroma value of the current prediction target block, and corresponds to mode number 2 of luma.
Referring back to Figure 14, the encoder encodes the prediction mode of the current block (S1440). The encoder encodes the prediction modes of the luma component block and the chroma component block of the current prediction block. Here, since the prediction mode of the current prediction target block is highly correlated with the prediction modes of its adjacent blocks, the current prediction target block is encoded using the prediction modes of the adjacent blocks, thereby reducing the bit amount. Further, a most probable mode (MPM) of the current prediction target block is determined, and the prediction mode of the current prediction target block may accordingly be encoded using the MPM.
Subsequently, a pixel-by-pixel difference between the pixel values of the current prediction block and the pixel values of the prediction block is derived, thereby generating a residual signal (S1450).
The generated residual signal is transformed and encoded (S1460). The residual signal may be encoded using a transform kernel, where the transform-encoding kernel has a size of 2×2, 4×4, 8×8, 16×16, 32×32 or 64×64.
A transform coefficient C is generated for the transform, which may be a 2D block of transform coefficients. For example, for an n×n block, the transform coefficients may be calculated by Equation 20.
[Equation 20]
C(n, n) = T(n, n) × B(n, n) × T(n, n)^T
Here, C(n, n) is an n×n matrix of transform coefficients, T(n, n) is an n×n transform kernel matrix, and B(n, n) is an n×n matrix of the prediction target block.
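Equation 20 can be exercised with an orthonormal DCT-II kernel as the transform matrix T. The kernel choice is illustrative (this excerpt does not fix T), and for an orthonormal kernel the inverse transform is simply B = T^T × C × T:

```python
import math

def dct_matrix(n):
    """Orthonormal DCT-II kernel T; rows are the basis vectors."""
    t = []
    for k in range(n):
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        t.append([scale * math.cos(math.pi * (2 * j + 1) * k / (2 * n))
                  for j in range(n)])
    return t

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(col) for col in zip(*a)]

def transform(block):
    """Equation 20: C = T x B x T^T."""
    t = dct_matrix(len(block))
    return matmul(matmul(t, block), transpose(t))

def inverse_transform(coeffs):
    """Inverse for an orthonormal kernel: B = T^T x C x T."""
    t = dct_matrix(len(coeffs))
    return matmul(matmul(transpose(t), coeffs), t)

b = [[1.0, 2.0], [3.0, 4.0]]
c = transform(b)            # c[0][0] is the DC coefficient
r = inverse_transform(c)    # round-trips back to b
```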
When m = hN, n = 2N and h = 1/2, a transform coefficient C for an m×n or n×m difference block may be obtained by one of two methods. First, the m×n or n×m difference block may be split into four m×m blocks and a transform kernel applied to each block, thereby generating transform coefficients. Alternatively, a transform kernel may be applied to the m×n or n×m difference block as a whole, thereby generating transform coefficients.
The encoder determines which of the residual signal and the transform coefficients to transmit (S1470). For example, when prediction has been performed adequately, the residual signal may be transmitted as it is, without transform encoding.
The determination of which of the residual signal and the transform coefficients to transmit may be performed via RDO or the like, comparing cost functions before and after transform encoding so as to minimize cost. When the type of signal to be transmitted for the current prediction block, that is, the residual signal or the transform coefficients, is determined, the type of the transmitted signal is also signaled to the decoder.
Subsequently, the encoder scans the transform coefficients (S1480). The quantized 2D block of transform coefficients may be changed into a 1D vector of transform coefficients by the scanning.
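One common way to turn the quantized 2D coefficient block into a 1D vector is a zigzag scan; the patent allows the scan method to be signaled, so this particular order is only an example:

```python
def zigzag_scan(block):
    """Flatten a square 2D coefficient block into a 1D vector in zigzag order:
    positions are visited anti-diagonal by anti-diagonal, alternating direction
    along each diagonal (one common zigzag convention)."""
    n = len(block)
    order = sorted(((y, x) for y in range(n) for x in range(n)),
                   key=lambda p: (p[0] + p[1],
                                  p[1] if (p[0] + p[1]) % 2 else p[0]))
    return [block[y][x] for y, x in order]

coeffs = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
vec = zigzag_scan(coeffs)
```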
The scanned transform coefficients and the intra prediction mode are entropy-encoded (S1490). The encoded information forms a compressed bitstream, which may be transmitted or stored over an NAL.
Figure 16 is a flowchart schematically illustrating an operation of the decoder in the system according to the present invention.
Referring to Figure 16, the decoder entropy-decodes a received bitstream (S1610). Here, a block type may be obtained from a variable length coding (VLC) table, and a prediction mode of a current decoding target block may be derived. When the received bitstream includes side information needed for decoding, such as information on coding units, prediction units and transform units, information on AIS filtering, information on a limited number of prediction modes, information on unused prediction modes, information on rearrangement of prediction modes, information on transform methods and information on scan methods, the side information is entropy-decoded together with the bitstream.
The decoded information can confirm whether the signal transmitted for the current decoding target block is transform coefficients for a difference block or a residual signal. A 1D vector of transform coefficients for the difference block, or the residual signal, is obtained for the current decoding target block.
Subsequently, the decoder generates a residual block (S1620).
The decoder inversely scans the entropy-decoded residual signal or transform coefficients to generate a 2D block. Here, a residual block may be generated from the residual signal, and a 2D block of transform coefficients may be generated from the transform coefficients.
The transform coefficients are dequantized. The dequantized transform coefficients are inversely transformed, and the residual block for the residual signal is generated via the inverse transform. The inverse transform of an n×n block may be expressed by Equation 11.
The decoder generates reference pixels (S1630). Here, the decoder generates the reference pixels with reference to information on whether AIS filtering has been applied and on the filter type used, which is signaled and transmitted by the encoder. Likewise as in the encoding process, pixels in the rightmost vertical line of the left block, already decoded and reconstructed, adjacent to the current decoding target block and pixels in the lowermost horizontal line of the upper block adjacent to the decoding target block are used to generate the reference pixels.
Meanwhile, when the CIP_flag value received by the decoder is set to 1, which means that the encoder has used CIP for the target picture, the decoder generates the reference pixels accordingly. For example, only pixels of adjacent blocks encoded in an intra prediction mode are used as reference pixels, while pixels of adjacent blocks encoded in an inter prediction mode are not used as reference pixels. In this case, as shown in Figure 6, pixels (target prediction samples) corresponding in position to the pixels of the adjacent blocks encoded in the inter prediction mode are generated as reference pixels by interpolating the neighboring reference pixels encoded in the intra prediction mode, or the neighboring reference pixels encoded in the intra prediction mode may be copied and used as reference pixels corresponding to the positions of the pixels of the adjacent blocks encoded in the inter prediction mode.
For example, when intra-prediction-mode pixels are present both on the right and left sides, or both on the upper and lower sides, of a target inter prediction sample, the target prediction sample P_T located in a block predicted in the inter prediction mode may be obtained by Equation 17. Further, when an intra prediction sample is present on only one side of the target prediction sample, the target prediction sample P_T located at a block position predicted in the inter prediction mode may be obtained by Equation 18. In Equation 17 and/or Equation 18, the average values of the corresponding intra-prediction-mode pixels may be used as the values of P_LB or P_RA. If there is no adjacent block predicted in an intra prediction mode, a pixel value at the same position in the previous picture may be copied for use as the reference pixel value.
When the encoder has employed AIS filtering, that is, when smoothing is applied and AIS is therefore on, the decoder also performs AIS filtering in generating the reference pixels, according to the reference pixel generation method used by the encoder. The decoder may determine the filter coefficients based on filter type information among the received information. For example, when there are two sets of filter coefficients, [1, 2, 1] and [1, 1, 4, 1, 1], the set of filter coefficients indicated by the filter type information may be selected from the two.
Next, a prediction block for the decoding target block is generated using the entropy-decoded prediction mode of the current decoding target block and the reference pixels (S1640).
The process of generating the prediction block is the same as the process by which the encoder determines the prediction mode and generates the prediction block. When the prediction mode of the current block is the planar mode, the planar prediction method used to generate the prediction block may be identified by analyzing the signaled information. Here, the decoder may generate the prediction block based on the identified information, according to the mode used among the planar modes illustrated in Figures 6 to 10.
Next, a block reconstructed by adding, pixel by pixel, the pixel values of the prediction block and the pixel values of the difference block, that is, a reconstructed block, is generated (S1670).
Inventive Concepts
The invention provides the following inventive concepts:
1. An intra prediction method for an encoder, the method comprising:
generating reference pixels for intra prediction with respect to an input prediction unit;
determining an intra mode for the prediction unit;
generating a prediction block based on the reference pixels and the intra mode; and
generating a residual block for the prediction unit and the prediction block,
wherein at least one of the reference pixels and the pixels of the prediction block is predicted based on a base pixel, and the pixel value of the predicted pixel is a sum of the pixel value of the base pixel and a change in pixel value from the base pixel to the generated pixel.
2. The intra prediction method according to inventive concept 1, wherein a reference pixel of an adjacent block disposed at the top-left corner of the prediction block is set as a first base pixel; a value obtained by applying, to the base pixel, a change in pixel value from the first base pixel to a lowermost pixel among the reference pixels of an adjacent block disposed on the left boundary of the prediction block and a change in pixel value from the first base pixel to a rightmost pixel among the reference pixels of an adjacent block disposed on the top boundary of the prediction block is set as the pixel value of a second base pixel, the second base pixel being a diagonal pixel at the bottom-right corner of the prediction block; and pixel values of the diagonal pixels of the prediction block are predicted from the first base pixel and the second base pixel.
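Inventive concept 2 can be sketched as follows. The second base pixel is derived exactly as stated (first base pixel plus the two boundary changes); linearly interpolating the intermediate diagonal pixels between the two base pixels is an assumption, since the concept only says they are predicted "from" the base pixels without giving the formula:

```python
def predict_diagonal(top_left_ref, left_refs, top_refs, n):
    """Inventive concept 2 sketch for an n x n block.
    First base pixel: the reference pixel at the block's top-left corner.
    Second base pixel: first base pixel + change to the lowermost left
    reference + change to the rightmost top reference.
    Intermediate diagonal pixels: linear interpolation between the two base
    pixels (assumed rule, integer arithmetic)."""
    second_base = (top_left_ref
                   + (left_refs[n - 1] - top_left_ref)
                   + (top_refs[n - 1] - top_left_ref))
    return [top_left_ref + (second_base - top_left_ref) * (i + 1) // n
            for i in range(n)]

diag = predict_diagonal(10, [12, 14, 16, 18], [11, 12, 13, 14], 4)
```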
3. The intra prediction method according to inventive concept 2, wherein non-diagonal pixels of the prediction block are predicted by interpolation or extrapolation using the pixels of the adjacent blocks on the top boundary and/or the left boundary of the prediction block and the diagonal pixels.
4. The intra prediction method according to inventive concept 1, wherein a reference pixel of an adjacent block disposed at the top-left corner of the prediction block is set as the base pixel, and a value obtained by applying, to the base pixel, a change in pixel value from the base pixel to an adjacent pixel located in the same row as a prediction target pixel among the reference pixels of an adjacent block disposed on the left boundary of the prediction block and a change in pixel value from the base pixel to an adjacent pixel located in the same column as the prediction target pixel among the reference pixels of an adjacent block disposed on the top boundary of the prediction block is predicted as the pixel value of the prediction target pixel.
5. The intra prediction method according to inventive concept 1, wherein a pixel located in the same row or column as a prediction target pixel among the pixels of adjacent blocks disposed on the left boundary or top boundary of the prediction block is set as the base pixel, and a value obtained by applying, to the base pixel, a change in pixel value from the base pixel to the prediction pixel is predicted as the pixel value of the prediction target pixel.
6. The intra prediction method according to inventive concept 5, wherein the prediction target pixel is a diagonal pixel of the prediction block, and non-diagonal pixels of the prediction block are predicted by interpolation using the pixels of the adjacent blocks and the diagonal pixels.
7. The intra prediction method according to inventive concept 1, further comprising: generating a reference pixel disposed on a boundary between an inter mode block and the prediction unit when a block adjacent to the prediction unit is the inter mode block, wherein a pixel disposed on the boundary of the prediction unit among the pixels of an intra mode block disposed on the left side or lower side of the reference pixel is set as a first base pixel; a pixel disposed on the boundary of the prediction unit among the pixels of an intra mode block disposed on the right side or upper side of the reference pixel is set as a second base pixel; and the reference pixel is generated based on a distance from the first base pixel to the reference pixel and a distance from the second base pixel to the reference pixel.
8. The intra prediction method according to inventive concept 7, wherein the pixel value of the first base pixel is an average pixel value of the pixels disposed on the boundary of the prediction unit among the pixels of the intra mode block to which the first base pixel belongs, and the pixel value of the second base pixel is an average pixel value of the pixels disposed on the boundary of the prediction unit among the pixels of the intra mode block to which the second base pixel belongs.
9. The intra prediction method according to inventive concept 7, wherein, when an intra mode block is disposed only on the left side or lower side of the reference pixel, the pixel value of the first base pixel is the pixel value of the reference pixel, and when an intra mode block is disposed only on the right side or upper side of the reference pixel, the pixel value of the second base pixel is the pixel value of the reference pixel.
10. An intra prediction method for a decoder, the method comprising:
entropy-decoding a received bitstream;
generating reference pixels for intra prediction of a prediction unit;
generating a prediction block from the reference pixels, based on a prediction mode for the prediction unit; and
reconstructing a picture from the prediction block and a residual block obtained by the entropy decoding,
wherein at least one of the reference pixels and the pixels of the prediction block is predicted based on a base pixel, and the pixel value of the predicted pixel is a sum of the pixel value of the base pixel and a change in pixel value from the base pixel to the generated pixel.
11. The intra prediction method according to inventive concept 10, wherein a reference pixel of an adjacent block disposed at the top-left corner of the prediction block is set as a first base pixel; a value obtained by applying, to the base pixel, a change in pixel value from the first base pixel to a lowermost pixel among the reference pixels of an adjacent block disposed on the left boundary of the prediction block and a change in pixel value from the first base pixel to a rightmost pixel among the reference pixels of an adjacent block disposed on the top boundary of the prediction block is set as the pixel value of a second base pixel, the second base pixel being a diagonal pixel at the bottom-right corner of the prediction block; and pixel values of the diagonal pixels of the prediction block are predicted from the first base pixel and the second base pixel.
12. The intra prediction method according to inventive concept 11, wherein non-diagonal pixels of the prediction block are predicted by interpolation or extrapolation using the pixels of the adjacent blocks on the top boundary and/or the left boundary of the prediction block and the diagonal pixels.
13. The intra prediction method according to inventive concept 10, wherein a reference pixel of an adjacent block disposed at the top-left corner of the prediction block is set as the base pixel, and a value obtained by applying, to the base pixel, a change in pixel value from the base pixel to an adjacent pixel located in the same row as a prediction target pixel among the reference pixels of an adjacent block disposed on the left boundary of the prediction block and a change in pixel value from the base pixel to an adjacent pixel located in the same column as the prediction target pixel among the reference pixels of an adjacent block disposed on the top boundary of the prediction block is predicted as the pixel value of the prediction target pixel.
14. The intra prediction method according to inventive concept 10, wherein a pixel located in the same row or column as a prediction target pixel among the pixels of adjacent blocks disposed on the left boundary or top boundary of the prediction block is set as the base pixel, and a value obtained by applying, to the base pixel, a change in pixel value from the base pixel to the prediction pixel is predicted as the pixel value of the prediction target pixel.
15. The intra prediction method according to inventive concept 14, wherein the prediction target pixel is a diagonal pixel of the prediction block, and non-diagonal pixels of the prediction block are predicted by interpolation using the pixels of the adjacent blocks and the diagonal pixels.
16. The intra prediction method according to inventive concept 10, further comprising: generating a reference pixel disposed on a boundary between an inter mode block and the prediction unit when a block adjacent to the prediction unit is the inter mode block, wherein a pixel disposed on the boundary of the prediction unit among the pixels of an intra mode block disposed on the left side or lower side of the reference pixel is set as a first base pixel; a pixel disposed on the boundary of the prediction unit among the pixels of an intra mode block disposed on the right side or upper side of the reference pixel is set as a second base pixel; and the reference pixel is generated based on a distance from the first base pixel to the reference pixel and a distance from the second base pixel to the reference pixel.
17. The intra prediction method according to inventive concept 16, wherein the pixel value of the first base pixel is an average pixel value of the pixels disposed on the boundary of the prediction unit among the pixels of the intra mode block to which the first base pixel belongs, and the pixel value of the second base pixel is an average pixel value of the pixels disposed on the boundary of the prediction unit among the pixels of the intra mode block to which the second base pixel belongs.
18. The intra prediction method according to inventive concept 16, wherein, when an intra mode block is disposed only on the left side or lower side of the reference pixel, the pixel value of the first base pixel is the pixel value of the reference pixel, and when an intra mode block is disposed only on the right side or upper side of the reference pixel, the pixel value of the second base pixel is the pixel value of the reference pixel.
19. The intra prediction method according to inventive concept 10, comprising obtaining, by the entropy decoding, an instruction to generate the pixels of the prediction block based on the base pixel.
20. The intra prediction method according to inventive concept 10, comprising obtaining, by the entropy decoding, an instruction to generate the reference pixels based on the base pixel.
Claims (3)
1. A method of decoding a video signal, the method comprising:
obtaining a residual sample relating to a current block from a bitstream;
performing intra prediction for the current block based on neighboring samples adjacent to the current block; and
obtaining a reconstructed sample relating to the current block by adding the residual sample relating to the current block to a prediction sample obtained by performing the intra prediction,
wherein the prediction sample is obtained based on a top neighboring sample adjacent to the current block and a change relating to the neighboring samples, and
wherein the change relating to the neighboring samples represents a difference between a top-left neighboring sample adjacent to the current block and a left neighboring sample adjacent to the current block.
2. The method according to claim 1, wherein the top neighboring sample is located at the same x-coordinate as the prediction sample, and the left neighboring sample is located at the same y-coordinate as the prediction sample.
3. The method according to claim 2, wherein the prediction sample is adjacent to a left boundary of the current block.
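Claim 1's prediction for left-boundary samples can be sketched as below; the sign of the "change" (left neighbor minus top-left neighbor) is an assumed convention, since the claim only states that the change represents their difference:

```python
def predict_left_boundary(top, left, top_left, n):
    """Claimed prediction sketch: for a sample adjacent to the left boundary
    (x = 0), prediction = top neighbor at the same x-coordinate plus the
    difference between the left neighbor at the same y-coordinate and the
    top-left neighbor (sign convention assumed)."""
    return [top[0] + (left[y] - top_left) for y in range(n)]

col = predict_left_boundary(top=[20, 21], left=[18, 16], top_left=19, n=2)
```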
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2011-0048130 | 2011-05-20 | ||
KR20110048130 | 2011-05-20 | ||
KR10-2011-0065210 | 2011-06-30 | ||
KR1020110065210A KR101383775B1 (en) | 2011-05-20 | 2011-06-30 | Method And Apparatus For Intra Prediction |
CN201280035395.8A CN103703773B (en) | 2011-05-20 | 2012-05-14 | The method and apparatus that infra-frame prediction is carried out in display screen |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201280035395.8A Division CN103703773B (en) | 2011-05-20 | 2012-05-14 | The method and apparatus that infra-frame prediction is carried out in display screen |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106851315A true CN106851315A (en) | 2017-06-13 |
CN106851315B CN106851315B (en) | 2020-04-21 |
Family
ID=48179115
Family Applications (20)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710912286.4A Active CN107592546B (en) | 2011-05-20 | 2012-05-14 | Video decoding method |
CN201710911829.0A Expired - Fee Related CN107566832B (en) | 2011-05-20 | 2012-05-14 | Video decoding method |
CN201710909575.9A Active CN107592530B (en) | 2011-05-20 | 2012-05-14 | Video decoding method |
CN201710909571.0A Expired - Fee Related CN107547894B (en) | 2011-05-20 | 2012-05-14 | Video decoding method |
CN201710909564.0A Expired - Fee Related CN108055537B (en) | 2011-05-20 | 2012-05-14 | Video decoding method |
CN201710912304.9A Expired - Fee Related CN107566833B (en) | 2011-05-20 | 2012-05-14 | Video decoding method |
CN201710911231.1A Expired - Fee Related CN107592532B (en) | 2011-05-20 | 2012-05-14 | Video decoding method |
CN201710912363.6A Active CN107517379B (en) | 2011-05-20 | 2012-05-14 | Video decoding method |
CN201710911186.XA Active CN107659816B (en) | 2011-05-20 | 2012-05-14 | Video decoding method |
CN201710911180.2A Active CN107592545B (en) | 2011-05-20 | 2012-05-14 | Video decoding method |
CN201280035395.8A Active CN103703773B (en) | 2011-05-20 | 2012-05-14 | The method and apparatus that infra-frame prediction is carried out in display screen |
CN201710909497.2A Expired - Fee Related CN107786870B (en) | 2011-05-20 | 2012-05-14 | Video decoding method |
CN201710911191.0A Active CN107592531B (en) | 2011-05-20 | 2012-05-14 | Video decoding method |
CN201410646265.9A Active CN104378645B (en) | 2011-05-20 | 2012-05-14 | The method decoded to vision signal |
CN201710912306.8A Active CN107517377B (en) | 2011-05-20 | 2012-05-14 | Video decoding method |
CN201710911255.7A Expired - Fee Related CN107613296B (en) | 2011-05-20 | 2012-05-14 | Video decoding method |
CN201610847470.0A Active CN106851315B (en) | 2011-05-20 | 2012-05-14 | Method for decoding video signal |
CN201710911187.4A Expired - Fee Related CN107613295B (en) | 2011-05-20 | 2012-05-14 | Video decoding method |
CN201710912307.2A Expired - Fee Related CN107517378B (en) | 2011-05-20 | 2012-05-14 | Video decoding method |
CN201710909400.8A Expired - Fee Related CN107786871B (en) | 2011-05-20 | 2012-05-14 | Video decoding method |
Country Status (13)
Country | Link |
---|---|
US (11) | US9154803B2 (en) |
EP (1) | EP2712192A4 (en) |
KR (12) | KR101383775B1 (en) |
CN (20) | CN107592546B (en) |
AU (2) | AU2012259700B2 (en) |
BR (1) | BR112013029931B1 (en) |
CA (2) | CA2958027C (en) |
ES (11) | ES2597459B1 (en) |
GB (4) | GB2506039B (en) |
PL (1) | PL231066B1 (en) |
RU (6) | RU2628154C1 (en) |
SE (6) | SE539822C2 (en) |
WO (1) | WO2012161444A2 (en) |
Families Citing this family (54)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100882949B1 (en) | 2006-08-17 | 2009-02-10 | 한국전자통신연구원 | Apparatus and method of encoding and decoding using adaptive scanning of DCT coefficients according to the pixel similarity |
US9462272B2 (en) * | 2010-12-13 | 2016-10-04 | Electronics And Telecommunications Research Institute | Intra prediction method and apparatus |
WO2012081895A1 (en) | 2010-12-13 | 2012-06-21 | 한국전자통신연구원 | Intra prediction method and apparatus |
KR101383775B1 (en) * | 2011-05-20 | 2014-04-14 | 주식회사 케이티 | Method And Apparatus For Intra Prediction |
US9654785B2 (en) | 2011-06-09 | 2017-05-16 | Qualcomm Incorporated | Enhanced intra-prediction mode signaling for video coding using neighboring mode |
KR20120140181A (en) * | 2011-06-20 | 2012-12-28 | 한국전자통신연구원 | Method and apparatus for encoding and decoding using filtering for prediction block boundary |
US20130016769A1 (en) | 2011-07-17 | 2013-01-17 | Qualcomm Incorporated | Signaling picture size in video coding |
WO2014003421A1 (en) | 2012-06-25 | 2014-01-03 | 한양대학교 산학협력단 | Video encoding and decoding method |
US9386306B2 (en) * | 2012-08-15 | 2016-07-05 | Qualcomm Incorporated | Enhancement layer scan order derivation for scalable video coding |
JP5798539B2 (en) | 2012-09-24 | 2015-10-21 | 株式会社Nttドコモ | Moving picture predictive coding apparatus, moving picture predictive coding method, moving picture predictive decoding apparatus, and moving picture predictive decoding method |
US10003818B2 (en) * | 2013-10-11 | 2018-06-19 | Sony Corporation | Video coding system with intra prediction mechanism and method of operation thereof |
WO2015057947A1 (en) * | 2013-10-17 | 2015-04-23 | Huawei Technologies Co., Ltd. | Improved reference pixel selection and filtering for intra coding of depth map |
WO2016072732A1 (en) * | 2014-11-04 | 2016-05-12 | 삼성전자 주식회사 | Method and apparatus for encoding/decoding video using texture synthesis based prediction mode |
US10148953B2 (en) | 2014-11-10 | 2018-12-04 | Samsung Electronics Co., Ltd. | System and method for intra prediction in video coding |
US20180035123A1 (en) * | 2015-02-25 | 2018-02-01 | Telefonaktiebolaget Lm Ericsson (Publ) | Encoding and Decoding of Inter Pictures in a Video |
US20160373770A1 (en) * | 2015-06-18 | 2016-12-22 | Qualcomm Incorporated | Intra prediction and intra mode coding |
US11463689B2 (en) | 2015-06-18 | 2022-10-04 | Qualcomm Incorporated | Intra prediction and intra mode coding |
US10841593B2 (en) | 2015-06-18 | 2020-11-17 | Qualcomm Incorporated | Intra prediction and intra mode coding |
US20180213224A1 (en) * | 2015-07-20 | 2018-07-26 | Lg Electronics Inc. | Intra prediction method and device in video coding system |
EP3340632B1 (en) * | 2015-08-19 | 2021-08-11 | LG Electronics Inc. | Method and device for processing video signals |
US10136131B2 (en) * | 2015-09-11 | 2018-11-20 | Beamr Imaging Ltd. | Video coding apparatus and method |
US10819987B2 (en) * | 2015-11-12 | 2020-10-27 | Lg Electronics Inc. | Method and apparatus for coefficient induced intra prediction in image coding system |
KR20180075483A (en) * | 2015-11-24 | 2018-07-04 | 삼성전자주식회사 | Method and apparatus for post-processing an intra or inter prediction block based on a slope of a pixel |
WO2017142327A1 (en) * | 2016-02-16 | 2017-08-24 | 삼성전자 주식회사 | Intra-prediction method for reducing intra-prediction errors and device for same |
US10390026B2 (en) * | 2016-03-25 | 2019-08-20 | Google Llc | Smart reordering in recursive block partitioning for advanced intra prediction in video coding |
US10841574B2 (en) * | 2016-04-25 | 2020-11-17 | Lg Electronics Inc. | Image decoding method and device using intra prediction in image coding system |
CN114189680A (en) * | 2016-04-26 | 2022-03-15 | 英迪股份有限公司 | Image decoding method, image encoding method, and method for transmitting bit stream |
CN116506606A (en) * | 2016-08-01 | 2023-07-28 | 韩国电子通信研究院 | Image encoding/decoding method and apparatus, and recording medium storing bit stream |
US10542275B2 (en) * | 2016-12-28 | 2020-01-21 | Arris Enterprises Llc | Video bitstream coding |
WO2018132380A1 (en) * | 2017-01-13 | 2018-07-19 | Vid Scale, Inc. | Prediction approaches for intra planar coding |
CN116233416A (en) * | 2017-01-16 | 2023-06-06 | 世宗大学校产学协力团 | Image coding/decoding method |
MX2019008641A (en) * | 2017-01-31 | 2019-09-10 | Sharp Kk | Systems and methods for scaling transform coefficient level values. |
US11496747B2 (en) | 2017-03-22 | 2022-11-08 | Qualcomm Incorporated | Intra-prediction mode propagation |
EP3410708A1 (en) | 2017-05-31 | 2018-12-05 | Thomson Licensing | Method and apparatus for intra prediction with interpolation |
JP2019041165A (en) * | 2017-08-23 | 2019-03-14 | 富士通株式会社 | Image encoding device, image decoding device, image processing method, and image processing program |
US20190110052A1 (en) * | 2017-10-06 | 2019-04-11 | Futurewei Technologies, Inc. | Bidirectional intra prediction |
GB2567861A (en) | 2017-10-27 | 2019-05-01 | Sony Corp | Image data encoding and decoding |
KR20190056888A (en) * | 2017-11-17 | 2019-05-27 | 삼성전자주식회사 | Apparatus and method for encoding video |
US10645381B2 (en) * | 2018-04-30 | 2020-05-05 | Google Llc | Intra-prediction for smooth blocks in image/video |
CN112272950A (en) * | 2018-06-18 | 2021-01-26 | 交互数字Vc控股公司 | Boundary filtering of plane and DC modes in intra prediction |
US11277644B2 (en) | 2018-07-02 | 2022-03-15 | Qualcomm Incorporated | Combining mode dependent intra smoothing (MDIS) with intra interpolation filter switching |
KR102022375B1 (en) * | 2018-08-22 | 2019-09-18 | (주)넥서스일렉트로닉스 | Upscale chipset module for ultra-high definition television |
CN110896476B (en) * | 2018-09-13 | 2021-11-26 | 阿里巴巴(中国)有限公司 | Image processing method, device and storage medium |
US11303885B2 (en) | 2018-10-25 | 2022-04-12 | Qualcomm Incorporated | Wide-angle intra prediction smoothing and interpolation |
GB2580036B (en) * | 2018-12-19 | 2023-02-01 | British Broadcasting Corp | Bitstream decoding |
KR20200083321A (en) * | 2018-12-28 | 2020-07-08 | 한국전자통신연구원 | Method and apparatus for image encoding/decoding and recording medium for storing bitstream |
US11418811B2 (en) * | 2019-03-12 | 2022-08-16 | Apple Inc. | Method for encoding/decoding image signal, and device therefor |
US20220217405A1 (en) * | 2019-04-03 | 2022-07-07 | Lg Electronics Inc. | Video or image coding for modifying reconstructed picture |
CN111836043A (en) * | 2019-04-15 | 2020-10-27 | 中兴通讯股份有限公司 | Code block prediction method, code block decoding method, code block prediction device, and code block decoding device |
EP4300964A3 (en) * | 2019-06-21 | 2024-03-13 | VID SCALE, Inc. | Precision refinement for motion compensation with optical flow |
EP3970365A4 (en) * | 2019-06-21 | 2022-08-10 | Huawei Technologies Co., Ltd. | Matrix-based intra prediction for still picture and video coding |
WO2020256599A1 (en) * | 2019-06-21 | 2020-12-24 | Huawei Technologies Co., Ltd. | Method and apparatus of quantizing coefficients for matrix-based intra prediction technique |
MX2022001902A (en) | 2019-08-14 | 2022-04-18 | Lg Electronics Inc | Image encoding/decoding method and apparatus for determining prediction mode of chroma block by referring to luma sample position, and method for transmitting bitstream. |
KR20210133395A (en) | 2020-04-29 | 2021-11-08 | 삼성전자주식회사 | An image encoder, an image sensing device, and an operating method of the image encoder |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1615020A (en) * | 2004-11-10 | 2005-05-11 | 华中科技大学 | Method for scalable-complexity intra-frame prediction |
CN1722836A (en) * | 2004-07-07 | 2006-01-18 | 三星电子株式会社 | Video coding and coding/decoding method and video encoder and decoder |
JP2008271371A (en) * | 2007-04-24 | 2008-11-06 | Sharp Corp | Moving image encoding apparatus, moving image decoding apparatus, moving image encoding method, moving image decoding method, and program |
US20090141798A1 (en) * | 2005-04-01 | 2009-06-04 | Panasonic Corporation | Image Decoding Apparatus and Image Decoding Method |
CN101483780A (en) * | 2008-01-07 | 2009-07-15 | 华为技术有限公司 | Method and apparatus for intra-frame prediction |
US20090201991A1 (en) * | 2008-02-13 | 2009-08-13 | Yong-Hyun Lim | Method for intra prediction coding of image data |
US20090232215A1 (en) * | 2008-03-12 | 2009-09-17 | Lg Electronics Inc. | Method and an Apparatus for Encoding or Decoding a Video Signal |
CN101677406A (en) * | 2008-09-19 | 2010-03-24 | 华为技术有限公司 | Method and apparatus for video encoding and decoding |
Family Cites Families (81)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5122873A (en) * | 1987-10-05 | 1992-06-16 | Intel Corporation | Method and apparatus for selectively encoding and decoding a digital motion video signal at multiple resolution levels |
ATE109604T1 (en) | 1987-12-22 | 1994-08-15 | Philips Nv | VIDEO SIGNAL ENCODING AND DECODING WITH AN ADAPTIVE FILTER. |
GB8729878D0 (en) | 1987-12-22 | 1988-02-03 | Philips Electronic Associated | Processing sub-sampled signals |
US4903124A (en) * | 1988-03-17 | 1990-02-20 | Canon Kabushiki Kaisha | Image information signal transmission apparatus |
EP0337564B1 (en) * | 1988-04-15 | 1993-09-29 | Laboratoires D'electronique Philips | Device for decoding signals representative of a sequence of pictures and system for transmission of high definition television pictures including such a device |
FR2633137B1 (en) * | 1988-06-21 | 1990-11-09 | Labo Electronique Physique | HIGH DEFINITION TELEVISION TRANSMISSION AND RECEPTION SYSTEM WITH IMPROVED SPEED ESTIMATOR AND REDUCED DATA RATE |
US5335019A (en) * | 1993-01-14 | 1994-08-02 | Sony Electronics, Inc. | Digital video data quantization error detection as applied to intelligent dynamic companding |
ES2610430T3 (en) * | 2001-12-17 | 2017-04-27 | Microsoft Technology Licensing, Llc | Default macroblock coding |
CN101448162B (en) * | 2001-12-17 | 2013-01-02 | 微软公司 | Method for processing video image |
JP3900000B2 (en) * | 2002-05-07 | 2007-03-28 | ソニー株式会社 | Encoding method and apparatus, decoding method and apparatus, and program |
US20030231795A1 (en) | 2002-06-12 | 2003-12-18 | Nokia Corporation | Spatial prediction based intra-coding |
JP4324844B2 (en) | 2003-04-25 | 2009-09-02 | ソニー株式会社 | Image decoding apparatus and image decoding method |
RU2329615C2 (en) * | 2003-12-01 | 2008-07-20 | Самсунг Электроникс Ко., Лтд. | Video signal coding-decoding method and device for its implementation |
KR100596706B1 (en) | 2003-12-01 | 2006-07-04 | 삼성전자주식회사 | Method for scalable video coding and decoding, and apparatus for the same |
KR100597402B1 (en) | 2003-12-01 | 2006-07-06 | 삼성전자주식회사 | Method for scalable video coding and decoding, and apparatus for the same |
CN100536573C (en) * | 2004-01-16 | 2009-09-02 | 北京工业大学 | Inframe prediction method used for video frequency coding |
KR20050112445A (en) * | 2004-05-25 | 2005-11-30 | 경희대학교 산학협력단 | Prediction encoder/decoder, prediction encoding/decoding method and recording medium storing a program for performing the method |
KR101204788B1 (en) | 2004-06-03 | 2012-11-26 | 삼성전자주식회사 | Method of and apparatus for predictive video data encoding and/or decoding |
US7953152B1 (en) * | 2004-06-28 | 2011-05-31 | Google Inc. | Video compression and encoding method |
KR100654436B1 (en) | 2004-07-07 | 2006-12-06 | 삼성전자주식회사 | Method for video encoding and decoding, and video encoder and decoder |
JP2008507211A (en) | 2004-07-15 | 2008-03-06 | クゥアルコム・インコーポレイテッド | H.264 spatial error concealment based on intra-prediction direction |
KR100679025B1 (en) | 2004-11-12 | 2007-02-05 | 삼성전자주식회사 | Method for intra-prediction based on multi-layer, and method and apparatus for video coding using it |
KR100679031B1 (en) * | 2004-12-03 | 2007-02-05 | 삼성전자주식회사 | Method for encoding/decoding video based on multi-layer, and apparatus using the method |
US20060133507A1 (en) | 2004-12-06 | 2006-06-22 | Matsushita Electric Industrial Co., Ltd. | Picture information decoding method and picture information encoding method |
JP4427003B2 (en) * | 2005-05-23 | 2010-03-03 | オリンパスイメージング株式会社 | Data encoding apparatus, data decoding apparatus, data encoding method, data decoding method, and program |
KR100716999B1 (en) * | 2005-06-03 | 2007-05-10 | 삼성전자주식회사 | Method for intra prediction using the symmetry of video, method and apparatus for encoding and decoding video using the same |
JP2007043651A (en) | 2005-07-05 | 2007-02-15 | Ntt Docomo Inc | Dynamic image encoding device, dynamic image encoding method, dynamic image encoding program, dynamic image decoding device, dynamic image decoding method, and dynamic image decoding program |
US9055298B2 (en) | 2005-07-15 | 2015-06-09 | Qualcomm Incorporated | Video encoding method enabling highly efficient partial decoding of H.264 and other transform coded information |
US8155189B2 (en) * | 2005-10-19 | 2012-04-10 | Freescale Semiconductor, Inc. | System and method of coding mode decision for video encoding |
KR101246294B1 (en) * | 2006-03-03 | 2013-03-21 | 삼성전자주식회사 | Method of and apparatus for video intraprediction encoding/decoding |
KR100716142B1 (en) * | 2006-09-04 | 2007-05-11 | 주식회사 이시티 | Method for transferring stereoscopic image data |
US9014280B2 (en) * | 2006-10-13 | 2015-04-21 | Qualcomm Incorporated | Video coding with adaptive filtering for motion compensated prediction |
JP2008153802A (en) | 2006-12-15 | 2008-07-03 | Victor Co Of Japan Ltd | Moving picture encoding device and moving picture encoding program |
KR101365574B1 (en) | 2007-01-29 | 2014-02-20 | 삼성전자주식회사 | Method and apparatus for video encoding, and Method and apparatus for video decoding |
KR101369224B1 (en) * | 2007-03-28 | 2014-03-05 | 삼성전자주식회사 | Method and apparatus for Video encoding and decoding using motion compensation filtering |
JP4799477B2 (en) * | 2007-05-08 | 2011-10-26 | キヤノン株式会社 | Image coding apparatus and image coding method |
KR101378338B1 (en) * | 2007-06-14 | 2014-03-28 | 삼성전자주식회사 | Method and apparatus for encoding and decoding based on intra prediction using image inpainting |
US8913670B2 (en) * | 2007-08-21 | 2014-12-16 | Blackberry Limited | System and method for providing dynamic deblocking filtering on a mobile device |
US8254450B2 (en) | 2007-08-23 | 2012-08-28 | Nokia Corporation | System and method for providing improved intra-prediction in video coding |
CN100562114C (en) * | 2007-08-30 | 2009-11-18 | 上海交通大学 | Video encoding/decoding method and decoding device |
KR102231772B1 (en) * | 2007-10-16 | 2021-03-23 | 엘지전자 주식회사 | A method and an apparatus for processing a video signal |
KR101228020B1 (en) | 2007-12-05 | 2013-01-30 | 삼성전자주식회사 | Video coding method and apparatus using side matching, and video decoding method and appartus thereof |
JP5111127B2 (en) * | 2008-01-22 | 2012-12-26 | キヤノン株式会社 | Moving picture coding apparatus, control method therefor, and computer program |
KR20090097688A (en) * | 2008-03-12 | 2009-09-16 | 삼성전자주식회사 | Method and apparatus of encoding/decoding image based on intra prediction |
BRPI0911307B1 (en) * | 2008-04-15 | 2020-09-29 | France Telecom | ENCODING AND DECODING AN IMAGE OR A SEQUENCE OF IMAGES CUT ACCORDING TO LINEAR PIXEL PARTITIONS |
US20090262801A1 (en) * | 2008-04-17 | 2009-10-22 | Qualcomm Incorporated | Dead zone parameter selections for rate control in video coding |
US8451902B2 (en) | 2008-04-23 | 2013-05-28 | Telefonaktiebolaget L M Ericsson (Publ) | Template-based pixel block processing |
EP2958324B1 (en) | 2008-05-07 | 2016-12-28 | LG Electronics, Inc. | Method and apparatus for decoding video signal |
EP2924994B1 (en) | 2008-05-07 | 2016-09-21 | LG Electronics, Inc. | Method and apparatus for decoding video signal |
WO2009136749A2 (en) | 2008-05-09 | 2009-11-12 | Lg Electronics Inc. | Apparatus and method for transmission opportunity in mesh network |
US20090316788A1 (en) * | 2008-06-23 | 2009-12-24 | Thomson Licensing | Video coding method with non-compressed mode and device implementing the method |
US8687692B2 (en) * | 2008-08-12 | 2014-04-01 | Lg Electronics Inc. | Method of processing a video signal |
US8213503B2 (en) * | 2008-09-05 | 2012-07-03 | Microsoft Corporation | Skip modes for inter-layer residual video coding and decoding |
KR101306834B1 (en) * | 2008-09-22 | 2013-09-10 | 에스케이텔레콤 주식회사 | Video Encoding/Decoding Apparatus and Method by Using Prediction Possibility of Intra Prediction Mode |
US8879637B2 (en) * | 2008-10-06 | 2014-11-04 | Lg Electronics Inc. | Method and an apparatus for processing a video signal by which coding efficiency of a video signal can be raised by using a mixed prediction mode in predicting different macroblock sizes |
CN102204256B (en) * | 2008-10-31 | 2014-04-09 | 法国电信公司 | Image prediction method and system |
JP5238523B2 (en) * | 2009-01-13 | 2013-07-17 | 株式会社日立国際電気 | Moving picture encoding apparatus, moving picture decoding apparatus, and moving picture decoding method |
TWI380654B (en) * | 2009-02-11 | 2012-12-21 | Univ Nat Chiao Tung | The control method of transmitting streaming audio/video data and architecture thereof |
US8798158B2 (en) * | 2009-03-11 | 2014-08-05 | Industry Academic Cooperation Foundation Of Kyung Hee University | Method and apparatus for block-based depth map coding and 3D video coding method using the same |
JP5169978B2 (en) | 2009-04-24 | 2013-03-27 | ソニー株式会社 | Image processing apparatus and method |
US9113169B2 (en) | 2009-05-07 | 2015-08-18 | Qualcomm Incorporated | Video encoding with temporally constrained spatial dependency for localized decoding |
CN101674475B (en) * | 2009-05-12 | 2011-06-22 | 北京合讯数通科技有限公司 | Self-adapting interlayer texture prediction method of H.264/SVC |
JP5597968B2 (en) | 2009-07-01 | 2014-10-01 | ソニー株式会社 | Image processing apparatus and method, program, and recording medium |
KR101510108B1 (en) * | 2009-08-17 | 2015-04-10 | 삼성전자주식회사 | Method and apparatus for encoding video, and method and apparatus for decoding video |
KR101452860B1 (en) | 2009-08-17 | 2014-10-23 | 삼성전자주식회사 | Method and apparatus for image encoding, and method and apparatus for image decoding |
KR101507344B1 (en) | 2009-08-21 | 2015-03-31 | 에스케이 텔레콤주식회사 | Apparatus and Method for intra prediction mode coding using variable length code, and Recording Medium therefor |
FR2952497B1 (en) | 2009-11-09 | 2012-11-16 | Canon Kk | METHOD FOR ENCODING AND DECODING AN IMAGE STREAM; ASSOCIATED DEVICES |
JP2011146980A (en) * | 2010-01-15 | 2011-07-28 | Sony Corp | Image processor and image processing method |
US8619857B2 (en) | 2010-04-09 | 2013-12-31 | Sharp Laboratories Of America, Inc. | Methods and systems for intra prediction |
KR20110113561A (en) * | 2010-04-09 | 2011-10-17 | 한국전자통신연구원 | Method and apparatus for intra prediction encoding and decoding using adaptive filter |
EP2559239A2 (en) * | 2010-04-13 | 2013-02-20 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Apparatus for intra predicting a block, apparatus for reconstructing a block of a picture, apparatus for reconstructing a block of a picture by intra prediction |
CN102972028B (en) * | 2010-05-17 | 2015-08-12 | Lg电子株式会社 | New intra prediction mode |
CN101877792B (en) * | 2010-06-17 | 2012-08-08 | 无锡中星微电子有限公司 | Intra mode prediction method and device and coder |
SI3125552T1 (en) * | 2010-08-17 | 2018-07-31 | M&K Holdings Inc. | Method for restoring an intra prediction mode |
KR101579392B1 (en) * | 2010-09-27 | 2015-12-21 | 엘지전자 주식회사 | Method for partitioning block and decoding device |
US20120121018A1 (en) | 2010-11-17 | 2012-05-17 | Lsi Corporation | Generating Single-Slice Pictures Using Parallel Processors |
CN107181950B (en) * | 2010-12-08 | 2020-11-06 | Lg 电子株式会社 | Encoding device and decoding device for executing internal prediction |
US20120163457A1 (en) * | 2010-12-28 | 2012-06-28 | Viktor Wahadaniah | Moving picture decoding method, moving picture coding method, moving picture decoding apparatus, moving picture coding apparatus, and moving picture coding and decoding apparatus |
US9930366B2 (en) | 2011-01-28 | 2018-03-27 | Qualcomm Incorporated | Pixel level adaptive intra-smoothing |
CN102685505B (en) * | 2011-03-10 | 2014-11-05 | 华为技术有限公司 | Intra-frame prediction method and prediction device |
KR101383775B1 (en) * | 2011-05-20 | 2014-04-14 | 주식회사 케이티 | Method And Apparatus For Intra Prediction |
-
2011
- 2011-06-30 KR KR1020110065210A patent/KR101383775B1/en active IP Right Grant
-
2012
- 2012-05-14 CN CN201710912286.4A patent/CN107592546B/en active Active
- 2012-05-14 ES ES201631172A patent/ES2597459B1/en active Active
- 2012-05-14 ES ES201531015A patent/ES2545039B1/en active Active
- 2012-05-14 SE SE1551664A patent/SE539822C2/en unknown
- 2012-05-14 CA CA2958027A patent/CA2958027C/en active Active
- 2012-05-14 CN CN201710911829.0A patent/CN107566832B/en not_active Expired - Fee Related
- 2012-05-14 ES ES201390093A patent/ES2450643B1/en active Active
- 2012-05-14 CN CN201710909575.9A patent/CN107592530B/en active Active
- 2012-05-14 RU RU2015156465A patent/RU2628154C1/en active
- 2012-05-14 RU RU2016101539A patent/RU2628161C1/en active
- 2012-05-14 ES ES201631081A patent/ES2612388B1/en active Active
- 2012-05-14 CN CN201710909571.0A patent/CN107547894B/en not_active Expired - Fee Related
- 2012-05-14 GB GB1321333.5A patent/GB2506039B/en active Active
- 2012-05-14 CN CN201710909564.0A patent/CN108055537B/en not_active Expired - Fee Related
- 2012-05-14 RU RU2015156464A patent/RU2628157C1/en active
- 2012-05-14 GB GB1712865.3A patent/GB2559438B/en active Active
- 2012-05-14 SE SE1651173A patent/SE541011C2/en unknown
- 2012-05-14 ES ES201731046A patent/ES2633153B1/en active Active
- 2012-05-14 US US14/118,973 patent/US9154803B2/en active Active
- 2012-05-14 CN CN201710912304.9A patent/CN107566833B/en not_active Expired - Fee Related
- 2012-05-14 WO PCT/KR2012/003744 patent/WO2012161444A2/en active Application Filing
- 2012-05-14 CN CN201710911231.1A patent/CN107592532B/en not_active Expired - Fee Related
- 2012-05-14 ES ES201631171A patent/ES2597433B1/en active Active
- 2012-05-14 CN CN201710912363.6A patent/CN107517379B/en active Active
- 2012-05-14 CN CN201710911186.XA patent/CN107659816B/en active Active
- 2012-05-14 CN CN201710911180.2A patent/CN107592545B/en active Active
- 2012-05-14 SE SE1351441A patent/SE537736C2/en unknown
- 2012-05-14 AU AU2012259700A patent/AU2012259700B2/en active Active
- 2012-05-14 GB GB1712866.1A patent/GB2556649B/en active Active
- 2012-05-14 CN CN201280035395.8A patent/CN103703773B/en active Active
- 2012-05-14 CN CN201710909497.2A patent/CN107786870B/en not_active Expired - Fee Related
- 2012-05-14 CN CN201710911191.0A patent/CN107592531B/en active Active
- 2012-05-14 SE SE1550476A patent/SE538196C2/en unknown
- 2012-05-14 ES ES201631169A patent/ES2597432B1/en active Active
- 2012-05-14 ES ES201631167A patent/ES2596027B1/en active Active
- 2012-05-14 CN CN201410646265.9A patent/CN104378645B/en active Active
- 2012-05-14 ES ES201631168A patent/ES2597431B1/en active Active
- 2012-05-14 CN CN201710912306.8A patent/CN107517377B/en active Active
- 2012-05-14 GB GB1712867.9A patent/GB2560394B/en active Active
- 2012-05-14 PL PL407846A patent/PL231066B1/en unknown
- 2012-05-14 BR BR112013029931-2A patent/BR112013029931B1/en active IP Right Grant
- 2012-05-14 RU RU2013152690/08A patent/RU2576502C2/en active
- 2012-05-14 RU RU2015156467A patent/RU2628153C1/en active
- 2012-05-14 SE SE537736D patent/SE537736C3/xx unknown
- 2012-05-14 CN CN201710911255.7A patent/CN107613296B/en not_active Expired - Fee Related
- 2012-05-14 CN CN201610847470.0A patent/CN106851315B/en active Active
- 2012-05-14 RU RU2016101534A patent/RU2628160C1/en active
- 2012-05-14 ES ES201630382A patent/ES2570027B1/en active Active
- 2012-05-14 CN CN201710911187.4A patent/CN107613295B/en not_active Expired - Fee Related
- 2012-05-14 CN CN201710912307.2A patent/CN107517378B/en not_active Expired - Fee Related
- 2012-05-14 CN CN201710909400.8A patent/CN107786871B/en not_active Expired - Fee Related
- 2012-05-14 ES ES201631170A patent/ES2597458B1/en active Active
- 2012-05-14 CA CA2836888A patent/CA2836888C/en active Active
- 2012-05-14 EP EP12788733.9A patent/EP2712192A4/en not_active Withdrawn
- 2012-05-14 SE SE1651172A patent/SE541010C2/en unknown
-
2014
- 2014-01-22 KR KR1020140007853A patent/KR101458794B1/en active IP Right Grant
- 2014-03-31 KR KR1020140038230A patent/KR101453897B1/en active IP Right Grant
- 2014-03-31 KR KR1020140038231A patent/KR101453898B1/en active IP Right Grant
- 2014-03-31 KR KR1020140038232A patent/KR101453899B1/en active IP Right Grant
- 2014-09-18 KR KR20140124085A patent/KR101508894B1/en active IP Right Grant
- 2014-09-18 KR KR20140124086A patent/KR101508291B1/en active IP Right Grant
- 2014-10-08 KR KR1020140135609A patent/KR101552631B1/en active IP Right Grant
- 2014-10-08 KR KR20140135607A patent/KR101508486B1/en active IP Right Grant
- 2014-10-08 KR KR20140135606A patent/KR101508292B1/en active IP Right Grant
- 2014-10-08 KR KR20140135608A patent/KR101508895B1/en active IP Right Grant
-
2015
- 2015-01-26 US US14/606,008 patent/US9432695B2/en active Active
- 2015-01-26 US US14/606,007 patent/US9288503B2/en active Active
- 2015-04-06 KR KR20150048599A patent/KR20150043278A/en not_active Application Discontinuation
- 2015-11-30 AU AU2015261728A patent/AU2015261728B2/en active Active
- 2015-12-15 US US14/968,958 patent/US9445123B2/en active Active
- 2015-12-30 US US14/983,745 patent/US9432669B2/en active Active
-
2016
- 2016-07-26 US US15/219,354 patent/US9756341B2/en active Active
- 2016-07-26 US US15/219,654 patent/US9749639B2/en active Active
- 2016-07-27 US US15/220,426 patent/US9584815B2/en active Active
- 2016-07-27 US US15/220,459 patent/US9843808B2/en active Active
- 2016-07-27 US US15/220,437 patent/US9749640B2/en active Active
-
2017
- 2017-11-02 US US15/801,355 patent/US10158862B2/en active Active
Non-Patent Citations (3)
Title |
---|
RASHAD JILLANI, ET AL: "Low Complexity Intra MB Encoding in AVC/H.264", IEEE Transactions on Consumer Electronics * |
VIKTOR WAHADANIAH, ET AL: "Constrained Intra Prediction Scheme for Flexible-Sized Prediction Units in HEVC", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 4th Meeting: Daegu * |
YU-FAN LAI, ET AL: "Design of an intra predictor with data reuse for high-profile H.264 applications", 2009 IEEE International Symposium on Circuits and Systems * |
Also Published As
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104378645B (en) | Method for decoding a video signal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||