CN103141103B - Method and apparatus for processing video data - Google Patents

Method and apparatus for processing video data

Info

Publication number
CN103141103B
CN103141103B (application CN201180018316.8A)
Authority
CN
China
Prior art keywords
mode
prediction
frame
chroma
current block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201180018316.8A
Other languages
Chinese (zh)
Other versions
CN103141103A (en)
Inventor
金郑善
朴胜煜
林宰显
朴俊永
崔瑛喜
成宰源
全柄文
全勇俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
LG Electronics Inc
Original Assignee
LG Electronics Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LG Electronics Inc
Priority to CN201511018903.3A (CN105635737B)
Priority to CN201511018935.3A (CN105611304B)
Priority to CN201511009169.4A (CN105472386B)
Priority to CN201511010036.9A (CN105472387B)
Publication of CN103141103A
Application granted
Publication of CN103141103B
Legal status: Active

Classifications

    • All of the following classifications fall under H: Electricity > H04: Electric communication technique > H04N: Pictorial communication, e.g. television > H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals:
    • H04N19/70: syntax aspects related to video coding, e.g. related to compression standards
    • H04N19/107: selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
    • H04N19/11: selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • H04N19/122: selection of transform size, e.g. 8x8 or 2x4x8 DCT; selection of sub-band transforms of varying structure or type
    • H04N19/132: sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H04N19/157: assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/16: assigned coding mode for a given display mode, e.g. for interlaced or progressive display mode
    • H04N19/172: the coding unit being an image region, the region being a picture, frame or field
    • H04N19/174: the coding unit being an image region, the region being a slice, e.g. a line of blocks or a group of blocks
    • H04N19/176: the coding unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/186: the coding unit being a colour or a chrominance component
    • H04N19/196: adaptation specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters
    • H04N19/59: predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
    • H04N19/593: predictive coding involving spatial prediction techniques
    • H04N19/61: transform coding in combination with predictive coding
    • H04N19/80: details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N19/96: tree coding, e.g. quad-tree coding

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Discrete Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Color Television Systems (AREA)

Abstract

A method of signaling an intra chroma prediction mode, and a method of carrying out the signaled intra chroma prediction mode, are provided. The intra chroma prediction mode uses an interpolation of previously predicted luma samples from a tree block of the video data to obtain the intra chroma prediction of a current chroma prediction unit.

Description

Method and apparatus for processing video data
Technical field
The present invention relates to a method and apparatus for decoding digital video data that has been encoded using intra prediction modes. The invention further relates to a method and apparatus for signaling the appropriate intra prediction mode to a decoding unit.
Background Art
Generally, there are two methods for performing video compression coding in order to eliminate temporal and spatial redundancy. Eliminating temporal and spatial redundancy is an important requirement for improving the compression ratio of a video signal and thereby reducing the overall size of the video data transmission.
An inter prediction encoding method predicts a current block of video data from similar regions found in previously coded pictures, the previously coded pictures temporally preceding the current picture that contains the current block of video data. An intra prediction encoding method predicts a current block of video data from previously coded blocks that are adjacent to the current block and within the same picture. The inter prediction method is referred to as temporal prediction, and the intra prediction method is referred to as spatial prediction.
Pictures of video data coded by inter prediction and intra prediction are transmitted to a receiver and then decoded to reproduce the video data. The decoding unit must perform the appropriate prediction mode process in order to reconstruct the received video data.
Regarding intra prediction encoding, there are various modes for carrying out the spatial prediction that defines the intra prediction method. Furthermore, in both inter and intra prediction, the prediction of luma samples and the prediction of chroma samples are handled separately. Luma can be defined as the brightness of an image, and chroma can be defined as a representation of the color difference in an image. Although luma and chroma are both important components of any picture, the human visual system is more sensitive to changes in luma than to changes in chroma, and therefore the prediction modes provided for luma prediction are usually far more numerous and detailed than those provided for chroma prediction.
Summary of the invention
Technical problem
Currently known chroma prediction modes do not reconstruct chroma samples from an interpolated linear combination of luma samples. By taking advantage of an interpolation of luma samples that have already been reconstructed, a new mode for efficiently predicting chroma samples can be realized.
There is also a need to save codeword bits when the binary code words that accompany the video data are transmitted as part of the overall video data signal. When a large amount of video data is transmitted, reducing the number of codeword bits sent along with the video data, and thus the total number of bits to be transmitted, becomes even more important. This allows the video data signal as a whole to be compressed more efficiently.
Solution to the Problem
An object of the present invention is to introduce a method and apparatus for intra prediction processing of chroma samples that reconstructs chroma samples using an interpolated linear combination of previously reconstructed luma samples.
Another object of the present invention is to provide a more efficient method and apparatus for signaling and identifying the appropriate current prediction mode by relying on previously identified prediction mode information. Determining the appropriate current prediction mode from previously identified prediction mode information reduces the total number of codeword bits that need to be transmitted by the encoding unit.
Advantageous Effects of the Invention
The present invention provides a new mode for performing chroma prediction of a current chroma sample that is based on an interpolated linear combination of previously reconstructed luma samples. This new chroma prediction mode also makes use of previously reconstructed luma samples, which are interpolated, and of previously reconstructed chroma samples, where these samples are taken from blocks neighboring the current chroma sample. By using, in a linear combination, the interpolated previously reconstructed luma samples from the same block as the current chroma sample, the interpolated previously reconstructed luma samples from the neighboring blocks, and the previously reconstructed chroma samples from the neighboring blocks, higher prediction accuracy for the chroma samples can be achieved.
The present invention also addresses the need to reduce the total number of codeword bits that are transmitted, thereby reducing the overall number of bits in the bitstream. Where possible, this allows information transmitted later to be derived from information that was transmitted earlier.
Brief Description of the Drawings
The accompanying drawings are included to provide a further understanding of the invention, are incorporated in and constitute a part of this specification, illustrate embodiments of the invention, and together with the description serve to explain the principles of the invention.
Fig. 1 is a block diagram of an encoding unit according to the present invention;
Fig. 2 is an exemplary view of coded video data;
Fig. 3 is a block diagram of a decoding unit according to the present invention;
Fig. 4 illustrates available intra prediction modes according to some embodiments of the present invention;
Fig. 5 illustrates a picture of video data partitioned into slice units;
Fig. 6 is a close-up view of the region indicated in Fig. 5;
Fig. 7 is a close-up view of the region indicated in Fig. 6;
Fig. 8 illustrates the result of an interpolation process according to a preferred embodiment of the present invention;
Fig. 9 illustrates the result of an interpolation process according to another embodiment of the present invention;
Fig. 10 illustrates the result of an interpolation process according to yet another embodiment of the present invention;
Fig. 11(a) is a table of available luma prediction modes according to a preferred embodiment of the present invention;
Fig. 11(b) is a table of available luma prediction modes according to another preferred embodiment of the present invention;
Fig. 12 is a graphical illustration of available prediction modes according to some embodiments of the present invention;
Fig. 13 is a block diagram of a prediction processing unit according to a preferred embodiment of the present invention;
Fig. 14(a) is a table of the mapping relations between luma prediction mode information and chroma prediction mode information;
Fig. 14(b) is a binary codeword representation of the table in Fig. 14(a);
Fig. 15 is a table comparing the values for intra chroma prediction modes and their binary codeword values;
Fig. 16 is a flowchart illustrating the transmission of intra prediction mode values;
Fig. 17 is a flowchart illustrating a signaling method for identifying the appropriate intra chroma prediction mode according to an embodiment of the present invention;
Fig. 18 is a flowchart illustrating a signaling method for identifying the appropriate intra chroma prediction mode according to another embodiment of the present invention;
Fig. 19 is a flowchart illustrating a signaling method for identifying the appropriate intra chroma prediction mode according to yet another embodiment of the present invention;
Fig. 20 illustrates a method for transmitting transform unit size information according to the present invention;
Fig. 21 illustrates another method for transmitting transform unit size information according to the present invention.
Embodiment
Features and advantages of the invention will be set forth in the description that follows, and in part will be apparent from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims as well as the appended drawings.
To achieve these and other advantages, and in accordance with the purpose of the present invention as embodied and broadly described herein, a method for decoding digital video data comprises: receiving a sequence of pictures comprising the video data, where each picture of video data is made up of at least one slice, and each slice is made up of at least one tree block (treeblock). Each tree block is partitioned into a number of prediction units, and prediction is performed on each prediction unit according to a corresponding prediction mode in order to reconstruct the video data. The corresponding prediction mode is the same prediction mode that was used to encode the prediction unit before transmission.
According to the present invention, prediction mode type information is received together with the video data in order to identify the prediction mode of each prediction unit of the video data. The prediction mode type information distinguishes between inter prediction modes and intra prediction modes. The prediction mode type information also distinguishes prediction modes corresponding to luma prediction units from prediction modes corresponding to chroma prediction units.
According to the present invention, when the prediction mode type information indicates that a linear method (LM) prediction mode is to be applied for intra predicting a current chroma prediction unit to be reconstructed, the LM prediction mode includes obtaining an interpolated linear combination of previously reconstructed luma samples from the same block as the current chroma prediction unit. The LM mode further includes obtaining a linear interpolation of previously reconstructed luma samples from blocks neighboring the current chroma prediction unit, and obtaining previously reconstructed chroma samples from blocks neighboring the current chroma prediction unit.
In addition, according to the present invention, when the prediction mode type information indicates that the linear method prediction mode is to be applied for inter predicting the current chroma prediction unit, a method is provided for obtaining an interpolated linear combination of previously reconstructed luma samples, where the luma samples are obtained from a reference picture that is different from the current picture containing the current chroma prediction unit. For the inter prediction version of the LM prediction mode, the reconstructed chroma samples that are used may be obtained from luma samples reconstructed from the reference picture, or from luma samples previously reconstructed in the current picture, the reference picture being different from the current picture. In addition, the reconstructed chroma samples may be obtained directly from the reference picture when the LM prediction mode is applied for inter prediction. The inter prediction version of the linear method prediction mode is also applicable to B pictures, which may use reference pictures reconstructed from the future.
The LM prediction mode of the present invention is also applicable to the case where a prediction unit is divided into intra predicted blocks and inter predicted blocks.
According to the present invention, the LM prediction mode used for the prediction processing of chroma samples can be signaled in a manner that relies on the previously signaled prediction mode of the associated luma samples. This saves binary codeword bits that would otherwise be needed to identify the appropriate prediction mode.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are intended to provide further explanation of the invention as claimed.
Mode for the Invention
Reference will now be made in detail to the preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings. First, the terms used in this specification and claims should not be construed as limited to their ordinary or dictionary meanings, but should be interpreted as having meanings and concepts that match the technical idea of the present invention, on the principle that an inventor may properly define terms in order to describe the invention in the best way. The embodiments disclosed herein and the configurations shown in the accompanying drawings are exemplary in nature, and the preferred embodiments do not represent all of the permissible technical variations of the present invention. Therefore, it should be understood that the present invention covers the modifications and variations of this invention that fall within the scope of the appended claims and their equivalents as of the filing date of this application.
For example, a picture may also be referred to as a frame, where a frame or picture represents a single instance of video data. A sequence of pictures or frames makes up the video data. A picture is usually composed of a plurality of slices, although it is possible for a single slice to comprise an entire picture. In addition, a block may also be referred to as a unit.
Each slice is usually partitioned into a plurality of tree blocks. The size of a tree block is variable and may be as large as 64 × 64 pixels. Alternatively, a tree block may have any size corresponding to 32 × 32, 16 × 16, 16 × 8, 8 × 16, 8 × 8, 8 × 4, 4 × 8, 4 × 4, 4 × 2, 2 × 4 or 2 × 2 pixels. The size of a tree block is influenced by various factors, such as, but not limited to, the video resolution of the selected video picture. An encoder may also adaptively determine the optimal tree block size throughout the sequence of pictures that makes up the video data. Another basic unit for processing a video picture is the macroblock, which has a size of 16 × 16.
Before video data can be transmitted and decoded, it must first be encoded. Fig. 1 illustrates that the video data originates from a video source that provides the original video data. Although Fig. 1 depicts the video source as a part of the overall transmitting unit 1, the video source 2 may be separate from the transmitting unit 1 as long as the video source 2 is able to communicate the original video data to the transmitting unit 1. When the video source is not an integral part of the transmitting unit 1, it is possible in practice for the video source 2 to communicate with the transmitting unit 1 either directly or wirelessly.
Prediction processing is performed by the prediction processing unit 3 of the transmitting unit 1. Prediction processing of the original video data is required in order to obtain the sequence of video data pictures that represents the original video data obtained from the video source. As the video data undergoes the various prediction processes in the prediction processing unit 3, prediction mode information is associated with each prediction unit of the video data. The prediction mode information identifies under which of the available prediction modes each prediction unit was predicted. In this way, when the video data is later received at a decoding unit, each prediction unit can be successfully predicted and reconstructed for display by undergoing the same prediction mode process identified by the prediction mode information. After the prediction processing, the transform unit 4 performs a transform operation, most likely a discrete cosine transform (DCT), on the predicted video data. The video data is then encoded by the encoding unit 5 and transmitted.
Fig. 2 depicts a representation of a picture of video data according to the present invention. In Fig. 2, the picture corresponds to a 4:2:0 sampling rate of the video data. The 4:2:0 sampling rate ensures that, for every 2 × 2 block of four luma samples (Y or L), there is one pair of corresponding chroma samples (Cr, Cb). Besides the 4:2:0 sampling rate illustrated by Fig. 2, various other sampling rates may be used to transmit video data, including but not limited to the 4:2:2 sampling rate and the 4:4:4 sampling rate. Although the disclosure of the present invention assumes a 4:2:0 sampling rate, it should be understood that all aspects of the present invention apply under all available sampling rates.
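For illustration only (this sketch is not part of the claimed method, and the function name is an assumption), the 4:2:0 relationship between luma and chroma sample counts can be summarized as follows:

```python
# Illustrative sketch of 4:2:0 chroma subsampling: each 2x2 group of luma
# samples shares one pair of chroma samples (Cb, Cr), so each chroma plane
# has half the luma width and half the luma height.
def chroma_plane_size_420(luma_width: int, luma_height: int) -> tuple:
    return (luma_width // 2, luma_height // 2)

# Example: a 64 x 64 luma tree block carries a 32 x 32 Cb plane and a 32 x 32 Cr plane.
assert chroma_plane_size_420(64, 64) == (32, 32)
```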
Fig. 3 illustrates a receiver 31 that receives the video data transmitted from the transmitting unit of Fig. 1. The receiver 31 within the receiving unit 32 receives the video data together with the corresponding prediction mode information. The decoding unit 33 then decodes the video data. During decoding of the video data, the corresponding prediction mode information is read in order to identify the appropriate prediction mode process to perform on each prediction unit received as part of the video data. The inverse transform unit 34 then performs an inverse transform operation on the video data, most likely an inverse discrete cosine transform. The reconstruction unit 35 then performs the reconstruction of the prediction units (according to the corresponding prediction mode process) to regenerate the video data for display.
Chroma intra prediction modes
First, the intra prediction modes used for predicting chroma samples will be explained.
The decoding unit of a receiver that receives the actual video data will also receive intra chroma prediction mode information, where the intra chroma prediction mode information corresponds to each chroma prediction unit of the video data. The chroma samples that require prediction processing at the decoding unit may be referred to as a chroma prediction unit. The intra chroma prediction mode information indicates the prediction mode used by the encoder to encode the video data before transmission. This is necessary so that, at the receiving decoding unit side, the corresponding prediction mode can be processed on the prediction units of the video data in order to ensure successful reproduction of the video data. Therefore, upon receiving the transmission of video data, the decoding unit performs the task of reading the intra chroma prediction mode information and then performs the appropriate prediction on the prediction unit according to the value indicated by the intra chroma prediction mode information.
Table 1

  Value   Intra chroma prediction mode
  0       Estimation mode (LM mode)
  1       DC mode
  2       Horizontal mode
  3       Vertical mode
  4       Plane mode
Table 1 describes one embodiment of the values and names of the various intra chroma prediction modes according to the present invention. By applying these intra chroma prediction modes, a current chroma prediction unit can be accurately predicted and reconstructed by the decoding unit.
Table 1 lists a set of values, each corresponding to a specific intra chroma prediction mode. Thus the intra chroma prediction mode information will include at least a value identifying the corresponding intra chroma prediction mode. When the intra chroma prediction mode information has the value "1", the DC mode will be applied for predicting the current chroma prediction unit. For the DC prediction mode, the previously reconstructed blocks to the left of and above the current prediction unit, commonly referred to as the neighboring blocks, are used for the prediction processing of the current prediction unit. Fig. 4(c) depicts an illustrative example of the DC prediction mode. C represents the current prediction unit, A represents the previously reconstructed block to the left of the current prediction unit C, and B represents the previously reconstructed block above the current prediction unit C. According to the DC prediction mode, when both are available, the previously reconstructed blocks A and B are averaged for the prediction processing of the current prediction unit C. However, if only block A is available, the prediction processing may follow the horizontal mode explained below. And if only block B is available, the prediction processing may follow the vertical prediction mode explained below. A block is considered unavailable when it has not yet been reconstructed, or when it does not belong to the same slice as the current prediction unit.
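The availability handling just described can be sketched as follows. This is an illustrative outline only; the array conventions, helper signature and the mid-range default value are assumptions, not the patent's own implementation.

```python
import numpy as np

def dc_mode_predict(left_col, top_row, size):
    """Sketch of the DC-mode behavior described above for a size x size block.
    left_col: reconstructed samples of neighboring block A (None if unavailable).
    top_row:  reconstructed samples of neighboring block B (None if unavailable).
    """
    if left_col is not None and top_row is not None:
        # Both neighbors available: predict every sample with the mean of A and B.
        dc = int(round((float(np.mean(left_col)) + float(np.mean(top_row))) / 2))
        return np.full((size, size), dc, dtype=np.int32)
    if left_col is not None:
        # Only A available: fall back to horizontal-mode prediction from A.
        return np.tile(np.asarray(left_col, dtype=np.int32).reshape(size, 1), (1, size))
    if top_row is not None:
        # Only B available: fall back to vertical-mode prediction from B.
        return np.tile(np.asarray(top_row, dtype=np.int32).reshape(1, size), (size, 1))
    # Neither neighbor available: an assumed mid-range default value.
    return np.full((size, size), 128, dtype=np.int32)
```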
When the intra chroma prediction mode information has the value "2", the horizontal mode will be applied for predicting the current chroma prediction unit. For the horizontal prediction mode, the previously reconstructed neighboring block to the left of the current prediction unit is used for the prediction processing of the current prediction unit. Fig. 4(b) depicts an illustrative example of horizontal mode prediction, in which the previously reconstructed block A is used to process the prediction of the current prediction unit C.
When the intra chroma prediction mode information has the value "3", the vertical prediction mode will be applied to the current chroma prediction unit. For the vertical prediction mode, the previously reconstructed neighboring block above the current prediction unit is used for the prediction processing of the current prediction unit. Fig. 4(a) depicts an illustrative example of vertical mode prediction, in which the previously reconstructed block B is used to process the prediction of the current prediction unit C.
When the intra chroma prediction mode information has the value "4", the plane mode will be applied for predicting the current chroma prediction unit. Fig. 4(d) depicts an illustrative example of plane mode prediction, in which the previously reconstructed blocks A and B, located to the left of and above the current prediction unit C respectively, are used for the prediction processing of the current prediction unit C according to the plane mode.
The estimation prediction mode, which corresponds to an intra chroma prediction mode information value of "0", is explained in more detail later in this disclosure. For future reference, the estimation mode is considered identical to the LM prediction mode described in Table 2 below.
Table 2

  Value   Intra chroma prediction mode
  0       LM mode (estimation mode)
  1       Vertical mode
  2       Horizontal mode
  3       DC mode
  4       DM mode
Table 2 depicts a second embodiment for identifying the intra prediction mode to be applied to a chroma prediction unit according to the present invention. Table 2 is considered the preferred embodiment of the intra chroma prediction modes of the present invention.
When the intra chroma prediction mode information has the value "1", the vertical prediction mode will be applied to the current chroma prediction unit. The vertical prediction mode of Table 2 operates in the same manner as the vertical prediction mode described for Table 1 above.
When the intra chroma prediction mode information has the value "2", the horizontal prediction mode will be applied to the current chroma prediction unit. The horizontal prediction mode of Table 2 operates in the same manner as the horizontal prediction mode described for Table 1 above.
When the intra chroma prediction mode information has the value "3", the DC prediction mode will be applied to the current chroma prediction unit. The DC prediction mode of Table 2 operates in the same manner as the DC prediction mode described for Table 1 above.
In addition, although not specifically identified in Table 2, intra angular prediction modes may be used for the prediction processing of the current chroma prediction unit. The intra angular prediction modes are described below with reference to the intra luma prediction modes; the intra angular prediction modes available for processing chroma prediction units operate in the same manner as the corresponding intra luma prediction modes. Including all of the angular prediction modes, there are thirty-four (34) available intra chroma prediction modes according to the preferred embodiment of the present invention.
When the intra chroma prediction mode information has the value "4", the intra DM prediction mode will be applied to the current chroma prediction unit. The DM prediction mode is not available in Table 1. The DM prediction mode predicts the current chroma prediction unit according to the prediction mode process that was applied to the luma samples found in the same prediction unit as the current chroma prediction unit.
The intra luma prediction mode information is transmitted, and decoded, before the intra chroma prediction mode information. Therefore, according to the DM prediction mode, a value corresponding to the intra chroma DM prediction mode simply instructs the decoding unit to prediction process the current chroma prediction unit with the same mode identified by the intra luma prediction mode information, where that intra luma prediction mode information corresponds to the luma samples of the same prediction unit. The available intra prediction modes for a luma prediction unit can be found in Fig. 11(a), described later in this disclosure.
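As a sketch of the mode derivation described above (the mode names, the numbering dictionary and the function signature are illustrative assumptions rather than the patent's syntax), resolving a signaled Table 2 value might look like this:

```python
# Illustrative resolution of the Table 2 values described above
# (0 = LM, 1 = vertical, 2 = horizontal, 3 = DC, 4 = DM).
CHROMA_MODE_TABLE2 = {0: "LM", 1: "VERTICAL", 2: "HORIZONTAL", 3: "DC", 4: "DM"}

def resolve_chroma_mode(intra_chroma_pred_mode, luma_mode_of_same_pu):
    mode = CHROMA_MODE_TABLE2[intra_chroma_pred_mode]
    if mode == "DM":
        # DM: reuse the intra luma prediction mode already decoded for this prediction unit.
        return luma_mode_of_same_pu
    return mode

# Example: a signaled value of 4 (DM) with an angular luma mode inherits that angular mode.
assert resolve_chroma_mode(4, "ANGULAR_26") == "ANGULAR_26"
```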
When the intra chroma prediction mode information has the value "0", the LM (linear method) prediction mode will be applied to the current chroma prediction unit. As mentioned above, the LM prediction mode and the estimation prediction mode are to be interpreted as operating in the same way, and may be referred to by either name throughout this disclosure.
Intra LM prediction mode
A detailed description of the intra LM chroma prediction mode will now be given. When a transmission of video data is received, the decoder first predicts and reconstructs (i.e., decodes) the luma prediction units of a given block before predicting and reconstructing the chroma prediction units of the same block. Fig. 5 illustrates a slice according to the present invention that is partitioned into four blocks. Assume that blocks B1, B2 and B3 have already been prediction processed and reconstructed by the decoding unit of the receiver, and that block B4 is the block currently undergoing prediction processing for reconstruction. Within block B4, the top-left corner of block B4 is taken, for illustrative purposes, as the current block C that is presently undergoing prediction processing.
Fig. 6 is therefore an enlarged view of the region outlined by the dotted box in Fig. 5. Fig. 6 depicts the current block C undergoing prediction processing as having a size of 32 × 32. Each of the white squares outlined within block C represents a luma sample that has already been prediction processed and reconstructed. Thus only the chroma samples of block C still need to be prediction processed. Adjacent to block C are the neighboring block A and block B. Block A represents the part of B1 located to the left of block C, as seen in Fig. 5, and block B represents the part of B2 located above block C, as seen in Fig. 5. Both block A and block B have been reconstructed.
Fig. 7 illustrates a further close-up view of the 4 × 4 block at the top left of the current prediction block C seen in Fig. 6. Fig. 7 also provides a view of a 2 × 4 portion of the neighboring block A to the left of block C, and a view of a 4 × 2 portion of the neighboring block B above block C. Because blocks A and B have been reconstructed, the white squares of blocks A and B represent reconstructed luma samples and the black squares represent reconstructed chroma samples. Note that, although they cannot be seen beneath the black reconstructed chroma squares, there is additionally a corresponding reconstructed luma sample underneath each black reconstructed chroma sample in blocks A and B. Thus the luma samples of the current block C have been reconstructed, the luma samples of the neighboring blocks A and B have been reconstructed, and the chroma samples of the neighboring blocks A and B have been reconstructed. Further, the "X" marks in blocks A and B represent linear interpolations of the reconstructed luma samples of the corresponding blocks A and B. These reconstructed luma and chroma samples from the neighboring blocks, and the linear interpolations of the luma samples from the neighboring blocks, will all be used in the LM prediction mode processing of the chroma prediction units of the current block C.
In order to perform the intra chroma prediction for the current chroma prediction unit in block C according to the LM mode, the linear interpolation of the previously reconstructed luma samples within the current prediction block C must first be obtained.
According to a preferred embodiment of the intra LM prediction mode, two previously reconstructed luma samples are obtained within block C. A previously reconstructed luma sample is denoted P_L(x, y), where x and y correspond to a positional reference of the current chroma prediction unit within block C, block C being the block currently undergoing prediction processing according to the LM prediction mode. The first luma sample is taken at P_L(2x, 2y), and the second luma sample is taken at P_L(2x, 2y+1) within block C. Then, according to this preferred embodiment of the intra LM prediction mode, the linear combination P_L*(x, y) of the interpolated luma samples can be obtained as follows:
Mathematical expression 1 [Math. 1]
P_L*(x, y) = 0.5 * [P_L(2x, 2y) + P_L(2x, 2y+1)]
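A minimal sketch of this two-sample interpolation, assuming the reconstructed luma plane is stored as a two-dimensional array indexed [row][column] (an assumption made only for illustration):

```python
def interp_luma_two_sample(luma, x, y):
    # Mathematical expression 1: average the reconstructed luma samples at
    # (2x, 2y) and (2x, 2y+1) to form P_L*(x, y).
    return 0.5 * (luma[2 * y][2 * x] + luma[2 * y + 1][2 * x])
```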
Now, with the linear combination P_L*(x, y) of the interpolated luma samples obtained, the intra LM prediction for the current chroma prediction unit, denoted P'_C, can be obtained as follows:
Mathematical expression 2 [Math. 2]
P'_C(x, y) = α * P_L*(x, y) + β
Here the coefficients α (alpha) and β (beta) can be obtained as follows:
Mathematical expression 3 [Math. 3]
α = R(P^L*, P^C) / R(P^L*, P^L*)
β = M(P^C) - α * M(P^L*)
In Mathematical expression 3, R(·,·) represents the correlation between its two arguments, and M(·) represents the mean of its argument. P^L* represents a linear interpolation of previously reconstructed luma samples taken from neighboring block A or B; in Fig. 7, P^L* is represented by the "X" marks found in neighboring blocks A and B. P^C represents a reconstructed chroma sample taken from neighboring block A or B; in Fig. 7, P^C is represented by the black squares in neighboring blocks A and B. P^L* may also incorporate left-shift or right-shift operations in order to account for any rounding errors that may occur.
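As an illustrative sketch of Mathematical expression 3, reading R(·,·) as a covariance-style correlation term (one reasonable interpretation; rounding and bit-shift handling are omitted, and the variable names are assumptions):

```python
import numpy as np

def derive_alpha_beta(neighbor_luma_interp, neighbor_chroma):
    """Sketch of Mathematical expression 3: derive alpha and beta from the
    neighboring blocks A and B, where neighbor_luma_interp holds the interpolated
    reconstructed luma samples (the 'X' marks) and neighbor_chroma holds the
    reconstructed chroma samples (the black squares), in matching order."""
    l = np.asarray(neighbor_luma_interp, dtype=np.float64)
    c = np.asarray(neighbor_chroma, dtype=np.float64)
    n = l.size
    r_lc = np.sum(l * c) - np.sum(l) * np.sum(c) / n   # R(P^L*, P^C)
    r_ll = np.sum(l * l) - np.sum(l) * np.sum(l) / n   # R(P^L*, P^L*)
    alpha = r_lc / r_ll if r_ll != 0 else 0.0
    beta = float(np.mean(c)) - alpha * float(np.mean(l))
    return alpha, beta
```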
Fig. 8 is the same close-up view of the top left of block C as shown in Fig. 7. However, Fig. 8 additionally depicts, by the "X" marks within block C, the linear combinations of interpolated luma samples generated within block C. Luma samples 1 and 3 represent the previously reconstructed luma samples obtained by P_L(2x, 2y) for (x, y) = (0, 0) and (x, y) = (1, 0), respectively. Luma samples 2 and 4 represent the reconstructed luma samples obtained by P_L(2x, 2y+1) for (x, y) = (0, 0) and (x, y) = (1, 0), respectively. The black squares found in block A to the left of the current block C and in block B above the current block C represent previously reconstructed chroma samples, which may be used to obtain the α and β coefficients of Mathematical expression 3. The "X" marks found in block A to the left of the current block C and in block B above the current block C represent linear interpolations of previously reconstructed luma samples, which may likewise be used to obtain the α and β coefficients of Mathematical expression 3.
As mentioned above, Fig. 8 illustrates the results of obtaining the linear combinations of interpolated luma samples according to P_L*(x, y). For example, luma sample 1 and luma sample 2 are taken and combined according to P_L*(0, 0); in Fig. 8, the result is represented by the "X" mark found between luma sample 1 and luma sample 2.
Similarly, the "X" mark found between luma samples 3 and 4 represents the linear combination of interpolated luma samples generated according to P_L*(1, 0). The remaining "X" marks seen in Fig. 8 represent the linear interpolations generated by P_L*(0, 1) and P_L*(1, 1) from the remaining previously reconstructed luma samples found within the current block C. The α and β coefficients can be obtained from the neighboring blocks A and B.
Now, in order to process the actual intra chroma prediction, the linear combination P_L*(x, y) of interpolated luma samples obtained above is combined with the calculated α and β coefficients to obtain the intra chroma LM prediction P'_C(x, y) for the current chroma prediction unit. The exact calculation for the current chroma prediction unit according to the intra chroma LM prediction mode is given in Mathematical expression 2.
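An end-to-end sketch of this process, reusing the interp_luma_two_sample and derive_alpha_beta sketches given above (again an illustrative outline under assumed array conventions, not a normative implementation):

```python
import numpy as np

def lm_chroma_predict(cur_luma, neighbor_luma_interp, neighbor_chroma, chroma_w, chroma_h):
    """Sketch of intra LM chroma prediction: interpolate the reconstructed luma
    of the current block (Math. 1), derive alpha/beta from the neighboring
    blocks (Math. 3), then form the chroma prediction (Math. 2)."""
    alpha, beta = derive_alpha_beta(neighbor_luma_interp, neighbor_chroma)
    pred = np.zeros((chroma_h, chroma_w), dtype=np.float64)
    for y in range(chroma_h):
        for x in range(chroma_w):
            p_l_star = interp_luma_two_sample(cur_luma, x, y)   # Math. 1
            pred[y][x] = alpha * p_l_star + beta                # Math. 2
    return pred
```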
However, the present invention is not limited to the interpolated linear combination of luma samples described in Fig. 8. In a second embodiment for obtaining the linear combination of interpolated luma samples P_L*(x, y), two different luma samples may be used. Fig. 9 illustrates this second embodiment and depicts the same close-up view of the top-left corner of the current block C as depicted in Fig. 8. However, compared with the results of the linear combinations of interpolated luma samples shown in Fig. 8, the linear combinations of interpolated luma samples generated in Fig. 9 are shifted one unit to the right. This shift is achieved by incorporating a shift into the luma samples taken when obtaining P_L*(x, y). Thus, according to the second embodiment, the first luma sample may be taken at P_L(2x+1, 2y), and the second luma sample may be taken at P_L(2x+1, 2y+1).
Therefore, by shifting the luma samples from which the linear interpolation is taken, the second embodiment provides additional flexibility to the LM prediction mode. The positions marked by "X" within block C in Fig. 9 represent the linear combinations of interpolated luma samples generated according to the second embodiment.
According to the second embodiment, P_L*(x, y) is obtained as follows:
Mathematical expression 4 [Math. 4]
P_L*(x, y) = 0.5 * [P_L(2x+1, 2y) + P_L(2x+1, 2y+1)]
The calculation of the coefficients α and β, obtained from the linear interpolations of previously reconstructed luma samples and the previously reconstructed chroma samples of the neighboring blocks A and B, remains the same as in Mathematical expression 3. Likewise, the actual intra LM prediction P'_C for the current chroma prediction unit is still defined by Mathematical expression 2. The only difference according to the second embodiment is the result of the linear combination of interpolated luma samples P_L*(x, y), due to the shift in the luma samples taken within the current block C.
In a third embodiment for obtaining the linear combination of interpolated luma samples P_L*(x, y), four different luma samples may be taken within the current block C. Fig. 10 illustrates this third embodiment and is the same close-up view of the top-left corner of the current block C as depicted in Figs. 8 and 9. However, according to the third embodiment, four previously reconstructed luma samples are taken from block C to obtain P_L*(x, y), instead of only two as described in the first and second embodiments. These four previously reconstructed luma samples are labeled 1, 2, 3 and 4 in Fig. 10. The first luma sample 1 is obtained at P_L(2x, 2y). The second luma sample 2 is obtained at P_L(2x+1, 2y). The third luma sample 3 is obtained at P_L(2x, 2y+1). And the fourth luma sample 4 is obtained at P_L(2x+1, 2y+1). By averaging the four obtained luma samples, the linear interpolation P_L*(x, y) according to the third embodiment is obtained. The "X" mark found in the middle of luma samples 1, 2, 3 and 4 is the linear interpolation of those four previously reconstructed luma samples according to the third embodiment. The remaining "X" marks depict the results of the linear interpolations obtained from the remaining previously reconstructed luma samples of the current block C.
Therefore, P_L*(x, y) according to the third embodiment can be obtained as follows:
Mathematical expression 5 [Math. 5]
P_L*(x, y) = 0.25 * [P_L(2x, 2y) + P_L(2x+1, 2y) + P_L(2x, 2y+1) + P_L(2x+1, 2y+1)]
The calculation of the coefficients α and β, obtained from the previously reconstructed luma and chroma samples of the neighboring blocks A and B, remains the same as in Mathematical expression 3. Thus, according to the third embodiment, the intra chroma LM prediction P'_C for the current chroma prediction unit is still defined by Mathematical expression 2. The only difference lies in the result of the linear interpolation P_L*(x, y), owing to the increase in the number of luma samples taken.
It should be appreciated that the method for obtaining P_L*(x, y) is not limited to the embodiments disclosed above. The embodiments disclosed above are presented to demonstrate preferred ways of obtaining P_L*(x, y); however, those skilled in the art will understand that various other methods for obtaining a linear interpolation of the reconstructed luma samples are possible under this invention without departing from its scope and spirit.
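For completeness, the shifted two-sample interpolation of the second embodiment (Mathematical expression 4) and the four-sample average of the third embodiment (Mathematical expression 5) can be sketched under the same assumed array convention as before:

```python
def interp_luma_shifted(luma, x, y):
    # Mathematical expression 4: two-sample interpolation shifted one luma column to the right.
    return 0.5 * (luma[2 * y][2 * x + 1] + luma[2 * y + 1][2 * x + 1])

def interp_luma_four_sample(luma, x, y):
    # Mathematical expression 5: average of the 2x2 group of reconstructed luma samples.
    return 0.25 * (luma[2 * y][2 * x] + luma[2 * y][2 * x + 1]
                   + luma[2 * y + 1][2 * x] + luma[2 * y + 1][2 * x + 1])
```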
Inter chroma prediction mode
Previously, the chroma samples of an inter predicted block did not have a prediction mode of their own; rather, the chroma samples of an inter predicted block were simply prediction processed by following the prediction mode of the corresponding luma samples within the same prediction unit. However, according to the present invention, the LM prediction mode described above for intra chroma prediction may also be used for inter chroma prediction.
Inter prediction also involves prediction processing the luma samples before processing the chroma samples of any given prediction unit. This means that previously reconstructed luma samples are available when the current chroma prediction unit is being predicted. Therefore, the basic processing for inter chroma LM prediction is the same as the processing for intra chroma LM prediction described above. The only difference is that, for inter prediction, the luma samples belonging to the same prediction unit as the current chroma prediction unit are reconstructed by referring to a reference picture, where the reference picture is different from the current picture that contains the current chroma prediction unit.
Therefore, after obtaining the reconstructed luma samples belonging to the same prediction unit, the chroma prediction unit can be prediction processed with the inter LM prediction mode, even though the methods of reconstructing the luma samples differ between the intra prediction mode and the inter prediction mode. The reconstructed luma samples included in the same prediction unit can then be taken to form a linear combination, which is interpolated for the inter chroma LM prediction process. Similarly, the α and β coefficients seen in Mathematical expression 3 can be obtained by using motion vector compensation to process the reconstructed luma and chroma samples of the prediction units neighboring the current chroma prediction unit. Linear interpolation may also be applied to the neighboring luma samples in order to obtain the α and β coefficients.
Once the linear combination of previously reconstructed luma samples has been interpolated, and the luma and chroma samples necessary for calculating the α and β coefficients seen in Mathematical expression 3 have been obtained, the inter chroma LM prediction mode can process the current chroma prediction unit according to Mathematical expression 2.
Signaling of the intra chroma LM prediction mode
A transmitted video signal will include the coded video data arranged in blocks, where each block contains luma samples and chroma samples predicted from the original video source. In addition, various information data relating to the characteristics of the video data are included in the video signal along with the actual video data. This information data may include, among other possible information, block size information, command flags, and block type information. Among the various information data included in the transmitted signal is prediction mode information for each coded block of video data. This prediction mode information indicates which of the available prediction modes was used to code the video data. The prediction mode information is sent together with the coded video data so that a decoder receiving the coded video data knows which prediction mode was used for prediction processing at the coding unit, and the same prediction mode can be used at the decoding unit to accurately reconstruct the video data.
In any prediction processing for block reconstruction at a decoder, the luma samples are prediction-processed first. For example, a 2 × 2 pixel block under 4:2:0 sampling will have four luma samples and one set of corresponding chroma samples. In this case, the four luma samples are prediction-processed for reconstruction before the corresponding set of chroma samples.
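The following small sketch simply restates that 4:2:0 relationship: each chroma position corresponds to a 2 × 2 group of luma positions, which is why the four luma samples are reconstructed before the chroma samples. The function name and indexing convention are illustrative.

# Illustrative 4:2:0 relationship: one chroma sample at (x, y) corresponds to the 2x2
# group of luma samples at (2x, 2y)..(2x+1, 2y+1).

def colocated_luma_positions(chroma_x, chroma_y):
    return [(2 * chroma_x + dx, 2 * chroma_y + dy) for dy in (0, 1) for dx in (0, 1)]

print(colocated_luma_positions(0, 0))   # [(0, 0), (1, 0), (0, 1), (1, 1)]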
In a preferred embodiment, there are thirty-four (34) available intra prediction modes when each of the angular modes (3...33) is included. Figure 11(a) depicts the 34 intra prediction modes according to this preferred embodiment. As seen from Figure 11(a), the vertical, horizontal, and DC prediction modes are available for intra luma prediction. The same angular prediction modes seen in Figure 11(a) are available for both intra luma prediction and intra chroma prediction. When the corresponding intra luma prediction mode is an angular prediction mode, the current chroma prediction unit can be processed with the intra chroma angular prediction mode by designating the DM prediction mode as the intra chroma prediction mode. The processing for the vertical, horizontal, and DC prediction modes is described above with reference to Figs. 4(a)-(c). The DM prediction mode exists only for intra chroma prediction.
A more detailed explanation of the angular prediction modes is now provided. The intra angular prediction modes corresponding to the intra prediction mode information values "3-33" predict the current prediction unit based on previously predicted samples that neighbor the current prediction unit at a certain angle, the angle corresponding to the one depicted in Fig. 12. As shown in Fig. 12, each value between "3-33" for the intra luma prediction mode in fact corresponds to a different angular prediction mode. Fig. 12 also provides an exemplary depiction of the intra DC prediction mode, which has the corresponding intra prediction mode information value "2".
Although the preferred embodiment of intra luma prediction modes depicted in Figure 11(a) does not include a separate intra luma LM prediction mode, an embodiment may include such an LM prediction mode for the intra prediction processing of luma samples. Figure 11(b) depicts an embodiment that includes an intra luma LM prediction mode. For ease of explanation, the LM intra prediction mode is given the value "34" in the table depicted in Figure 11(b). This allows the 34 intra luma prediction modes of the preferred embodiment depicted in Figure 11(a) to keep their original values in the embodiment depicted in Figure 11(b). However, the value assigned to the intra luma LM prediction mode is not limited to the "34" depicted in Figure 11(b).
For example, another embodiment may assign the value "3" to the intra luma LM prediction mode. In this embodiment, the vertical prediction mode is assigned the value "0", the horizontal prediction mode is assigned the value "1", the DC prediction mode is assigned the value "2", the LM mode is assigned the value "3", and the angular prediction modes are assigned the remaining values "4..33". Table 3 below details the table identifying this third embodiment.
Table 3
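Since the body of Table 3 is not reproduced here, the following sketch simply restates, for illustration, the numbering described in the preceding paragraph for this third embodiment; the dictionary form and mode labels are not drawn from the table itself.

# Illustrative restatement of the third embodiment's intra luma mode numbering:
# vertical = 0, horizontal = 1, DC = 2, LM = 3, angular modes take the remaining 4..33.

INTRA_LUMA_MODES_EMBODIMENT_3 = {
    0: "Intra_Vertical",
    1: "Intra_Horizontal",
    2: "Intra_DC",
    3: "Intra_LM",
}
INTRA_LUMA_MODES_EMBODIMENT_3.update({v: f"Intra_Angular_{v}" for v in range(4, 34)})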
Although only three embodiments of the possible intra prediction modes for luma samples have been explained herein, varying the prediction mode values assigned to the available prediction modes beyond the intra luma mode numberings disclosed above is within the scope of the present invention. As long as each of the available prediction modes can be clearly distinguished and is not confused with another prediction mode, any value may be assigned to each of the available prediction modes for intra luma prediction processing. The same applies to the numbering of the intra chroma prediction mode values.
Figure 13 depicts a representation of the prediction circuitry for determining the appropriate intra prediction mode in a video receiving unit. Intra prediction mode represents the intra prediction mode information assigned by the coding unit prior to transmission for identifying the appropriate intra prediction mode for processing a luma prediction unit. Intra_chroma_pred_mode represents the intra prediction mode information assigned by the coding unit prior to transmission for identifying the appropriate intra prediction mode for processing a chroma prediction unit. Further, the prediction unit represents the video data received by the receiving decoding unit that is to be prediction-processed according to the corresponding intra prediction mode. Although Figure 13 depicts the prediction unit being input to the selector 131 together with the intra prediction mode information, having the prediction unit data bypass the selector and be input directly to the predicting unit 132 is within the scope of the present invention.
For intra luma prediction processing, the luma prediction unit is input to the selector 131 together with the corresponding intra prediction mode information. The available intra prediction modes are disclosed in Figures 11(a) and 11(b), as explained above. Because the intra luma prediction mode information is received before the intra chroma prediction mode information, the luma prediction unit will be received directly along with its corresponding intra prediction mode information. Therefore, the selector 131 only needs to output the intra prediction mode to the predicting unit 132, where the intra prediction mode information identifies the appropriate intra prediction mode for processing the prediction of the luma prediction unit. The luma prediction unit is then processed in the predicting unit 132 directly according to the intra prediction mode identified by the intra prediction mode information. After the luma prediction unit has been prediction-processed in the predicting unit 132 according to the available intra prediction mode, the reconstructed luma prediction unit is output for display. In addition, the intra prediction mode information is fed back to the selector 131 for use when the chroma prediction unit from the same block is later prediction-processed.
Figure 17 illustrates one possible sequence of determinations that must be made by the selector 131 when determining the appropriate intra_chroma_pred_mode to output for performing prediction processing on a chroma prediction unit. The intra prediction mode information is received at step S1701, and after the luma prediction unit is processed according to the intra prediction mode at step S1702, the chroma prediction unit is input to the selector 131 together with its corresponding intra_chroma_pred_mode information at step S1703. If the intra_chroma_pred_mode information identifies the Intra_DM prediction mode, then at step S1705 the selector 131 must refer to the previously processed intra prediction mode that was fed back to the selector 131. If the intra_chroma_pred_mode does not identify the Intra_DM prediction mode, then at steps S1707, S1709, S1711, and S1713 the selector reads the intra_chroma_pred_mode information and outputs the appropriate information to the predicting unit 132 so that the corresponding intra_chroma_pred_mode can be processed. Although Figure 17 depicts the selector going through a sequence of determining, at S1706, whether the intra_chroma_pred_mode information identifies the Intra_LM prediction mode, then whether it identifies the Intra_Vertical prediction mode, whether it identifies the Intra_Horizontal prediction mode, and finally whether it identifies the Intra_DC prediction mode, the selector according to the present invention is not limited to always following this order. Any order of determinations for identifying the intra_chroma_pred_mode is within the scope of the present invention. In addition, although Figure 17 depicts the determining steps being followed in order, according to the preferred embodiment of the present invention the luma and chroma samples of the same prediction unit may be prediction-processed in parallel. In other words, not all of the luma samples actually need to be fully prediction-processed before the corresponding chroma samples of the same sampling unit are prediction-processed. As soon as the intra luma prediction mode information is received, the decoding unit can begin prediction-processing the corresponding chroma samples according to the DM prediction mode. Therefore, in operation, once the selector 131 receives the intra prediction mode information identifying the prediction mode for the luma samples, prediction processing of the chroma samples of the same prediction unit can be started in parallel.
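For illustration only, the following sketch restates that sequence of determinations as code; the DM → LM → Vertical → Horizontal → DC order follows Figure 17 as described, but, as noted above, any other order is equally possible. The mode labels and function name are assumptions.

# Illustrative sketch of the selector 131 determinations of Fig. 17.

def select_chroma_mode(intra_chroma_pred_mode, fed_back_luma_mode):
    if intra_chroma_pred_mode == "Intra_DM":
        return fed_back_luma_mode            # S1705: refer to the fed-back luma mode
    # S1706 onward (outputs at S1707, S1709, S1711, S1713): read and forward the mode
    for candidate in ("Intra_LM", "Intra_Vertical", "Intra_Horizontal", "Intra_DC"):
        if intra_chroma_pred_mode == candidate:
            return candidate
    raise ValueError("unrecognized intra_chroma_pred_mode")

# Example: a luma mode fed back from step S1702 being reused for DM chroma prediction.
print(select_chroma_mode("Intra_DM", "Intra_Vertical"))   # Intra_Vertical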
The mapping tables depicted in Figures 14(a) and 14(b) map the intra luma prediction mode information, intra prediction mode, against the intra chroma prediction mode information, intra_chroma_pred_mode, that is sent together with the coded video data. The result of assigning such intra prediction mode information for transmission is represented in the cross-referenced body of the mapping table. The resulting value from the mapping tables of Figures 14(a) and 14(b) may be referred to as intra prediction mode C. The mapping tables are read from the perspective of each instance of a received intra prediction mode value.
For example, if the previously received intra prediction mode corresponds to "0" for the Intra_Vertical prediction mode, then the values assigned to each of the available intra chroma prediction modes will be found below in the same column. Therefore, assuming the intra prediction mode identifies the Intra_Vertical prediction mode, if the selector 131 receives the value "0" for intra_chroma_pred_mode, then according to Figure 14(a) the intra prediction mode C will correspond to the value "34". Referring back to Figure 11(b), it can be seen that "34" identifies the Intra_LM prediction mode.
Still assuming that the intra prediction mode identifies the Intra_Vertical prediction mode, if the Intra_Vertical prediction mode is to be signaled for chroma prediction, the intra_chroma_pred_mode information does not need to separately send a value for the Intra_Vertical prediction mode, hence the "n/a" value. This is because the information related to the Intra_Vertical prediction mode is already known from the intra luma prediction, and the Intra_Vertical prediction mode can be invoked simply by referring to the Intra_DM prediction mode. The Intra_DM prediction mode allows the chroma prediction mode to follow the corresponding luma prediction mode. Therefore, when the intra prediction mode is the Intra_Vertical prediction mode, no separate value needs to be assigned to the Intra_Vertical prediction mode for the chroma samples, owing to the availability of the Intra_DM prediction mode. For every other prediction mode, although the intra prediction mode has the value "0", a specific value must still be assigned to specify the appropriate intra chroma prediction mode. Thus, when the intra prediction mode remains "0" and the selector 131 subsequently receives an intra_chroma_pred_mode of "3", the selector 131 will know that the Intra_DC prediction mode is being signaled. Referring back to Figure 11(b), it can be seen that the Intra_DC prediction mode corresponds to the value "2", and this is exactly what is depicted by the intra prediction mode C result of Figure 14(a).
Now, when the intra prediction mode is the Intra_Horizontal prediction mode "1", similarly, the intra_chroma_pred_mode information does not need to separately send a value for the Intra_Horizontal prediction mode, hence the "n/a" value seen in Figure 14(a). Instead, when the intra prediction mode has the value "1" and the intra_chroma_pred_mode has the value "4", the intra prediction mode C generated from Figure 14(a) is "1", which identifies the Intra_Horizontal prediction mode as shown in Figure 11(b). However, if the intra prediction mode has the value "1" and the Intra_DC prediction mode is desired for the chroma prediction, then the intra_chroma_pred_mode has the value "3", which in the mapping table of Figure 14(a) corresponds to an intra prediction mode C value of "2". As can be seen in Figure 11(b), this value "2" identifies the Intra_DC prediction mode.
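The following partial sketch reproduces only the worked examples given above for Figure 14(a); the complete table is not reproduced here. It assumes, as those examples imply, that intra_chroma_pred_mode codes 0 through 4 signal LM, vertical, horizontal, DC, and DM respectively, with DM resolving to the luma mode and the redundant entry marked "n/a".

# Partial, illustrative reconstruction of the Figure 14(a) lookup (assumed code order).

LM, VERTICAL, HORIZONTAL, DC = 34, 0, 1, 2   # mode values per Figure 11(b)

def intra_pred_mode_c(intra_chroma_pred_mode, luma_mode):
    signalled = {0: LM, 1: VERTICAL, 2: HORIZONTAL, 3: DC}
    if intra_chroma_pred_mode == 4:          # Intra_DM: follow the luma mode
        return luma_mode
    mode = signalled[intra_chroma_pred_mode]
    if mode == luma_mode:
        raise ValueError("n/a: this mode is signalled through Intra_DM instead")
    return mode

# Worked examples from the text:
assert intra_pred_mode_c(0, luma_mode=VERTICAL) == 34     # -> Intra_LM
assert intra_pred_mode_c(3, luma_mode=VERTICAL) == 2      # -> Intra_DC
assert intra_pred_mode_c(4, luma_mode=HORIZONTAL) == 1    # -> Intra_Horizontal via DM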
When the intra prediction mode indicates an intra luma angular prediction mode and the intra_chroma_pred_mode identifies the Intra_DM prediction mode, the chroma prediction unit may be prediction-processed according to the Intra_Angular prediction mode.
By utilizing the Intra_DM prediction mode, bits can effectively be saved by reducing the binary codewords corresponding to each intra prediction mode. This bit saving becomes more apparent when examining Figure 14(b) and Figure 15. Figure 14(b) is the same mapping table depicted in Figure 14(a), except that the numerical values are replaced by the binary bitstream codeword bits that are actually sent in the digital signal. For the cases where the first-sent intra prediction mode value corresponds to the Intra_Vertical, Intra_Horizontal, or Intra_DC prediction mode (0, 1, 2), the binary codeword needed to signal the appropriate intra chroma prediction mode can be shortened to a maximum of three bits. This is because the Intra_DM prediction mode allows one of the shared prediction modes to become obsolete, as represented by "n/a" in these cases. Thus, the significance of the Intra_DM prediction mode is that, for each instance where the intra prediction mode identifies one of the Intra_Vertical, Intra_Horizontal, or Intra_DC prediction modes (0, 1, 2), the "n/a" in the mapping table of Figure 14(b) indicates that the coding unit does not assign a separate codeword value to the corresponding intra chroma prediction mode. Therefore, because of the Intra_DM prediction mode, one fewer intra chroma prediction mode needs to be assigned a codeword.
From a binary codeword standpoint, this allows codewords of at most three bits, "0", "10", "110", and "111", to be assigned to the four intra chroma prediction modes that need to be distinguished when the intra prediction mode corresponds to the Intra_Vertical, Intra_Horizontal, or Intra_DC prediction mode. If the Intra_DM prediction mode were not available, then five separate intra chroma prediction modes would need to be distinguished in all cases. If five different codewords are needed, the necessary codeword length increases to a maximum of four bits. This can be seen when the intra prediction mode is an Intra_Angular prediction mode. In that case, each intra chroma prediction mode must be assigned one of the binary codewords "0", "10", "110", "1110", and "1111". In a binary bitstream it is difficult to distinguish among "1", "11", "111", and "1111", and therefore, when assigning codewords to the intra chroma prediction modes for each intra prediction mode instance, it is preferable not to use more than one of these codeword values.
Note that each instance of the intra prediction mode may assign its own binary codewords to the corresponding intra chroma prediction modes. This is why, for the case where the intra prediction mode is "0", the codeword value signaling the Intra_DC prediction mode for the chroma samples may be "110", while for the case where the intra prediction mode is "2", the codeword value signaling the Intra_Horizontal prediction mode for the chroma samples may also be "110". Each instance of the intra prediction mode may assign its own codeword values to the available intra chroma prediction modes. According to the preferred embodiment of the present invention, the Intra_DM prediction mode is assigned the binary codeword "0" of the shortest binary bit length. Similarly, according to the preferred embodiment of the present invention, the Intra_LM prediction mode is assigned the binary codeword "10" of the second shortest binary bit length.
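The following sketch restates those codeword assignments for illustration. DM taking "0" and LM taking "10" follows the preferred embodiment described above; the remaining codewords shown are assumptions consistent with the prose (four modes at a maximum of three bits, five modes at a maximum of four bits), not values read from Figure 15.

# Illustrative codeword assignment following the description of Figures 14(b)/15.

def chroma_mode_codewords(luma_mode_is_angular):
    if luma_mode_is_angular:
        # All five chroma modes must be distinguished: maximum length four bits.
        return {"Intra_DM": "0", "Intra_LM": "10", "Intra_Vertical": "110",
                "Intra_Horizontal": "1110", "Intra_DC": "1111"}
    # One of vertical/horizontal/DC is covered by Intra_DM ("n/a"): maximum three bits.
    return {"Intra_DM": "0", "Intra_LM": "10",
            "remaining_mode_1": "110", "remaining_mode_2": "111"}

print(max(len(cw) for cw in chroma_mode_codewords(True).values()))   # 4
print(max(len(cw) for cw in chroma_mode_codewords(False).values()))  # 3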
However, assigning any one of the codewords shown in Figure 15 to any one of the available prediction modes in an alternative embodiment of the invention is within the scope of the present invention.
For example, if it is determined that the intra chroma samples will later come to be predicted mostly according to the Intra_DC mode, then the Intra_DC prediction mode may be assigned the codeword "0" to save the number of codeword bits that need to be sent. Adaptively modifying the prediction mode codeword assignments during transmission or reception of the video signal is also within the scope of the invention. For example, if it is determined that a certain video sequence requires a large amount of Intra_LM mode prediction processing, then for that video sequence the Intra_LM prediction mode may be assigned the codeword with the fewest bits, "0". If another, later video sequence within the video signal is found to require a large amount of Intra_Horizontal prediction mode processing, then the Intra_Horizontal prediction mode may be assigned the codeword with the fewest bits, "0". Thus, in an effort to save the total number of bits that need to be transmitted, adaptively assigning the binary codeword values used to identify the prediction modes is within the scope of the present invention.
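One possible form of such adaptive assignment is sketched below for illustration: the most frequently used chroma prediction modes are periodically given the shortest codewords. The counting interval, codeword pool, and usage figures are assumptions, not part of the described embodiments.

# Illustrative sketch of adaptive codeword assignment by mode usage frequency.
from collections import Counter

CODEWORD_POOL = ["0", "10", "110", "1110", "1111"]   # shortest first

def reassign_codewords(mode_usage_counts):
    """mode_usage_counts: Counter of chroma prediction mode usage so far."""
    ranked = [mode for mode, _ in mode_usage_counts.most_common()]
    return dict(zip(ranked, CODEWORD_POOL))

usage = Counter({"Intra_LM": 120, "Intra_DM": 85, "Intra_DC": 20,
                 "Intra_Vertical": 12, "Intra_Horizontal": 3})
print(reassign_codewords(usage)["Intra_LM"])   # '0' -> fewest bits for the common mode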
In an embodiment where the Intra_LM prediction mode is available for the prediction processing of luma samples, the mapping table may be described as shown below in Table 4. According to Table 4, the Intra_LM prediction mode corresponds to the intra prediction mode having the value "3", and the values for intra_chroma_pred_mode remain the same as seen in Figure 14(a). As mentioned with respect to Figure 11(b), the value assigned to the intra prediction mode for identifying the Intra_LM prediction mode for luma samples may be adaptively modified. Therefore, the composition of Table 4 may also change according to the value of the intra prediction mode assigned to identify the Intra_LM prediction mode for luma samples.
Table 4
The steps for determining, at the encoder, which binary codeword value to send for the intra chroma prediction mode according to the preferred embodiment are outlined in Figure 16.
Figure 18 depicts an embodiment for a decoding unit to identify when the Intra_LM prediction mode has been selected as the appropriate intra chroma prediction mode. The embodiment illustrated in Figure 18 operates in substantially the same manner as the preferred embodiment illustrated in Fig. 17. However, in Figure 18 the embodiment expects to receive an Intra_LM prediction mode flag as part of the transmitted prediction mode information. If for some reason the Intra_LM prediction mode flag is missing from the received prediction mode information, step S1806 provides for handling the prediction processing according to the preferred embodiment outlined in Fig. 17. If the Intra_LM prediction mode flag is received, it is determined whether the Intra_LM prediction mode flag has a first value (for example, the value "1") indicating that the Intra_LM prediction mode is to be processed (S1804). If the Intra_LM prediction mode flag does have the first value, the current chroma prediction unit will automatically undergo Intra_LM prediction mode processing (S1805). If the Intra_LM prediction mode flag has a second value, indicating that the Intra_LM prediction mode is not to be processed, then the current intra chroma prediction unit is handled according to the preferred embodiment outlined in Fig. 17, starting from S1806.
Figure 19 depicts another embodiment for signaling that the LM prediction mode is to be used by the decoding unit to process a chroma prediction unit. After most_probable_chroma_mode_flag is received (S1901), if most_probable_chroma_mode_flag has a first value (for example, "1"), then chroma_prediction_mode will follow the mode currently identified as most_probable_chroma_mode. If A and B are available, most_probable_chroma_mode is defined as min(chroma_prediction_mode_A, chroma_prediction_mode_B) (S1904). A and B refer to blocks neighboring the current block C that contains the current chroma prediction unit requiring prediction processing. Block A and block B are assumed to have been reconstructed previously, and can therefore be used to determine which prediction modes were used to reconstruct them. The min(A, B) function thus compares the values of the prediction modes used to reconstruct block A and block B, where the actual numerical values may be obtained according to the values found in either Table 1 or Table 2. As an alternative, a max(A, B) function may be applied instead of the min(A, B) function. In yet another alternative, a plurality of blocks (A, B, C, and so on) may be taken and min(), max(), or any other applicable function applied.
If the neighboring blocks A and B are not available, chroma_prediction_mode is automatically designated as the Intra_DC prediction mode (S1904). A neighboring block is considered unavailable when it is not in the same slice as the current block C, or when it is not an intra-predicted block. At the end of the processing in S1904, whichever current value has been determined for chroma_prediction_mode will be sent to identify the intra chroma prediction mode used for prediction-processing the current chroma samples.
If most_probable_chroma_mode_flag has a second value, then chroma_prediction_mode is compared with most_probable_chroma_mode (S1903). If chroma_prediction_mode has a value less than most_probable_chroma_mode, the current value of chroma_prediction_mode is sent for prediction processing (S1905). If chroma_prediction_mode has a value not less than most_probable_chroma_mode, then an intra chroma prediction mode corresponding to an identification value smaller than chroma_prediction_mode will be sent for prediction processing. Alternatively, if chroma_prediction_mode has a value not less than most_probable_chroma_mode, an intra chroma prediction mode corresponding to an identification value greater than chroma_prediction_mode may be used for prediction processing. In yet another alternative, if chroma_prediction_mode has a value not less than most_probable_chroma_mode, any one of the available intra chroma prediction modes may be set as the default mode for prediction processing; for example, the Intra_DC mode may be used in this alternative.
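For illustration, the following sketch follows the Figure 19 decision flow described above. Mode identifiers are treated as the integer values of Table 1 or Table 2, which are not reproduced here; the Intra_DC identifier, the treatment of unavailable neighbors, and the choice of "a smaller identification value" in the final branch are assumptions consistent with the first alternative described above.

# Illustrative sketch of the Figure 19 signaling decision (assumed identifiers).

INTRA_DC = 2   # assumed identifier for the Intra_DC fallback

def derive_most_probable_chroma_mode(mode_a, mode_b):
    """mode_a / mode_b: chroma prediction modes of neighbors A and B, or None when a
    neighbor is outside the current slice or not intra-coded (i.e., unavailable)."""
    if mode_a is None or mode_b is None:
        return INTRA_DC                          # S1904 fallback to Intra_DC
    return min(mode_a, mode_b)                   # max() or another function also possible

def mode_to_signal(most_probable_flag, chroma_prediction_mode, mode_a, mode_b):
    most_probable = derive_most_probable_chroma_mode(mode_a, mode_b)
    if most_probable_flag == 1:                  # S1901/S1904: follow the most probable mode
        return most_probable
    if chroma_prediction_mode < most_probable:   # S1903/S1905: send the current value
        return chroma_prediction_mode
    return chroma_prediction_mode - 1            # an identification value smaller than it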
Although the scenarios described so far apply to intra chroma prediction processing generally, the present invention also contemplates making the Intra_LM prediction mode available only for chroma transform units of a particular size. The chroma transform unit is determined during the coding process and refers to the size of the chroma samples that will be transformed (that is, during the DCT transform process). Therefore, when preparing the information for transmission, the chroma transform unit size must be determined by the coding unit before the prediction mode type is assigned. For example, when the chroma transform unit size is greater than 8 (that is, 8 × 8 pixels), the Intra_LM prediction mode is not available for chroma prediction. In this case, the size of the chroma transform unit must be determined first, before the prediction mode type is assigned. Likewise, when receiving the video data transmission, the decoding unit will receive and read the chroma transform unit size information before reading the prediction mode type. This ensures that the decoding unit will recognize when the Intra_LM prediction mode is unavailable.
If the availability of Intra_LM is set according to the size of the chroma transform unit, it follows that the information identifying the chroma transform unit size must be sent before the information representing the prediction mode type is transmitted. Figures 20 and 21 provide illustrations of how, on the decoding unit side, the transform unit size will be identified before the intra prediction mode type is identified. Because the transmitted signal carries the transform unit size information before the prediction mode type information, it is guaranteed that the decoding unit will parse the transform unit size information before parsing the prediction mode type information.
Although a transform unit size of 8 is specifically mentioned, selecting a different transform size for determining when to cut off the availability of the Intra_LM prediction mode for chroma prediction processing is within the scope of the present invention. If the Intra_LM prediction mode is unavailable, an alternative prediction mode may be assigned, such as the Intra_Vertical8 prediction mode. Assigning another available intra prediction mode in place of the Intra_Vertical8 prediction mode is also within the scope of the present invention.
In addition, according to the preferred embodiment, the chroma transform unit size is set to automatically equal the corresponding luma transform unit size. Therefore, according to the preferred embodiment seen in Figures 20 and 21, the transform unit size identifying the luma transform unit size will be used as the chroma transform unit size. However, having the chroma transform unit size determined independently of the luma transform unit size is also within the scope of the present invention. In that alternative, the information transmission will include chroma transform unit size information, and the chroma transform unit size information will be sent and parsed by the decoding unit before the prediction mode type is parsed.
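The following sketch illustrates that availability check as described above: Intra_LM is allowed only when the chroma transform unit size does not exceed the cut-off (8 in the example above), the chroma transform unit size defaults to the luma transform unit size per the preferred embodiment, and the fallback is the Intra_Vertical8 mode mentioned above. The function name and the exact cut-off are illustrative.

# Illustrative availability check for the Intra_LM mode by chroma transform unit size.

LM_MAX_CHROMA_TRANSFORM_SIZE = 8

def resolve_chroma_mode(signalled_mode, luma_transform_size, chroma_transform_size=None):
    if chroma_transform_size is None:
        chroma_transform_size = luma_transform_size   # preferred embodiment
    if signalled_mode == "Intra_LM" and chroma_transform_size > LM_MAX_CHROMA_TRANSFORM_SIZE:
        return "Intra_Vertical8"                       # Intra_LM unavailable at this size
    return signalled_mode

print(resolve_chroma_mode("Intra_LM", luma_transform_size=16))   # Intra_Vertical8
print(resolve_chroma_mode("Intra_LM", luma_transform_size=8))    # Intra_LM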
Although the present invention has been described and illustrated herein with reference to its preferred embodiments, it will be apparent to those skilled in the art that various modifications and variations can be made therein without departing from the spirit and scope of the invention. Thus, it is intended that the present invention cover the modifications and variations of this invention that come within the scope of the appended claims and their equivalents.

Claims (8)

1. A method of decoding a video signal with a decoding apparatus, the method comprising:
receiving, with the decoding apparatus, the video signal, the video signal including a current block composed of luma samples and chroma samples;
obtaining, with the decoding apparatus, a prediction type of the current block, the prediction type specifying whether the current block is coded in an inter mode or an intra mode;
when the prediction type specifies that the current block is coded in the intra mode, obtaining, with the decoding apparatus, intra luma prediction mode information of the current block from the video signal, the intra luma prediction mode information specifying an intra prediction mode for the luma samples;
deriving, with the decoding apparatus, the intra prediction mode for the luma samples based on the obtained intra luma prediction mode information of the current block;
decoding, with the decoding apparatus, the luma samples of the current block in the intra mode based on the intra luma prediction mode information for the luma samples of the current block;
obtaining, with the decoding apparatus, intra chroma prediction mode information of the current block from the video signal, the intra chroma prediction mode information specifying an intra prediction mode for the chroma samples;
when the intra chroma prediction mode information specifies that the intra prediction mode for the luma samples is used as the intra prediction mode for the chroma samples, predicting, with the decoding apparatus, the chroma samples of the current block based on the derived intra prediction mode for the luma samples; and
decoding, with the decoding apparatus, the chroma samples of the current block using the predicted chroma samples.
2. The method according to claim 1, wherein the intra prediction modes for the luma samples include a DC prediction mode and angular prediction modes, the DC prediction mode predicting the current block using an average value of reconstructed neighboring blocks above and to the left of the current block, and the angular prediction modes predicting the current block based on angles relative to previously predicted samples adjacent to the current block.
3. The method according to claim 1, wherein the current block is a block generated by partitioning of a tree block, the tree block being a block generated by partitioning of a slice.
4. The method according to claim 3, wherein the tree block is a basic unit for processing the video signal.
5. An apparatus for decoding a video signal, the apparatus comprising:
a receiving unit configured to receive the video signal, the video signal including a current block composed of luma samples and chroma samples; and
a decoding unit configured to: obtain a prediction type of the current block, the prediction type specifying whether the current block is coded in an inter mode or an intra mode; when the prediction type specifies that the current block is coded in the intra mode, obtain intra luma prediction mode information of the current block from the video signal, the intra luma prediction mode information specifying an intra prediction mode for the luma samples; derive the intra prediction mode for the luma samples based on the obtained intra luma prediction mode information of the current block; decode the luma samples of the current block in the intra mode based on the intra luma prediction mode information for the luma samples of the current block; obtain intra chroma prediction mode information of the current block from the video signal, the intra chroma prediction mode information specifying an intra prediction mode for the chroma samples; when the intra chroma prediction mode information specifies that the intra prediction mode for the luma samples is used as the intra prediction mode for the chroma samples, predict the chroma samples of the current block based on the derived intra prediction mode for the luma samples; and decode the chroma samples of the current block using the predicted chroma samples.
6. The apparatus according to claim 5, wherein the intra prediction modes for the luma samples include a DC prediction mode and angular prediction modes, the DC prediction mode predicting the current block using an average value of reconstructed neighboring blocks above and to the left of the current block, and the angular prediction modes predicting the current block based on angles relative to previously predicted samples adjacent to the current block.
7. The apparatus according to claim 5, wherein the current block is a block generated by partitioning of a tree block, the tree block being a block generated by partitioning of a slice.
8. The apparatus according to claim 7, wherein the tree block is a basic unit for processing the video signal.
CN201180018316.8A 2010-04-09 2011-04-09 The method and apparatus of processing video data Active CN103141103B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201511018903.3A CN105635737B (en) 2010-04-09 2011-04-09 The method and apparatus for handling video data
CN201511018935.3A CN105611304B (en) 2010-04-09 2011-04-09 The method and apparatus for handling video data
CN201511009169.4A CN105472386B (en) 2010-04-09 2011-04-09 The method and apparatus for handling video data
CN201511010036.9A CN105472387B (en) 2010-04-09 2011-04-09 The method and apparatus for handling video data

Applications Claiming Priority (13)

Application Number Priority Date Filing Date Title
US32229310P 2010-04-09 2010-04-09
US32229210P 2010-04-09 2010-04-09
US61/322,292 2010-04-09
US61/322,293 2010-04-09
US201161453955P 2011-03-17 2011-03-17
US61/453,955 2011-03-17
US201161453981P 2011-03-18 2011-03-18
US61/453,981 2011-03-18
US201161454565P 2011-03-20 2011-03-20
US61/454,565 2011-03-20
US201161454586P 2011-03-21 2011-03-21
US61/454,586 2011-03-21
PCT/KR2011/002508 WO2011126348A2 (en) 2010-04-09 2011-04-09 Method and apparatus for processing video data

Related Child Applications (4)

Application Number Title Priority Date Filing Date
CN201511018903.3A Division CN105635737B (en) 2010-04-09 2011-04-09 The method and apparatus for handling video data
CN201511018935.3A Division CN105611304B (en) 2010-04-09 2011-04-09 The method and apparatus for handling video data
CN201511009169.4A Division CN105472386B (en) 2010-04-09 2011-04-09 The method and apparatus for handling video data
CN201511010036.9A Division CN105472387B (en) 2010-04-09 2011-04-09 The method and apparatus for handling video data

Publications (2)

Publication Number Publication Date
CN103141103A CN103141103A (en) 2013-06-05
CN103141103B true CN103141103B (en) 2016-02-03

Family

ID=44763437

Family Applications (5)

Application Number Title Priority Date Filing Date
CN201511018903.3A Active CN105635737B (en) 2010-04-09 2011-04-09 The method and apparatus for handling video data
CN201511018935.3A Active CN105611304B (en) 2010-04-09 2011-04-09 The method and apparatus for handling video data
CN201511010036.9A Active CN105472387B (en) 2010-04-09 2011-04-09 The method and apparatus for handling video data
CN201511009169.4A Active CN105472386B (en) 2010-04-09 2011-04-09 The method and apparatus for handling video data
CN201180018316.8A Active CN103141103B (en) 2010-04-09 2011-04-09 The method and apparatus of processing video data

Family Applications Before (4)

Application Number Title Priority Date Filing Date
CN201511018903.3A Active CN105635737B (en) 2010-04-09 2011-04-09 The method and apparatus for handling video data
CN201511018935.3A Active CN105611304B (en) 2010-04-09 2011-04-09 The method and apparatus for handling video data
CN201511010036.9A Active CN105472387B (en) 2010-04-09 2011-04-09 The method and apparatus for handling video data
CN201511009169.4A Active CN105472386B (en) 2010-04-09 2011-04-09 The method and apparatus for handling video data

Country Status (5)

Country Link
US (8) US8861594B2 (en)
EP (1) EP2387242A3 (en)
KR (6) KR102268821B1 (en)
CN (5) CN105635737B (en)
WO (1) WO2011126348A2 (en)

Families Citing this family (93)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105635737B (en) * 2010-04-09 2019-03-15 Lg电子株式会社 The method and apparatus for handling video data
US20110317757A1 (en) * 2010-06-25 2011-12-29 Qualcomm Incorporated Intra prediction mode signaling for finer spatial prediction directions
AU2011308105A1 (en) * 2010-10-01 2013-05-02 Samsung Electronics Co., Ltd. Image intra prediction method and apparatus
US9025661B2 (en) 2010-10-01 2015-05-05 Qualcomm Incorporated Indicating intra-prediction mode selection for video coding
US10992958B2 (en) 2010-12-29 2021-04-27 Qualcomm Incorporated Video coding using mapped transforms and scanning modes
US8913662B2 (en) * 2011-01-06 2014-12-16 Qualcomm Incorporated Indicating intra-prediction mode selection for video coding using CABAC
US9232227B2 (en) 2011-01-14 2016-01-05 Sony Corporation Codeword space reduction for intra chroma mode signaling for HEVC
US20120183064A1 (en) * 2011-01-14 2012-07-19 Sony Corporation Codeword assignment for intra chroma mode signaling for hevc
KR101953384B1 (en) 2011-03-06 2019-03-04 엘지전자 주식회사 Intra prediction method of chrominance block using luminance sample, and apparatus using same
US9848197B2 (en) 2011-03-10 2017-12-19 Qualcomm Incorporated Transforms in video coding
US9288500B2 (en) 2011-05-12 2016-03-15 Texas Instruments Incorporated Luma-based chroma intra-prediction for video coding
US9654785B2 (en) 2011-06-09 2017-05-16 Qualcomm Incorporated Enhanced intra-prediction mode signaling for video coding using neighboring mode
CN106412585A (en) * 2011-06-17 2017-02-15 联发科技股份有限公司 Method of internal prediction mode coding
KR101668583B1 (en) 2011-06-23 2016-10-21 가부시키가이샤 제이브이씨 켄우드 Image encoding device, image encoding method and image encoding program, and image decoding device, image decoding method and image decoding program
US9693070B2 (en) 2011-06-24 2017-06-27 Texas Instruments Incorporated Luma-based chroma intra-prediction for video coding
TW201309036A (en) 2011-06-28 2013-02-16 Samsung Electronics Co Ltd Method and apparatus for predicting chrominance component image using luminance component image
US8724711B2 (en) * 2011-07-12 2014-05-13 Intel Corporation Luma-based chroma intra prediction
US20130016769A1 (en) 2011-07-17 2013-01-17 Qualcomm Incorporated Signaling picture size in video coding
US9948938B2 (en) * 2011-07-21 2018-04-17 Texas Instruments Incorporated Methods and systems for chroma residual data prediction
US9787982B2 (en) 2011-09-12 2017-10-10 Qualcomm Incorporated Non-square transform units and prediction units in video coding
US20130083846A1 (en) * 2011-09-29 2013-04-04 JVC Kenwood Corporation Image encoding apparatus, image encoding method, image encoding program, image decoding apparatus, image decoding method, and image decoding program
US9699457B2 (en) 2011-10-11 2017-07-04 Qualcomm Incorporated Most probable transform for intra prediction coding
CN104378639B (en) 2011-10-19 2018-05-04 株式会社Kt The method of decoding video signal
US8811760B2 (en) * 2011-10-25 2014-08-19 Mitsubishi Electric Research Laboratories, Inc. Coding images using intra prediction modes
GB2495941B (en) * 2011-10-25 2015-07-08 Canon Kk Method and apparatus for processing components of an image
EP2775714A4 (en) * 2011-11-02 2016-06-22 Nec Corp Video encoding device, video decoding device, video encoding method, video decoding method, and program
US9154796B2 (en) 2011-11-04 2015-10-06 Qualcomm Incorporated Intra-mode video coding
CN103096055B (en) * 2011-11-04 2016-03-30 华为技术有限公司 The method and apparatus of a kind of image signal intra-frame prediction and decoding
CN103096051B (en) * 2011-11-04 2017-04-12 华为技术有限公司 Image block signal component sampling point intra-frame decoding method and device thereof
JP2014534746A (en) * 2011-11-07 2014-12-18 インテル コーポレイション Cross channel residual prediction
CN103096057B (en) * 2011-11-08 2016-06-29 华为技术有限公司 A kind of chroma intra prediction method and apparatus
US9307237B2 (en) 2012-01-19 2016-04-05 Futurewei Technologies, Inc. Reference pixel reduction for intra LM prediction
US9438904B2 (en) * 2012-01-19 2016-09-06 Futurewei Technologies, Inc. Reduced look-up table for LM mode calculation
GB2498550B (en) * 2012-01-19 2016-02-24 Canon Kk Method and device for processing components of an image for encoding or decoding
CN104093026B (en) * 2012-01-20 2018-04-10 华为技术有限公司 Decoding method and device
CN104093024B (en) * 2012-01-20 2017-08-04 华为技术有限公司 Decoding method and device
AU2014277750B2 (en) * 2012-01-20 2016-01-14 Huawei Technologies Co., Ltd. Encoding or decoding method and apparatus
CN103220508B (en) * 2012-01-20 2014-06-11 华为技术有限公司 Coding and decoding method and device
WO2013112739A1 (en) * 2012-01-24 2013-08-01 Futurewei Technologies, Inc. Simplification of lm mode
CN103260018B (en) * 2012-02-16 2017-09-22 乐金电子(中国)研究开发中心有限公司 Intra-frame image prediction decoding method and Video Codec
WO2013128010A2 (en) * 2012-03-02 2013-09-06 Canon Kabushiki Kaisha Method and devices for encoding a sequence of images into a scalable video bit-stream, and decoding a corresponding scalable video bit-stream
JPWO2013150838A1 (en) * 2012-04-05 2015-12-17 ソニー株式会社 Image processing apparatus and image processing method
EP2837186B1 (en) * 2012-04-12 2018-08-22 HFI Innovation Inc. Method and apparatus for block partition of chroma subsampling formats
US9438905B2 (en) 2012-04-12 2016-09-06 Futurewei Technologies, Inc. LM mode with uniform bit-width multipliers
CN104471940B (en) * 2012-04-16 2017-12-15 联发科技(新加坡)私人有限公司 Chroma intra prediction method and device
WO2013155662A1 (en) * 2012-04-16 2013-10-24 Mediatek Singapore Pte. Ltd. Methods and apparatuses of simplification for intra chroma lm mode
CN103379321B (en) * 2012-04-16 2017-02-01 华为技术有限公司 Prediction method and prediction device for video image component
JPWO2013164922A1 (en) * 2012-05-02 2015-12-24 ソニー株式会社 Image processing apparatus and image processing method
WO2014007514A1 (en) * 2012-07-02 2014-01-09 엘지전자 주식회사 Method for decoding image and apparatus using same
US9300964B2 (en) * 2013-01-24 2016-03-29 Sharp Kabushiki Kaisha Image decoding apparatus and image coding apparatus
US10178408B2 (en) * 2013-07-19 2019-01-08 Nec Corporation Video coding device, video decoding device, video coding method, video decoding method, and program
AU2013403224B2 (en) 2013-10-14 2018-10-18 Microsoft Technology Licensing, Llc Features of intra block copy prediction mode for video and image coding and decoding
JP6336058B2 (en) 2013-10-14 2018-06-06 マイクロソフト テクノロジー ライセンシング,エルエルシー Features of base color index map mode for video and image encoding and decoding
EP3058736B1 (en) 2013-10-14 2019-02-27 Microsoft Technology Licensing, LLC Encoder-side options for intra block copy prediction mode for video and image coding
KR102258427B1 (en) 2014-01-03 2021-06-01 마이크로소프트 테크놀로지 라이센싱, 엘엘씨 Block vector prediction in video and image coding/decoding
US10390034B2 (en) 2014-01-03 2019-08-20 Microsoft Technology Licensing, Llc Innovations in block vector prediction and estimation of reconstructed sample values within an overlap area
US11284103B2 (en) 2014-01-17 2022-03-22 Microsoft Technology Licensing, Llc Intra block copy prediction with asymmetric partitions and encoder-side search patterns, search ranges and approaches to partitioning
US10542274B2 (en) 2014-02-21 2020-01-21 Microsoft Technology Licensing, Llc Dictionary encoding and decoding of screen content
EP4354856A2 (en) 2014-06-19 2024-04-17 Microsoft Technology Licensing, LLC Unified intra block copy and inter prediction modes
RU2679201C2 (en) 2014-09-30 2019-02-06 МАЙКРОСОФТ ТЕКНОЛОДЖИ ЛАЙСЕНСИНГ, ЭлЭлСи Rules for intra-picture prediction modes when wavefront parallel processing is enabled
US10362305B2 (en) * 2015-03-27 2019-07-23 Sony Corporation Image processing device, image processing method, and recording medium
WO2016154963A1 (en) * 2015-04-01 2016-10-06 Mediatek Inc. Methods for chroma coding in video codec
EP3308540B1 (en) 2015-06-09 2020-04-15 Microsoft Technology Licensing, LLC Robust encoding/decoding of escape-coded pixels in palette mode
US11463689B2 (en) 2015-06-18 2022-10-04 Qualcomm Incorporated Intra prediction and intra mode coding
WO2016205999A1 (en) * 2015-06-23 2016-12-29 Mediatek Singapore Pte. Ltd. Adaptive coding group for image/video coding
WO2017139937A1 (en) * 2016-02-18 2017-08-24 Mediatek Singapore Pte. Ltd. Advanced linear model prediction for chroma coding
WO2017143467A1 (en) * 2016-02-22 2017-08-31 Mediatek Singapore Pte. Ltd. Localized luma mode prediction inheritance for chroma coding
ES2800551B2 (en) 2016-06-24 2023-02-09 Kt Corp Method and apparatus for processing a video signal
US11277604B2 (en) * 2016-07-14 2022-03-15 Samsung Electronics Co., Ltd. Chroma intra prediction method and device therefor
US10750169B2 (en) * 2016-10-07 2020-08-18 Mediatek Inc. Method and apparatus for intra chroma coding in image and video coding
US10555006B2 (en) 2016-12-22 2020-02-04 Qualcomm Incorporated Deriving bilateral filter information based on a prediction mode in video coding
US11025903B2 (en) * 2017-01-13 2021-06-01 Qualcomm Incorporated Coding video data using derived chroma mode
US11272202B2 (en) * 2017-01-31 2022-03-08 Sharp Kabushiki Kaisha Systems and methods for scaling transform coefficient level values
CN107454469B (en) * 2017-07-21 2019-11-22 北京奇艺世纪科技有限公司 A kind of method of video image processing and device
US10986349B2 (en) 2017-12-29 2021-04-20 Microsoft Technology Licensing, Llc Constraints on locations of reference blocks for intra block copy prediction
US11190790B2 (en) 2018-04-01 2021-11-30 Lg Electronics Inc. Parallel processing method for color component of video signal, and device therefor
US11277644B2 (en) 2018-07-02 2022-03-15 Qualcomm Incorporated Combining mode dependent intra smoothing (MDIS) with intra interpolation filter switching
CN110719478B (en) 2018-07-15 2023-02-07 北京字节跳动网络技术有限公司 Cross-component intra prediction mode derivation
EP3815377B1 (en) 2018-07-16 2022-12-28 Huawei Technologies Co., Ltd. Video encoder, video decoder, and corresponding encoding and decoding methods
JP2022500890A (en) 2018-08-09 2022-01-04 オッポ広東移動通信有限公司Guangdong Oppo Mobile Telecommunications Corp., Ltd. Video image component prediction methods, devices and computer storage media
KR20230174287A (en) 2018-10-07 2023-12-27 삼성전자주식회사 Method and device for processing video signal using mpm configuration method for multiple reference lines
EP3854087A4 (en) * 2018-10-09 2022-07-06 HFI Innovation Inc. Method and apparatus of encoding or decoding using reference samples determined by predefined criteria
KR102606291B1 (en) 2018-10-12 2023-11-29 삼성전자주식회사 Video signal processing method and device using cross-component linear model
US11303885B2 (en) 2018-10-25 2022-04-12 Qualcomm Incorporated Wide-angle intra prediction smoothing and interpolation
KR20210089133A (en) 2018-11-06 2021-07-15 베이징 바이트댄스 네트워크 테크놀로지 컴퍼니, 리미티드 Simplified parameter derivation for intra prediction
KR102524061B1 (en) * 2018-11-23 2023-04-21 엘지전자 주식회사 Method for decoding image on basis of cclm prediction in image coding system, and device therefor
WO2020108591A1 (en) 2018-12-01 2020-06-04 Beijing Bytedance Network Technology Co., Ltd. Parameter derivation for intra prediction
CN113228656B (en) 2018-12-21 2023-10-31 北京字节跳动网络技术有限公司 Inter prediction using polynomial model
CN116320454A (en) * 2019-01-03 2023-06-23 华为技术有限公司 Method and device for predicting chroma block
AU2019201649A1 (en) 2019-03-11 2020-10-01 Canon Kabushiki Kaisha Method, apparatus and system for encoding and decoding a tree of blocks of video samples
WO2020186763A1 (en) * 2019-03-18 2020-09-24 Oppo广东移动通信有限公司 Image component prediction method, encoder, decoder and storage medium
WO2020192642A1 (en) * 2019-03-24 2020-10-01 Beijing Bytedance Network Technology Co., Ltd. Conditions in parameter derivation for intra prediction
US20220224891A1 (en) * 2019-05-10 2022-07-14 Mediatek Inc. Method and Apparatus of Chroma Direct Mode Generation for Video Coding

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101218830A (en) * 2005-07-22 2008-07-09 三菱电机株式会社 Image encoder and image decoder, image encoding method and image decoding method, image encoding program and image decoding program, and computer readable recording medium recorded with image encoding
CN101283595A (en) * 2005-10-05 2008-10-08 Lg电子株式会社 Method for decoding and encoding a video signal
CN101494782A (en) * 2008-01-25 2009-07-29 三星电子株式会社 Video encoding method and apparatus, and video decoding method and apparatus

Family Cites Families (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050008240A1 (en) * 2003-05-02 2005-01-13 Ashish Banerji Stitching of video for continuous presence multipoint video conferencing
US20050105621A1 (en) * 2003-11-04 2005-05-19 Ju Chi-Cheng Apparatus capable of performing both block-matching motion compensation and global motion compensation and method thereof
WO2006109985A1 (en) * 2005-04-13 2006-10-19 Samsung Electronics Co., Ltd. Method and apparatus for encoding and decoding video signals in intra-base-layer prediction mode by selectively applying intra-coding
JP2008536450A (en) * 2005-04-13 2008-09-04 トムソン ライセンシング Method and apparatus for video decoding
KR100703774B1 (en) 2005-04-13 2007-04-06 삼성전자주식회사 Method and apparatus for encoding and decoding video signal using intra baselayer prediction mode applying selectively intra coding
KR101424969B1 (en) * 2005-07-15 2014-08-04 삼성전자주식회사 Method of decoding image
US20080130990A1 (en) * 2005-07-22 2008-06-05 Mitsubishi Electric Corporation Image encoder and image decoder, image encoding method and image decoding method, image encoding program and image decoding program, and computer readable recording medium recorded with image encoding program and computer readable recording medium recorded with image decoding program
CN100584025C (en) * 2005-08-04 2010-01-20 华为技术有限公司 Arithmetic decoding system and device based on contents self-adaption
KR100873636B1 (en) * 2005-11-14 2008-12-12 삼성전자주식회사 Method and apparatus for encoding/decoding image using single coding mode
KR20070077609A (en) * 2006-01-24 2007-07-27 삼성전자주식회사 Method and apparatus for deciding intra prediction mode
KR101330630B1 (en) * 2006-03-13 2013-11-22 삼성전자주식회사 Method and apparatus for encoding moving picture, method and apparatus for decoding moving picture, applying adaptively an optimal prediction mode
DE102007035204A1 (en) * 2006-07-28 2008-02-07 Mediatek Inc. Video processing and operating device
EP3484154A1 (en) * 2006-10-25 2019-05-15 GE Video Compression, LLC Quality scalable coding
CN101888559B (en) * 2006-11-09 2013-02-13 Lg电子株式会社 Method and apparatus for decoding/encoding a video signal
CN101193305B (en) * 2006-11-21 2010-05-12 安凯(广州)微电子技术有限公司 Inter-frame prediction data storage and exchange method for video coding and decoding chip
US8311120B2 (en) * 2006-12-22 2012-11-13 Qualcomm Incorporated Coding mode selection using information of other coding modes
JP5026092B2 (en) * 2007-01-12 2012-09-12 三菱電機株式会社 Moving picture decoding apparatus and moving picture decoding method
US8139875B2 (en) * 2007-06-28 2012-03-20 Mitsubishi Electric Corporation Image encoding device, image decoding device, image encoding method and image decoding method
US20090016631A1 (en) * 2007-07-10 2009-01-15 Texas Instruments Incorporated Video Coding Rate Control
BRPI0818444A2 (en) * 2007-10-12 2016-10-11 Qualcomm Inc adaptive encoding of video block header information
KR101946376B1 (en) * 2007-10-16 2019-02-11 엘지전자 주식회사 A method and an apparatus for processing a video signal
NO328906B1 (en) * 2007-12-19 2010-06-14 Tandberg Telecom As Procedure for improving color sharpness in video and still images
JP5529040B2 (en) * 2008-01-10 2014-06-25 トムソン ライセンシング Intra-predicted video illumination compensation method and apparatus
JP5111127B2 (en) * 2008-01-22 2012-12-26 キヤノン株式会社 Moving picture coding apparatus, control method therefor, and computer program
JP5359302B2 (en) * 2008-03-18 2013-12-04 ソニー株式会社 Information processing apparatus and method, and program
KR101692829B1 (en) * 2008-06-12 2017-01-05 톰슨 라이센싱 Methods and apparatus for video coding and decoding with reduced bit-depth update mode and reduced chroma sampling update mode
JP5158003B2 (en) * 2009-04-14 2013-03-06 ソニー株式会社 Image coding apparatus, image coding method, and computer program
KR101452860B1 (en) * 2009-08-17 2014-10-23 삼성전자주식회사 Method and apparatus for image encoding, and method and apparatus for image decoding
US8560604B2 (en) * 2009-10-08 2013-10-15 Hola Networks Ltd. System and method for providing faster and more efficient data communication
CN102754442A (en) * 2010-02-10 2012-10-24 Lg电子株式会社 Method and apparatus for processing a video signal
US20110200108A1 (en) * 2010-02-18 2011-08-18 Qualcomm Incorporated Chrominance high precision motion filtering for motion interpolation
CN105635737B (en) * 2010-04-09 2019-03-15 Lg电子株式会社 The method and apparatus for handling video data

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101218830A (en) * 2005-07-22 2008-07-09 三菱电机株式会社 Image encoder and image decoder, image encoding method and image decoding method, image encoding program and image decoding program, and computer readable recording medium recorded with image encoding
CN101283595A (en) * 2005-10-05 2008-10-08 Lg电子株式会社 Method for decoding and encoding a video signal
CN101494782A (en) * 2008-01-25 2009-07-29 三星电子株式会社 Video encoding method and apparatus, and video decoding method and apparatus

Also Published As

Publication number Publication date
US10321156B2 (en) 2019-06-11
US9426472B2 (en) 2016-08-23
KR20170120192A (en) 2017-10-30
KR20190082986A (en) 2019-07-10
KR102223526B1 (en) 2021-03-04
US20230300371A1 (en) 2023-09-21
KR102268821B1 (en) 2021-06-23
KR101904948B1 (en) 2018-10-08
CN105635737B (en) 2019-03-15
KR20210025713A (en) 2021-03-09
CN105635737A (en) 2016-06-01
KR20130050297A (en) 2013-05-15
KR101789634B1 (en) 2017-10-25
US8861594B2 (en) 2014-10-14
CN105472386B (en) 2018-09-18
US20180199063A1 (en) 2018-07-12
US11695954B2 (en) 2023-07-04
US9918106B2 (en) 2018-03-13
KR102124495B1 (en) 2020-06-19
US20110255591A1 (en) 2011-10-20
WO2011126348A3 (en) 2012-01-26
KR20180110201A (en) 2018-10-08
KR20200071780A (en) 2020-06-19
US20220060749A1 (en) 2022-02-24
WO2011126348A2 (en) 2011-10-13
CN105472387A (en) 2016-04-06
CN105472387B (en) 2018-11-02
US11197026B2 (en) 2021-12-07
CN105611304B (en) 2019-06-11
US10841612B2 (en) 2020-11-17
CN103141103A (en) 2013-06-05
CN105472386A (en) 2016-04-06
US20160330477A1 (en) 2016-11-10
KR101997462B1 (en) 2019-07-08
EP2387242A2 (en) 2011-11-16
US20150071352A1 (en) 2015-03-12
EP2387242A3 (en) 2015-03-04
US20190268621A1 (en) 2019-08-29
CN105611304A (en) 2016-05-25
US20210006829A1 (en) 2021-01-07

Similar Documents

Publication Publication Date Title
CN103141103B (en) The method and apparatus of processing video data
CN103959789B (en) Using candidate's intra prediction mode to the method and apparatus of intra prediction mode coding/decoding
CN116684587A (en) Image decoding method, image encoding method, and image-specific data transmission method
CN102972028A (en) New intra prediction modes
CN101573985A (en) Method and apparatus for video predictive encoding and method and apparatus for video predictive decoding
CN102934445A (en) Methods and apparatuses for encoding and decoding image based on segments
CN113596429B (en) Pixel point pair selection method, device and computer readable storage medium
CN110868611A (en) Video encoding and decoding method and device
CN102638678A (en) Video encoding and decoding interframe image predicting method and video codec

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant