CN103260018B - Intra-frame image prediction decoding method and Video Codec - Google Patents
Abstract
The invention discloses an intra-frame image prediction coding/decoding method and a video codec. The intra-frame image prediction coding method includes performing intra-frame image prediction coding using the following chroma intra prediction mode: obtaining the reconstruction value of the luminance component Y of the current transform unit (TU); predicting the chrominance component U of the current TU from the reconstruction value of the luminance component Y of the current TU; obtaining the reconstruction value of the chrominance component U of the current TU; and predicting the chrominance component V of the current TU from the reconstruction value of the chrominance component U of the current TU. The invention makes full use of the correlation between the chrominance components V and U, improving intra-frame image prediction coding/decoding efficiency.
Description
Technical field
The present invention relates to the technical field of image/video coding and decoding and intra-frame image prediction, and in particular to an intra-frame image prediction coding/decoding method and a video codec.
Background art
YUV is a color coding method used by European television systems and is the color space adopted by color television standards. Y is the luminance component, representing lightness (luma), i.e. the gray value; U and V are the chrominance components, representing chroma, which describe the color and saturation of the image and specify the color of a pixel. An important property of the YUV color space is that the luminance component Y is separated from the chrominance components U and V. An image represented by the luminance component Y alone, without the chrominance components U and V, is a black-and-white grayscale image. Color television uses the YUV color space so that the luminance component Y solves the compatibility problem between color television sets and black-and-white television sets, allowing black-and-white sets to receive color television signals as well.
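As an illustration of the luminance/chrominance separation described above (not part of the patent's claims), the classical BT.601 analog-form conversion from RGB to YUV can be sketched as follows; the exact coefficients depend on the television standard, so they are illustrative here:

```python
def rgb_to_yuv(r, g, b):
    """Convert one RGB pixel to YUV (BT.601 analog form, illustrative)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance: weighted gray value
    u = 0.492 * (b - y)                     # blue-difference chrominance
    v = 0.877 * (r - y)                     # red-difference chrominance
    return y, u, v

# A pure gray pixel has (near) zero chrominance: Y alone already
# represents the black-and-white image mentioned in the text.
y, u, v = rgb_to_yuv(128, 128, 128)
```

A black-and-white receiver simply uses Y and ignores U and V, which is the compatibility property described above.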
Image/video compression coding and decoding generally falls into two predictive coding/decoding techniques: inter-frame image prediction coding/decoding and intra-frame image prediction coding/decoding. Intra-frame image prediction coding predicts the current coding image unit from information of already-coded image units in the current frame; intra-frame image prediction decoding predicts the current decoding image unit from information of already-decoded image units in the current frame.
Existing intra-frame image prediction coding/decoding methods use LM (Luma-based chroma intra Prediction Mode, a chroma intra prediction mode based on luminance) to perform intra-frame image prediction coding and decoding. LM assumes a linear relationship between the chrominance components U and V and the luminance component Y; based on this assumption, when coding chrominance, LM predicts the chrominance components U and V from the reconstruction value of the luminance component Y.
In HM4.0, LM predicts the chrominance components U and V from the reconstruction value of the luminance component Y, computing coefficients αL and βL separately for the chrominance component U and for the chrominance component V. In LM, the chrominance components U and V are predicted using the following linear model:

PredC[x, y] = αL·Rec′L[x, y] + βL

where PredC[x, y] is the predicted value of the chrominance component U or V of the current TU (Transform Unit), RecL[x, y] is the reconstruction value of the luminance component Y of the current TU, Rec′L[x, y] is the filtered reconstruction value of the luminance component Y of the current TU, x, y = 0, …, N−1, and the width and height of the chrominance block of the current TU are N.
The sample points used to compute the coefficients αL and βL are shown in Fig. 1. In Fig. 1, Rec′L is the filtered reconstruction value of the luminance component Y of the column of pixels adjacent to the left of the current TU luminance block and the row of pixels adjacent above it; RecC is the reconstruction value of the chrominance component U or V of the column of pixels adjacent to the left of the current TU chrominance block and the row of pixels adjacent above it.
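The LM scheme described above amounts to fitting a straight line through the neighboring (luma, chroma) reconstruction pairs and applying it inside the block. A floating-point sketch follows; the standard uses integer arithmetic, and the function and variable names are illustrative, not from the patent:

```python
def fit_linear(xs, ys):
    """Least-squares fit y ~ alpha*x + beta over the neighbor samples."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    denom = n * sxx - sx * sx
    alpha = (n * sxy - sx * sy) / denom if denom else 0.0
    beta = (sy - alpha * sx) / n
    return alpha, beta

def lm_predict(rec_luma_filtered, alpha, beta):
    """PredC[x,y] = alpha * Rec'L[x,y] + beta, applied sample by sample."""
    return [[alpha * s + beta for s in row] for row in rec_luma_filtered]

# Neighbor samples whose chroma is exactly 0.5 * luma + 10:
alpha, beta = fit_linear([20, 40, 60, 80], [20, 30, 40, 50])
pred = lm_predict([[100, 120], [140, 160]], alpha, beta)
```

The same fit is performed twice in LM, once against U neighbors and once against V neighbors, giving separate (αL, βL) pairs for the two chrominance components.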
However, when existing intra-frame image prediction coding/decoding methods perform intra-frame image prediction coding and decoding with LM, they do not make full use of the correlation between the chrominance components V and U, so intra-frame image prediction coding/decoding efficiency is relatively low.
Summary of the invention
An embodiment of the present invention provides an intra-frame image prediction coding method to improve intra-frame image prediction coding efficiency. The method includes performing intra-frame image prediction coding using the following chroma intra prediction mode:
obtaining the reconstruction value of the luminance component Y of the current transform unit (TU);
predicting the chrominance component U of the current TU from the reconstruction value of the luminance component Y of the current TU;
obtaining the reconstruction value of the chrominance component U of the current TU;
predicting the chrominance component V of the current TU from the reconstruction value of the chrominance component U of the current TU.
An embodiment of the present invention provides an intra-frame image prediction decoding method to improve intra-frame image prediction decoding efficiency. The method includes:
determining that intra-frame image prediction decoding is performed using the above chroma intra prediction mode;
obtaining the reconstruction value of the luminance component Y of the current TU;
predicting the chrominance component U of the current TU from the reconstruction value of the luminance component Y of the current TU;
obtaining the reconstruction value of the chrominance component U of the current TU;
predicting the chrominance component V of the current TU from the reconstruction value of the chrominance component U of the current TU.
An embodiment of the present invention also provides a video encoder to improve intra-frame image prediction coding efficiency. The video encoder includes a first obtaining module, a first prediction module, a second obtaining module and a second prediction module, which perform intra-frame image prediction coding using the following chroma intra prediction mode:
the first obtaining module is configured to obtain the reconstruction value of the luminance component Y of the current TU;
the first prediction module is configured to predict the chrominance component U of the current TU from the reconstruction value of the luminance component Y of the current TU;
the second obtaining module is configured to obtain the reconstruction value of the chrominance component U of the current TU;
the second prediction module is configured to predict the chrominance component V of the current TU from the reconstruction value of the chrominance component U of the current TU.
An embodiment of the present invention also provides a video decoder to improve intra-frame image prediction decoding efficiency. The video decoder includes:
a mode determination module, configured to determine that intra-frame image prediction decoding is performed using the above chroma intra prediction mode;
a first obtaining module, configured to obtain the reconstruction value of the luminance component Y of the current TU;
a first prediction module, configured to predict the chrominance component U of the current TU from the reconstruction value of the luminance component Y of the current TU;
a second obtaining module, configured to obtain the reconstruction value of the chrominance component U of the current TU;
a second prediction module, configured to predict the chrominance component V of the current TU from the reconstruction value of the chrominance component U of the current TU.
With the chroma intra prediction mode used by the intra-frame image prediction coding/decoding method and video codec of the embodiments of the present invention, the chrominance component U of the current TU is predicted from the reconstruction value of the luminance component Y of the current TU, and the chrominance component V of the current TU is predicted from the reconstruction value of the chrominance component U of the current TU. Compared with the existing LM technical scheme, in which both chrominance components U and V of the current TU are predicted from the reconstruction value of the luminance component Y, this makes full use of the correlation between the chrominance components V and U and can improve intra-frame image prediction coding/decoding efficiency.
Brief description of the drawings
To explain the embodiments of the present invention or the technical schemes of the prior art more clearly, the accompanying drawings required by the description of the embodiments or of the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without creative work. In the drawings:
Fig. 1 is a schematic diagram of the sample points used to compute the coefficients αL and βL in the prior art;
Fig. 2 is a schematic diagram of the intra-frame image prediction coding method in an embodiment of the present invention;
Fig. 3 is a schematic diagram of predicting the chrominance component V of the current TU from the reconstruction value of the chrominance component U of the current TU in an embodiment of the present invention;
Fig. 4 is a schematic diagram of the sample points used to compute the coefficients αV and βV in an embodiment of the present invention;
Fig. 5 is a schematic diagram of the intra-frame image prediction decoding method in an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of the video encoder in an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of a specific example of the video encoder in an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of the video decoder in an embodiment of the present invention.
Embodiments
To make the purpose, technical scheme and advantages of the embodiments of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings. The schematic descriptions and illustrations here serve to explain the invention and are not intended as limitations of it.
To make full use of the correlation between the chrominance components V and U and improve intra-frame image prediction coding/decoding efficiency, the embodiments of the present invention modify the existing LM. The existing LM assumes a linear relationship between the chrominance components U and V and the luminance component Y; based on this assumption, when coding chrominance, LM predicts the chrominance components U and V from the reconstruction value of the luminance component Y. The chroma intra prediction mode used by the intra-frame image prediction coding/decoding method and video codec of the embodiments of the present invention likewise assumes a linear relationship between the chrominance components U and V and the luminance component Y, and predicts the chrominance component U from the reconstruction value of the luminance component Y, the same as the existing LM. Unlike LM, the embodiments of the present invention further assume a linear relationship between the chrominance components V and U, which is stronger than the linear relationship between the chrominance component V and the luminance component Y; and, considering that the chrominance component U has already been reconstructed by the time the chrominance component V is predicted, the chrominance component V is predicted from the reconstruction value of the chrominance component U rather than from the reconstruction value of the luminance component Y.
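The assumption motivating this choice — that V correlates more strongly with reconstructed U than with Y — can be checked numerically. A small sketch on synthetic sample values (the data are illustrative only, not from the patent's experiments):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sample lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Synthetic reconstructions: V tracks U almost exactly linearly,
# while V tracks Y only loosely.
y_rec = [90, 120, 80, 150, 110, 60]
u_rec = [40, 52, 38, 61, 47, 30]
v_rec = [85, 109, 82, 127, 99, 65]
r_vu = pearson(u_rec, v_rec)   # V-vs-U correlation
r_vy = pearson(y_rec, v_rec)   # V-vs-Y correlation
# When |r_vu| > |r_vy|, a linear predictor of V from U fits better
# than a linear predictor of V from Y, which is the case LUM exploits.
```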
For ease of description, the chroma intra prediction mode used by the intra-frame image prediction coding/decoding method and video codec of the embodiments of the present invention may be referred to as LUM (luma-based U and U-based V chroma intra prediction mode, i.e. a chroma intra prediction mode that predicts U from luma and predicts V from U).
Fig. 2 is a schematic diagram of the intra-frame image prediction coding method of an embodiment of the present invention. As shown in Fig. 2, the intra-frame image prediction coding method of the embodiment includes performing intra-frame image prediction coding using the following chroma intra prediction mode (LUM):
Step 201: obtain the reconstruction value of the luminance component Y of the current transform unit (TU);
Step 202: predict the chrominance component U of the current TU from the reconstruction value of the luminance component Y of the current TU;
Step 203: obtain the reconstruction value of the chrominance component U of the current TU;
Step 204: predict the chrominance component V of the current TU from the reconstruction value of the chrominance component U of the current TU.
As can be seen from the flow shown in Fig. 2, in the chroma intra prediction mode (LUM) used by the embodiment of the present invention, the prediction method for the chrominance component U is the same as in the existing LM; unlike the existing LM, the chrominance component V is predicted from the reconstruction value of the chrominance component U rather than from the reconstruction value of the luminance component Y.
In a specific implementation, predicting the chrominance component V of the current TU from the reconstruction value of the chrominance component U of the current TU may include:
predicting the chrominance component V of the current TU from the reconstruction value of the chrominance component U of the current TU through a linear relationship.
In a specific implementation, the chrominance component V of the current TU may be predicted from the reconstruction value of the chrominance component U of the current TU through a linear relationship as follows:

PredV[x, y] = αV·RecU[x, y] + βV

where PredV[x, y] is the predicted value of the chrominance component V of the current TU, RecU[x, y] is the reconstruction value of the chrominance component U of the current TU, x, y = 0, …, N−1, the width and height of the chrominance block of the current TU are N, and the sample points used to compute the coefficients αV and βV are the reconstruction values of the chrominance components U and V of the column of pixels adjacent to the left of the current TU chrominance block and the row of pixels adjacent above it.
Fig. 3 is a schematic diagram of predicting the chrominance component V of the current TU from the reconstruction value of the chrominance component U of the current TU in an embodiment of the present invention.
Fig. 4 is a schematic diagram of the sample points used to compute the coefficients αV and βV in an embodiment of the present invention. In Fig. 4, RecU is the reconstruction value of the chrominance component U of the column of pixels adjacent to the left of the current TU chrominance block and the row of pixels adjacent above it; RecV is the reconstruction value of the chrominance component V of the column of pixels adjacent to the left of the current TU chrominance block and the row of pixels adjacent above it.
In a specific implementation, the coefficients αV and βV may be computed by linear least squares as follows:

αV = (2N·ΣRecU(i)·RecV(i) − ΣRecU(i)·ΣRecV(i)) / (2N·ΣRecU(i)² − (ΣRecU(i))²)

βV = (ΣRecV(i) − αV·ΣRecU(i)) / (2N)

where RecU(i) is the reconstruction value of the chrominance component U of the column of pixels adjacent to the left of the current TU chrominance block and the row of pixels adjacent above it, RecV(i) is the corresponding reconstruction value of the chrominance component V, the sums run over i = 0, …, 2N−1, traversing the adjacent left column and upper row of pixels of the current TU chrominance block, and the width and height of the chrominance block of the current TU are N.
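The V-from-U prediction of step 204 can be sketched in floating point as follows; the working draft uses integer arithmetic, and the function and variable names are illustrative, not from the patent:

```python
def lum_predict_v(rec_u_block, rec_u_nb, rec_v_nb):
    """Predict the chrominance component V of a TU from reconstructed U.

    rec_u_nb / rec_v_nb: the 2N reconstructed U / V samples of the
    left-adjacent column and upper-adjacent row of the chroma block.
    """
    n2 = len(rec_u_nb)                       # 2N neighbor samples
    su, sv = sum(rec_u_nb), sum(rec_v_nb)
    suu = sum(u * u for u in rec_u_nb)
    suv = sum(u * v for u, v in zip(rec_u_nb, rec_v_nb))
    denom = n2 * suu - su * su
    alpha_v = (n2 * suv - su * sv) / denom if denom else 0.0
    beta_v = (sv - alpha_v * su) / n2
    # PredV[x, y] = alpha_v * RecU[x, y] + beta_v
    return [[alpha_v * u + beta_v for u in row] for row in rec_u_block]

# 2x2 chroma block, N = 2 -> 2N = 4 neighbor samples; here the
# neighbors obey V = 2*U + 5 exactly, so the fit is exact.
pred_v = lum_predict_v([[12, 14], [16, 18]],
                       [10, 20, 30, 40], [25, 45, 65, 85])
```

Steps 201–203 (predicting and reconstructing U from luma) are unchanged relative to LM, so only this V step differs.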
The detailed process of computing the coefficients αV and βV can be found in subclause 8.3.3.1.19, newly added below to the working draft JCTVC-G1103_d4.
The chroma intra prediction mode (LUM) used by the intra-frame image prediction coding method of the embodiment of the present invention can take part in rate-distortion-optimized selection together with the existing LM.
In a specific implementation, on the basis of the flow shown in Fig. 2, intra-frame image prediction coding may then also be performed with the existing LM: obtain the reconstruction value of the luminance component Y of the current TU; predict the chrominance components U and V of the current TU from the reconstruction value of the luminance component Y of the current TU. Rate-distortion-optimized selection is then performed, i.e. between LUM and LM, the chroma intra prediction mode with the minimum rate-distortion cost is selected to perform intra-frame image prediction coding.
In a specific implementation, the intra-frame image prediction coding method of the embodiment of the present invention may further include: coding a codeword for the chroma intra prediction mode, to indicate which chroma intra prediction mode (e.g. LUM or LM) is used for intra-frame image prediction coding. Table 1 gives the codewords of the chroma intra prediction modes in HM4.0:
Table 1: Codewords of the chroma intra prediction modes in HM4.0
Chroma intra prediction mode | Codeword
DM | 0
LM | 10
Planar | 110
Vertical | 1110
Horizontal | 11110
DC | 11111
After LUM is introduced, the codewords of the chroma intra prediction modes need to be adjusted, as shown in Table 2.
Table 2: Codewords of the chroma intra prediction modes after adjustment
Chroma intra prediction mode | Codeword
DM | 0
LM | 10
LUM | 110
Planar | 1110
Vertical | 11110
Horizontal | 111110
DC | 111111
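The adjusted codewords in Table 2 form a prefix code, so a decoder can read bits from the front of the stream until a codeword matches. A sketch (mode names taken from Table 2; the string-based bit reader is a simplification for illustration):

```python
# Table 2 codewords (after LUM is added); maximum length is 6 bits.
CODEWORDS = {
    "DM": "0", "LM": "10", "LUM": "110", "Planar": "1110",
    "Vertical": "11110", "Horizontal": "111110", "DC": "111111",
}
DECODE = {bits: mode for mode, bits in CODEWORDS.items()}

def decode_mode(bitstring):
    """Read one chroma-mode codeword from the front of a bit string."""
    for length in range(1, 7):          # codewords are 1..6 bits long
        prefix = bitstring[:length]
        if prefix in DECODE:
            return DECODE[prefix], bitstring[length:]
    raise ValueError("not a valid chroma-mode codeword")

mode, rest = decode_mode("110" + "10")  # LUM followed by an LM codeword
```

Because no codeword is a prefix of another, the first match is always the correct one, which is what makes the variable-length code uniquely decodable.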
Comparing Table 1 and Table 2, it can be seen that when the codeword of the chroma intra prediction mode is coded, the codewords change relative to HM4.0, and the maximum codeword length increases from 5 to 6.
Based on the same inventive concept, an embodiment of the present invention further provides an intra-frame image prediction decoding method, as described in the following embodiment. Since the principle by which the intra-frame image prediction decoding method solves the problem is similar to that of the intra-frame image prediction coding method, the implementation of the decoding method may refer to the implementation of the coding method; repeated parts are not described again.
Fig. 5 is a schematic diagram of the intra-frame image prediction decoding method of an embodiment of the present invention. As shown in Fig. 5, the intra-frame image prediction decoding method of the embodiment may include:
Step 501: determine that intra-frame image prediction decoding is performed using LUM;
Step 502: obtain the reconstruction value of the luminance component Y of the current TU;
Step 503: predict the chrominance component U of the current TU from the reconstruction value of the luminance component Y of the current TU;
Step 504: obtain the reconstruction value of the chrominance component U of the current TU;
Step 505: predict the chrominance component V of the current TU from the reconstruction value of the chrominance component U of the current TU.
In a specific implementation, determining that intra-frame image prediction decoding is performed using LUM may include:
decoding the codeword of the chroma intra prediction mode;
determining, according to the codeword of the chroma intra prediction mode, that intra-frame image prediction decoding is performed using LUM.
In HM4.0, the codewords of the chroma intra prediction modes are as shown in Table 1. The intra-frame image prediction decoding method of the embodiment of the present invention adds LUM, and the codewords of the chroma intra prediction modes change, as shown in Table 2. Comparing Table 1 and Table 2, it can be seen that when decoding the codeword of the chroma intra prediction mode, first, the maximum length of the decoded codeword increases from 5 to 6; second, the chroma intra prediction modes represented by the codewords also change, and the chroma intra prediction mode corresponding to a codeword needs to be obtained according to Table 2.
In a specific implementation, predicting the chrominance component V of the current TU from the reconstruction value of the chrominance component U of the current TU may include:
predicting the chrominance component V of the current TU from the reconstruction value of the chrominance component U of the current TU through a linear relationship.
In a specific implementation, the chrominance component V of the current TU may be predicted from the reconstruction value of the chrominance component U of the current TU through a linear relationship as follows:

PredV[x, y] = αV·RecU[x, y] + βV

where PredV[x, y] is the predicted value of the chrominance component V of the current TU, RecU[x, y] is the reconstruction value of the chrominance component U of the current TU, x, y = 0, …, N−1, the width and height of the chrominance block of the current TU are N, and the sample points used to compute the coefficients αV and βV are the reconstruction values of the chrominance components U and V of the column of pixels adjacent to the left of the current TU chrominance block and the row of pixels adjacent above it.
In a specific implementation, the coefficients αV and βV may be computed by linear least squares as follows:

αV = (2N·ΣRecU(i)·RecV(i) − ΣRecU(i)·ΣRecV(i)) / (2N·ΣRecU(i)² − (ΣRecU(i))²)

βV = (ΣRecV(i) − αV·ΣRecU(i)) / (2N)

where RecU(i) is the reconstruction value of the chrominance component U of the column of pixels adjacent to the left of the current TU chrominance block and the row of pixels adjacent above it, RecV(i) is the corresponding reconstruction value of the chrominance component V, the sums run over i = 0, …, 2N−1, traversing the adjacent left column and upper row of pixels of the current TU chrominance block, and the width and height of the chrominance block of the current TU are N.
The detailed process of computing the coefficients αV and βV can be found in subclause 8.3.3.1.19, newly added below to the working draft JCTVC-G1103_d4.
In a specific implementation, introducing LUM in the intra-frame image prediction decoding method of the embodiment of the present invention also requires modifying the working draft JCTVC-G1103_d4. The specific modifications are as follows: in Table 8-1 and Table 8-4 the modified parts are marked by underlining, and the modifications also include the new subclause 8.3.3.1.19.
8.3.1 Derivation process for luma intra prediction mode
……
Table 8-1 Specification of intra prediction mode and associated names
……
8.3.2 Derivation process for chroma intra prediction mode
……
Table 8-4 Specification of IntraPredModeC according to the values of intra_chroma_pred_mode and IntraPredMode[xB][yB] when chroma_pred_from_luma_enabled_flag (the flag enabling prediction of chroma from luma) is equal to 1
……
8.3.3.1.19 Specification of the Intra_FromLumaAndU prediction mode (intra prediction from luma and chroma U)
When intraPredMode is equal to 36, the following steps are performed:
1. Perform the intra pixel prediction process in subclause 8.3.3.1.18 to predict U;
2. After U is reconstructed per 8.3.3, perform the following intra pixel prediction process to predict V:
The inputs of this process are:
– the adjacent reconstruction values of U, recSamplesU[x, y], with x, y = −1..nS−1, where x = 0, y = 0 denotes the top-left position of the current block;
– the adjacent reconstruction values of V, recSamplesV[x, y], with y = −1, x = 0..nS−1 or x = −1, y = 0..nS−1, where x = 0, y = 0 denotes the top-left position of the current block;
– the variable nS, specifying the size of the current block.
The output of this process is:
– the predicted values of V, predSamplesV[x, y], with x, y = 0..nS−1, where x = 0, y = 0 denotes the top-left position of the current block.
The predicted values of V, predSamplesV[x, y], x, y = 0..nS−1, are derived by the following steps:
1) The variable k3 and the samples pU, pV are derived as follows:
k3 = Max( 0, BitDepthC + log2( nS ) − 14 )
pU[x, y] = recSamplesU[x, y], with x, y = −1..nS−1
pV[x, y] = recSamplesV[x, y], with y = −1, x = 0..nS−1 or x = −1, y = 0..nS−1
2) The variables L, C, LL, LC and k2 are derived as follows:
L = ( Σ_{y=0..nS−1} pU[−1, y] + Σ_{x=0..nS−1} pU[x, −1] ) >> k3
C = ( Σ_{y=0..nS−1} pV[−1, y] + Σ_{x=0..nS−1} pV[x, −1] ) >> k3
LL = ( Σ_{y=0..nS−1} pU[−1, y]² + Σ_{x=0..nS−1} pU[x, −1]² ) >> k3
LC = ( Σ_{y=0..nS−1} pU[−1, y]·pV[−1, y] + Σ_{x=0..nS−1} pU[x, −1]·pV[x, −1] ) >> k3
k2 = log2( ( 2*nS ) >> k3 )
3) The variables a, b and k are derived as follows:
a1 = ( LC << k2 ) − L*C
a2 = ( LL << k2 ) − L*L
k1 = Max( 0, log2( abs( a2 ) ) − 5 ) − Max( 0, log2( abs( a1 ) ) − 14 ) + 2
a1s = a1 >> Max( 0, log2( abs( a1 ) ) − 14 )
a2s = abs( a2 >> Max( 0, log2( abs( a2 ) ) − 5 ) )
a3 = a2s < 1 ? 0 : Clip3( −2^15, 2^15 − 1, ( a1s*lmDiv + ( 1 << ( k1 − 1 ) ) ) >> k1 )
a = a3 >> Max( 0, log2( abs( a3 ) ) − 6 )
k = 13 − Max( 0, log2( abs( a3 ) ) − 6 )
b = ( C − ( ( a*L ) >> k ) + ( 1 << ( k2 − 1 ) ) ) >> k2
where lmDiv is obtained from a2s by look-up in Table 8-10.
4) Finally, the predicted values of V, predSamplesV[x, y], are derived as follows:
predSamplesV[x, y] = Clip1C( ( ( pU[x, y]*a ) >> k ) + b ), with x, y = 0..nS−1
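The integer derivation of subclause 8.3.3.1.19, as reconstructed above, can be sketched in Python. The lmDiv look-up of Table 8-10 is replaced here by its approximate closed form 2^15/a2s (an assumption, not the normative table), and the helper names are illustrative:

```python
def ilog2(x):
    """Floor of log2(|x|); 0 for x == 0 (matches the draft's log2 usage)."""
    x = abs(x)
    return x.bit_length() - 1 if x > 0 else 0

def clip3(lo, hi, v):
    return max(lo, min(hi, v))

def derive_a_b_k(pu_nb, pv_nb, nS, bit_depth_c=8):
    """Integer derivation of (a, b, k) so that predV ~ ((pU*a) >> k) + b."""
    k3 = max(0, bit_depth_c + ilog2(nS) - 14)
    L = sum(pu_nb) >> k3                      # sum of neighbor U samples
    C = sum(pv_nb) >> k3                      # sum of neighbor V samples
    LL = sum(u * u for u in pu_nb) >> k3
    LC = sum(u * v for u, v in zip(pu_nb, pv_nb)) >> k3
    k2 = ilog2((2 * nS) >> k3)
    a1 = (LC << k2) - L * C                   # scaled covariance term
    a2 = (LL << k2) - L * L                   # scaled variance term
    k1 = max(0, ilog2(a2) - 5) - max(0, ilog2(a1) - 14) + 2
    a1s = a1 >> max(0, ilog2(a1) - 14)
    a2s = abs(a2 >> max(0, ilog2(a2) - 5))
    if a2s < 1:
        a3 = 0
    else:
        lm_div = ((1 << 15) + (a2s >> 1)) // a2s  # stand-in for Table 8-10
        # assumes k1 >= 1, which holds over the draft's operating range
        a3 = clip3(-(1 << 15), (1 << 15) - 1,
                   (a1s * lm_div + (1 << (k1 - 1))) >> k1)
    shift = max(0, ilog2(a3) - 6)
    a = a3 >> shift
    k = 13 - shift
    b = (C - ((a * L) >> k) + (1 << (k2 - 1))) >> k2
    return a, b, k

# nS = 4 block whose eight neighbors obey V = 2*U + 5; predict pU = 40.
a, b, k = derive_a_b_k([10, 20, 30, 40, 50, 60, 70, 80],
                       [25, 45, 65, 85, 105, 125, 145, 165], 4)
pred = ((40 * a) >> k) + b   # close to the true value 2*40 + 5 = 85
```

With these inputs the derivation yields a/2^k ≈ 2.03 and b = 4, so the integer slope tracks the true linear relation despite the shift-based arithmetic.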
Based on the same inventive concept, an embodiment of the present invention further provides a video encoder and a video decoder, as described in the following embodiments. Since the principle by which the video encoder and video decoder solve the problem is similar to that of the intra-frame image prediction coding and decoding methods, their implementation may refer to the implementation of the intra-frame image prediction coding/decoding methods; repeated parts are not described again.
Fig. 6 is a schematic structural diagram of the video encoder in an embodiment of the present invention. As shown in Fig. 6, the video encoder in the embodiment may include a first obtaining module 601, a first prediction module 602, a second obtaining module 603 and a second prediction module 604, which perform intra-frame image prediction coding using the following chroma intra prediction mode (LUM):
the first obtaining module 601 is configured to obtain the reconstruction value of the luminance component Y of the current TU;
the first prediction module 602 is configured to predict the chrominance component U of the current TU from the reconstruction value of the luminance component Y of the current TU;
the second obtaining module 603 is configured to obtain the reconstruction value of the chrominance component U of the current TU;
the second prediction module 604 is configured to predict the chrominance component V of the current TU from the reconstruction value of the chrominance component U of the current TU.
In one embodiment, the second prediction module 604 may specifically be configured to:
predict the chrominance component V of the current TU from the reconstruction value of the chrominance component U of the current TU through a linear relationship.
In one embodiment, the second prediction module 604 may specifically be configured to:
predict the chrominance component V of the current TU from the reconstruction value of the chrominance component U of the current TU through a linear relationship as follows:

PredV[x, y] = αV·RecU[x, y] + βV

where PredV[x, y] is the predicted value of the chrominance component V of the current TU, RecU[x, y] is the reconstruction value of the chrominance component U of the current TU, x, y = 0, …, N−1, the width and height of the chrominance block of the current TU are N, and the sample points used to compute the coefficients αV and βV are the reconstruction values of the chrominance components U and V of the column of pixels adjacent to the left of the current TU chrominance block and the row of pixels adjacent above it.
In one embodiment, the second prediction module 604 may specifically be configured to:
compute the coefficients αV and βV by linear least squares as follows:

αV = (2N·ΣRecU(i)·RecV(i) − ΣRecU(i)·ΣRecV(i)) / (2N·ΣRecU(i)² − (ΣRecU(i))²)

βV = (ΣRecV(i) − αV·ΣRecU(i)) / (2N)

where RecU(i) is the reconstruction value of the chrominance component U of the column of pixels adjacent to the left of the current TU chrominance block and the row of pixels adjacent above it, RecV(i) is the corresponding reconstruction value of the chrominance component V, and the sums run over i = 0, …, 2N−1.
As shown in Fig. 7, in one embodiment, the video encoder shown in Fig. 6 may further include:
a mode coding module 701, configured to code the codeword of the chroma intra prediction mode, to indicate that intra-frame image prediction coding is performed using LUM.
Fig. 8 is a schematic structural diagram of the video decoder in an embodiment of the present invention. As shown in Fig. 8, the video decoder in the embodiment may include:
a mode determination module 801, configured to determine that intra-frame image prediction decoding is performed using LUM;
a first obtaining module 802, configured to obtain the reconstruction value of the luminance component Y of the current TU;
a first prediction module 803, configured to predict the chrominance component U of the current TU from the reconstruction value of the luminance component Y of the current TU;
a second obtaining module 804, configured to obtain the reconstruction value of the chrominance component U of the current TU;
a second prediction module 805, configured to predict the chrominance component V of the current TU from the reconstruction value of the chrominance component U of the current TU.
In one embodiment, the mode determination module 801 may specifically be configured to:
decode the codeword of the chroma intra prediction mode;
determine, according to the codeword of the chroma intra prediction mode, that intra-frame image prediction decoding is performed using LUM.
In one embodiment, the second prediction module 805 may specifically be configured to:
predict the chrominance component V of the current TU from the reconstruction value of the chrominance component U of the current TU through a linear relationship.
In one embodiment, the second prediction module 805 may specifically be configured to:
predict the chrominance component V of the current TU from the reconstruction value of the chrominance component U of the current TU through a linear relationship as follows:

PredV[x, y] = αV·RecU[x, y] + βV

where PredV[x, y] is the predicted value of the chrominance component V of the current TU, RecU[x, y] is the reconstruction value of the chrominance component U of the current TU, x, y = 0, …, N−1, the width and height of the chrominance block of the current TU are N, and the sample points used to compute the coefficients αV and βV are the reconstruction values of the chrominance components U and V of the column of pixels adjacent to the left of the current TU chrominance block and the row of pixels adjacent above it.
In one embodiment, the second prediction module 805 may specifically be configured to:
compute the coefficients αV and βV by linear least squares as follows:

αV = (2N·ΣRecU(i)·RecV(i) − ΣRecU(i)·ΣRecV(i)) / (2N·ΣRecU(i)² − (ΣRecU(i))²)

βV = (ΣRecV(i) − αV·ΣRecU(i)) / (2N)

where RecU(i) is the reconstruction value of the chrominance component U of the column of pixels adjacent to the left of the current TU chrominance block and the row of pixels adjacent above it, RecV(i) is the corresponding reconstruction value of the chrominance component V, and the sums run over i = 0, …, 2N−1.
The chroma intra prediction mode used by the intra-frame image prediction coding/decoding method and video codec of the embodiments of the present invention was integrated into HM4.0 (a version with LM bug fixes) and compared with HM4.0. The experiments were carried out under the common test conditions in JCTVC-F900. The test environment was: Intel(R) Xeon(R) CPU X5670 @ 2.93 GHz, 6 cores, 12 GB memory, Windows 7, 32-bit compiler.
The experimental results are shown in Table 3. It can be seen that the embodiment of the present invention reduces the bit rate of the chrominance component V by up to 0.72% in the "All Intra HE" (high-efficiency intra coding) case and by up to 1.33% in the "All Intra LC" (low-complexity intra coding) case. The bit rates of the luminance component Y and the chrominance component U do not increase in the "All Intra HE" case; in the "All Intra LC" case, the bit rate increases slightly (the average bit rate of the luminance component Y increases by 0.03%, with a worst-case increase of 0.11%; the average bit rate of the chrominance component U decreases by 0.07%, with a worst-case increase of 0.88%). The key factors in the bit-rate increase in the "All Intra LC" case are the lengthening of the codewords of the four modes Planar, Vertical, Horizontal and DC (compared with HM4.0, each of these codewords grows by 1 bit) and the low efficiency of LCEC (low-complexity entropy coder) when coding the codewords. When the probability that LUM is selected is relatively low, the performance improvement it brings is insufficient to compensate for the performance loss caused by the lengthened codewords of the above four modes, and the overall performance then declines slightly.
Table 3: Experimental results (−xx% indicates a bitrate reduction of xx%)
In summary, the chroma intra prediction mode used by the intra-frame image prediction encoding/decoding method and video codec of the embodiments of the present invention predicts the chrominance component U of the current TU from the reconstruction value of the luminance component Y of the current TU, and predicts the chrominance component V of the current TU from the reconstruction value of the chrominance component U of the current TU. Compared with the existing LM technical scheme, which predicts both the chrominance component U and the chrominance component V of the current TU from the reconstruction value of the luminance component Y of the current TU, this makes full use of the correlation between the chrominance components V and U and improves intra-frame image prediction coding efficiency; in particular, when the linear relationship between the chrominance components U and V is stronger than that between the luminance component Y and the chrominance component V, it can generate a better predicted value for the chrominance component V than LM does.
Those skilled in the art will appreciate that embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce a device for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, such that a series of operational steps is performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The specific embodiments described above further elaborate the objectives, technical schemes, and beneficial effects of the present invention. It should be understood that the foregoing is merely specific embodiments of the present invention and is not intended to limit the scope of protection of the present invention; any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.
Claims (12)
1. An intra-frame image prediction encoding method, characterized by comprising performing intra-frame image prediction encoding using the following chroma intra prediction mode:
obtaining a reconstruction value of a luminance component Y of a current transform unit (TU);
predicting a chrominance component U of the current TU using the reconstruction value of the luminance component Y of the current TU;
obtaining a reconstruction value of the chrominance component U of the current TU;
predicting a chrominance component V of the current TU using the reconstruction value of the chrominance component U of the current TU;
wherein predicting the chrominance component V of the current TU using the reconstruction value of the chrominance component U of the current TU comprises:
predicting the chrominance component V of the current TU from the reconstruction value of the chrominance component U of the current TU through the following linear relationship:
Pred_V[x, y] = α_V·Rec_U[x, y] + β_V
where Pred_V[x, y] is the predicted value of the chrominance component V of the current TU, Rec_U[x, y] is the reconstruction value of the chrominance component U of the current TU, x, y = 0, …, N−1, the width and height of the current TU chrominance block are both N, and the samples used to compute the coefficients α_V and β_V are the chrominance component U reconstruction values and the chrominance component V reconstruction values of the column of pixels adjacent to the left of and the row of pixels adjacent above the current TU chrominance block.
2. The method according to claim 1, characterized in that the coefficients α_V and β_V are computed as follows:
$$\alpha_V=\frac{2N\cdot\sum_{i=0}^{2N-1}\mathrm{Rec}_V(i)\cdot\mathrm{Rec}_U(i)-\sum_{i=0}^{2N-1}\mathrm{Rec}_V(i)\cdot\sum_{i=0}^{2N-1}\mathrm{Rec}_U(i)}{2N\cdot\sum_{i=0}^{2N-1}\mathrm{Rec}_U(i)\cdot\mathrm{Rec}_U(i)-\left(\sum_{i=0}^{2N-1}\mathrm{Rec}_U(i)\right)^{2}}$$

$$\beta_V=\frac{\sum_{i=0}^{2N-1}\mathrm{Rec}_V(i)-\alpha_V\cdot\sum_{i=0}^{2N-1}\mathrm{Rec}_U(i)}{2N}$$
where Rec_U(i) denotes the chrominance component U reconstruction values, and Rec_V(i) the chrominance component V reconstruction values, of the column of pixels adjacent to the left of and the row of pixels adjacent above the current TU chrominance block, i = 0, …, 2N−1.
3. The method according to any one of claims 1 to 2, characterized by further comprising:
encoding a codeword of the chroma intra prediction mode, to indicate that intra-frame image prediction encoding is performed using the chroma intra prediction mode.
4. An intra-frame image prediction decoding method, characterized by comprising:
determining that intra-frame image prediction decoding is performed using the chroma intra prediction mode according to claim 1;
obtaining a reconstruction value of a luminance component Y of a current TU;
predicting a chrominance component U of the current TU using the reconstruction value of the luminance component Y of the current TU;
obtaining a reconstruction value of the chrominance component U of the current TU;
predicting a chrominance component V of the current TU using the reconstruction value of the chrominance component U of the current TU;
wherein predicting the chrominance component V of the current TU using the reconstruction value of the chrominance component U of the current TU comprises:
predicting the chrominance component V of the current TU from the reconstruction value of the chrominance component U of the current TU through the following linear relationship:
Pred_V[x, y] = α_V·Rec_U[x, y] + β_V
where Pred_V[x, y] is the predicted value of the chrominance component V of the current TU, Rec_U[x, y] is the reconstruction value of the chrominance component U of the current TU, x, y = 0, …, N−1, the width and height of the current TU chrominance block are both N, and the samples used to compute the coefficients α_V and β_V are the chrominance component U reconstruction values and the chrominance component V reconstruction values of the column of pixels adjacent to the left of and the row of pixels adjacent above the current TU chrominance block.
5. The method according to claim 4, characterized in that determining that intra-frame image prediction decoding is performed using the chroma intra prediction mode comprises:
decoding a codeword of the chroma intra prediction mode;
determining, according to the codeword of the chroma intra prediction mode, that intra-frame image prediction decoding is performed using the chroma intra prediction mode.
6. The method according to claim 4, characterized in that the coefficients α_V and β_V are computed as follows:
$$\alpha_V=\frac{2N\cdot\sum_{i=0}^{2N-1}\mathrm{Rec}_V(i)\cdot\mathrm{Rec}_U(i)-\sum_{i=0}^{2N-1}\mathrm{Rec}_V(i)\cdot\sum_{i=0}^{2N-1}\mathrm{Rec}_U(i)}{2N\cdot\sum_{i=0}^{2N-1}\mathrm{Rec}_U(i)\cdot\mathrm{Rec}_U(i)-\left(\sum_{i=0}^{2N-1}\mathrm{Rec}_U(i)\right)^{2}}$$

$$\beta_V=\frac{\sum_{i=0}^{2N-1}\mathrm{Rec}_V(i)-\alpha_V\cdot\sum_{i=0}^{2N-1}\mathrm{Rec}_U(i)}{2N}$$
where Rec_U(i) denotes the chrominance component U reconstruction values, and Rec_V(i) the chrominance component V reconstruction values, of the column of pixels adjacent to the left of and the row of pixels adjacent above the current TU chrominance block, i = 0, …, 2N−1.
7. A video encoder, characterized by comprising a first obtaining module, a first prediction module, a second obtaining module, and a second prediction module that perform intra-frame image prediction encoding using the following chroma intra prediction mode:
the first obtaining module is configured to obtain a reconstruction value of a luminance component Y of a current TU;
the first prediction module is configured to predict a chrominance component U of the current TU using the reconstruction value of the luminance component Y of the current TU;
the second obtaining module is configured to obtain a reconstruction value of the chrominance component U of the current TU;
the second prediction module is configured to predict a chrominance component V of the current TU using the reconstruction value of the chrominance component U of the current TU;
wherein the second prediction module is specifically configured to:
predict the chrominance component V of the current TU from the reconstruction value of the chrominance component U of the current TU through the following linear relationship:
Pred_V[x, y] = α_V·Rec_U[x, y] + β_V
where Pred_V[x, y] is the predicted value of the chrominance component V of the current TU, Rec_U[x, y] is the reconstruction value of the chrominance component U of the current TU, x, y = 0, …, N−1, the width and height of the current TU chrominance block are both N, and the samples used to compute the coefficients α_V and β_V are the chrominance component U reconstruction values and the chrominance component V reconstruction values of the column of pixels adjacent to the left of and the row of pixels adjacent above the current TU chrominance block.
8. The video encoder according to claim 7, characterized in that the second prediction module is specifically configured to compute the coefficients α_V and β_V as follows:
$$\alpha_V=\frac{2N\cdot\sum_{i=0}^{2N-1}\mathrm{Rec}_V(i)\cdot\mathrm{Rec}_U(i)-\sum_{i=0}^{2N-1}\mathrm{Rec}_V(i)\cdot\sum_{i=0}^{2N-1}\mathrm{Rec}_U(i)}{2N\cdot\sum_{i=0}^{2N-1}\mathrm{Rec}_U(i)\cdot\mathrm{Rec}_U(i)-\left(\sum_{i=0}^{2N-1}\mathrm{Rec}_U(i)\right)^{2}}$$

$$\beta_V=\frac{\sum_{i=0}^{2N-1}\mathrm{Rec}_V(i)-\alpha_V\cdot\sum_{i=0}^{2N-1}\mathrm{Rec}_U(i)}{2N}$$
where Rec_U(i) denotes the chrominance component U reconstruction values, and Rec_V(i) the chrominance component V reconstruction values, of the column of pixels adjacent to the left of and the row of pixels adjacent above the current TU chrominance block, i = 0, …, 2N−1.
9. The video encoder according to any one of claims 7 to 8, characterized by further comprising:
a mode encoding module, configured to encode a codeword of the chroma intra prediction mode, to indicate that intra-frame image prediction encoding is performed using the chroma intra prediction mode.
10. A video decoder, characterized by comprising:
a mode determining module, configured to determine that intra-frame image prediction decoding is performed using the chroma intra prediction mode according to claim 7;
a first obtaining module, configured to obtain a reconstruction value of a luminance component Y of a current TU;
a first prediction module, configured to predict a chrominance component U of the current TU using the reconstruction value of the luminance component Y of the current TU;
a second obtaining module, configured to obtain a reconstruction value of the chrominance component U of the current TU;
a second prediction module, configured to predict a chrominance component V of the current TU using the reconstruction value of the chrominance component U of the current TU;
wherein the second prediction module is specifically configured to:
predict the chrominance component V of the current TU from the reconstruction value of the chrominance component U of the current TU through the following linear relationship:
Pred_V[x, y] = α_V·Rec_U[x, y] + β_V
where Pred_V[x, y] is the predicted value of the chrominance component V of the current TU, Rec_U[x, y] is the reconstruction value of the chrominance component U of the current TU, x, y = 0, …, N−1, the width and height of the current TU chrominance block are both N, and the samples used to compute the coefficients α_V and β_V are the chrominance component U reconstruction values and the chrominance component V reconstruction values of the column of pixels adjacent to the left of and the row of pixels adjacent above the current TU chrominance block.
11. The video decoder according to claim 10, characterized in that the mode determining module is specifically configured to:
decode a codeword of the chroma intra prediction mode;
determine, according to the codeword of the chroma intra prediction mode, that intra-frame image prediction decoding is performed using the chroma intra prediction mode.
12. The video decoder according to claim 10, characterized in that the second prediction module is specifically configured to compute the coefficients α_V and β_V as follows:
$$\alpha_V=\frac{2N\cdot\sum_{i=0}^{2N-1}\mathrm{Rec}_V(i)\cdot\mathrm{Rec}_U(i)-\sum_{i=0}^{2N-1}\mathrm{Rec}_V(i)\cdot\sum_{i=0}^{2N-1}\mathrm{Rec}_U(i)}{2N\cdot\sum_{i=0}^{2N-1}\mathrm{Rec}_U(i)\cdot\mathrm{Rec}_U(i)-\left(\sum_{i=0}^{2N-1}\mathrm{Rec}_U(i)\right)^{2}}$$

$$\beta_V=\frac{\sum_{i=0}^{2N-1}\mathrm{Rec}_V(i)-\alpha_V\cdot\sum_{i=0}^{2N-1}\mathrm{Rec}_U(i)}{2N}$$
where Rec_U(i) denotes the chrominance component U reconstruction values, and Rec_V(i) the chrominance component V reconstruction values, of the column of pixels adjacent to the left of and the row of pixels adjacent above the current TU chrominance block, i = 0, …, 2N−1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210035252.9A CN103260018B (en) | 2012-02-16 | 2012-02-16 | Intra-frame image prediction decoding method and Video Codec |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103260018A CN103260018A (en) | 2013-08-21 |
CN103260018B true CN103260018B (en) | 2017-09-22 |
Family
ID=48963684
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210035252.9A Expired - Fee Related CN103260018B (en) | 2012-02-16 | 2012-02-16 | Intra-frame image prediction decoding method and Video Codec |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103260018B (en) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10506243B2 (en) | 2014-03-06 | 2019-12-10 | Samsung Electronics Co., Ltd. | Image decoding method and device therefor, and image encoding method and device therefor |
WO2016115736A1 (en) * | 2015-01-23 | 2016-07-28 | Mediatek Singapore Pte. Ltd. | Additional intra prediction modes using cross-chroma-component prediction |
WO2016115981A1 (en) | 2015-01-22 | 2016-07-28 | Mediatek Singapore Pte. Ltd. | Method of video coding for chroma components |
CN105306944B (en) * | 2015-11-30 | 2018-07-06 | 哈尔滨工业大学 | Chromatic component Forecasting Methodology in hybrid video coding standard |
WO2017139937A1 (en) * | 2016-02-18 | 2017-08-24 | Mediatek Singapore Pte. Ltd. | Advanced linear model prediction for chroma coding |
GB2571311B (en) * | 2018-02-23 | 2021-08-18 | Canon Kk | Methods and devices for improvement in obtaining linear component sample prediction parameters |
CN110858903B (en) * | 2018-08-22 | 2022-07-12 | 华为技术有限公司 | Chroma block prediction method and device |
CN110876061B (en) * | 2018-09-03 | 2022-10-11 | 华为技术有限公司 | Chroma block prediction method and device |
WO2020258056A1 (en) * | 2019-06-25 | 2020-12-30 | 北京大学 | Video image processing method and device and storage medium |
JP2022539786A (en) * | 2019-07-10 | 2022-09-13 | オッポ広東移動通信有限公司 | Image component prediction method, encoder, decoder and storage medium |
CN110602491B (en) * | 2019-08-30 | 2022-07-19 | 中国科学院深圳先进技术研究院 | Intra-frame chroma prediction method, device and equipment and video coding and decoding system |
WO2021035717A1 (en) * | 2019-08-30 | 2021-03-04 | 中国科学院深圳先进技术研究院 | Intra-frame chroma prediction method and apparatus, device, and video coding and decoding system |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1574970A (en) * | 2003-05-16 | 2005-02-02 | 三星电子株式会社 | Method and apparatus for encoding/decoding image using image residue prediction |
CN101057506A (en) * | 2004-12-30 | 2007-10-17 | 三星电子株式会社 | Color image encoding and decoding method and apparatus using a correlation between chrominance components |
WO2008020687A1 (en) * | 2006-08-16 | 2008-02-21 | Samsung Electronics Co, . Ltd. | Image encoding/decoding method and apparatus |
WO2011126348A3 (en) * | 2010-04-09 | 2012-01-26 | Lg Electronics Inc. | Method and apparatus for processing video data |
Non-Patent Citations (3)

Title |
---|
Jianle Chen et al., "Chroma intra prediction by reconstructed luma samples", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 2010-10-15; Section 2, Fig. 1 * |
Sang Heon Lee et al., "Intra prediction method based on the linear relationship between the channels for YUV 4:2:0 intra coding", 2009 16th IEEE International Conference on Image Processing, 2009-11-10; entire document * |
Luis F. R. et al., "Intra-prediction for color image coding using YUV correlation", 2010 17th IEEE International Conference on Image Processing, 2010-09-29; entire document * |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |
| CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20170922; Termination date: 20220216 |