CN105491390A - Intra-frame prediction method in hybrid video coding standard - Google Patents

Intra-frame prediction method in hybrid video coding standard

Info

Publication number
CN105491390A
Authority
CN
China
Prior art keywords
pattern
block
mode
intra
coding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510861669.4A
Other languages
Chinese (zh)
Other versions
CN105491390B (en)
Inventor
范晓鹏
张涛
赵德斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology filed Critical Harbin Institute of Technology
Priority to CN201510861669.4A priority Critical patent/CN105491390B/en
Publication of CN105491390A publication Critical patent/CN105491390A/en
Application granted granted Critical
Publication of CN105491390B publication Critical patent/CN105491390B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention provides an intra-frame prediction method in a hybrid video coding standard and belongs to the field of video coding. The purpose of the invention is to handle the complex blocks in a video sequence effectively, for example blocks blurred by object or camera motion and blocks containing multiple directions, and thereby further improve video coding performance. The intra-frame prediction method comprises the following steps: obtaining two different predictions with two different prediction modes; weighting the two predictions to obtain a new prediction of the current coding block; obtaining the intra coding mode information of several coded blocks surrounding the current coding block and selecting one of those modes as a first mode; and, based on the first mode, selecting another intra mode as a second mode. The prediction synthesized from two different prediction modes can handle the complex blocks in a video sequence and thus further improve coding efficiency.

Description

Intra-frame prediction method in hybrid video coding standard
Technical field
The present invention relates to an intra-frame prediction method for hybrid video coding standards and belongs to the field of video coding.
Background art
With rising demands on video quality, new video applications such as high-definition and ultra-high-definition video have emerged. As high-resolution, high-quality video viewing becomes increasingly widespread, improving video compression efficiency becomes essential. Digitized images and video contain a large amount of data redundancy, and this is what makes video compression possible. In general, the redundancy includes at least spatial redundancy, temporal redundancy, and information-entropy redundancy. Spatial redundancy is generally removed by prediction, i.e., intra-frame predictive coding. Its basic idea is to use the reconstructed pixels around the current coding block to generate a prediction of the current block by direction-based interpolation. Once the prediction block is obtained, the difference between the current block and the prediction block, i.e., the residual block, is easier to encode than the original block, so intra prediction significantly reduces the spatial redundancy in video coding. However, because intra prediction in existing video coding standards uses unidirectional, interpolation-based prediction, it cannot predict complex blocks well.
To handle complex coding blocks in video sequences, Y. Ye and M. Karczewicz, "Improved H.264 intra coding based on bi-directional intra prediction, directional transform, and adaptive coefficient scanning," in Proc. IEEE Int. Conf. Image Process., Oct. 2008, pp. 2116-2119, proposed a bi-directional intra prediction coding method. Based on the 9 prediction modes of the H.264/AVC video coding standard, the method selects certain combinations of two modes; for each combination, an offline-trained weight table is used to compute the weighted average of the predictions generated by the two modes. The coding performance of video nevertheless remains unsatisfactory.
Summary of the invention
The object of the invention is to handle the complex blocks in a video sequence effectively, and to this end an intra-frame prediction method in a hybrid video coding standard is proposed to further improve video coding performance.
The technical scheme adopted by the present invention to solve the above technical problem is as follows:
An intra-frame prediction method in a hybrid video coding standard, the prediction method being intended for the complex coding blocks present in video sequences, is carried out as follows:
Step 1: obtain the intra coding modes of several coded blocks surrounding the current coding block; the current coding block is of size W*H, where W is the width and H is the height of the current coding block; the surrounding coded blocks are called neighboring coding blocks;
Step 2: obtain the set of coding modes one of the current coding block from the intra coding modes of the neighboring coding blocks obtained in Step 1;
Step 3: obtain the corresponding mode two for each mode one in the set of coding modes one: either choose as mode two the two other modes closest in direction to mode one, or choose as mode two the mode that has minimum prediction distortion when combined with mode one.
From the set of coding modes one obtained in Step 2, obtain for each mode in the set another coding mode of the current coding block, i.e., the set of coding modes two; merge the set of modes one and the set of modes two to obtain a set of two-tuples, each two-tuple comprising a related mode one and mode two;
Step 4: for each mode combination in the two-tuple set produced in Step 3, generate two different prediction blocks by interpolating the pixels neighboring the current block; the bi-directional prediction result of the current coding block is the weighted average of these two prediction blocks; the optimal combination of mode one and mode two is selected to predict the current block;
Step 5: select the optimal prediction mode separately for the luma block and the chroma blocks in a coding unit;
Step 6: encode the coding modes of the luma block and the chroma blocks in the coding unit separately.
In Step 1, the neighboring coding blocks are the already-coded intra blocks to the left of, above, below-left of, and above-right of the current coding block.
The set of coding modes one of the current coding block described in Step 2 is obtained as follows:
Select as mode one the several modes used most often by the neighboring coding blocks obtained in Step 1; or select as mode one the modes of the neighboring coding blocks to the left of and above the current coding block; or select as mode one the mode of any one of these neighboring blocks; or select as mode one a subset of the modes of these neighboring blocks; or assign a weight to each neighboring coding block, accumulate the weights of the neighboring blocks that share the same intra coding mode, and select as mode one the several modes of the neighboring blocks obtained in Step 1 with the largest accumulated weight. A sketch of the weighted-vote option is given below.
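The weighted-vote option can be sketched as follows. This is an illustrative sketch only, not part of the patent text: the neighbor positions, the weight values, and the number of candidates kept are assumptions made for illustration.

```python
# Illustrative sketch of the weighted-vote option above (not normative):
# each neighboring coded block casts its intra mode with an assumed weight,
# weights of blocks sharing a mode are accumulated, and the most heavily
# weighted modes become the mode-one candidates.
from collections import defaultdict

def mode_one_candidates(neighbor_modes, neighbor_weights, max_candidates=2):
    votes = defaultdict(int)
    for mode, weight in zip(neighbor_modes, neighbor_weights):
        votes[mode] += weight  # accumulate weights of neighbors with the same mode
    ranked = sorted(votes, key=votes.get, reverse=True)
    return ranked[:max_candidates]

# Example: left block uses mode 26, above uses mode 10, above-right uses mode 26.
print(mode_one_candidates([26, 10, 26], [2, 2, 1]))  # -> [26, 10]
```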
In Step 3, mode two of the current coding block is obtained from each mode one as follows:
Choose as mode two the two coding modes closest in direction to the current mode one. Specifically, denoting mode one by mode1: if mode1 lies between 3 and 33, mode two is chosen as mode1-1 and mode1+1; if mode1 is 2 or 34, mode two is chosen as 3 and 33; if mode1 is the DC mode or the PLANAR mode, mode two is chosen as 10 (the horizontal mode) and 26 (the vertical mode);
Or choose as mode two the mode that has minimum prediction distortion when combined with mode one: for each mode one, obtain the predictions of all remaining intra coding modes, compute the weighted average of the mode-one prediction with each remaining intra-mode prediction, and select as mode two the coding mode whose weighted-average prediction has minimum distortion with respect to the current coding block; the distortion criterion between the coding block and the prediction block can be minimum mean squared error, minimum Hadamard error, or rate-distortion optimization. A sketch of the direction-based rule appears below.
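The direction-based rule can be sketched as follows. HEVC-style mode numbering (0 = PLANAR, 1 = DC, 2-34 angular, 10 horizontal, 26 vertical) is assumed, and the helper name is illustrative.

```python
# Sketch (not normative) of the direction-based mode-two rule described above.
# Assumes HEVC-style numbering: 0 = PLANAR, 1 = DC, 2..34 angular,
# 10 = horizontal, 26 = vertical.
PLANAR, DC, HOR, VER = 0, 1, 10, 26

def mode_two_candidates(mode1):
    if mode1 in (PLANAR, DC):
        return [HOR, VER]          # non-angular mode one: use horizontal and vertical
    if mode1 in (2, 34):
        return [3, 33]             # the two ends of the angular range map to 3 and 33
    return [mode1 - 1, mode1 + 1]  # otherwise take the two angularly adjacent modes

print(mode_two_candidates(3))   # -> [2, 4]   (cf. Fig. 2)
print(mode_two_candidates(DC))  # -> [10, 26]
```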
In Step 4, each coding-mode pair in the two-tuple set is tested by encoding, and the optimal mode pair is selected to predict the current block; the optimal mode pair can be selected by minimum mean squared error, minimum Hadamard error, or rate-distortion optimization.
In Step 4, the predictions produced by the two different prediction modes are weighted as follows: weights are assigned to the prediction blocks of the different prediction modes. The weighted average may use identical weights for the two prediction blocks, i.e., they are simply averaged to obtain the prediction block of the current coding block; or different weights may be assigned according to the importance of the different prediction modes, or according to the accuracy of the predictions the different modes produce, or several likely weights may be set and the best weights obtained by search. A sketch of the equal-weight option is given below.
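A minimal sketch of this weighting, assuming 8-bit samples held in numpy arrays; the function name, default weights, and array shapes are assumptions for illustration, not taken from the patent.

```python
# Minimal sketch of combining the two directional predictions with a weighted
# average; equal weights reduce to a plain mean. Assumes 8-bit samples.
import numpy as np

def blend_predictions(pred1, pred2, w1=0.5, w2=0.5):
    """Weighted average of two W*H intra prediction blocks of the same shape."""
    blended = w1 * pred1.astype(np.float64) + w2 * pred2.astype(np.float64)
    return np.clip(np.rint(blended), 0, 255).astype(np.uint8)

a = np.full((4, 4), 120, dtype=np.uint8)
b = np.full((4, 4), 60, dtype=np.uint8)
print(blend_predictions(a, b)[0, 0])  # -> 90
```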
In Step 5, the optimal prediction mode is selected for the luma block and the chroma blocks in a coding unit as follows: for the luma block, the optimal prediction mode is chosen from the original unidirectional prediction modes and the bi-prediction mode, using the minimum rate-distortion criterion; for a chroma block, if the corresponding luma block selected bi-directional prediction as its optimal mode, the optimal prediction mode of the current chroma block is the two prediction modes selected by its corresponding luma block.
In Step 6, the coding modes of the luma block and the chroma blocks in the coding unit are encoded separately, as follows:
If the current intra coding mode is bi-directional prediction, the two coding modes of the bi-directional prediction, i.e., mode one and mode two, need to be encoded; mode one comes from a neighboring coding block, so the index of the selected neighboring block is encoded directly.
For the luma block, when mode one is taken from the left or above neighbor of the current block, a 1-bit symbol suffices to indicate whether the chosen mode comes from the left or from above; mode two is derived from mode one, so similarly a 1-bit symbol suffices to indicate which of the two modes adjacent to mode one was selected (a sketch of this signalling is given after the next paragraph);
For a chroma block, if the current luma block selected bi-directional prediction, the prediction mode of the chroma block is set to the bi-prediction mode, its two prediction modes are taken directly from the luma block, and no prediction mode needs to be encoded; if the current luma block selected an original unidirectional prediction, the chroma mode is chosen from the original five prediction modes.
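A sketch of the luma signalling described above; the concrete bit values and their ordering are assumptions made only for illustration.

```python
# Illustrative sketch of the 2-bit luma signalling above (bit values are
# assumptions): one flag says whether mode one came from the left or the above
# neighbor, the other says which of the two modes adjacent to mode one was
# chosen as mode two. Chroma needs no extra bits when luma is bi-predicted.
def signal_bi_mode(mode1_from_left: bool, mode2_is_first_candidate: bool):
    return [0 if mode1_from_left else 1,
            0 if mode2_is_first_candidate else 1]

print(signal_bi_mode(True, False))  # -> [0, 1]
```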
The beneficial effects of the invention are as follows:
The prediction method of the invention can effectively handle the complex blocks present in video sequences, such as blur caused by object or camera motion and blocks with multiple directions. The invention uses the intra mode information of neighboring coded blocks to obtain two modes for the current coding block; bi-directional prediction based on these two modes can predict the complex blocks in a video sequence, such as blocks with multiple directions and blurred blocks caused by object or camera motion, so the intra prediction performance is improved and the coding efficiency is further enhanced.
This intra prediction method uses two different prediction modes to obtain two different predictions. The two predictions are weighted to obtain a new prediction of the current coding block. The intra coding mode information of several coded blocks surrounding the current coding block is obtained, and one of those modes is selected as mode one; based on mode one, another intra mode is selected as mode two. The prediction synthesized from two different prediction modes can handle the complex blocks in a video sequence, so the coding efficiency is further enhanced.
Unlike previously proposed methods, the present scheme does not need to train or store a weight table for bi-directional prediction. The two different modes of the bi-prediction mode are both derived from the neighboring blocks of the current block, so few bits are needed to encode these two modes. In addition, the scheme needs to test fewer mode combinations, so the coding complexity is lower.
Brief description of the drawings
Fig. 1 shows the positional relationship between the current block (C), the left neighboring block (L), and the above neighboring block (A) in Embodiment 2.
Fig. 2 shows, for Embodiment 3, the relationship between a coding mode one of the current coding block and its candidate coding modes two. In the figure, mode one is 3, and mode two is mode 2 or 4, the modes nearest to it in angle.
Detailed description of the embodiments
Embodiment 1: The intra-frame prediction method in a hybrid video coding standard of this embodiment is used to predict the complex coding blocks present in video sequences. The prediction method builds on the direction-based intra prediction of the original coding standard (i.e., it is implemented on top of the unidirectional intra prediction algorithm).
We call this prediction method bi-directional prediction based on neighboring coding modes, or bi-directional prediction for short. The bi-directional prediction consists of two coding modes, namely mode one and mode two. The prediction method is carried out as follows:
Step 1: obtain the intra coding modes of several coded blocks surrounding the current coding block; the current coding block is of size W*H, where W is the width and H is the height of the current coding block; the surrounding coded blocks are called neighboring coding blocks;
Step 2: obtain the set of coding modes one of the current coding block from the intra coding modes of the neighboring coding blocks obtained in Step 1;
Step 3: obtain the corresponding mode two for each mode one in the set of coding modes one: either choose as mode two the two other modes closest in direction to mode one, or choose as mode two the mode that has minimum prediction distortion when combined with mode one. From the set of coding modes one obtained in Step 2, obtain for each mode in the set another coding mode of the current coding block, i.e., the set of coding modes two; merge the set of modes one and the set of modes two to obtain a set of two-tuples, each two-tuple comprising a related mode one and mode two;
Step 4: for each mode combination in the two-tuple set produced in Step 3, generate two different prediction blocks by interpolating the pixels neighboring the current block; the bi-directional prediction result of the current coding block is the weighted average of these two prediction blocks; the optimal combination of mode one and mode two is selected to predict the current block;
Step 5: select the optimal prediction mode separately for the luma block and the chroma blocks in a coding unit;
Step 6: encode the coding modes of the luma block and the chroma blocks in the coding unit separately.
Embodiment 2: The intra-frame prediction method in a hybrid video coding standard of this embodiment is characterized in that:
In Step 1, the neighboring coding blocks are the already-coded intra blocks to the left of, above, below-left of, and above-right of the current coding block; for example, the left block L and the above block A of the current block shown in Fig. 1 may be selected;
Using a larger number of neighboring blocks to obtain intra prediction mode one of the current coding block is also supported; besides the left, above, below-left, and above-right neighbors, neighboring blocks at other positions are supported as well;
The set of coding modes one of the current coding block described in Step 2 is obtained as follows:
Select as mode one the several modes used most often by the neighboring coding blocks obtained in Step 1; or select as mode one the modes of the neighboring coding blocks to the left of and above the current coding block; or select as mode one the mode of any one of these neighboring blocks; or select as mode one a subset of the modes of these neighboring blocks; or assign a weight to each neighboring coding block, accumulate the weights of the neighboring blocks that share the same intra coding mode, and select as mode one the several modes of the neighboring blocks obtained in Step 1 with the largest accumulated weight. As in Fig. 1, the intra prediction modes of the left block L and the above block A of the current block may be selected as intra coding mode one of the current block.
The other steps are identical to Embodiment 1.
Embodiment 3: As shown in Fig. 2, in the intra-frame prediction method in a hybrid video coding standard of this embodiment, intra coding mode two of the current coding block is obtained in Step 3 as follows:
Choose as mode two the two coding modes closest in direction to the current mode one. Specifically, if mode one (mode1) lies between 3 and 33, mode two is chosen as mode1-1 and mode1+1; if mode1 is 2 or 34, mode two is chosen as 3 and 33; if mode1 is the DC mode or the PLANAR mode, mode two is chosen as 10 (the horizontal mode) and 26 (the vertical mode).
Or, in Step 3, intra coding mode two of the current coding block is obtained as follows:
Choose as mode two the mode that has minimum prediction distortion when combined with mode one: for each mode one, obtain the predictions of all remaining intra coding modes, compute the weighted average of the mode-one prediction with each remaining intra-mode prediction, and select as mode two the coding mode whose weighted-average prediction has minimum distortion with respect to the current coding block. Here the distortion criterion between the coding block and the prediction block can be minimum mean squared error, minimum Hadamard error, or rate-distortion optimization.
The other steps are identical to Embodiment 1 or 2.
Embodiment 4: In the intra-frame prediction method in a hybrid video coding standard of this embodiment, in Step 4 each coding-mode pair in the two-tuple set is tested by encoding, and the optimal mode pair is selected to predict the current block. The optimal mode pair is selected by minimum mean squared error, minimum Hadamard error, or rate-distortion optimization. The other steps are identical to Embodiment 1, 2, or 3.
Embodiment 5: In the intra-frame prediction method in a hybrid video coding standard of this embodiment, in Step 4 the predictions produced by the two different prediction modes are weighted as follows: weights are assigned to the prediction blocks of the different prediction modes. The weighted average may use identical weights for the two prediction blocks, i.e., they are simply averaged to obtain the prediction block of the current coding block; or different weights may be assigned according to the importance of the different prediction modes, or according to the accuracy of the predictions the different modes produce, or several likely weights may be set and the best weights obtained by search. The other steps are identical to Embodiment 1, 2, 3, or 4.
Embodiment 6: In the intra-frame prediction method in a hybrid video coding standard of this embodiment, in Step 5 the optimal prediction mode is selected for the luma block and the chroma blocks in a coding unit as follows: for the luma block, the optimal prediction mode is chosen from the original unidirectional prediction modes and the bi-prediction mode provided by the invention, using the minimum rate-distortion criterion; for a chroma block, if the corresponding luma block selected bi-directional prediction as its optimal mode, the optimal prediction mode of the current chroma block is the two prediction modes selected by its corresponding luma block. The other steps are identical to Embodiment 1, 2, 3, 4, or 5.
Embodiment 7: In the intra-frame prediction method in a hybrid video coding standard of this embodiment, the intra mode coding method is: if the current intra coding mode is bi-directional prediction, the two coding modes of the bi-directional prediction, i.e., mode one and mode two, need to be encoded; mode one comes from a neighboring coding block, so the index of the selected neighboring block is encoded directly. The other steps are identical to Embodiment 1, 2, 3, 4, 5, or 6.
Embodiment 8: In this embodiment, mode one is taken from the left or above neighbor of the current block, so a 1-bit symbol suffices to indicate whether the chosen mode comes from the left or from above. Mode two is derived from mode one, so similarly a 1-bit symbol suffices to indicate which of the two modes adjacent to mode one was selected. For a chroma block, if the current luma block selected bi-directional prediction, the prediction mode of the chroma block is set to the bi-prediction mode and its two prediction modes are taken directly from the luma block, so no prediction mode needs to be encoded. If the current luma block selected an original unidirectional prediction, the chroma mode is chosen from the original five prediction modes. The other steps are identical to Embodiment 1, 2, 3, 4, 5, 6, or 7.
Examples
Example 1:
The specific implementation steps of the intra-frame prediction method in a hybrid video coding standard are as follows:
Step 1: obtain the coding modes modeL and modeA of the left and above neighboring blocks of the current coding block (of size W*H, where W is the width and H is the height of the current coding block);
Step 2: obtain mode one of the current coding block from the coding modes of the neighboring coding blocks obtained in Step 1. If modeL and modeA are equal, the mode-one set of the current coding block is {modeL}; if modeL and modeA are not equal, the mode-one set of the current coding block is {modeL, modeA};
Step 3: obtain mode two of the current coding block from the mode-one set obtained in Step 2. For each mode modei in the mode-one set: if modei lies between 3 and 33, mode two is chosen as modei-1 and modei+1; if modei is 2 or 34, mode two is chosen as 3 and 33; if modei is the DC mode or the PLANAR mode, mode two is chosen as 10 (the horizontal mode) and 26 (the vertical mode). By selecting the corresponding mode-two set for each mode in the mode-one set, a two-tuple set of mode-one and mode-two combinations is obtained; each element of the set consists of a corresponding mode one and mode two, i.e., (mode1, mode2). A sketch of this construction is given below.
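A non-normative sketch of Steps 2-3 of this example; the function names are illustrative, and HEVC-style mode numbering is assumed as before.

```python
# Sketch of Steps 2-3 of this example: the mode-one set is {modeL} or
# {modeL, modeA}, and each mode one is paired with its two derived mode-two
# candidates, giving the (mode1, mode2) tuples to be tested.
def mode_two_candidates(mode1):  # direction rule of Step 3 (HEVC-style numbering assumed)
    if mode1 in (0, 1):          # PLANAR or DC
        return [10, 26]          # horizontal and vertical
    if mode1 in (2, 34):
        return [3, 33]
    return [mode1 - 1, mode1 + 1]

def candidate_pairs(modeL, modeA):
    mode_one_set = [modeL] if modeL == modeA else [modeL, modeA]
    return [(m1, m2) for m1 in mode_one_set for m2 in mode_two_candidates(m1)]

print(candidate_pairs(3, 3))   # -> [(3, 2), (3, 4)]
print(candidate_pairs(1, 26))  # -> [(1, 10), (1, 26), (26, 25), (26, 27)]
```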
Step 4: for each mode combination (mode1, mode2) in the two-tuple set produced in Step 3, generate two different prediction blocks, pred1 and pred2, by interpolating the pixels neighboring the current block. The bi-directional prediction pred of the current coding block is the average of these two prediction blocks, i.e., pred = (pred1 + pred2 + 1) >> 1, and rate-distortion optimization selects the optimal combination of mode one and mode two to predict the current block. A sketch of this pair-selection step is given below.
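A sketch of the pair selection under stated assumptions: `predict_intra` stands in for the standard directional intra interpolation and is assumed to be supplied by the caller, and plain SSE is used as the cost for brevity, whereas the example above uses rate-distortion optimization.

```python
# Sketch of Step 4 (not the reference encoder): every (mode1, mode2) pair is
# tested, the two directional predictions are blended with the rounding
# average pred = (pred1 + pred2 + 1) >> 1, and the pair with minimum SSE
# against the original block is kept. `predict_intra(ref_pixels, mode)` is an
# assumed helper returning the W*H directional prediction.
import numpy as np

def best_bi_prediction(orig, ref_pixels, pairs, predict_intra):
    best = None
    for m1, m2 in pairs:
        p1 = predict_intra(ref_pixels, m1).astype(np.int32)
        p2 = predict_intra(ref_pixels, m2).astype(np.int32)
        pred = (p1 + p2 + 1) >> 1                          # rounding average of Step 4
        cost = int(np.sum((orig.astype(np.int32) - pred) ** 2))
        if best is None or cost < best[0]:
            best = (cost, (m1, m2), pred)
    return best  # (cost, chosen mode pair, blended prediction block)
```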
Step 5: select the optimal prediction mode separately for the luma block and the chroma blocks in a coding unit. For the luma block, the optimal prediction mode is chosen from the original unidirectional prediction modes and the bi-prediction mode provided by the invention, using the minimum rate-distortion criterion. For a chroma block, if the corresponding luma block selected bi-directional prediction as its optimal mode, the optimal prediction mode of the current chroma block is the two prediction modes selected by its corresponding luma block.
Step 6: encode the coding modes of the luma block and the chroma blocks in the coding unit separately. If the current intra coding mode is bi-directional prediction, the two coding modes of the bi-directional prediction, i.e., mode one and mode two, need to be encoded. Mode one comes from a neighboring coding block, so the index of the selected neighboring block is encoded directly. For example, when mode one is taken from the left or above neighbor of the current block, a 1-bit symbol suffices to indicate whether the chosen mode comes from the left or from above. Mode two is derived from mode one, so similarly a 1-bit symbol suffices to indicate which of the two modes adjacent to mode one was selected. For a chroma block, if the current luma block selected bi-directional prediction, the prediction mode of the chroma block is set to the bi-prediction mode and its two prediction modes are taken directly from the luma block, so no prediction mode needs to be encoded. If the current luma block selected an original unidirectional prediction, the chroma mode is chosen from the original five prediction modes.
The above is a further detailed description of the present invention in conjunction with specific preferred embodiments, and it should not be concluded that the specific implementation of the present invention is limited to these descriptions. The scope of the present invention is defined by the appended claims. For a person of ordinary skill in the art, simple deductions or substitutions made without departing from the inventive concept shall all be considered to fall within the scope of patent protection determined by the claims submitted with the present invention.
Example 1 was implemented on VC-0.4 (a test model that adds certain techniques to the HEVC test model HM12.0) and tested under the VC266 common test conditions; see VC266 Study Group, "Test condition and evaluation methodology," VC-02-N005, VC266 2nd Meeting: Suzhou, Mar. 2015.
The experimental results of Example 1 are shown in Table 1. Compared with VC-0.4, under the All Intra Main_HighBitrate (AI-HR) configuration the average BD bit-rate savings for the Y, U, and V components are 0.8%, 0.6%, and 1.1%, and under the All Intra Main_LowBitrate (AI-LR) configuration the average BD bit-rate savings for the Y, U, and V components are 0.7%, 0.4%, and 0.6%. The BD bit rate expresses the bit-rate saving of one method over another at the same objective quality; see G. Bjøntegaard, "Calculation of average PSNR differences between RD-curves," ITU-T SG16 Q.6 Document VCEG-M33, Austin, US, April 2001.
Table 1. BD bit-rate performance of Example 1 relative to VC-0.4

Claims (8)

1. An intra-frame prediction method in a hybrid video coding standard, the prediction method being used for the complex coding blocks present in video sequences, characterized in that the prediction method is carried out as follows:
Step 1: obtain the intra coding modes of several coded blocks surrounding the current coding block; the current coding block is of size W*H, where W is the width and H is the height of the current coding block; the surrounding coded blocks are called neighboring coding blocks;
Step 2: obtain the set of coding modes one of the current coding block from the intra coding modes of the neighboring coding blocks obtained in Step 1;
Step 3: obtain the corresponding mode two for each mode one in the set of coding modes one: either choose as mode two the two other modes closest in direction to mode one, or choose as mode two the mode that has minimum prediction distortion when combined with mode one.
From the set of coding modes one obtained in Step 2, obtain for each mode in the set another coding mode of the current coding block, i.e., the set of coding modes two; merge the set of modes one and the set of modes two to obtain a set of two-tuples, each two-tuple comprising a related mode one and mode two;
Step 4: for each mode combination in the two-tuple set produced in Step 3, generate two different prediction blocks by interpolating the pixels neighboring the current block; the bi-directional prediction result of the current coding block is the weighted average of these two prediction blocks; the optimal combination of mode one and mode two is selected to predict the current block;
Step 5: select the optimal prediction mode separately for the luma block and the chroma blocks in a coding unit;
Step 6: encode the coding modes of the luma block and the chroma blocks in the coding unit separately.
2. The intra-frame prediction method in a hybrid video coding standard according to claim 1, characterized in that:
In Step 1, the neighboring coding blocks are the already-coded intra blocks to the left of, above, below-left of, and above-right of the current coding block;
The set of coding modes one of the current coding block described in Step 2 is obtained as follows:
Select as mode one the several modes used most often by the neighboring coding blocks obtained in Step 1; or select as mode one the modes of the neighboring coding blocks to the left of and above the current coding block; or select as mode one the mode of any one of these neighboring blocks; or select as mode one a subset of the modes of these neighboring blocks; or assign a weight to each neighboring coding block, accumulate the weights of the neighboring blocks that share the same intra coding mode, and select as mode one the several modes of the neighboring blocks obtained in Step 1 with the largest accumulated weight.
3. The intra-frame prediction method in a hybrid video coding standard according to claim 1, characterized in that in Step 3, mode two of the current coding block is obtained from each mode one as follows:
Choose as mode two the two coding modes closest in direction to the current mode one. Specifically, denoting mode one by mode1: if mode1 lies between 3 and 33, mode two is chosen as mode1-1 and mode1+1; if mode1 is 2 or 34, mode two is chosen as 3 and 33; if mode1 is the DC mode or the PLANAR mode, mode two is chosen as 10 and 26;
Or choose as mode two the mode that has minimum prediction distortion when combined with mode one: for each mode one, obtain the predictions of all remaining intra coding modes, compute the weighted average of the mode-one prediction with each remaining intra-mode prediction, and select as mode two the coding mode whose weighted-average prediction has minimum distortion with respect to the current coding block; the distortion criterion between the coding block and the prediction block can be minimum mean squared error, minimum Hadamard error, or rate-distortion optimization.
4. The intra-frame prediction method in a hybrid video coding standard according to claim 1, characterized in that: in Step 4, each coding-mode pair in the two-tuple set is tested by encoding, and the optimal mode pair is selected to predict the current block; the optimal mode pair can be selected by minimum mean squared error, minimum Hadamard error, or rate-distortion optimization.
5. The intra-frame prediction method in a hybrid video coding standard according to claim 1, characterized in that: in Step 4, the predictions produced by the two different prediction modes are weighted as follows: weights are assigned to the prediction blocks of the different prediction modes; the weighted average may use identical weights for the two prediction blocks, i.e., they are simply averaged to obtain the prediction block of the current coding block; or different weights may be assigned according to the importance of the different prediction modes, or according to the accuracy of the predictions the different modes produce, or several likely weights may be set and the best weights obtained by search.
6. The intra-frame prediction method in a hybrid video coding standard according to claim 5, characterized in that: in Step 5, the optimal prediction mode is selected for the luma block and the chroma blocks in a coding unit as follows: for the luma block, the optimal prediction mode is chosen from the original unidirectional prediction modes and the bi-prediction mode, using the minimum rate-distortion criterion; and for a chroma block, if the corresponding luma block selected bi-directional prediction as its optimal mode, the optimal prediction mode of the current chroma block is the two prediction modes selected by its corresponding luma block.
7. The intra-frame prediction method in a hybrid video coding standard according to claim 1 or 6, characterized in that:
In Step 6, the coding modes of the luma block and the chroma blocks in the coding unit are encoded separately, as follows:
If the current intra coding mode is bi-directional prediction, the two coding modes of the bi-directional prediction, i.e., mode one and mode two, need to be encoded; mode one comes from a neighboring coding block, so the index of the selected neighboring block is encoded directly.
8. The intra-frame prediction method in a hybrid video coding standard according to claim 7, characterized in that:
For the luma block, when mode one is taken from the left or above neighbor of the current block, a 1-bit symbol suffices to indicate whether the chosen mode comes from the left or from above; mode two is derived from mode one, so similarly a 1-bit symbol suffices to indicate which of the two modes adjacent to mode one was selected;
For a chroma block, if the current luma block selected bi-directional prediction, the prediction mode of the chroma block is set to the bi-prediction mode, its two prediction modes are taken directly from the luma block, and no prediction mode needs to be encoded; if the current luma block selected an original unidirectional prediction, the chroma mode is chosen from the original five prediction modes.
CN201510861669.4A 2015-11-30 2015-11-30 Intra-frame prediction method in hybrid video coding standard Active CN105491390B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510861669.4A CN105491390B (en) 2015-11-30 2015-11-30 Intra-frame prediction method in hybrid video coding standard

Publications (2)

Publication Number Publication Date
CN105491390A (en) 2016-04-13
CN105491390B CN105491390B (en) 2018-09-11

Family

ID=55678056

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510861669.4A Active CN105491390B (en) 2015-11-30 2015-11-30 Intra-frame prediction method in hybrid video coding standard

Country Status (1)

Country Link
CN (1) CN105491390B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102790878A (en) * 2011-12-07 2012-11-21 北京邮电大学 Coding mode choosing method and device for video coding
CN103248895A (en) * 2013-05-14 2013-08-14 芯原微电子(北京)有限公司 Quick mode estimation method used for HEVC intra-frame coding
WO2015055832A1 (en) * 2013-10-18 2015-04-23 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Multi-component picture or video coding concept
CN103997646A (en) * 2014-05-13 2014-08-20 北京航空航天大学 Rapid intra-frame prediction mode selection method in high-definition video coding

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
AN-CHAO TSAI ET AL: "Intensity Gradient Technique for Efficient Intra-Prediction in H.264/AVC", 《IEEE TRANSACTIONS ON CIRCUITS & SYSTEMS FOR VIDEO TECHNOLOGY》 *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105681808A (en) * 2016-03-16 2016-06-15 同济大学 Rapid decision-making method for SCC interframe coding unit mode
WO2019161798A1 (en) * 2018-02-26 2019-08-29 Mediatek Inc. Intelligent mode assignment in video coding
CN113810687A (en) * 2019-09-23 2021-12-17 杭州海康威视数字技术股份有限公司 Encoding and decoding method, device and equipment
CN113794878B (en) * 2019-09-23 2022-12-23 杭州海康威视数字技术股份有限公司 Encoding and decoding method, device and equipment
CN113794878A (en) * 2019-09-23 2021-12-14 杭州海康威视数字技术股份有限公司 Encoding and decoding method, device and equipment
CN113810687B (en) * 2019-09-23 2022-12-23 杭州海康威视数字技术股份有限公司 Encoding and decoding method, device and equipment
WO2021114100A1 (en) * 2019-12-10 2021-06-17 中国科学院深圳先进技术研究院 Intra-frame prediction method, video encoding and decoding methods, and related device
CN113709500A (en) * 2019-12-23 2021-11-26 杭州海康威视数字技术股份有限公司 Encoding and decoding method, device and equipment
CN113709501A (en) * 2019-12-23 2021-11-26 杭州海康威视数字技术股份有限公司 Encoding and decoding method, device and equipment
CN113709501B (en) * 2019-12-23 2022-12-23 杭州海康威视数字技术股份有限公司 Encoding and decoding method, device and equipment
WO2022028422A1 (en) * 2020-08-03 2022-02-10 Alibaba Group Holding Limited Systems and methods for bi-directional prediction correction
US11582474B2 (en) 2020-08-03 2023-02-14 Alibaba Group Holding Limited Systems and methods for bi-directional gradient correction
CN114650423A (en) * 2020-12-30 2022-06-21 杭州海康威视数字技术股份有限公司 Encoding and decoding method, device and equipment
CN114650423B (en) * 2020-12-30 2022-12-23 杭州海康威视数字技术股份有限公司 Encoding and decoding method, device and equipment
CN113794885B (en) * 2020-12-30 2022-12-23 杭州海康威视数字技术股份有限公司 Encoding and decoding method, device and equipment
CN113794885A (en) * 2020-12-30 2021-12-14 杭州海康威视数字技术股份有限公司 Encoding and decoding method, device and equipment

Also Published As

Publication number Publication date
CN105491390B (en) 2018-09-11

Similar Documents

Publication Publication Date Title
CN105491390B (en) Intra-frame prediction method in hybrid video coding standard
CN104935941B Method for decoding an intra prediction mode
CN105306944B Chroma component prediction method in hybrid video coding standard
CN104935938B Inter-frame prediction method in hybrid video coding standard
CN107197264B Method of decoding a video signal
CN106067980B Device for encoding an image
CN105325000B (en) Picture coding device, image encoding method, picture decoding apparatus and picture decoding method
CN105981389B (en) Picture coding device, picture decoding apparatus, encoding stream converting means, image encoding method and picture decoding method
US10091526B2 (en) Method and apparatus for motion vector encoding/decoding using spatial division, and method and apparatus for image encoding/decoding using same
JP6807987B2 (en) Image coding device, moving image decoding device, moving image coding data and recording medium
CN109792521A Method and apparatus for encoding/decoding an image, and recording medium storing a bitstream
TWI665908B (en) Image decoding device, image decoding method, image encoding device, image encoding method, computer-readable recording medium
CN110089113A Image encoding/decoding method and device, and recording medium storing a bitstream
CN104811717A (en) Methods and apparatuses for encoding/decoding high resolution images
CN109996082A (en) Method and apparatus for sharing candidate list
CN110366850A Method and apparatus for processing an image based on an intra prediction mode
CN110365982A Accelerated method for selecting among different intra-coding transforms in multi-purpose coding
CN105847794A Fast selection method for HEVC intra-frame prediction modes

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant