CN101674475A - Self-adapting interlayer texture prediction method of H.264/SVC
Abstract
The invention discloses an adaptive inter-layer texture prediction method for H.264/SVC. When each intra slice of the current spatial enhancement layer is encoded, the method comprises: determining 16 corresponding reference-layer pixels for each current-layer pixel, and interpolation-filtering those 16 reference-layer pixels with a two-dimensional Wiener filter to obtain the inter-layer texture prediction value of the current-layer pixel; and transmitting the coefficients of the two-dimensional Wiener filter to the decoder. Correspondingly, when the decoder processes each intra slice of the current spatial enhancement layer, it interpolation-filters the 16 reference-layer pixels corresponding to each current-layer pixel according to the filter coefficients sent by the encoder, obtaining the inter-layer texture prediction values of the current-layer pixels. The method can be applied to improve inter-layer texture prediction performance and SVC spatial scalable coding efficiency.
Description
Technical field
The present invention relates to video compression coding technology, and in particular to an adaptive inter-layer texture prediction method for H.264/SVC.
Background
Scalable Video Coding (SVC) is a technology that has emerged over roughly the last twenty years in response to the diversity of modern video transmission systems and terminals. Scalability means that parts of a video bitstream can be selectively discarded according to certain rules, so that the stream adapts to the demands of heterogeneous network conditions and terminal capabilities.
The latest SVC standard was formulated by the Joint Video Team (JVT) as Annex G of the H.264/AVC (Advanced Video Coding) standard and is commonly referred to as H.264/SVC. In the following description, unless otherwise specified, SVC refers specifically to H.264/SVC.
An SVC bitstream is characterized by containing several sub-bitstreams, any of which can be extracted and decoded on demand. The current SVC standard realizes the three most common kinds of scalability: temporal scalability, spatial scalability and quality scalability. Spatial scalability means that different sub-bitstreams can be decoded to obtain video images of different sizes (spatial resolutions). Different scalabilities can be combined, so that a single SVC bitstream can express video content at multiple spatio-temporal resolutions and qualities with great flexibility.
The various scalabilities of SVC are achieved with layered coding. Every SVC bitstream consists of an AVC-compatible base layer (Base Layer) and several enhancement layers (Enhancement Layer). The base layer corresponds to the video content of lowest resolution or quality, and its bitrate is the lowest; each enhancement layer provides higher resolution or quality relative to the base layer, with a corresponding increase in bitrate. When transmission conditions deteriorate or the terminal's computing capability is insufficient, so that the complete bitstream cannot be effectively transmitted and decoded, sub-bitstreams can be discarded layer by layer starting from the highest enhancement layer, down to the base layer alone.
The good compression efficiency of SVC benefits, on the one hand, from inheriting all the coding tools of AVC, for example multi-partition intra prediction, multi-reference-frame fractional-pixel motion compensation, context-adaptive entropy coding and so on; on the other hand, when encoding an enhancement layer, SVC uses a lower layer as a reference layer and introduces a series of inter-layer prediction techniques to remove inter-layer redundancy.
SVC spatial scalability uses three inter-layer prediction techniques: inter-layer texture prediction (Texture Inter-layer Prediction), inter-layer motion prediction (Motion Inter-layer Prediction) and inter-layer residual prediction (Residual Inter-layer Prediction). The purpose of inter-layer texture prediction is to use the reconstructed pixel information of the existing reference layer to obtain texture prediction information for the layer currently being encoded. Inter-layer texture prediction requires the reference-layer information to be fully decoded; if the corresponding reference-layer macroblock is inter-coded, the motion compensation operation would force the full decoding of a series of reference frames and greatly increase decoding complexity. The SVC standard therefore restricts inter-layer texture prediction to intra-coded macroblocks. Fig. 1 is the system block diagram of an SVC encoder with inter-layer texture prediction; the intra inter-layer texture prediction box is the inter-layer texture prediction part of the encoder. Concretely, inter-layer texture prediction in the SVC standard is performed as follows:
1. Compute the geometric position.
For each macroblock pixel coded in the current layer, its relative position with respect to the base-layer pixels is obtained from the size ratio and positional relation between the current-layer image and the reference-layer image. The SVC standard specifies that the relative position coordinates are expressed with 1/16-pixel precision. Fig. 2 shows the positional relation between current-layer pixels and reference-layer pixels for dyadic (2x) spatial scalability.
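As an illustration, a simplified version of the 1/16-pel position mapping can be sketched as follows. This is a sketch under assumptions: it uses the center-aligned mapping (x + 1/2) * w_ref / w_cur - 1/2 and omits the cropping-window offsets included in the standard's full derivation.

```python
def ref_position_16(x_cur, w_cur, w_ref):
    """Map a current-layer coordinate to a reference-layer position
    in 1/16-pel units.

    Simplified center-aligned mapping (x + 1/2) * w_ref / w_cur - 1/2;
    the cropping-window offsets of the standard are omitted (assumed
    zero for this sketch).
    """
    return ((2 * x_cur + 1) * w_ref * 16) // (2 * w_cur) - 8

# For dyadic scaling the fractional part (low 4 bits) alternates
# between 4/16 = 1/4 and 12/16 = 3/4 with the parity of the coordinate.
```

The same mapping is applied independently in the vertical direction with the layer heights.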
2. Upsample the reconstructed reference-layer pixels.
Using the relative positions obtained in step 1, the reconstructed reference-layer pixels are upsampled to obtain the prediction values of the current-layer pixels. SVC specifies the upsampling filter as a family of 1-D 4-tap polyphase interpolation filters (poly-phase interpolation filter), whose coefficients are obtained by table look-up; the phase selection depends on the relative position information computed in step 1. Upsampling is applied first horizontally, then vertically.
Inter-layer texture prediction is an effective technique for removing the pixel dependency between intra macroblocks of different layers. Its performance depends primarily on the interpolation performance of the upsampling filter: the closer the interpolation result is to the image currently being coded, the smaller the prediction residual and the higher the coding efficiency. Although the fixed-coefficient 4-tap polyphase interpolation filters are simple to implement, their upsampling performance is not optimal. Taking dyadic spatial scalability as an example (the enhancement layer is twice the width and height of the reference layer), the interpolation filter phases computed by the SVC standard are 1/4 or 3/4, and the corresponding interpolation filter coefficients are read from the standard's coefficient table for those phases.
For example, when inter-layer texture prediction is performed for current-layer pixel q_A, the reference-layer pixels p10, p11, p12 and p13 are first interpolation-filtered in the horizontal direction with the corresponding filter coefficients, and then the reference-layer pixels p01, p11, p21 and p31 are interpolation-filtered in the vertical direction.
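The two-pass 1-D filtering just described can be sketched as a separable interpolation over a 4x4 reference window. The kernel values below are illustrative placeholders that sum to 1, not the coefficients of the standard's phase look-up table.

```python
import numpy as np

def separable_interp(ref, taps_h, taps_v, i, j):
    """Predict one current-layer pixel from a 4x4 reference-layer window.

    ref      : 2-D array of reconstructed reference-layer pixels
    taps_h/v : 4-tap horizontal/vertical kernels
    (i, j)   : top-left corner of the 4x4 window (pixel p00)
    """
    # Horizontal pass: filter each of the four rows of the window.
    rows = np.array([np.dot(taps_h, ref[i + r, j:j + 4]) for r in range(4)])
    # Vertical pass: filter the four horizontal results.
    return float(np.dot(taps_v, rows))

# Illustrative 1/4-phase-style kernel (placeholder values, sum = 1).
taps = np.array([-0.05, 0.85, 0.25, -0.05])
```

On a flat region the interpolation reproduces the constant value exactly, since the kernel sums to 1.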
Such a simple 1-D low-pass filter suffers from the following deficiencies when upsampling a video signal:
1. Because a video signal is spatially two-dimensional, separate horizontal and vertical 1-D filtering cannot comprehensively cover its characteristics in all 2-D directions, so interpolation quality is poor along non-horizontal, non-vertical directions;
2. Because the statistics of a video signal are non-stationary, a fixed filter design cannot achieve optimal interpolation performance in every situation;
3. Aliasing distortion is inevitably introduced during video acquisition and severely affects interpolation performance; a low-pass filter has no anti-aliasing capability, which also leaves inter-layer texture prediction performance wanting.
The limited performance of the interpolation filter directly affects inter-layer texture prediction and thereby constrains SVC coding efficiency. There is therefore still room to improve the inter-layer texture prediction technique of the current SVC standard.
Summary of the invention
In view of this, the present invention provides an adaptive inter-layer texture prediction method for H.264/SVC that can improve inter-layer texture prediction performance and SVC coding efficiency.
To achieve the above object, the present invention adopts the following technical scheme:
An adaptive inter-layer texture prediction method in H.264/SVC encoding. When each intra slice of the current spatial enhancement layer is encoded, the method comprises:
determining the relative positions of current-layer pixels and reference-layer pixels and, for any pixel of the current layer, determining the 16 reference-layer pixels corresponding to that pixel; dividing all pixels of the current layer into 4 subsets according to the relative position between each pixel and its 16 corresponding reference pixels;
for the current-layer pixels belonging to any subset, computing the autocorrelation matrix of the 16 corresponding reference-layer pixels and the cross-correlation vector between the current-layer pixels and their 16 corresponding reference-layer pixels;
determining from said autocorrelation matrix and cross-correlation vector the 2-D Wiener filter coefficients h_ij corresponding to that subset, where i and j are the row and column indices of the 16 reference-layer pixels corresponding to a current-layer pixel;
using the filter coefficients corresponding to that subset to interpolate the 16 reference-layer pixels corresponding to each current-layer pixel in the subset, obtaining the inter-layer texture prediction value of the current-layer pixel, and transmitting said filter coefficients to the decoder.
Preferably, the 2-D Wiener filter coefficients corresponding to said subset are determined from the autocorrelation matrix R_pp and cross-correlation vector r_pq as h = R_pp^(-1) · r_pq.
Preferably, the obtained 2-D Wiener filter coefficients are quantized and rounded, and said transmitting the filter coefficients to the decoder is: transmitting the quantized and rounded filter coefficients to the decoder.
Preferably, the 2-D Wiener filter coefficients are computed independently for each intra slice of the spatial enhancement layer, and the filter coefficients of each intra slice are used for texture prediction of the current-layer pixels of that slice.
Preferably, the filter coefficients sent to the decoder are entropy-coded before transmission.
An adaptive inter-layer texture prediction method in H.264/SVC decoding. When each intra slice of the current spatial enhancement layer is decoded, the method comprises:
determining the relative positions of current-layer pixels and reference-layer pixels and, for any pixel of the current layer, determining the 16 reference-layer pixels corresponding to that pixel; dividing all pixels of the current layer into 4 subsets according to the relative position between each pixel and its 16 corresponding reference pixels;
receiving the 2-D Wiener filter coefficients h_ij corresponding to each subset sent by the encoder; for each current-layer pixel, using the 2-D Wiener filter coefficients of the subset to which the pixel belongs to interpolation-filter the 16 reference-layer pixels corresponding to the pixel, thereby determining its inter-layer texture prediction value.
Preferably, if the received 2-D Wiener filter coefficients were entropy-coded, the interpolation filtering is performed after entropy-decoding the received coefficients.
As can be seen from the above technical scheme, in the present invention the encoder, when processing each intra slice of the current spatial enhancement layer, determines 16 corresponding reference-layer pixels for each current-layer pixel and interpolation-filters them with a 2-D Wiener filter to obtain the inter-layer texture prediction value of the current-layer pixel. To guarantee encoder-decoder consistency, the filter coefficients are sent to the decoder. Correspondingly, the decoder, when processing each intra slice of the current spatial enhancement layer, interpolation-filters the 16 reference-layer pixels corresponding to each current-layer pixel according to the filter coefficients sent by the encoder, obtaining the pixel's inter-layer texture prediction value. Because the 16 reference-layer pixels include pixels in the horizontal, vertical, non-horizontal and non-vertical directions, and a 2-D Wiener filter is adopted, the deficiencies of the existing SVC inter-layer texture prediction technique in coping with the spatial non-stationarity of video signals and with aliasing distortion are overcome, the accuracy of inter-layer texture prediction for SVC spatial scalability is raised, and the coding efficiency of SVC intra pictures is improved.
Description of drawings
Fig. 1 is the system block diagram of an SVC encoder with inter-layer texture prediction.
Fig. 2 is a schematic diagram of the pixel positions and phase relations for dyadic spatial scalability.
Fig. 3 is the flow chart of the adaptive inter-layer texture prediction method in H.264/SVC encoding according to the present invention.
Fig. 4a is a schematic diagram of the 2-D frequency response of the S_A interpolation filter obtained with the SVC standard method.
Fig. 4b is the 2-D frequency response of the S_A interpolation filter obtained with the method of this embodiment.
Fig. 5 is a schematic comparison of the rate-distortion curves obtained by simulating the method of this embodiment and the SVC standard method on the small-size "SOCCER" sequence.
Fig. 6 is a schematic comparison of the rate-distortion curves obtained by simulating the method of this embodiment and the SVC standard method on the large-size "HARBOUR" sequence.
Embodiment
To make the purpose, technical means and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings.
The basic idea of the present invention is: when performing the upsampling operation of inter-layer texture prediction, a 2-D filter is used to interpolation-filter the reference-layer pixels corresponding to a current-layer pixel in the horizontal, vertical, non-horizontal and non-vertical directions, thereby improving the accuracy of inter-layer texture prediction.
Because the decoder side is simple to implement, and because the encoder in a hybrid coding framework contains the basic structure of the decoder, the following mainly elaborates the encoder side.
Fig. 3 is a schematic diagram of the inter-layer texture prediction method performed by the encoder in the present invention. Inter-layer texture prediction is identical for every intra slice of the current spatial enhancement layer, so the processing of one intra slice is described below as an example. As shown in Fig. 3, the method comprises:
Step 301: determine the relative positions of current-layer pixels and reference-layer pixels, and divide the current-layer pixels of the slice into 4 subsets.
In the present invention, a current-layer pixel is predicted by interpolation filtering of its 16 corresponding reference-layer pixels, which yields a more accurate inter-layer texture prediction result. Therefore, in this step, for any pixel A of the current layer in the slice, the 16 reference-layer pixels corresponding to pixel A must be determined.
Specifically, the positional relation between reference-layer and current-layer pixels follows the rules of the SVC standard. Fig. 2 shows the positional relation between reference-layer pixels and current-layer pixels for dyadic spatial scalability. The reference-layer pixels p_ij, i, j = 0...3 form a 2-D window, at whose center lie the current-layer pixels q_A, q_B, q_C and q_D; their predicted values are obtained by interpolation-filtering p_ij, i, j = 0...3. In other words, the 16 reference-layer pixels corresponding to each of q_A, q_B, q_C and q_D are p_ij, i, j = 0...3. Every other current-layer pixel in Fig. 2 likewise determines its own 16 corresponding reference-layer pixels through the same relative position relation.
For all current-layer pixels in a slice, the relative position between a pixel and its 16 corresponding reference-layer pixels takes one of four forms; accordingly, all current-layer pixels in the slice are divided into 4 subsets by the relative position between each pixel and its 16 corresponding reference-layer pixels. Concretely, the subset of any current-layer pixel q is determined by its horizontal and vertical phases, i.e. by the pair (phase_y[q], phase_x[q]), where phase_y[q] and phase_x[q] denote the vertical and horizontal phase of current-layer pixel q respectively.
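For the dyadic case, the four (phase_y, phase_x) combinations can be mapped to a subset index directly from coordinate parity. This is a sketch under the assumption that the phases alternate with pixel parity, as in Fig. 2; the standard's exact phase computation is not reproduced.

```python
def subset_index(x, y):
    """Classify a current-layer pixel (x, y) into one of 4 subsets.

    Assumes dyadic scaling, where the horizontal and vertical phases
    each alternate between 1/4 and 3/4 with coordinate parity, so the
    pair (phase_y[q], phase_x[q]) is determined by (y % 2, x % 2).
    """
    return 2 * (y % 2) + (x % 2)
```

The four resulting indices correspond to the four relative-position cases represented by q_A, q_B, q_C and q_D.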
The current-layer pixels q_A, q_B, q_C and q_D above, with their corresponding 16 reference-layer pixels, represent these four cases; for every other current-layer pixel, the relative position between it and its 16 corresponding reference-layer pixels is necessarily identical to that of one of q_A, q_B, q_C or q_D.
Step 302: determine the filter coefficients corresponding to each subset.
For each current-layer pixel within any one subset, the relative position between the pixel and its 16 corresponding reference-layer pixels is identical; therefore the same filter coefficients can be used for the inter-layer texture prediction of every current-layer pixel in that subset, and in the present invention these are called the filter coefficients corresponding to that subset. Current-layer pixels belonging to different subsets use different filter coefficients, but the coefficients of the different subsets can all be obtained in the same manner, only the resulting values differing. The filter coefficients may be determined in any of various existing ways.
Below, taking the determination of the coefficients of subset A (the subset to which q_A belongs) as an example, one way of deriving the filter coefficients is presented.
Suppose h_ij, i, j = 0...3 are the 16 filter coefficients used to obtain the predicted value of q_A, corresponding respectively to the 16 reference-layer pixels p_ij, i, j = 0...3, where i and j are the row and column indices of the 16 reference-layer pixels. Then the predicted value q'_A of q_A is given by

    q'_A = sum over i, j = 0...3 of h_ij · p_ij        (2)

and the prediction residual err is

    err = q'_A - q_A        (3)

According to the Wiener filter principle, the filter coefficients h_ij, i, j = 0...3 satisfy the minimum error-energy criterion, i.e.

    h_ij = argmin E[err^2]        (4)

To minimize E[err^2], the first-order partial derivatives with respect to {h_ij} are set to zero. This yields a system of 16 linear equations in the coefficients which, written in matrix form, is

    R_pp · h = r_pq        (5)

where R_pp denotes the autocorrelation matrix of {p_ij}, r_pq denotes the cross-correlation vector between {p_ij} and q_A, and h denotes the coefficient vector formed by the group {h_ij}. Solving gives

    h = R_pp^(-1) · r_pq        (6)
Based on the above derivation, determining the filter coefficients corresponding to subset A in this step comprises the following operations:
Step 302a: compute the autocorrelation matrix R_pp of the 16 reference-layer pixels corresponding to all current-layer pixels in subset A, and the cross-correlation vector r_pq between all current-layer pixels in subset A and their 16 corresponding reference-layer pixels.
Specifically, the autocorrelation matrix is a statistical matrix and the cross-correlation vector is a statistical vector. The autocorrelation of the 16 reference-layer pixels corresponding to each pixel in the current slice is computed, and the autocorrelation results over all current-layer pixels are statistically averaged to obtain the 16x16 autocorrelation matrix R_pp. The cross-correlation between each pixel in the current slice and its 16 corresponding reference-layer pixels is computed, and the cross-correlation results over all current-layer pixels are statistically averaged to obtain the 16x1 cross-correlation vector r_pq.
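Steps 302a and 302b can be sketched with NumPy as below. The convention that (i, j) locates the top-left pixel p00 of a current-layer pixel's 4x4 reference window is an assumption made for illustration.

```python
import numpy as np

def wiener_coeffs(ref, subset_samples):
    """Solve R_pp . h = r_pq for one subset (steps 302a and 302b).

    ref            : reference-layer reconstruction (2-D array)
    subset_samples : list of (q, i, j) where q is a current-layer pixel
                     value and (i, j) is the top-left of its 4x4
                     reference window (offset convention assumed)
    """
    R = np.zeros((16, 16))          # autocorrelation accumulator
    r = np.zeros(16)                # cross-correlation accumulator
    for q, i, j in subset_samples:
        p = ref[i:i + 4, j:j + 4].reshape(16)   # the 16 reference pixels
        R += np.outer(p, p)
        r += p * q
    R /= len(subset_samples)        # statistical averaging
    r /= len(subset_samples)
    return np.linalg.solve(R, r)    # h = R_pp^(-1) r_pq
```

If the current-layer pixels really are a fixed linear combination of their reference windows, the solve recovers that combination exactly.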
Step 302b: using the autocorrelation matrix and cross-correlation vector obtained in step 302a, determine the filter coefficients corresponding to subset A as h = R_pp^(-1) · r_pq, where h is the 16x1 vector formed by arranging the 16 filter coefficients in sequence.
Step 303: quantize the filter coefficients.
In a practical implementation, considering the high complexity of floating-point arithmetic and the subsequent need to transmit the filter coefficients, the coefficients are preferably quantized and rounded. With 8-bit quantization, the quantization is

    h'_ij = round[h_ij × 2^8]        (7)
Step 304: use the filter coefficients of each subset determined in step 302 to perform interpolation filtering and obtain the inter-layer texture prediction values of the current-layer pixels.
For any pixel q of the current layer, the subset to which it belongs is first identified from its horizontal and vertical phase information; the filter coefficients h'_ij corresponding to that subset and the 16 reference-layer pixels p_ij, i, j = 0...3 corresponding to pixel q are then taken, and interpolation filtering, computed as the weighted sum of the 16 reference-layer pixels (renormalized to compensate for the coefficient quantization), yields q', the predicted pixel value given by the inter-layer texture prediction technique.
In this manner, inter-layer texture prediction is performed for every pixel in the current slice, obtaining the inter-layer texture prediction value of each pixel.
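Step 304 with 8-bit quantized coefficients can be sketched as follows. The rounding offset and the right-shift renormalization convention are assumptions of this sketch, not mandated by the text.

```python
import numpy as np

def quantize_coeffs(h, bits=8):
    """h'_ij = round(h_ij * 2^bits), per the 8-bit quantization step."""
    return np.round(np.asarray(h) * (1 << bits)).astype(int)

def predict_pixel(ref, hq, i, j, bits=8):
    """Interpolate one current-layer pixel with quantized coefficients.

    ref    : integer reference-layer reconstruction (2-D array)
    hq     : 16 quantized coefficients of the pixel's subset
    (i, j) : top-left of the 4x4 reference window (convention assumed)
    """
    p = ref[i:i + 4, j:j + 4].reshape(16)
    acc = int(np.dot(hq, p))
    # Renormalize with a rounding offset (an assumed convention).
    return (acc + (1 << (bits - 1))) >> bits
```

With a single unit coefficient, the prediction simply copies the corresponding reference pixel, which makes the renormalization easy to check.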
Step 305: transmit the filter coefficients corresponding to each subset to the decoder.
Step 302 determined 16 filter coefficients for each of the 4 subsets of the current layer, i.e. 64 filter coefficients in total for the slice. To guarantee consistency between the encoding and decoding ends, these 64 filter coefficients are transmitted to the decoder in this step. Specifically, for efficient transmission of the coefficients, the quantized coefficients of step 303 can be entropy-coded with the Exp-Golomb coding technique used in the H.264 standard, and the coded coefficient bits placed in a user-defined bitstream element of the slice header.
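The signed Exp-Golomb code se(v) used in H.264 can serialize the 64 quantized coefficients; the sketch below builds the bit string only, leaving out the slice-header packaging.

```python
def se_exp_golomb(k):
    """Encode a signed integer with H.264 signed Exp-Golomb, se(v).

    Signed-to-unsigned mapping: k > 0 -> 2k - 1, k <= 0 -> -2k.
    The code word is M zeros, a '1', then the M low bits of
    code_num + 1, where M = floor(log2(code_num + 1)).
    """
    code_num = 2 * k - 1 if k > 0 else -2 * k
    prefix_len = (code_num + 1).bit_length() - 1
    prefix = '0' * prefix_len + '1'
    suffix = bin(code_num + 1)[3:]   # binary digits after the leading 1
    return prefix + suffix

def encode_slice_coeffs(coeffs):
    """Concatenate the se(v) code words of all coefficients."""
    return ''.join(se_exp_golomb(c) for c in coeffs)
```

Small-magnitude coefficients get short code words, which suits the near-symmetric, peaked distribution one would expect of quantized Wiener coefficients.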
This completes the encoder-side inter-layer texture prediction flow.
Correspondingly, the decoder-side inter-layer prediction flow comprises:
Step 401: determine the relative positions of current-layer pixels and reference-layer pixels, and divide all current-layer pixels in the slice into 4 subsets.
The implementation of this step is identical to that of step 301 and is not repeated here.
Step 402: receive the filter coefficients corresponding to each subset sent by the encoder.
If the received filter coefficients were entropy-coded, the corresponding entropy decoding must be performed after the respective filter coefficients are received in this step.
Step 403: for each current-layer pixel, use the filter coefficients of the subset to which the pixel belongs to interpolation-filter the 16 reference-layer pixels corresponding to the pixel, obtaining the pixel's inter-layer texture prediction value.
The interpolation filtering with the received filter coefficients in this step is identical to that of step 304 and is not repeated here.
This completes the decoder-side inter-layer texture prediction flow of the present invention.
The above is the implementation of the inter-layer texture prediction technique of the present invention. In this implementation, the adaptive filter coefficients are computed independently for each intra slice of the spatial enhancement layer, and all pixels of the same subset within one intra slice of a spatial enhancement layer are interpolated with the same filter coefficients. The technical scheme of the present invention thus realizes slice-level adaptivity of the upsampling filter.
To compare the performance of this embodiment of the invention with the SVC standard technique, the inter-layer texture prediction technique of the present invention was implemented on the official SVC test platform JSVM 9.14. The simulation setup and results follow.
Because inter-layer texture prediction is used only in intra coding under SVC spatial scalability, all experiments use intra coding exclusively. Each experimental bitstream consists of one base layer and one enhancement layer, the enhancement layer being twice the width and height of the base layer, i.e. dyadic spatial scalability.
Table 1 gives the sequence information used in the experiments, and Table 2 gives the corresponding results. The results show that, at equal video quality, the technique of the present invention saves 3.61% bitrate at the small size and 9.05% bitrate at the large size compared with the SVC standard.
Table 1: coded sequence information for testing the inter-layer texture prediction technique of the present invention.
Table 2: comparison of the results of the inter-layer texture prediction technique of the present invention and the SVC standard technique.
Fig. 4 compares the 2-D frequency responses of the S_A interpolation filters obtained by the method of this embodiment and by the SVC standard method for the first frame of the "SOCCER" sequence: Fig. 4a shows the response of the S_A interpolation filter obtained with the SVC standard method, and Fig. 4b that obtained with the method of this embodiment. It can be seen that the 2-D Wiener adaptive filter used in the present invention comprehensively covers the characteristics in all spatial directions, and its frequency response does not cut off completely at the band edge, giving it the ability to resist aliasing distortion. For video interpolation it is superior to the simple low-pass filter adopted by the SVC standard.
Fig. 5 and Fig. 6 compare the simulated rate-distortion curves of the method of the present invention and the SVC standard method on the small-size "SOCCER" sequence and the large-size "HARBOUR" sequence, respectively.
The rate-distortion comparison shows that the method of the present invention markedly improves coding efficiency.
In summary, the present invention performs SVC inter-layer texture prediction with a 2-D Wiener adaptive interpolation filter, overcoming the deficiencies of the 1-D 4-tap polyphase interpolation filters of the existing SVC standard in handling the 2-D non-stationarity of images and in coping with aliasing distortion. Because the Wiener adaptive filtering is based on the minimum prediction-error criterion, the inter-layer texture prediction of the present invention is more efficient, yielding better coding compression.
The above are only preferred embodiments of the present invention and are not intended to limit its scope of protection. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.
Claims (7)
1. An adaptive inter-layer texture prediction method in H.264/SVC encoding, characterized in that, when each intra slice of the current spatial enhancement layer is encoded, the method comprises:
determining the relative positions of current-layer pixels and reference-layer pixels and, for any pixel of the current layer, determining the 16 reference-layer pixels corresponding to that pixel; dividing all pixels of the current layer into 4 subsets according to the relative position between each pixel and its 16 corresponding reference pixels;
for the current-layer pixels belonging to any subset, computing the autocorrelation matrix of the 16 corresponding reference-layer pixels and the cross-correlation vector between the current-layer pixels and their 16 corresponding reference-layer pixels;
determining from said autocorrelation matrix and cross-correlation vector the 2-D Wiener filter coefficients h_ij corresponding to that subset, where i and j are the row and column indices of the 16 reference-layer pixels corresponding to a current-layer pixel;
using the filter coefficients corresponding to that subset to interpolate the 16 reference-layer pixels corresponding to each current-layer pixel in the subset, obtaining the inter-layer texture prediction value of the current-layer pixel, and transmitting said filter coefficients to the decoder.
2. The method according to claim 1, wherein the 2-D Wiener filter coefficients of said subset are determined from the autocorrelation matrix R_pp and the cross-correlation vector r_pq by solving R_pp · h = r_pq.
3. The method according to claim 1 or 2, wherein the obtained 2-D Wiener filter coefficients are quantized and rounded; and transmitting the filter coefficients to the decoder comprises transmitting the quantized and rounded filter coefficients.
4. The method according to claim 1 or 2, wherein said 2-D Wiener filter coefficients are computed independently for each intra slice of the spatial enhancement layer, and the filter coefficients of each intra slice are used for the texture prediction of the current-layer pixels of that slice.
5. The method according to claim 1 or 2, wherein the filter coefficients sent to the decoder are entropy-coded before transmission.
6. An adaptive inter-layer texture prediction method in H.264/SVC decoding, wherein, when decoding each intra slice of the current spatial enhancement layer, the method comprises:
determining the relative positions of current-layer pixels and reference-layer pixels, and, for any pixel of the current layer, determining the 16 reference-layer pixels corresponding to that pixel; dividing all pixels of the current layer into 4 subsets according to the relative positional relationship between each pixel and its 16 corresponding reference pixels;
receiving the 2-D Wiener filter coefficients h_ij of each subset sent by the encoder; and, for each current-layer pixel, performing interpolation filtering on the 16 reference-layer pixels corresponding to that pixel using the 2-D Wiener filter coefficients of the subset to which that pixel belongs, thereby determining the inter-layer texture prediction value of that pixel.
7. The method according to claim 6, wherein, if the received 2-D Wiener filter coefficients have been entropy-coded, said interpolation filtering is performed after entropy-decoding the received 2-D Wiener filter coefficients.
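The claims do not spell out how the 4-subset partition is computed, but for dyadic (2x) spatial scalability a current-layer pixel's phase (x mod 2, y mod 2) fixes its geometric relation to the 4x4 block of reference-layer pixels, giving exactly four classes, each with its own filter h_ij. A minimal sketch under that assumption (the function name `subset_index` is illustrative, not from the patent):

```python
# Hedged sketch of the 4-subset partition from the claims, assuming
# dyadic (2x) spatial scalability between reference and current layers.
def subset_index(x, y):
    """Map a current-layer pixel (x, y) to one of 4 subsets (0..3)
    according to its interpolation phase relative to the reference grid."""
    return (y % 2) * 2 + (x % 2)

# Group the pixels of an 8x8 enhancement-layer block by subset: each
# subset collects the pixels that share one filter coefficient set.
subsets = {k: [] for k in range(4)}
for y in range(8):
    for x in range(8):
        subsets[subset_index(x, y)].append((x, y))

print([len(v) for v in subsets.values()])  # prints [16, 16, 16, 16]
```

Each subset would then get its own autocorrelation matrix, cross-correlation vector, and solved coefficient set, as recited in claims 1 and 6.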
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 200910084021 CN101674475B (en) | 2009-05-12 | 2009-05-12 | Self-adapting interlayer texture prediction method of H.264/SVC |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101674475A true CN101674475A (en) | 2010-03-17 |
CN101674475B CN101674475B (en) | 2011-06-22 |
Family
ID=42021427
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN 200910084021 Expired - Fee Related CN101674475B (en) | 2009-05-12 | 2009-05-12 | Self-adapting interlayer texture prediction method of H.264/SVC |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN101674475B (en) |
Cited By (35)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101854549A (en) * | 2010-05-28 | 2010-10-06 | 浙江大学 | Spatial domain prediction based video and image coding and decoding method and device |
CN101854549B (en) * | 2010-05-28 | 2012-05-02 | 浙江大学 | Spatial domain prediction based video and image coding and decoding method and device |
CN110430428A (en) * | 2010-06-17 | 2019-11-08 | 夏普株式会社 | Decoding apparatus, code device, coding/decoding method and coding method |
CN106231312B (en) * | 2010-07-31 | 2019-04-12 | M&K控股株式会社 | Device for being encoded to image |
CN106067981B (en) * | 2010-07-31 | 2017-08-11 | M&K控股株式会社 | Infra-frame prediction device |
CN106231312A (en) * | 2010-07-31 | 2016-12-14 | M&K控股株式会社 | For the device that image is encoded |
CN106067981A (en) * | 2010-07-31 | 2016-11-02 | M&K控股株式会社 | Infra-frame prediction device |
US11677961B2 (en) | 2010-12-08 | 2023-06-13 | Lg Electronics Inc. | Intra prediction method and encoding apparatus and decoding apparatus using same |
US10785487B2 (en) | 2010-12-08 | 2020-09-22 | Lg Electronics Inc. | Intra prediction in image processing |
US11102491B2 (en) | 2010-12-08 | 2021-08-24 | Lg Electronics Inc. | Intra prediction in image processing |
CN107257465B (en) * | 2010-12-08 | 2020-08-04 | Lg 电子株式会社 | Intra prediction method performed by encoding apparatus and decoding apparatus, and readable storage medium |
CN107257465A (en) * | 2010-12-08 | 2017-10-17 | Lg 电子株式会社 | Interior prediction method and the encoding apparatus and decoding apparatus using this method |
US9749640B2 (en) | 2011-05-20 | 2017-08-29 | Kt Corporation | Method and apparatus for intra prediction within display screen |
US9749639B2 (en) | 2011-05-20 | 2017-08-29 | Kt Corporation | Method and apparatus for intra prediction within display screen |
CN103703773A (en) * | 2011-05-20 | 2014-04-02 | 株式会社Kt | Method and apparatus for intra prediction within display screen |
US9756341B2 (en) | 2011-05-20 | 2017-09-05 | Kt Corporation | Method and apparatus for intra prediction within display screen |
US9584815B2 (en) | 2011-05-20 | 2017-02-28 | Kt Corporation | Method and apparatus for intra prediction within display screen |
CN103703773B (en) * | 2011-05-20 | 2017-11-07 | 株式会社Kt | The method and apparatus that infra-frame prediction is carried out in display screen |
US9843808B2 (en) | 2011-05-20 | 2017-12-12 | Kt Corporation | Method and apparatus for intra prediction within display screen |
US10158862B2 (en) | 2011-05-20 | 2018-12-18 | Kt Corporation | Method and apparatus for intra prediction within display screen |
CN102355583B (en) * | 2011-09-29 | 2013-03-13 | 广西大学 | Scalable video encoding (SVC) block-level interlayer intra prediction (ILIP) method |
CN102355583A (en) * | 2011-09-29 | 2012-02-15 | 广西大学 | Scalable video encoding (SVC) block-level interlayer intra prediction (ILIP) method |
CN105611293A (en) * | 2012-09-28 | 2016-05-25 | 索尼公司 | Image processing device and method |
CN105611293B (en) * | 2012-09-28 | 2018-07-06 | 索尼公司 | Image processing apparatus and method |
CN104396241A (en) * | 2012-09-28 | 2015-03-04 | 索尼公司 | Image processing device and method |
CN105052139B (en) * | 2013-04-04 | 2018-06-26 | 高通股份有限公司 | For multiple base layer reference pictures of SHVC |
CN105052139A (en) * | 2013-04-04 | 2015-11-11 | 高通股份有限公司 | Multiple base layer reference pictures for SHVC |
US10212437B2 (en) | 2013-07-18 | 2019-02-19 | Qualcomm Incorporated | Device and method for scalable coding of video information |
CN105379278B (en) * | 2013-07-18 | 2019-07-09 | 高通股份有限公司 | The device and method of scalable decoding for video information |
CN105379278A (en) * | 2013-07-18 | 2016-03-02 | 高通股份有限公司 | Device and method for scalable coding of video information |
CN108604313A (en) * | 2016-02-12 | 2018-09-28 | 微软技术许可有限责任公司 | The predictive modeling of automation and frame |
CN109547785A (en) * | 2018-10-26 | 2019-03-29 | 西安科锐盛创新科技有限公司 | Adaptive texture gradual change prediction technique in bandwidth reduction |
CN109561308A (en) * | 2018-10-26 | 2019-04-02 | 西安科锐盛创新科技有限公司 | Adaptive texture gradual change prediction technique in bandwidth reduction |
CN109561309A (en) * | 2018-10-26 | 2019-04-02 | 西安科锐盛创新科技有限公司 | The method of adaptive texture gradual change prediction in bandwidth reduction |
CN109618155A (en) * | 2018-10-26 | 2019-04-12 | 西安科锐盛创新科技有限公司 | Compaction coding method |
Also Published As
Publication number | Publication date |
---|---|
CN101674475B (en) | 2011-06-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101674475B (en) | Self-adapting interlayer texture prediction method of H.264/SVC | |
CN104602009B (en) | Infra-frame prediction decoding device | |
CN101159875B (en) | Double forecast video coding/decoding method and apparatus | |
RU2659470C2 (en) | Moving image encoder | |
CN104041035B (en) | Lossless coding and coherent signal method for expressing for composite video | |
EP2232874B1 (en) | Adaptive filtering | |
CN101534436B (en) | Allocation method of video image macro-block-level self-adaptive code-rates | |
RU2573747C2 (en) | Video encoding method and apparatus, video decoding method and apparatus and programmes therefor | |
CN102710936B (en) | High performance loop filters in video compression | |
CN101312529B (en) | Method, system and apparatus generating up and down sampling filter | |
EP2765770B1 (en) | Matrix encoding method and device thereof, and matrix decoding method and device thereof | |
CN104702950A (en) | Method of decoding moving pictures in intra prediction | |
CN107277548A (en) | In method of the merging patterns to Image Coding | |
CN1767655A (en) | Multi view point video image parallax difference estimating method | |
KR20010075232A (en) | Encoding method for the compression of a video sequence | |
CN104811714A (en) | Enhanced Intra-Prediction Coding Using Planar Representations | |
CN102905200A (en) | Video interesting region double-stream encoding and transmitting method and system | |
CN104995916A (en) | Video data decoding method and video data decoding apparatus | |
CN103069803B (en) | Method for video coding, video encoding/decoding method, video coding apparatus, video decoder | |
KR20000053028A (en) | Prediction method and device with motion compensation | |
CN112188195B (en) | Image encoding/decoding method and apparatus, and corresponding computer readable medium | |
CN101389014A (en) | Resolution variable video encoding and decoding method based on regions | |
CN102625109A (en) | Multi-core-processor-based moving picture experts group (MPEG)-2-H.264 transcoding method | |
CN101146227A (en) | Build-in gradual flexible 3D wavelet video coding algorithm | |
CN102355583A (en) | Scalable video encoding (SVC) block-level interlayer intra prediction (ILIP) method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
C17 | Cessation of patent right | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20110622 Termination date: 20130512 |