CN101001383A - Multilayer-based video encoding/decoding method and video encoder/decoder using smoothing prediction - Google Patents

Multilayer-based video encoding/decoding method and video encoder/decoder using smoothing prediction

Info

Publication number
CN101001383A
Authority
CN
China
Prior art keywords
block
prediction
pixel
prediction block
smoothing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN 200710002177
Other languages
Chinese (zh)
Inventor
韩宇镇
金素英
塔米·李
李教爀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Publication of CN101001383A publication Critical patent/CN101001383A/en

Landscapes

  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A method and apparatus for reducing block artifacts during residual prediction in multilayer-based video coding are disclosed. The multilayer-based video encoding method includes obtaining a difference between a prediction block for a second block of a lower layer, which corresponds to a first block included in a current layer, and the second block; adding the obtained difference to a prediction block for the first block; smoothing a third block generated as a result of the addition using a smoothing function; and encoding a difference between the first block and the smoothed third block.

Description

Multilayer-based video coding/decoding method and encoder/decoder
Technical field
The present invention relates to video coding technology, and more particularly to a method and apparatus for reducing block artifacts during residual prediction in multilayer-based video coding.
Background Art
With the development of information and communication technology, multimedia communication is increasing in addition to text and voice communication. Existing text-centered communication systems are insufficient to satisfy consumers' diverse demands, and multimedia services capable of accommodating various forms of information such as text, images and music are therefore increasing. Because multimedia data is large, mass storage media and wide bandwidth are required for storing and transmitting it. Accordingly, compression coding techniques are required to transmit multimedia data.
The basic principle of data compression is to remove redundancy in the data. Data can be compressed by removing spatial redundancy, such as the repetition of the same color or object in an image; temporal redundancy, such as the repetition of similar adjacent frames or sounds in a moving image; and visual/perceptual redundancy, which takes into account human insensitivity to high frequencies. In a general video coding method, temporal redundancy is removed by temporal filtering based on motion compensation, and spatial redundancy is removed by a spatial transform.
To transmit the multimedia after the data redundancy has been removed, transmission media of different performance are needed. The transmission media currently in use have a wide variety of transmission speeds; for example, an ultrahigh-speed communication network can transmit tens of megabits of data per second, while a mobile communication network has a transmission speed of 384 kilobits per second. To support such transmission media in this transmission environment, and to transmit multimedia at a rate suitable for the environment, a scalable video coding method is most suitable.
Scalable video coding is a coding method that can adjust the video resolution, the frame rate and the signal-to-noise ratio (SNR) by truncating part of the compressed bitstream according to surrounding conditions such as the transmission bit rate, the transmission error rate and system resources; that is, it is a coding method that supports a variety of scalabilities.
As the scalable video coding standard currently being advanced, research on realizing multilayer scalability based on H.264 (hereinafter referred to as "H.264 Scalable Extension (SE)") is being carried out by the Joint Video Team (JVT), a joint working group of the Moving Picture Experts Group (MPEG) and the International Telecommunication Union (ITU).
H.264 SE and other multilayer scalable video codecs basically support four prediction modes: inter prediction, directional intra prediction (hereinafter "intra prediction"), residual prediction and intra-base prediction. "Prediction" means a technique of representing the original data compressively, using prediction data generated from information commonly available to the encoder and the decoder.
Among the four prediction modes mentioned above, inter prediction is the prediction mode commonly used even in existing single-layer video codecs. As shown in Fig. 1, inter prediction is a method of searching at least one reference frame for the block most similar to a certain block of the current frame (i.e., the current block), obtaining from the searched block a prediction block that best expresses the current block, and then quantizing the difference between the current block and the prediction block.
Inter prediction is divided into bidirectional prediction, which uses two reference frames; forward prediction, which uses a previous reference frame; and backward prediction, which uses a subsequent reference frame.
On the other hand, intra prediction is also a technique used in single-layer video codecs such as H.264. Intra prediction is a method of predicting the current block using pixels adjacent to the current block in the blocks neighboring the current block. Intra prediction differs from the other prediction methods in that it uses only information within the current frame, and does not refer to other frames in the same layer or to frames of other layers.
Intra-base prediction can be used when the current frame has a lower-layer frame at the same temporal position (hereinafter referred to as the "base frame"). As shown in Fig. 2, a macroblock of the current frame can be efficiently predicted from the macroblock of the base frame corresponding to it; that is, the difference between the macroblock of the current frame and the macroblock of the base frame is used for prediction.
If the resolution of the lower layer and the resolution of the current layer differ from each other, the macroblock of the base frame should be upsampled to the resolution of the current layer before the difference is obtained. Intra-base prediction is effective for video with very fast motion or video in which the scene changes.
Finally, residual prediction for inter prediction (hereinafter "residual prediction") is a prediction mode in which the existing single-layer inter prediction is extended to a multilayer form. As shown in Fig. 3, in residual prediction the difference generated in the inter prediction process of the current layer is not quantized directly; instead, the difference between that difference and the difference generated in the inter prediction process of the lower layer is obtained and quantized.
Considering the characteristics of diverse video sequences, an efficient method among the four prediction methods described above is selected for each macroblock constituting a frame. For example, in a video sequence with slow motion, inter prediction and residual prediction will mainly be selected, while in a video sequence with fast motion, intra-base prediction will mainly be selected.
Compared with a single-layer video codec, a multilayer video codec has a more complicated prediction structure. Moreover, since multilayer video codecs mainly use an open-loop structure, more block artifacts occur in a multilayer video codec than in a single-layer codec. In particular, in the case of the residual prediction described above, the residual signal of the lower-layer frame is used, and if there is a large difference between the characteristics of that residual signal and of the inter prediction signal of the current-layer frame, serious distortion can occur.
By contrast, during intra-base prediction, the prediction signal for the macroblock of the current frame, i.e., the macroblock of the base layer, is not an original signal but a signal that has been quantized and restored. The prediction signal can therefore be obtained identically in the encoder and the decoder, so no mismatch occurs between them. In particular, since the difference between the macroblock of the base frame and the macroblock of the current frame is obtained after a smoothing filter has been applied to the prediction signal, block artifacts are greatly reduced.
However, according to the low-complexity decoding condition and the single-loop decoding condition adopted in the current working draft of H.264 SE, intra-base prediction is limited in its use. That is, in H.264 SE, intra-base prediction can be used only when specified conditions are satisfied, so that even though coding is performed in a multilayer manner, decoding can be performed in a manner similar to a single-layer video codec.
According to the low-complexity decoding condition, intra-base prediction is used only when the macroblock type of the lower-layer macroblock corresponding to a certain macroblock of the current layer is the intra prediction mode or the intra-base prediction mode. This reduces the amount of computation in the motion compensation process, which requires the largest amount of computation in the decoding process. On the other hand, since intra-base prediction is used only under such limited circumstances, performance for video with fast motion is greatly reduced.
Therefore, for the case where inter prediction or residual prediction is used according to the low-complexity condition or other conditions, a technique capable of reducing various distortions such as encoder-decoder mismatch and block artifacts is desired.
Summary of the invention
Accordingly, the present invention has been made to address the above-mentioned problems occurring in the prior art, and an aspect of the present invention is to improve coding efficiency during inter prediction or residual prediction in a multilayer-based video codec.
Additional advantages and features of the invention will be set forth in part in the description which follows, will in part become apparent to those of ordinary skill in the art upon examination of the following, or may be learned from practice of the invention.
In one aspect of the present invention, there is provided a video encoding method comprising: obtaining a difference between a prediction block for a second block of a lower layer, which corresponds to a first block included in a current layer, and the second block; adding the obtained difference to a prediction block for the first block; smoothing a third block generated as a result of the addition using a smoothing function; and encoding a difference between the first block and the smoothed third block.
In another aspect of the present invention, there is provided a method of generating a bitstream, the method comprising: smoothing a prediction signal for a first block included in a current layer; encoding a difference between the first block and the smoothed prediction signal; and generating a bitstream that includes the encoded difference and a flag indicating whether the smoothing has been applied to the first block.
In still another aspect of the present invention, there is provided a video decoding method comprising: restoring a first residual signal from texture data of a first block of a current frame included in an input bitstream; restoring a second residual signal of a base layer, which is included in the bitstream and corresponds to the first block; adding the second residual signal to a prediction block for the first block; smoothing a third block generated as a result of the addition using a smoothing filter; and adding the first residual signal to the smoothed third block.
In still another aspect of the present invention, there is provided a video decoding method comprising: restoring a first residual signal from texture data of a first block of a current frame included in an input bitstream; restoring a second residual signal of a base layer, which is included in the bitstream and corresponds to the first block; adding the first residual signal to the second residual signal; smoothing an inter prediction block for the first block using a smoothing filter; and adding the addition result to the smoothed inter prediction block.
In still another aspect of the present invention, there is provided a video encoder comprising: a unit which obtains a difference between a prediction block for a second block of a lower layer, which corresponds to a first block included in a current layer, and the second block; a unit which adds the obtained difference to a prediction block for the first block; a unit which smooths a third block generated as a result of the addition using a smoothing function; and a unit which encodes a difference between the first block and the smoothed third block.
In still another aspect of the present invention, there is provided a video decoder comprising: a unit which restores a first residual signal from texture data of a first block of a current frame included in an input bitstream; a unit which restores a second residual signal of a base layer, which is included in the bitstream and corresponds to the first block; a unit which adds the second residual signal to a prediction block for the first block; a unit which smooths a third block generated as a result of the addition using a smoothing filter; and a unit which adds the first residual signal to the smoothed third block.
Description of drawings
The above and other aspects, features and advantages of the present invention will become more apparent from the following detailed description taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a view explaining a conventional inter prediction technique;
Fig. 2 is a view explaining a conventional intra-base prediction technique;
Fig. 3 is a view explaining a conventional residual prediction technique;
Fig. 4 is a view explaining a smoothing prediction technique according to an embodiment of the present invention;
Figs. 5 to 8 are views illustrating examples of applying a smoothing filter on a macroblock basis;
Fig. 9 is a graph comparing the PSNR obtained using a 1:2:1 smoothing filter with the PSNR obtained using other adaptive filters;
Fig. 10 is a block diagram illustrating the structure of a video encoder according to an embodiment of the present invention;
Fig. 11 is a block diagram illustrating the structure of a video decoder according to an embodiment of the present invention; and
Fig. 12 is a block diagram illustrating the structure of a video decoder according to another embodiment of the present invention.
Embodiment
Exemplary embodiments of the present invention are described in detail below with reference to the accompanying drawings. The aspects and features of the present invention, and the methods for achieving them, will become apparent from the embodiments described in detail with reference to the accompanying drawings. However, the present invention is not limited to the embodiments disclosed below, but can be implemented in various forms. The matters defined in the description, such as detailed construction and elements, are provided only to assist those of ordinary skill in the art in a comprehensive understanding of the invention, and the present invention is defined only by the scope of the appended claims. Throughout the description, the same reference numerals denote the same elements across the drawings.
Let O_F be a block of the current frame, P_F the prediction block obtained by performing inter prediction on it, O_B the block of the base layer corresponding to the block of the current frame, and P_B the prediction block obtained by performing inter prediction on the base frame. The residual signal R_B of O_B is then obtained as O_B - P_B.
In this case, O_B, P_B and R_B are values that have been quantized and restored, whereas O_F and P_F are original signals in the case of an open-loop structure (although they may also be values that have been quantized and restored). If the value to be coded in the current frame is denoted R_F, residual prediction can be expressed as equation (1). In equation (1), U denotes an upsampling function; since the upsampling function is used only when the resolutions of the current layer and the lower layer differ from each other, it is written as [U] to indicate that it is applied optionally.
R_F = O_F - P_F - [U]·R_B    (1)
On the other hand, intra-base prediction can be expressed as equation (2).
R_F = O_F - [U]·O_B    (2)
Comparing equation (1) with equation (2), they appear to have nothing in common. However, by re-expressing them as equations (3) and (4), their similarity can be compared.
R_F = O_F - (P_F + [U]·R_B)    (3)
R_F = O_F - [U]·B(P_B + R_B)    (4)
In equation (4), B denotes a deblocking function. Comparing equations (3) and (4), R_B is used in common in both. The biggest difference between them is that the inter prediction block P_F of the current layer is used in equation (3), whereas the inter prediction block P_B of the lower layer is used in equation (4). In the case of intra-base prediction, since the deblocking function and the upsampling function are applied, the image of the restored frame becomes smooth and block artifacts are therefore reduced.
By contrast, in equation (3), the residual signal R_B of the base frame, obtained from P_B, is added to the inter prediction block P_F of the current frame, so mutual mismatch or block artifacts may occur. Although this problem can be alleviated by using intra-base prediction, intra-base prediction cannot be used even in cases where the efficiency of residual prediction is not greater than that of intra-base prediction. Moreover, when the low-complexity decoding condition is used, the number of blocks using intra-base prediction does not increase even where intra-base prediction would be more effective, which causes significant degradation of performance. Therefore, a measure is needed that can reduce block artifacts even when residual prediction is used.
In the present invention, the existing residual prediction is supplemented by adding a smoothing function F to equation (3). According to the present invention, the data R_F of the current block to be quantized is expressed as equation (5).
R_F = O_F - F(P_F + [U]·R_B)    (5)
Equation (5) can also be applied to the inter prediction mode as it is. That is, in the case of inter prediction, R_B can be regarded as 0, so R_F can be expressed as equation (6).
R_F = O_F - F(P_F)    (6)
As in equations (5) and (6), the technique of adopting a smoothing filter during the existing residual prediction or inter prediction is defined here as "smoothing prediction." A detailed process of performing smoothing prediction is described with reference to Fig. 4. Fig. 4 exemplifies the process of coding a certain block 20 of the current frame (hereinafter called the "current block"); the block 10 in the base frame corresponding to the current block 20 is hereinafter called the "base block."
First, in S1, an inter prediction block 13 for the base block 10 is generated from the corresponding blocks 11 and 12 in the surrounding reference frames of the lower layer (the forward reference frame, the backward reference frame, and so on), using the base block 10 and its motion vectors. Then, in S2, the difference between the base block and the prediction block 13 (corresponding to R_B in equation (5)) is obtained. Also, in S3, an inter prediction block 23 for the current block 20 (corresponding to P_F in equation (5)) is generated from the corresponding blocks 21 and 22 in the surrounding reference frames of the current layer, using the current block 20 and its motion vectors; step S3 may be performed before steps S1 and S2. In general, an "inter prediction block" means a prediction block for a certain block of the frame to be coded, obtained from the images in the reference frames corresponding to that block, where the correspondence between the block and the images is indicated by motion vectors. If there is one reference frame, the inter prediction block is the corresponding image itself; if there are multiple reference frames, the inter prediction block is a weighted sum of the corresponding images.
Then, in S4, the difference obtained in step S2 is added to the inter prediction block 23, and in S5, the block generated as a result of the addition (corresponding to P_F + R_B in equation (5)) is smoothed using a smoothing filter. Finally, in S6, the difference between the current block 20 and the block generated as a result of the smoothing (corresponding to F(P_F + R_B) in equation (5)) is obtained, and the obtained difference is then quantized in S7.
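By way of illustration only, the following Python sketch traces steps S2 and S4 through S7 for a single block according to equation (5); the block arrays, the identity placeholder for the smoothing function F and the optional upsampling hook are assumptions for the sake of a self-contained example, not part of the draft.

```python
import numpy as np

def smooth(block):
    # Stand-in smoothing function F; any low-pass operation such as the
    # 1:2:1 linear filter described below could be plugged in here.
    return block

def smoothing_prediction_residual(O_F, P_F, O_B, P_B, upsample=lambda b: b):
    """Residual of eq. (5): R_F = O_F - F(P_F + U*R_B)."""
    R_B = O_B - P_B                # S2: lower-layer residual
    added = P_F + upsample(R_B)    # S4: add the (optionally upsampled) residual to P_F
    smoothed = smooth(added)       # S5: apply the smoothing filter F
    return O_F - smoothed          # S6: difference to be quantized in S7

# Toy 4x4 example with random block data.
rng = np.random.default_rng(0)
O_F, P_F, O_B, P_B = (rng.integers(0, 256, (4, 4)).astype(np.int64) for _ in range(4))
print(smoothing_prediction_residual(O_F, P_F, O_B, P_B))
```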
Fig. 4 illustrates smoothing prediction based on residual prediction. Smoothing prediction based on inter prediction is simpler. Specifically, since the computation of R_B related to the lower layer is omitted, steps S1, S2 and S3 illustrated in Fig. 4 are all omitted. The inter prediction block 23 generated in the current layer is therefore smoothed by the smoothing filter, and then the difference between the current block 20 and the block generated as a result of the smoothing (corresponding to F(P_F) in equation (6)) is quantized.
On the other hand, which smoothing filter F is applied in smoothing prediction is also an important matter. The conventional deblocking filter B can be used as such a smoothing filter F. A combination of an upsampling function U and a downsampling function D can also be used, since a smoothing effect is likewise obtained by the combination of upsampling and downsampling.
However, the deblocking function B, the upsampling function and the downsampling function require a considerable amount of computation, and the downsampling function generally performs very strong low-pass filtering, so the details of the image may deteriorate greatly during prediction.
Therefore, a smoothing filter that can be applied with a small amount of computation is needed. For this purpose, the smoothing filter F can be expressed simply as a linear function over a predetermined number of neighboring pixels. For example, if the predetermined number is 3, the pixel value x'(n) obtained by filtering the original pixel value x(n) with the smoothing filter F can be expressed as equation (7).
x'(n) = α·x(n-1) + β·x(n) + γ·x(n+1)    (7)
The values of α, β and γ can be chosen appropriately so that their sum is 1. For example, by selecting α = 1/4, β = 1/2 and γ = 1/4 in equation (7), the weight of the pixel to be filtered is increased relative to its neighboring pixels. Of course, more pixels may be selected as the neighboring pixels in equation (7).
Using a smoothing filter F of such a simple form, the amount of computation can be reduced greatly, and the loss of image detail that occurs with downsampling and the like can also be reduced.
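A minimal Python sketch of the three-tap filter of equation (7) with the α = 1/4, β = 1/2, γ = 1/4 weights mentioned above; the edge clamping used here is an assumption made only to keep the example self-contained (the boundary handling actually described uses the separate passes below).

```python
import numpy as np

def smooth_1d(x, a=0.25, b=0.5, c=0.25):
    """Three-tap linear smoothing of eq. (7): x'(n) = a*x(n-1) + b*x(n) + c*x(n+1)."""
    x = np.asarray(x, dtype=float)
    left = np.roll(x, 1);   left[0] = x[0]      # clamp at the left edge
    right = np.roll(x, -1); right[-1] = x[-1]   # clamp at the right edge
    return a * left + b * x + c * right

print(smooth_1d([10, 10, 40, 10, 10]))  # -> [10.  17.5 25.  17.5 10. ]
```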
Figs. 5 to 8 are views illustrating examples of applying the smoothing filter to a 16x16 macroblock 60.
In an embodiment of the present invention, the smoothing filter is applied to the macroblock 60 in the following four steps. The first of the four steps is described with reference to Fig. 5.
First, a horizontal window 50 whose size corresponds to three neighboring pixels arranged in the horizontal direction is set, and the smoothing filter F, as a linear function, is applied to the first three neighboring pixels contained in the horizontal window 50. Once the smoothing filter F has been applied, the horizontal window 50 is moved by one pixel in the horizontal direction and the smoothing filter F is applied again. This process is repeated; when the horizontal window 50 reaches the right boundary of the macroblock 60, it returns to its initial horizontal position, is moved down by one pixel, and the smoothing filter F is again applied while the window moves in the horizontal direction. This process is performed over the whole macroblock 60. In the first step, 224 (= 14 (width) x 16 (height)) filtering operations are performed for one macroblock.
Next, the second of the four steps is described with reference to Fig. 6.
A vertical window 51 whose size corresponds to three neighboring pixels arranged in the vertical direction is set, and the smoothing filter F, as a linear function, is applied to the first three neighboring pixels contained in the vertical window 51. Once the smoothing filter F has been applied, the vertical window 51 is moved by one pixel in the horizontal direction and the smoothing filter F is applied again. This process is repeated; when the vertical window 51 reaches the right boundary of the macroblock 60, it returns to its initial horizontal position, is moved down by one pixel, and the smoothing filter F is again applied while the window moves in the horizontal direction. This process is performed over the whole macroblock 60. In the second step, 224 (= 16 (width) x 14 (height)) filtering operations are performed for one macroblock.
Through the first and second steps, the application of the smoothing filter F to the pixels of the macroblock 60 that are not adjacent to the macroblock boundaries is completed. Next, the smoothing filter needs to be applied to the pixels adjacent to the left boundary of the macroblock 60 and to the pixels adjacent to the upper boundary of the macroblock 60.
The third of the four steps, corresponding to the filtering process for the left boundary, is described with reference to Fig. 7.
A horizontal window 53 whose size corresponds to three neighboring pixels arranged in the horizontal direction is set so that the top-left pixel of the macroblock 60 is located at the center of the horizontal window 53. Then, the smoothing filter F, as a linear function, is applied to the first three neighboring pixels contained in the horizontal window 53. Once the smoothing filter F has been applied, the horizontal window 53 is moved by one pixel in the vertical direction and the smoothing filter F is applied again. This process is repeated until the horizontal window 53 reaches the lower boundary of the macroblock 60. In the third step, 16 filtering operations are performed for one macroblock.
Finally, the fourth of the four steps, corresponding to the filtering process for the upper boundary, is described with reference to Fig. 8.
A vertical window 54 whose size corresponds to three neighboring pixels arranged in the vertical direction is set so that the top-left pixel of the macroblock 60 is located at the center of the vertical window 54. Then, the smoothing filter F, as a linear function, is applied to the first three neighboring pixels contained in the vertical window 54. Once the smoothing filter F has been applied, the vertical window 54 is moved by one pixel in the horizontal direction and the smoothing filter F is applied again. This process is repeated until the vertical window 54 reaches the right boundary of the macroblock 60. In the fourth step, 16 filtering operations are performed for one macroblock.
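The four passes could be sketched as follows; this is a simplified rendition that assumes a 16x16 luma macroblock and clamps at the left and upper boundaries instead of reading samples from the neighbouring macroblocks as equations (13) and (14) of the draft do.

```python
import numpy as np

def tap3(prev, cur, nxt, a=0.25, b=0.5, c=0.25):
    return a * prev + b * cur + c * nxt           # the 1:2:1 filter of eq. (7)

def smooth_macroblock(mb):
    mb = mb.astype(float)                         # work on a float copy
    h, w = mb.shape                               # 16 x 16
    for y in range(h):                            # step 1: horizontal window, interior columns
        for x in range(1, w - 1):
            mb[y, x] = tap3(mb[y, x - 1], mb[y, x], mb[y, x + 1])
    for x in range(w):                            # step 2: vertical window, interior rows
        for y in range(1, h - 1):
            mb[y, x] = tap3(mb[y - 1, x], mb[y, x], mb[y + 1, x])
    for y in range(h):                            # step 3: left boundary column (clamped)
        mb[y, 0] = tap3(mb[y, 0], mb[y, 0], mb[y, 1])
    for x in range(w):                            # step 4: top boundary row (clamped)
        mb[0, x] = tap3(mb[0, x], mb[0, x], mb[1, x])
    return mb

mb = np.random.default_rng(1).integers(0, 256, (16, 16))
print(smooth_macroblock(mb).round(1))
```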
Changing the order of these four steps does not have a large influence on the effect achieved by the present invention. In the embodiment of the invention illustrated in Figs. 5 to 8, α:β:γ in equation (7) is set to 1:2:1. However, even if the ratio is changed, there is no large difference in the effect, as can be seen from the experimental results illustrated in Fig. 9.
Fig. 9 is a graph comparing the PSNR obtained by compressing the Football CIF sequence using the 1:2:1 smoothing filter with the PSNR obtained by compressing the same sequence using other adaptive filters.
In Fig. 9, curve 91 represents the result of adaptively applying the 1:2:1 smoothing filter and a 1:3:1 smoothing filter, curve 92 represents the result of adaptively applying the 1:2:1 smoothing filter and a 1:14:1 smoothing filter, and curve 93 represents the result of adaptively applying the 1:2:1 smoothing filter and a 1:6:1 smoothing filter. Referring to Fig. 9, it can be seen that, compared with the case where the 1:2:1 smoothing filter alone is used, the adaptive filters improve the PSNR by only about 0.005 dB at most.
The present embodiment of the invention shows an example in which the smoothing filter is applied on a macroblock basis. However, it will be entirely clear to those skilled in the art that the smoothing filter may also be applied on a 4x4 block basis or in any other unit.
As described above, by applying the smoothing filter according to equation (5), the problem of coding performance degradation caused by the existing single-loop decoding condition can be improved to some extent. However, if the smoothing function is used in single-loop decoding, which was proposed to reduce complexity, the complexity may increase to some extent.
Under the assumption that O_F is restored by performing decoding according to equation (5), inverse transforms have to be performed separately for R_B and for the residual data of the current block. In order to reduce the inverse transform processing, equation (8) can be used during the decoding process.
O_F = F(P_F) + (R_F + [U]·R_B)    (8)
According to equation (8), R_B, which is still a transform-coefficient component that has not passed through a separate inverse transform, is added to the residual part of the current block, and the inverse transform is performed on the sum at once. Therefore, the inverse transform is performed not twice but only once, which reduces complexity. Also, when decoding is performed according to equation (5), the smoothing filter is applied to the sum of P_F and R_B, whereas when decoding is performed according to equation (8), the smoothing function is applied only to the prediction signal P_F.
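The saving relies on the linearity of the inverse transform: summing the two residuals in the coefficient domain and inverse-transforming once gives the same result as inverse-transforming each and then summing. A small numerical check of this property follows, using a plain orthonormal DCT purely as a stand-in for the codec's actual transform (an assumption).

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II matrix, used here only as a stand-in for the codec's transform.
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= np.sqrt(1.0 / n)
    C[1:] *= np.sqrt(2.0 / n)
    return C

C = dct_matrix(4)
fwd = lambda b: C @ b @ C.T        # 2-D forward transform
inv = lambda c: C.T @ c @ C        # 2-D inverse transform

rng = np.random.default_rng(2)
R_F_coef = fwd(rng.normal(size=(4, 4)))    # current-layer residual, coefficient domain
R_B_coef = fwd(rng.normal(size=(4, 4)))    # base-layer residual, coefficient domain

two_inverse_transforms = inv(R_F_coef) + inv(R_B_coef)     # along the lines of eq. (5)
one_inverse_transform = inv(R_F_coef + R_B_coef)           # along the lines of eq. (8)
print(np.allclose(two_inverse_transforms, one_inverse_transform))   # True
```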
As described above, in order to apply smoothing prediction according to an embodiment of the present invention to the existing working draft JSVM-4 (Julien Reichel, Heiko Schwarz, and Mathias Wien, "Joint Scalable Video Model JSVM-4," JVT meeting, Nice, France), some modifications are needed in the syntax, the semantics and the decoding process. The part to be modified in the syntax is shown first in Table 1 below. Table 1 is part of the "residual in scalable extension" syntax of subclause G.7.3.8.3 of JSVM-4; the modified part is the added condition and the smoothed_reference_flag entry.
[Table 1] Residual in scalable extension syntax
residual_in_scalable_extension( ) {                                        C     Descriptor
  if( adaptive_prediction_flag &&
      MbPartPredType( mb_type, 0 ) != Intra_16x16 &&
      MbPartPredType( mb_type, 0 ) != Intra_8x8 &&
      MbPartPredType( mb_type, 0 ) != Intra_4x4 &&
      MbPartPredType( mb_type, 0 ) != Intra_Base ) {
    residual_prediction_flag                                                3|4   ae(v)
    if( residual_prediction_flag && base_mode_flag &&
        constrained_inter_layer_pred( ) )
      smoothed_reference_flag                                               3|4   ae(v)
  }
The flag "smoothed_reference_flag" is coded as a new syntax element when residual_prediction_flag and base_mode_flag are both 1 under the single-loop decoding condition. The single-loop decoding condition is indicated by the function "constrained_inter_layer_pred()". residual_prediction_flag is a flag indicating whether residual prediction is used, and base_mode_flag is a flag indicating whether base layer skip mode is used. If the value of a flag is 1 (true), it indicates that the corresponding operation is performed, and if the value of the flag is 0 (false), it indicates that the corresponding operation is not performed. Note that, in multi-loop decoding mode, there is no syntax overhead.
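Paraphrased as a sketch, the condition of Table 1 under which the flag is written might look like the following; the dictionary representation and helper name are illustrative only, while the syntax-element names follow Table 1.

```python
def codes_smoothed_reference_flag(mb):
    """Sketch of the Table 1 condition for coding smoothed_reference_flag."""
    intra_types = {"Intra_16x16", "Intra_8x8", "Intra_4x4", "Intra_Base"}
    if not mb["adaptive_prediction_flag"] or mb["mb_part_pred_type"] in intra_types:
        return False                                   # outer condition of Table 1 not met
    return (mb["residual_prediction_flag"] == 1 and
            mb["base_mode_flag"] == 1 and
            mb["constrained_inter_layer_pred"])        # single-loop decoding condition

mb = {"adaptive_prediction_flag": 1, "mb_part_pred_type": "Inter",
      "residual_prediction_flag": 1, "base_mode_flag": 1,
      "constrained_inter_layer_pred": True}
print(codes_smoothed_reference_flag(mb))   # True
```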
In base layer (BL) skip mode, a separate motion estimation process is not performed in the current layer; instead, the motion vectors and macroblock modes obtained in the motion estimation process already performed in the base layer are used as they are in the current layer. Therefore, compared with the case where this mode is not used, the amount of computation is reduced and the coding efficiency is improved, since the motion information of the current layer is not coded. However, if the motion distribution in the current layer differs to some extent from that in the base layer, degradation of image quality may occur. Base layer skip mode is therefore mainly used when the motion distributions of the layers are similar to each other.
On the other hand, the part to be modified in the semantics is the part describing the semantics of smoothed_reference_flag. The meaning of this flag is described in the "residual in scalable extension semantics" of subclause G.7.4.8.3 of JSVM-4.
If smoothed_reference_flag is 1, it means that the smoothing function is applied to the sum of the inter prediction samples and the residual samples of the base layer; if smoothed_reference_flag is 0, it means that the smoothing function is not applied. If smoothed_reference_flag is not present, its value is inferred to be 0.
Finally, the part to be modified in the decoding process is described in subclause G.8.4.2.4 of JSVM-4, in which the details of the newly defined smoothing function are described. In the decoding process, subclause G.8.4.2.4 is invoked if smoothed_reference_flag is 1.
Specifically, resPred_L[x, y] (where x and y each range from 0 to 15, the size of the macroblock) and resPred_Cb[x, y] and resPred_Cr[x, y] (where x ranges from 0 to MbWidthC-1 and y from 0 to MbHeightC-1) are first invoked; resPred_L[x, y] is the luma residual sample array of the base layer obtained from the residual prediction process, and resPred_Cb[x, y] and resPred_Cr[x, y] are the chroma residual sample arrays of the base layer. Thereafter, the corresponding luma inter prediction samples pred_L[x, y] are added to the luma residual samples resPred_L[x, y] and updated by equation (9). Here, x and y respectively denote the x and y coordinates of a pixel contained in the current macroblock.
pred_L[x, y] = pred_L[x, y] + resPred_L[x, y]    (9)
Also, if chroma_format_idc is not 0 (that is, in the case of a color image), the corresponding chroma inter prediction samples pred_Cb[x, y] and pred_Cr[x, y] are updated by equation (10).
pred_Cb[x, y] = pred_Cb[x, y] + resPred_Cb[x, y]
pred_Cr[x, y] = pred_Cr[x, y] + resPred_Cr[x, y]    (10)
The process of applying the smoothing function to pred_L[x, y] updated in equation (9) and to pred_Cb[x, y] and pred_Cr[x, y] updated in equation (10) is described below. This process consists of the four steps shown in Figs. 5 to 8.
First, when the inter prediction samples updated in equations (9) and (10) are processed by the smoothing function as shown in Fig. 5, they are updated according to equation (11).
pred_L[x, y] = (pred_L[x-1, y] + 2*pred_L[x, y] + pred_L[x+1, y] + 2) >> 2, where x = 1..14 and y = 0..15
pred_Cb[x, y] = (pred_Cb[x-1, y] + 2*pred_Cb[x, y] + pred_Cb[x+1, y] + 2) >> 2, where x = 1..MbWidthC-2 and y = 0..MbHeightC-1    (11)
pred_Cr[x, y] = (pred_Cr[x-1, y] + 2*pred_Cr[x, y] + pred_Cr[x+1, y] + 2) >> 2, where x = 1..MbWidthC-2 and y = 0..MbHeightC-1
Equation (11) embodies the application of the smoothing function as the horizontal window (50 in Fig. 5) is moved pixel by pixel within the macroblock (60 in Fig. 5). Since the size of the luma macroblock is 16x16 while the size of the chroma macroblock is MbWidthC x MbHeightC, the ranges of x and y are given separately for the luma and chroma components. The smoothing function used in equation (11) is the 1:2:1 linear function.
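In integer form, each update in equations (11) to (14) computes (left + 2*centre + right + 2) >> 2, the +2 providing rounding before the shift by two. A tiny sketch:

```python
def smooth_121(left, center, right):
    # Integer 1:2:1 filter used by eqs. (11)-(14): (left + 2*center + right + 2) >> 2
    return (left + 2 * center + right + 2) >> 2

print(smooth_121(10, 40, 10))   # 25
print(smooth_121(0, 1, 0))      # 1  (exact value 0.5, rounded up by the +2 offset)
```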
Next, when the inter prediction samples updated in equation (11) are processed by the smoothing function as shown in Fig. 6, they are updated according to equation (12).
pred_L[x, y] = (pred_L[x, y-1] + 2*pred_L[x, y] + pred_L[x, y+1] + 2) >> 2, where x = 0..15 and y = 1..14
pred_Cb[x, y] = (pred_Cb[x, y-1] + 2*pred_Cb[x, y] + pred_Cb[x, y+1] + 2) >> 2, where x = 0..MbWidthC-1 and y = 1..MbHeightC-2    (12)
pred_Cr[x, y] = (pred_Cr[x, y-1] + 2*pred_Cr[x, y] + pred_Cr[x, y+1] + 2) >> 2, where x = 0..MbWidthC-1 and y = 1..MbHeightC-2
When the inter prediction samples updated in equation (12) are processed by the smoothing function as shown in Fig. 7, they are updated according to equation (13).
pred_L[x, y] = (S'_L[xP+x-1, yP+y] + 2*pred_L[x, y] + pred_L[x+1, y] + 2) >> 2, where x = 0 and y = 0..15
pred_Cb[x, y] = (S'_Cb[xC+x-1, yC+y] + 2*pred_Cb[x, y] + pred_Cb[x+1, y] + 2) >> 2, where x = 0 and y = 0..MbHeightC-1    (13)
pred_Cr[x, y] = (S'_Cr[xC+x-1, yC+y] + 2*pred_Cr[x, y] + pred_Cr[x+1, y] + 2) >> 2, where x = 0 and y = 0..MbHeightC-1
Here, xP and yP denote the absolute coordinates (i.e., the position within the frame) of the first luma sample belonging to the current macroblock, and S'_L[xP+x-1, yP+y] denotes the value of the sample having the corresponding absolute coordinates (xP+x-1, yP+y) among the luma samples contained in the already-smoothed neighboring macroblock. In the same manner, S'_Cb[xC+x-1, yC+y] and S'_Cr[xC+x-1, yC+y] denote the values of the samples having the corresponding absolute coordinates (xC+x-1, yC+y) among the chroma samples contained in the already-smoothed neighboring macroblock. xC and yC denote the absolute coordinates of the first chroma sample belonging to the current macroblock.
Finally, when the inter prediction samples updated in equation (13) are processed by the smoothing function as shown in Fig. 8, they are updated according to equation (14).
pred_L[x, y] = (S'_L[xP+x, yP+y-1] + 2*pred_L[x, y] + pred_L[x, y+1] + 2) >> 2, where x = 0..15 and y = 0
pred_Cb[x, y] = (S'_Cb[xC+x, yC+y-1] + 2*pred_Cb[x, y] + pred_Cb[x, y+1] + 2) >> 2, where x = 0..MbWidthC-1 and y = 0    (14)
pred_Cr[x, y] = (S'_Cr[xC+x, yC+y-1] + 2*pred_Cr[x, y] + pred_Cr[x, y+1] + 2) >> 2, where x = 0..MbWidthC-1 and y = 0
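A sketch of the two boundary passes of equations (13) and (14), assuming the already-smoothed samples of the left and upper neighbouring macroblocks are available as the arrays left_col and top_row (illustrative names, not taken from the draft):

```python
import numpy as np

def smooth_boundaries(pred, left_col, top_row):
    """Apply eq. (13) along the left boundary and eq. (14) along the top boundary,
    using smoothed samples S' from the neighbouring macroblocks."""
    pred = pred.copy()
    h, w = pred.shape
    for y in range(h):                 # eq. (13): x = 0, y = 0..h-1
        pred[y, 0] = (left_col[y] + 2 * pred[y, 0] + pred[y, 1] + 2) >> 2
    for x in range(w):                 # eq. (14): y = 0, x = 0..w-1
        pred[0, x] = (top_row[x] + 2 * pred[0, x] + pred[1, x] + 2) >> 2
    return pred

pred = np.full((16, 16), 100, dtype=np.int64)
out = smooth_boundaries(pred, np.full(16, 60, dtype=np.int64), np.full(16, 60, dtype=np.int64))
print(out[0, 0], out[5, 0], out[0, 5])   # boundary samples pulled toward the neighbours' 60
```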
Fig. 10 is a block diagram illustrating the structure of a video encoder 100 according to an embodiment of the present invention.
First, a block contained in the current frame (hereinafter called the "current block") O_F is input to a downsampler 103. The downsampler 103 performs spatial and/or temporal downsampling of the current block O_F and produces the corresponding base layer block O_B.
A motion estimation unit 205 performs motion estimation of the base layer block O_B with respect to neighboring frames F_B' and obtains a motion vector MV_B. The neighboring frames used in this way are called "reference frames." In general, the block matching algorithm is the algorithm widely used for motion estimation. The block matching algorithm estimates, as the motion vector, the displacement with the minimum error found while moving a given motion block in units of pixels or sub-pixels (for example, 1/2 pixel or 1/4 pixel) within a specified search area of the reference frame. The motion estimation may be performed using motion blocks of a fixed size, or using motion blocks of variable size according to the hierarchical variable size block matching (HVSBM) used in H.264.
If the video encoder 100 is of the open-loop type, the original neighboring frames F_B' stored in a buffer are used as the reference frames. If the video encoder is of the closed-loop type, however, decoded frames (not shown) obtained after encoding are used as the reference frames. Hereinafter, the present invention is described with reference to an open-loop codec, but the present invention is not limited thereto.
The motion vector MV_B obtained by the motion estimation unit 205 is provided to a motion compensation unit 210. The motion compensation unit 210 extracts the corresponding images from the reference frames F_B' by means of the motion vector MV_B and generates an inter prediction block P_B. If bidirectional reference is used, the inter prediction block can be calculated as the average of the extracted images; if unidirectional reference is used, the inter prediction block can be identical to the extracted image.
A subtracter 215 subtracts the inter prediction block P_B from the base layer block O_B to produce a residual block R_B. The residual block R_B is provided to an upsampler 140 and a transform unit 220.
The upsampler 140 performs upsampling of the residual block R_B. In general, n:1 upsampling is not a simple expansion of one pixel into n pixels but an operation that takes neighboring pixels into account. Although the upsampling result becomes smoother as the number of neighboring pixels considered increases, an image distorted to some extent may result, so a suitable number of neighboring pixels needs to be selected. If the resolution of the base layer is identical to the resolution of the current layer, the upsampling operation performed by the upsampler 140 can be omitted.
The current block O_F is also input to a motion compensation unit 110, a buffer 101 and a subtracter 115. If base_mode_flag is 1, that is, if the motion pattern of the current layer is similar to that of the base layer and base layer skip mode is therefore used, the motion vectors and macroblock modes obtained in the motion estimation process already performed in the base layer are used as they are for the current layer, and a separate motion estimation process does not need to be performed. However, determining motion vectors and macroblock modes independently, through an independent motion estimation process in the current layer, is also within the scope of the present invention. Hereinafter, the case where base layer skip mode is used is described.
The motion vector MV_B obtained by the motion estimation unit 205 is provided to the motion compensation unit 110. The motion compensation unit 110 extracts the corresponding images from the reference frames F_F' provided from the buffer 101 by means of the motion vector MV_B and generates an inter prediction block P_F. If the resolution of the base layer is identical to that of the current layer, the motion compensation unit 110 uses the motion vector MV_B of the base layer as the motion vector of the current layer; if the resolutions differ, the motion compensation unit scales the motion vector MV_B by the ratio of the resolution of the current layer to the resolution of the base layer and uses the scaled motion vector as the motion vector of the current layer.
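For instance, the scaling of the base-layer motion vector when the layer resolutions differ could be sketched as follows (the simple rounding used here is an assumption; the draft defines its own derivation rules).

```python
def scale_motion_vector(mv_base, base_size, cur_size):
    """Scale a base-layer motion vector (mvx, mvy) to the current-layer resolution."""
    (bw, bh), (cw, ch) = base_size, cur_size
    mvx, mvy = mv_base
    return (round(mvx * cw / bw), round(mvy * ch / bh))

# QCIF base layer reused for a CIF current layer: vectors are doubled.
print(scale_motion_vector((3, -5), base_size=(176, 144), cur_size=(352, 288)))   # (6, -10)
```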
An adder 135 adds the signal U·R_B provided from the upsampler 140 to the signal P_F provided from the motion compensation unit 110, and provides the addition result P_F + U·R_B to a smoothing filter 130. This addition corresponds to the operations of equations (9) and (10).
The smoothing filter 130 smooths the addition result P_F + U·R_B by applying a smoothing function, for example a deblocking function. As the smoothing function, the deblocking function conventionally used in H.264 or a combination of an upsampling function and a downsampling function can be used. However, in order to reduce the amount of computation in accordance with the low-complexity condition, a simple linear function as in equation (7) can be used. Even though only a linear function is used, the coding performance is not greatly reduced compared with the case where a complicated function is used. The linear function can be applied on a block basis (sub-block or macroblock), and can be applied to the block boundaries only or to both the block boundaries and the whole interior of the block. In the exemplary embodiment described above with reference to Figs. 5 to 8 and equations (11) to (14), an example in which the smoothing function is applied in four steps to the block boundaries and the whole interior of the block was given; of course, the order of the four steps may be changed.
The subtracter 115 subtracts the signal F(P_F + U·R_B), provided as the result of the smoothing performed by the smoothing filter 130, from the current block O_F, and produces the residual signal R_F of the current layer.
A transform unit 120 performs a spatial transform on the residual signal R_F and produces transform coefficients R_F^T. The discrete cosine transform (DCT) and the wavelet transform can be used as the spatial transform method; when the DCT is used the transform coefficients are DCT coefficients, and when the wavelet transform is used they are wavelet coefficients.
A quantization unit 125 quantizes the transform coefficients R_F^T and produces quantized coefficients R_F^Q. Quantization is the process of representing the transform coefficients R_F^T, which are expressed as arbitrary real values, by discrete values; for example, the quantization unit 125 divides the transform coefficients by a specified quantization step and rounds the resulting values to the nearest integers.
The residual signal R_B of the base layer is likewise transformed into quantized coefficients R_B^Q by a transform unit 220 and a quantization unit 225.
An entropy coding unit 150 losslessly codes the motion vector MV_B estimated by the motion estimation unit 205, the quantized coefficients R_F^Q provided from the quantization unit 125 and the quantized coefficients R_B^Q provided from the quantization unit 225, and produces a bitstream. Huffman coding, arithmetic coding, variable-length coding and the like can be used as the lossless coding method.
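Condensing the current-layer path of Fig. 10 into a few lines, the following sketch uses deliberately trivial stand-ins for the transform, quantizer and smoothing filter (they are placeholders, not the draft's actual tools), mainly to show how the modules chain together.

```python
import numpy as np

def transform(block):            # stand-in for the spatial transform of unit 120
    return block

def quantize(coefs, step=8):     # stand-in for the quantization of unit 125
    return np.round(coefs / step).astype(int)

def encode_current_block(O_F, P_F, R_B, smooth, upsample=lambda b: b):
    smoothed = smooth(P_F + upsample(R_B))   # adder 135 followed by smoothing filter 130
    R_F = O_F - smoothed                     # subtracter 115, eq. (5)
    return quantize(transform(R_F))          # units 120 and 125

rng = np.random.default_rng(3)
O_F = rng.integers(0, 256, (16, 16)).astype(float)
P_F = O_F + rng.normal(0, 4, (16, 16))       # a crude inter prediction of the current block
R_B = rng.normal(0, 2, (16, 16))             # a crude upsampled base-layer residual
print(encode_current_block(O_F, P_F, R_B, smooth=lambda b: b)[:2])
```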
Fig. 11 is a block diagram illustrating the structure of a video decoder 300 according to an embodiment of the present invention.
An entropy decoding unit 305 losslessly decodes the input bitstream and extracts the texture data R_F^Q of the current block, the texture data R_B^Q of the base layer block corresponding to the current-layer block, and the motion vector MV_B of the base layer block. Lossless decoding is the inverse of the lossless coding process performed in the encoder.
The texture data R_F^Q of the current block is provided to an inverse quantization unit 310, and the texture data R_B^Q of the base layer block is provided to an inverse quantization unit 410. The motion vector MV_B of the base layer block is provided to a motion compensation unit 350.
The inverse quantization unit 310 performs inverse quantization of the texture data R_F^Q of the current block. This inverse quantization restores the values matching the indices produced in the quantization process, using the same quantization table as used in the quantization process.
An inverse transform unit 320 performs an inverse transform on the result of the inverse quantization. The inverse transform is the inverse of the transform process performed in the encoder; specifically, the inverse DCT or the inverse wavelet transform can be used, and as a result of the inverse transform the residual signal R_F of the current block is restored.
Meanwhile, the inverse quantization unit 410 performs inverse quantization of the texture data R_B^Q of the base layer block, and an inverse transform unit 420 performs an inverse transform on the inverse-quantization result R_B^T. As a result of the inverse transform, the residual signal R_B of the base layer block is restored and provided to an upsampler 380.
The upsampler 380 performs upsampling of the residual signal R_B. If the resolution of the base layer is identical to the resolution of the current layer, the upsampling operation performed by the upsampler 380 can be omitted.
The motion compensation unit 350 extracts the corresponding images from the reference frames F_F' provided from a buffer 340 by means of the motion vector MV_B and generates an inter prediction block P_F. If the resolution of the base layer is identical to that of the current layer, the motion compensation unit 350 uses the motion vector MV_B of the base layer as the motion vector of the current layer; if the resolutions differ, the motion compensation unit scales the motion vector MV_B by the ratio of the resolution of the current layer to the resolution of the base layer and uses the scaled motion vector as the motion vector of the current layer.
An adder 360 adds the signal U·R_B provided from the upsampler 380 to the signal P_F provided from the motion compensation unit 350, and provides the addition result P_F + U·R_B to a smoothing filter 370. This addition corresponds to the operations of equations (9) and (10).
The smoothing filter 370 smooths the addition result P_F + U·R_B by applying a smoothing function, for example a deblocking function. The same function as the smoothing function used in the smoothing filter 130 illustrated in Fig. 10 can be used.
An adder 330 adds the signal F(P_F + U·R_B), provided as the result of the smoothing performed by the smoothing filter 370, to the residual block R_F produced as the result of the inverse transform performed by the inverse transform unit 320. The current block O_F is thereby restored, and one frame F_F is restored by combining a plurality of such current blocks O_F. The buffer 340 temporarily stores the finally restored frame F_F and provides the stored frame as a reference frame F_F' during the restoration of other frames.
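The corresponding current-layer path of Fig. 11, again sketched with placeholder operators, shows that decoding inverts the encoding of equation (5) exactly when the encoder and decoder use the same smoothing filter (quantization is omitted here, which is an assumption made for the check to be exact).

```python
import numpy as np

def decode_current_block(R_F, R_B, P_F, smooth, upsample=lambda b: b):
    smoothed = smooth(P_F + upsample(R_B))   # adder 360 followed by smoothing filter 370
    return smoothed + R_F                    # adder 330 restores O_F

rng = np.random.default_rng(4)
O_F, P_F, R_B = (rng.normal(size=(8, 8)) for _ in range(3))
smooth = lambda b: b                         # placeholder smoothing filter
R_F = O_F - smooth(P_F + R_B)                # encoder side, eq. (5), without quantization
print(np.allclose(decode_current_block(R_F, R_B, P_F, smooth), O_F))   # True
```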
On the other hand, a video decoder 400 that restores the current block according to equation (8), as shown in Fig. 12, differs to some extent from the video decoder 300 illustrated in Fig. 11.
Referring to Fig. 12, the inter prediction block P_F provided from the motion compensation unit 350 is input directly to the smoothing filter 370 and smoothed, and the adder 360 adds the upsampling result U·R_B produced by the upsampler 380 to the residual block R_F. Finally, the adder 330 adds the smoothing result F(P_F) to the addition result R_F + U·R_B to restore the current block O_F.
In the embodiments of the present invention illustrated in Figs. 10 to 12, the coding of a video frame composed of two layers has been exemplified. However, it will be entirely clear to those skilled in the art that the present invention is not limited thereto and can also be applied to the coding of a video frame composed of three or more layers.
The respective components of Figs. 10 to 12 can be realized by software, by hardware or by a combination of software and hardware, the software being, for example, a task, class, subroutine, process, object, execution thread or program executed in a designated area of memory, and the hardware being, for example, an FPGA (field-programmable gate array) or an ASIC (application-specific integrated circuit). The respective components may be included in a computer-readable storage medium, or parts of them may be distributed over a plurality of computers.
As described above, according to the present invention, the performance of a codec using residual prediction or inter prediction can be improved.
In particular, the performance of a codec that uses intra-base prediction under the low-complexity decoding condition can be improved.
The preferred embodiments of the present invention have been described for illustrative purposes, and those skilled in the art will appreciate that various modifications, additions and substitutions are possible without departing from the scope and spirit of the invention as disclosed in the accompanying claims. Therefore, the scope of the present invention should be defined by the appended claims and their legal equivalents.
This application claims priority from Korean Patent Application No. 10-2006-0022871 filed on March 10, 2006 in the Korean Intellectual Property Office, U.S. Provisional Patent Application No. 60/758,227 filed on January 12, 2006 in the United States Patent and Trademark Office, and U.S. Provisional Patent Application No. 60/760,401 filed on January 20, 2006, the disclosures of which are incorporated herein by reference in their entirety.

Claims (35)

1. A multilayer-based video encoding method comprising:
(a) obtaining a difference between a prediction block for a second block of a lower layer, which corresponds to a first block included in a current layer, and the second block;
(b) adding the obtained difference to a prediction block for the first block;
(c) smoothing a third block generated as a result of the addition using a smoothing function; and
(d) encoding a difference between the first block and the smoothed third block.
2. The video encoding method of claim 1, wherein the prediction block for the first block and the prediction block for the second block are inter prediction blocks.
3. The video encoding method of claim 1, wherein the prediction block for the second block is obtained through a motion estimation process and a motion compensation process.
4. The video encoding method of claim 3, wherein the prediction block for the first block is obtained through a motion compensation process using the motion vector produced in the motion estimation process.
5. The video encoding method of claim 1, further comprising upsampling the obtained difference before (b), wherein the difference added in (b) is the upsampled difference.
6. The video encoding method of claim 1, wherein the smoothing function is expressed as a linear combination of a pixel to be smoothed and its neighboring pixels.
7. The video encoding method of claim 6, wherein the neighboring pixels are two pixels adjacent to the pixel to be smoothed in the vertical or horizontal direction.
8. The video encoding method of claim 7, wherein the weight of the pixel to be smoothed is 1/2 and the weights of the two neighboring pixels are each 1/4.
9. The video encoding method of claim 6, wherein (c) comprises smoothing pixels while moving, within the third block, a horizontal window containing a pixel to be smoothed and the neighboring pixels located to the left and right of the pixel.
10. The video encoding method of claim 6, wherein (c) comprises smoothing pixels while moving, within the third block, a vertical window containing a pixel to be smoothed and the neighboring pixels located above and below the pixel.
11. The video encoding method of claim 6, wherein (c) comprises smoothing pixels while moving, along the left boundary of the third block, a horizontal window containing a pixel adjacent to the left boundary of the third block and the neighboring pixels located to the left and right of the pixel.
12. The video encoding method of claim 6, wherein (c) comprises smoothing pixels while moving, along the upper boundary of the third block, a vertical window containing a pixel adjacent to the upper boundary of the third block and the neighboring pixels located above and below the pixel.
13. The video encoding method of claim 9, wherein the third block is a macroblock or a sub-block.
14. A method of generating a bitstream by encoding a block of a video frame using a difference between the block and a prediction block, the method comprising inserting into the bitstream information indicating whether the prediction block has been smoothing-filtered.
15. The method of claim 14, wherein the prediction block is obtained from an inter prediction block and a residual block of a lower layer.
16. The method of claim 15, further comprising inserting into the bitstream information indicating whether the block is predicted by the prediction block.
17. The method of claim 14, wherein residual prediction is applied to the block and the block is single-loop decoded.
18. A storage medium comprising:
a first area containing information encoded by subtracting a prediction block from a block of a video signal; and
a second area containing information indicating whether the prediction block has been smoothing-filtered.
19. The storage medium of claim 18, wherein the prediction block is obtained from an inter prediction block and a residual block of a lower layer.
20. The storage medium of claim 19, further comprising a third area containing information indicating whether the block is predicted by the prediction block.
21. The storage medium of claim 18, wherein residual prediction is applied to the block and the block is single-loop decoded.
22. A method of decoding a current block of a video frame from a predicted block, the method comprising:
reconstructing the predicted block;
smoothing-filtering the predicted block; and
reconstructing the current block from the smoothing-filtered predicted block.
23. The method of claim 22, wherein the predicted block is obtained from an inter-predicted block for the current block and a residual block of a lower layer of the current block.
24. The method of claim 22, further comprising determining information indicating whether the predicted block is to be smoothing-filtered.
25. The method of claim 23, wherein the smoothing filtering is expressed as a linear combination of a pixel to be smoothed and its neighboring pixels.
26. The method of claim 25, wherein the neighboring pixels are two pixels adjacent to the pixel to be smoothed in a horizontal or vertical direction.
27. The method of claim 26, wherein the smoothing filtering weights the pixel to be smoothed by 1/2 and weights each of the two neighboring pixels by 1/4.
28. The method of claim 26, wherein, when the pixel to be smoothed is adjacent to a boundary of the current block, a pixel of an adjacent block is selected as the neighboring pixel.
29. A method of decoding a current block of a video frame from a predicted block, the method comprising:
determining whether the current block uses the predicted block;
determining whether the current block uses a base-layer skip mode;
determining whether the current block uses smoothing filtering;
reconstructing the predicted block, and smoothing-filtering the predicted block; and
reconstructing the current block from the predicted block.
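For illustration only (not part of the claims): a schematic sketch of the decision sequence of claim 29. It assumes the three determinations have already been made from the bitstream, and takes the prediction, smoothing, and reconstruction steps as caller-supplied callables; none of the names correspond to real bitstream syntax elements.

```python
def decode_with_mode_decisions(uses_pred_block, uses_base_layer_skip, uses_smoothing,
                               build_pred, build_base_skip_pred, smooth, reconstruct):
    # claim 29 only covers blocks decoded from a predicted block
    if not uses_pred_block:
        return None

    # choose how the predicted block is formed
    pred = build_base_skip_pred() if uses_base_layer_skip else build_pred()

    # optionally smoothing-filter the predicted block
    if uses_smoothing:
        pred = smooth(pred)

    # reconstruct the current block from the (possibly smoothed) predicted block
    return reconstruct(pred)
```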
30. The method of claim 29, wherein the predicted block is obtained from an inter-predicted block for the current block and a residual block of a lower layer of the current block.
31. The method of claim 30, wherein the smoothing filtering is expressed as a linear combination of a pixel of the current block and two neighboring pixels.
32. The method of claim 30, wherein the pixel of the current block forms a linear combination with two neighboring pixels located on the upper and lower sides, or on the left and right sides, of the pixel.
33. A multilayer-based video decoding method comprising:
(a) reconstructing a first residual signal from texture data of a first block of a current frame included in an input bitstream;
(b) reconstructing a second residual signal of a base layer, the second residual signal being included in the bitstream and corresponding to the first block;
(c) adding the first residual signal to the second residual signal;
(d) smoothing an inter-predicted block for the first block using a smoothing filter; and
(e) adding the result of the addition to the smoothed inter-predicted block.
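For illustration only (not part of the claims): a minimal sketch of steps (c)-(e) of claim 33, assuming the two residual signals of steps (a) and (b) have already been decoded into 2-D lists and that a smoothing filter such as the one sketched after claim 13 is supplied by the caller.

```python
def decode_block(first_residual, base_residual, inter_pred, smooth):
    # (c) add the current-layer residual to the base-layer residual
    summed = [[a + b for a, b in zip(row_a, row_b)]
              for row_a, row_b in zip(first_residual, base_residual)]

    # (d) smooth the inter-predicted block for the first block
    smoothed_pred = smooth(inter_pred)

    # (e) add the summed residual to the smoothed prediction to obtain
    #     the reconstructed block
    return [[s + p for s, p in zip(row_s, row_p)]
            for row_s, row_p in zip(summed, smoothed_pred)]
```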
34. A multilayer-based video encoder comprising:
a unit which obtains a difference between a predicted block for a second block of a lower layer and the second block, the predicted block for the second block corresponding to a first block included in a current layer;
a unit which adds the obtained difference to a predicted block for the first block;
a unit which smooths a third block generated as a result of the addition using a smoothing function; and
a unit which encodes a difference between the first block and the smoothed third block.
35. A multilayer-based video decoder comprising:
a unit which reconstructs a first residual signal from texture data of a first block of a current frame included in an input bitstream;
a unit which reconstructs a second residual signal of a base layer, the second residual signal being included in the bitstream and corresponding to the first block;
a unit which adds the second residual signal to a predicted block for the first block;
a unit which smooths a third block generated as a result of the addition using a smoothing filter; and
a unit which adds the first residual signal to the smoothed third block.
CN 200710002177 2006-01-12 2007-01-12 Multilayer-based video encoding/decoding method and video encoder/decoder using smoothing prediction Pending CN101001383A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US75822706P 2006-01-12 2006-01-12
US60/758,227 2006-01-12
US60/760,401 2006-01-20
KR22871/06 2006-03-10

Publications (1)

Publication Number Publication Date
CN101001383A true CN101001383A (en) 2007-07-18

Family

ID=38693162

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200710002177 Pending CN101001383A (en) 2006-01-12 2007-01-12 Multilayer-based video encoding/decoding method and video encoder/decoder using smoothing prediction

Country Status (1)

Country Link
CN (1) CN101001383A (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101828402B (en) * 2007-10-16 2013-09-11 汤姆森特许公司 Methods and apparatus for artifact removal for bit depth scalability
CN101828401B (en) * 2007-10-16 2013-07-17 汤姆森许可贸易公司 Methods and apparatus for artifact removal for bit depth scalability
CN104811700A (en) * 2009-08-17 2015-07-29 三星电子株式会社 Method and apparatus for encoding video, and method and apparatus for decoding video
CN104811700B (en) * 2009-08-17 2016-11-02 三星电子株式会社 To the method and apparatus of Video coding and the method and apparatus to video decoding
US9392283B2 (en) 2009-08-17 2016-07-12 Samsung Electronics Co., Ltd. Method and apparatus for encoding video, and method and apparatus for decoding video
CN102484704B (en) * 2009-08-17 2015-04-01 三星电子株式会社 Method and apparatus for encoding video, and method and apparatus for decoding video
US9369715B2 (en) 2009-08-17 2016-06-14 Samsung Electronics Co., Ltd. Method and apparatus for encoding video, and method and apparatus for decoding video
US9374591B2 (en) 2009-08-17 2016-06-21 Samsung Electronics Co., Ltd. Method and apparatus for encoding video, and method and apparatus for decoding video
CN102484704A (en) * 2009-08-17 2012-05-30 三星电子株式会社 Method and apparatus for encoding video, and method and apparatus for decoding video
US9277224B2 (en) 2009-08-17 2016-03-01 Samsung Electronics Co., Ltd. Method and apparatus for encoding video, and method and apparatus for decoding video
US9313503B2 (en) 2009-08-17 2016-04-12 Samsung Electronics Co., Ltd. Method and apparatus for encoding video, and method and apparatus for decoding video
US9313502B2 (en) 2009-08-17 2016-04-12 Samsung Electronics Co., Ltd. Method and apparatus for encoding video, and method and apparatus for decoding video
US9319686B2 (en) 2009-08-17 2016-04-19 Samsung Electronics Co., Ltd. Method and apparatus for encoding video, and method and apparatus for decoding video
CN105163131A (en) * 2010-05-12 2015-12-16 Sk电信有限公司 Video decoding apparatus and method
CN103238329A (en) * 2010-12-06 2013-08-07 索尼公司 Image decoding device, motion vector decoding method, image encoding device, and motion vector encoding method
WO2014047781A1 (en) * 2012-09-25 2014-04-03 Mediatek Singapore Pte. Ltd. Methods for inter-view residual prediction
CN105052145A (en) * 2012-12-28 2015-11-11 高通股份有限公司 Parsing syntax elements in three-dimensional video coding

Similar Documents

Publication Publication Date Title
US11076175B2 (en) Video encoding method for encoding division block, video decoding method for decoding division block, and recording medium for implementing the same
KR100772873B1 (en) Video encoding method, video decoding method, video encoder, and video decoder, which use smoothing prediction
JP5036884B2 (en) Interlaced video encoding and decoding
CN101069429B (en) Method and apparatus for multi-layered video encoding and decoding
CN103270700B (en) Enhanced intra-prediction coding using planar representations
WO2010143583A1 (en) Image processing device and method
KR100944333B1 (en) A fast inter-layer prediction mode decision method in scalable video coding
KR20050045746A (en) Method and device for motion estimation using tree-structured variable block size
CN104054338A (en) Bitdepth And Color Scalable Video Coding
WO2010131601A1 (en) Image processing device, method, and program
JPWO2006001485A1 (en) Motion prediction compensation method and motion prediction compensation device
CN103281531B (en) Towards the quality scalable interlayer predictive coding of HEVC
JP2006025077A (en) Encoder, encoding method, program therefor, and recording medium recording program therefor
CN101001383A (en) Multilayer-based video encoding/decoding method and video encoder/decoder using smoothing prediction
JP2009089332A (en) Motion prediction method and motion predictor
WO2006059848A1 (en) Method and apparatus for multi-layered video encoding and decoding
KR101739580B1 (en) Adaptive Scan Apparatus and Method therefor
KR101691380B1 (en) Dct based subpixel accuracy motion estimation utilizing shifting matrix

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication