CN103096078B - Inter-layer prediction method and device for video signal - Google Patents

Inter-layer prediction method and device for video signal

Info

Publication number
CN103096078B
CN103096078B (application CN201210585882.3A)
Authority
CN
China
Prior art keywords
macroblock
layer
frame
picture
inter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201210585882.3A
Other languages
Chinese (zh)
Other versions
CN103096078A (en)
Inventor
朴胜煜
全柄文
朴志皓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
LG Electronics Inc
Original Assignee
LG Electronics Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020060111894A external-priority patent/KR20070074451A/en
Priority claimed from KR1020060111893A external-priority patent/KR20070075257A/en
Priority claimed from KR1020070001582A external-priority patent/KR20070095180A/en
Priority claimed from KR1020070001587A external-priority patent/KR20070075293A/en
Application filed by LG Electronics Inc filed Critical LG Electronics Inc
Publication of CN103096078A
Application granted
Publication of CN103096078B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention relates to an inter-layer prediction method for a video signal, and in particular to a method of performing inter-layer motion prediction when encoding or decoding a video signal. The method identifies the types of the pictures in the base layer and the current layer, or the types of the macroblocks included in those pictures. If the picture in the current layer, or a macroblock included in it, is identified as being of field type, while the picture in the base layer, or a macroblock included in it, is identified as progressive, a block on a virtual layer is formed by copying the motion information of a macroblock of the base layer, and the motion information copied from the block on the virtual layer is used for inter-layer motion prediction of a macroblock of the picture in the current layer.

Description

Inter-layer prediction method and device for video signal
This application is a divisional of the Chinese patent application filed on January 9, 2007 under application number 200780005672.X (international application number PCT/KR2007/000147) and entitled "Inter-layer prediction method for video signal".
1. Technical Field
The present invention relates to methods of performing inter-layer prediction when encoding/decoding a video signal.
2. Background Art
A scalable video codec (SVC) encodes a video into a picture sequence of the highest image quality while ensuring that a part of the encoded picture sequence (specifically, a partial frame sequence selected intermittently from the whole frame sequence) can be decoded and used to display the video at a lower image quality.
Although low-quality video can be displayed by receiving and processing part of a picture sequence encoded according to such a scalable scheme, the problem remains that the image quality drops significantly when the bit rate is lowered. One solution to this problem is to provide an auxiliary picture sequence of low bit rate, for example a picture sequence with a small screen size and/or a low frame rate, as at least one layer of the hierarchy.
When two sequences are provided, the auxiliary (lower) picture sequence is called the base layer and the main (upper) picture sequence is called the enhanced or enhancement layer. Because the same video signal source is encoded into the two layers, their video signals are redundant. To raise the coding efficiency of the enhancement layer, the video signal of the enhancement layer is coded using the coded information (motion information or texture information) of the base layer.
Although a single video source 1 may be encoded into multiple layers with different transmission rates, as shown in Fig. 1a, multiple video sources 2b containing the same content 2a under different scan formats may also be encoded into the respective layers, as shown in Fig. 1b. In this case too, an encoder coding the upper layer can improve coding gain by performing inter-layer prediction using the coded information of the lower layer, since the two sources 2b provide the same content 2a.
Therefore, when different sources are encoded into the respective layers, an inter-layer prediction method is needed that takes the scan format of the video signal into consideration. An interlaced video may be encoded into even and odd fields, and may also be encoded into pairs of odd and even macroblocks within a frame; accordingly, the picture type used for coding the interlaced video signal must also be considered for inter-layer prediction.
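As a hedged illustration of the two interlaced arrangements just described (the function names and the list-of-rows picture representation are ours for illustration, not the patent's), the following Python sketch separates an interlaced frame into its even and odd fields, the units in which a field sequence would be coded, and re-interleaves them:

```python
def split_into_fields(frame):
    """Separate an interlaced frame (a list of rows) into its
    even (top) field and odd (bottom) field."""
    even_field = frame[0::2]  # rows 0, 2, 4, ...
    odd_field = frame[1::2]   # rows 1, 3, 5, ...
    return even_field, odd_field

def weave_fields(even_field, odd_field):
    """Re-interleave two fields back into a full frame."""
    frame = []
    for top_row, bottom_row in zip(even_field, odd_field):
        frame.append(top_row)
        frame.append(bottom_row)
    return frame
```

Weaving the two fields back together recovers the original frame, which is the property the macroblock-pair arrangement exploits.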
In general, the enhancement layer provides pictures of higher resolution than those of the base layer. Accordingly, if the pictures of the layers have different resolutions when different sources are encoded into the respective layers, interpolation must also be performed to raise the picture resolution (i.e., the picture size). Because coding efficiency is higher the closer the image of the base-layer picture used for inter-layer prediction is to the image of the enhancement-layer picture, an interpolation method is needed that takes the scan formats of the video signals of the layers into consideration.
3. Summary of the Invention
An object of the present invention is to provide a method of performing inter-layer prediction in situations where at least one of two layers contains an interlaced video signal component.
Another object of the present invention is to provide a method of performing inter-layer motion prediction between layers whose pictures have different spatial resolutions (scalability), according to the picture types.
Another object of the present invention is to provide a method of performing inter-layer texture prediction between layers whose pictures have different spatial and/or temporal resolutions (scalability).
An inter-layer motion prediction method according to the present invention comprises: setting the motion-related information of an intra-mode macroblock to the motion-related information of an inter-mode macroblock, where the intra-mode and inter-mode macroblocks are two vertically adjacent macroblocks of the base layer; and then deriving motion information for a vertically adjacent macroblock pair from the two vertically adjacent macroblocks, for use in inter-layer motion prediction.
Another inter-layer motion prediction method according to the present invention comprises: setting the intra-mode macroblock of two vertically adjacent intra-mode and inter-mode macroblocks of the base layer to an inter-mode block having zero motion-related information; and then deriving motion information for a vertically adjacent macroblock pair from the two vertically adjacent macroblocks, for use in inter-layer motion prediction.
Another inter-layer motion prediction method according to the present invention comprises: deriving the motion information of a single macroblock from the motion information of a vertically adjacent frame macroblock pair of the base layer; and using the derived motion information as prediction information for the motion information of a field macroblock in the current layer, or for the respective motion information of a field macroblock pair in the current layer.
Another inter-layer motion prediction method according to the present invention comprises: deriving the respective motion information of two macroblocks from the motion information of a single field macroblock of the base layer, or from the motion information of a single field macroblock selected from a vertically adjacent field macroblock pair of the base layer; and using the derived motion information as prediction information for the respective motion information of a frame macroblock pair of the current layer.
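The frame-to-field and field-to-frame derivations above can be pictured with a much-simplified sketch (ours, not the patent's normative procedure): when frame macroblock motion is mapped to a field macroblock, the vertical motion component is halved, because a field spans half the vertical resolution of a frame, and the reverse mapping doubles it:

```python
def frame_pair_to_field_mv(mv_top, mv_bottom):
    """Derive one field-macroblock motion vector from a vertically
    adjacent frame-macroblock pair: average the pair's vectors, then
    halve the vertical component (a field has half the frame height).
    Motion vectors are (mv_x, mv_y) tuples in integer units."""
    avg_x = (mv_top[0] + mv_bottom[0]) // 2
    avg_y = (mv_top[1] + mv_bottom[1]) // 2
    return (avg_x, avg_y // 2)

def field_to_frame_pair_mv(mv_field):
    """Derive motion vectors for a frame-macroblock pair from a single
    field macroblock: double the vertical component and copy the
    result to both macroblocks of the pair."""
    mv = (mv_field[0], mv_field[1] * 2)
    return mv, mv
```

The averaging and the zero/copy handling of intra-mode partners in the real scheme involve more cases; this shows only the vertical-scaling idea.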
An inter-layer motion prediction method according to the present invention for layers with pictures of different resolutions comprises: converting a picture of the lower layer into a frame picture of the same resolution, using a frame-macroblock conversion prediction method selected according to the picture type and the types of the macroblocks in the picture; upsampling the frame picture so that it has the same resolution as the upper layer; and then applying an inter-layer prediction method suited to the frame macroblock types in the upsampled frame picture and the macroblock types in the upper-layer picture.
Another inter-layer motion prediction method according to the present invention for layers with pictures of different resolutions comprises: identifying the types of the pictures in the lower and upper layers and/or the types of the macroblocks included in those pictures; according to the identification result, applying to the lower-layer picture a method of predicting a frame macroblock pair from a single field macroblock, so as to construct a virtual picture having the same aspect ratio as the upper-layer picture; upsampling the virtual picture; and then applying inter-layer motion prediction to the upper layer using the upsampled virtual picture.
Another inter-layer motion prediction method according to the present invention for layers with pictures of different resolutions comprises: identifying the types of the pictures in the lower and upper layers and/or the types of the macroblocks included in those pictures; according to the identification result, applying to the lower-layer picture a method of predicting a frame macroblock pair from a single field macroblock, so as to construct a virtual picture having the same aspect ratio as the upper-layer picture; and applying inter-layer motion prediction to the upper-layer picture using the constructed virtual picture.
Another inter-layer motion prediction method according to the present invention for layers with pictures of different resolutions comprises: identifying the types of the lower- and upper-layer pictures; if the lower-layer picture is of field type and the upper-layer picture is progressive, copying the motion information of the blocks in the lower-layer picture to construct a virtual picture; upsampling the virtual picture; and applying a frame-macroblock-to-frame-macroblock motion prediction method between the upsampled virtual picture and the upper-layer picture.
Another inter-layer motion prediction method according to the present invention for layers with pictures of different resolutions comprises: identifying the types of the lower- and upper-layer pictures; if the lower-layer picture is of field type and the upper-layer picture is progressive, copying the motion information of the blocks of the lower layer to construct a virtual picture; and applying inter-layer motion prediction to the upper-layer picture using the virtual picture.
In an embodiment of the present invention, the partition mode, reference index, and motion vector are predicted in that order in inter-layer motion prediction.
In another embodiment of the present invention, the reference index, motion vector, and partition mode are predicted in that order.
In another embodiment of the present invention, the motion information of the field macroblock pair of the virtual base layer to be used for inter-layer motion prediction is derived from the motion information of a frame macroblock pair of the base layer.
In another embodiment of the present invention, the motion information of a field macroblock in an even or odd field picture of the virtual base layer to be used for inter-layer motion prediction is derived from the motion information of a frame macroblock pair of the base layer.
In another embodiment of the present invention, a macroblock is selected from a field macroblock pair of the base layer, and the motion information of the frame macroblock pair of the virtual base layer to be used for inter-layer motion prediction is derived from the motion information of the selected macroblock.
In another embodiment of the present invention, the motion information of the frame macroblock pair of the virtual base layer to be used for inter-layer motion prediction is derived from the motion information of a field macroblock in an even or odd field picture of the base layer.
In another embodiment of the present invention, the information of a field macroblock in an even or odd field picture of the base layer is copied to construct an additional virtual field macroblock, and the motion information of the frame macroblock pair of the virtual base layer to be used for inter-layer motion prediction is derived from the motion information of the field macroblock pair constructed in this way.
An inter-layer texture prediction method according to the present invention comprises: constructing a field macroblock pair from a vertically adjacent frame macroblock pair of the base layer; and using the respective texture information of the constructed field macroblock pair as texture prediction information for the respective texture information of a field macroblock pair of the current layer.
Another inter-layer texture prediction method according to the present invention comprises: constructing a single field macroblock from a vertically adjacent frame macroblock pair of the base layer; and using the texture information of the constructed single field macroblock as texture prediction information for a field macroblock of the current layer.
Another inter-layer texture prediction method according to the present invention comprises: constructing a frame macroblock pair from a single field macroblock, or from a vertically adjacent field macroblock pair, of the base layer; and using the respective texture information of the constructed frame macroblock pair as texture prediction information for the respective texture information of a frame macroblock pair of the current layer.
Another inter-layer texture prediction method according to the present invention comprises: constructing N frame macroblock pairs, where N is an integer greater than 1, from a vertically adjacent field macroblock pair of the base layer; and using the respective texture information of the constructed N frame macroblock pairs as texture prediction information for the respective texture information of N frame macroblock pairs located at different temporal positions in the current layer.
Another inter-layer texture prediction method according to the present invention comprises: separating each frame of the lower layer into two field pictures so that the lower layer has the same temporal resolution as the upper layer; upsampling each separated field picture in the vertical direction to expand it vertically; and then using each upsampled field picture for inter-layer texture prediction of a corresponding frame of the upper layer.
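Under the same caveat (a sketch, not the patent's exact resampling filter), the separate-then-vertically-upsample step can be outlined as follows, with each separated field expanded back to the frame height by simple line repetition standing in for a proper interpolation filter:

```python
def frame_to_upsampled_fields(frame):
    """Split a frame (a list of rows) into its two field pictures,
    then expand each field vertically to the original frame height by
    line repetition (a stand-in for an interpolation filter)."""
    fields = (frame[0::2], frame[1::2])
    upsampled = []
    for field in fields:
        expanded = []
        for row in field:
            expanded.append(row)
            expanded.append(list(row))  # repeated line
        upsampled.append(expanded)
    return upsampled
```

Each expanded field then has the dimensions of an upper-layer frame, which is what allows it to serve as a texture predictor for one.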
Another inter-layer texture prediction method according to the present invention comprises: upsampling each picture of the lower layer in the vertical direction to expand it vertically; and using each upsampled picture for inter-layer texture prediction of a corresponding frame of the upper layer.
Another inter-layer texture prediction method according to the present invention comprises: separating each frame of the upper layer into two field pictures; downsampling the pictures of the lower layer to reduce them in the vertical direction; and then using the downsampled pictures for inter-layer texture prediction of the separated field pictures of the upper layer.
A method of encoding a video signal using inter-layer prediction according to the present invention comprises: determining whether inter-layer texture prediction is to use the respective texture information of 2N blocks constructed by alternately selecting the lines of 2N blocks in an arbitrary picture of the base layer and arranging the selected lines in the order of selection, or the respective texture information of 2N blocks constructed by interpolating a block selected from the 2N blocks of the base layer; and incorporating information indicating this determination into the coded information.
A method of decoding a video signal using inter-layer prediction according to the present invention comprises: checking whether specific indication information is included in a received signal; and determining, based on the result of the check, whether inter-layer texture prediction is to use the respective texture information of 2N blocks constructed by alternately selecting the lines of 2N blocks in an arbitrary picture of the base layer and arranging the selected lines in the order of selection, or the respective texture information of 2N blocks constructed by interpolating a block selected from the 2N blocks of the base layer.
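A rough Python sketch of the two alternative texture constructions the indication flag selects between, under our simplified reading for N=1 (one block pair, blocks as lists of rows; not the normative process):

```python
def construct_by_line_alternation(block_a, block_b):
    """Build a block pair by alternately taking lines from block_a
    and block_b, arranged in the order selected (N=1 case)."""
    interleaved = []
    for row_a, row_b in zip(block_a, block_b):
        interleaved.append(row_a)
        interleaved.append(row_b)
    return interleaved[: len(block_a)], interleaved[len(block_a):]

def construct_by_interpolation(block):
    """Build two blocks from one selected block by vertical
    interpolation (line repetition standing in for a real filter)."""
    doubled = []
    for row in block:
        doubled.append(row)
        doubled.append(list(row))
    return doubled[: len(block)], doubled[len(block):]

def construct_texture(use_interpolation, block_a, block_b):
    """Dispatch on the decoded indication flag (0 = line alternation)."""
    if use_interpolation:
        return construct_by_interpolation(block_a)
    return construct_by_line_alternation(block_a, block_b)
```

A decoder that finds no indication information would behave as if the flag were 0, per the embodiment described below.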
In an embodiment of the present invention, each frame of the upper or lower layer is separated into two field pictures.
In an embodiment of the present invention, if the specific indication information is not included in the received signal, this case is treated the same as receiving a signal that includes the indication information set to 0 when determining the respective texture information of the blocks to be used for inter-layer prediction.
A method according to the present invention of using the video signal of the base layer for inter-layer texture prediction comprises: separating the interlaced video signal of the base layer into even and odd field components; enlarging the even and odd field components separately in the vertical and/or horizontal direction; and then combining the enlarged even and odd field components for use in inter-layer texture prediction.
Another method according to the present invention of using the video signal of the base layer for inter-layer texture prediction comprises: separating the progressive video signal of the base layer into an even-line group and an odd-line group; enlarging the even- and odd-line groups separately in the vertical and/or horizontal direction; and combining the enlarged even- and odd-line groups for use in inter-layer texture prediction.
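The separate-enlarge-recombine pattern shared by the two methods above can be sketched as follows (again a hedged illustration: line-repetition enlargement stands in for whatever filter an implementation would use, and the vertical factor is fixed at 2):

```python
def enlarge_fields_then_combine(picture, v_scale=2):
    """Split a picture (a list of rows) into its even- and odd-line
    groups, enlarge each group vertically by v_scale using line
    repetition, then re-interleave the enlarged groups line by line."""
    even_lines, odd_lines = picture[0::2], picture[1::2]

    def enlarge(lines):
        out = []
        for row in lines:
            out.extend([list(row) for _ in range(v_scale)])
        return out

    big_even, big_odd = enlarge(even_lines), enlarge(odd_lines)
    combined = []
    for even_row, odd_row in zip(big_even, big_odd):
        combined.append(even_row)
        combined.append(odd_row)
    return combined
```

Enlarging the two field components separately, rather than the interleaved picture as a whole, avoids mixing samples from different fields during the enlargement.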
Another method according to the present invention of using the video signal of the base layer for inter-layer texture prediction comprises: enlarging the interlaced video signal of the base layer in the vertical and/or horizontal direction so that it has the same resolution as the progressive video signal of the upper layer; and performing inter-layer texture prediction of the video signal of the upper layer based on the enlarged video signal.
Another method according to the present invention of using the video signal of the base layer for inter-layer texture prediction comprises: enlarging the progressive video signal of the base layer in the vertical and/or horizontal direction so that it has the same resolution as the interlaced video signal of the upper layer; and performing inter-layer texture prediction of the video signal of the upper layer based on the enlarged video signal.
In one embodiment of the invention, the separation and enlargement of the video signal are performed at the macroblock level (i.e., on a macroblock basis).
In another embodiment of the present invention, the separation and enlargement of the video signal are performed at the picture level.
In another embodiment of the present invention, the separation and enlargement of the video signal are performed if the picture formats of the two layers to which inter-layer texture prediction is to be applied differ, i.e., if one layer contains progressive pictures and the other contains interlaced pictures.
In another embodiment of the present invention, the separation and enlargement of the video signal are performed if the pictures of the two layers to which inter-layer texture prediction is to be applied are both interlaced.
4. Brief Description of the Drawings
Figs. 1a and 1b illustrate methods of encoding a single video source into multiple layers;
Figs. 2a and 2b schematically illustrate configurations of video signal encoding apparatuses to which an inter-layer prediction method according to the present invention is applied;
Figs. 2c and 2d illustrate the types of picture sequences used for coding an interlaced video signal;
Figs. 3a and 3b schematically show procedures in which a deblocking filter is applied to a base-layer picture constructed for inter-layer texture prediction, according to embodiments of the present invention;
Figs. 4a to 4f schematically show procedures in which the motion information of a field macroblock of the virtual base layer, to be used for inter-layer motion prediction of field macroblocks in an MBAFF frame, is derived from the motion information of frame macroblocks, according to embodiments of the present invention;
Fig. 4g schematically shows a procedure in which the texture information of a macroblock pair is used for texture prediction of a field macroblock pair in an MBAFF frame, according to an embodiment of the present invention;
Fig. 4h illustrates a method of converting a frame macroblock pair into a field macroblock pair according to an embodiment of the present invention;
Figs. 5a and 5b illustrate reference index and motion information derivation procedures according to other embodiments of the present invention;
Figs. 6a to 6c schematically show procedures in which the motion information of a field macroblock of the virtual base layer is derived from the motion information of frame macroblocks, according to embodiments of the present invention;
Fig. 6d schematically shows a procedure in which the texture information of a frame macroblock pair is used for texture prediction of field macroblocks in a field picture, according to an embodiment of the present invention;
Figs. 7a and 7b illustrate reference index and motion information derivation procedures according to other embodiments of the present invention;
Figs. 8a to 8c schematically show procedures in which the motion information of a frame macroblock of the virtual base layer, to be used for inter-layer motion prediction, is derived from the motion information of field macroblocks in an MBAFF frame, according to embodiments of the present invention;
Fig. 8d schematically shows a procedure in which the texture information of a field macroblock pair in an MBAFF frame is used for texture prediction of a frame macroblock pair, according to an embodiment of the present invention;
Fig. 8e illustrates a method of converting a field macroblock pair into a frame macroblock pair according to an embodiment of the present invention;
Figs. 8f and 8g schematically show procedures in which the texture information of a field macroblock pair in an MBAFF frame is used for inter-layer prediction of a frame macroblock pair when only one macroblock of the field macroblock pair is in inter mode, according to embodiments of the present invention;
Fig. 8h schematically shows a procedure in which the texture information of a field macroblock pair in an MBAFF frame is used for texture prediction of multiple frame macroblock pairs, according to an embodiment of the present invention;
Figs. 9a and 9b illustrate reference index and motion information derivation procedures according to other embodiments of the present invention;
Figs. 10a to 10c schematically show procedures in which the motion information of a frame macroblock of the virtual base layer, to be used for inter-layer motion prediction, is derived from the motion information of a field macroblock in a field picture, according to embodiments of the present invention;
Fig. 10d schematically shows a procedure in which the texture information of a field macroblock in a field picture is used for texture prediction of a frame macroblock pair, according to an embodiment of the present invention;
Fig. 11 illustrates reference index and motion information derivation procedures according to another embodiment of the present invention;
Figs. 12a and 12b schematically show procedures in which the motion information of a frame macroblock of the virtual base layer, to be used for inter-layer motion prediction, is derived from the motion information of a field macroblock in a field picture, according to other embodiments of the present invention;
Figs. 13a to 13d schematically show procedures in which the motion information of a field macroblock of the virtual base layer, to be used for inter-layer motion prediction, is derived from the motion information of field macroblocks according to the picture type, according to embodiments of the present invention;
Figs. 14a to 14k illustrate methods of performing inter-layer motion prediction according to the picture types when the spatial resolutions of the layers differ, according to various embodiments of the present invention;
Figs. 15a and 15b schematically show procedures of using a picture of the base layer, which has a different spatial resolution, for inter-layer texture prediction when the enhancement layer is progressive and the base layer is interlaced, according to embodiments of the present invention;
Figs. 16a and 16b schematically show procedures in which, in order to use a picture of the base layer for inter-layer texture prediction, the macroblock pairs in the picture are separated into field macroblocks and the separated macroblocks are enlarged, according to embodiments of the present invention;
Figs. 17a and 17b schematically show procedures of using a picture of the base layer, which has a different spatial resolution, for inter-layer texture prediction when the enhancement layer is interlaced and the base layer is progressive, according to embodiments of the present invention;
Fig. 18 schematically shows a procedure of using a picture of the base layer, which has a different spatial resolution, for inter-layer prediction when the enhancement layer and the base layer are both interlaced, according to an embodiment of the present invention;
Fig. 19a illustrates a procedure of applying inter-layer prediction when the enhancement layer is a progressive frame sequence and the two layers differ in picture type and temporal resolution, according to an embodiment of the present invention;
Fig. 19b illustrates a procedure of applying inter-layer prediction when the enhancement layer is a progressive frame sequence and the two layers have different picture types but the same resolution, according to an embodiment of the present invention;
Fig. 20 illustrates a procedure of applying inter-layer prediction when the base layer is a progressive frame sequence and the two layers differ in picture type and temporal resolution, according to an embodiment of the present invention; and
Fig. 21 illustrates a procedure of applying inter-layer prediction when the base layer is a progressive frame sequence and the two layers have different picture types but the same resolution, according to an embodiment of the present invention.
5. Detailed Description of Embodiments
Embodiments of the present invention will now be described in detail with reference to the accompanying drawings.
Fig. 2a schematically shows the components of a video signal encoding apparatus to which an inter-layer prediction method according to the present invention is applied. Although the apparatus of Fig. 2a is implemented to encode an input video signal into two layers, the principles of the present invention described below are also applicable to inter-layer processes when a video signal is encoded into three or more layers.
The inter-layer prediction method according to the present invention is performed in the enhancement layer (EL) encoder 20 of the apparatus of Fig. 2a. Coded information (motion information and texture information) is received from the base layer (BL) encoder 21, and inter-layer texture prediction or motion prediction is performed based on the received information; if necessary, the received information is decoded and the prediction is performed based on the decoded information. Of course, in the present invention, as shown in Fig. 2b, the input video signal may be coded using an already-encoded video source 3 as the base layer; the inter-layer prediction method described below is equally applicable in that case.
In the case of Fig. 2a, where the BL encoder 21 encodes the interlaced video signal, or in the case of Fig. 2b, where the video source 3 is already encoded, the interlaced video signal may have been coded by either of two methods. Specifically, in one method, as shown in Fig. 2c, the interlaced video signal is simply encoded field by field into a field sequence, while in the other method, as shown in Fig. 2d, a frame sequence is constructed by building each frame from macroblock pairs containing both (even and odd) fields and encoding it frame by frame. The upper macroblock of such a macroblock pair in a frame coded in this way is called the "top macroblock" and the lower one the "bottom macroblock". If the top macroblock is composed of even (or odd) field image components, the bottom macroblock is composed of odd (or even) field image components. A frame constructed in this way is called a macroblock-adaptive frame/field (MBAFF) frame. An MBAFF frame may contain not only macroblock pairs each consisting of an odd and an even field macroblock, but also macroblock pairs each consisting of two frame macroblocks.
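The frame/field macroblock-pair relationship just described can be made concrete with a small sketch (our illustration of the line regrouping, not the patent's normative conversion): a vertically adjacent frame macroblock pair covers 32 picture lines, and the corresponding field macroblock pair regroups those lines into the 16 even and 16 odd ones:

```python
def frame_pair_to_field_pair(top_mb, bottom_mb):
    """Regroup a vertically adjacent frame macroblock pair (two lists
    of 16 rows each, covering 32 consecutive picture lines) into a
    field macroblock pair: the 16 even lines and the 16 odd lines."""
    lines = top_mb + bottom_mb      # 32 consecutive picture lines
    field_top = lines[0::2]         # even-field macroblock
    field_bottom = lines[1::2]      # odd-field macroblock
    return field_top, field_bottom
```

This regrouping is the basic relation exploited when texture or motion information of a frame macroblock pair serves as a predictor for a field macroblock pair, and vice versa.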
Accordingly, a macroblock with interlaced image components in a picture may be a macroblock in a field or a macroblock in a frame. Each macroblock with interlaced image components is called a field macroblock, and each macroblock with progressive-scan image components is called a frame macroblock.
Therefore, the inter-layer prediction method must be determined by determining, for the macroblock to be coded in the EL encoder 20 and for the base-layer macroblock to be used in the inter-layer prediction of that macroblock, whether each is of frame macroblock type or field macroblock type. If a macroblock is a field macroblock, it must further be determined whether it is a macroblock in a field picture or a field macroblock in an MBAFF frame, as this also affects the inter-layer prediction method.
The method will be described for each of these cases in turn. Beforehand, it is assumed that the resolution of the current layer equals that of the base layer, i.e., that SpatialScalabilityType() is 0; the case in which the resolution of the current layer is higher than that of the base layer is described later. In the following description and the drawings, the terms "top" and "even" (or "odd") are used interchangeably, and the terms "bottom" and "odd" (or "even") are used interchangeably.
In order to utilize basic unit to carry out inter-layer prediction with coding or decoding enhancement layer, first need decoded base.Therefore, first base layer decoder is described as follows.
When the base layer is decoded, not only is base-layer motion information such as partition modes, reference indices, and motion vectors decoded, but so is the base-layer texture.
When base-layer texture is decoded for inter-layer texture prediction, not all image sample data of the base layer is decoded; this is done to reduce the decoder load. The image sample data of intra-mode macroblocks is decoded, while for inter-mode macroblocks only the residual data, that is, the error between image sample data, is decoded, without motion compensation from adjacent pictures.
Furthermore, base-layer texture decoding for inter-layer texture prediction is performed on a picture basis rather than on a macroblock basis, in order to construct base-layer pictures temporally aligned with enhancement-layer pictures. Such a base-layer picture is constructed from the image sample data reconstructed from intra-mode macroblocks and the residual data decoded from inter-mode macroblocks, as mentioned above.
Intra-mode or inter-mode motion compensation, and transforms such as the DCT and quantization, are carried out on an image-block basis, for example on a 16x16 macroblock basis or a 4x4 sub-block basis. This causes blocking artifacts at block boundaries that distort the picture. A deblocking filter is applied to reduce these blocking artifacts. The deblocking filter smooths the edges of image blocks to improve the quality of video frames.
Whether the deblocking filter is applied depends on the strength of the image blocks at a boundary and on the gradient of the pixels around the boundary, so as to reduce blocking distortion. The strength, or degree, of the deblocking filter is determined by the quantization parameter, the intra mode, the inter mode, the image-block partition mode indicating block sizes, the motion vectors, the pixel values before deblocking, and the like.
In inter-layer prediction, the deblocking filter is applied to the intra-mode macroblocks in the base-layer picture that serve as the basis for texture prediction of intra-base (intraBL, or inter-layer intra mode) macroblocks of the enhancement layer.
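As an illustration only, and not the normative H.264/SVC filter, the idea of gating the smoothing on the step across a block edge can be sketched as follows. The function name, the single-threshold test, and the simple averaging are all assumptions standing in for the quantization-parameter- and mode-dependent strength decision described above.

```python
import numpy as np

# Non-normative sketch: smooth one vertical block boundary only where the
# step across the edge is small (large steps are treated as real image edges).
def simple_deblock_vertical_edge(picture, edge_col, threshold=10):
    p = picture[:, edge_col - 1].astype(int)   # last column left of the edge
    q = picture[:, edge_col].astype(int)       # first column right of the edge
    mask = np.abs(p - q) < threshold           # smooth only small, blocky steps
    avg = (p + q) // 2
    out = picture.copy()
    out[:, edge_col - 1] = np.where(mask, avg, p).astype(picture.dtype)
    out[:, edge_col] = np.where(mask, avg, q).astype(picture.dtype)
    return out

pic = np.array([[10, 10, 14, 14]] * 4, dtype=np.uint8)
out = simple_deblock_vertical_edge(pic, 2)
assert (out[:, 1] == 12).all() and (out[:, 2] == 12).all()  # small step smoothed
```

A real deblocking filter also varies the number of pixels it touches on each side of the edge; this sketch only shows the edge-versus-artifact decision.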
When the two layers to be coded according to the inter-layer prediction method are each encoded entirely as a picture sequence as shown in Fig. 2c, both layers are regarded as being entirely in frame format, so that the coding/decoding process including the deblocking filter is easily derived from the coding/decoding process for frame format.
A method of performing deblocking according to embodiments of the present invention will now be described for the cases where the picture format of the base layer differs from that of the enhancement layer: the case where the enhancement layer is in frame (i.e. progressive) format and the base layer is in field (i.e. interlaced) format; the case where the enhancement layer is in field format and the base layer is in frame format; and the case where, as shown in Figs. 2c and 2d, both the enhancement layer and the base layer are in field format but one of them is coded as a field picture sequence and the other as MBAFF frames.
Figs. 3a and 3b schematically show processes in which a base-layer picture is constructed and deblocked for inter-layer texture prediction according to embodiments of the present invention.
Fig. 3a shows an embodiment in which the enhancement layer is in frame format and the base layer is in field format, and Fig. 3b shows an embodiment in which the enhancement layer is in field format and the base layer is in frame format.
In these embodiments, for inter-layer texture prediction, the textures of the inter-mode and intra-mode macroblocks of the base layer are decoded to construct a base-layer picture containing image sample data and residual data, and the deblocking filter is applied to the constructed picture to reduce blocking artifacts, after which the picture is upsampled according to the ratio of the resolution (or screen size) of the enhancement layer to that of the base layer.
The first method (Method 1) in Figs. 3a and 3b divides the base-layer picture into two field pictures before performing deblocking. In this method, when the enhancement layer is coded using a base layer coded in a different picture format, the base-layer picture is divided into an even-line field picture and an odd-line field picture, and the two field pictures are deblocked (that is, filtered for deblocking) and upsampled. The two pictures are then interleaved into a single picture, and inter-layer texture prediction is performed on the basis of that single picture.
This first method comprises the following three steps.
In the separation step (Step 1), the base-layer picture is divided into a top-field (or even-field) picture comprising the even lines and a bottom-field (or odd-field) picture comprising the odd lines. The base-layer picture is a video picture comprising residual data (inter-mode data) and image sample data (intra-mode data) reconstructed from the base-layer data stream by motion compensation.
In the deblocking step (Step 2), the field pictures separated in the separation step are deblocked by a deblocking filter. A conventional deblocking filter may be used here.
When the resolution of the enhancement layer differs from that of the base layer, the deblocked field pictures are upsampled according to the ratio of the enhancement-layer resolution to the base-layer resolution.
In the interleaving step (Step 3), the upsampled top-field picture and the upsampled bottom-field picture are interleaved line by line into a single picture. Texture prediction of the enhancement layer is then performed on the basis of this single picture.
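The data flow of the three steps can be sketched in Python as follows. The deblocking filter is reduced to a placeholder and nearest-neighbour repetition stands in for the standard's interpolation filters, so this illustrates only the separate/deblock/upsample/interleave ordering, not the normative filters.

```python
import numpy as np

def split_fields(picture):
    """Step 1: split a base-layer picture into top (even-line) and bottom (odd-line) fields."""
    return picture[0::2], picture[1::2]

def deblock(field):
    """Placeholder for a conventional deblocking filter (identity here)."""
    return field

def upsample(field, ratio):
    """Step 2 (resolution mismatch): nearest-neighbour upsampling by an integer ratio."""
    return np.repeat(np.repeat(field, ratio, axis=0), ratio, axis=1)

def interleave(top, bottom):
    """Step 3: interleave the two fields line by line into one picture."""
    out = np.empty((top.shape[0] + bottom.shape[0], top.shape[1]), dtype=top.dtype)
    out[0::2] = top
    out[1::2] = bottom
    return out

base = np.arange(32, dtype=np.int32).reshape(8, 4)   # toy 8x4 base-layer picture
top, bot = split_fields(base)
single = interleave(upsample(deblock(top), 1), upsample(deblock(bot), 1))
assert np.array_equal(single, base)  # with ratio 1, separation and interleaving round-trip
```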
In the second method (Method 2) in Figs. 3a and 3b, when the enhancement layer is coded using a base layer coded in a different picture format, the base-layer picture is not divided into two field pictures; it is deblocked and upsampled directly, and inter-layer texture prediction is performed on the basis of the resulting picture.
In this second method, the base-layer picture corresponding to the enhancement-layer picture to be coded by inter-layer texture prediction is deblocked immediately, without being divided into top- and bottom-field pictures, and is then upsampled. Texture prediction of the enhancement layer is then performed on the basis of the upsampled picture.
The deblocking filter applied to a base-layer picture constructed for inter-layer texture prediction is applied only to regions containing image sample data decoded from intra-mode macroblocks, and not to regions containing residual data.
When the base layer of Fig. 3a is coded in field format, that is, coded as a field picture sequence as shown in Fig. 2c or as MBAFF frames as shown in Fig. 2d, applying the second method requires a process of alternately interleaving the lines of the top- and bottom-field pictures into a single picture (the case of Fig. 2c), or of alternately interleaving the top and bottom macroblocks of each field macroblock pair into a single picture (the case of Fig. 2d). This process is described in detail with reference to Figs. 8d and 8e. The top- and bottom-field pictures or top and bottom macroblocks to be interleaved are field pictures or macroblocks comprising residual data (inter-mode data) and image sample data (intra-mode data) reconstructed by motion compensation.
In addition, when the top and bottom macroblocks of a (base-layer) field macroblock pair in an MBAFF frame as in Fig. 2d are of different modes and an intra-mode block is selected from them for inter-layer texture prediction of an enhancement-layer macroblock pair (the case of Fig. 8g described later); when a frame (picture) in a base layer coded as field macroblock pairs in an MBAFF frame as in Fig. 2d is not temporally aligned with any enhancement-layer picture (the case of Fig. 8h described later); or when the texture of an enhancement-layer macroblock pair is predicted from a base layer having field pictures as in Fig. 2c (the case of Fig. 10d described later), the selected field macroblock is upsampled into a temporary macroblock pair ("841" in Fig. 8g, "851" and "852" in Fig. 8h) or into two temporary macroblocks ("1021" in Fig. 10d), and the intra-mode macroblock deblocking filter is applied to these macroblocks.
The inter-layer texture prediction described in the various embodiments below is performed on the basis of base-layer pictures deblocked as described in the embodiments of Figs. 3a and 3b.
Inter-layer prediction methods will now be described for each case, classified by the macroblock type in the current layer to be coded/decoded and the base-layer macroblock type to be used for inter-layer prediction of the current-layer macroblock. In this description, the spatial resolution of the current layer is assumed equal to that of the base layer, as stated above.
I. Frame MB -> Field MB in MBAFF Frame
In this case, the macroblock in the current layer (EL) is coded as a field macroblock in an MBAFF frame, and the macroblocks in the base layer to be used for inter-layer prediction of the current-layer macroblock are coded as frame macroblocks. The video signal components contained in an upper and a lower macroblock of the base layer are the same as those contained in the co-located macroblock pair of the current layer. The upper and lower (top and bottom) macroblocks will be referred to as a macroblock pair, and the term "pair" is used in the following description for two vertically adjacent blocks. Inter-layer motion prediction is described first.
The EL encoder 20 uses, as the partition mode of the current macroblock, the macroblock partition mode obtained by merging the base-layer macroblock pair 410 into a single macroblock (by compressing it to half size in the vertical direction). Fig. 4a shows a specific example of this process. As shown, the corresponding base-layer macroblock pair 410 is first merged into a single macroblock (S41), and the partition mode of the merged macroblock is copied to another macroblock to construct a macroblock pair 411 (S42). The respective partition modes of this pair 411 are then applied to the virtual base-layer macroblock pair 412 (S43).
However, when the corresponding macroblock pair 410 is merged into a single macroblock, partitions not allowed by any partition mode may be created. To prevent this, the EL encoder 20 determines the partition mode according to the following rules.
1) Two vertically adjacent 8x8 blocks ("B8_0" and "B8_2" in Fig. 4a) of the base-layer macroblock pair are merged into a single 8x8 block. If neither of the corresponding 8x8 blocks is subdivided, they are merged into two 8x4 blocks; if either of the corresponding 8x8 blocks is subdivided, they are merged into four 4x4 blocks ("401" in Fig. 4a).
2) An 8x16 block of the base layer shrinks to an 8x8 block, a 16x8 block shrinks to two adjacent 8x4 blocks, and a 16x16 block shrinks to a 16x8 block.
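The two merging rules amount to halving every partition height and splitting any result that would not be a legal partition. Assuming partitions are represented as (width, height) pairs in luma samples, a sketch:

```python
# Rule 2 of the text: one base-layer partition of the macroblock pair is
# mapped onto the partition(s) it becomes after vertical compression by half.
def merge_partition(width, height):
    if (width, height) == (8, 16):
        return [(8, 8)]
    if (width, height) == (16, 8):
        return [(8, 4), (8, 4)]        # two horizontally adjacent 8x4 blocks
    if (width, height) == (16, 16):
        return [(16, 8)]
    raise ValueError("pairs of 8x8 blocks are handled by rule 1")

# Rule 1: a vertical pair of 8x8 blocks merges into one 8x8 region whose
# sub-partitioning depends on whether either source block was subdivided.
def merge_8x8_pair(top_subdivided, bottom_subdivided):
    if not top_subdivided and not bottom_subdivided:
        return [(8, 4), (8, 4)]
    return [(4, 4)] * 4

assert merge_partition(16, 16) == [(16, 8)]
assert merge_8x8_pair(False, False) == [(8, 4), (8, 4)]
assert merge_8x8_pair(True, False) == [(4, 4)] * 4
```

The helper names and the (width, height) representation are assumptions for illustration; the encoder itself works on partition mode codes rather than size tuples.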
If at least one macroblock of the corresponding macroblock pair is intra-coded, the EL encoder 20 first performs the following process before the merging process.
If only one of the two macroblocks is in intra mode, then, as shown in Fig. 4b, the motion information of the inter macroblock, such as its macroblock partition mode, reference indices, and motion vectors, is copied to the intra macroblock; or, as shown in Fig. 4c, the intra macroblock is regarded as a 16x16 inter macroblock having zero motion vectors and a reference index of 0; or, as shown in Fig. 4d, the reference index of the intra macroblock is set by copying the reference index of the inter macroblock, and zero motion vectors are assigned to the intra macroblock. The merging process mentioned above is then performed, followed by the reference index and motion vector derivation procedures described below.
The EL encoder 20 performs the following process to derive the reference indices of the current macroblock pair 412 from those of the corresponding macroblock pair 410.
If the two blocks of the base-layer 8x8 block pair corresponding to a current 8x8 block are subdivided into the same number of parts, the reference index of either block of the pair (the top or the bottom block) is determined as the reference index of the current 8x8 block. Otherwise, the reference index of the block of the pair that is subdivided into the smaller number of parts is determined as the reference index of the current 8x8 block.
In another embodiment of the present invention, the smaller of the reference indices set for the base-layer 8x8 block pair corresponding to a current 8x8 block is determined as the reference index of the current 8x8 block. For the example of Fig. 4e, this determination can be expressed as follows:
Reference index of current B8_0 = min(reference index of B8_0 of top frame MB, reference index of B8_2 of top frame MB),
Reference index of current B8_1 = min(reference index of B8_1 of top frame MB, reference index of B8_3 of top frame MB),
Reference index of current B8_2 = min(reference index of B8_0 of bottom frame MB, reference index of B8_2 of bottom frame MB), and
Reference index of current B8_3 = min(reference index of B8_1 of bottom frame MB, reference index of B8_3 of bottom frame MB).
The above reference index derivation procedure is applicable to both top- and bottom-field macroblocks. The reference index of each 8x8 block determined in this way is multiplied by 2, and the multiplied value is determined as its final reference index. This multiplication is made because the number of pictures in a field sequence is twice that in a frame sequence, the field pictures being divided into even and odd fields at decoding. Depending on the decoding algorithm, the final reference index of a bottom-field macroblock may instead be determined by multiplying its reference index by 2 and then adding 1.
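The min-based derivation of Fig. 4e together with the field scaling above can be sketched as follows. The function name and the representation of each macroblock as a 4-element list of 8x8-block reference indices (B8_0..B8_3) are assumptions for illustration.

```python
# Derive the four 8x8 reference indices of one virtual-base-layer field
# macroblock from a base-layer frame macroblock pair (Fig. 4e), then apply
# the x2 field scaling (optionally +1 for the bottom field).
def derive_field_ref_indices(top_mb, bottom_mb, is_bottom_field, add_one_for_bottom=False):
    refs = [
        min(top_mb[0], top_mb[2]),        # current B8_0
        min(top_mb[1], top_mb[3]),        # current B8_1
        min(bottom_mb[0], bottom_mb[2]),  # current B8_2
        min(bottom_mb[1], bottom_mb[3]),  # current B8_3
    ]
    offset = 1 if (is_bottom_field and add_one_for_bottom) else 0
    return [2 * r + offset for r in refs]

assert derive_field_ref_indices([1, 0, 3, 2], [0, 2, 1, 1], False) == [2, 0, 0, 2]
```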
The following is the procedure by which the EL encoder 20 derives the motion vectors of the virtual base-layer macroblock pair.
Motion vectors are determined on a 4x4 block basis, so the corresponding 4x8 block of the base layer is identified, as shown in Fig. 4f. If this corresponding 4x8 block is subdivided, the motion vector of its top or bottom 4x4 block is determined as the motion vector of the current 4x4 block. Otherwise, the motion vector of the corresponding 4x8 block is determined as the motion vector of the current 4x4 block. The determined motion vector, with its vertical component divided by 2, is used as the final motion vector of the current 4x4 block. This division is made because the image components contained in two frame macroblocks correspond to those of one field macroblock, so that the size of the field picture is reduced by half in the vertical direction.
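Assuming motion vectors are simple (mvx, mvy) integer pairs, the frame-to-field derivation above can be sketched as follows; real codecs store vectors in quarter-pel units and define rounding for negative components.

```python
# The vector taken from the corresponding base-layer block has its vertical
# component halved, since the field picture has half the vertical size.
def frame_to_field_mv(base_mv):
    mvx, mvy = base_mv
    return (mvx, mvy // 2)

# Hypothetical helper: choose the source vector per Fig. 4f, then scale.
# If the corresponding 4x8 block is subdivided, its top (or bottom) 4x4
# vector is used; otherwise the 4x8 block's own vector is used.
def derive_4x4_mv(is_4x8_subdivided, sub_4x4_mv, block_4x8_mv):
    chosen = sub_4x4_mv if is_4x8_subdivided else block_4x8_mv
    return frame_to_field_mv(chosen)

assert frame_to_field_mv((5, 8)) == (5, 4)
assert derive_4x4_mv(False, (1, 2), (3, 6)) == (3, 3)
```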
Once the motion information of the virtual base-layer field macroblock pair 412 has been determined in this way, it is used for inter-layer motion prediction of the target field macroblock pair 413 of the enhancement layer. Likewise, in the following description, once the motion information of a virtual base-layer macroblock or macroblock pair has been determined, it is used for inter-layer motion prediction of the corresponding macroblock or macroblock pair of the current layer. In the following description, this process is applied even where it is not explicitly mentioned that the motion information of the virtual base-layer macroblock or macroblock pair is used for inter-layer motion prediction of the corresponding macroblock or macroblock pair of the current layer.
Fig. 5 schematically shows how, according to another embodiment of the present invention, the motion information of the virtual base-layer field macroblock pair 500 to be used for inter-layer prediction is derived from the motion information of the base-layer frame macroblock pair corresponding to the current macroblock pair. In this embodiment, as shown in the figure, the reference index of the top or bottom 8x8 block of the top macroblock of the base-layer frame macroblock pair is used as the reference index of the top 8x8 block of each macroblock of the virtual base-layer pair 500, and the reference index of the top or bottom 8x8 block of the bottom macroblock of the base-layer pair is used as the reference index of the bottom 8x8 block of each macroblock of the pair 500. Also as shown, the motion vector of the topmost 4x4 block of the top macroblock of the base-layer frame macroblock pair is used in common as the motion vector of the topmost 4x4 block of each macroblock of the pair 500; the motion vector of the third 4x4 block of the top macroblock is used in common for the second 4x4 block of each macroblock of the pair 500; the motion vector of the topmost 4x4 block of the bottom macroblock is used in common for the third 4x4 block of each macroblock of the pair 500; and the motion vector of the third 4x4 block of the bottom macroblock is used in common for the fourth 4x4 block of each macroblock of the pair 500.
As shown in Fig. 5a, the top 4x4 block 501 and the bottom 4x4 block 502 within an 8x8 block of the field macroblock pair 500 constructed for inter-layer prediction use the motion vectors of 4x4 blocks belonging to different 8x8 blocks 511 and 512 of the base layer. These may be motion vectors using different reference pictures; that is, the different 8x8 blocks 511 and 512 may have different reference indices. Accordingly, in this case, in constructing the virtual base-layer macroblock pair 500, the EL encoder 20 makes the second 4x4 block 502 of the virtual base layer share the motion vector of the corresponding 4x4 block 503 selected for the top 4x4 block 501, as shown in Fig. 5b (521).
In the embodiment described with reference to Figs. 4a to 4f, in order to construct the virtual base-layer motion information for predicting the motion information of the current macroblock pair, the EL encoder 20 sequentially derives the partition modes, reference indices, and motion vectors based on the motion information of the corresponding base-layer macroblock pair. In the embodiment described with reference to Figs. 5a and 5b, however, the EL encoder 20 first derives the reference indices and motion vectors of the virtual base-layer macroblock pair based on the motion information of the corresponding base-layer macroblock pair, and then finally determines the partition modes of the virtual base-layer macroblock pair based on the derived values. When the partition mode is determined, 4x4 block units having the same derived motion vector and reference index are combined; if the resulting block mode is an allowed partition mode, the partition mode is set to the combined mode, and otherwise it is set to the mode before combination.
In the above embodiment, if both macroblocks of the corresponding base-layer macroblock pair 410 are in intra mode, intra-base prediction of the current macroblock pair 413 is performed and no motion prediction is performed. Naturally, no virtual base-layer macroblock pair is constructed in the texture prediction case. If only one macroblock of the corresponding base-layer pair 410 is in intra mode, the motion information of the inter macroblock is copied to the intra macroblock as shown in Fig. 4b, the motion vectors and reference index of the intra macroblock are set to 0 as shown in Fig. 4c, or the reference index of the intra macroblock is set by copying that of the inter macroblock while its motion vectors are set to 0 as shown in Fig. 4d. The motion information of the virtual base-layer macroblock pair is then derived as described above.
After constructing the virtual base-layer macroblock pair for inter-layer motion prediction as described above, the EL encoder 20 uses the motion information of the constructed pair to predict and code the motion information of the current field macroblock pair 413.
Inter-layer texture prediction will now be described. Fig. 4g shows an example inter-layer texture prediction method for the "frame MB -> field MB in MBAFF frame" case. The EL encoder 20 identifies the block modes of the corresponding base-layer frame macroblock pair 410. If the two macroblocks of the pair 410 are both in intra mode or both in inter mode, the EL encoder 20 converts the corresponding base-layer macroblock pair 410 into a temporary field macroblock pair 421 in the manner described below, and performs intra-base prediction of the current field macroblock pair 413 (when both frame macroblocks 410 are in intra mode) or residual prediction of the pair (when both frame macroblocks 410 are in inter mode). When both macroblocks of the corresponding pair 410 are in intra mode, the temporary field macroblock pair 421 contains data that has been deblocked (that is, filtered for deblocking) after decoding is completed, as described above for intra mode. The same applies, in the description of the various embodiments below, to temporary macroblock pairs derived from base-layer macroblocks for texture prediction.
However, inter-layer texture prediction is not performed when only one of the two macroblocks is in inter mode. A base-layer macroblock pair 410 used for inter-layer texture prediction has unencoded original image data (or decoded image data) when its macroblocks are in intra mode, and encoded residual data (or decoded residual data) when they are in inter mode. The same applies to base-layer macroblock pairs in the description of texture prediction below.
Fig. 4h shows a method for converting a frame macroblock pair into a field macroblock pair to be used for inter-layer texture prediction. As shown, the even lines of the frame macroblock pair A and B are sequentially selected to construct a top-field macroblock A', and the odd lines of the pair A and B are sequentially selected to construct a bottom-field macroblock B'. When a field macroblock is filled with lines, it is filled first with the even (or odd) lines of the top block A (A_even or A_odd), and then with the even (or odd) lines of the bottom block B (B_even or B_odd).
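The conversion of Fig. 4h is a pure line-reordering and can be sketched with array slicing, assuming each macroblock is a 16x16 luma array so that the pair A stacked on B covers a 32x16 region; the function name is a placeholder.

```python
import numpy as np

# Build the top-field macroblock A' from the even lines of the frame
# macroblock pair (A, B) and the bottom-field macroblock B' from the odd lines.
def frame_pair_to_field_pair(mb_a, mb_b):
    pair = np.vstack([mb_a, mb_b])   # 32x16 region covered by the pair
    top_field = pair[0::2]           # even lines: A's first, then B's
    bottom_field = pair[1::2]        # odd lines: A's first, then B's
    return top_field, bottom_field

a = np.zeros((16, 16), dtype=np.uint8)   # toy content: A all 0, B all 1
b = np.ones((16, 16), dtype=np.uint8)
a_f, b_f = frame_pair_to_field_pair(a, b)
assert a_f.shape == (16, 16) and b_f.shape == (16, 16)
assert (a_f[:8] == 0).all() and (a_f[8:] == 1).all()  # A's lines fill first, then B's
```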
II. Frame MB -> Field MB in Field Picture
In this case, the macroblock in the current layer is a field macroblock coded in a field picture, and the base-layer macroblocks to be used for inter-layer prediction of the current-layer macroblock are coded as frame macroblocks. The video signal components contained in the base-layer macroblock pair are the same as those contained in the co-located macroblock in the even or odd field of the current layer. Inter-layer motion prediction is described first.
The EL encoder 20 uses, as the partition mode of the even or odd macroblock of the virtual base layer, the macroblock partition mode obtained by merging the base-layer macroblock pair into a single macroblock (by compressing it to half size in the vertical direction). Fig. 6a shows a detailed example of this process. As shown, the corresponding base-layer macroblock pair 610 is first merged into a single macroblock 611 (S61), and the partition mode obtained by this merging is applied to the virtual base-layer macroblock to be used for inter-layer motion prediction of the current macroblock 613 (S62). The merging rules are the same as in the previous Case I, as is the handling when at least one macroblock of the corresponding pair 610 is intra-coded.
The procedures for deriving reference indices and motion vectors are also performed in the same way as described for Case I above. In Case I the same derivation procedure is applied to both the top and the bottom macroblock, since the even and odd macroblocks are both carried in one frame. This Case II differs from Case I in that the derivation procedure is applied to only one field macroblock, as shown in Figs. 6b and 6c, since the current field picture to be coded/decoded contains only one macroblock corresponding to the base-layer macroblock pair 610.
In the above embodiment, in order to predict the motion information of the virtual base-layer macroblock, the EL encoder 20 sequentially derives its partition mode, reference indices, and motion vectors based on the motion information of the corresponding base-layer macroblock pair.
In another embodiment of the present invention, the EL encoder 20 first derives the reference indices and motion vectors of the virtual base-layer macroblock based on the motion information of the corresponding base-layer macroblock pair, and then finally determines the block mode of the virtual base-layer macroblock based on the derived values. Figs. 7a and 7b schematically show the derivation of the reference indices and motion vectors of the virtual base-layer field macroblock. The derivation operations in this case are similar to those of Case I described with reference to Figs. 5a and 5b, except that the motion information of only the top or only the bottom macroblock is derived from the motion information of the base-layer macroblock pair.
When the partition mode is finalized, 4x4 block units having the same derived motion vector and reference index are combined; if the resulting block mode is an allowed partition mode, the partition mode is set to the combined mode, and otherwise it is set to the mode before combination.
In the above embodiment, if both macroblocks of the corresponding base-layer macroblock pair are in intra mode, no motion prediction is performed and no virtual base-layer motion information is constructed; if only one of the two macroblocks is in intra mode, motion prediction is performed as described previously.
Inter-layer texture prediction will now be described. Fig. 6d shows an example inter-layer texture prediction method for the "frame MB -> field MB in field picture" case. The EL encoder 20 identifies the block modes of the corresponding base-layer macroblock pair 610. If the two macroblocks of the pair are both in intra mode or both in inter mode, the EL encoder 20 constructs a single temporary field macroblock 621 from the frame macroblock pair 610. If the current macroblock 613 belongs to an even-field picture, the EL encoder 20 constructs the temporary field macroblock 621 from the even lines of the corresponding pair 610; if the current macroblock 613 belongs to an odd-field picture, it constructs the temporary field macroblock 621 from the odd lines of the pair 610. The construction method is similar to that for constructing the single field macroblock A' or B' in Fig. 4h.
Once the temporary field macroblock 621 has been constructed, the EL encoder 20 performs intra-base prediction of the current field macroblock 613 based on the texture information in the field macroblock 621 (when both macroblocks of the corresponding pair 610 are in intra mode), or performs residual prediction of it (when both macroblocks of the pair 610 are in inter mode).
If only one macroblock of the corresponding pair 610 is in inter mode, the EL encoder 20 does not perform inter-layer texture prediction.
III. Field MB in MBAFF Frame -> Frame MB
In this case, the macroblock in the current layer is coded as a frame macroblock, and the base-layer macroblock to be used for inter-layer prediction of the current-layer frame macroblock is a field macroblock coded in an MBAFF frame. The video signal components contained in the base-layer field macroblock are the same as those contained in the co-located macroblock pair of the current layer. Inter-layer motion prediction is described first.
The EL encoder 20 uses, as the partition modes of the virtual base-layer macroblock pair, the macroblock partition mode obtained by extending the top or bottom macroblock of the base-layer macroblock pair (stretching it to twice its size in the vertical direction). Fig. 8a shows a specific example of this process. Although the top field macroblock is selected in the following description and drawings, the description applies equally when the bottom field macroblock is selected.
As shown in Fig. 8a, the top field macroblock of the corresponding base-layer pair 810 is extended to twice its size to construct two macroblocks 811 (S81), and the partition modes obtained by the extension are applied to the virtual base-layer macroblock pair 812 (S82).
However, when the corresponding field macroblock is extended to twice its size in the vertical direction, partitions (or modes) not allowed by any macroblock partition mode may be created. To prevent this, the EL encoder 20 determines the partition mode from the extended partitions according to the following rules.
1) A 4x4, 8x4, or 16x8 block of the base layer is determined, after extension, as the 4x8, 8x8, or 16x16 block obtained by enlarging it twofold in the vertical direction.
2) A 4x8, 8x8, or 16x16 block of the base layer is determined, after extension, as two blocks of the same size, one above the other. As shown in Fig. 8a, an 8x8 block B8_0 of the base layer is determined as two 8x8 blocks (801). The reason the 8x8 block B8_0 is not made into an 8x16 block after extension is that the extended block adjoining it on the left or right may not be partitioned as 8x16, and no macroblock partition mode supports such a case.
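Again assuming (width, height) tuples in luma samples, the two extension rules can be sketched as a single mapping from one partition of the selected field macroblock onto the doubled-height virtual frame macroblock pair; the function name is a placeholder.

```python
# Map one field-macroblock partition onto the vertically doubled frame pair.
def extend_partition(width, height):
    if (width, height) in [(4, 4), (8, 4), (16, 8)]:
        return [(width, height * 2)]               # rule 1: stretch vertically
    if (width, height) in [(4, 8), (8, 8), (16, 16)]:
        return [(width, height), (width, height)]  # rule 2: two stacked same-size blocks
    raise ValueError("not a legal source partition")

assert extend_partition(16, 8) == [(16, 16)]
assert extend_partition(8, 8) == [(8, 8), (8, 8)]   # B8_0 becomes two 8x8 blocks (801)
```

Note that rule 2 deliberately avoids producing an 8x16 block, for the neighbour-compatibility reason given in the text.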
If respective macroblock to have in 810 a macro block be within pattern-coding, then EL encoder 20 is not select internal schema but select the top of inter mode or field, end macro block, and to its perform above expansion process with determine the macro block in virtual base layer to 812 partition mode.
If corresponding macro block is all internal schema to two macro blocks in 810, then EL encoder 20 performs inter-layer texture prediction, and does not perform the partition mode undertaken by above expansion process and determine and reference key described below and motion vector derivation.
In order to the reference key that the macro block of the reference key derivation virtual base layer from respective fields macro block is right, the reference key of the corresponding 8x8 block B8_0 of basic unit is defined as the reference key of each in this top and bottom two 8x8 blocks by EL encoder 20, as shown in Figure 8 b, and by the reference key of each 8x8 block determined divided by 2 to obtain its final reference key.The reason making this division is to be applied to frame sequence, needs frame numbers to reduce half, because the number of reference pictures of field macro block is arranged based on the picture being divided into the strange field of even summation.
When deriving the motion vectors of the frame macroblock pair 812 of the virtual base layer, the EL encoder 20 determines the motion vector of the corresponding 4x4 block of the base layer as the motion vector of a 4x8 block in the macroblock pair 812 of the virtual base layer, as shown in Fig. 8c, and multiplies the vertical component of the determined motion vector by 2 to obtain the final motion vector. This multiplication is made because the image content contained in one field macroblock corresponds to that of two frame macroblocks, so that the picture size is doubled in the vertical direction.
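The vertical scaling of the motion vector can be sketched as follows, assuming a simple (x, y) component pair.

```python
def derive_frame_mv(field_mv):
    """Derive the motion vector of a 4x8 block of the virtual-base-layer
    frame macroblock pair from the corresponding base-layer 4x4 field
    block: the horizontal component is kept and the vertical component
    is doubled, since one field covers half the vertical extent of the
    frame."""
    mv_x, mv_y = field_mv
    return (mv_x, 2 * mv_y)
```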
In the above embodiment, in order to predict the motion information of the macroblock pair of the virtual base layer, the EL encoder 20 sequentially derives the partition modes, reference indices and motion vectors of the macroblock pair based on the motion information of the respective field macroblock of the base layer.
In another embodiment of the present invention, when deriving the motion information of the macroblock pair of the virtual base layer to be used for inter-layer prediction of the current macroblock pair, the EL encoder 20 first obtains the reference indices and motion vectors of the macroblock pair of the virtual base layer based on the motion information of the respective field macroblock of the base layer, and then finally determines the block mode of each macroblock of the macroblock pair of the virtual base layer based on the obtained values, as illustrated in Fig. 9a. When the partition mode is finalized, 4x4 block units having the same derived motion vector and reference index are combined; if the combined block mode is an allowed partition mode, the partition mode is set to the combined mode, and otherwise to the mode before combination.
The following is a more detailed description of the embodiment of Fig. 9a. As shown in the figure, an inter-mode field macroblock of the base layer is selected, and the reference indices and motion vectors of the frame macroblock pair of the virtual base layer to be used for motion prediction of the current macroblock pair are derived using the motion vectors and reference indices of the selected macroblock. If both macroblocks are in inter mode, either one of the top and bottom macroblocks is arbitrarily selected (901 or 902), and the motion vector and reference index information of the selected macroblock is used. As shown in the figure, to derive the reference indices, the corresponding value of the top 8x8 block of the selected macroblock is copied to the reference indices of the top and bottom 8x8 blocks of the top macroblock of the virtual base layer, and the corresponding value of the bottom 8x8 block of the selected macroblock is copied to the reference indices of the top and bottom 8x8 blocks of the bottom macroblock of the virtual base layer. As shown in the figure, to derive the motion vectors, the corresponding value of each 4x4 block of the selected macroblock is shared so that it serves as the motion vector of a pair of vertically adjoining 4x4 blocks of the macroblock pair of the virtual base layer. In another embodiment of the present invention, the motion information of both macroblocks of the respective macroblock pair of the base layer may be mixed and used to derive the motion vectors and reference indices of the frame macroblock pair of the virtual base layer, unlike the embodiment shown in Fig. 9a. Fig. 9b illustrates the procedure for deriving the motion vectors and reference indices according to this embodiment. A detailed description of how the reference indices and motion vectors are copied to the sub-blocks (8x8 blocks and 4x4 blocks) of the macroblock pair of the virtual base layer is omitted here, because it can be understood intuitively from the above description of the motion information derivation procedure and the illustration of Fig. 9b.
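The derivation from one selected field macroblock can be sketched as follows. The data layout is an assumption for illustration: the four 8x8 reference indices are given in raster order, and the motion vectors as four rows of 4x4 blocks.

```python
def derive_pair_from_selected(ref_idx, mv_rows):
    """One selected inter-mode field macroblock supplies the whole
    virtual-base-layer macroblock pair (Fig. 9a style derivation).

    ref_idx: 4 reference indices of the 8x8 blocks in raster order
             (top-left, top-right, bottom-left, bottom-right).
    mv_rows: 4 rows of 4x4-block motion vectors, top to bottom.
    """
    # Top 8x8 row of the selected MB -> both 8x8 rows of the top MB;
    # bottom 8x8 row -> both 8x8 rows of the bottom MB.
    top_mb_ref = [ref_idx[0], ref_idx[1]] * 2
    bot_mb_ref = [ref_idx[2], ref_idx[3]] * 2
    # Each field 4x4 row is shared by two vertically adjoining frame rows.
    top_mb_mv = [mv_rows[0], mv_rows[0], mv_rows[1], mv_rows[1]]
    bot_mb_mv = [mv_rows[2], mv_rows[2], mv_rows[3], mv_rows[3]]
    return (top_mb_ref, top_mb_mv), (bot_mb_ref, bot_mb_mv)
```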
However, since the motion information of both macroblocks of the field macroblock pair of the base layer is used in the embodiment of Fig. 9b, if one macroblock of the field macroblock pair of the base layer is in intra mode, motion information for the intra-mode macroblock is derived from the motion information of the other, inter-mode macroblock. Specifically, the motion vectors and reference indices of the intra-mode macroblock may be constructed by copying the corresponding information of the inter-mode macroblock to the intra-mode macroblock as shown in Fig. 4b, or the intra-mode macroblock may be regarded as an inter-mode macroblock having zero motion vectors and reference index 0 as shown in Fig. 4c, or the reference indices of the intra-mode macroblock may be set by copying the reference indices of the inter-mode macroblock to it while its motion vectors are set to 0 as shown in Fig. 4d; thereafter the motion vectors and reference index information of the macroblock pair of the virtual base layer are derived as shown in Fig. 9b. Once the motion vectors and reference index information of the macroblock pair of the virtual base layer have been derived, the block modes of the macroblock pair are determined based on the derived information, as discussed previously.
On the other hand, if both macroblocks of the respective field macroblock pair of the base layer are in intra mode, motion prediction is not performed.
Inter-layer texture prediction will now be described. Fig. 8d illustrates an example inter-layer texture prediction method for the case "field MB -> frame MB in MBAFF frame". The EL encoder 20 identifies the block modes of the respective field macroblock pair 810 of the base layer. If both macroblocks of the respective field macroblock pair 810 are in intra mode, or both are in inter mode, the EL encoder 20 converts the respective field macroblock pair 810 of the base layer into a temporary frame macroblock pair 821 in the manner described below, in order to perform intra-base prediction of the current frame macroblock pair 813 (when both macroblocks 810 are in intra mode) or residual prediction thereof (when both macroblocks 810 are in inter mode). When both macroblocks of the respective macroblock pair 810 are in intra mode, the macroblock pair 810 contains decoded data, and a deblocking filter is applied to the frame macroblock pair 821, as discussed previously. Fig. 8e illustrates a method for converting a field macroblock pair into a frame macroblock pair. As shown in the figure, the rows of the macroblock pair A and B are alternately selected in sequence from the top of each macroblock (A->B->A->B->A->...), and are then sequentially arranged from the top in the selected order to construct a frame macroblock pair A' and B'. Because the rows of the field macroblock pair are regrouped in this way, the top frame macroblock A' is constructed from the rows of the upper halves of the field macroblocks A and B, and the bottom frame macroblock B' is constructed from the rows of the lower halves.
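The row interleaving of Fig. 8e can be sketched as follows, assuming each macroblock is represented simply as a list of its 16 pixel rows.

```python
def fields_to_frame_pair(field_a, field_b):
    """Convert a field macroblock pair (A, B) into a frame macroblock
    pair (A', B') by alternately selecting rows A->B->A->B->... from
    the top and re-stacking them. Each macroblock is a list of 16 rows,
    so A' receives the upper halves of A and B, and B' the lower
    halves."""
    interleaved = [row for pair in zip(field_a, field_b) for row in pair]
    return interleaved[:16], interleaved[16:]  # (A', B')
```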
On the other hand, if only one macroblock of the respective field macroblock pair 810 of the base layer is in inter mode, one block is selected from the macroblock pair 810 of the base layer according to the block modes of the current frame macroblock pair 813, and the selected block is used for inter-layer texture prediction. Alternatively, before the block modes of the current frame macroblock pair 813 are determined, each method described below may first be applied to perform inter-layer prediction, and the block modes of the macroblock pair 813 may then be determined.
Figs. 8f and 8g illustrate examples in which one block is selected for inter-layer prediction. When the current frame macroblock pair 813 is coded in inter mode (or inter-mode prediction is performed on it), the inter-mode block 810a is selected from the field macroblock pair 810 of the base layer, as shown in Fig. 8f, and the selected block is upsampled in the vertical direction to create two corresponding macroblocks 831. The two macroblocks 831 are then used for residual prediction of the current frame macroblock pair 813. When the current frame macroblock pair 813 is not coded in inter mode (or intra-mode prediction is performed on it), the intra-mode block 810b is selected from the field macroblock pair 810 of the base layer, as shown in Fig. 8g, and the selected block is upsampled in the vertical direction to create two corresponding macroblocks 841. After a deblocking filter is applied to the two macroblocks 841, they are used for intra-base prediction of the current frame macroblock pair 813.
The method shown in Figs. 8f and 8g, in which one block is selected and upsampled to create the macroblock pair to be used for inter-layer texture prediction, is also applicable when the layers have different picture rates. When the picture rate of the enhancement layer is higher than that of the base layer, some pictures in the picture sequence of the enhancement layer may have no temporally corresponding picture in the base layer. Inter-layer texture prediction of a frame macroblock pair in an enhancement-layer picture having no temporally corresponding base-layer picture can be performed using one of the spatially co-located pair of field macroblocks in a temporally preceding base-layer picture.
Fig. 8h illustrates an example of this method for the case where the picture rate of the enhancement layer is twice the picture rate of the base layer.
As shown in the figure, the picture rate of the enhancement layer is twice that of the base layer. Therefore, one of every two pictures of the enhancement layer, for example the picture whose picture order count (POC) is "n2", has no base-layer picture with the same picture order count (POC). Here, an identical POC indicates temporal coincidence.
When there is no temporally coincident picture in the base layer (for example, when the current POC is n2), the bottom field macroblock 802 of a pair of spatially co-located field macroblocks in the previous base-layer picture (that is, the picture whose POC is one less than the current POC) is vertically upsampled to create a temporary macroblock pair 852 (S82), and this temporary macroblock pair 852 is then used to perform inter-layer texture prediction of the current macroblock pair 815. When there is a temporally coincident picture in the base layer (for example, when the current POC is n1), the top field macroblock 801 of the pair of spatially co-located field macroblocks in that temporally coincident picture is vertically upsampled to create a temporary macroblock pair 851 (S82), and this temporary macroblock pair 851 is then used to perform inter-layer texture prediction of the current macroblock pair 814. When the temporary macroblock pair 851 or 852 created by upsampling is obtained from a macroblock decoded in intra mode, a deblocking filter is applied to the macroblock pair before it is used for inter-layer texture prediction.
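The POC-based selection of Fig. 8h can be sketched as follows; the mapping from POC to co-located field macroblock pair is an assumed representation for illustration.

```python
def choose_base_field_mb(cur_poc, base_pocs, colocated_pair):
    """For a doubled enhancement-layer picture rate (Fig. 8h): if a
    base-layer picture with the same POC exists, the top field
    macroblock (801) of its co-located pair is taken for upsampling;
    otherwise the bottom field macroblock (802) of the previous
    (POC - 1) base-layer picture is taken.

    colocated_pair maps a base-layer POC to its co-located
    (top_field_mb, bottom_field_mb) tuple."""
    if cur_poc in base_pocs:
        return colocated_pair[cur_poc][0]   # top field MB (801)
    return colocated_pair[cur_poc - 1][1]   # bottom field MB (802)
```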
In another embodiment of the present invention, when there is a temporally coincident picture in the base layer (when the current POC in the example of Fig. 8h is n1), the frame macroblock pair may be created from the field macroblock pair according to the embodiment of Fig. 8d rather than by the method shown in Fig. 8h, and then used for inter-layer texture prediction. Further, when the current picture has no temporally coincident picture in the base layer (when the current POC in the example of Fig. 8h is n2), inter-layer texture prediction may be performed as in Fig. 8h, or inter-layer texture prediction may not be performed on the macroblocks of the current picture.
Accordingly, an embodiment of the present invention assigns a flag "field_base_flag" to indicate whether inter-layer texture prediction is performed according to the method of Fig. 8d or according to the method of Fig. 8h, and incorporates this flag into the coded information. For example, the flag is set to "0" when texture prediction is performed according to the method of Fig. 8d, and to "1" when it is performed according to the method of Fig. 8h. The flag is defined in the sequence parameter set, the sequence parameter set scalable extension, the picture parameter set, the picture parameter set scalable extension, the slice header, the slice header scalable extension, the macroblock layer, or the macroblock layer scalable extension of the enhancement layer to be transmitted to the decoder.
IV. Case of field MB -> frame MB in a field picture
In this case, the macroblock in the current layer (EL) is coded as a frame macroblock, and the macroblock in the base layer (BL) to be used for inter-layer prediction of the frame macroblock of the current layer is a field macroblock coded in a field picture. The video signal components contained in the field macroblock of the base layer are the same as those contained in a pair of co-located macroblocks in the current layer. Inter-layer motion prediction is described first.
The EL encoder 20 uses the partition modes obtained by extending the macroblock in the even or odd field of the base layer (extending it to twice its size in the vertical direction) as the partition modes of the macroblocks of the virtual base layer. Fig. 10a illustrates a specific example of this process. The procedure shown in Fig. 10a differs from the procedure of case III, in which a top or bottom field macroblock in an MBAFF frame is selected, in that the spatially co-located field macroblock 1010 in the even or odd field is used as it is; it is similar to the procedure of case III in that the co-located field macroblock 1010 is extended and the partition modes of the two macroblocks obtained by the extension are applied to the macroblock pair 1012 of the virtual base layer. When the respective field macroblock 1010 is extended to twice its size in the vertical direction, a partition mode that is not allowed among the macroblock partition modes may be created. To prevent this, the EL encoder 20 determines the partition modes from the extended partition modes according to the same rules 1) and 2) as suggested for case III.
If the corresponding macroblock is coded in intra mode, the EL encoder 20 performs only inter-layer texture prediction, without performing the partition mode determination by the above extension process or the reference index and motion vector derivation described below. That is, the EL encoder 20 does not perform inter-layer motion prediction.
The reference index and motion vector derivation procedures are also similar to those described for case III above. However, this case IV differs from case III in the following respect. In case III, since the corresponding base-layer macroblocks are carried in a frame as an even and odd macroblock pair, one of the top and bottom macroblocks is selected and applied to the derivation procedure. In this case IV, since the base layer contains only one macroblock corresponding to the current macroblock pair to be coded, the motion information of the macroblock pair 1012 of the virtual base layer is derived from the motion information of the respective field macroblock without the macroblock selection procedure, as illustrated in Figs. 10b and 10c, and the derived motion information is used for inter-layer motion prediction of the current macroblock pair 1013.
Fig. 11 schematically shows the derivation of the reference indices and motion vectors of the macroblock pair of the virtual base layer according to another embodiment of the present invention. In this case, the motion information of the macroblock pair of the virtual base layer is derived from the motion information of the even or odd field macroblock of the base layer; this differs from the case described above with reference to Fig. 9a. The same derivation operations as in the case of Fig. 9a are applicable to this case. However, the process of mixing and using the motion information of a macroblock pair, shown in Fig. 9b, is not applicable in this case IV, because there is no pair of top and bottom macroblocks in the respective field of the base layer.
In the embodiment described with reference to Figs. 10a to 10c, in order to predict the motion information of the macroblock pair of the virtual base layer, the EL encoder 20 sequentially derives the partition modes, reference indices and motion vectors based on the motion information of the respective field macroblock of the base layer. In the other embodiment of Fig. 11, however, the EL encoder 20 first derives the reference indices and motion vectors of the macroblock pair of the virtual base layer based on the motion information of the respective macroblock of the base layer, and then finally determines the partition modes of the macroblock pair of the virtual base layer based on the derived values. When the partition modes are determined, 4x4 block units having the same derived motion vector and reference index are combined; if the combined block mode is an allowed partition mode, the partition mode is set to the combined mode, and otherwise to the mode before combination.
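The merge-then-validate step for finalizing partition modes can be sketched as follows for a single vertical merge; the ((width, height), mv, ref_idx) representation and the allowed-mode set are assumptions for illustration.

```python
# Assumed set of legal partition shapes for this sketch.
ALLOWED_MODES = {(4, 4), (4, 8), (8, 4), (8, 8), (8, 16), (16, 8), (16, 16)}

def merge_vertical(u_top, u_bot):
    """Try to stack two equal-size partition units, each given as
    ((width, height), motion_vector, ref_idx). They are combined only
    when they carry identical derived motion vector and reference index
    AND the combined shape is an allowed partition mode; otherwise the
    pre-merge modes are kept."""
    (w, h), mv, ref = u_top
    same_info = u_top[0] == u_bot[0] and u_top[1:] == u_bot[1:]
    merged = (w, 2 * h)
    if same_info and merged in ALLOWED_MODES:
        return [(merged, mv, ref)]  # combined mode is allowed
    return [u_top, u_bot]           # fall back to the mode before combining
```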
When texture prediction is performed in the above embodiments, if the respective field macroblock of the base layer is in intra mode, intra-base prediction coding is performed on the current macroblock. If the respective field macroblock is in inter mode and the current macroblock is coded in inter mode, inter-layer residual prediction coding is performed. Here, of course, the field macroblock used for prediction is vertically upsampled before being used for texture prediction.
In another embodiment of the present invention, a virtual macroblock is created from the field macroblock contained in the odd or even field to construct a macroblock pair, and the motion information of the macroblock pair of the virtual base layer is then derived from the constructed macroblock pair. Figs. 12a and 12b illustrate examples of this embodiment.
In this embodiment, the reference indices and motion vectors of the corresponding even (or odd) field macroblock of the base layer are copied (1201 and 1202) to create a virtual odd (or even) field macroblock, thereby constructing a macroblock pair 1211, and the motion information of the constructed macroblock pair 1211 is mixed to derive the motion information of the macroblock pair 1212 of the virtual base layer (1203 and 1204). In an exemplary method of mixing and using the motion information, as shown in Figs. 12a and 12b, the reference index of the top 8x8 block of the corresponding top macroblock is applied to the top 8x8 block of the top macroblock of the macroblock pair 1212 of the virtual base layer, the reference index of its bottom 8x8 block is applied to the top 8x8 block of the bottom macroblock, the reference index of the top 8x8 block of the corresponding bottom macroblock is applied to the bottom 8x8 block of the top macroblock of the macroblock pair 1212 of the virtual base layer, and the reference index of its bottom 8x8 block is applied to the bottom 8x8 block of the bottom macroblock (1203). The motion vectors are applied in accordance with the reference indices (1204). A description of this process is omitted here, because it can be understood intuitively from Figs. 12a and 12b.
In the embodiment shown in Figs. 12a and 12b, the partition modes of the macroblock pair 1212 of the virtual base layer are determined based on the derived reference indices and motion vectors, using the same method as described above.
Inter-layer texture prediction will now be described. Fig. 10b illustrates an example inter-layer texture prediction method for the case "field MB -> frame MB in a field picture". The EL encoder 20 first upsamples the respective field macroblock 1010 of the base layer to create two temporary macroblocks 1021. If the respective field macroblock 1010 is in intra mode, the EL encoder 20 applies a deblocking filter to the two created temporary macroblocks 1021, and then performs intra-base prediction of the current frame macroblock pair 1013 based on the two temporary macroblocks 1021. If the respective field macroblock 1010 is in inter mode, the EL encoder 20 performs residual prediction of the current frame macroblock pair 1013 based on the two created temporary macroblocks 1021.
V. Case of field MB -> field MB
This case is subdivided into the following four cases, because field macroblocks are divided into field macroblocks contained in field pictures and field macroblocks contained in MBAFF frames.
i) Case where both the base layer and the enhancement layer contain MBAFF frames
This case is shown in Fig. 13a. As shown in the figure, the motion information (partition modes, reference indices and motion vectors) of the respective macroblock pair of the base layer is copied directly to the macroblock pair of the virtual base layer and used as the motion information of the macroblock pair of the virtual base layer. Here, the motion information is copied between macroblocks having the same parity. Specifically, the motion information of an even field macroblock is copied to an even field macroblock, and the motion information of an odd field macroblock is copied to an odd field macroblock, to construct the macroblock of the virtual layer used for motion prediction of the macroblock of the current layer.
When texture prediction is performed, the known method of inter-layer texture prediction between frame macroblocks is applied.
ii) Case where the base layer contains field pictures and the enhancement layer contains MBAFF frames
This case is shown in Fig. 13b. As shown in the figure, the motion information (partition modes, reference indices and motion vectors) of the respective field macroblock of the base layer is copied directly to each macroblock of the macroblock pair of the virtual base layer and used as the motion information of each macroblock of the pair. Here, the same-parity copy rule does not apply, because the motion information of a single field macroblock is used for both the top and bottom field macroblocks.
When texture prediction is performed, intra-base prediction (when the corresponding block of the base layer is in intra mode) or residual prediction (when the corresponding block of the base layer is in inter mode) is applied between the enhancement-layer and base-layer macroblocks having the same (even or odd) field attribute.
iii) Case where the base layer contains MBAFF frames and the enhancement layer contains field pictures
This case is shown in Fig. 13c. As shown in the figure, the field macroblock having the same parity as the current field macroblock is selected from the corresponding base-layer macroblock pair, and the motion information (partition modes, reference indices and motion vectors) of the selected field macroblock is copied directly to the field macroblock of the virtual base layer and used as its motion information.
When texture prediction is performed, intra-base prediction (when the corresponding block of the base layer is in intra mode) or residual prediction (when the corresponding block of the base layer is in inter mode) is applied between the enhancement-layer and base-layer macroblocks having the same (even or odd) field attribute.
iv) Case where both the base layer and the enhancement layer contain field pictures
This case is shown in Fig. 13d. As shown in the figure, the motion information (partition modes, reference indices and motion vectors) of the respective field macroblock of the base layer is copied directly to the field macroblock of the virtual base layer and used as its motion information. In this case as well, the motion information is copied between macroblocks having the same parity.
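The four copy rules of cases i) to iv) can be summarized in one sketch; the string tags and argument layout are an assumed representation, not any standard's syntax.

```python
def derive_field_mb_motion(base_type, enh_type, base_even, base_odd, cur_parity):
    """Direct copy of motion information (partition mode, reference
    index, motion vector) for the four field-MB-to-field-MB cases,
    with parity handling depending on whether each layer uses MBAFF
    frames ("mbaff") or field pictures ("field")."""
    if base_type == "mbaff" and enh_type == "mbaff":    # case i)
        return (base_even, base_odd)       # same-parity copy, both fields
    if base_type == "field" and enh_type == "mbaff":    # case ii)
        single = base_even if base_even is not None else base_odd
        return (single, single)            # one field MB feeds both fields
    # cases iii) and iv): one enhancement field MB, same-parity source
    return base_even if cur_parity == "even" else base_odd
```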
When texture prediction is performed, the known method of inter-layer texture prediction between frame macroblocks is applied.
The above description of inter-layer prediction has been given for the case where the base layer and the enhancement layer have equal resolutions. The following description concerns how, when the resolution of the enhancement layer is higher than that of the base layer (that is, when SpatialScalabilityType() is greater than 0), the picture types of the layers (progressive frames, MBAFF frames or interlaced fields) and/or the types of the macroblocks in the pictures are identified, and how the inter-layer prediction methods are applied according to the identified types. Inter-layer motion prediction is described first.
M_A). Base layer (progressive frame) -> enhancement layer (MBAFF frame)
Fig. 14a illustrates the processing method for this case. As shown in the figure, first, the motion information of all macroblocks of the corresponding frame in the base layer is copied to create a virtual frame. Upsampling is then performed. In this upsampling, interpolation is performed using the texture information of the base-layer picture, at an interpolation ratio that makes the resolution (or picture size) of the picture equal to the resolution of the current layer. Further, the motion information of each macroblock of the picture enlarged by the interpolation is constructed based on the motion information of each macroblock of the virtual frame. One of several known methods can be used for this construction. The picture of the interim base layer constructed in this way has the same resolution as the picture of the current (enhancement) layer. Accordingly, the inter-layer motion prediction described above can be applied in this case.
In this case (Fig. 14a), since the base layer contains frames and the current layer contains MBAFF frames, the macroblocks in the pictures of the base layer and the current layer are frame macroblocks and field macroblocks in MBAFF frames, respectively. Accordingly, the method of case I described above is applied to perform inter-layer motion prediction. However, as mentioned above, an MBAFF frame may contain not only field macroblock pairs but also frame macroblock pairs. Accordingly, when the type of a current-layer macroblock pair corresponding to a macroblock pair in the picture of the interim base layer is identified as a frame macroblock type rather than a field macroblock type, the known method of motion prediction between frame macroblocks (the frame-to-frame prediction method), which includes simple copying of motion information, is applied.
M_B). Base layer (progressive frame) -> enhancement layer (interlaced field)
Fig. 14b illustrates the processing method for this case. As shown in the figure, first, the motion information of all macroblocks of the corresponding frame in the base layer is copied to create a virtual frame. Upsampling is then performed. In this upsampling, interpolation is performed using the texture information of the base-layer picture, at an interpolation ratio that makes the resolution of the picture equal to the resolution of the current layer. Further, the motion information of each macroblock of the picture enlarged by the interpolation is constructed based on the motion information of each macroblock of the created virtual frame.
The method of case II described above is applied to perform inter-layer motion prediction, because each macroblock of the picture of the interim base layer constructed in this way is a frame macroblock, while each macroblock of the current layer is a field macroblock in a field picture.
M_C). Base layer (MBAFF frame) -> enhancement layer (progressive frame)
Fig. 14c illustrates the processing method for this case. As shown in the figure, first, the corresponding MBAFF frame of the base layer is transformed into a progressive frame. The method of case III described above is applicable to transforming the field macroblock pairs of the MBAFF frame into a progressive frame, and the known frame-to-frame prediction method is applicable to transforming the frame macroblock pairs of the MBAFF frame. Of course, when the method of case III is applied in this case, it is used as an operation that creates the virtual frame and obtains the motion information of each of its macroblocks, without performing the actual inter-layer prediction coding of the difference between the predicted data and the data of the layer to be coded.
Once the virtual frame is obtained, upsampling is performed on it. In this upsampling, interpolation is performed at an interpolation ratio that makes the resolution of the base layer equal to the resolution of the current layer. Further, the motion information of each macroblock of the enlarged picture is constructed based on the motion information of each macroblock of the virtual frame, using one of several known methods. Here, the known method of inter-layer motion prediction between frame macroblocks is performed, because each macroblock of the picture of the interim base layer constructed in this way is a frame macroblock, and each macroblock of the current layer is also a frame macroblock.
M_D). Base layer (interlaced field) -> enhancement layer (progressive frame)
Fig. 14d illustrates one processing method for this case. In this case, the type of the picture is the same as the type of the macroblocks in the picture. As shown in the figure, first, the corresponding field of the base layer is transformed into a progressive frame. The transformed frame has the same vertical/horizontal (aspect) ratio as the picture of the current layer. The upsampling process and the method of case IV described above are applicable to transforming an interlaced field into a progressive frame. Of course, when the method of case IV is applied in this case, it is used as an operation that creates the texture data of the virtual frame and obtains the motion information of each of its macroblocks, without performing the actual inter-layer prediction coding of the difference between the predicted data and the data of the layer to be coded.
Once the virtual frame is obtained, upsampling is performed on it. In this upsampling, interpolation is performed so that the resolution of the virtual frame equals the resolution of the current layer. Further, the motion information of each macroblock of the interpolated picture is constructed based on the motion information of each macroblock of the virtual frame, using one of several known methods. Here, the known method of inter-layer motion prediction between frame macroblocks is performed, because each macroblock of the picture of the interim base layer constructed in this way is a frame macroblock, and each macroblock of the current layer is also a frame macroblock.
Fig. 14e illustrates a processing method for the above case M_D) according to another embodiment of the present invention. As shown in the figure, this embodiment transforms the corresponding odd or even field into a progressive frame. To transform the interlaced field into a progressive frame, the upsampling method of case IV described above is applied, as in Fig. 14d. Once the virtual frame is obtained, a method of motion prediction between pictures having the same aspect ratio, which is one of several known methods, is applied between the picture of the current layer and the virtual frame of the interim layer, to perform predictive coding of the motion information of each macroblock of the progressive picture of the current layer.
The method shown in Fig. 14e differs from that of Fig. 14d in that no interim prediction signal is generated.
Figure 14f illustrates a processing method for the above case M_D) according to another embodiment of the invention. As shown in the figure, this embodiment copies the motion information of all macroblocks of the corresponding field of the base layer to create a virtual picture. Upsampling is then performed. In this upsampling, the texture information of the base-layer picture is used, and different interpolation ratios are used for vertical and horizontal interpolation so that the enlarged picture has the same size (or resolution) as the picture of the current layer. In addition, one of a number of known prediction methods (for example, Extended Spatial Scalability (ESS)) can be applied to the virtual picture to construct various syntax information and motion information of the enlarged picture. The motion vectors constructed in this process are scaled according to the magnification ratio. Once the upsampled picture of the temporary base layer has been constructed, it is used to perform inter-layer motion prediction of each macroblock in the picture of the current layer, so as to code the motion information of each macroblock of the current-layer picture. Here, the known frame-macroblock-to-frame-macroblock inter-layer motion prediction method is applied.
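By way of illustration only, the motion-vector scaling mentioned above can be sketched as follows. The helper name and the sample picture sizes are assumptions of this sketch; the specification prescribes no particular implementation, and Python is used purely for exposition.

```python
def scale_motion_vector(mv, horiz_ratio, vert_ratio):
    """Scale a (dx, dy) motion vector by the ratios used when the
    base-layer picture is enlarged with different horizontal and
    vertical interpolation ratios."""
    dx, dy = mv
    return (dx * horiz_ratio, dy * vert_ratio)

# Hypothetical example: a 176x144 base-layer field enlarged to a
# 352x576 current-layer picture; the field lines need a larger
# vertical ratio than horizontal ratio.
horiz_ratio = 352 / 176   # 2.0
vert_ratio = 576 / 144    # 4.0
scaled = scale_motion_vector((3, -2), horiz_ratio, vert_ratio)
```

Each copied base-layer motion vector would thus be stretched component-wise before being used for inter-layer motion prediction of the current-layer macroblocks.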
Figure 14g illustrates a processing method for the above case M_D) according to another embodiment of the invention. As shown in the figure, this embodiment first copies the motion information of all macroblocks of the corresponding field of the base layer to create a virtual picture. Thereafter, interpolation is performed using the texture information of the base-layer picture, with different ratios for vertical and horizontal interpolation. The texture information created by this operation is used for inter-layer texture prediction. In addition, the motion information in the virtual picture is used for inter-layer motion prediction of each macroblock in the picture of the current layer. Here, one of a number of known methods (for example, Extended Spatial Scalability (ESS) as defined in the Joint Scalable Video Model (JSVM)) is applied to perform motion prediction coding of the picture of the current layer.
The method of Figure 14g differs from that of Figure 14f in that no interim prediction signal is generated.
M_E). Base layer (MBAFF frame) -> Enhancement layer (MBAFF frame)
Figure 14h illustrates a processing method for this case. As shown in the figure, first, the corresponding MBAFF frame of the base layer is converted into a progressive frame. To convert the MBAFF frame into the progressive frame, the method of case III described above is applied to the conversion of field macroblock pairs of the MBAFF frame, and a frame-to-frame prediction method is applied to the conversion of frame macroblock pairs of the MBAFF frame. Of course, when the method of case III is applied to this case, the virtual frame and the motion information of each of its macroblocks are created using data obtained without performing the operation of coding, through inter-layer prediction, the difference between the predicted data and the data of the layer actually to be coded.
Once the virtual frame is obtained, upsampling is performed on it. In this upsampling, interpolation is performed with an interpolation ratio that makes the resolution of the base layer equal to the resolution of the current layer. In addition, the motion information of each macroblock of the enlarged picture is constructed from the motion information of each macroblock of the virtual frame using one of a number of known methods. The method of case I described above is applied to perform inter-layer motion prediction, since each macroblock of the picture of the temporary base layer constructed in this way is a frame macroblock while each macroblock of the current layer is a field macroblock in an MBAFF frame. However, as described above, not only field macroblock pairs but also frame macroblock pairs may be included in the same MBAFF frame. Accordingly, when a current-layer macroblock pair corresponding to a macroblock pair in the picture of the temporary base layer is a frame macroblock pair rather than a field macroblock pair, a known method of motion prediction between frame macroblocks, including the copying of motion information (the frame-to-frame prediction method), is applied.
M_F). Base layer (MBAFF frame) -> Enhancement layer (interlaced field)
Figure 14i illustrates a processing method for this case. As shown in the figure, first, the corresponding MBAFF frame of the base layer is converted into a progressive frame. To convert the MBAFF frame into the progressive frame, the method of case III described above is applied to the conversion of field macroblock pairs of the MBAFF frame, and the frame-to-frame prediction method is applied to the conversion of frame macroblock pairs of the MBAFF frame. Of course, here too, when the method of case III is applied to this case, the virtual frame and the motion information of each of its macroblocks are created using data obtained without performing the operation of coding, through inter-layer prediction, the difference between the predicted data and the data of the layer actually to be coded.
Once the virtual frame is obtained, interpolation is performed on it with an interpolation ratio that makes its resolution equal to the resolution of the current layer. In addition, the motion information of each macroblock of the enlarged picture is constructed from the motion information of each macroblock of the virtual frame using one of a number of known methods. The method of case II described above is applied to perform inter-layer motion prediction, since each macroblock of the picture of the temporary base layer constructed in this way is a frame macroblock while each macroblock of the current layer is a field macroblock in an even or odd field.
M_G). Base layer (interlaced field) -> Enhancement layer (MBAFF frame)
Figure 14j illustrates a processing method for this case. As shown in the figure, first, the interlaced field of the base layer is converted into a progressive frame. The upsampling method and the method of case IV described above are applied to convert the interlaced field into the progressive frame. Of course, here too, when the method of case IV is applied to this case, the virtual frame and the motion information of each of its macroblocks are created using data obtained without performing the operation of coding, through inter-layer prediction, the difference between the predicted data and the data of the layer actually to be coded.
Once the virtual frame is obtained, upsampling is performed on it so that its resolution equals the resolution of the current layer. In addition, the motion information of each macroblock of the enlarged picture is constructed using one of a number of known methods. The method of case I described above is applied to perform inter-layer motion prediction, since each macroblock of the picture of the temporary base layer constructed in this way is a frame macroblock while each macroblock of the current layer is a field macroblock in an MBAFF frame. However, as described above, not only field macroblock pairs but also frame macroblock pairs may be included in the same MBAFF frame. Therefore, when a current-layer macroblock pair corresponding to a macroblock pair in the picture of the temporary base layer comprises frame macroblocks rather than field macroblocks, the known method of motion prediction between frame macroblocks (the frame-to-frame prediction method) is applied instead of the prediction method of case I described above.
M_H). Base layer (interlaced field) -> Enhancement layer (interlaced field)
Figure 14k illustrates a processing method for this case. As shown in the figure, first, the motion information of all macroblocks of the corresponding field in the base layer is copied to create a virtual field, and upsampling is then performed on this virtual field. This upsampling is performed with an upsampling ratio that makes the resolution of the base layer equal to the resolution of the current layer. In addition, the motion information of each macroblock of the enlarged picture is constructed from the motion information of each macroblock of the created virtual field using one of a number of known methods. The method of case iv) of the above case V is applied to perform inter-layer motion prediction, since each macroblock of the picture of the temporary base layer constructed in this way is a field macroblock in a field picture, and each macroblock of the current layer is also a field macroblock in a field picture.
Although in the description of the embodiments of Figures 14a to 14k the texture information of the virtual field or frame of the temporary layer, rather than the texture information of the base-layer picture, is used for upsampling, the texture information of the base-layer picture may also be used for the upsampling. In addition, the interpolation process using texture information in the upsampling procedure described above may be omitted if it is unnecessary, that is, when only the motion information of the picture of the temporary layer to be used for the inter-layer motion prediction performed in a subsequent stage is being derived.
On the other hand, although the description of texture prediction has been given for the case where the base layer and the enhancement layer have the same spatial resolution, the two layers may have different spatial resolutions as described above. When the resolution of the enhancement layer is higher than that of the base layer, first, an operation is performed to make the resolution of the base-layer picture equal to the resolution of the enhancement-layer picture, so as to create a base-layer picture having the same resolution as the enhancement layer, and the texture prediction method corresponding to each of the above cases I-V is selected and performed on the basis of each macroblock in this picture for predictive coding. A procedure for making the resolution of the base-layer picture equal to that of the enhancement-layer picture is now described in detail.
When two layers used for inter-layer prediction are considered, the number of combinations of picture formats (progressive and interlaced) used for coding the two layers is four, since there are two video-signal scanning methods, progressive scanning and interlaced scanning. Methods of increasing the resolution of the base-layer picture to perform inter-layer texture prediction are therefore described for each of these four cases.
T_A). The case where the enhancement layer is progressive and the base layer is interlaced
Figure 15a illustrates an embodiment of a method of using the base-layer picture for inter-layer texture prediction in this case. As shown in the figure, the base-layer picture 1501 temporally corresponding to the picture 1500 of the current (enhancement) layer comprises even and odd fields output at different times. Therefore, first, the EL encoder 20 separates the base-layer picture into the even and odd fields (S151). An intra-mode macroblock of the base-layer picture 1501 has original image data that has not been coded (or decoded image data) for intra-mode prediction, and an inter-mode macroblock has coded residual data (or decoded residual data) for residual prediction. The same is true for base-layer macroblocks or pictures in the description of texture prediction below.
After the corresponding picture 1501 is separated into field components, the EL encoder 20 performs interpolation of the separated fields 1501a and 1501b in the vertical and/or horizontal direction to create enlarged even and odd pictures 1502a and 1502b (S152). This interpolation uses one of a number of known methods, such as 6-tap filtering and bilinear filtering. The vertical and horizontal ratios by which the resolution (i.e., size) of a picture is increased through the interpolation are equal to the vertical and horizontal ratios of the size of the enhancement-layer picture 1500 to the size of the base-layer picture 1501. The vertical and horizontal ratios may be equal to each other. For example, if the resolution ratio between the enhancement layer and the base layer is 2, interpolation is performed on the separated even and odd fields 1501a and 1501b to create one more pixel between every two pixels in the vertical and horizontal directions in each field.
Once the interpolation is complete, the enlarged even and odd fields 1502a and 1502b are combined to construct a picture 1503 (S153). In this combination, rows of the enlarged even and odd fields 1502a and 1502b are alternately selected (1502a->1502b->1502a->1502b->...) and arranged in the selected order to construct the combined picture 1503. Here, the block mode of each macroblock in the combined picture 1503 is determined. For example, the block mode of a macroblock of the combined picture 1503 is determined to be equal to the block mode of a macroblock in the base-layer picture 1501 that includes a region with the same image content. This determination method can be applied in any of the cases of enlarged pictures described below. Since the combined picture 1503 constructed in this way has the same spatial resolution as the current picture 1500 of the enhancement layer, texture prediction of the macroblocks in the current progressive picture 1500 (for example, frame-macroblock-to-frame-macroblock texture prediction) is performed based on the corresponding macroblocks of the combined picture 1503 (S154).
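By way of illustration only, the separation, enlargement, and combination steps S151-S153 can be outlined as follows. The helper names are assumptions of this sketch, pictures are modelled as plain lists of rows, and simple row repetition stands in for the 6-tap or bilinear interpolation filters named above; the specification prescribes no implementation.

```python
def separate_fields(picture):
    """S151: separate an interlaced picture (a list of rows) into its
    even (top) and odd (bottom) fields by row parity."""
    return picture[0::2], picture[1::2]

def enlarge_vertically(field, ratio=2):
    """S152: vertical interpolation sketch; each field line is repeated
    in place of filtered interpolation."""
    return [row for row in field for _ in range(ratio)]

def combine_fields(even_field, odd_field):
    """S153: alternately take one row from the enlarged even field and
    one from the enlarged odd field (1502a -> 1502b -> 1502a -> ...)."""
    combined = []
    for even_row, odd_row in zip(even_field, odd_field):
        combined.append(even_row)
        combined.append(odd_row)
    return combined

# Toy 4-row base-layer picture: each row carries its original row index.
base_picture = [[0, 0], [1, 1], [2, 2], [3, 3]]
even_f, odd_f = separate_fields(base_picture)         # rows 0,2 and rows 1,3
combined = combine_fields(enlarge_vertically(even_f),
                          enlarge_vertically(odd_f))  # 8-row combined picture
```

The combined list then has the doubled vertical resolution of the enhancement-layer picture, with even-field and odd-field lines interleaved in the order stated above.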
Figure 15b illustrates a method of using the base-layer picture in inter-layer texture prediction according to another embodiment of the invention. As shown in the figure, this embodiment does not separate the base-layer picture on a field-parity basis, but directly performs interpolation, in the vertical and/or horizontal direction, of the base-layer picture including the even and odd fields output at different times (S155), so as to construct an enlarged picture whose resolution (i.e., size) is identical to that of the enhancement-layer picture. The enlarged picture constructed in this way is then used to perform inter-layer texture prediction of the current progressive picture of the enhancement layer (S156).
Figure 15a illustrates a procedure in which the picture having the even and odd fields is separated on a field-parity basis and interpolation is performed on it at the picture level. However, the EL encoder 20 can achieve the same result as shown in Figure 15a by performing the procedure of Figure 15a at the macroblock level. More specifically, when the base layer having the even and odd fields has been coded in MBAFF mode, a vertically adjacent macroblock pair in the picture 1501, co-located with the macroblock pair in the enhancement-layer picture currently subjected to texture prediction coding, may carry a video signal including even and odd field components as in Figure 16a or 16b. Figure 16a illustrates a frame-MB pair mode in which even and odd field components are interleaved in each macroblock of a macroblock pair A and B, and Figure 16b illustrates a field-MB pair mode in which each macroblock of a macroblock pair A and B includes video lines of the same field parity.
In the case of Figure 16a, to apply the method shown in Figure 15a, the even rows of each macroblock of the pair A and B are selected to construct an even-field block A', and the odd rows are selected to construct an odd-field block B', so that the macroblock pair, in each macroblock of which even and odd field components are interleaved, is divided into the two blocks A' and B' having the even and odd field components, respectively. Interpolation is performed on each of the two macroblocks A' and B' separated in this way to construct enlarged blocks. Texture prediction is performed using data in regions of the enlarged blocks corresponding to a macroblock, of intra_BL (intra base layer) or residual_prediction mode, in the enhancement-layer picture currently subjected to texture prediction coding. Although not shown in Figure 16a, the individually enlarged blocks can be combined on a field-parity basis to construct the enlarged even and odd pictures 1502a and 1502b of Figure 15a; the enlarged even and odd pictures 1502a and 1502b of Figure 15a can therefore be constructed by repeating the above operation for every macroblock pair.
When a macroblock pair has been constructed by dividing macroblocks on a field-parity basis as in Figure 16b, the above separation procedure reduces to simply copying each macroblock to construct two separated macroblocks from the macroblock pair. The subsequent procedure is similar to that described with reference to Figure 16a.
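By way of illustration only, the two macroblock-pair separations of Figures 16a and 16b can be contrasted as follows. The helper names and the toy 4-row "macroblocks" are assumptions of this sketch; real macroblocks are 16 lines tall, and the specification prescribes no implementation.

```python
def separate_frame_mb_pair(mb_a, mb_b):
    """Fig. 16a sketch: a vertically adjacent frame-macroblock pair, in
    which even and odd field lines are interleaved, is split into an
    even-field block A' and an odd-field block B' by row parity."""
    pair_rows = mb_a + mb_b            # stack the two macroblocks' rows
    return pair_rows[0::2], pair_rows[1::2]

def separate_field_mb_pair(mb_a, mb_b):
    """Fig. 16b sketch: each macroblock already holds lines of a single
    field parity, so separation reduces to taking each macroblock as-is
    (i.e., copying it)."""
    return list(mb_a), list(mb_b)

# Toy 4-row macroblocks: even field lines are tagged 'e', odd lines 'o'.
mb_top = [['e', 0], ['o', 0], ['e', 1], ['o', 1]]
mb_bottom = [['e', 2], ['o', 2], ['e', 3], ['o', 3]]
a_prime, b_prime = separate_frame_mb_pair(mb_top, mb_bottom)
```

Each separated block would then be individually enlarged and used for texture prediction as described above.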
T_B). The case where the enhancement layer is interlaced and the base layer is progressive
Figure 17a illustrates an embodiment of a method of using the base-layer picture for inter-layer texture prediction in this case. As shown in the figure, first, the EL encoder 20 constructs two pictures for the current-layer picture 1700 (S171). In one exemplary method applied to construct the two pictures, the even rows of the corresponding picture 1701 are selected to construct one picture 1701a, and its odd rows are selected to construct another picture 1701b. The EL encoder 20 then performs interpolation of the two pictures 1701a and 1701b constructed in this way in the vertical and/or horizontal direction to create two enlarged pictures 1702a and 1702b (S172). This interpolation uses one of a number of known methods, such as the 6-tap filtering and bilinear filtering of case T_A). The ratios used to increase the resolution are also the same as those described in case T_A).
Once the interpolation is complete, the two enlarged fields 1702a and 1702b are combined to construct a picture 1703 (S173). In this combination, rows of the two enlarged fields 1702a and 1702b are alternately selected (1702a->1702b->1702a->1702b->...) and arranged in the selected order to construct the combined picture 1703. Since the combined picture 1703 constructed in this way has the same spatial resolution as the current picture 1700 of the enhancement layer, texture prediction of the macroblocks in the current interlaced picture 1700 (for example, frame-macroblock-to-frame-macroblock texture prediction or the texture prediction described with reference to Figure 4g) is performed based on the corresponding macroblocks of the combined picture 1703 (S174).
Figure 17b illustrates a method of using the base-layer picture in inter-layer texture prediction according to another embodiment of the invention. As shown in the figure, this embodiment does not divide the base-layer picture into two pictures, but directly performs interpolation of the base-layer picture in the vertical and/or horizontal direction (S175) to construct an enlarged picture whose resolution is identical to the resolution (i.e., size) of the enhancement-layer picture. The enlarged picture constructed in this way is then used to perform inter-layer texture prediction of the current interlaced picture of the enhancement layer (S176).
Although the description of Figure 17a is also given at the picture level, the EL encoder 20 can perform the picture separation procedure at the macroblock level as described in case T_A) above. When the single picture 1701 is regarded as vertically adjacent macroblock pairs, the separation and interpolation procedure is similar to that shown in Figure 16a. A detailed description of this procedure is omitted here, since it can be understood intuitively from Figure 16a.
T_C). The case where both the enhancement layer and the base layer are interlaced
Figure 18 illustrates an embodiment of a method of using the base-layer picture for inter-layer texture prediction in this case. In this case, as shown in the figure, the EL encoder 20 separates the base-layer picture 1801 temporally corresponding to the current-layer picture 1800 into even and odd fields in the same manner as in case T_A) (S181). The EL encoder 20 then performs interpolation of the separated fields 1801a and 1801b in the vertical and/or horizontal direction to create enlarged even and odd pictures 1802a and 1802b (S182). The EL encoder 20 then combines the enlarged even and odd fields 1802a and 1802b to construct a picture 1803 (S183). The EL encoder 20 then performs inter-layer texture prediction of the macroblocks (frame macroblock pairs in MBAFF coding) in the current interlaced picture 1800 (for example, frame-macroblock-to-frame-macroblock texture prediction or the texture prediction described with reference to Figure 4g) based on the corresponding macroblocks of the combined picture 1803 (S184).
Although the two layers have the same picture format, the EL encoder 20 separates the base-layer picture 1801 on a field-parity basis (S181), individually enlarges the separated fields (S182), and then combines the enlarged pictures (S183). The reason is that, if the picture 1801 including the combined even and odd fields were directly interpolated when it carries a video signal whose even and odd fields vary greatly, the enlarged picture might have a distorted image (for example, an image with stretched boundaries) compared with the interlaced picture 1800 of the enhancement layer having the interleaved even and odd fields. Accordingly, even if both layers are interlaced, according to the invention the EL encoder 20 separates the base-layer picture on a field-parity basis to obtain two fields, individually enlarges the two fields, and then combines the enlarged fields.
Of course, the method shown in Figure 18 need not always be used when the pictures of the two layers are both interlaced; instead, it may be used selectively according to the video characteristics of the pictures.
Figure 18 illustrates the procedure according to the invention of separating and enlarging the picture having even and odd fields on a field-parity basis at the picture level. However, as described in T_A) above, the EL encoder 20 can achieve the same result as in Figure 18 by performing the procedure of Figure 18 at the macroblock level. This includes the macroblock-based separation and interpolation processes described with reference to Figures 16a and 16b (specifically, dividing a frame macroblock pair into blocks of even and odd rows and individually enlarging the separated blocks) and the combination and inter-layer texture prediction processes (specifically, alternately selecting rows of the enlarged blocks to construct an enlarged block pair, and performing texture prediction of a frame macroblock pair of the current layer using the constructed enlarged block pair).
T_D). The case where both the enhancement layer and the base layer are progressive
In this case, the base-layer picture is enlarged to the same size as the enhancement-layer picture, and the enlarged picture is used for inter-layer texture prediction of the current enhancement-layer picture having the same picture format.
Although embodiments of texture prediction in which the base layer and the enhancement layer have the same temporal resolution have been described above, the two layers may have different temporal resolutions, that is, different picture rates. Even when the layers have the same temporal resolution, if the pictures of the layers are of different picture scan types, the pictures may carry video signals with different output times even though they are pictures of the same POC (that is, pictures temporally corresponding to each other). An inter-layer texture prediction method for this case is now described. In the following description, it is initially assumed that the two layers have the same spatial resolution. If the two layers have different spatial resolutions, the methods described below are applied after each base-layer picture has been upsampled, as described above, so that the spatial resolution equals that of the enhancement layer.
A) The case where the enhancement layer comprises progressive frames, the base layer comprises MBAFF frames, and the temporal resolution of the enhancement layer is twice as high
Figure 19a illustrates an inter-layer texture prediction method for this case. As shown in the figure, each MBAFF frame of the base layer includes even and odd fields with different output times, and therefore the EL encoder 20 divides each MBAFF frame into the even and odd fields (S191). The EL encoder 20 divides the even-field component (for example, the even rows) and the odd-field component (for example, the odd rows) of each MBAFF frame into an even field and an odd field, respectively. After dividing the MBAFF frame into the two fields in this way, the EL encoder 20 interpolates each field in the vertical direction so that it has twice the resolution (S192). This interpolation uses one of a number of known methods, such as 6-tap filtering, bilinear filtering, and sample-line zero padding. Once the interpolation is complete, there is a temporally coincident picture in the base layer for each frame of the enhancement layer, and the EL encoder 20 therefore performs known inter-layer texture prediction (for example, frame-macroblock-to-frame-macroblock prediction) on the macroblocks of each frame of the enhancement layer (S193).
The above procedure can also be applied to inter-layer motion prediction. Here, when the MBAFF frame is divided into the two fields, the EL encoder 20 copies the motion information of each macroblock of a field macroblock pair in the MBAFF frame as the motion information of a macroblock with the same field attribute (parity), to use it for inter-layer motion prediction. Even when there is no temporally coincident picture in the base layer (at t1, t3, ...), this method can be used to create a temporally coincident picture in the manner described above to perform inter-layer motion prediction.
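By way of illustration only, steps S191 and S192 can be outlined as follows. The helper name is an assumption of this sketch, frames are modelled as plain lists of rows, and zero-order row repetition stands in for the filters named above; the specification prescribes no implementation.

```python
def mbaff_frames_to_progressive_pictures(frames):
    """S191/S192 sketch: split each base-layer MBAFF frame (a list of
    rows) into its even and odd fields, then double each field's height
    with zero-order vertical interpolation, so that one base-layer
    picture exists for every enhancement-layer frame time (t0, t1, ...)."""
    pictures = []
    for frame in frames:
        for field in (frame[0::2], frame[1::2]):       # even field, odd field
            pictures.append([row for row in field for _ in range(2)])
    return pictures

# Two toy 4-row MBAFF frames yield four pictures, one per output time.
frames = [[[i] for i in range(4)], [[i] for i in range(4, 8)]]
pictures = mbaff_frames_to_progressive_pictures(frames)
```

Each MBAFF frame thus yields two full-height pictures, doubling the base layer's picture rate to match the enhancement layer before texture prediction.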
The above method can be applied directly not only when the temporal resolution of one of the two layers is twice that of the other layer, as in the example of Figure 19a, but even when it is N times (three or more times) as high. For example, when the resolution is three times as high, one of the two separated fields can additionally be copied to construct and use three fields, and when the resolution is four times as high, each of the two separated fields can be copied once more to construct and use four fields. Obviously, whatever the difference in temporal resolution, those skilled in the art can carry out inter-layer prediction simply by applying the principles of the invention, without any inventive effort. Therefore, any method of prediction between layers having different temporal resolutions that is not described in this specification naturally falls within the scope of the invention. The same applies to the other cases described below.
If the base layer has been coded in picture-adaptive field and frame (PAFF) mode rather than in MBAFF frames, the two layers may have the same temporal resolution as in Figure 19b. Therefore, in this case, the pictures are directly interpolated without the process of dividing a frame into two fields, since they already have the same temporal resolution as the current layer, and inter-layer texture prediction is then performed.
B) The case where the enhancement layer comprises MBAFF frames, the base layer comprises progressive frames, and the temporal resolution of the enhancement layer is half that of the base layer
Figure 20 illustrates an inter-layer texture prediction method for this case. As shown in the figure, each MBAFF frame of the enhancement layer includes even and odd fields with different output times, and therefore the EL encoder 20 divides each MBAFF frame into the even and odd fields (S201). The EL encoder 20 divides the even-field component (for example, the even rows) and the odd-field component (for example, the odd rows) of each MBAFF frame into an even field and an odd field, respectively. The EL encoder 20 performs subsampling of each frame of the base layer in the vertical direction to construct pictures of halved resolution (S202). This subsampling can use row subsampling or one of various other known downsampling methods. In the example of Figure 20, the EL encoder 20 selects the even rows of the pictures with even picture indexes (pictures t0, t2, t4, ...) to obtain half-size pictures, and selects the odd rows of the pictures with odd picture indexes (pictures t1, t3, ...) to obtain half-size pictures. The frame division (S201) and the subsampling (S202) may also be performed in reverse order.
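By way of illustration only, the parity-dependent row subsampling of step S202 can be outlined as follows. The helper name is an assumption of this sketch, frames are modelled as plain lists of rows, and the specification prescribes no implementation.

```python
def subsample_base_frames(frames):
    """S202 sketch (Fig. 20): halve each base-layer frame vertically,
    taking the even rows of even-indexed pictures (t0, t2, ...) and the
    odd rows of odd-indexed pictures (t1, t3, ...), so each half-size
    picture matches the field parity of the enhancement-layer field it
    will be used to predict."""
    return [frame[0::2] if index % 2 == 0 else frame[1::2]
            for index, frame in enumerate(frames)]

# Four toy 4-row frames at times t0..t3; each row carries its row index.
frames = [[[r] for r in range(4)] for _ in range(4)]
halved = subsample_base_frames(frames)
```

The half-size pictures then have the same spatial resolution as the fields separated from the enhancement-layer MBAFF frames, one per field time.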
Once the two processes S201 and S202 are complete, each field 2001 separated from a frame of the enhancement layer has, in the base layer, a temporally coincident picture with the same spatial resolution as the field 2001, and the EL encoder 20 thus performs known inter-layer texture prediction (for example, frame-macroblock-to-frame-macroblock prediction) on the macroblocks in each field (S203).
The above procedure can also be applied to inter-layer motion prediction. Here, when a picture of reduced size is obtained from each frame of the base layer by subsampling (S202), the EL encoder 20 can obtain the motion information of each corresponding macroblock from the motion information of each macroblock of a vertically adjacent macroblock pair according to a suitable method (for example, a method that adopts the motion information of the block that is not partitioned into smaller blocks), and can then use the obtained motion information for inter-layer motion prediction.
In this case, the pictures of the enhancement layer are coded and transmitted in PAFF mode, since inter-layer prediction is performed on each field picture 2001 separated from the MBAFF frames.
C) The case where the enhancement layer comprises MBAFF frames, the base layer comprises progressive frames, and the two layers have the same temporal resolution
Figure 21 illustrates an inter-layer texture prediction method for this case. As shown in the figure, each MBAFF frame of the enhancement layer includes even and odd fields with different output times, and therefore the EL encoder 20 divides each MBAFF frame into the even and odd fields (S211). The EL encoder 20 divides the even-field component (for example, the even rows) and the odd-field component (for example, the odd rows) of each MBAFF frame into an even field and an odd field, respectively. The EL encoder 20 performs subsampling of each frame of the base layer in the vertical direction to construct pictures of halved resolution (S212). This subsampling can use row subsampling or one of various other known downsampling methods. The frame division (S211) and the subsampling (S212) may also be performed in reverse order.
EL encoder 20 also can construct field (such as, even field picture) by MBAFF frame, instead of MBAFF frame is divided into two fields.This is because two-layer, there is identical temporal resolution, from a frame, therefore in isolated two field pictures, only have one (instead of both all) in basic unit, have the respective frame that can be used for inter-layer prediction.
Once the two processes S211 and S212 are complete, the EL encoder 20 performs known inter-layer texture prediction (e.g., frame-to-frame macroblock prediction) on only the even (or odd) field separated from each frame of the enhancement layer, based on the corresponding sub-sampled picture of the base layer (S213).
In this case as well, inter-layer motion prediction can be performed on the separated field of the enhancement layer for which inter-layer texture prediction was performed, in the same manner as described for case b).
Although the above description presents the inter-layer prediction operations as performed by the EL encoder 20 of Fig. 2a or 2b, all of the described inter-layer prediction operations apply equally to an EL decoder that receives decoded information from the base layer and decodes the enhancement layer stream. In the encoding and decoding procedures, the inter-layer prediction operations described above (including the operations for separating, upsampling, and combining the video signal in pictures or macroblocks) are performed in the same manner, but the operations after inter-layer prediction are performed differently. One example of this difference: after performing motion and texture prediction, the encoder encodes either the predicted information itself or the difference between the predicted information and the actual information for transmission to the decoder, whereas the decoder obtains the actual motion and texture information for the current macroblock either by directly applying the information obtained by performing the same inter-layer motion and texture prediction as performed at the encoder, or by additionally using the received actual macroblock coding information. The details and principles described above from the encoding perspective of the present invention apply directly to a decoder that decodes the received two-layer data stream.
However, when the EL encoder separates the enhancement layer containing MBAFF frames into a sequence of field pictures after inter-layer prediction and transmits them in PAFF mode, as described with reference to Figures 20 and 21, the decoder does not perform the above procedure of dividing an MBAFF frame into field pictures on the currently received layer.
In addition, the decoder decodes from the received signal the flag 'field_base_flag', which identifies whether the EL encoder 20 performed inter-layer texture prediction between macroblocks as shown in Fig. 8d or as shown in Fig. 8h. Based on the value of the decoded flag, the decoder determines whether prediction between macroblocks was performed as shown in Fig. 8d or as shown in Fig. 8h, and obtains the texture prediction information accordingly. If the flag 'field_base_flag' is not received, the EL decoder assumes that a flag with the value '0' was received. That is, the EL decoder assumes that texture prediction between macroblocks was performed according to the method shown in Fig. 8d, and obtains the prediction information of the current macroblock pair to reconstruct the current macroblock or macroblock pair.
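The decoder's handling of 'field_base_flag', including the default value of 0 when the flag is absent from the received signal, can be sketched as below. The helper name and the dict-based representation of the decoded flags are assumptions for illustration only.

```python
def texture_prediction_mode(decoded_flags):
    """Choose the inter-layer texture prediction mode from 'field_base_flag'.

    A value of 1 indicates field-based prediction between macroblocks
    (Fig. 8h); a value of 0, or a missing flag, indicates frame-based
    prediction (Fig. 8d), which is the decoder's default assumption.
    """
    flag = decoded_flags.get('field_base_flag', 0)
    return 'fig_8h_field_based' if flag == 1 else 'fig_8d_frame_based'
```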
At least one of the above-described limited embodiments of the present invention enables inter-layer prediction even when the video signal sources of the two layers have different formats (or modes). Therefore, when encoding or decoding multiple layers, the data coding rate can be improved without being limited by the picture type of the video signal, such as interlaced signals, progressive signals, MBAFF frame pictures, and field pictures. In addition, when one of the two layers is an interlaced video signal source, the image of the picture used for prediction can be constructed to more closely resemble the original image to be predictively encoded or decoded, thereby improving the data coding rate.
Although the present invention has been described with reference to preferred embodiments, it will be apparent to those skilled in the art that various improvements, modifications, replacements, and additions can be made to the present invention without departing from the scope and spirit of the invention. Therefore, the present invention is intended to cover such improvements, modifications, replacements, and additions, provided they fall within the scope of the appended claims and their equivalents.

Claims (10)

1. An inter-layer prediction method for a video signal, the method comprising the steps of:
deriving position information of a virtual macroblock corresponding to a frame macroblock of a current layer;
deriving, based on the position information of the virtual macroblock, position information of a corresponding macroblock of a base layer, the corresponding macroblock of the base layer being a single field macroblock in a macroblock-adaptive frame/field frame (MBAFF frame), the MBAFF frame being a frame that comprises a macroblock pair comprising an odd-field macroblock and an even-field macroblock and a macroblock pair comprising two frame macroblocks;
obtaining motion information of the corresponding macroblock based on the position information of the corresponding macroblock;
predicting motion information of the frame macroblock using the motion information of the corresponding macroblock; and
decoding the frame macroblock of the current layer using the predicted motion information.
2. The method of claim 1, wherein the motion information comprises a motion vector and a reference index.
3. The method of claim 2, wherein the predicting comprises dividing the reference index of the corresponding macroblock by 2.
4. The method of claim 1, wherein the corresponding macroblock is coded in inter mode.
5. The method of claim 1, wherein an aspect ratio of an image of the base layer and an aspect ratio of an image of an enhancement layer are different.
6. An inter-layer prediction apparatus for a video signal, the apparatus comprising:
a decoding unit configured to derive position information of a virtual macroblock corresponding to a frame macroblock of a current layer; derive, based on the position information of the virtual macroblock, position information of a corresponding macroblock of a base layer; obtain motion information of the corresponding macroblock based on the position information of the corresponding macroblock; predict motion information of the frame macroblock using the motion information of the corresponding macroblock; and decode the frame macroblock of the current layer using the predicted motion information,
wherein the corresponding macroblock of the base layer is a single field macroblock in a macroblock-adaptive frame/field frame (MBAFF frame), and
the MBAFF frame is a frame that comprises a macroblock pair comprising an odd-field macroblock and an even-field macroblock and a macroblock pair comprising two frame macroblocks.
7. The apparatus of claim 6, wherein the motion information comprises a motion vector and a reference index.
8. The apparatus of claim 7, wherein the predicting comprises dividing the reference index of the corresponding macroblock by 2.
9. The apparatus of claim 6, wherein the corresponding macroblock is coded in inter mode.
10. The apparatus of claim 6, wherein an aspect ratio of an image of the base layer and an aspect ratio of an image of an enhancement layer are different.
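The motion-information prediction step of claims 2-3 (and 7-8) can be illustrated with a small sketch in which the field macroblock's reference index is divided by 2. The motion vector is carried over unchanged here, and any motion-vector scaling that a particular codec profile might additionally require is omitted; the function and field names are illustrative, not from the patent.

```python
def predict_frame_mb_motion(field_mb):
    """Predict a frame macroblock's motion info from a base-layer field macroblock.

    field_mb: dict with 'mv' (a motion-vector component pair) and 'ref_idx'.
    Per claim 3, the field macroblock's reference index is divided by 2,
    reflecting that a pair of field references corresponds to one frame
    reference; the motion vector is reused as-is in this simplified sketch.
    """
    return {
        'mv': field_mb['mv'],
        'ref_idx': field_mb['ref_idx'] // 2,
    }
```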
CN201210585882.3A 2006-01-09 2007-01-09 For inter-layer prediction method and the device of vision signal Active CN103096078B (en)

Applications Claiming Priority (29)

Application Number Priority Date Filing Date Title
US75700906P 2006-01-09 2006-01-09
US60/757,009 2006-01-09
US75823506P 2006-01-12 2006-01-12
US60/758,235 2006-01-12
US77693506P 2006-02-28 2006-02-28
US60/776,935 2006-02-28
US78339506P 2006-03-20 2006-03-20
US60/783,395 2006-03-20
US78674106P 2006-03-29 2006-03-29
US60/786,741 2006-03-29
US78749606P 2006-03-31 2006-03-31
US60/787,496 2006-03-31
US81634006P 2006-06-26 2006-06-26
US60/816,340 2006-06-26
US83060006P 2006-07-14 2006-07-14
US60/830,600 2006-07-14
KR10-2006-0111895 2006-11-13
KR1020060111894A KR20070074451A (en) 2006-01-09 2006-11-13 Method for using video signals of a baselayer for interlayer prediction
KR10-2006-0111893 2006-11-13
KR1020060111897A KR20070074453A (en) 2006-01-09 2006-11-13 Method for encoding and decoding video signal
KR1020060111895A KR20070074452A (en) 2006-01-09 2006-11-13 Inter-layer prediction method for video signal
KR1020060111893A KR20070075257A (en) 2006-01-12 2006-11-13 Inter-layer motion prediction method for video signal
KR10-2006-0111897 2006-11-13
KR10-2006-0111894 2006-11-13
KR10-2007-0001582 2007-01-05
KR1020070001582A KR20070095180A (en) 2006-03-20 2007-01-05 Inter-layer prediction method for video signal based on picture types
KR10-2007-0001587 2007-01-05
KR1020070001587A KR20070075293A (en) 2006-01-12 2007-01-05 Inter-layer motion prediction method for video signal
CN200780005672XA CN101385352B (en) 2006-01-09 2007-01-09 Inter-layer prediction method for video signal

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN200780005672XA Division CN101385352B (en) 2006-01-09 2007-01-09 Inter-layer prediction method for video signal

Publications (2)

Publication Number Publication Date
CN103096078A CN103096078A (en) 2013-05-08
CN103096078B true CN103096078B (en) 2015-10-21

Family

ID=43754792

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210585882.3A Active CN103096078B (en) 2006-01-09 2007-01-09 For inter-layer prediction method and the device of vision signal

Country Status (2)

Country Link
CN (1) CN103096078B (en)
BR (1) BRPI0706378A2 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6514197B2 (en) 2013-07-15 2019-05-15 ジーイー ビデオ コンプレッション エルエルシー Network device and method for error handling
WO2015060614A1 (en) * 2013-10-22 2015-04-30 주식회사 케이티 Method and device for encoding/decoding multi-layer video signal
WO2015082763A1 (en) 2013-12-02 2015-06-11 Nokia Technologies Oy Video encoding and decoding
CN116170590A (en) * 2017-08-10 2023-05-26 夏普株式会社 Image filter device, image decoder device, and image encoder device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0731614A2 (en) * 1995-03-10 1996-09-11 Kabushiki Kaisha Toshiba Video coding/decoding apparatus
JP2003018595A (en) * 2001-07-03 2003-01-17 Matsushita Electric Ind Co Ltd Video encoding system, video decoding system, video encoding method, and video decoding method
CN1678074A (en) * 2004-03-29 2005-10-05 三星电子株式会社 Method and apparatus for generating motion vector in lamina motion evaluation
CN1801941A (en) * 2005-11-18 2006-07-12 宁波中科集成电路设计中心有限公司 Motion vector prediction multiplex design method in multi-mode standard decoder

Also Published As

Publication number Publication date
BRPI0706378A2 (en) 2011-03-22
CN103096078A (en) 2013-05-08

Similar Documents

Publication Publication Date Title
CN101385350B (en) Inter-layer prediction method for video signal
KR100914712B1 (en) Inter-layer prediction method for video signal
CN103096078B (en) For inter-layer prediction method and the device of vision signal
RU2384970C1 (en) Interlayer forcasting method for video signal
MX2008008825A (en) Inter-layer prediction method for video signal

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant