EP1782634A1 - Verfahren und Vorrichtung zum Codieren und Decodieren

Verfahren und Vorrichtung zum Codieren und Decodieren (Method and Device for Coding and Decoding)

Info

Publication number
EP1782634A1
Authority
EP
European Patent Office
Prior art keywords
block structure
cif
block
4cif
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
EP05764634A
Other languages
German (de)
English (en)
French (fr)
Inventor
Peter Amon
Andreas Hutter
Benoit Timmermann
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Siemens AG
Original Assignee
Siemens AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Siemens AG filed Critical Siemens AG
Publication of EP1782634A1 publication Critical patent/EP1782634A1/de

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • H04N19/36Scalability techniques involving formatting the layers as a function of picture distortion after decoding, e.g. signal-to-noise [SNR] scalability
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/119Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/146Data rate or code amount at the encoder output
    • H04N19/147Data rate or code amount at the encoder output according to rate distortion criteria
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/172Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/187Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a scalable video layer
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • H04N19/33Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability in the spatial domain
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/513Processing of motion vectors
    • H04N19/517Processing of motion vectors by encoding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/59Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/90Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/96Tree coding, e.g. quad-tree coding

Definitions

  • The invention relates to a method for video coding according to the preamble of claim 1, a method for decoding according to the preamble of claim 22, as well as an encoder for video coding according to the preamble of claim 23 and a decoding device according to the preamble of claim 24.
  • Digital video data is usually compressed for storage or transmission in order to significantly reduce the enormous volume of data.
  • The compression takes place both by eliminating the signal redundancy contained in the video data and by eliminating the irrelevant signal parts which are imperceptible to the human eye.
  • This is generally achieved by a hybrid coding method in which the image to be coded is first predicted in time and the remaining prediction error is then transformed into the frequency domain, for example by a discrete cosine transform, quantized there, and encoded with a variable-length code. The motion information and the quantized spectral coefficients are finally transmitted.
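  • As an illustration of this hybrid coding principle, the following minimal sketch (not taken from the patent; the 8 × 8 block size, the fixed quantization step and the hand-rolled orthonormal DCT are assumptions) shows the per-block steps of temporal prediction, transformation of the prediction error and quantization:

```python
# Minimal sketch of one hybrid-coding step for a single image block (assumptions:
# 8x8 blocks, a fixed quantization step, and a hand-rolled orthonormal DCT-II).
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II transform matrix of size n x n."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def code_block(current: np.ndarray, prediction: np.ndarray, qstep: float = 8.0):
    """Temporal prediction -> 2-D DCT of the residual -> quantization."""
    residual = current.astype(float) - prediction.astype(float)
    c = dct_matrix(residual.shape[0])
    coeffs = c @ residual @ c.T              # 2-D DCT of the prediction error
    levels = np.round(coeffs / qstep)        # uniform quantization
    return levels                            # entropy coding (VLC) would follow

# Toy usage: a block predicted from the previous frame by motion compensation.
rng = np.random.default_rng(0)
prev_block = rng.integers(0, 255, (8, 8))
curr_block = prev_block + rng.integers(-3, 4, (8, 8))   # small prediction error
print(code_block(curr_block, prev_block))
```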
  • An essential task in the compression of video data is thus to obtain the most accurate possible prediction of the picture to be coded from the previously transmitted picture information.
  • The prediction of an image has hitherto been effected by first dividing the image into regular sections, typically square blocks of 8 × 8 or 16 × 16 pixels, and then determining a prediction for each of these picture blocks from the image information already known at the receiver by means of motion compensation. (Blocks of different sizes may also result.) Such an approach can be seen in the figure. Two basic cases of prediction can be distinguished:
  • Uni-directional prediction: the motion compensation takes place exclusively on the basis of the previously transmitted image and leads to so-called "P-frames".
  • Bi-directional prediction: the prediction of the image is obtained by superimposing two images, one of which precedes it in time and the other follows it in time; this leads to so-called "B-frames". Note that both reference pictures have already been transmitted.
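  • A minimal sketch of these two prediction cases, assuming integer-pel motion vectors, frames as plain numpy arrays and simple averaging for the bi-directional case (all of which are illustrative assumptions):

```python
# Minimal sketch of uni- and bi-directional block prediction (assumptions:
# integer-pel motion vectors, frames as numpy arrays, averaging for B-blocks).
import numpy as np

def mc_block(frame, top, left, mv, size=8):
    """Fetch the motion-compensated block at (top, left) shifted by mv=(dy, dx)."""
    dy, dx = mv
    return frame[top + dy:top + dy + size, left + dx:left + dx + size].astype(float)

def predict_block(prev, nxt, top, left, mv_fwd, mv_bwd=None, size=8):
    """Uni-directional (P) prediction from prev, or bi-directional (B) prediction
    as the average of the forward and backward motion-compensated blocks."""
    p_fwd = mc_block(prev, top, left, mv_fwd, size)
    if mv_bwd is None or nxt is None:
        return p_fwd                                  # P-frame case
    p_bwd = mc_block(nxt, top, left, mv_bwd, size)
    return 0.5 * (p_fwd + p_bwd)                      # B-frame case

rng = np.random.default_rng(1)
prev_frame = rng.integers(0, 255, (32, 32))
next_frame = rng.integers(0, 255, (32, 32))
p_pred = predict_block(prev_frame, None, 8, 8, mv_fwd=(1, -2))
b_pred = predict_block(prev_frame, next_frame, 8, 8, mv_fwd=(1, -2), mv_bwd=(-1, 1))
print(p_pred.shape, b_pred.shape)
```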
  • Motion-compensated temporal filtering yields five directional modes in the method of MSRA [1], as can be seen in FIG. 2.
  • MCTF-based scalable video coding is used to ensure good video quality for a very wide range of possible bit rates as well as temporal and spatial resolution levels.
  • The MCTF algorithms known today show unacceptable results at reduced bit rates, because too little texture information (block content) exists in relation to the motion information (block structures and motion vectors) describing a given video sequence. A scalable form of the motion information is therefore required in order to achieve an optimum ratio between texture and motion data at every bit rate and resolution.
  • MSRA: Microsoft Research Asia
  • The MSRA solution proposes to represent motion in layers, i.e. to resolve it into successively refined structures.
  • The MSRA method thus generally improves the quality of images at low bit rates.
  • However, it leads to shifts in the reconstructed image, which are attributable to an offset between the motion information and the textures.
  • The object underlying the invention is to provide a method for coding and decoding, as well as an encoder and a decoder, which enable better embedding of refined structures.
  • This object is achieved, starting from the method for coding according to the preamble of claim 1, by its characterizing features. Furthermore, this object is achieved by a method for decoding according to the preamble of claim 22, an encoder according to the preamble of claim 23 and a decoder according to the preamble of claim 24 by their respective features.
  • The coding is block-based: for a description of any motion of parts of the images contained in the image sequence, at least one block structure describing the motion is generated. Starting from a block, this structure is partially divided into sub-blocks, with the sub-blocks in turn being subdivided into successively finer sub-blocks. Temporarily, a first block structure is generated for at least a first resolution level and a second block structure for a second resolution level, the first resolution level having a lower number of pixels and/or a lower image quality than the second resolution level.
  • The second block structure is compared with the first block structure in such a way that differences between the block structures are determined. On the basis of properties of these structure differences, a modified second block structure is generated whose structure represents a subset of the second block structure. Subsequently, the modified second block structure and the second block structure are compared on the basis of at least one value proportional to the image quality, and the coding of the bit sequence is based on that block structure whose value indicates the better quality.
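  • A minimal sketch of this comparison step follows; the representation of a block structure as a set of quadtree leaf blocks (x, y, size), the property check and the quality measure are assumptions made only for illustration:

```python
# Sketch of the comparison step: block structures are modelled as sets of leaf
# blocks (x, y, size) of a quadtree. Leaves added by further subdivision in the
# second structure are either kept or collapsed back to the coarser block of the
# first structure, yielding a modified second structure that is a subset of the
# second one; the structure with the better quality value is then used for coding.
# The representation, the property check and the quality measure are assumptions.

def enclosing_leaf(leaf, structure):
    """Leaf of 'structure' that spatially contains 'leaf'."""
    x, y, s = leaf
    for bx, by, bs in structure:
        if bx <= x and by <= y and x + s <= bx + bs and y + s <= by + bs:
            return (bx, by, bs)
    raise ValueError("no enclosing leaf")

def modified_structure(first, second, keep_leaf):
    """Added leaves failing keep_leaf() are replaced by the coarse enclosing leaf."""
    added = {leaf for leaf in second if leaf not in first}
    out = set()
    for leaf in second:
        if leaf in added and not keep_leaf(leaf, first):
            out.add(enclosing_leaf(leaf, first))   # keep the coarse block instead
        else:
            out.add(leaf)
    return out

def choose_structure(second, modified, quality):
    """Use whichever block structure yields the better quality value."""
    return modified if quality(modified) >= quality(second) else second

# Toy usage on a 16x16 area: the second structure refines the top-left quadrant.
first  = {(0, 0, 8), (8, 0, 8), (0, 8, 8), (8, 8, 8)}
second = {(0, 0, 4), (4, 0, 4), (0, 4, 4), (4, 4, 4), (8, 0, 8), (0, 8, 8), (8, 8, 8)}
keep    = lambda leaf, first: leaf[2] <= 2        # placeholder property check
quality = lambda s: -len(s)                       # placeholder quality proxy
mod = modified_structure(first, second, keep)
print(sorted(mod), choose_structure(second, mod, quality) is mod)
```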
  • In this way the difference between texture information is minimized and, moreover, this information can be coded with minimal effort.
  • The offset disappears in cases where, for example, the finest motion vector field has been selected, so that an improvement in image quality is ensured even at lower bit rates and lower resolutions.
  • The comparisons according to the invention ensure that a step-by-step and, above all, optimal adaptation between the motion estimation and the embedding of residual-error images is achieved.
  • The method is furthermore characterized by its particular efficiency.
  • For the difference determination, the sub-blocks that have been added are preferably detected, with properties of the sub-blocks being detected as an alternative or in addition to the difference determination.
  • If the block size of the sub-blocks is detected as a sub-block property, one obtains an indicator of the degree of fineness of the generated block structures that works very well in practice.
  • In this way the differences in the texture information can be further reduced.
  • Preferably, only those sub-blocks of the second block structure whose block size reaches a definable threshold value are taken over into the modified second block structure.
  • The threshold value is preferably defined such that it indicates the ratio between the block size of a sub-block of the second block structure and the block size of the smallest sub-block contained in the region of the first block structure used for the comparison.
  • The sub-blocks taken over can be non-dyadic.
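  • The threshold criterion of the two preceding items could look as follows in a sketch (treating block sizes as edge lengths and using the factor 4 from the embodiment described later are assumptions); such a check could serve as the property check in the sketch above:

```python
# Sketch of the threshold criterion (assumptions: block sizes as edge lengths,
# the factor 4 from the embodiment, and the helper names).

def smallest_size_in_region(first_structure, region):
    """Smallest sub-block edge length of the first structure inside 'region'."""
    rx, ry, rs = region
    sizes = [s for (x, y, s) in first_structure
             if rx <= x < rx + rs and ry <= y < ry + rs]
    return min(sizes)

def take_over(sub_block, first_structure, region, factor=4):
    """Adopt the sub-block only if it is more than 'factor' times smaller than the
    smallest sub-block of the corresponding region of the first structure."""
    return smallest_size_in_region(first_structure, region) / sub_block[2] > factor

# Toy usage: region (0, 0, 16) of the first structure contains 8x8 blocks only.
first = {(0, 0, 8), (8, 0, 8), (0, 8, 8), (8, 8, 8)}
print(take_over((0, 0, 4), first, (0, 0, 16)))   # 8/4 = 2  -> not adopted
print(take_over((0, 0, 1), first, (0, 0, 16)))   # 8/1 = 8  -> adopted
```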
  • A further improvement of the results with respect to the representation of the decoded image can be achieved if the modified second block structure of the second resolution level is used as the first block structure of a third resolution level, the second resolution level having a lower pixel count and/or image quality than the third resolution level.
  • Possible further block structures of higher resolution levels are treated in the same way when generating their modified block structures, the modified second block structure of the respectively preceding resolution level being used for the comparison according to the invention.
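  • A minimal sketch of this chaining across resolution levels, where compare() stands for the comparison described above and the toy structures are assumptions:

```python
# Sketch of chaining the comparison across resolution levels (e.g. QCIF -> CIF ->
# 4CIF): the modified structure of each level serves as the "first" structure of
# the next level. compare() stands for the comparison sketched above; all names
# and the toy data are assumptions.

def refine_across_levels(temporary_structures, compare):
    """temporary_structures: list ordered from lowest to highest resolution level.
    Returns the modified structure for each level."""
    modified = [temporary_structures[0]]          # lowest level is taken as-is
    for second in temporary_structures[1:]:
        first = modified[-1]                      # modified structure of the level below
        modified.append(compare(first, second))
    return modified

# Toy usage with structures represented simply as sets of leaf blocks.
mv_qcif = {(0, 0, 8)}
mv_cif  = {(0, 0, 4), (4, 0, 4), (0, 4, 4), (4, 4, 4)}
mv_4cif = {(0, 0, 2), (2, 0, 2), (0, 2, 2), (2, 2, 2), (4, 0, 4), (0, 4, 4), (4, 4, 4)}
identity_compare = lambda first, second: second   # placeholder for the real comparison
print(refine_across_levels([mv_qcif, mv_cif, mv_4cif], identity_compare))
```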
  • The identification takes place through the use of a direction mode, which is referred to in particular as "not_refind".
  • A bit stream is generated during the encoding of the bit sequence in such a way that it represents a scalable texture. This is preferably achieved by realizing the bit stream as a number of bit planes, this number being varied at least as a function of the comparison result and of the bit rate to be realized for a transmission. This achieves an adapted SNR scalability.
  • At least a first part of the bit planes representing the second block structure is updated. This ensures that the corresponding modified second block structure is available on the decoder side.
  • The update can be carried out, for example, by transmitting a second part of bit planes or, alternatively, by modifying the first part with a second part of bit planes.
  • The updating is preferably carried out in such a way that those regions of the texture associated with the second block structure which are defined by the modified second block structure are refined. As a result, a good image quality is available even for different spatio-temporal resolutions and bit rates, without drift arising from an offset between motion vector fields and residual error blocks that do not make use of the refinement of the block structures.
  • Additional support for the finer granularity is achieved if, at a high bit rate, a second number of bit planes exceeding the first number is transmitted.
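  • As an illustration only, a sketch of how the number of transmitted bit planes could be varied with the target bit rate and the comparison result; the simple cost model and all numbers are assumptions, not taken from the patent:

```python
# Sketch of varying the number of transmitted bit planes with the target bit rate
# and with the refinement decision (cost model and all numbers are assumptions).

def num_bit_planes(target_bits, bits_per_plane, refined, base_planes=4, extra=2):
    """Spend the bit budget plane by plane; when the block structure was refined,
    allow up to 'extra' additional planes to support the finer granularity."""
    limit = base_planes + (extra if refined else 0)
    affordable = target_bits // bits_per_plane
    return int(min(limit, affordable))

print(num_bit_planes(target_bits=12000, bits_per_plane=2500, refined=False))  # 4
print(num_bit_planes(target_bits=20000, bits_per_plane=2500, refined=True))   # 6
```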
  • The object on which the invention is based is also achieved by the method for decoding a coded image sequence, in which a scaled representation of the image sequence is generated by taking into account the information contained in the image sequence coded according to the method, in particular the above-described information for updating motion information, and a bit stream representing a scalable texture.
  • The encoder according to the invention, which has means for carrying out the method, and a corresponding decoder, which has means for decoding a coded picture sequence generated according to the method, also contribute to achieving the object.
  • The decoder preferably has means for detecting first signals which indicate the parts of the bit stream representing a scalable texture, and additionally means for detecting second signals which indicate the regions to be updated, the signals each being designed in particular as syntax elements.
  • The decoder has means for determining those bit planes at which an update leads to improvements in the representation of the coded image sequence, and alternatively or additionally means for determining the bit plane at which the update of a texture is to take place in order to reconstruct a refined or scalable representation of the image sequence.
  • If the decoder has means for updating a texture which are configured in such a way that updated motion information is taken into account, the elimination of the offset achieved by the inventive encoding method can also be ensured in the scalable representation of the image sequence generated on the decoder side.
  • The decoder is preferably characterized by updating means configured in such a way that an updated texture is formed from an existing texture, the updated texture information being formed from the texture information assigned to the texture and a texture update information, the update being designed such that the texture information is at least partially replaced by the texture update information.
  • FIG. 1 shows the model of a motion estimation for generating scalable motion information
  • FIG. 2 shows the directional modes necessary for this
  • FIG. 3 shows the subblock sizes used here
  • FIG. 4 shows the schematic representation of block structures produced according to the invention
  • FIG. 5 shows schematically the decision according to the invention about updates
  • FIG. 6 schematically shows the generation according to the invention of an updated bitstream
  • FIG. 1 schematically shows the MSRA solution known from the prior art, which is explained for a better understanding of the invention, since it is used at least in part in the described embodiment.
  • The mentioned multilayer motion estimation is performed in each temporal layer.
  • The motion estimation is realized at a fixed spatial resolution with different macroblock sizes, so that the resulting motion vector field adapts to the decoded resolution.
  • The motion estimation is performed at the resolution level of the CIF format (CIF resolution), with a block size of 32 × 32 as the base and a macroblock size of 8 × 8 as the smallest block size, if the decoded format is the CIF format. For a lower decoded resolution, the size of the macroblocks is scaled down by a factor of 2, as can be seen from the figure.
  • In the lower branch of the processing shown there, the original motion vectors are transmitted for the decoding of the block present in QCIF format, while for each higher layer, for example the one used for the decoding of the CIF block, only the difference information with respect to the motion vectors is used.
  • A single motion vector of a lower layer can thus be used to predict a plurality of vectors of the higher layer if the block is split up into smaller sub-blocks.
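  • A minimal sketch of this layered difference coding of motion vectors (the block sizes in the comments and all names are assumptions):

```python
# Sketch of layered motion vector coding: each sub-block vector of the higher
# layer is predicted by the single vector of the co-located coarse block, and
# only the differences are coded. Block sizes and names are assumptions.

def encode_layer(base_mv, refined_mvs):
    """Differences between the refined sub-block vectors and the coarse vector."""
    return [(mv[0] - base_mv[0], mv[1] - base_mv[1]) for mv in refined_mvs]

def decode_layer(base_mv, diffs):
    """Reconstruct the refined vectors from the coarse vector and the differences."""
    return [(d[0] + base_mv[0], d[1] + base_mv[1]) for d in diffs]

base = (1, -2)                                  # vector of the coarse 16x16 block
refined = [(1, -2), (2, -2), (1, -3), (0, -2)]  # vectors of the four 8x8 sub-blocks
diffs = encode_layer(base, refined)
assert decode_layer(base, diffs) == refined
print(diffs)
```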
  • FIG. 3 shows that, in the MSRA method, the block structures are coded by the same method as is used in the standard MPEG-4 AVC (Advanced Video Coding) [2].
  • The motion estimation belonging to the higher resolutions is regarded as enhancement information (enhancement layer/information) on top of the coarse motion information. Since the residual error block obtained with the coarse motion vector field contains a large amount of energy, only the residual error block generated after the finest motion compensation is transmitted. This leads, especially when the coarse motion information is selected, to very strong artifacts in the reconstructed residual error image, even when the bit rate is high.
  • FIG. 4 shows how temporary block structures generated according to the invention lead, using the method according to the invention, to block structures which are ultimately to be transmitted.
  • Each of these block structures is assigned to a resolution level, where "resolution level" designates the format of the resolution with which a video signal consisting of image sequences and encoded by the method according to the invention can be represented.
  • Here these are the Common Intermediate Format (CIF), the QCIF format and the 4CIF format.
  • QCIF represents a first resolution level, that is to say the lowest of the resolution levels selected according to the invention, so that a first block structure MV_QCIF is assigned to it, while CIF represents a second resolution level, for which a second block structure MV_CIF is produced according to the invention.
  • The block structures are generated in the context of a motion estimation algorithm, for example using the already mentioned MCTF and/or MSRA method. It can also be seen that the temporary block structures MV_QCIF, MV_CIF and MV_4CIF have successively refined sub-block structures, characterized by increasingly finer sub-blocks added to the base blocks defined for each temporary block structure MV_QCIF, MV_CIF and MV_4CIF.
  • FIG. 4 also shows the block structures MV'_QCIF, MV'_CIF and MV'_4CIF which are to be transmitted (or are finally transmitted, for example for a streaming application) and which are generated from the temporary block structures MV_QCIF, MV_CIF and MV_4CIF using the method according to the invention.
  • MV'_CIF and MV'_4CIF are generated by respectively comparing a block structure belonging to a higher resolution level with the block structure belonging to the next lower resolution level and, as a result, generating a modified block structure belonging to the considered resolution level whose sub-block structure contains only a subset of the temporary block structure belonging to the same resolution level. This is not a proper subset, which would preclude the case that the sub-block structure of the modified block structure coincides with the sub-block structure of the corresponding temporary block structure; since this special case can also occur in the method according to the invention, it is merely an ordinary subset in the mathematical sense.
  • The generation starts with the block structure belonging to the lowest resolution level.
  • The modified block structure MV'_QCIF results directly from this first block structure MV_QCIF, since no comparison with a preceding block structure can be made in this case.
  • The directly resulting modified block structure MV'_QCIF therefore has the same sub-block structure as the first block structure MV_QCIF.
  • Next, a second block structure MV_CIF is generated. It can be seen that additional sub-blocks have been added to the second block structure MV_CIF, which lead to a finer sub-block structure compared with the first block structure MV_QCIF.
  • The sub-blocks or sub-block structures that have been added are shown in phantom in the figure.
  • In a next step, a comparison is therefore carried out in which the added sub-blocks are checked as to whether they have a block size that is more than four times smaller than the smallest block size of the corresponding subarea of the first block structure.
  • If this is the case, the corresponding sub-block structure is included in a modified second block structure MV'_CIF, whereas in cases where the sub-block to be examined represents less refinement, the adoption of the sub-block structure into the modified second block structure to be transmitted is dispensed with.
  • For example, the first sub-block SB1 is located in a first sub-block MB1_CIF of the second block structure MV_CIF. Accordingly, the first sub-block MB1_QCIF of the first block structure MV_QCIF corresponding to MB1_CIF is examined for the smallest sub-block size occurring there. In the present example, this minimum block size is defined by a first minimum sub-block MIN_SB1. As can be seen, the size of the first sub-block SB1 corresponds to the size of the first minimum sub-block, so there is no refinement in this case. Accordingly, the sub-block structure underlying the first sub-block is not adopted into the second block structure MV'_CIF to be transmitted, so that in the illustration according to FIG. 4 the modified second block structure MV'_CIF lacks the dot-dash grid at the corresponding position.
  • Next, a second sub-block SB2 is used for the comparison. Since the second sub-block SB2 is contained in a fourth sub-block MB4_CIF of the second block structure MV_CIF, the minimum sub-block size is sought in the fourth sub-block MB4_QCIF of the first block structure MV_QCIF. It is given by a second minimum sub-block MIN_SB2, which in this case exactly divides the fourth sub-block MB4_QCIF of the first block structure MV_QCIF.
  • The size of the second sub-block SB2 is one eighth of the size of the second minimum sub-block MIN_SB2, so that an eightfold refinement compared with the first block structure MV_QCIF is present.
  • The sub-block structure defining the second sub-block is therefore adopted into the modified second block structure MV'_CIF.
  • The same happens for all corresponding blocks of the second block structure MV_CIF, as can be seen in the illustration according to FIG. 4 from the dashed structures of the modified second block structure MV'_CIF.
  • A block structure MV_4CIF is also generated for the 4CIF format. According to the invention, this is again used as a second block structure, while the first block structure is given by the preceding second block structure MV_CIF.
  • The modified second block structure MV'_4CIF resulting from the comparison of the two block structures has again been refined, in the representation of FIG. 4, only by a part of the added sub-block structures, which are shown dotted in the illustration.
  • For further resolution levels, the respective modified second block structure can again be used as a first block structure.
  • The data rates for the motion information at the various spatial resolution levels are set by a parameter, so that an optimum ratio between the data rate for motion information and that for texture information results at each resolution level.
  • The invention is not restricted to the exemplary embodiment explained with reference to FIG. 4, but encompasses all implementations within the scope of expert knowledge which comprise the core of the invention.
  • An essential advantage of the algorithm according to the invention is the improvement of the image quality even at low bit rates as well as at low resolutions.
  • FIG. 5 now shows which method steps are taken as a basis for the signaling explained above or also for the bitstream generation, as explained below.
  • The novel block mode proposed according to the invention indicates whether a block structure of a currently considered motion vector field has to be split up for the following motion vector field. On the basis of these block modes it is therefore possible to locate the regions in which a current residual error block differs from a previous residual error block associated with a lower layer.
  • The blocks associated with these regions are then compared with the blocks located at the same positions within the preceding residual error block, and the difference is encoded.
  • A bit stream is generated for this purpose before the transmission, so that all the information available on the encoder side can be used optimally.
  • First, a comparison is carried out in the sense of an evaluation in which it is determined whether a motion vector field (block structure) must be refined or not.
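  • A minimal sketch of such an evaluation; the Lagrangian rate-distortion cost used here is an assumption, the text only requires a value proportional to the image quality:

```python
# Sketch of the refinement decision: both candidates (coarse and refined motion
# vector field / block structure) are evaluated and the refinement is used only
# if it pays off. The Lagrangian rate-distortion cost is an assumption.

def rd_cost(distortion, rate_bits, lam=0.1):
    """Simple Lagrangian cost: distortion plus weighted rate."""
    return distortion + lam * rate_bits

def must_refine(coarse, refined, lam=0.1):
    """coarse/refined: (distortion, rate_bits) of the two candidates."""
    return rd_cost(*refined, lam) < rd_cost(*coarse, lam)

print(must_refine(coarse=(500.0, 1200), refined=(300.0, 2000)))   # True  (refine)
print(must_refine(coarse=(500.0, 1200), refined=(480.0, 2600)))   # False (keep coarse)
```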
  • IT: block-based transformation
  • The goal of this bitstream generation according to the invention is to ensure a good image quality for various spatial/temporal resolution levels or bit rates, without a drift which can be caused by an offset between a motion vector field and a residual error block. The figure therefore schematically shows the steps with which this is achieved according to the invention.
  • The illustrated embodiment starts from an initialization state in which a specific number of motion vector fields with corresponding residual error blocks have been generated on the encoder side: for example, a first motion vector field MVF1 and a first refined motion vector field MVF1' for a QCIF resolution, and the first refined motion vector field MVF1' and (not shown) a second motion vector field for a CIF resolution.
  • In addition, bit planes BTPL1 ... BTPLN+M which represent the residual error block are available. Furthermore, their number is limited by the decision, explained above, about a refinement of the blocks.
  • The number of bit planes is limited to a number N. If, according to the evaluation according to the invention, the decision is made that a refinement is required, the first motion vector field MVF1 is refined in such a way that the refined motion vector field MVF1' is generated. In such a case it is therefore necessary that the first motion vector field MVF1 is updated in order to prevent an offset between the motion vector fields and the respective textures.
  • The bit planes BTPL1 ... BTPLN have usually already been transmitted. Up to a certain limit BTPLn, the bit planes which represent the non-refined residual error blocks (BTPL1 ... BTPLn) need not be modified. From this limit BTPLn onwards, however, the following bit planes BTPLn ... BTPLN are updated according to the exemplary embodiment.
  • The update thus starts at the bit plane BTPLn, which represents the last bit plane of the unrefined residual error blocks, and extends to the last bit plane already transmitted, BTPLN.
  • The update is carried out in such a way that the regions which belong to the refined parts (REFINEMENT) are updated so that they match the subsequent motion vector field, i.e., in the illustrated embodiment, the first refined motion vector field MVF1'.
  • The bit planes BTPLN+1 to BTPLN+M, which exceed the number of already transmitted bit planes BTPLN, can additionally be transmitted.
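  • A minimal sketch of this region-wise bit-plane update, assuming non-negative integer residuals, most-significant plane first, and a boolean mask marking the refined regions; the notation follows BTPL1 ... BTPLN+M from the text:

```python
# Sketch of the bit-plane update (assumptions: residuals as non-negative integer
# arrays, most-significant plane first, and a boolean mask of refined regions).
import numpy as np

def bit_planes(residual, num_planes):
    """Most-significant-first bit planes of a non-negative integer residual."""
    return [(residual >> (num_planes - 1 - k)) & 1 for k in range(num_planes)]

def update_planes(planes, refined_planes, refined_mask, n):
    """Planes up to index n stay untouched; from plane n on, the refined regions
    are overwritten so that the texture matches the refined motion vector field."""
    out = [p.copy() for p in planes]
    for k in range(n, len(out)):
        out[k][refined_mask] = refined_planes[k][refined_mask]
    return out

N, M, n = 4, 1, 2
old = np.array([[5, 9], [3, 12]])          # residual for the coarse vector field
new = np.array([[6, 9], [2, 15]])          # residual after refined motion compensation
mask = np.array([[True, False], [False, True]])
updated = update_planes(bit_planes(old, N + M), bit_planes(new, N + M), mask, n)
extra = bit_planes(new, N + M)[N:N + M]    # BTPLN+1..BTPLN+M, sent only at high rates
print(np.stack(updated), np.stack(extra))
```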
  • This concept is repeated for each spatial resolution and / or quality level and thereby enables finer granularity of a signal-to-noise scalability (SNR scalability).
  • SNR scalability: signal-to-noise scalability
  • The SNR and spatial scalability can also be combined here: if, for example, it is necessary to decode a (video) bit stream at CIF resolution and this is done at a lower bit rate, the first refined motion vector field MVF1' is upscaled from the QCIF resolution to the CIF resolution.
  • In addition, an inverse wavelet transformation or an interpolation is performed in order to achieve a higher spatial resolution of the textures TEXTUR1 and TEXTUR1'.
  • The SNR scalability in CIF resolution is achieved by coding the bit planes of the difference between the original refined CIF residual error block and the QCIF bit planes upscaled by interpolation or inverse wavelet transformation. If the decision on refinement is positive at CIF resolution, the same strategy is followed as explained above for QCIF. The same applies to scaling from CIF to 4CIF.
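  • A minimal sketch of this combination of spatial and SNR scalability, assuming nearest-neighbour interpolation in place of an inverse wavelet transform, toy array sizes and omitted sign handling:

```python
# Sketch of combining spatial and SNR scalability (assumptions: nearest-neighbour
# interpolation as the upsampling step instead of an inverse wavelet transform,
# sign/magnitude handling left out, and toy array sizes).
import numpy as np

def upsample(residual_qcif):
    """Upscale the QCIF residual to CIF resolution (factor 2 in both directions)."""
    return np.kron(residual_qcif, np.ones((2, 2), dtype=residual_qcif.dtype))

def cif_refinement(residual_cif, residual_qcif, num_planes=4):
    """Difference between the original CIF residual and the upscaled QCIF residual,
    represented as bit planes for SNR-scalable transmission."""
    diff = np.abs(residual_cif - upsample(residual_qcif))
    return [(diff >> (num_planes - 1 - k)) & 1 for k in range(num_planes)]

qcif = np.array([[2, 1], [0, 3]])
cif = np.array([[2, 3, 1, 1],
                [2, 2, 1, 2],
                [0, 0, 3, 3],
                [1, 0, 4, 3]])
planes = cif_refinement(cif, qcif)
print(np.stack(planes))
```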
  • In the example described above, the SNR scalability is generated by a bit-plane representation of the texture information, but the invention is not limited thereto, since alternative scalable texture representations can also be used.
  • The maximum number of bit planes that occur before a refinement may differ for each spatial resolution.
  • More than one update can take place within a spatial resolution level if more than two layers of the motion information are used for this spatial resolution level.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
EP05764634A 2004-08-27 2005-07-29 Verfahren und vorrichtung zum codieren und decodieren Ceased EP1782634A1 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102004041664A DE102004041664A1 (de) 2004-08-27 2004-08-27 Verfahren zum Codieren und Decodieren, sowie Codier- und Decodiervorrichtung zur Videocodierung
PCT/EP2005/053709 WO2006024584A1 (de) 2004-08-27 2005-07-29 Verfahren und vorrichtung zum codieren und decodieren

Publications (1)

Publication Number Publication Date
EP1782634A1 true EP1782634A1 (de) 2007-05-09

Family

ID=35063418

Family Applications (1)

Application Number Title Priority Date Filing Date
EP05764634A Ceased EP1782634A1 (de) 2004-08-27 2005-07-29 Verfahren und vorrichtung zum codieren und decodieren

Country Status (7)

Country Link
US (1) US8290058B2 (en)
EP (1) EP1782634A1 (de)
JP (2) JP2008511226A (ja)
KR (1) KR101240441B1 (ko)
CN (1) CN101010961B (zh)
DE (1) DE102004041664A1 (de)
WO (1) WO2006024584A1 (de)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102004041664A1 (de) 2004-08-27 2006-03-09 Siemens Ag Verfahren zum Codieren und Decodieren, sowie Codier- und Decodiervorrichtung zur Videocodierung
DE102005016827A1 (de) * 2005-04-12 2006-10-19 Siemens Ag Adaptive Interpolation bei der Bild- oder Videokodierung
KR100970697B1 (ko) * 2008-05-27 2010-07-16 그리다 주식회사 이미지 데이터의 부분 갱신 방법
US8306122B2 (en) * 2008-06-23 2012-11-06 Broadcom Corporation Method and apparatus for processing image data
US20110002554A1 (en) * 2009-06-11 2011-01-06 Motorola, Inc. Digital image compression by residual decimation
US20110002391A1 (en) * 2009-06-11 2011-01-06 Motorola, Inc. Digital image compression by resolution-adaptive macroblock coding
KR101675118B1 (ko) 2010-01-14 2016-11-10 삼성전자 주식회사 스킵 및 분할 순서를 고려한 비디오 부호화 방법과 그 장치, 및 비디오 복호화 방법과 그 장치
US8532408B2 (en) 2010-02-17 2013-09-10 University-Industry Cooperation Group Of Kyung Hee University Coding structure
JP5616984B2 (ja) * 2011-01-26 2014-10-29 株式会社日立製作所 画像復号化装置
KR102379609B1 (ko) 2012-10-01 2022-03-28 지이 비디오 컴프레션, 엘엘씨 향상 레이어 모션 파라미터들에 대한 베이스-레이어 힌트들을 이용한 스케일러블 비디오 코딩
TWI479473B (zh) * 2013-05-28 2015-04-01 Innolux Corp 液晶顯示器及其顯示方法
US20160249050A1 (en) * 2013-10-22 2016-08-25 Nec Corporation Block structure decision circuit and block structure decision method
US10275646B2 (en) * 2017-08-03 2019-04-30 Gyrfalcon Technology Inc. Motion recognition via a two-dimensional symbol having multiple ideograms contained therein
US10593097B2 (en) * 2018-05-08 2020-03-17 Qualcomm Technologies, Inc. Distributed graphics processing

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000511366A (ja) 1995-10-25 2000-08-29 サーノフ コーポレイション 4分割ツリーベースの可変ブロックサイズ動き推定装置および方法
AUPO951297A0 (en) * 1997-09-29 1997-10-23 Canon Information Systems Research Australia Pty Ltd Method and apparatus for digital data compression
JP3660548B2 (ja) * 2000-02-01 2005-06-15 日本電信電話株式会社 画像符号化方法及び画像復号方法、画像符号化装置及び画像復号装置、並びにそれらのプログラムを記憶した媒体
DE10022520A1 (de) 2000-05-10 2001-11-15 Bosch Gmbh Robert Verfahren zur örtlichen skalierbaren Bewegtbildcodierung
CN1244232C (zh) * 2000-06-30 2006-03-01 皇家菲利浦电子有限公司 用于视频序列压缩的编码方法
KR100353851B1 (ko) * 2000-07-07 2002-09-28 한국전자통신연구원 파문 스캔 장치 및 그 방법과 그를 이용한 영상코딩/디코딩 장치 및 그 방법
FI120125B (fi) * 2000-08-21 2009-06-30 Nokia Corp Kuvankoodaus
KR100783396B1 (ko) * 2001-04-19 2007-12-10 엘지전자 주식회사 부호기의 서브밴드 분할을 이용한 시공간 스케일러빌러티방법
DE10219640B4 (de) 2001-09-14 2012-05-24 Siemens Ag Verfahren zum Codieren und Decodieren von Videosequenzen und Computerprogrammprodukt
US6909753B2 (en) * 2001-12-05 2005-06-21 Koninklijke Philips Electronics, N.V. Combined MPEG-4 FGS and modulation algorithm for wireless video transmission
KR20040046892A (ko) * 2002-11-28 2004-06-05 엘지전자 주식회사 움직임 벡터 예측 부호화 및 복호화 방법
DE102004038110B3 (de) 2004-08-05 2005-12-29 Siemens Ag Verfahren zum Codieren und Decodieren, sowie Codier- und Decodiervorrichtung zur Videocodierung
DE102004041664A1 (de) 2004-08-27 2006-03-09 Siemens Ag Verfahren zum Codieren und Decodieren, sowie Codier- und Decodiervorrichtung zur Videocodierung
JP2007096479A (ja) * 2005-09-27 2007-04-12 Nippon Telegr & Teleph Corp <Ntt> 階層間予測符号化方法および装置,階層間予測復号方法および装置,並びにそれらのプログラムおよび記録媒体

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
BENOIT TIMMERMAN ET AL: "Response to SVC CE5 - Optimization of tradeoff between motion information and texture", 70. MPEG MEETING; 18-10-2004 - 22-10-2004; PALMA DE MALLORCA; (MOTION PICTURE EXPERT GROUP OR ISO/IEC JTC1/SC29/WG11),, no. M11365, 13 October 2004 (2004-10-13), XP030040139, ISSN: 0000-0252 *
See also references of WO2006024584A1 *
XU JIZHENG ET AL: "3D Sub-band Video Coding using Barbell lifting, ISO/IEC JTC1/SC29/WG11 MPEG2004/M10569/S05", ISO/IEC JTC1/CS29/WG11 MPEG2004/M10569/S05, XX, XX, 1 March 2004 (2004-03-01), pages 1 - 14, XP002356360 *

Also Published As

Publication number Publication date
US8290058B2 (en) 2012-10-16
WO2006024584A1 (de) 2006-03-09
CN101010961B (zh) 2010-12-01
JP2011172297A (ja) 2011-09-01
US20080095241A1 (en) 2008-04-24
CN101010961A (zh) 2007-08-01
JP2008511226A (ja) 2008-04-10
JP5300921B2 (ja) 2013-09-25
DE102004041664A1 (de) 2006-03-09
KR101240441B1 (ko) 2013-03-08
KR20070046880A (ko) 2007-05-03

Similar Documents

Publication Publication Date Title
WO2006024584A1 (de) Verfahren und vorrichtung zum codieren und decodieren
DE60031230T2 (de) Skalierbares videokodierungssystem und verfahren
EP1774790B1 (de) Verfahren und vorrichtung zum codieren und decodieren
DE60109423T2 (de) Videokodierung mit prädiktiver bitebenenkodierung und progressiver fein-granularitätsskalierung (pfgs)
EP1815690A1 (de) Verfahren zur transcodierung sowie transcodiervorrichtung
DE10022520A1 (de) Verfahren zur örtlichen skalierbaren Bewegtbildcodierung
DE60310128T2 (de) Verfahren zur wavelet-bildcodierung und entsprechendes decodierungsverfahren
EP1285537B1 (de) Verfahren und eine anordnung zur codierung bzw. decodierung einer folge von bildern
WO2006067053A1 (de) Bildencodierverfahren, sowie dazugehöriges bilddecodierverfahren, encodiervorrichtung und decodiervorrichtung
EP1815689A1 (de) Codierverfahren und decodierverfahren, sowie codiervorrichtung und decodiervorrichtung
EP1869890B1 (de) Verfahren und vorrichtung zur reduktion eines quantisierungsfehlers
EP1762100A1 (de) Skalierbares verfahren zur bildencodierung einer folge von originalbildern, sowie dazugehöriges bilddecodierverfahren, encodiervorrichtung und decodiervorrichtung
DE10121259C2 (de) Optimale SNR-skalierbare Videocodierung
WO2000046998A1 (de) Verfahren und anordnung zur transformation eines bildbereichs
WO2008006806A2 (de) Skalierbare videokodierung
EP1913780B1 (de) Verfahren zum korrigieren eines quantisierten datenwerts sowie eine dazugehörige vorrichtung
DE102004011421B4 (de) Vorrichtung und Verfahren zum Erzeugen eines skalierten Datenstroms
WO2009074393A1 (de) Verfahren und vorrichtung zum bestimmen einer bildqualität
DE10225434A1 (de) Verfahren und Vorrichtung zur Videocodierung
EP1085761A1 (de) Bewegungsschätzung in einem objektorientierten Videokodierer
DE10311054A1 (de) Verfahren zur Teilbandcodierung einer Folge digitalisierter Bilder
DE10243568A1 (de) Verfahren zur skalierbaren Videocodierung eines Videobildsignals sowie ein zugehöriger Codec
DE102006050066A1 (de) Verfahren zur skalierbaren Videocodierung einer Folge digitalisierter Bilder
WO2000049525A1 (de) Verfahren und anordnung zum abspeichern von mindestens einem bild mit zugehöriger relationalen information
DD284324A5 (de) Uebertragungsverfahren sowie empfaenger fuer ein uebertragungsverfahren

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20070206

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

DAX Request for extension of the european patent (deleted)
RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: SIEMENS AKTIENGESELLSCHAFT

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: SIEMENS AKTIENGESELLSCHAFT

17Q First examination report despatched

Effective date: 20150224

REG Reference to a national code

Ref country code: DE

Ref legal event code: R003

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED

18R Application refused

Effective date: 20151112