WO2006137020A2 - Video signal compression - Google Patents

Video signal compression (Compression de signaux video)

Info

Publication number
WO2006137020A2
WO2006137020A2 PCT/IB2006/051991
Authority
WO
WIPO (PCT)
Prior art keywords
image
regions
hole
predicted
transformation parameters
Prior art date
Application number
PCT/IB2006/051991
Other languages
English (en)
Other versions
WO2006137020A3 (fr)
Inventor
Reinier B. M. Klein Gunnewiek
Christiaan Varekamp
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V. filed Critical Koninklijke Philips Electronics N.V.
Publication of WO2006137020A2 publication Critical patent/WO2006137020A2/fr
Publication of WO2006137020A3 publication Critical patent/WO2006137020A3/fr


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/553Motion estimation dealing with occlusions

Definitions

  • the invention relates to video signal compression and decompression and to interpolation between images in a video sequence.
  • An important application of this type of interpolation is video compression and decompression, because it makes it possible to omit much, if not all, of the information about an image if that image can be interpolated from other images. Compression, in turn, may be used to reduce storage space requirements and/or transmission bandwidth requirements. Another application of interpolation is frame rate up-conversion of video signals.
  • a pixel location in the interpolated image corresponds to pixel locations in the source images that are related by the interpolated motion vector for the pixel location.
  • the corresponding pixel locations in the source images belong either both to the background or both to the foreground.
  • the interpolated image can be formed from any of the source images or their average.
  • a pixel location in the interpolated image may correspond to the foreground in one source image and to the background in the other source image. This indicates that the pixel location is "uncovered" due to motion.
  • information from the source image where the pixel location corresponds to background is inserted.
  • US patent No. 6,008,865 uses manual identification of foreground and background regions. Although useful for high-value content such as commercial motion pictures, this makes the method unattractive for low-cost or high-speed applications. Moreover, US patent No. 6,008,865 is not concerned with compression.
  • a method of compression and decompression as set forth in claim 1 is provided.
  • a predicted image is used that is obtained by interpolation (or extrapolation) from a first and second image and correction information is included in a compressed signal to represent differences between the predicted image and an actual image.
  • To compute the predicted image (at least virtually), at least one of the first and second image is segmented and, for each of a plurality of the regions, one or more transformation parameters of at least one transformation of the region from said at least one of the first and second image to the other one of the first and second image are computed.
  • the transformations are translations, having motion vectors as parameters. Holes are detected that are left between the regions when transforming the regions from the first image to the second image according to the transformation parameters.
  • one of the regions adjoining the hole is selected on the basis of a comparison of a computed image property of the second image in the hole with corresponding computed image properties of the regions adjoining the hole.
  • the selected regions are used to define the predicted image which corresponds to a transformation of the regions with transformation parameters derived for a further image, substituting in each hole in the predicted image information from the new image for the hole, transformed according to a transformation parameter value derived from the transformation parameters of the selected region for the hole.
  • the predicted image is computed and the correction information is used to correct the predicted image.
  • the segmentation is performed anew during decompression, so that no additional information about the segmentation needs to be included in the compressed signal.
  • the transformation parameters may also be used to compress the first and/or second image, in which case they need not be computed anew on the decompression side.
  • compressed information, such as DCT coefficients, is used to compare the image property inside and outside of the hole.
  • Figure 1 shows a video compression apparatus
  • Figure 2 shows a flow chart of a video compression process
  • Figure 3 and Figure 3a illustrate effects of motion
  • Figure 4 illustrates interpolation
  • Figure 5 shows a video decompression apparatus
  • Figure 1 shows a video compression apparatus.
  • the apparatus has an input 10 for receiving signals that represent successive images, an input image memory 12, a first and second working image memory 14a,b, a processing circuit 16 and an output 18 for compression data.
  • Input 10 is coupled to input image memory 12.
  • Processing circuit 16 is coupled to input image memory 12 and working image memories 14a,b and to output 18.
  • Figure 2 shows a flow-chart of part of a video-compression process.
  • the process is used to generate compressed video information that represents at least one image on the basis of interpolation between a current image and a new image.
  • Processing circuit 16 outputs information that represents the current image.
  • Any convenient form of compression may be used. These images may be represented as I-frames according to known MPEG coding, for example, and/or as P-frames, for example in terms of motion vectors and residues.
  • a decompressed current image is stored, with a content that is obtained by decompressing the information that has been output.
  • processing circuit 16 segments a current image stored in first working image memory 14a. Segmentation is known per se; it involves the identification of a plurality of multi-pixel regions in the current image, each with selected coherent pixel locations (i.e. pixel locations that adjoin all other pixel locations in the region either directly or via other pixel locations in the region). Segmentation is preferably performed on the basis of detection of matching image content in the same region of pixel locations in a series of images, or in regions that are displaced in different images by a common motion vector for all pixel locations in each region.
  • each region is maximized by adding as many pixel locations to the region as possible while maintaining the match between the series of images.
  • segmentation may be based on the detection of matching image characteristics in different parts of a region, maximizing the regions by adding as many pixel locations to the region as possible while maintaining the same characteristics throughout the region.
  • a block-based coding method like H.264 is used as a special form of segmentation of an image. Segmentation methods are known per se and will therefore not be discussed in detail. Any segmentation method may be used. Preferably these methods should be identical at the encoder and decoder side.
  • processing circuit 16 computes motion vectors D for the regions that have been selected during segmentation.
  • Each motion vector D represents a direction and distance of motion for a respective one of the selected regions.
  • Processing circuit 16 computes the motion vectors by comparing the content of the current image in first working image memory 14a with the content of a new image in second working image memory 14b. For each of the regions that have been selected during segmentation processing circuit 16 searches for a motion vector that results in a match of the image content of the selected region with content of the new image in a translated region that is obtained by applying the motion vector to the selected region.
  • the motion vector D for a region is selected to minimize the absolute values of the differences.
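The motion search of this second step can be sketched as follows. This is a minimal illustration only, assuming purely translational motion and an exhaustive search over a small window; the function name, the search range and the toy frames are illustrative and not taken from the patent:

```python
import numpy as np

def estimate_motion_vector(current, new, region_mask, search_range=4):
    """Exhaustive search for the translation (dy, dx) that minimizes the
    sum of absolute differences between the masked region of the current
    image and the correspondingly translated pixels of the new image."""
    ys, xs = np.nonzero(region_mask)
    h, w = new.shape
    best_sad, best_d = None, (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            ty, tx = ys + dy, xs + dx
            # Skip displacements that push part of the region off the frame.
            if ty.min() < 0 or tx.min() < 0 or ty.max() >= h or tx.max() >= w:
                continue
            sad = int(np.abs(current[ys, xs].astype(int)
                             - new[ty, tx].astype(int)).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_d = sad, (dy, dx)
    return best_d

# Toy frames: a bright square that moves down 1 pixel and right 2 pixels.
cur = np.zeros((12, 12), dtype=np.uint8)
cur[3:6, 3:6] = 200
new = np.zeros((12, 12), dtype=np.uint8)
new[4:7, 5:8] = 200
print(estimate_motion_vector(cur, new, cur > 0))  # -> (1, 2)
```

In practice a hierarchical or gradient-based search would replace the exhaustive scan, but the criterion (minimal sum of absolute differences per region) is the same.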
  • processing circuit 16 detects any "holes" and "overlaps" that arise as an effect of divergence of motion vectors D of adjoining regions.
  • Figure 3 schematically illustrates holes.
  • One-dimensional lines 30, 32 through a current image (line 30) and a new image (line 32) are shown.
  • Line 30 through the current image intersects the boundary 34 between two of the regions 35a,b that have been selected during segmentation.
  • Different motion vectors Da, Db are shown for the respective regions 35a,b, which translate the regions 35a,b to translated regions 37a,b on the line 32 in the new image. Because of the divergence between the motion vectors Da, Db, a hole 38 arises between the translated regions 37a,b, with pixel locations to which pixels from neither region 35a nor region 35b are translated.
  • Figure 3a illustrates overlaps 39.
  • the figure builds on figure 3 and similar references are used.
  • the two motion vectors Da, Db converge. Because of the convergence between the motion vectors Da, Db, an overlap 39 arises between the translated regions 37a,b, with pixel locations to which pixels from both regions 35a,b are translated. Any method may be used to detect the holes.
  • In one method, processing circuit 16 visits all pixel locations in each of the regions, computes a new pixel location for each pixel location by adding the motion vector for the region to which the pixel location belongs, and writes a label of the region into an array for locations in the new image, at the computed new pixel location. Subsequently processing circuit 16 visits all locations in the array and detects regions where no labels have been written. These regions are the holes. As an alternative, processing circuit 16 may detect holes by detecting pairs of adjacent regions in the current image and testing whether the difference between their motion vectors indicates motion of the regions away from each other. In that case the hole can be computed from the edge between the regions and the difference between the motion vectors. Similar methods may be used to detect overlaps.
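The label-writing method for finding holes (and overlaps) described above can be sketched like this; a minimal illustration with illustrative names that counts how many regions land on each new-image pixel:

```python
import numpy as np

def holes_and_overlaps(labels, motion_vectors, shape):
    """Translate every region by its motion vector and count how many
    regions land on each pixel of the new image: locations with count 0
    are holes, locations with count > 1 are overlaps."""
    count = np.zeros(shape, dtype=int)
    for region_id, (dy, dx) in motion_vectors.items():
        ys, xs = np.nonzero(labels == region_id)
        ty, tx = ys + dy, xs + dx
        inside = (ty >= 0) & (ty < shape[0]) & (tx >= 0) & (tx < shape[1])
        count[ty[inside], tx[inside]] += 1
    return count == 0, count > 1  # (holes mask, overlaps mask)

# Two adjoining regions on a single line, moving apart: a hole opens up.
labels = np.array([[1, 1, 1, 1, 2, 2, 2, 2]])
holes, overlaps = holes_and_overlaps(labels, {1: (0, -1), 2: (0, 1)}, labels.shape)
print(np.nonzero(holes[0])[0])  # -> [3 4]
```

Running the same function with the motion vectors swapped (regions moving toward each other) marks the middle pixels as overlaps instead.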
  • When, in third step 23, processing circuit 16 detects the existence of a hole 38 or overlap 39, it performs a fourth step 24 for that hole 38 or overlap 39.
  • processing circuit 16 computes measures of difference between, on one hand, an image content of the new image at pixel locations in the hole 38 and, on the other hand, image content of the regions 37a,b adjoining the hole 38 (or image content of the corresponding regions 35a,b in the current image) and selects the region with the smallest measure of difference. It should be noted that preferably a translation-invariant measure of difference is used, which is substantially insensitive to mutual shift between the regions, and not a matching criterion, which is typically highly sensitive to such shift.
  • processing circuit 16 may use a matching criterion to compare the image content of the parts of the regions 35a,b in the current image that move to the overlap 39 and selects the region with the smallest difference.
  • processing circuit 16 computes measures of difference between, on one hand, an image content of the new image at pixel locations in the overlap 39 and, on the other hand, image content of the regions 37a,b adjoining the overlap 39 (or, alternatively, image content of the parts of the regions 35a,b in the current image that move to the overlap 39).
  • Processing circuit 16 repeats fourth step 24 for each of the holes 38 and overlaps 39. That is, fourth step 24 is performed at least for each pair of adjoining regions in the current image.
  • the measure of difference that processing circuit 16 uses in fourth step 24 is a texture comparison measure.
  • use is made of differences between the absolute values of higher frequency DCT coefficients of the new image for areas that cover at least part of the hole 38 or overlap 39 and at least part of the regions 37a,b or 35a,b.
  • "Higher frequency" here means that one or more coefficients for a predetermined set of frequencies surrounding zero frequency are not used to determine the measure of difference.
  • Use of DCT coefficients for this purpose is especially advantageous when these DCT coefficients are also used to represent the image.
  • DCT coefficients for blocks of pixel locations that are completely within the respective regions and the hole/overlap are used, but alternatively the coefficients may be compared of blocks that extend to a greater or lesser degree from a region into the hole/overlap.
  • Other texture comparison measures may be used as well.
  • a comparison of the absolute values of higher frequency Fourier transform coefficients, or Hadamard transform coefficients may be used, or a comparison of statistical data, such as for example about the variance, the density of peaks and/or the direction and/or strength of detected edges in the different regions and the hole/overlap.
  • Instead of, or in addition to, texture comparison measures, other simpler measures may be used, for example a measure of the difference between the average hue and/or color saturation in the regions and the hole, or even a measure of the difference in luminance.
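A texture comparison based on higher-frequency DCT coefficients might look roughly like the following sketch. The orthonormal DCT, the 8x8 block size and the size of the suppressed low-frequency corner are illustrative choices, not values prescribed by the patent:

```python
import numpy as np

def dct2(block):
    """Orthonormal 2-D DCT-II of a square block."""
    n = block.shape[0]
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)
    return m @ block @ m.T

def texture_difference(block_a, block_b, low_cutoff=2):
    """Compare absolute values of higher-frequency DCT coefficients,
    zeroing a low_cutoff x low_cutoff corner around DC so that mean
    brightness and coarse gradients do not dominate the measure."""
    a, b = np.abs(dct2(block_a)), np.abs(dct2(block_b))
    a[:low_cutoff, :low_cutoff] = 0.0
    b[:low_cutoff, :low_cutoff] = 0.0
    return float(np.abs(a - b).sum())

# A hole containing a checkerboard texture is closer (in this measure)
# to a checkerboard region than to a flat region, even when shifted.
checker = np.indices((8, 8)).sum(axis=0) % 2 * 100.0
flat = np.full((8, 8), 50.0)
hole = np.roll(checker, 1, axis=1)  # same texture, shifted by one pixel
assert texture_difference(hole, checker) < texture_difference(hole, flat)
```

Comparing absolute coefficient values, rather than the coefficients themselves, is what makes the measure substantially insensitive to mutual shift, as the description requires.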
  • processing circuit 16 computes and outputs residue information that represents an intermediate image between the current image and the new image.
  • processing circuit 16 determines residues that represent pixel value differences between the intermediate image and a prediction of the intermediate image.
  • Figure 4 illustrates prediction. The figure builds on figure 3 and similar references are used. In addition to figure 3, the figure also shows a line 40 that represents the predicted image and interpolated motion vectors.
  • the predicted image 40 contains a further hole 46, or overlap 47. At pixel locations outside holes 46 and overlaps 47 any convenient form of interpolation may be used, for example on the basis of the content of the current image at corresponding locations:
  • Ipredicted(r) = Icurrent(r - fac*D)
  • D is the motion vector for the region to which the location r belongs in both the current image and the new image (of course filtering and interpixel interpolations may be used, but these have been omitted for the sake of clarity).
  • processing circuit 16 selects one of the motion vectors on the basis of the determination made in fourth step 24: the motion vector for the region 37a,b with the smallest measure of difference is used.
  • Processing circuit 16 selects one of the motion vectors on the basis of the determination made in fourth step 24: the motion vector for the closest matching region 37a,b is used (or alternatively with the smallest measure of difference).
  • a (weighted) average of the image content in the regions that move to the overlap 47, or the content of a systematically selected one of those regions, may be used in the overlap 47 in the predicted image.
  • holes and overlaps in the intermediate image may be detected in any convenient way, for example by first visiting each pixel location in each region in the current image, computing the new pixel location for that pixel in the intermediate image using the interpolated motion vector for the region, and writing the interpolated pixel value if no other pixel value has yet been written. If another pixel value has been written, processing circuit 16 resorts to the comparison result for the pair of regions that is involved in the overlap (the region of the current pixel in the current image and the region of the earlier pixel that moved to the same location) to select which of the pixel values should prevail.
  • processing circuit 16 visits all pixel locations in holes in the new image, computing the new pixel location for each such pixel in the intermediate image using the interpolated motion vector of the adjacent region that is selected for the hole on the basis of the measure of difference. Processing circuit 16 then writes a pixel value obtained from the new image to the computed location.
  • holes in the new image give rise to overlaps in the intermediate image and overlaps in the new image give rise to holes in the intermediate image, and the new image should be used accordingly.
  • a (weighted) average of predicted values from the new image and the current image may be used where neither gives rise to a hole in the intermediate image.
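The construction of the predicted intermediate image, with holes filled from the new image, can be sketched as below. For brevity the sketch takes the region selected for the holes as a single parameter, whereas the patent selects a region per hole on the basis of the measure of difference; overlap resolution is also simplified to last-writer-wins:

```python
import numpy as np

def predict_intermediate(current, new, labels, mvs, hole_region, fac=0.5):
    """Forward-project each region of the current image over a fraction
    `fac` of its motion vector; remaining holes are filled from the new
    image, sampled at the hole location displaced by the remaining
    fraction (1 - fac) of the selected region's motion vector."""
    pred = np.zeros_like(current)
    written = np.zeros(current.shape, dtype=bool)
    for region_id, (dy, dx) in mvs.items():
        ys, xs = np.nonzero(labels == region_id)
        ty = np.round(ys + fac * dy).astype(int)
        tx = np.round(xs + fac * dx).astype(int)
        ok = (ty >= 0) & (ty < pred.shape[0]) & (tx >= 0) & (tx < pred.shape[1])
        pred[ty[ok], tx[ok]] = current[ys[ok], xs[ok]]
        written[ty[ok], tx[ok]] = True
    # A pixel p in the current image lands at p + fac*D in the intermediate
    # image and at p + D in the new image, so a hole location r in the
    # intermediate image corresponds to r + (1 - fac)*D in the new image.
    dy, dx = mvs[hole_region]
    for y, x in zip(*np.nonzero(~written)):
        sy = int(round(y + (1 - fac) * dy))
        sx = int(round(x + (1 - fac) * dx))
        if 0 <= sy < new.shape[0] and 0 <= sx < new.shape[1]:
            pred[y, x] = new[sy, sx]
    return pred

# Foreground (10) and background (20) move apart; the uncovered area (30)
# is visible in the new image and is copied into the hole.
cur = np.array([[10, 10, 10, 10, 20, 20, 20, 20]])
new = np.array([[10, 10, 30, 30, 30, 30, 20, 20]])
labels = np.array([[1, 1, 1, 1, 2, 2, 2, 2]])
pred = predict_intermediate(cur, new, labels, {1: (0, -2), 2: (0, 2)}, hole_region=2)
print(pred)  # -> [[10 10 10 30 30 20 20 20]]
```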
  • processing circuit 16 determines the differences between the predicted image that it has defined in this way and the intermediate image, and outputs residue information that represents these differences (typically the differences for pixel locations both inside and outside the holes 46).
  • the residue information is output after data that represents the current image and the new image has been output, but any sequence may be used. Any form of representation may be used for the residue information, such as codes to represent differences for individual pixels, or the DCT (Discrete Cosine Transform) of the differences etc.
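The residue mechanism itself is plain subtraction at the encoder and addition at the decoder, as this sketch with toy arrays illustrates (lossless residue coding assumed):

```python
import numpy as np

# Encoder: the residue is the pixelwise difference between the actual
# intermediate image and its prediction.
actual = np.array([[12, 10, 11], [9, 10, 12]], dtype=np.int16)
predicted = np.array([[10, 10, 10], [10, 10, 10]], dtype=np.int16)
residue = actual - predicted

# Decoder: forming the same prediction and adding the transmitted residue
# recovers the intermediate image exactly.
reconstructed = predicted + residue
print(np.array_equal(reconstructed, actual))  # -> True
```

In a real codec the residue would additionally be transformed (e.g. by DCT) and quantized, so the reconstruction is only approximate at lossy settings.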
  • processing circuit 16 computes and stores the interpolated image for this purpose, but it should be appreciated that this is not necessary. Alternatively, the interpolated image may be computed dynamically (i.e. without prior storage) or even only implicitly when the differences are computed.
  • Fifth step 25 may be repeated for any number of intermediate images for different times (and therefore different motion vectors fac*D). Subsequently the process is repeated for a new combination of a current image, a new image and one or more intermediate images. Typically, the old "new image" takes the place of the current image in this process.
  • processing circuit 16 also encodes the new image using the motion vectors. However, in this case no information is available to fill the holes 38 in the new image. Therefore, for the new image, processing circuit 16 preferably outputs residue information that represents the differences between some predetermined default image data for the holes 38 and the new image in the holes. In an embodiment an additional step is added between second step 22 and third step 23 to output data that encodes the new image.
  • processing circuit 16 outputs the motion vectors. Also processing circuit 16 determines the differences between the new image and a provisionally decoded image obtained by moving the regions from the current image according to the motion vectors. Such a movement may leave holes 38 in the provisionally decoded image, which processing circuit 16 fills in some predetermined way, for example by using default pixel values or pixel values obtained by averaging over an image area around the hole. Such a movement may also give rise to overlaps, where more than one region is moved to the same pixel position. In these overlaps processing circuit 16 substitutes pixel values in some predetermined way in the provisionally decoded image, for example by taking averages of the pixel values that are moved to the same pixel position.
  • Processing circuit 16 encodes the differences between the new image and the provisionally decoded image and outputs the encoded differences.
  • the new image in working image memory 14b that is used in the following steps is a version of the new image that is obtained from the provisionally decoded image plus the encoded differences.
  • alternatively encoded information about the new image can be output in different ways.
  • the new image may be encoded as an MPEG-type I-frame, or as an MPEG type P-frame, by means of other motion vectors that describe the locations in the current image from which pixel values in the new image must be moved in combination with corresponding residue information.
  • the new image in working image memory 14b that is used in the following steps is a version of the new image that is obtained by decoding the encoded new image.
  • the encoded information from output 18 may be stored in a storage medium (not shown) by a disk drive or flash memory controller (not shown) that is coupled to output 18 and/or the encoded information may be transmitted via a transmission medium such as a cable network, a telecommunication network or a wireless broadcast channel. Subsequently the encoded information is decoded by a decoding apparatus.
  • FIG. 5 shows a video decompression apparatus.
  • the apparatus comprises a decompression unit 51 (for example a set-top box, or a part of an optical disk reader) and a display screen 59.
  • Decompression unit 51 is similar in structure to the part of the video compression apparatus that has been shown in figure 1.
  • Decompression unit 51 has an input 50 for receiving signals that represent compressed images, an input image memory 52, a first and second working image memory 54a,b, a processing circuit 56 and an output 58 for decompressed data coupled to display screen 59.
  • Input 50 is coupled to processing circuit 56.
  • Processing circuit 56 is coupled to input image memory 52 and working image memories 54a,b and to output 58.
  • Although separate working image memories 54a,b and an input image memory 52 are shown, it should be appreciated that in practice two or more of these memories may be combined in a larger memory.
  • decompression unit 51 performs the inverse of the compression operation that is performed by the compression apparatus.
  • Decompression unit 51 receives the data that encodes the image that was referred to as the current image (and that will be referred to as the current image again), decompresses this data, uses the decompressed data to control display of the decompressed current image on display screen 59 and stores the decompressed current image in first working memory 54a.
  • decompression unit 51 receives the data that encodes the image that was referred to as the new image (and that will be referred to as the new image again), decompresses this data and stores the decompressed new image in second working memory 54b.
  • decompression unit 51 receives the data that encodes the one or more images that were referred to as the intermediate images (and that will be referred to as the intermediate images again).
  • Figure 6 shows a flow chart that illustrates the operation of processing circuit 56.
  • processing circuit 56 repeats the process for a number of intermediate images. Before the intermediate images, the decompressed current image is used to control image display and, after the intermediate images, the decompressed new image is used to control image display. Subsequently the decompressed new image takes the place of the old decompressed current image and a subsequent current image is read and decompressed, after which the process repeats.
  • the decompression unit 51 preferably uses these motion vectors for interpolation, instead of computing the motion vectors once more.
  • decompression unit 51 also uses the motion vectors to reconstruct the provisionally decoded image, substituting values in the holes and treating overlaps in the same way as the compression apparatus does in constructing its provisionally decoded image. Subsequently decompression unit 51 adds the encoded differences to reconstruct the decompressed new image.
  • this decompressed new image is preferably the same as the new image that is used by the compression apparatus to select between regions, so that decompression unit 51 will take the same decisions about the region to fill the hole as the compression apparatus. No separate information is needed to control the selection.
  • the compression apparatus preferably selects the motion vectors and the regions that are used to fill the holes on the basis of the decompressed new image.
  • decompression unit 51 is able to make the same selections of motion vectors and regions to fill holes on the basis of the decompressed new image.
  • the computation of the measures of difference in fourth step 24 involves the use of compressed data such as DCT coefficients for different regions.
  • the compression apparatus preferably uses (quantized) DCT coefficients that are used to encode the new image to compute the measures of difference and decompression unit 51 uses the received DCT coefficients.
  • the DCT coefficients of the separate compression are preferably used in the computation of the measure of difference, for example after computation of these coefficients by the compression apparatus or after their reception by decompression unit 51.
  • DCT coefficients of the current image may be used in the computation of the measure of difference, to characterize the texture in the two regions for example.
  • the DCT coefficients of the current image are preferably used in the measure of difference, after computation of these coefficients by the compression apparatus or after their reception by decompression unit 51. This speeds up decompression.
  • the measures of difference may be computed anew from the decompressed new image and/or the decompressed current image.
  • a form of segmentation is used that assigns each pixel location of the current image to a corresponding one of the regions and computes motion vectors for all regions.
  • some of the pixel locations may be left unassigned to any region, or at least to any region for which a motion vector is determined. This may concern, for example, pixel locations that cannot be assigned to a region of more than a threshold size, or pixel locations in regions for which no sensible motion vector can be found.
  • the resulting holes in the predicted intermediate image or the new image may be filled by default pixel values (and corrected by means of residue information). Of course this may reduce the amount of compression, but as long as motion vectors are used at least for some regions compression will be realized.
  • the motion vectors, or more generally the parameters of the transformation of the regions, may also be determined from other images, e.g. from a series of preceding images.
  • the described technique of forming intermediate images has other applications.
  • the technique is applied to increasing the frame rate of a series of images. In this case, no residue information is used for the intermediate images.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention concerns a method of compressing and decompressing a sequence of images, in which a first and a second image (30, 32) are segmented into regions (35a,b, 37a,b) of pixel locations. For each of a plurality of the regions (35a,b, 37a,b), first transformation parameters (Da,b), such as a translation, are computed for the transformation of that region (35a,b, 37a,b) from the first image to the second image (30, 32). Holes (38) are detected that are left between the regions (35a,b, 37a,b) when the regions are transformed from the first image to the second image (30, 32) according to the first transformation parameters (Da,b). For each of the holes (38), a computed image property of the second image (32) in the hole is compared with corresponding computed image properties of the regions (35a,b, 37a,b) adjoining the hole (38). For each of the holes (38), one of the regions (35a,b, 37a,b) adjoining the hole (38) is selected, for which the image property is closest to the image property of the hole (38). A further image is encoded by means of correction information that represents a difference between the further image (40) and a computed image, the computed image being obtained by transforming the regions (35a,b, 37a,b) with second transformation parameters derived from the first transformation parameters (Da,b). In each hole (46) of the computed image (40), information from the second image (32) is substituted, transformed according to a transformation parameter value derived from the first transformation parameters (Da,b) of the region selected for the hole (38) in the second image that corresponds to the hole (46) in the computed image. On decoding, the computed image is reconstructed and the correction information is used to compute a decompressed further image.
PCT/IB2006/051991 2005-06-22 2006-06-20 Compression de signaux video WO2006137020A2 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP05105550 2005-06-22
EP05105550.7 2005-06-22

Publications (2)

Publication Number Publication Date
WO2006137020A2 true WO2006137020A2 (fr) 2006-12-28
WO2006137020A3 WO2006137020A3 (fr) 2007-05-03

Family

ID=37459540

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2006/051991 WO2006137020A2 (fr) 2005-06-22 2006-06-20 Compression de signaux video

Country Status (1)

Country Link
WO (1) WO2006137020A2 (fr)


Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005022922A1 (fr) * 2003-09-02 2005-03-10 Koninklijke Philips Electronics N.V. Interpolation temporelle d'un pixel sur la base de la detection d'occlusions


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
CELASUN I ET AL: "2-D mesh-based video object segmentation and tracking with occlusion resolution" SIGNAL PROCESSING. IMAGE COMMUNICATION, ELSEVIER SCIENCE PUBLISHERS, AMSTERDAM, NL, vol. 16, no. 10, August 2001 (2001-08), pages 949-962, XP004255556 ISSN: 0923-5965 *
GOKCETEKIN M H ET AL: "2D mesh-based detection and representation of an occluding object for object-based video" ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2000. ICASSP '00. PROCEEDINGS. 2000 IEEE INTERNATIONAL CONFERENCE ON 5-9 JUNE 2000, PISCATAWAY, NJ, USA,IEEE, vol. 6, 5 June 2000 (2000-06-05), pages 1915-1918, XP010504666 ISBN: 0-7803-6293-4 *
LUNTER G A: "Occlusion-insensitive motion estimation for segmentation" PROCEEDINGS OF THE SPIE, SPIE, BELLINGHAM, VA, US, vol. 4671, 21 January 2002 (2002-01-21), pages 573-584, XP002374121 ISSN: 0277-786X *
MUKAWA N ET AL: "UNCOVERED BACKGROUND PREDICTION IN INTERFRAME CODING" IEEE TRANSACTIONS ON COMMUNICATIONS, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. COM-33, no. 11, November 1985 (1985-11), pages 1227-1231, XP000946265 ISSN: 0090-6778 *
TOKLU C ET AL: "2-D MESH-BASED SYNTHETIC TRANSFIGURATION OF AN OBJECT WITH OCCLUSION" 1997 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING. MULTIDIMENSIONAL SIGNAL PROCESSING, NEURAL NETWORKS. MUNICH, APR. 21 - 24, 1997, IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING (ICASSP), LOS A, vol. VOL. 4, 21 April 1997 (1997-04-21), pages 2649-2652, XP000787994 ISBN: 0-8186-7920-4 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010043809A1 (fr) * 2008-10-15 2010-04-22 France Telecom Prediction d'une image par compensation en mouvement en avant
US9055293B2 (en) 2008-10-15 2015-06-09 France Telecom Prediction of an image by compensation during forward movement
CN102301714B (zh) * 2009-01-28 2014-01-22 法国电信公司 用于对实施运动补偿的图像序列进行编码和解码的方法、以及对应的编码和解码装置

Also Published As

Publication number Publication date
WO2006137020A3 (fr) 2007-05-03


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase in:

Ref country code: DE

WWW Wipo information: withdrawn in national office

Country of ref document: DE

122 Ep: pct application non-entry in european phase

Ref document number: 06756154

Country of ref document: EP

Kind code of ref document: A2