WO2009084814A1 - Method for encoding and decoding image of FTV, and apparatus for encoding and decoding image of FTV


Info

Publication number: WO2009084814A1
Authority: WIPO (PCT)
Prior art keywords: values, encoding, prediction, decoding, free viewpoint
Application number: PCT/KR2008/006831
Other languages: French (fr)
Inventors: Jung Eun Lim, Jin Seok Im, Seung Jong Choi, Jong Chan Kim
Original Assignee: LG Electronics Inc.
Application filed by LG Electronics Inc.
Publication of WO2009084814A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/597: Predictive coding specially adapted for multi-view video sequence encoding
    • H04N19/18: Adaptive coding characterised by the coding unit, the unit being a set of transform coefficients
    • H04N19/52: Processing of motion vectors by predictive encoding
    • H04N19/593: Predictive coding involving spatial prediction techniques
    • H04N19/61: Transform coding in combination with predictive coding
    • H04N19/70: Characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H04N19/82: Details of filtering operations for video compression involving filtering within a prediction loop
    • H04N19/94: Vector quantisation
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/23406: Processing of video elementary streams involving management of server-side video buffer
    • H04N21/2383: Channel coding or modulation of digital bit-stream, e.g. QPSK modulation
    • H04N21/4382: Demodulation or channel decoding, e.g. QPSK demodulation
    • H04N21/44004: Processing of video elementary streams involving video buffer management, e.g. video decoder buffer or video display buffer

Definitions

  • the present invention relates to a method and apparatus for encoding free viewpoint images and a method and apparatus for decoding free viewpoint images, and more particularly, to a method and apparatus for encoding free viewpoint images and a method and apparatus for decoding free viewpoint images, which perform dc prediction.
  • Free viewpoint TV (FTV) is the next-generation interactive TV which, unlike the existing 2D TV, enables a user to freely choose a desired viewpoint and to see and hear the scene from that viewpoint without restriction.
  • An encoding stage compresses the color images and depth images captured by a camera at viewpoint K into a bitstream.
  • A decoding stage receives the bitstream from the encoding stage, restores the color images and depth images at viewpoint K, and synthesizes user viewpoint images, which were not captured by the camera, from the restored images.
  • the present invention has been made in view of the above problems, and it is an object of the present invention to provide a method and apparatus for encoding free viewpoint images and a method and apparatus for decoding free viewpoint images, which are capable of efficiently encoding or decoding free viewpoint images.
  • a method of encoding free viewpoint images includes the steps of predicting dc values of a current block using a dc table in which a plurality of specific dc values is stored, and encoding a difference signal between each of the predicted dc values and each of current dc values.
  • a method of decoding free viewpoint images includes the steps of predicting dc values of a current block using a dc table in which a plurality of specific dc values is stored, decoding encoded difference signals from an input bitstream, and restoring current dc values using the predicted dc values and the decoded difference signals.
  • an apparatus for encoding free viewpoint images includes a dc table in which a plurality of specific dc values is stored, a dc prediction unit for predicting dc values of a current block, and a transformation quantization unit for transforming and quantizing a difference signal between each of the predicted dc values and each of current dc values.
  • an apparatus for decoding free viewpoint images includes a dc table in which a plurality of specific dc values is stored, a dc prediction unit for predicting dc values of a current block, an inverse-quantization inverse-transformation unit for inverse quantizing and inverse transforming encoded difference signals from an input bitstream, and an adding unit for restoring current dc values using the predicted dc values and the difference signals.
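The dc-table prediction described in the bullets above can be sketched end to end. This is a minimal Python illustration, not code from the patent; the table contents and the nearest-value selection rule are assumptions (the patent leaves the selection criterion to the encoder):

```python
def predict_dc(current_dc, dc_table):
    """Pick the stored dc value closest to the current block's dc value.

    Only the index into the table needs to be signalled; the decoder holds
    the same table and therefore recovers the same prediction.
    """
    index = min(range(len(dc_table)), key=lambda i: abs(dc_table[i] - current_dc))
    return index, dc_table[index]

def encode_dc(current_dc, dc_table):
    """Encoder side: return the table index and the difference signal."""
    index, predicted_dc = predict_dc(current_dc, dc_table)
    return index, current_dc - predicted_dc

def decode_dc(index, difference, dc_table):
    """Decoder side: restore the current dc value from prediction + difference."""
    return dc_table[index] + difference

# Hypothetical dc level table for a depth sequence.
dc_table = [0, 30, 85, 128, 200, 255]
index, diff = encode_dc(92, dc_table)
assert decode_dc(index, diff, dc_table) == 92
```

Because the difference is typically small when the table holds the dominant depth levels, the residual costs fewer bits than coding the dc value directly.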
  • a method of encoding free viewpoint images includes the steps of calculating motion vectors of a color image and a depth image of the free viewpoint images, calculating a first error between a current depth image and a predicted depth image based on the motion vector of the color image and calculating a second error between the current depth image and the predicted depth image based on the motion vector of the depth image, and if a difference between the first error and the second error is a specific value or less, performing motion compensation of the predicted depth image based on the motion vector of the color image.
  • a method of decoding free viewpoint images includes the steps of extracting a motion vector from an input bitstream and, if the extracted motion vector is a motion vector of a color image, performing motion compensation of a predicted depth image based on the motion vector of the color image.
  • an apparatus for encoding free viewpoint images includes a motion vector memory for storing motion vectors of a color image, a motion estimation unit for calculating a first error between a current depth image and a predicted depth image based on the motion vector of the color image, calculating a second error between the current depth image and the predicted depth image based on the motion vector of the depth image, and, if a difference between the first error and the second error is a specific value or less, determining the motion vector of the color image as a motion vector, and a motion compensation unit for performing motion compensation of the predicted depth image based on the motion vector of the color image.
  • an apparatus for decoding free viewpoint images includes an entropy decoding unit for extracting motion vectors from an input bitstream, and a motion compensation unit for performing, if the motion vectors are motion vectors of color images, motion compensation of predicted depth images based on the motion vectors of the color images.
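The motion-vector sharing test in the bullets above can be sketched as follows. This is a hedged Python illustration, not the patent's implementation; blocks are flat pixel lists, and the error measure is assumed to be a sum of absolute differences (SAD):

```python
def choose_depth_motion_vector(current_depth, pred_from_color_mv,
                               pred_from_depth_mv, threshold):
    """Decide whether the depth image can reuse the color image's motion vector.

    first_error / second_error correspond to the first and second errors in
    the text: SAD between the current depth block and the block predicted
    with the color MV and with the depth MV, respectively.
    """
    first_error = sum(abs(c - p) for c, p in zip(current_depth, pred_from_color_mv))
    second_error = sum(abs(c - p) for c, p in zip(current_depth, pred_from_depth_mv))
    # If the color MV predicts the depth block nearly as well, reuse it:
    # no separate depth motion vector then has to be transmitted.
    return "color" if abs(first_error - second_error) <= threshold else "depth"

current = [10, 10, 10, 10]
assert choose_depth_motion_vector(current, [10, 11, 10, 10], current, 2) == "color"
assert choose_depth_motion_vector(current, [10, 14, 10, 10], current, 2) == "depth"
```

Reusing the color motion vector saves the bits of a separate depth motion vector at the cost of a slightly worse prediction, which is why the decision is gated by the threshold.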
  • FIG. 1 is a block diagram showing an apparatus for encoding free viewpoint images according to an embodiment of the present invention
  • FIG. 2 is a diagram showing an image encoding operation by a dc prediction unit shown in FIG. 1
  • FIG. 3 is a diagram showing an example of a dc table
  • FIG. 4 is a diagram showing the level values of depth images of free viewpoint images and a distribution thereof
  • FIG. 5 is a diagram showing an example of a dc table
  • FIG. 6 is a diagram to which reference is made in order to describe FIG. 5
  • FIG. 7 is a diagram showing an example of a dc table
  • FIG. 8 is a diagram showing the encoding of a dc table
  • FIG. 9 is a flowchart showing a method of encoding free viewpoint images according to an embodiment of the present invention.
  • FIG. 10 is a block diagram showing an apparatus for decoding free viewpoint images according to an embodiment of the present invention.
  • FIG. 11 is a diagram showing an image decoding operation by a dc prediction unit shown in FIG. 10;
  • FIG. 12 is a flowchart showing a method of decoding free viewpoint images according to an embodiment of the present invention.
  • FIG. 13 is a block diagram showing an apparatus for encoding free viewpoint images according to an embodiment of the present invention, and FIG. 14 is a diagram used to describe FIG. 13;
  • FIG. 15 is a flowchart showing a method of encoding free viewpoint images according to an embodiment of the present invention.
  • FIG. 16 is a block diagram showing an apparatus for decoding free viewpoint images according to an embodiment of the present invention.
  • FIG. 1 is a block diagram showing an apparatus for encoding free viewpoint images according to an embodiment of the present invention.
  • The apparatus of FIG. 1 includes a transformation quantization unit 110, an entropy encoding unit 115, a motion estimation unit 120, a motion compensation unit 125, an intra-prediction unit 130, a dc prediction unit 135, a dc table 140, an inverse-quantization inverse-transformation unit 145, a filter 150, and a memory 155.
  • In the present invention, the term "free viewpoint images" is used to include both color images and depth images.
  • the transformation quantization unit 110 transforms an input image or a residual signal, that is, a difference between an input image and a predicted image, into data of a frequency domain and quantizes the transformed data.
  • the entropy encoding unit 115 encodes not only the output of the transformation quantization unit, but also other supplementary information (motion vectors, etc.).
  • the entropy encoding unit 115 may also perform entropy encoding for a dc table to be described later.
  • the motion estimation unit 120 performs a motion estimation by comparing a reference image and the input image and calculates a motion vector.
  • the motion compensation unit 125 calculates a predicted image whose reference image has been compensated for based on the motion vector calculated by the motion estimation unit 120. Inter-prediction is performed using the motion estimation unit 120 and the motion compensation unit 125.
  • the intra-prediction unit 130 calculates a predicted image by performing intra-prediction.
  • the dc prediction unit 135 calculates a predicted image by performing dc value prediction.
  • the dc value prediction may be performed using the dc table 140.
  • a dc value predicted in the dc prediction unit 135 may be a dc value of the frequency-transformed block of the depth image of a free viewpoint image.
  • the operation of the dc prediction unit 135 will be described later with reference to FIG. 2 and its subsequent drawings.
  • the inverse-quantization inverse-transformation unit 145 inverse quantizes and inverse transforms the output of the transformation quantization unit 110, and the result is added to the predicted image produced by the motion compensation unit 125, the intra-prediction unit 130, or the dc prediction unit 135.
  • the filter 150 may perform deblocking filtering on the adding results.
  • the filtered value is stored in the memory 155 and is used as a reference image when inter-prediction is performed.
  • in this embodiment, the transformation and quantization operations are performed together by the transformation quantization unit 110, but they may instead be performed separately using a transformation unit and a quantization unit.
  • the inverse-quantization and inverse-transformation operations may be separately performed using an inverse quantization unit and an inverse transformation unit.
  • FIG. 2 is a diagram showing an image encoding operation by the dc prediction unit shown in FIG. 1.
  • the dc prediction unit 135 predicts the dc value of a current block.
  • every pixel within the prediction block 220 has the same constant predicted dc value (dcp), and the difference signal 230 between the prediction block 220 and the current block 210, that is, the input block, is encoded through residual coding.
  • residual coding may be the same as the operation of the transformation quantization unit 110 shown in FIG. 1. That is, the difference signal 230 may be transformed and quantized in the transformation quantization unit 110 of FIG. 1.
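As a concrete illustration of FIG. 2's flow, the constant prediction block and the difference signal 230 can be formed as below (a minimal Python sketch with hypothetical block contents; the subsequent transform and quantization are omitted):

```python
def dc_prediction_residual(current_block, dcp):
    """Form the constant prediction block and the difference signal.

    Every pixel of the prediction block takes the single predicted dc value
    dcp; the residual (difference signal) is what residual coding then
    transforms and quantizes.
    """
    prediction_block = [[dcp] * len(row) for row in current_block]
    residual = [[pixel - dcp for pixel in row] for row in current_block]
    return prediction_block, residual

pred, res = dc_prediction_residual([[100, 102], [101, 99]], 100)
assert pred == [[100, 100], [100, 100]]
assert res == [[0, 2], [1, -1]]
```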
  • the prediction of the dc value may be performed using the dc table.
  • the contents of the dc table will be described later with reference to FIGS. 3 to 7.
  • FIG. 3 is a diagram showing an example of a dc table
  • FIG. 4 is a diagram showing the level values of depth images of free viewpoint images and a distribution thereof.
  • the dc table 300 of FIG. 3 may also be referred to as a dc level table because it includes a plurality of specific dc levels.
  • the horizontal axis of FIG. 4 denotes the pixel values of depth images and the vertical axis denotes the distribution (frequency) of those pixel values.
  • the depth images include only gray (distance) information. Accordingly, unlike the pixel values of color images, the pixel values of depth images are not greatly changed by changes in the external environment, motion, etc.
  • pixels placed at close positions may have similar values.
  • in outdoor images, distant objects such as buildings and mountains may have differing color values, but their depth values are all close to 0, indicating that they are located far away.
  • in close-up images of a person's face, the color values may vary around the nose, cheeks, etc. because of shadows, but the corresponding pixel values of the depth image remain similar.
  • the pixel values of the depth images may therefore, as shown in FIG. 6, be represented as a plurality of Gaussian distributions, each centered on one of a few dominant pixel values.
  • that is, the pixel values are similar at neighboring pixel positions.
  • a pixel value of a depth image indicates how close the object at the corresponding position is to the camera; a value close to 255, for example, means that the object is close.
  • the dc table 300 may include a plurality of specific dc values based on the above property that the pixel values of the depth images are distributed on the basis of a specific pixel value level, as shown in FIG. 3.
  • the dc values may be pixel values which are frequently used in the picture or sequence of depth images.
  • m dc values (X0, ..., X(m-1)) are illustrated, corresponding to respective indices (0, ..., m-1).
  • the dc values included in the dc table 300 are representative values, which belong to the pixel values of depth images of free viewpoint images, and may be configured to have optimal values within, for example, a sequence or picture.
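One plausible way to pick such representative values for a sequence or picture is to take the most frequent depth levels, exploiting the clustered distribution of FIG. 4. The following Python sketch is an assumption for illustration; the patent does not fix a construction algorithm:

```python
from collections import Counter

def build_dc_table(depth_pixels, m):
    """Collect the m most frequent depth pixel values as the dc level table.

    Depth images cluster around a few levels (near and far surfaces), so the
    dominant values serve as representative dc levels; their list positions
    double as the table indices 0..m-1.
    """
    counts = Counter(depth_pixels)
    # Most frequent first; ties broken by pixel value for a deterministic table.
    return sorted(counts, key=lambda v: (-counts[v], v))[:m]

pixels = [0] * 5 + [128] * 4 + [255] * 3 + [7]
assert build_dc_table(pixels, 3) == [0, 128, 255]
```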
  • the dc values included in the dc table 300 may be representative dc values of respective areas, wherein the depth images of free viewpoint images are divided into specific respective areas.
  • a division method may be variously set, such as division on a per-object basis within an image.
  • the dc values included in the dc table 300 may be dc values for which the difference in dc value between neighboring blocks is a specific value or more.
  • if a difference between the dc value of a neighboring block and the dc value of a current block is less than a specific value, prediction using the dc value of the neighboring block is performed.
  • otherwise, the dc value may be set to a specific dc value within the dc table and prediction using the set dc value may be performed.
  • the dc values included in the dc table 300 may be dc values of image areas that cannot be referenced from a reference viewpoint image.
  • normally, dc value prediction may be performed using the reference viewpoint image.
  • for such non-referenced areas, however, the dc value of the corresponding current block may be set to a specific dc value within the dc table and prediction may be performed using the set dc value.
  • the dc table 300 may be encoded and transmitted in a first syntax level.
  • the first syntax level may be a sequence layer level or a picture layer level.
  • in a second syntax level lower than the first syntax level, only the indices (0, ..., m-1) corresponding to the dc values (X0, ..., X(m-1)) that have already been encoded may be encoded.
  • the second syntax level may be a macroblock layer level or a block layer level.
  • FIG. 5 is a diagram showing an example of a dc table
  • FIG. 6 is a diagram to which reference is made in order to describe FIG. 5.
  • the dc table 500 of FIG. 5 may include a prediction table 510.
  • a selection table 520 may also be included in the dc table 500.
  • the prediction table 510 may include prediction indices (0 to k-1) and prediction methods corresponding to the respective prediction indices, or dc values (A, B, C, ...) according to the respective prediction methods.
  • the prediction methods predict the dc value of a current block in a variety of ways from neighboring blocks.
  • the prediction methods within the prediction table 510 include the case where the dc value 620 of a current block 610 is the dc value (A) of a left block, the case where it is the dc value (B) of an upper block, the case where it is the dc value (C) of a left upper block, and the case where it is the mean ((A+B)>>1) of the dc values of the left and upper blocks.
  • the present invention is not limited to the above methods, but the dc value 620 of the current block 610 may be predicted using a variety of methods.
  • the selection table 520 is a table for selecting any one of the plurality of prediction methods within the prediction table 510, or equivalently one of the dc values (A, B, C, ...) according to the prediction methods.
  • in the illustrated example, the selection table 520 may include selection methods (prediction indices 0 and 3) and selection indices (0 and 1) corresponding to the respective selection methods.
  • each selection index (0 or 1) indicates one of the prediction indices (0 to k-1) of the prediction table 510, here 0 or 3.
  • the selection method 0 indicates the prediction index 0, so the dc value 620 of the current block 610 is predicted as the dc value (A) of the left block corresponding to prediction index 0.
  • the selection method 3 indicates the prediction index 3, so the dc value 620 of the current block 610 is predicted as the mean ((A+B)>>1) of the dc values of the left and upper blocks corresponding to prediction index 3.
  • the present invention is not limited to the above methods, but the dc value 620 of the current block 610 may be predicted using a variety of methods.
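The four prediction methods of the prediction table 510, and one possible way an encoder could select among them, can be sketched as follows (a Python illustration; the selection rule shown, picking the candidate closest to the current dc value, is an assumption):

```python
def neighbor_dc_predictions(a, b, c):
    """Candidate dc predictions from neighboring blocks, keyed by prediction index.

    a, b, c are the dc values of the left, upper, and left upper blocks;
    index 3 uses the integer mean (A+B)>>1 from the prediction table.
    """
    return {0: a,              # dc value of the left block
            1: b,              # dc value of the upper block
            2: c,              # dc value of the left upper block
            3: (a + b) >> 1}   # mean of the left and upper dc values

def select_prediction(current_dc, a, b, c):
    """Pick the prediction index whose candidate is closest to the current dc."""
    candidates = neighbor_dc_predictions(a, b, c)
    return min(candidates, key=lambda i: abs(candidates[i] - current_dc))

assert neighbor_dc_predictions(100, 110, 90)[3] == 105
assert select_prediction(105, 100, 110, 90) == 3  # the mean matches exactly
```

Only the selected index then needs to be signalled, which is what makes the selection table cheap to transmit.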
  • FIG. 7 is a diagram showing an example of a dc table.
  • the dc table 700 of FIG. 7 may include both the dc level table of FIG. 3 and the prediction table of FIG. 5.
  • that is, the dc table 700 may include a dc level table 710 for m dc values (X0, ..., X(m-1)) and a prediction table 720 predicted using neighboring blocks.
  • the dc table 700 may further include a selection table for selecting various prediction methods within the prediction table 720.
  • the above-described dc tables of FIGS. 3 to 7 are used to predict and encode dc values within frequency-transformed blocks and may be changed on a unit basis. For example, each of the dc tables may be changed on a sequence or picture basis.
  • FIG. 8 is a diagram showing the encoding of a dc table.
  • the dc tables of FIGS. 3 to 7, which are used in the case where the dc values of depth images of free viewpoint images are predicted and encoding is performed based on the predicted dc values according to an embodiment of the present invention, may also be encoded.
  • first, information indicating whether a dc table is present may be encoded. This may be referred to as a "dc_table_present_flag." If the dc table is present, the flag value is encoded as '1'; if the dc table is not present, the flag value is encoded as '0'.
  • next, if the "dc_table_present_flag" is '1', whether a dc table storing a plurality of dc values pertinent to a current block is present may be encoded. This may be referred to as a "dc_level_present_flag." If the dc table storing the plurality of dc values is present, the flag value is encoded as '1'; if not, the flag value is encoded as '0'.
  • next, if the "dc_level_present_flag" is '1', information indicating the number of dc level values used, that is, the size of the dc table, may be encoded. This may be referred to as "dc_level_num-1".
  • next, a dc value ("dc_table[i]") may be encoded for each entry, depending on the number of dc level values ("dc_level_num-1").
  • the "dc_level_num-1" corresponds to the number of indices of FIG. 3.
  • the "dc_table[i]" corresponds to the dc values for the respective indices of FIG. 3.
  • the encoding of the dc table may be performed by encoding a plurality of dc values and indices corresponding to the respective dc values.
  • the "dc_level_num-1" may also be encoded.
  • if the dc table includes a prediction table, such as that shown in FIG. 5, information indicating whether a prediction table predicted using neighboring blocks of a current block is present may be encoded. This may be referred to as a "dc_nei_pred_present_flag." If the prediction table is present, the flag value is encoded as '1'; if not, the flag value is encoded as '0'.
  • if the "dc_nei_pred_present_flag" is '1', information indicating the number of prediction methods used, that is, the size of the prediction table, may be encoded. This may be referred to as "dc_nei_pred_num-1".
  • the dc value ("dc_table[i]") may be encoded for each prediction method, depending on the number of prediction methods ("dc_nei_pred_num-1").
  • the "dc_nei_pred_num-1" corresponds to the number of indices of FIG. 5.
  • the "dc_table[i]" corresponds to the dc values for the respective indices of FIG. 5.
  • the encoding of the dc table may be performed by encoding a plurality of dc values and indices corresponding to the respective dc values.
  • as described above, the encoding of the dc table covers the case where only the "dc_level_present_flag" is used (corresponding to FIG. 3), the case where only the "dc_nei_pred_present_flag" is used (corresponding to FIG. 5), and the case where both the "dc_level_present_flag" and the "dc_nei_pred_present_flag" are used (corresponding to FIG. 7).
  • the encoding of the dc table may be performed in the first syntax level.
  • the first syntax level may be a sequence layer level or a picture layer level.
  • in a second syntax level lower than the first syntax level, only indices corresponding to respective dc values within a dc table, which have already been encoded in the first syntax level, may be encoded.
  • the second syntax level may be a macroblock layer level or a block layer level. As described above, only indices within a dc table, corresponding to respective predicted dc values of a current block, are encoded in a second syntax level, thereby being capable of increasing encoding efficiency.
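The flag-driven syntax of FIG. 8 can be sketched as a flat list of (element, value) pairs. This is an illustrative Python serializer only; real entropy coding of each element (and the index-only signalling at the block level) is omitted:

```python
def encode_dc_table_syntax(dc_table):
    """Serialize a dc level table following the flag structure described above.

    Emits dc_table_present_flag, dc_level_present_flag, the table size minus
    one ("dc_level_num-1"), and then one dc_table[i] value per entry.
    """
    if not dc_table:
        return [("dc_table_present_flag", 0)]
    syntax = [("dc_table_present_flag", 1),
              ("dc_level_present_flag", 1),
              ("dc_level_num-1", len(dc_table) - 1)]
    syntax += [("dc_table[%d]" % i, value) for i, value in enumerate(dc_table)]
    return syntax

assert encode_dc_table_syntax([]) == [("dc_table_present_flag", 0)]
assert encode_dc_table_syntax([0, 128, 255])[-1] == ("dc_table[2]", 255)
```

Because the full table travels only at the sequence or picture level, each macroblock or block then spends bits on a short index rather than a full dc value.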
  • FIG. 9 is a flowchart showing a method of encoding free viewpoint images according to an embodiment of the present invention.
  • the dc values of a current block are first predicted using a dc table in which a plurality of specific dc values is stored at step S910.
  • the dc table may include specific dc level values (a dc level table), such as those shown in FIG. 3, dc values of neighboring blocks, including a prediction method table and a selection table, such as those shown in FIG. 5, or both dc level values, such as those shown in FIG. 3, and dc values of neighboring blocks, such as those shown in FIG. 5, as shown in FIG. 7.
  • for the dc value prediction method, reference can be made to FIGS. 3 to 7, and a detailed description thereof is omitted here.
  • a difference between the predicted dc values and the respective current dc values is then encoded at step S920.
  • the subject of encoding may be a difference between each of current dc values and each of predicted dc values within a block.
  • the subject of encoding may be the difference signal 230 between the current block 210 and the prediction block 220 whose pixels all have the same dc prediction value (dcp), as in FIG. 2.
  • the above-described prediction block and current block are preferably blocks within the depth images of free viewpoint images.
  • the method of encoding free viewpoint images shown in FIG. 9 may further include a step of encoding a dc table.
  • for the step of encoding the dc table, reference can be made to the description of FIG. 8, and a detailed description thereof is omitted here.
  • the above-described current block may have various sizes, such as 16x16, 8x8, and 4x4.
  • FIG. 10 is a block diagram showing an apparatus for decoding free viewpoint images according to an embodiment of the present invention.
  • The decoding apparatus 1000 of FIG. 10 includes an entropy decoding unit 1010, an inverse-quantization inverse-transformation unit 1015, a filter 1020, a memory 1025, a motion compensation unit 1030, an intra-prediction unit 1035, a dc prediction unit 1040, and a dc table 1045.
  • the entropy decoding unit 1010 performs entropy decoding on input bitstream and outputs the entropy decoding results. Not only transformation quantization coefficients, such as residual signals, but also supplementary information (motion vectors, etc.) can be output from the entropy decoding unit 1010.
  • The entropy decoding unit 1010 may also perform entropy decoding on encoded dc tables.
  • The inverse-quantization inverse-transformation unit 1015 inverse quantizes and inverse transforms the outputs of the entropy decoding unit 1010.
  • the outputs of the entropy decoding unit 1010 may include encoded difference signals and encoded motion vectors.
  • the encoded difference signals may include not only difference signals according to an intra-prediction mode and an inter-prediction mode, but also difference signals according to a dc prediction mode pertinent to the present invention.
  • The motion compensation unit 1030 calculates a predicted image whose reference image has been compensated for based on the received motion vectors, and the intra-prediction unit 1035 calculates a predicted image through intra-prediction.
  • the dc prediction unit 1040 calculates a predicted image by performing dc value prediction.
  • the dc values may be predicted using the dc table 1045.
  • The predicted images calculated by the motion compensation unit 1030, the intra-prediction unit 1035, and the dc prediction unit 1040 are combined with the residual signals which have been inverse quantized and inverse transformed by the inverse-quantization inverse-transformation unit 1015.
  • the filter 1020 may perform deblocking filtering on the combination results.
  • a filtered value is stored in the memory 1025 and is used as a reference image when inter-prediction is performed.
  • the free viewpoint image decoding apparatus 1000 of FIG. 10 may be applied to both color images and depth images, but, in the present invention, the decoding of depth images is described as an example. Accordingly, the dc values predicted in the dc prediction unit 1040 may be the dc values of frequency-transformed blocks of depth images of free viewpoint images. The operation of the dc prediction unit 1040 will be described with reference to FIG. 11.
  • Although the inverse-quantization and inverse-transformation operations are performed in the inverse-quantization inverse-transformation unit 1015, they may be performed using separate inverse-quantization and inverse-transformation units.
  • FIG. 11 is a diagram showing an image decoding operation by the dc prediction unit shown in FIG. 10.
  • the dc prediction unit 1040 predicts the dc values of a current block.
  • In the dc prediction unit 1040, each pixel within a prediction block 1110 has a constant predicted dc value (dcp), and a difference signal 1130, decoded through residual decoding, may be summed with the prediction block 1110.
  • the adding operation may be performed in an adding unit between the inverse-quantization inverse-transformation unit 1015 and the filter 1020 of FIG. 10. Accordingly, a current block 1120 is restored.
  • Here, residual decoding may be identical to the operation of the inverse-quantization inverse-transformation unit 1015 of FIG. 10.
  • the prediction of the dc values may be performed using a dc table. The contents of the dc table have been described with reference to FIGS. 3 to 7, and a description thereof is omitted.
  • FIG. 12 is a flowchart showing a method of decoding free viewpoint images according to an embodiment of the present invention.
  • The dc values of a current block are first predicted, using a dc table in which a plurality of specific dc values is stored, at step S1210.
  • The dc table may include specific dc level values (a dc level table), such as those shown in FIG. 3; dc values of neighboring blocks, including a prediction method table and a selection table, such as those shown in FIG. 5; or both, as shown in FIG. 7.
  • For the dc value prediction method, reference can be made to FIGS. 3 to 7, and a description thereof is omitted.
  • The encoded difference signals are then decoded at step S1220.
  • the subject of decoding may be a difference between each of current dc values and each of predicted dc values within a block.
  • the subject of decoding may be the difference signal 1130 which is generated based on the difference between the current block 210 and the prediction block 220, which have the same dc prediction value (dcp) based on a current block, as in FIG. 2.
  • The above-described prediction block may preferably be a block within the depth images of free viewpoint images.
  • The current dc values are then restored using the predicted dc values and the decoded difference signal at step S1235.
  • the predicted dc values and the decoded difference signal are summed, thereby restoring the dc values of the current block.
  • the method of decoding free viewpoint images shown in FIG. 12 may further include a step of decoding a dc table.
  • the step of decoding the dc table is a reverse process of the step of encoding the dc table shown in FIG. 9.
  • For the step of decoding the dc table, reference can be made to the description of FIG. 9, and a description thereof is omitted.
  • The above-described current block may have various sizes, such as 16x16, 8x8, and 4x4.
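The decoding steps above can be sketched similarly. The table contents and names are again illustrative assumptions, and the residual is assumed to have already been entropy decoded, inverse quantized, and inverse transformed.

```python
import numpy as np

def decode_block_dc(index, residual, dc_table):
    # S1210: predict the dc value by looking it up in the dc table.
    dcp = dc_table[index]
    # Build the constant prediction block (every pixel equal to dcp).
    prediction = np.full_like(residual, dcp)
    # Restore the current block by summing prediction and residual.
    return prediction + residual

table = [0, 64, 128, 192, 255]
restored = decode_block_dc(2, np.full((4, 4), 2), table)
```

This mirrors the encoder sketch: the index selects the predicted dc value, and the summed result reproduces the current block.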
  • FIG. 13 is a block diagram showing an apparatus for encoding free viewpoint images according to an embodiment of the present invention.
  • FIG. 14 is a diagram used to describe FIG. 13.
  • The free viewpoint image encoding apparatus 1300 of FIG. 13 has a similar operation to the free viewpoint image encoding apparatus 100 of FIG. 1, but differs from the free viewpoint image encoding apparatus 100 in that it further includes a motion vector memory 1360. That is, the operations of a transformation quantization unit 1310, an entropy encoding unit 1315, a motion estimation unit 1320, a motion compensation unit 1325, an intra-prediction unit 1330, an inverse-quantization inverse-transformation unit 1345, a filter 1350, and a memory 1355 are similar to those of the free viewpoint image encoding apparatus 100 shown in FIG. 1, and only the differences therebetween are described below.
  • The motion vector memory 1360 stores motion vectors.
  • The motion vector memory 1360 may store the motion vectors of separately encoded color images, and vice versa.
  • Conventionally, motion compensation is performed by separately estimating the motion vector (mv1) of a color image and the motion vector (mv2) of a depth image, as shown in FIG. 14(a).
  • the free viewpoint image encoding apparatus 1300 of FIG. 13 stores a motion vector between a depth image and a color image, for common use, using the motion vector memory 1360, calculates an optimal motion vector using the motion estimation unit 1320 based on the stored motion vector, and uses the calculated optimal motion vector in motion estimation.
  • motion compensation may be performed using a motion vector (mvc), calculated from a color image, as the motion vector (mv3) of a depth image.
  • The motion vector of a color image may be calculated by performing motion estimation between a current color image and a reference color image, and the motion vector of a depth image may be calculated by performing motion estimation between a current depth image and a reference depth image.
  • the motion estimation unit 1320 calculates a first error between the current depth image and the predicted depth image based on the calculated motion vector of the color image and calculates a second error between the current depth image and the predicted depth image based on the calculated motion vector of the depth image.
  • The motion estimation unit 1320 finally determines the motion vector of the color image as the motion vector when a difference between the first error and the second error is a specific value or less.
  • the motion compensation unit 1325 performs motion compensation of the predicted depth image based on the determined motion vector.
  • Information indicating that the motion vectors of different images are used may also be encoded.
  • This information may be referred to as a "copy_mv_from_visual_flag" or a "copy_mv_from_depth_flag". If the "copy_mv_from_visual_flag" or the "copy_mv_from_depth_flag" is encoded as '1', it indicates that the motion vectors of different images (color images or depth images) are used.
  • a difference signal (residual signal) between a first error and a second error may be further generated as described above.
  • This difference signal converges on a specific value or less as described above. If the difference signal is transformed and quantized, it is transformed into a very small value. Accordingly, the overall encoding efficiency can be significantly increased.
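The motion-vector sharing decision described above can be sketched as follows. The SAD error measure, the threshold value, and the function names are assumptions; the description only requires that the color image's vector be reused, and signaled with the flag, when the two errors differ by a specific value or less.

```python
import numpy as np

def choose_depth_mv(cur_depth, ref_depth, mv_color, mv_depth, threshold=16):
    # SAD of the motion-compensated prediction for a given vector;
    # the vector is a (dy, dx) offset into the reference depth image.
    def sad(mv):
        dy, dx = mv
        h, w = cur_depth.shape
        pred = ref_depth[dy:dy + h, dx:dx + w]
        return int(np.abs(cur_depth - pred).sum())

    first_error = sad(mv_color)   # error using the color image's vector
    second_error = sad(mv_depth)  # error using the depth image's own vector
    # If the two errors are close enough, reuse the color vector and
    # signal it with copy_mv_from_visual_flag = 1.
    if abs(first_error - second_error) <= threshold:
        return mv_color, 1
    return mv_depth, 0

ref = np.zeros((8, 8), dtype=int)
ref[2:6, 2:6] = 100
cur = np.full((4, 4), 100, dtype=int)
mv, flag = choose_depth_mv(cur, ref, mv_color=(2, 2), mv_depth=(2, 2))
```

Reusing the vector avoids encoding a second motion vector for the depth image, which is the source of the efficiency gain claimed above.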
  • Unlike the free viewpoint image encoding apparatus 100 of FIG. 1, the free viewpoint image encoding apparatus 1300 of FIG. 13 is illustrated as not including a dc prediction unit and a dc table.
  • However, the free viewpoint image encoding apparatus 1300 of FIG. 13 may further include the dc prediction unit and the dc table and may further perform a dc prediction operation.
  • FIG. 15 is a block diagram showing a method of encoding free viewpoint images according to an embodiment of the present invention.
  • FIG. 15 corresponds to the free viewpoint image encoding apparatus of FIG. 13.
  • The motion vector of a color image and the motion vector of a depth image are calculated at step S1510.
  • the motion vectors of the color image and the depth image are first calculated, as shown in FIG. 14(a).
  • A first error between a current depth image and a predicted depth image is calculated based on the motion vector of the color image, and a second error between the current depth image and the predicted depth image is calculated based on the motion vector of the depth image, at step S1520.
  • If a difference between the first error and the second error is a specific value or less, motion compensation of the predicted depth image is performed based on the motion vector of the color image.
  • FIG. 16 is a block diagram showing an apparatus for decoding free viewpoint images according to an embodiment of the present invention.
  • The free viewpoint image decoding apparatus 1600 of FIG. 16 is similar to the free viewpoint image decoding apparatus 1000 of FIG. 10. That is, the operations of an entropy decoding unit 1610, an inverse-quantization inverse-transformation unit 1615, a filter 1620, a memory 1625, a motion compensation unit 1630, and an intra-prediction unit 1635 of the free viewpoint image decoding apparatus 1600 shown in FIG. 16 are similar to those of FIG. 10, and the differences therebetween are chiefly described below.
  • Assuming that the free viewpoint image decoding apparatus 1600 of FIG. 16 is used to decode the depth images among color images and depth images, the entropy decoding unit 1610 decodes a bitstream and extracts motion vectors from the decoded bitstream. The extracted motion vectors are used to perform a motion compensation operation in the motion compensation unit 1630.
  • Here, the motion vectors may be not the motion vectors of a depth image but the motion vectors of color images.
  • In this case, the motion compensation unit 1630 performs motion compensation of predicted depth images based on the motion vectors of the color images.
  • whether the motion vectors are the motion vectors of the depth images or the motion vectors of the color images may be determined based on the "copy_mv_from_visual_flag" as described above.
  • If the "copy_mv_from_visual_flag" is '1', motion compensation is performed using the motion vectors of the color images.
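The decoder-side use of the flag reduces to a simple selection; the function name below is an assumption for illustration.

```python
def select_mv_for_depth(copy_mv_from_visual_flag, mv_depth, mv_color):
    # Decoder-side counterpart of the flag described above: when the
    # flag is 1, motion compensation of the depth image is performed
    # with the color image's motion vector.
    return mv_color if copy_mv_from_visual_flag == 1 else mv_depth

# With the flag set, the color image's vector is reused for the depth image.
chosen = select_mv_for_depth(1, mv_depth=(0, 0), mv_color=(3, 1))
```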
  • the memory 1625 may preferably further store restored color images as well as restored depth images.
  • the method of encoding or decoding free viewpoint images according to the present invention may be implemented, in the form of codes readable by a processor, in a recording medium readable by a processor included in the apparatus for encoding or decoding free viewpoint images.
  • the recording medium readable by a processor may include all kinds of recording devices in which data capable of being read by the processor is stored.
  • the recording medium readable by a processor may include ROM, RAM, CD-ROM, magnetic tapes, floppy disks, optical data storages, and so on.
  • the recording medium readable by a processor may also be implemented in the form of carrier waves, such as transmission over the Internet.
  • the recording medium readable by a processor may be distributed into computer systems interconnected over a network, so codes readable by a processor may be stored and executed in a distributed manner.
  • the method and apparatus for encoding free viewpoint images and the method and apparatus for decoding free viewpoint images according to the present invention may be used to predict dc values and encode or decode the predicted dc values.

Abstract

The present invention relates to a method and apparatus for encoding free viewpoint images and a method and apparatus for decoding free viewpoint images. The method of encoding free viewpoint images according to the present invention includes predicting the dc values of a current block using a dc table in which a plurality of specific dc values is stored and encoding a difference between a predicted dc value and a current dc value. Accordingly, free viewpoint images can be efficiently encoded.

Description

METHOD FOR ENCODING AND DECODING IMAGE OF FTV, AND APPARATUS FOR ENCODING AND DECODING IMAGE
OF FTV
Technical Field
[1] The present invention relates to a method and apparatus for encoding free viewpoint images and a method and apparatus for decoding free viewpoint images, and more particularly, to a method and apparatus for encoding free viewpoint images and a method and apparatus for decoding free viewpoint images, which perform dc prediction.
[2]
Background Art
[3] Free viewpoint TV (FTV) is the next-generation active TV which enables a user to freely decide a desired point of time and see and hear images at that point of time without limits, unlike in the existing 2D TV.
[4] MPEG, the international standardization organization, has recently started an FTV standardization procedure. The FTV system configuration of MPEG is described below. An encoding stage compresses color images and depth images captured by cameras at K viewpoints. A decoding stage receives the compressed bitstream from the encoding stage and restores user viewpoint images, which have not been captured by the cameras, from the restored color images and depth images at the K viewpoints.
[5] In order to implement the above method, however, K color images and K depth images per point of time must be encoded. That is, 2K-1 additional images are required for one viewpoint image. Consequently, FTV must compress a large amount of data, such as K color images and K depth images. Accordingly, there is a problem in that the existing compression standard cannot be used without change, because characteristics, such as the distributions of pixel values in color images and depth images, are different from each other and the two images have a correlation.
Disclosure of Invention
Technical Problem
[6] Accordingly, the present invention has been made in view of the above problems, and it is an object of the present invention to provide a method and apparatus for encoding free viewpoint images and a method and apparatus for decoding free viewpoint images, which are capable of efficiently encoding or decoding free viewpoint images.
[7]
Technical Solution
[8] To achieve the above object, a method of encoding free viewpoint images according to an embodiment of the present invention includes the steps of predicting dc values of a current block using a dc table in which a plurality of specific dc values is stored, and encoding a difference signal between each of the predicted dc values and each of current dc values.
[9] Meanwhile, a method of decoding free viewpoint images according to an embodiment of the present invention includes the steps of predicting dc values of a current block using a dc table in which a plurality of specific dc values is stored, decoding the encoded difference signals from input bitstream, and restoring current dc values using the predicted dc values and the decoded difference signals.
[10] Meanwhile, an apparatus for encoding free viewpoint images according to an embodiment of the present invention includes a dc table in which a plurality of specific dc values is stored, a dc prediction unit for predicting dc values of a current block, and a transformation quantization unit for transforming and quantizing a difference signal between each of the predicted dc values and each of current dc values.
[11] Meanwhile, an apparatus for decoding free viewpoint images according to an embodiment of the present invention includes a dc table in which a plurality of specific dc values is stored, a dc prediction unit for predicting dc values of a current block, an inverse-quantization inverse-transformation unit for inverse quantizing and inverse transforming the encoded difference signals from an input bitstream, and an adding unit for restoring current dc values using the predicted dc values and the difference signals.
[12] Meanwhile, a method of encoding free viewpoint images according to an embodiment of the present invention includes the steps of calculating motion vectors of a color image and a depth image of the free viewpoint images, calculating a first error between a current depth image and a predicted depth image based on the motion vector of the color image and calculating a second error between the current depth image and the predicted depth image based on the motion vector of the depth image, and if a difference between the first error and the second error is a specific value or less, performing motion compensation of the predicted depth image based on the motion vector of the color image.
[13] Meanwhile, a method of decoding free viewpoint images according to an embodiment of the present invention includes the steps of extracting a motion vector from an input bitstream and, if the extracted motion vector is a motion vector of a color image, performing motion compensation of a predicted depth image based on the motion vector of the color image.
[14] Meanwhile, an apparatus for encoding free viewpoint images according to an embodiment of the present invention includes a motion vector memory for storing motion vectors of a color image, a motion estimation unit for calculating a first error between a current depth image and a predicted depth image based on the motion vector of the color image, calculating a second error between the current depth image and the predicted depth image based on the motion vector of the depth image, and, if a difference between the first error and the second error is a specific value or less, determining the motion vector of the color image as a motion vector, and a motion compensation unit for performing motion compensation of the predicted depth image based on the motion vector of the color image. [15] Meanwhile, an apparatus for decoding free viewpoint images according to an embodiment of the present invention includes an entropy decoding unit for extracting motion vectors from input bitstream, and a motion compensation unit for, if the motion vectors are motion vectors of color images, performing motion compensation of prediction depth images based on the motion vectors of the color image.
Advantageous Effects
[16] According to the present invention, in free viewpoint images, dc prediction is used.
Accordingly, when the free viewpoint images are encoded or decoded, the efficiency thereof can be significantly increased. In particular, the efficiency is significantly increased for depth images distributed on the basis of a specific pixel value level.
[17] Further, according to the present invention, in free viewpoint images, motion vectors are used in common by depth images and color images. Accordingly, efficiency is increased when encoding or decoding is performed.
[18]
Brief Description of Drawings
[19] FIG. 1 is a block diagram showing an apparatus for encoding free viewpoint images according to an embodiment of the present invention;
[20] FIG. 2 is a diagram showing an image encoding operation by a dc prediction unit shown in FIG. 1;
[21] FIG. 3 is a diagram showing an example of a dc table, and FIG. 4 is a diagram showing the level values of depth images of free viewpoint images and a distribution thereof;
[22] FIG. 5 is a diagram showing an example of a dc table, and FIG. 6 is a diagram to which reference is made in order to describe FIG. 5;
[23] FIG. 7 is a diagram showing an example of a dc table;
[24] FIG. 8 is a diagram showing the encoding of a dc table;
[25] FIG. 9 is a flowchart showing a method of encoding free viewpoint images according to an embodiment of the present invention;
[26] FIG. 10 is a block diagram showing an apparatus for decoding free viewpoint images according to an embodiment of the present invention;
[27] FIG. 11 is a diagram showing an image decoding operation by a dc prediction unit shown in FIG. 10;
[28] FIG. 12 is a flowchart showing a method of decoding free viewpoint images according to an embodiment of the present invention;
[29] FIG. 13 is a block diagram showing an apparatus for encoding free viewpoint images according to an embodiment of the present invention, and FIG. 14 is a diagram used to describe FIG. 13;
[30] FIG. 15 is a block diagram showing a method of encoding free viewpoint images according to an embodiment of the present invention; and
[31] FIG. 16 is a block diagram showing an apparatus for decoding free viewpoint images according to an embodiment of the present invention.
[32]
Best Mode for Carrying out the Invention
[33] Preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
[34] FIG. 1 is a block diagram showing an apparatus for encoding free viewpoint images according to an embodiment of the present invention.
[35] Referring to this drawing, the free viewpoint image encoding apparatus 100 shown in FIG. 1 includes a transformation quantization unit 110, an entropy encoding unit 115, a motion estimation unit 120, a motion compensation unit 125, an intra-prediction unit 130, a dc prediction unit 135, a dc table 140, an inverse-quantization inverse-transformation unit 145, a filter 150, and a memory 155. In the present invention, free viewpoint images are used to include both color images and depth images.
[36] The transformation quantization unit 110 transforms an input image or a residual signal, that is, a difference between an input image and a predicted image, into data of a frequency domain and quantizes the transformed frequency-domain data.
[37] The entropy encoding unit 115 encodes not only the output of the transformation quantization unit, but also other supplementary information (motion vectors, etc.). The entropy encoding unit 115 may also perform entropy encoding for a dc table to be described later.
[38] The motion estimation unit 120 performs a motion estimation by comparing a reference image and the input image and calculates a motion vector. The motion compensation unit 125 calculates a predicted image whose reference image has been compensated for based on the motion vector calculated by the motion estimation unit 120. Inter-prediction is performed using the motion estimation unit 120 and the motion compensation unit 125.
[39] The intra-prediction unit 130 calculates a predicted image by performing intra- prediction.
[40] The dc prediction unit 135 calculates a predicted image by performing dc value prediction. The dc value prediction may be performed using the dc table 140.
[41] Meanwhile, the free viewpoint image encoding apparatus 100 of FIG. 1 may be applied to both color images and depth images, but, in the present invention, the encoding of depth images will be described as an example. Accordingly, a dc value predicted in the dc prediction unit 135 may be a dc value of the frequency-transformed block of the depth image of a free viewpoint image. The operation of the dc prediction unit 135 will be described later with reference to FIG. 2 and its subsequent drawings.
[42] Meanwhile, the inverse-quantization inverse-transformation unit 145 inverse quantizes and inverse transforms the output of the transformation quantization unit 110, and the results are summed with the predicted images predicted by the motion compensation unit 125, the intra-prediction unit 130, and the dc prediction unit 135. The filter 150 may perform deblocking filtering on the summed results. The filtered value is stored in the memory 155 and is used as a reference image when inter-prediction is performed.
[43] Although, in the drawing, the transformation and quantization operations are performed by the transformation quantization unit 110, the transformation and quantization operations may be separately performed using a transformation unit and a quantization unit. Further, unlike in the drawing, the inverse-quantization and inverse- transformation operations may be separately performed using an inverse quantization unit and an inverse transformation unit.
[44] FIG. 2 is a diagram showing an image encoding operation by the dc prediction unit shown in FIG. 1.
[45] Referring to this drawing, the dc prediction unit 135 predicts the dc value of a current block. For example, in the dc prediction unit 135, each of all pixels within a prediction block 220 can have a constant predicted dc value (dcp), and a difference signal 230 between the prediction block 220 and a current block 210, that is, an input block is encoded through residual coding. Here, residual coding may be the same as the operation of the transformation quantization unit 110 shown in FIG. 1. That is, the difference signal 230 may be transformed and quantized in the transformation quantization unit 110 of FIG. 1.
[46] Meanwhile, the prediction of the dc value may be performed using the dc table. The contents of the dc table will be described later with reference to FIGS. 3 to 7.
[47] FIG. 3 is a diagram showing an example of a dc table, and FIG. 4 is a diagram showing the level values of depth images of free viewpoint images and a distribution thereof.
[48] Referring to these drawings, the dc table 300 of FIG. 3 may also be referred to as a dc level table because it includes a plurality of specific dc levels.
[49] Meanwhile, the horizontal axis of FIG. 4 denotes the pixel values of depth images, and the vertical axis denotes the distribution of those pixel values.
[50] The depth images include only gray information. Accordingly, unlike in color images, the pixel values of the depth images are not greatly changed because of a change in the external environment, motion, etc.
[51] In other words, there is a high probability that pixels placed at close positions have similar values. For example, although buildings, mountains, etc. that are far away in outdoor images may have different color values, their depth images have values close to 0, indicating that the buildings, the mountains, etc. are located at remote places. Further, images of a person's face which are captured closely may vary at the nose, cheeks, etc. because of shadows, but the pixel values of the depth images have similar values.
[52] Accordingly, the pixel values of the depth images may be represented, as shown in FIG. 4, as a plurality of Gaussian distributions on the basis of a plurality of pixel values. In each of the Gaussian distributions, the pixel values are similar for pixels placed at neighboring positions. Meanwhile, a pixel value of a depth image indicates the distance from the object at the corresponding position to the camera; the closer the object, the closer the pixel value is to, for example, a value of 255.
[53] In the present invention, the dc table 300 may include a plurality of specific dc values based on the above property that the pixel values of the depth images are distributed on the basis of a specific pixel value level, as shown in FIG. 3. The dc values may be pixel values which are frequently used in the picture or sequence of depth images. In the drawing, m dc values (X0, ..., X(m-1)) are illustrated to correspond to respective indices (0, ..., m-1).
[54] Meanwhile, the dc values included in the dc table 300 are representative values, which belong to the pixel values of depth images of free viewpoint images, and may be configured to have optimal values within, for example, a sequence or picture.
[55] For example, the dc values included in the dc table 300 may be representative dc values of respective areas, wherein the depth images of free viewpoint images are divided into specific respective areas. A division method may be variously set, such as division on a per-object basis within an image.
[56] Alternatively, the dc values included in the dc table 300 may be dc values in each of which a difference in the dc value between neighboring blocks is a specific value or more. In other words, when a difference between the dc value of a neighboring block and the dc value of a current block is less than a specific value, prediction using the dc value of the neighboring block is performed. When a difference between the dc value of a neighboring block and the dc value of a current block is a specific value or more, the dc value may be set to a specific dc value within the dc table and prediction using the set dc value may be performed.
[57] Alternatively, the dc values included in the dc table 300 may be dc values of images that are not referred to by a reference viewpoint image. In the case where a current block is referred to by the reference viewpoint image, dc value prediction may be performed using the reference viewpoint image. In the case where a current block is not referred to by the reference viewpoint image, the dc value of the corresponding current block may be set to a specific dc value within the dc table, and prediction may be performed using the set dc value.
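One hypothetical way to populate the dc level table of FIG. 3 with representative values for a picture, consistent with the idea that the table holds frequently used pixel values, is a simple histogram pass. The selection rule below is an assumption for illustration, not part of the description.

```python
import numpy as np

def build_dc_level_table(depth_image, m=4):
    # Take the m most frequent pixel values of the depth image as its
    # representative dc values for the picture or sequence.
    values, counts = np.unique(depth_image, return_counts=True)
    top = np.argsort(counts)[::-1][:m]
    return sorted(int(v) for v in values[top])

img = np.array([[0, 0, 0, 255],
                [0, 128, 128, 255],
                [0, 0, 128, 255]])
table = build_dc_level_table(img, m=3)
```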
[58] The dc table 300 may be encoded and transmitted. For example, the dc values (X0, ..., X(m-1)) and the indices (0, ..., m-1) within the dc table 300 may be encoded in a first syntax level. Here, the first syntax level may be a sequence layer level or a picture layer level. Meanwhile, in a second syntax level lower than the first syntax level, only the indices (0, ..., m-1) corresponding to the dc values (X0, ..., X(m-1)) that have already been encoded may be encoded. In this case, the second syntax level may be a macroblock layer level or a block layer level. As described above, since only indices within the dc table 300, corresponding to predicted dc values of a current block, are encoded in the second syntax level, encoding efficiency can be increased.
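The benefit of this two-level signaling can be illustrated with a rough bit-cost estimate. Fixed-length codes are assumed purely for illustration; a real codec would use entropy coding.

```python
import math

def table_signaling_cost(dc_values, num_blocks, value_bits=8):
    # First syntax level (sequence/picture): the dc values themselves
    # are encoded once.
    m = len(dc_values)
    table_cost = m * value_bits
    # Second syntax level (macroblock/block): each block only encodes
    # an index into the already-transmitted table.
    index_bits = max(1, math.ceil(math.log2(m)))
    return table_cost + num_blocks * index_bits

cost = table_signaling_cost([0, 64, 128, 192, 255], num_blocks=1000)
```

For 1,000 blocks and a five-entry table, sending a 3-bit index per block plus the table once costs far fewer bits than sending an 8-bit dc value with every block.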
[59] FIG. 5 is a diagram showing an example of a dc table, and FIG. 6 is a diagram to which reference is made in order to describe FIG. 5.
[60] Referring to this drawing, the dc table 500 of FIG. 5 may include a prediction table 510. A selection table 520 may also be included in the dc table 500.
[61] The prediction table 510 may include prediction indices (0 ~ k-1) and prediction methods, corresponding to the respective prediction indices (0 ~ k-1), or dc values (A, B, C, ...) according to respective prediction methods. The prediction methods can be used to predict respective dc values using a variety of prediction methods using neighboring blocks.
[62] It is illustrated in FIG. 6 that the prediction methods within the prediction table 510 include the case where the dc value 620 of a current block 610 is the dc value (A) of a left block, the case where the dc value 620 of the current block 610 is the dc value (B) of an upper block, the case where the dc value 620 of the current block 610 is the dc value (C) of a left upper block, and the case where the dc value 620 of the current block 610 is the mean ((A+B)>>1) of the dc values of the left and upper blocks. However, the present invention is not limited to the above methods, and the dc value 620 of the current block 610 may be predicted using a variety of methods.
[63] Meanwhile, the selection table 520 is a table for selecting any one of the plurality of prediction methods within the prediction table 510, or the dc values (A, B, C, ...) according to the prediction methods. The selection table 520 may include the selection methods (0, 3) and the selection indices (0, 1) corresponding to the respective selection methods (0, 3). Here, each of the selection indices (0, 1) indicates one (0 or 3) of the prediction indices (0 ~ k-1) of the prediction table 510.
[64] In the drawing, as the methods of predicting the dc value 620 of the current block
610, the selection method 0 indicates the prediction index 0, in which case the dc value 620 of the current block 610 is the dc value (A) of the left block corresponding to the prediction index 0, and the selection method 3 indicates the prediction index 3, in which case the dc value 620 of the current block 610 is the mean ((A+B)>>1) of the dc values of the left and upper blocks corresponding to the prediction index 3. However, the present invention is not limited to the above methods, and the dc value 620 of the current block 610 may be predicted using a variety of methods.
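The indirection between the two tables can be sketched as below: a short selection index picks one of the entries of the prediction table. The table contents and names are illustrative assumptions matching the example of FIG. 6.

```python
# Prediction table: prediction index -> prediction method (as a label).
PREDICTION_TABLE = {
    0: "left (A)",
    1: "upper (B)",
    2: "upper-left (C)",
    3: "mean ((A+B)>>1)",
}

# Selection table: selection index -> prediction index, as in FIG. 5,
# where selection indices (0, 1) point at prediction indices (0, 3).
SELECTION_TABLE = {0: 0, 1: 3}

def selected_prediction(selection_index):
    """Resolve a decoded selection index to a prediction method."""
    return PREDICTION_TABLE[SELECTION_TABLE[selection_index]]
```

Coding the shorter selection index instead of the full prediction index is what saves bits when only a few methods are actually used.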
[65] FIG. 7 is a diagram showing an example of a dc table.
[66] Referring to this drawing, the dc table 700 of FIG. 7 may include both the dc table
300 of FIG. 3 and the dc table 500 of FIG. 5.
[67] That is, the dc table 700 may include a dc level table 710 for m dc values (X0, ..., X(m-1)) and a prediction table 720 for various prediction methods using neighboring blocks. Meanwhile, although not shown in the drawing, the dc table 700 may further include a selection table for selecting among the various prediction methods within the prediction table 720.
[68] The above-described dc tables of FIGS. 3 to 7 are used to predict and encode dc values within frequency-transformed blocks and may be changed on a unit basis. For example, each of the dc tables may be changed on a sequence or picture basis.
[69] FIG. 8 is a diagram showing the encoding of a dc table.
[70] Referring to this drawing, the dc tables of FIGS. 3 to 7, which are used in the case where the dc values of depth images of free viewpoint images are predicted and encoding is performed based on the predicted dc values according to an embodiment of the present invention, may also be encoded.
[71] The encoding of each of the dc tables may be performed according to the following steps.
[72] First, information indicating whether a dc table is present is encoded. This may be referred to as a "dc_table_present_flag." If the dc table is present, the flag value is encoded as '1'. If the dc table is not present, the flag value is encoded as '0'.
[73] Next, if the "dc_table_present_flag" is '1', whether a dc table storing a plurality of dc values pertinent to a current block is present may be encoded. This may be referred to as a "dc_level_present_flag." If the dc table storing the plurality of dc values is present, the flag value is encoded as '1'. If the dc table storing the plurality of dc values is not present, the flag value is encoded as '0'. [74] Next, if the "dc_level_present_flag" is '1', information indicative of the number of dc level values used, that is, the size of the dc table, may be encoded. This may be referred to as a "dc_level_num-1." A dc value ("dc_table[i]") may be encoded for each of the "dc_level_num-1" dc level values. Here, the "dc_level_num-1" corresponds to the number of indices of FIG. 3, and the "dc_table[i]" corresponds to the dc values depending on the respective indices of FIG. 3. As described above, the encoding of the dc table may be performed by encoding a plurality of dc values and indices corresponding to the respective dc values.
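The flag hierarchy of paragraphs [72] to [74] can be sketched as follows. The FlagWriter class is a hypothetical stand-in for a real bitstream writer (it records syntax elements instead of writing bits); the method names are assumptions.

```python
class FlagWriter:
    """Hypothetical bitstream writer: records (syntax_element, value) pairs."""
    def __init__(self):
        self.elements = []

    def put(self, name, value):
        self.elements.append((name, value))

def encode_dc_level_table(writer, dc_table):
    # [72] dc_table_present_flag: '1' if a dc table is present, else '0'
    writer.put("dc_table_present_flag", 1 if dc_table is not None else 0)
    if dc_table is None:
        return
    # [73] dc_level_present_flag: '1' if the table stores dc level values
    writer.put("dc_level_present_flag", 1 if dc_table else 0)
    if dc_table:
        # [74] dc_level_num-1: size of the dc table minus one,
        # followed by the dc value dc_table[i] for each index i
        writer.put("dc_level_num-1", len(dc_table) - 1)
        for i, dc in enumerate(dc_table):
            writer.put("dc_table[%d]" % i, dc)
```

In this sketch, encoding the table [0, 128, 255] would emit the two flags as '1', "dc_level_num-1" as 2, and then the three dc values in index order.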
[75] Alternatively, unlike in the drawing, in the case where the "dc_level_present_flag" is not present and the "dc_table_present_flag" is '1', the "dc_level_num-1" may also be encoded.
[76] Alternatively, although not shown in the drawing, in the case where, assuming that the "dc_table_present_flag" is '1', the dc table includes a prediction table, such as that shown in FIG. 5, information indicative of whether a prediction table predicted using neighboring blocks of a current block is present may be encoded. This may be referred to as a "dc_nei_pred_present_flag." If the dc table predicted using neighboring blocks is present, the flag value is encoded as '1'. If the dc table predicted using neighboring blocks is not present, the flag value is encoded as '0'.
[77] When the "dc_nei_pred_present_flag" is '1', information indicative of the number of prediction methods used, that is, the size of the dc table, may be encoded. This may be referred to as a "dc_nei_pred_num-1." The dc value ("dc_table[i]") may be encoded for each of the "dc_nei_pred_num-1" prediction methods. In this case, the "dc_nei_pred_num-1" corresponds to the number of indices of FIG. 5, and the "dc_table[i]" corresponds to the dc values according to the respective indices of FIG. 5. As described above, the encoding of the dc table may be performed by encoding a plurality of dc values and indices corresponding to the respective dc values.
[78] Consequently, the encoding of the dc table may be performed as described above in the case where encoding is performed using the "dc_level_present_flag" (corresponding to FIG. 3), the case where encoding is performed using the "dc_nei_pred_present_flag" (corresponding to FIG. 5), and the case where encoding is performed using both the "dc_level_present_flag" and the "dc_nei_pred_present_flag" (corresponding to FIG. 7).
[79] The encoding of the dc table may be performed in a first syntax level. Here, the first syntax level may be a sequence layer level or a picture layer level. Meanwhile, in a second syntax level lower than the first syntax level, only indices corresponding to respective dc values within a dc table, which have already been encoded in the first syntax level, may be encoded. In this case, the second syntax level may be a macroblock layer level or a block layer level. As described above, since only indices within a dc table, corresponding to respective predicted dc values of a current block, are encoded in the second syntax level, encoding efficiency can be increased.
[80] FIG. 9 is a flowchart showing a method of encoding free viewpoint images according to an embodiment of the present invention.
[81] Referring to this drawing, in the method of encoding free viewpoint images shown in
FIG. 9, the dc values of a current block are first predicted using a dc table in which a plurality of specific dc values is stored at step S910. In this case, the dc table may include specific dc level values (a dc level table), such as those shown in FIG. 3, dc values of neighboring blocks, including a prediction method table and a selection table, such as those shown in FIG. 5, or both dc level values, such as those shown in FIG. 3, and dc values of neighboring blocks, such as those shown in FIG. 5, as shown in FIG. 7. For a dc value prediction method, reference can be made to FIGS. 3 to 7, and a description thereof is omitted.
[82] A difference between the predicted dc values and the respective current dc values is then encoded at step S920. Here, the subject of encoding may be a difference between each of the current dc values and each of the predicted dc values within a block. Further, the subject of encoding may be the difference signal 230 between the prediction block 220 and the current block 210, which have the same dc prediction value (dcp) based on a current block, as in FIG. 2. The above-described prediction block and current block may preferably be blocks within the depth images of free viewpoint images.
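Steps S910 and S920 can be sketched as below: pick the stored dc value closest to the block's actual dc, and form the residual against a flat prediction block at that value. This is an illustrative sketch under the assumption that the encoder selects the nearest table entry; the function name is not from the patent.

```python
def predict_and_diff(current_block, dc_table):
    """Return (index into dc_table, residual block) for a 2-D pixel block."""
    n = len(current_block) * len(current_block[0])
    dc_cur = sum(sum(row) for row in current_block) / n  # actual dc (mean)
    # S910: predicted dc (dcp) = nearest stored value; only its index is coded
    idx = min(range(len(dc_table)), key=lambda i: abs(dc_table[i] - dc_cur))
    dcp = dc_table[idx]
    # S920: difference signal between current block and flat prediction block
    residual = [[p - dcp for p in row] for row in current_block]
    return idx, residual
```

For a flat 2x2 depth block of value 12 and the table [0, 10, 20], the coded index is 1 (dcp = 10) and every residual sample is 2.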
[83] Although not shown in the drawing, the method of encoding free viewpoint images shown in FIG. 9 may further include a step of encoding a dc table. For the step of encoding the dc table, reference can be made to the description of FIG. 8, and a description thereof is omitted.
[84] Meanwhile, the above-described current block may have various sizes, such as 16x16, 8x8, and 4x4.
[85] FIG. 10 is a block diagram showing an apparatus for decoding free viewpoint images according to an embodiment of the present invention.
[86] Referring to this drawing, the free viewpoint image decoding apparatus 1000 of FIG.
10 includes an entropy decoding unit 1010, an inverse-quantization inverse- transformation unit 1015, a filter 1020, a memory 1025, a motion compensation unit 1030, an intra-prediction unit 1035, a dc prediction unit 1040, and a dc table 1045.
[87] The entropy decoding unit 1010 performs entropy decoding on an input bitstream and outputs the entropy decoding results. Not only transformation quantization coefficients, such as residual signals, but also supplementary information (motion vectors, etc.) can be output from the entropy decoding unit 1010. The entropy decoding unit 1010 may also perform entropy decoding on encoded dc tables. [88] The inverse-quantization inverse-transformation unit 1015 inverse quantizes and inverse transforms the outputs of the entropy decoding unit 1010. The outputs of the entropy decoding unit 1010 may include encoded difference signals and encoded motion vectors. Here, the encoded difference signals may include not only difference signals according to an intra-prediction mode and an inter-prediction mode, but also difference signals according to a dc prediction mode pertinent to the present invention.
[89] The motion compensation unit 1030 calculates a predicted image whose reference image has been compensated for based on the received motion vectors, and the intra- prediction unit 1035 calculates a predicted image through intra-prediction.
[90] The dc prediction unit 1040 calculates a predicted image by performing dc value prediction. The dc values may be predicted using the dc table 1045.
[91] The predicted images calculated by the motion compensation unit 1030, the intra- prediction unit 1035, and the dc prediction unit 1040 are combined with the residual signals which have been inverse quantized and inverse transformed by the inverse- quantization inverse-transformation unit 1015. The filter 1020 may perform deblocking filtering on the combination results. A filtered value is stored in the memory 1025 and is used as a reference image when inter-prediction is performed.
[92] Meanwhile, the free viewpoint image decoding apparatus 1000 of FIG. 10 may be applied to both color images and depth images, but, in the present invention, the decoding of depth images is described as an example. Accordingly, the dc values predicted in the dc prediction unit 1040 may be the dc values of frequency-transformed blocks of depth images of free viewpoint images. The operation of the dc prediction unit 1040 will be described with reference to FIG. 11.
[93] Alternatively, although, in the drawing, the inverse-quantization and inverse-transformation operations are performed in the single inverse-quantization inverse-transformation unit 1015, they may be performed by separate inverse-quantization and inverse-transformation units.
[94] FIG. 11 is a diagram showing an image decoding operation by the dc prediction unit shown in FIG. 10.
[95] Referring to this drawing, the dc prediction unit 1040 predicts the dc values of a current block. For example, in the dc prediction unit 1040, all pixels within a prediction block 1110 have the same constant predicted dc value (dcp), and a difference signal 1130, decoded through residual decoding, may be summed with the prediction block 1110. The adding operation may be performed in an adding unit between the inverse-quantization inverse-transformation unit 1015 and the filter 1020 of FIG. 10. Accordingly, a current block 1120 is restored. Meanwhile, residual decoding may be identical to the operation of the inverse-quantization inverse-transformation unit 1015 of FIG. 10. [96] Meanwhile, the prediction of the dc values may be performed using a dc table. The contents of the dc table have been described with reference to FIGS. 3 to 7, and a description thereof is omitted.
[97] FIG. 12 is a flowchart showing a method of decoding free viewpoint images according to an embodiment of the present invention.
[98] Referring to this drawing, in the method of decoding free viewpoint images of FIG.
12, the dc values of a current block are first predicted using a dc table in which a plurality of specific dc values is stored at step S1210. Here, the dc table may include specific dc level values (a dc level table), such as those shown in FIG. 3, dc values of neighboring blocks, including a prediction method table and a selection table, such as those shown in FIG. 5, or both dc level values, such as those shown in FIG. 3, and dc values of neighboring blocks, such as those shown in FIG. 5, as shown in FIG. 7. For a dc value prediction method, reference can be made to FIGS. 3 to 7, and a description thereof is omitted.
[99] The encoded difference signals are then decoded at step S1220. Here, the subject of decoding may be a difference between each of the current dc values and each of the predicted dc values within a block. Further, the subject of decoding may be the difference signal 1130, which is generated based on the difference between the current block 210 and the prediction block 220, which have the same dc prediction value (dcp) based on a current block, as in FIG. 2. The above-described prediction block may preferably be a block within the depth images of free viewpoint images.
[100] The current dc values are then restored using the predicted dc values and the decoded difference signal at step S1235. The predicted dc values and the decoded difference signal are summed, thereby restoring the dc values of the current block.
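The restoration step above can be sketched in one line: every pixel of the current block is the predicted dc value plus the decoded residual sample. The function name is illustrative, not from the patent.

```python
def restore_block(dcp, residual):
    """Restore a current block by summing a flat prediction block
    (all pixels equal to the predicted dc value dcp) with the decoded
    difference signal, per step S1235."""
    return [[dcp + r for r in row] for row in residual]
```

With a predicted dc of 10 and an all-2 residual, every restored pixel is 12, inverting the encoder-side differencing.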
[101] Meanwhile, although not shown in the drawing, the method of decoding free viewpoint images shown in FIG. 12 may further include a step of decoding a dc table. The step of decoding the dc table is a reverse process of the step of encoding the dc table shown in FIG. 8. For the step of decoding the dc table, reference can be made to the description of FIG. 8, and a description thereof is omitted.
[102] Meanwhile, the above-described current block may have various sizes, such as 16x16, 8x8, and 4x4.
[103] FIG. 13 is a block diagram showing an apparatus for encoding free viewpoint images according to an embodiment of the present invention, and FIG. 14 is a diagram used to describe FIG. 13.
[104] Referring to the drawings, the free viewpoint image encoding apparatus 1300 of FIG. 13 has a similar operation to the free viewpoint image encoding apparatus 100 of FIG. 1, but differs from the free viewpoint image encoding apparatus 100 in that it further includes a motion vector memory 1360. That is, the operations of a transformation quantization unit 1310, an entropy encoding unit 1315, a motion estimation unit 1320, a motion compensation unit 1325, an intra-prediction unit 1330, an inverse- quantization inverse-transformation unit 1345, a filter 1350, and a memory 1355 are similar to those of the free viewpoint image encoding apparatus 100 shown in FIG. 1, and only differences therebetween are described below.
[105] The motion vector memory 1360 stores motion vectors. For example, in the case where the free viewpoint image encoding apparatus 1300 of FIG. 13 is used to encode the depth images of free viewpoint images, the motion vector memory 1360 may store the motion vectors of color images which are separately encoded, and vice versa.
[106] In the typical motion estimation of free viewpoint images, motion compensation is performed by separately estimating the motion vector (mvl) of a color image and the motion vector (mv2) of a depth image, as shown in FIG. 14(a).
[107] However, the free viewpoint image encoding apparatus 1300 of FIG. 13 stores a motion vector between a depth image and a color image, for common use, using the motion vector memory 1360, calculates an optimal motion vector using the motion estimation unit 1320 based on the stored motion vector, and uses the calculated optimal motion vector in motion estimation.
[108] In other words, as shown in FIG. 14(b), motion compensation may be performed using a motion vector (mvc), calculated from a color image, as the motion vector (mv3) of a depth image.
[109] For example, the motion vector of a color image may be calculated by performing motion estimation between a current color image and a reference color image, and the motion vector of a depth image may be calculated by performing motion estimation between a current depth image and a reference depth image.
[110] The motion estimation unit 1320 calculates a first error between the current depth image and the predicted depth image based on the calculated motion vector of the color image and calculates a second error between the current depth image and the predicted depth image based on the calculated motion vector of the depth image. The motion estimation unit 1320 finally determines the motion vector of the color image as the motion vector when a difference between the first error and the second error is a specific value or less.
[111] The motion compensation unit 1325 performs motion compensation of the predicted depth image based on the determined motion vector.
[112] As described above, in the case where the motion vectors of different images (color images or depth images) can be used, information indicating that the motion vectors of different images can be used may also be encoded. This information may be referred to as a "copy_mv_from_visual_flag" or a "copy_mv_from_depth_flag." If the "copy_mv_from_visual_flag" or the "copy_mv_from_depth_flag" is encoded as '1', it indicates that the motion vectors of different images (color images or depth images) are used.
[113] Meanwhile, in the case where the motion vectors of different images (color images or depth images) are used, a difference signal (residual signal) between a first error and a second error may be further generated as described above. This difference signal converges on a specific value or less as described above. If the difference signal is transformed and quantized, it is transformed to a very small value. Accordingly, the entire encoding efficiency can be significantly increased.
[114] Meanwhile, the free viewpoint image encoding apparatus 1300 of FIG. 13 is illustrated not to include a dc prediction unit and a dc table unlike the free viewpoint image encoding apparatus 100 of FIG. 1. However, the free viewpoint image encoding apparatus 1300 of FIG. 13 may further include the dc prediction unit and the dc table and may further perform a dc prediction operation.
[115] FIG. 15 is a flowchart showing a method of encoding free viewpoint images according to an embodiment of the present invention.
[116] Referring to this drawing, the method of encoding free viewpoint images shown in FIG. 15 corresponds to the free viewpoint image encoding apparatus of FIG. 13.
[117] First, the motion vector of a color image and the motion vector of a depth image are calculated at step S1510. In more detail, the motion vectors of the color image and the depth image are each calculated, as shown in FIG. 14(a).
[118] Next, a first error between a current depth image and a predicted depth image is calculated based on the motion vector of the color image, and a second error between the current depth image and the predicted depth image is calculated based on the calculated motion vector of the depth image at step S1520.
[119] It is then determined whether a difference between the first error and the second error is a specific value or less at step S1530. If, as a result of the determination, the difference is determined to be a specific value or less, motion compensation is performed on the predicted depth image based on the motion vector of the color image at step S1540.
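The decision in steps S1520 to S1540 can be sketched as follows: reuse the color-image motion vector for the depth image whenever the prediction-error penalty for doing so is small. The use of SAD as the error measure, the function names, and the threshold value are illustrative assumptions; the patent only specifies "a specific value or less".

```python
def sad(a, b):
    """Sum of absolute differences between two equal-length pixel sequences."""
    return sum(abs(x - y) for x, y in zip(a, b))

def choose_depth_mv(cur_depth, pred_with_color_mv, pred_with_depth_mv,
                    mv_color, mv_depth, threshold):
    """Pick the motion vector for a depth block per steps S1520-S1540."""
    err_color = sad(cur_depth, pred_with_color_mv)  # first error (S1520)
    err_depth = sad(cur_depth, pred_with_depth_mv)  # second error (S1520)
    # S1530/S1540: if the errors are close enough, copy the color-image mv
    if abs(err_color - err_depth) <= threshold:
        return mv_color
    return mv_depth
```

Reusing the color-image vector means the depth stream need not carry its own vector for the block, which is where the bit savings described in [113] come from.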
[120] FIG. 16 is a block diagram showing an apparatus for decoding free viewpoint images according to an embodiment of the present invention.
[121] Referring to this drawing, the free viewpoint image decoding apparatus 1600 of FIG. 16 is similar to the free viewpoint image decoding apparatus 1000 of FIG. 10. That is, the operations of an entropy decoding unit 1610, an inverse-quantization inverse-transformation unit 1615, a filter 1620, a memory 1625, a motion compensation unit 1630, and an intra-prediction unit 1635 of the free viewpoint image decoding apparatus 1600 shown in FIG. 16 are similar to those of FIG. 10, and differences therebetween are chiefly described below. [122] Assuming that the free viewpoint image decoding apparatus 1600 of FIG. 16 is used to decode the depth images among the color images and depth images, the entropy decoding unit 1610 decodes an input bitstream and extracts motion vectors from the decoded bitstream. The extracted motion vectors are used to perform a motion compensation operation in the motion compensation unit 1630.
[123] In this case, the motion vectors may not be the motion vectors of a depth image but may be the motion vectors of color images. In the case where the motion vectors are the motion vectors of color images, the motion compensation unit 1630 performs motion compensation of prediction depth images based on the motion vectors of the color images.
[124] Meanwhile, whether the motion vectors are the motion vectors of the depth images or the motion vectors of the color images may be determined based on the "copy_mv_from_visual_flag" as described above. When the "copy_mv_from_visual_flag" is '1', motion compensation is performed using the motion vectors of the color images.
[125] Meanwhile, the memory 1625 may preferably further store restored color images as well as restored depth images.
[126] Meanwhile, the method of encoding or decoding free viewpoint images according to the present invention may be implemented, in the form of codes readable by a processor, in a recording medium readable by a processor included in the apparatus for encoding or decoding free viewpoint images.
[127] The recording medium readable by a processor may include all kinds of recording devices in which data capable of being read by the processor is stored. For example, the recording medium readable by a processor may include ROM, RAM, CD-ROM, magnetic tapes, floppy disks, optical data storages, and so on. The recording medium readable by a processor may also be implemented in the form of carrier waves, such as transmission over the Internet. Further, the recording medium readable by a processor may be distributed into computer systems interconnected over a network, so codes readable by a processor may be stored and executed in a distributed manner.
[128] Although the preferred embodiments of the present invention have been shown and described, the scope of the present invention is not limited by or to the embodiments as described above, and those having ordinary skill in the art may modify the present invention in various forms without departing from the spirit and scope of the present invention defined in the appended claims. It should be understood that those modifications should not be individually construed from the technical spirit or prospect of the present invention.
[129] Industrial Applicability
[130] As described above, the method and apparatus for encoding free viewpoint images and the method and apparatus for decoding free viewpoint images according to the present invention may be used to predict dc values and encode or decode the predicted dc values.
[131]

Claims
[1] A method of encoding free viewpoint images, comprising the steps of: predicting dc values of a current block using a dc table in which a plurality of specific dc values is stored; and encoding a difference signal between each of the predicted dc values and each of current dc values.
[2] The method as claimed in claim 1, further comprising the step of encoding the dc table, including the plurality of specific dc values and index values corresponding to the respective dc values.
[3] The method as claimed in claim 1, wherein the step of predicting the dc values of the current block comprises the step of encoding indices within the dc table, corresponding to the respective predicted dc values.
[4] The method as claimed in claim 3, wherein: the step of encoding the dc table is performed in a first syntax level, and the step of encoding the indices is performed in a second syntax level lower than the first syntax level.
[5] The method as claimed in claim 1, wherein the dc table comprises a prediction table in which a plurality of prediction methods based on blocks neighboring the current block or dc values according to the respective prediction methods is stored.
[6] The method as claimed in claim 1, wherein the current block comprises a current block of depth images, which belong to the free viewpoint images.
[7] A method of decoding free viewpoint images, comprising the steps of: predicting dc values of a current block using a dc table in which a plurality of specific dc values is stored; decoding the encoded difference signals from input bitstream; and restoring current dc values using the predicted dc values and the decoded difference signals.
[8] The method as claimed in claim 7, further comprising the step of decoding the dc table, including the plurality of specific dc values and index values corresponding to the respective dc values.
[9] The method as claimed in claim 7, wherein the step of predicting the dc values of the current block comprises the step of decoding indices within the dc table, corresponding to the respective predicted dc values.
[10] The method as claimed in claim 9, wherein: the step of decoding the dc table is performed in a first syntax level, and the step of decoding the indices is performed in a second syntax level lower than the first syntax level.
[11] The method as claimed in claim 7, wherein the dc table comprises a prediction table in which a plurality of prediction methods based on blocks neighboring the current block or dc values according to the respective prediction methods is stored.
[12] The method as claimed in claim 7, wherein the current block comprises a current block of depth images, which belong to the free viewpoint images.
[13] An apparatus for encoding free viewpoint images, comprising: a dc table in which a plurality of specific dc values is stored; a dc prediction unit for predicting dc values of a current block using the plurality of specific dc values; and a transformation quantization unit for transforming and quantizing a difference signal between each of the predicted dc values and each of current dc values.
[14] The apparatus as claimed in claim 13, further comprising an entropy encoding unit for performing entropy encoding on the transformed and quantized difference signals, wherein the entropy encoding unit further performs entropy encoding on the dc table.
[15] The apparatus as claimed in claim 13, wherein the dc table comprises a prediction table in which a plurality of prediction methods based on blocks neig hboring the current block or dc values according to the respective prediction methods is stored.
[16] An apparatus for decoding free viewpoint images, comprising: a dc table in which a plurality of specific dc values is stored; a dc prediction unit for predicting dc values of a current block using the plurality of specific dc values; an entropy decoding unit for performing entropy decoding on an input bitstream; an inverse-quantization inverse-transformation unit for inverse quantizing and inverse transforming difference signals encoded from the entropy-decoded bitstream; and an adding unit for restoring current dc values using the predicted dc values and the difference signals.
[17] The apparatus as claimed in claim 16, wherein the entropy decoding unit further performs entropy decoding on the dc table.
[18] The apparatus as claimed in claim 16, wherein the dc table comprises a prediction table in which a plurality of prediction methods based on blocks neighboring the current block or dc values according to the respective prediction methods is stored.
[19] A method of encoding free viewpoint images, comprising the steps of: calculating motion vectors of a color image and a depth image of the free viewpoint images; calculating a first error between a current depth image and a predicted depth image based on the motion vector of the color image and calculating a second error between the current depth image and the predicted depth image based on the motion vector of the depth image; and if a difference between the first error and the second error is a specific value or less, performing motion compensation of the predicted depth image based on the motion vector of the color image.
[20] The method as claimed in claim 19, further comprising the step of encoding information indicative of whether the motion vector of the color image is being used.
[21] A method of decoding free viewpoint images, comprising the steps of: extracting a motion vector from an input bitstream; and if the extracted motion vector is a motion vector of a color image, performing motion compensation of a predicted depth image based on the motion vector of the color image.
[22] The method as claimed in claim 21, further comprising the step of decoding information indicative of whether the motion vector of the color image is being used.
[23] An apparatus for encoding free viewpoint images, comprising: a motion vector memory for storing motion vectors of a color image; a motion estimation unit for calculating a first error between a current depth image and a predicted depth image based on the motion vector of the color image, calculating a second error between the current depth image and the predicted depth image based on the motion vector of the depth image, and, if a difference between the first error and the second error is a specific value or less, determining the motion vector of the color image as the motion vector; and a motion compensation unit for performing motion compensation of the predicted depth image based on the motion vector of the color image.
[24] An apparatus for decoding free viewpoint images, comprising: an entropy decoding unit for extracting motion vectors from an input bitstream; and a motion compensation unit for, if the motion vectors are motion vectors of color images, performing motion compensation of prediction depth images based on the motion vectors of the color images.
PCT/KR2008/006831 2007-12-28 2008-11-20 Method for encoding and decoding image of ftv, and apparatus for encoding and decoding image of ftv WO2009084814A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US1717907P 2007-12-28 2007-12-28
US61/017,179 2007-12-28
US3242608P 2008-02-28 2008-02-28
US61/032,426 2008-02-28

Publications (1)

Publication Number Publication Date
WO2009084814A1 true WO2009084814A1 (en) 2009-07-09

Family

ID=40824500

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2008/006831 WO2009084814A1 (en) 2007-12-28 2008-11-20 Method for encoding and decoding image of ftv, and apparatus for encoding and decoding image of ftv

Country Status (1)

Country Link
WO (1) WO2009084814A1 (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030031372A1 (en) * 2001-06-29 2003-02-13 Jeongnam Youn Decoding of predicted DC coefficient without division
US20060188164A1 (en) * 2005-02-18 2006-08-24 Samsung Electronics Co., Ltd. Apparatus and method for predicting coefficients of video block
US20060282237A1 (en) * 2005-05-25 2006-12-14 Shu Xiao Fixed point integer division techniques for AC/DC prediction in video coding devices

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160044338A1 (en) * 2009-09-22 2016-02-11 Samsung Electronics Co., Ltd. Apparatus and method for motion estimation of three dimension video
US10798416B2 (en) * 2009-09-22 2020-10-06 Samsung Electronics Co., Ltd. Apparatus and method for motion estimation of three dimension video
EP2403257A3 (en) * 2010-07-02 2015-03-25 Samsung Electronics Co., Ltd. Depth image encoding and decoding

Similar Documents

Publication Publication Date Title
JP5590133B2 (en) Moving picture coding apparatus, moving picture coding method, moving picture coding computer program, moving picture decoding apparatus, moving picture decoding method, and moving picture decoding computer program
KR101228020B1 (en) Video coding method and apparatus using side matching, and video decoding method and appartus thereof
US8228989B2 (en) Method and apparatus for encoding and decoding based on inter prediction
KR101830352B1 (en) Method and Apparatus Video Encoding and Decoding using Skip Mode
US8649431B2 (en) Method and apparatus for encoding and decoding image by using filtered prediction block
CN107347154B (en) Method for encoding and decoding images, encoding and decoding device, and corresponding computer program
EP2250816B1 (en) Method and apparatus for encoding and decoding an image by using consecutive motion estimation
KR100772391B1 (en) Method for video encoding or decoding based on orthogonal transform and vector quantization, and apparatus thereof
KR20170026536A (en) Method for encoding a digital image, and associated decoding method, devices and computer programmes
JPWO2008084745A1 (en) Image coding apparatus and image decoding apparatus
EP2555523A1 (en) Image encoding device, image decoding device, image encoding method, and image decoding method
EP2036358A1 (en) Image encoding/decoding method and apparatus
KR20100087600A (en) Method and apparatus for coding and decoding using adaptive interpolation filters
EP2497271A2 (en) Hybrid video coding
KR20110017302A (en) Method and apparatus for encoding/decoding image by using motion vector accuracy control
US20080107175A1 (en) Method and apparatus for encoding and decoding based on intra prediction
WO2008123657A1 (en) Method and apparatus for encoding and decoding image using modification of residual block
KR20100102386A (en) Method and apparatus for encoding/decoding image based on residual value static adaptive code table selection
EP2252059B1 (en) Image encoding and decoding method and device
KR101449683B1 (en) Motion Vector Coding Method and Apparatus by Using Motion Vector Resolution Restriction and Video Coding Method and Apparatus Using Same
KR20090040028A (en) Method and apparatus for determining encoding mode of video image, method and apparatus for encoding/decoding video image using the same and recording medium storing program for performing the method thereof
WO2009084814A1 (en) Method for encoding and decoding image of ftv, and apparatus for encoding and decoding image of ftv
KR102020953B1 (en) Image Reencoding Method based on Decoding Data of Image of Camera and System thereof
CN115567710A (en) Data encoding method and apparatus, and method and apparatus for decoding data stream
KR101366088B1 (en) Method and apparatus for encoding and decoding based on intra prediction

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08866191

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 08866191

Country of ref document: EP

Kind code of ref document: A1