WO1997004404A1 - Multi-viewpoint digital video encoding - Google Patents
Multi-viewpoint digital video encoding
- Publication number
- WO1997004404A1 (PCT/US1996/011826)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- viewpoint
- viewpoint video
- vector
- video
- Prior art date
- 1995-07-21
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
- G06T9/008—Vector quantisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
- G06T9/004—Predictors, e.g. intraframe, interframe coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/161—Encoding, multiplexing or demultiplexing different image signal components
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/189—Recording image signals; Reproducing recorded image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/194—Transmission of image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/239—Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/243—Image signal generators using stereoscopic image cameras using three or more 2D image sensors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/286—Image signal generators having separate monoscopic and stereoscopic modes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/296—Synchronisation thereof; Control thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N2013/0074—Stereoscopic image analysis
- H04N2013/0081—Depth or disparity estimation from stereoscopic image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N2013/0074—Stereoscopic image analysis
- H04N2013/0096—Synchronisation or controlling aspects
Definitions
- the present invention relates to video decoding and encoding apparatus and method and, more particularly, to a multi-viewpoint digital video coder/decoder and method.
- a multi-viewpoint video is a three-dimensional extension of the traditional movie sequence, in that multiple perspectives of the same scene exist at any instant in time.
- the multi-viewpoint video offers the capability of "looking around" objects in a scene.
- typical uses may include interactive applications, medical surgery technologies, remote sensing development, virtual reality games, etc.
- MPEG-2 is a coding standard specified for one video sequence.
- MPEG-2 has also been recently shown to be applicable to two sequences of stereoscopic signals through the use of additional vectors.
- the relevant parts of sections 6 and 7 of the ISO document DIS 13818-2 will be hereinafter referred to as the "MPEG-2 standard.”
- a multi-viewpoint coder/decoder should compress the digital information so that information can be sent using as little bandwidth as possible.
- a multi-viewpoint coder/decoder should be compatible with prior standards. In other words, while a TV may not properly show the different viewpoints in the multi-viewpoint video, the TV should be able to decode one viewpoint.
- a multi-viewpoint coder/decoder should also be open-ended. In this manner, individual coding modules can be improved in accordance with any technological advances as well as the creativity and inventive spirits of software providers.
- An open-ended scheme would also allow a person to adjust the quality of the multi-viewpoint video according to system requirements and variables. Furthermore, such scheme would be easily expandable to provide as many video viewpoints as desired.
- a multi-viewpoint coder/decoder should be hardware-based, instead of software-based. In this manner, fast and efficient coding/decoding can be achieved.
- the multi-viewpoint video encoder disclosed herein comprises a depth estimator, a predictor connected to the depth estimator, and a comparator connected to the predictor.
- the multi-viewpoint video encoder has an output, preferably including a multiplexer for multiplexing the first image, the depth map, the second viewpoint vector and the prediction errors into a signal.
- the multi-viewpoint video encoder also includes a depth map encoder/compressor.
- the depth map is compressed according to a video compression standard, preferably compatible with the MPEG-2 standard.
- the multi-viewpoint video encoder further includes a first image encoder.
- the first image is encoded according to a video coding standard, preferably compatible with the MPEG-2 standard. In this manner, an MPEG-2 monitor can display the first image video without any further modifications.
- a multi-viewpoint video encoder only requires the addition of the depth estimator and the predictor mentioned above.
- a first image having a first viewpoint vector is selected.
- a depth map is formed for this image.
- a second image having a second viewpoint vector is also selected.
- a predicted second image having the second viewpoint vector is then formed by manipulating the first image and the depth map to reflect the second viewpoint vector.
- the prediction errors required for reconstructing the second image from the predicted second image are calculated by comparing the second image and the predicted second image.
- the first image, the depth map, the second viewpoint vector and the prediction errors are transmitted, preferably multiplexed into a signal.
- the depth map could be compressed according to a video compression standard, preferably compatible with the MPEG-2 standard.
- the first image should be encoded according to a video coding standard, such as the MPEG-2 standard.
- the multi-viewpoint video decoder disclosed herein comprises a receiver, a predictor connected to the receiver, and a reconstructor connected to the receiver and the predictor.
- the predictor further includes a manipulator.
- the multi-viewpoint video decoder may include a depth map decompressor connected between the receiver and the predictor.
- in order to provide video in a desired viewpoint, the multi-viewpoint video decoder must include a receiver and a predictor connected to the receiver. This predictor has a manipulator.
- the multi-viewpoint video decoder may also include a depth map decompressor connected between the receiver and the predictor.
- the multi-viewpoint video decoder further includes a constructor connected to the predictor.
- the constructor also includes a memory.
- a multi-viewpoint video decoder requires only the addition of the predictor mentioned above.
- the multi- viewpoint video decoder may also include a constructor connected to the predictor.
- Such decoder should also include means for obtaining the desired viewpoint vector.
- to decode multi-viewpoint video, a decoder must receive a first image having a first viewpoint, a depth map, a second viewpoint vector and prediction errors. A predicted second image having the second viewpoint vector is then formed by manipulating the first image and the depth map to reflect the second viewpoint vector. Further, a second image having the second viewpoint vector is then reconstructed by combining the prediction errors and the predicted second image.
- a decoder must receive a first image having a first viewpoint, a depth map, a second viewpoint vector and prediction errors.
- a predicted second image having the desired viewpoint vector is then formed by manipulating the first image and the depth map to reflect the desired viewpoint vector.
- a second image having the desired viewpoint vector can be constructed by combining a first stored mesh, a second stored mesh, a first stored image, a second stored image, and the predicted second image.
- the first stored image is a nearest past stored image reconstructed by combining the prediction errors and the predicted second image.
- the first stored mesh is a stored mesh respective to the nearest stored past reconstructed image.
- the second stored image is a nearest future image reconstructed by combining the prediction errors and the predicted second image.
- the second stored mesh is a stored mesh respective to the nearest stored future reconstructed image.
- FIG. 1 illustrates the viewpoint image arrangement referred to throughout the specification
- FIG. 2 illustrates a block diagram of an embodiment of the multi-viewpoint encoder of the present invention
- FIG. 3 is a flow chart illustrating the encoding process of the multi-viewpoint encoder of the present invention
- FIG. 4 is a "round robin" prediction structure for the encoder selection of viewpoints, wherein the encoder only selects one viewpoint at a time;
- FIG. 5 is two alternative "round robin" prediction structures for the encoder selection of viewpoints, wherein the encoder selects two viewpoints at a time;
- FIG. 6 illustrates a block diagram of an embodiment of the multi-viewpoint decoder of the present invention.
- FIG. 7 is a flow chart illustrating the decoding process of the multi-viewpoint decoder of the present invention.
- FIG. 1 illustrates the viewpoint image arrangement, i.e., the positioning of the cameras, to be encoded by the multi-viewpoint video encoder of the present invention.
- the images referred to hereinafter will correspond to the viewpoint image arrangement. Accordingly, I_C will have a central viewpoint, I_T will have a top viewpoint, I_B will have a bottom viewpoint, I_R will have a right viewpoint, and I_L will have a left viewpoint.
- FIG. 2 schematically illustrates an embodiment of the multi-viewpoint video encoder of the present invention.
- the encoder has a depth estimator 10.
- the depth estimator 10 creates a depth map D_C^t for the central image I_C^t.
- the central image I_C^t has a first viewpoint vector, namely the central viewpoint vector.
- the depth map D_C^t is created from the multiple viewpoint images, in the manner described below.
- the depth of an object can be geometrically calculated if two or more perspectives of the object are given.
- the positions of the object in each of the available viewpoint images must be located.
- the simplest method is to use the same matching techniques used in estimating motion for a temporal sequence of images. These techniques include: (1) correlation matching, as described in Andreas Kopernik and Danielle Pele, "Disparity Estimation for Stereo Compensated 3DTV Coding," 1993 Picture Coding Symposium, March 1993, Lausanne, Switzerland; (2) relaxation matching, as described in D. Marr and T. Poggio, "Cooperative Computation of Stereo Disparity," Science, vol. 194, pp. 283-287 (1976); and (3) coarse-to-fine matching, as described in Dimitrios Tzovaras, Michael G. Strintzis, and Ioannis Pitas, "Multiresolution Block Matching
- the matching and disparity algorithms mentioned above can be used in the preferred embodiments of the invention.
- the specific algorithm to be used in matching and determining disparity depends on the system capabilities, including processing speed, bandwidth capability, desired picture quality, number of available viewpoint images, etc. Nevertheless, the algorithms should be translated into a hardware solution, either hard-wired, logic table-based, etc., so that the images can be processed at a faster rate than with a software solution.
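The fragment below is an illustrative software sketch of block matching for disparity followed by geometric depth calculation, not the hardware solution the specification calls for. It assumes a rectified horizontal pair (for example, I_C and I_R), uses a sum-of-absolute-differences cost in place of a full correlation measure, and the block size, search range, focal length f and baseline B are arbitrary placeholder values.

```python
# Illustrative sketch only: block matching along the epipolar line, then depth = f * B / disparity.
# Block size, search range, f and B are assumed placeholders, not values from the patent.
import numpy as np

def estimate_depth_map(left, right, block=8, max_disp=32, f=1000.0, B=0.1):
    h, w = left.shape
    depth = np.zeros((h, w), dtype=np.float32)
    for y in range(0, h - block, block):
        for x in range(0, w - block, block):
            ref = left[y:y + block, x:x + block].astype(np.float32)
            best_d, best_cost = 0, np.inf
            for d in range(0, min(max_disp, x) + 1):      # search along the epipolar line
                cand = right[y:y + block, x - d:x - d + block].astype(np.float32)
                cost = np.abs(ref - cand).sum()           # SAD matching cost
                if cost < best_cost:
                    best_cost, best_d = cost, d
            # avoid division by zero for (near-)infinite depth
            depth[y:y + block, x:x + block] = f * B / max(best_d, 1e-3)
    return depth
```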
- the central image I_C^t is then encoded by the image encoder 16 in a format compatible with section 7 of the ISO document DIS 13818-2. Such an encoder is described in U.S. Patent 5,193,004, issued to Feng Ming Wang and Dimitris Anastassiou.
- any MPEG-2 monitor may be able to decode the information and display the image. Such monitor, however, will not be able to decode the multi-viewpoint video unless it is equipped with the extra hardware described below.
- the depth map D_C^t is also encoded and compressed in a format that is compatible with section 7 of the DIS 13818-2 and/or MPEG Test Model 5 (ISO Doc. ISO-IEC/JTC1/SC29/WG11/N0400), by the encoder/compressor 17.
- both the image I_C^t and the depth map D_C^t are decoded by decoder 22 and decoder 23, respectively.
- the encoder will base its coding on the same data the decoder will receive, allowing for better results.
- the predictor 20 predicts a predicted second image having a second selected viewpoint vector.
- the predictor 20 contains three essential components.
- a matrix manipulator 12 forms a mesh or 3-D matrix M^t by combining the image I_C^t and the depth map D_C^t.
- this set of 3D coordinate information (x_c, y_c, z_c) is similar to a 3D geometrical model or mesh.
- a 3-D matrix or mesh is created.
- a corresponding texture map incorporating the intensity values for each coordinate is also kept. This process is further explained in James Foley et al., Computer Graphics Principles and Practice, Addison-Wesley Publishing Co. (2d ed. 1990).
- hardware-based solutions for this manipulator can be found throughout the computer graphics field.
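As a rough software analogue of the matrix manipulator, the sketch below back-projects each pixel of the central image through its depth value to obtain 3-D coordinates (x_c, y_c, z_c) while retaining the intensity values as a texture map. The pinhole intrinsics fx, fy, cx, cy are assumed placeholders; the specification does not prescribe a particular camera model.

```python
# Minimal sketch of forming the mesh M^t from the central image and its depth map.
# Intrinsics are assumed placeholders, not parameters from the patent.
import numpy as np

def build_mesh(image, depth, fx=1000.0, fy=1000.0, cx=None, cy=None):
    h, w = depth.shape
    cx = w / 2.0 if cx is None else cx
    cy = h / 2.0 if cy is None else cy
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float32)
    x = (u - cx) * z / fx                  # back-project pixel column to camera x
    y = (v - cy) * z / fy                  # back-project pixel row to camera y
    mesh = np.stack([x, y, z], axis=-1)    # the 3-D matrix / mesh M^t
    texture = image.copy()                 # texture map keyed to the same pixel grid
    return mesh, texture
```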
- the predictor 20 has a vector selector 13.
- the vector selector 13 selects a vector V_x^t.
- the vector V_x^t is selected in a "round robin" rotational basis amongst the directional vectors of the four non-central images of FIG. 1, i.e., I_L, I_B, I_R, and I_T.
- the selected vector/image sequence as related to time t would be I_L^t, I_B^(t+1), I_R^(t+2), and so on.
- FIG. 5 illustrates alternative selected vectors/images sequences as related to time t if the bandwidth permits the encoding of three images.
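A minimal sketch of such a round-robin selection is given below; the rotation order L, B, R, T is only one plausible choice, and the per_frame parameter is an assumption used to illustrate the one-viewpoint structure of FIG. 4 and the two-viewpoint structures of FIG. 5.

```python
# Sketch of the "round robin" vector selection (FIG. 4: one viewpoint per frame;
# FIG. 5: two viewpoints per frame when bandwidth permits). Ordering is assumed.
NON_CENTRAL = ["L", "B", "R", "T"]

def select_viewpoints(t, per_frame=1):
    """Return the non-central viewpoint label(s) encoded at time t."""
    return [NON_CENTRAL[(t * per_frame + k) % len(NON_CENTRAL)]
            for k in range(per_frame)]

# e.g. select_viewpoints(0) -> ['L'], select_viewpoints(1) -> ['B']
#      select_viewpoints(0, per_frame=2) -> ['L', 'B']
```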
- the predictor 20 also includes a combiner 14.
- the combiner 14 interpolates the mesh M^t with the selected vector V_x^t.
- the resulting predicted image PI_x^t will portray the mesh M^t in the viewpoint of vector V_x^t.
- This process is further explained in James Foley et al., Computer Graphics Principles and Practice, Addison-Wesley Publishing Co. (2d ed. 1990).
- hardware-based solutions for this combiner can be found throughout the computer graphics field.
- the output of the vector selector 13 is used to trigger selector 11.
- the selector 11 assures that the image I_x^t sent to the comparator 15 will have the same viewpoint as the selected vector V_x^t. In other words, if the selected vector V_x^t is the viewpoint vector of image I_L^t, selector 11 will send image I_L^t to the comparator 15.
- the comparator 15 compares the predicted image PI_x^t with the selected image I_x^t in order to calculate the prediction errors PE^t required to reconstruct image I_x^t from predicted image PI_x^t.
- the prediction errors PE^t are calculated by examining the differences between the image I_x^t and the predicted image PI_x^t.
- the comparator 15 calculates prediction errors in the usual manner of MPEG-2 encoders, i.e., compatible with section 7 of the ISO document DIS 13818-2.
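In software terms, the comparator's output can be pictured as a pixel-wise residual, as in the hedged sketch below; the DCT, quantization and entropy coding that an MPEG-2-compatible encoder would apply to this residual are omitted here.

```python
# Minimal view of what the comparator produces: a residual between the selected
# image I_x^t and the predicted image PI_x^t. MPEG-2-style transform coding of the
# residual is outside this sketch.
import numpy as np

def prediction_errors(selected_image, predicted_image):
    return selected_image.astype(np.int16) - predicted_image.astype(np.int16)  # PE^t

def reconstruct(predicted_image, pe):
    return np.clip(predicted_image.astype(np.int16) + pe, 0, 255).astype(np.uint8)
```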
- the prediction error encoder 18 then encodes the prediction errors PE^t according to the MPEG-2 specification.
- the encoded central image I_C^t, depth map D_C^t and prediction errors PE^t are then multiplexed into a signal S along with the selected vector V_x^t by the output/multiplexer 19.
- the MPEG-2 syntax of the encoded bitstreams is found in section 6 of the ISO document DIS 13818-2. Additionally, the encoder may also transmit an MPEG-2 header containing the directional information, i.e., the directional vectors, of the available viewpoints.
- the comparator 15, the encoders 16, 17 and 18, the output/multiplexer 19, and the decoders 22 and 23 are all found in MPEG-2 encoder 21.
- FIG. 3 illustrates the flow chart of the method for encoding multi-viewpoint video.
- the images I_C^t, I_L^t, I_B^t, I_R^t, and I_T^t are inputted into the multi-viewpoint video encoder of FIG. 2.
- the central image I_C^t is then encoded and outputted according to the MPEG-2 specification (ST 102).
- the encoded image I_C^t is decoded for use within the process (herein image I_C^t).
- a depth map D_C^t is then calculated using the information in images I_C^t, I_L^t, I_B^t, I_R^t and I_T^t, as mentioned above (ST 103).
- the depth map D_C^t is also encoded and outputted according to the MPEG-2 specification (ST 104).
- the encoded depth map D_C^t is decoded for use within the process (herein depth map D_C^t).
- in Step 105, a vector V_x^t is selected in a "round robin" rotational basis amongst the directional vectors of the four non-central images of FIG. 1, i.e., I_L, I_B, I_R, and I_T.
- the selected vector/image sequence as related to time t would be I_L^t, I_B^(t+1), I_R^(t+2), and so on.
- in Step 107, a mesh or 3-D matrix M^t is formed by manipulating the image I_C^t and the depth map D_C^t as described above. A corresponding texture map incorporating the intensity values for each coordinate is also kept.
- the mesh M^t is then combined, or interpolated, with the selected vector V_x^t (ST 108). In this manner, the resulting predicted image PI_x^t will portray the mesh M^t in the viewpoint of vector V_x^t.
- the predicted image PI_x^t is compared with the selected image I_x^t in order to calculate the prediction errors PE^t required to reconstruct image I_x^t from predicted image PI_x^t (ST 109).
- the prediction errors PE^t are calculated by examining the differences between the image I_x^t and the predicted image PI_x^t.
- FIG. 5 illustrates two possible selected vector/image sequences as related to time t if the bandwidth permits the encoding of additional viewpoints. Otherwise, the entire process starts over (ST 111).
- the prediction errors PE^t are then encoded and outputted according to the MPEG-2 specification (ST 110).
- the selected vector V_x^t is also outputted (ST 106).
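The sketch below strings these steps together into one encoding pass per time instant. All helpers (mpeg2_encode, mpeg2_decode, estimate_depth_map, build_mesh, project, multiplex, selector) are assumed placeholders standing in for the modules of FIG. 2, not real library calls.

```python
# High-level sketch of the encoding loop of FIG. 3, under assumed helper functions.
def encode_frame(views, t, selector, mpeg2_encode, mpeg2_decode,
                 estimate_depth_map, build_mesh, project, multiplex):
    I_c = views["C"]                                   # ST 101: input images (dict of arrays)
    enc_I_c = mpeg2_encode(I_c)                        # ST 102: encode central image
    I_c_dec = mpeg2_decode(enc_I_c)                    # decode for use within the process
    D_c = estimate_depth_map(views)                    # ST 103: depth map D_C^t
    enc_D_c = mpeg2_encode(D_c)                        # ST 104: encode/compress depth map
    D_c_dec = mpeg2_decode(enc_D_c)
    V_x = selector(t)                                  # ST 105: round-robin vector, e.g. "L"
    mesh, texture = build_mesh(I_c_dec, D_c_dec)       # ST 107: mesh M^t + texture map
    PI_x = project(mesh, texture, V_x)                 # ST 108: predicted image PI_x^t
    PE = views[V_x].astype(int) - PI_x.astype(int)     # ST 109: prediction errors PE^t
    enc_PE = mpeg2_encode(PE)                          # ST 110: encode prediction errors
    return multiplex(enc_I_c, enc_D_c, V_x, enc_PE)    # ST 106: multiplexed signal S
```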
- FIG. 6 schematically illustrates an embodiment of the multi-viewpoint video decoder of the present invention.
- the multi-viewpoint video decoder has an input/demultiplexer 60.
- the input/demultiplexer 60 receives a signal S and demultiplexes the information corresponding to the central image I_C^t, the depth map D_C^t, the selected viewpoint vector V_x^t and prediction errors PE^t.
- the multi-viewpoint video decoder has an image decoder 61, a decoder/decompressor 62 and a prediction error decoder 63 for decoding the central image I_C^t, the depth map D_C^t and the prediction errors PE^t, respectively.
- These decoders comply with the MPEG-2 standard and, more specifically, section 7 of the ISO document DIS 13818-2.
- the input/demultiplexer 60, the image decoder 61, the decoder/decompressor 62 and the prediction error decoder 63 are part of the MPEG-2 decoder 75. Once decoded, the image I_C^t and the selected viewpoint vector V_x^t are stored in memory 69.
- the multi-viewpoint video decoder also has a vector input 64.
- a person can input any desired vector V_u^t to display through any variation of vector input 64, including a head tracker, a joystick, a mouse, a light pen, a trackball, a desk pad, verbal commands, etc.
- a predictor 76 contains two essential elements: a matrix manipulator 65 and a combiner 66.
- the matrix manipulator 65 forms a mesh or 3-D matrix M^t by combining the image I_C^t and the depth map D_C^t, in the manner described above.
- This resulting mesh M^t is stored in a memory 69.
- a corresponding texture map incorporating the intensity values for each coordinate is also kept.
- the combiner 66 interpolates the mesh M^t with the desired vector V_u^t. In this manner, the resulting predicted image PI_u^t will portray the mesh M^t in the viewpoint of vector V_u^t.
- a switch 67 is dependent on the relation between the desired vector V_u^t and the selected vector V_x^t. If both vectors are equal, the predicted image PI_u^t is then combined with the prediction errors PE^t via the prediction error combiner 68. (The prediction error combiner 68 is also part of the MPEG-2 decoder 75.) The resulting reconstructed image I_x^t is then stored in memory 69 and outputted via the output 72.
- the constructor 77 has several essential elements: the memory 69, the mesh imagers MS1 and MS2, the warping module 70, and the constructing module 71.
- the nearest past reconstructed image I_u^(t-f) and its respective mesh M^(t-f) are combined to form a nearest past mesh image MI_u^(t-f) by the mesh imager MS1.
- the nearest future reconstructed image I_u^(t+B) and its respective mesh M^(t+B) are combined to form a nearest future mesh image MI_u^(t+B) by the mesh imager MS2.
- the nearest past mesh image MI_u^(t-f) and the nearest future mesh image MI_u^(t+B) are then warped by the warping module 70 to form an intermediate mesh image MPI_u^t for the time t. Additionally, the warping procedure should weigh the desired time t in order to provide a proper intermediate mesh image. Accordingly, if the time t is closer to time t-f than to time t+B, the warped intermediate mesh image will reflect an image closer to the image at time t-f rather than at time t+B.
- the warping process is further explained in George Wolberg, Digital Image Warping, IEEE Computer Society Press (1990).
- hardware-based solutions for this warping module can be found throughout the computer graphics field.
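As an illustration of the time weighting described above, the sketch below simply cross-dissolves the two mesh images in proportion to the temporal distance of the desired time t; a full warping module would also deform geometry, as discussed in Wolberg. The function and argument names are assumptions for illustration only.

```python
# Illustrative time weighting only: blend the nearest past and nearest future mesh
# images according to how close the desired time t is to each.
import numpy as np

def warp_intermediate(mi_past, mi_future, t, t_past, t_future):
    assert t_past <= t <= t_future and t_future > t_past
    w_future = (t - t_past) / float(t_future - t_past)   # 0 near t-f, 1 near t+B
    w_past = 1.0 - w_future
    return (w_past * mi_past.astype(np.float32) +
            w_future * mi_future.astype(np.float32))     # intermediate mesh image MPI_u^t
```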
- This mesh image is then combined with the predicted image PI_u^t by the constructing module 71.
- the combination process is further explained in Y.T. Zhou, "Multi-Sensor Image Fusion," International Conference on Image Processing, Austin, Texas, U.S.A. (1994).
- the constructing module 71 can be as simple as an exclusive OR (XOR) logic gate.
- other hardware-based solutions for this constructing module can be found throughout the computer vision/image fusion field.
- the resulting constructed image I_u^t is then outputted via the output 72.
- the mesh imaging, warping and construction algorithms to be used depend on the system capabilities, including processing speed, bandwidth capability, desired picture quality, number of available viewpoint images, etc. Nevertheless, these algorithms should be translated into a hardware solution, either hard-wired, logic table-based, etc., so that the images can be processed at a faster rate than with a software solution.
- FIG. 7 illustrates the flow chart of the method for decoding multi-viewpoint video.
- in Step 201, the image I_C^t, the depth map D_C^t, the selected viewpoint vector V_x^t, and the prediction errors PE^t are inputted into the multi-viewpoint video decoder of FIG. 6.
- a user-desired vector V_u^t is selected and inputted (ST 202).
- the image I_C^t and the depth map D_C^t are combined through matrix manipulations to form a mesh or 3-D matrix M^t, in the manner described above (ST 203).
- a corresponding texture map incorporating the intensity values for each coordinate is also kept.
- the mesh M^t is interpolated with the desired vector V_u^t to form predicted image PI_u^t, which portrays the mesh M^t in the viewpoint of vector V_u^t (ST 204).
- Step 205 is dependent on the relation between the desired vector V_u^t and the selected vector V_x^t. If both vectors are equal, the predicted image PI_u^t is then combined with the prediction errors PE^t (ST 211). The resulting reconstructed image I_x^t is then stored in memory and outputted.
- the nearest past reconstructed image in the desired viewpoint I_u^(t-f), the mesh M^(t-f) respective to the nearest past reconstructed image I_u^(t-f), the nearest future reconstructed image in the desired viewpoint I_u^(t+B), and the mesh M^(t+B) respective to the nearest future reconstructed image I_u^(t+B) are retrieved from memory (ST 206).
- the nearest past reconstructed image I_u^(t-f) and its respective mesh M^(t-f) are combined to form a nearest past mesh image MI_u^(t-f).
- the nearest future reconstructed image I_u^(t+B) and its respective mesh M^(t+B) are combined to form a nearest future mesh image MI_u^(t+B) (ST 207).
- the nearest past mesh image MI_u^(t-f) and the nearest future mesh image MI_u^(t+B) are then warped to form an intermediate mesh image MPI_u^t for the time t (ST 208).
- the warping procedure should weigh the desired time t in order to provide a proper intermediate mesh image. Accordingly, if the time t is closer to time t-f than to time t+B, the warped intermediate mesh image will reflect an image closer to the image at time t-f rather than at time t+B.
- This mesh image is then combined with the predicted image PI_u^t (ST 209).
- the resulting constructed image I_u^t is then outputted (ST 210). Then the process starts over again.
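The sketch below summarizes one decoding pass per time instant under the same caveats as the encoder sketch: demultiplex, mpeg2_decode, build_mesh, project, mesh_image, warp_intermediate, fuse and the memory object (with assumed store() and nearest() methods) are placeholders for the modules of FIG. 6, not real APIs.

```python
# High-level sketch of the decoding loop of FIG. 7, under assumed helper functions.
def decode_frame(signal, V_u, t, memory, demultiplex, mpeg2_decode,
                 build_mesh, project, mesh_image, warp_intermediate, fuse):
    I_c, D_c, V_x, PE = demultiplex(signal)               # ST 201
    I_c, D_c, PE = map(mpeg2_decode, (I_c, D_c, PE))
    mesh, texture = build_mesh(I_c, D_c)                  # ST 203: mesh M^t
    PI_u = project(mesh, texture, V_u)                    # ST 204: predicted image PI_u^t
    if V_u == V_x:                                        # ST 205 / switch 67
        I_out = PI_u + PE                                 # ST 211: add prediction errors
        memory.store(t, V_u, I_out, mesh)
        return I_out
    # Desired viewpoint was not transmitted: construct it from stored frames.
    (I_past, M_past, t_past), (I_fut, M_fut, t_fut) = memory.nearest(t, V_u)   # ST 206
    MI_past = mesh_image(I_past, M_past)                  # ST 207: nearest past mesh image
    MI_fut = mesh_image(I_fut, M_fut)                     # ST 207: nearest future mesh image
    MPI_u = warp_intermediate(MI_past, MI_fut, t, t_past, t_fut)               # ST 208
    return fuse(MPI_u, PI_u)                              # ST 209-210: constructed image I_u^t
```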
- the depth map D_C^t and the image I_C^t need not be manipulated together to form a mesh M^t, which is later combined with a viewpoint vector. Instead, both the depth map D_C^t and the image I_C^t can each be combined with the viewpoint vector and later be reconstructed.
- the nearest past and future meshes need not be stored in memory. Instead, the nearest past and future images can be stored in memory and later combined with stored depth maps to form the meshes.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP96924567A EP0843857A4 (en) | 1995-07-21 | 1996-07-17 | Multi-viewpoint digital video encoding |
JP9506820A JPH11510002A (en) | 1995-07-21 | 1996-07-17 | Multi-viewpoint digital video encoding |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US08/505,051 US5617334A (en) | 1995-07-21 | 1995-07-21 | Multi-viewpoint digital video coder/decoder and method |
US08/505,051 | 1995-07-21 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO1997004404A1 true WO1997004404A1 (en) | 1997-02-06 |
Family
ID=24008802
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US1996/011826 WO1997004404A1 (en) | 1995-07-21 | 1996-07-17 | Multi-viewpoint digital video encoding |
Country Status (4)
Country | Link |
---|---|
US (1) | US5617334A (en) |
EP (1) | EP0843857A4 (en) |
JP (1) | JPH11510002A (en) |
WO (1) | WO1997004404A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1999030280A1 (en) * | 1997-12-05 | 1999-06-17 | Dynamic Digital Depth Research Pty. Ltd. | Improved image conversion and encoding techniques |
AU738692B2 (en) * | 1997-12-05 | 2001-09-27 | Dynamic Digital Depth Research Pty Ltd | Improved image conversion and encoding techniques |
Families Citing this family (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5872575A (en) * | 1996-02-14 | 1999-02-16 | Digital Media Interactive | Method and system for the creation of and navigation through a multidimensional space using encoded digital video |
US6084979A (en) * | 1996-06-20 | 2000-07-04 | Carnegie Mellon University | Method for creating virtual reality |
DE69733233T2 (en) * | 1996-09-11 | 2006-01-19 | Canon K.K. | Image processing for three-dimensional rendering of image data on the display of an image capture device |
US6055330A (en) * | 1996-10-09 | 2000-04-25 | The Trustees Of Columbia University In The City Of New York | Methods and apparatus for performing digital image and video segmentation and compression using 3-D depth information |
US6393142B1 (en) * | 1998-04-22 | 2002-05-21 | At&T Corp. | Method and apparatus for adaptive stripe based patch matching for depth estimation |
AUPQ416699A0 (en) * | 1999-11-19 | 1999-12-16 | Dynamic Digital Depth Research Pty Ltd | Depth map compression technique |
FR2806570B1 (en) * | 2000-03-15 | 2002-05-17 | Thomson Multimedia Sa | METHOD AND DEVICE FOR CODING VIDEO IMAGES |
EP1273180B1 (en) * | 2000-03-24 | 2006-02-22 | Reality Commerce Corporation | Method and apparatus for parallel multi-viewpoint video capturing and compression |
FI109633B (en) * | 2001-01-24 | 2002-09-13 | Gamecluster Ltd Oy | A method for speeding up and / or improving the quality of video compression |
KR20040030081A (en) * | 2001-08-15 | 2004-04-08 | 코닌클리케 필립스 일렉트로닉스 엔.브이. | 3D video conferencing system |
US20030198290A1 (en) * | 2002-04-19 | 2003-10-23 | Dynamic Digital Depth Pty.Ltd. | Image encoding system |
US7525565B2 (en) * | 2003-03-14 | 2009-04-28 | Koninklijke Philips Electronics N.V. | 3D video conferencing |
US7324594B2 (en) * | 2003-11-26 | 2008-01-29 | Mitsubishi Electric Research Laboratories, Inc. | Method for encoding and decoding free viewpoint videos |
MX2007012705A (en) * | 2005-04-13 | 2008-03-14 | Thomson Licensing | Luma and chroma encoding using a common predictor. |
WO2007011147A1 (en) | 2005-07-18 | 2007-01-25 | Electronics And Telecommunications Research Institute | Apparatus of predictive coding/decoding using view-temporal reference picture buffers and method using the same |
ZA200805337B (en) | 2006-01-09 | 2009-11-25 | Thomson Licensing | Method and apparatus for providing reduced resolution update mode for multiview video coding |
US7916934B2 (en) * | 2006-04-04 | 2011-03-29 | Mitsubishi Electric Research Laboratories, Inc. | Method and system for acquiring, encoding, decoding and displaying 3D light fields |
WO2009002115A2 (en) * | 2007-06-26 | 2008-12-31 | Lg Electronics Inc. | Media file format based on, method and apparatus for reproducing the same, and apparatus for generating the same |
KR101545009B1 (en) * | 2007-12-20 | 2015-08-18 | 코닌클리케 필립스 엔.브이. | Image encoding method for stereoscopic rendering |
WO2009131688A2 (en) * | 2008-04-25 | 2009-10-29 | Thomson Licensing | Inter-view skip modes with depth |
BRPI0911016B1 (en) * | 2008-07-24 | 2021-01-05 | Koninklijke Philips N.V. | three-dimensional image signal provision method, three-dimensional image signal provision system, signal containing a three-dimensional image, storage media, three-dimensional image rendering method, three-dimensional image rendering system to render a three-dimensional image |
CN101673395B (en) * | 2008-09-10 | 2012-09-05 | 华为终端有限公司 | Image mosaic method and image mosaic device |
EP2353298B1 (en) * | 2008-11-07 | 2019-04-03 | Telecom Italia S.p.A. | Method and system for producing multi-view 3d visual contents |
US8798158B2 (en) * | 2009-03-11 | 2014-08-05 | Industry Academic Cooperation Foundation Of Kyung Hee University | Method and apparatus for block-based depth map coding and 3D video coding method using the same |
WO2010108024A1 (en) * | 2009-03-20 | 2010-09-23 | Digimarc Corporation | Improvements to 3d data representation, conveyance, and use |
US8746894B2 (en) | 2009-08-25 | 2014-06-10 | Dolby Laboratories Licensing Corporation | 3D display system |
KR101636539B1 (en) * | 2009-09-10 | 2016-07-05 | 삼성전자주식회사 | Apparatus and method for compressing three dimensional image |
CN101986716B (en) * | 2010-11-05 | 2012-07-04 | 宁波大学 | Quick depth video coding method |
JP5872676B2 (en) | 2011-06-15 | 2016-03-01 | メディアテック インコーポレイテッド | Texture image compression method and apparatus in 3D video coding |
- 1995
- 1995-07-21 US US08/505,051 patent/US5617334A/en not_active Expired - Lifetime
- 1996
- 1996-07-17 EP EP96924567A patent/EP0843857A4/en not_active Withdrawn
- 1996-07-17 JP JP9506820A patent/JPH11510002A/en active Pending
- 1996-07-17 WO PCT/US1996/011826 patent/WO1997004404A1/en not_active Application Discontinuation
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4691329A (en) * | 1985-07-02 | 1987-09-01 | Matsushita Electric Industrial Co., Ltd. | Block encoder |
US5043806A (en) * | 1989-07-26 | 1991-08-27 | L'etat Francais Represente Par Le Ministre Des P.T.T. | Method of processing and transmitting over a "MAC" type channel a sequence of pairs of stereoscopic television images |
US5229935A (en) * | 1989-07-31 | 1993-07-20 | Kabushiki Kaisha Toshiba | 3-dimensional image display apparatus capable of displaying a 3-D image by manipulating a positioning encoder |
US5384861A (en) * | 1991-06-24 | 1995-01-24 | Picker International, Inc. | Multi-parameter image display with real time interpolation |
US5382979A (en) * | 1991-07-26 | 1995-01-17 | Samsung Electronics Co., Ltd. | Method and circuit for adaptively selecting three-dimensional sub-band image signal |
Non-Patent Citations (1)
Title |
---|
See also references of EP0843857A4 * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1999030280A1 (en) * | 1997-12-05 | 1999-06-17 | Dynamic Digital Depth Research Pty. Ltd. | Improved image conversion and encoding techniques |
AU738692B2 (en) * | 1997-12-05 | 2001-09-27 | Dynamic Digital Depth Research Pty Ltd | Improved image conversion and encoding techniques |
US7054478B2 (en) | 1997-12-05 | 2006-05-30 | Dynamic Digital Depth Research Pty Ltd | Image conversion and encoding techniques |
US7551770B2 (en) | 1997-12-05 | 2009-06-23 | Dynamic Digital Depth Research Pty Ltd | Image conversion and encoding techniques for displaying stereoscopic 3D images |
US7894633B1 (en) | 1997-12-05 | 2011-02-22 | Dynamic Digital Depth Research Pty Ltd | Image conversion and encoding techniques |
Also Published As
Publication number | Publication date |
---|---|
US5617334A (en) | 1997-04-01 |
EP0843857A4 (en) | 1998-11-11 |
JPH11510002A (en) | 1999-08-31 |
EP0843857A1 (en) | 1998-05-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US5617334A (en) | Multi-viewpoint digital video coder/decoder and method | |
US11599968B2 (en) | Apparatus, a method and a computer program for volumetric video | |
JP3776595B2 (en) | Multi-viewpoint image compression encoding apparatus and decompression decoding apparatus | |
US6144701A (en) | Stereoscopic video coding and decoding apparatus and method | |
US8644386B2 (en) | Method of estimating disparity vector, and method and apparatus for encoding and decoding multi-view moving picture using the disparity vector estimation method | |
US7848425B2 (en) | Method and apparatus for encoding and decoding stereoscopic video | |
KR100742674B1 (en) | Image data delivery system, image data transmitting device thereof, and image data receiving device thereof | |
US6055012A (en) | Digital multi-view video compression with complexity and compatibility constraints | |
US20070104276A1 (en) | Method and apparatus for encoding multiview video | |
EP3526966A1 (en) | Decoder-centric uv codec for free-viewpoint video streaming | |
WO2019166688A1 (en) | An apparatus, a method and a computer program for volumetric video | |
JP3693407B2 (en) | Multi-view image encoding apparatus and decoding apparatus | |
EP1927250A1 (en) | Method of estimating disparity vector, and method and apparatus for encoding and decoding multi-view moving picture using the disparity vector estimation method | |
WO2018223086A1 (en) | Methods for full parallax light field compression | |
US11910015B2 (en) | Method and device for multi-view video decoding and method and device for image processing | |
CN111937382A (en) | Image processing apparatus, image processing method, program, and image transmission system | |
Yang et al. | An MPEG-4-compatible stereoscopic/multiview video coding scheme | |
Chan et al. | The plenoptic video | |
Garus et al. | Bypassing depth maps transmission for immersive video coding | |
Tseng et al. | Multiviewpoint video coding with MPEG-2 compatibility | |
Ziegler et al. | Evolution of stereoscopic and three-dimensional video | |
EP3729805A1 (en) | Method for encoding and decoding volumetric video data | |
Chien et al. | Efficient stereo video coding system for immersive teleconference with two-stage hybrid disparity estimation algorithm | |
KR20230078669A (en) | How to encode and decode multi-view video | |
WO2023110592A1 (en) | Reduction of redundant data in immersive video coding |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AK | Designated states | Kind code of ref document: A1; Designated state(s): CA JP KR SG |
| AL | Designated countries for regional patents | Kind code of ref document: A1; Designated state(s): AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE |
| DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
| WWE | Wipo information: entry into national phase | Ref document number: 1996924567; Country of ref document: EP |
| ENP | Entry into the national phase | Ref country code: JP; Ref document number: 1997 506820; Kind code of ref document: A; Format of ref document f/p: F |
| WWP | Wipo information: published in national office | Ref document number: 1996924567; Country of ref document: EP |
| NENP | Non-entry into the national phase | Ref country code: CA |
| WWW | Wipo information: withdrawn in national office | Ref document number: 1996924567; Country of ref document: EP |