CA2514655A1 - Apparatus and method for depth image-based representation of 3-dimensional object


Info

Publication number
CA2514655A1
Authority
CA
Canada
Prior art keywords
node
context
nodes
probability
octree
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CA002514655A
Other languages
French (fr)
Other versions
CA2514655C (en)
Inventor
In-Kyu Park
Alexander Olegovich Zhirkov
Mahn-Jin Han
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co., Ltd.
In-Kyu Park
Alexander Olegovich Zhirkov
Mahn-Jin Han
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR10-2002-0067970A (KR100446635B1)
Application filed by Samsung Electronics Co., Ltd., In-Kyu Park, Alexander Olegovich Zhirkov, Mahn-Jin Han filed Critical Samsung Electronics Co., Ltd.
Priority claimed from CA002413056A (CA2413056C)
Publication of CA2514655A1
Application granted
Publication of CA2514655C
Anticipated expiration
Status: Expired - Fee Related


Abstract

A family of node structures for representing 3-dimensional objects using depth images is provided. These node structures can be adopted into MPEG-4 AFX alongside conventional polygonal 3D representations. The main formats of the family are DepthImage, PointTexture and OctreeImage. DepthImage represents an object by a union of its reference images and corresponding depth maps.
PointTexture represents the object as a set of colored points parameterized by projection onto a regular 2D grid. OctreeImage converts the same data into a hierarchical octree-structured voxel model, a set of compact reference images and a tree of voxel-image correspondence indices. DepthImage and OctreeImage have animated versions, in which the reference images are replaced by video streams. The DIBR formats are well suited to 3D model construction from 3D range-scanning and multiple-source video data. The MPEG-4 framework allows construction of a wide variety of representations from the main DIBR
formats, providing flexible tools for effective work with 3D models.
Compression of the DIBR formats is achieved by applying image (video) compression techniques to the depth maps and reference images (video streams).
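As an informal illustration of the abstract, the three main DIBR formats can be sketched as minimal Python containers. This is a non-normative reading only; all field names are assumptions, not the MPEG-4 AFX node definitions.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical minimal containers mirroring the three DIBR formats
# described in the abstract; field names are illustrative only.

@dataclass
class DepthImage:
    reference_image: list      # 2D color array (H x W x 3)
    depth_map: list            # 2D depth array (H x W)

@dataclass
class PointTexture:
    width: int
    height: int
    # one list of (depth, color) samples per grid cell, so several
    # surface points may project onto the same 2D position
    cells: List[list] = field(default_factory=list)

@dataclass
class OctreeImage:
    resolution: int            # max octree leaves along a cube side
    octree: bytes              # breadth-first child-occupancy bytes
    image_indices: List[int]   # voxel-to-reference-image correspondence
    reference_images: List[DepthImage] = field(default_factory=list)
```

The animated variants described in the abstract would replace `reference_image` with a video stream handle; that is omitted here.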

Claims (48)

1. A depth image-based 3D object representing apparatus comprising:
a shape information generator for generating shape information for an object by dividing an octree containing the object into 8 subcubes and defining the divided subcubes as children nodes until each subcube becomes smaller than a predetermined size;
a reference image determiner for determining a reference image containing a color image for each subcube divided by the shape information generator;
an index generator for generating index information of the reference image corresponding to the shape information;
a node generator for generating octree nodes including shape information, index information and reference image; and an encoder for encoding the octree nodes to output bitstreams.
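The shape-information generator of claim 1 recursively splits a bounding cube into 8 subcubes until occupied subcubes fall below a size threshold. A minimal sketch, under assumed conventions (octant index bits encode x/y/z offsets; a real encoder would emit one child-occupancy byte per internal node rather than nested lists):

```python
def build_octree(points, origin, size, min_size):
    """Recursively split a cube into 8 subcubes until each occupied
    subcube is no larger than min_size; returns a nested child list.
    Illustrative sketch only, not the patented encoder."""
    if not points:
        return None                      # empty subcube: no node
    if size <= min_size:
        return []                        # leaf: occupied voxel
    half = size / 2.0
    children = []
    for i in range(8):                   # octant index -> offset bits
        ox = origin[0] + half * (i & 1)
        oy = origin[1] + half * ((i >> 1) & 1)
        oz = origin[2] + half * ((i >> 2) & 1)
        inside = [p for p in points
                  if ox <= p[0] < ox + half
                  and oy <= p[1] < oy + half
                  and oz <= p[2] < oz + half]
        children.append(build_octree(inside, (ox, oy, oz), half, min_size))
    return children
```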
2. The apparatus according to claim 1, wherein the shape information includes a resolution field in which the maximum number of octree leaves along a side of a cube containing the object is recorded, an octree field in which a sequence of octree node structures is recorded, and an index field in which indices of the reference images corresponding to each octree node are recorded.
3. The apparatus according to claim 1, wherein the reference image is a DepthImage node composed of viewpoint information and a color image corresponding to the viewpoint information.
4. The apparatus according to claim 1, wherein viewpoint information includes a plurality of fields defining an image plane for the object, the respective fields constituting the viewpoint information include a position field in which a position which an image plane is viewed from is recorded, an orientation field in which an orientation which an image plane is viewed from is recorded, a visibility field in which a visibility area from the viewpoint to the image plane is recorded, and a projection method field in which a projection method, selected from an orthographic projection method in which the visibility area is represented by width and height and a perspective projection method in which the visibility area is represented by a horizontal angle and a vertical angle, is recorded.
5. The apparatus according to claim 1, further comprising a preprocessor for preprocessing pixels in a boundary between blocks in the reference image and providing the preprocessed pixels to the reference image determiner, the preprocessor comprising:
an expanding portion for expanding pixels to a background using an average color of blocks and fast decay in intensity; and a compressing portion for performing block-based compression on the reference image to then squeeze distortion into the background.
6. The apparatus according to claim 1, wherein the index generator comprises:
a color point generator for acquiring color points by shifting pixels existing in the reference image by a distance defined in the depth map corresponding thereto;
a point-based representation (PBR) generator for generating an intermediate PBR image by a set of color points;
an image converter for converting the PBR image into an octree image represented by the cube corresponding to each point; and an index information generator for generating index information of the reference image corresponding to each cube.
7. The apparatus according to claim 1, wherein the encoder comprises:
a context determining portion for determining a context of a current octree node on the basis of the number of encoding cycles for the octree node;

a first stage encoding portion for encoding a first predetermined number of nodes by a 0-context model and arithmetic coding while keeping a single probability table with a predetermined number of entries;
a second stage encoding portion for encoding a second predetermined number of nodes following after the first predetermined number of nodes by a 1-context model using a parent node as a context; and a third stage encoding portion for encoding the remaining nodes following after the second predetermined number of nodes by a 2-context model and arithmetic coding using parent and children nodes as contexts, the first stage encoding portion starting coding from uniform distribution, the second stage encoding portion copying the single probability table to all of 1-context model probability tables at the switching moment from the 0-context to the 1-context model, and the third stage encoding portion copying the 1-context model probability tables for a parent node pattern to 2-context model probability tables corresponding to the respective positions at the same parent node pattern at the switching moment from the 1-context to the 2-context model.
8. The apparatus according to claim 7, wherein the second encoding portion comprises:
a probability retrieval part for retrieving the probability of generating the current node in a context from the probability table corresponding to the context;
an arithmetic coder for compressing octrees by a probability sequence containing the retrieved probability; and a table updating part for updating the probability table with a predetermined increment to generation frequencies of the current node in the current context.
9. The apparatus according to claim 7, wherein the third encoding portion comprises:
a first retrieval part for retrieving a parent node of the current node;

a first detection part for detecting a class to which the retrieved parent node belongs and detecting a transform by which the parent node is transformed to a standard node of the detected class;
a second retrieval part for applying the detected transform to the parent node and retrieving a position of the current node in the transformed parent node;
a pattern acquisition part for acquiring a pattern as a combination of the detected class and a position index of the current node;
a second detection part for detecting a necessary probability from entries of the 2-context model probability table corresponding to the acquired pattern;
an arithmetic coder for compressing octrees by a probability sequence containing the retrieved probability; and a table updating part for updating the probability table with a predetermined increment to generation frequencies of the current node in the current context.
10. The apparatus according to claim 7, wherein the encoder further comprises:
a symbol byte recording portion for recording symbol bytes corresponding to the current node on bitstreams if the current node is not a leaf node;
an image index recording part for recording a same reference image index on the bitstreams for subnodes of the current node if all children nodes of the current node have the same reference image index and the parent node of the current node has an "undefined" reference image index, or recording an "undefined" reference image index for subnodes of the current node if the children nodes of the current node have different reference image indices.
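The index-recording rule of claim 10 can be paraphrased as a small decision function. The `UNDEFINED` sentinel and the fallback for an index already determined higher in the tree are assumptions of this sketch:

```python
UNDEFINED = -1  # hypothetical sentinel for an "undefined" image index

def index_to_record(parent_index, child_indices):
    """Decide which reference-image index to write for the subnodes of
    the current node: promote a shared index exactly once (when the
    parent is still undefined), otherwise record "undefined" when the
    children disagree.  Sketch of claim 10 only."""
    same = len(set(child_indices)) == 1
    if same and parent_index == UNDEFINED:
        return child_indices[0]     # promote the shared index
    if not same:
        return UNDEFINED            # children disagree
    return parent_index             # already determined higher up
```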
11. A depth image-based 3D object representing method comprising:

generating shape information for an object by dividing an octree containing the object into 8 subcubes and defining the divided subcubes as children nodes until each subcube becomes smaller than a predetermined size;
determining a reference image containing a color image for each subcube divided by the shape information generator;
generating index information of the reference image corresponding to the shape information;
generating octree nodes including shape information, index information and reference image; and encoding the octree nodes to output bitstreams.
12. The method according to claim 11, wherein the shape information includes a resolution field in which the maximum number of octree leaves along a side of a cube containing the object is recorded, an octree field in which a sequence of octree node structures is recorded, and an index field in which indices of the reference images corresponding to each octree node are recorded.
13. The method according to claim 12, wherein each internal node is represented by a byte and node information recorded in a bit sequence constituting the byte represents presence or absence of children nodes belonging to the internal node.
14. The method according to claim 11, wherein the reference image is a DepthImage node composed of viewpoint information and a color image corresponding to the viewpoint information.
15. The method according to claim 14, wherein viewpoint information includes a plurality of fields defining an image plane for the object, the respective fields constituting the viewpoint information include a position field in which a position which an image plane is viewed from is recorded, an orientation field in which an orientation which an image plane is viewed from is recorded, a visibility field in which a visibility area from the viewpoint to the image plane is recorded, and a projection method field in which a projection method, selected from an orthographic projection method in which the visibility area is represented by width and height and a perspective projection method in which the visibility area is represented by a horizontal angle and a vertical angle, is recorded.
16. The method according to claim 11, wherein the index generating step comprises:

acquiring color points by shifting pixels existing in the reference image by a distance defined in the depth map corresponding thereto;
generating an intermediate point-based representation (PBR) image by a set of color points;
converting the PBR image into an octree image represented by the cube corresponding to each point; and generating index information of the reference image corresponding to each cube.
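The index-generation steps of claim 16 begin by shifting each reference-image pixel by its depth to obtain colored 3D points, then quantizing those points to octree cubes. A toy sketch assuming an orthographic camera along +z and a cubic resolution³ grid (both assumptions of this illustration):

```python
def depth_to_voxels(depth_map, colors, resolution, max_depth):
    """Back-project reference-image pixels by their depths into colored
    points and quantize them to a resolution^3 voxel grid.  Background
    pixels are marked None.  Sketch only."""
    h = len(depth_map)
    w = len(depth_map[0])
    voxels = {}
    for y in range(h):
        for x in range(w):
            d = depth_map[y][x]
            if d is None:            # background pixel: no surface point
                continue
            vx = x * resolution // w
            vy = y * resolution // h
            vz = min(int(d * resolution / max_depth), resolution - 1)
            voxels[(vx, vy, vz)] = colors[y][x]
    return voxels
```

The resulting occupied cubes are what the claim then associates with reference-image indices.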
17. The method according to claim 11, wherein the reference image determining step comprises:
expanding pixels in a boundary to a background using the average color of nodes and fast decay of intensity; and performing block-based compression to then squeeze distortion into the background.
18. The method according to claim 11, wherein the encoding step comprises:
determining a context of a current octree node on the basis of the number of encoding cycles for the octree node;
firstly encoding a first predetermined number of nodes by a 0-context model and arithmetic coding while keeping a single probability table with a predetermined number of entries;

secondly encoding a second predetermined number of nodes following after the first predetermined number of nodes by a 1-context model using a parent node as a context; and thirdly encoding the remaining nodes following after the second predetermined number of nodes by a 2-context model and arithmetic coding using parent and children nodes as contexts, the firstly encoding step being started from uniform distribution, the secondly encoding step being copying the single probability table to all of 1-context model probability tables at the switching moment from the 0-context to the 1-context model, and the thirdly encoding step being copying the 1-context model probability tables for a parent node pattern to 2-context model probability tables corresponding to the respective positions at the same parent node pattern at the switching moment from the 1-context to the 2-context model.
19. The method according to claim 18, wherein the 1-context model is a class of the parent node.
20. The method according to claim 19, wherein a total number of classes is 22, and two nodes belong to a same class when the nodes are connected by an orthogonal transform G generated by a combination of basis transforms m1, m2 and m3, where m1 and m2 are reflections with respect to the planes x=y and y=z, respectively, and m3 is a reflection with respect to the plane x=0.
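The class count in claim 20 can be checked mechanically: the three basis reflections generate the full 48-element symmetry group of the cube, and partitioning the 256 possible child-occupancy bytes into orbits under that group yields 22 classes. A sketch, with an assumed octant-indexing convention (bit i of a byte marks octant i, whose coordinate bits are x = i & 1, y = (i >> 1) & 1, z = (i >> 2) & 1):

```python
def octant_perm(swap_xy=False, swap_yz=False, flip_x=False):
    """Permutation of the 8 octants induced by one basis transform:
    m1 (x <-> y), m2 (y <-> z), m3 (mirror in x)."""
    perm = []
    for i in range(8):
        x, y, z = i & 1, (i >> 1) & 1, (i >> 2) & 1
        if swap_xy:
            x, y = y, x
        if swap_yz:
            y, z = z, y
        if flip_x:
            x ^= 1
        perm.append(x | (y << 1) | (z << 2))
    return tuple(perm)

def symmetry_group():
    """Close the generators m1, m2, m3 under composition; the result
    is the full octahedral symmetry group, of order 48."""
    gens = [octant_perm(swap_xy=True),
            octant_perm(swap_yz=True),
            octant_perm(flip_x=True)]
    group = {tuple(range(8))}
    frontier = list(group)
    while frontier:
        g = frontier.pop()
        for h in gens:
            composed = tuple(h[g[i]] for i in range(8))
            if composed not in group:
                group.add(composed)
                frontier.append(composed)
    return group

def count_classes():
    """Count orbits of the 256 occupancy bytes (the 'classes')."""
    group = symmetry_group()
    seen, classes = set(), 0
    for pattern in range(256):
        if pattern in seen:
            continue
        classes += 1
        for g in group:
            image = 0
            for i in range(8):
                if pattern & (1 << i):
                    image |= 1 << g[i]
            seen.add(image)
    return classes
```

This reproduces the figure of 22 classes recited in the claim (the classical count of cube-vertex 2-colorings up to the full symmetry group).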
21. The method according to claim 18, wherein the 2-context includes a class of the parent node and a position of the current node at the parent node.
22. The method according to claim 18, wherein the second encoding step comprises:

retrieving the probability of generating the current node in a context from the probability table corresponding to the context;
compressing octrees by a probability sequence containing the retrieved probability; and updating the probability table with a predetermined increment to generation frequencies of the current node in a current context.
23. The method according to claim 18, wherein the third encoding step comprises:

retrieving a parent node of the current node;
detecting a class to which the retrieved parent node belongs and detecting transform by which the parent node is transformed to a standard node of the detected class;
applying the detected transform to the parent node and retrieving the position of the current node in the transformed parent node;
applying the transform to the current node and acquiring a pattern as a combination of the detected class and a position index of the current node;
detecting a necessary probability from entries of the probability table corresponding to combination of the detected class and position;
compressing octrees by a probability sequence containing the retrieved probability; and updating the probability table with a predetermined increment to generation frequencies of the current node in a current context.
24. The method according to claim 18, wherein the encoding step further comprises:

fourthly, recording symbol bytes corresponding to the current node on bitstreams if the current node is not a leaf node;
fifthly, recording the same reference image index on the bitstreams for subnodes of the current node if all children nodes of the current node have the same reference image index and the parent node of the current node has an "undefined" reference image index, or recording an "undefined" reference image index for subnodes of the current node if the children nodes of the current node have different reference image indices.
25. A depth image-based 3D object representing apparatus comprising:

an input unit for inputting bitstreams;
a first extractor for extracting octree nodes from the input bitstreams;
a decoder for decoding the octree nodes;
a second extractor for extracting shape information and reference images for a plurality of cubes constituting octrees from the decoded octree nodes; and an object representing unit for representing an object by a combination of the extracted reference images corresponding to the shape information.
26. The apparatus according to claim 25, wherein the decoder comprises:

a context determining portion for determining a context of the current octree node on the basis of the number of decoding cycles for the octree node;
a first stage decoding portion for decoding a first predetermined number of nodes by a 0-context model and arithmetic coding while keeping a single probability table with a predetermined number of entries;
a second stage decoding portion for decoding a second predetermined number of nodes following after the first predetermined number of nodes by a 1-context model using a parent node as a context; and a third stage decoding portion for decoding the remaining nodes following after the second predetermined number of nodes by a 2-context model and arithmetic decoding using parent and children nodes as contexts, the first stage decoding portion starting decoding from uniform distribution, the second stage decoding portion copying the single probability table to all of 1-context model probability tables at the switching moment from the 0-context to the 1-context model, and the third stage decoding portion copying the 1-context model probability tables for a parent node pattern to 2-context model probability tables corresponding to the respective positions at the same parent node pattern at the switching moment from the 1-context to the 2-context model.
27. The apparatus according to claim 26, wherein the 1-context model is a class of the parent node.
28. The apparatus according to claim 27, wherein a total number of classes is 22, and two nodes belong to a same class when the nodes are connected by an orthogonal transform G generated by a combination of basis transforms m1, m2 and m3, where m1 and m2 are reflections with respect to the planes x=y and y=z, respectively, and m3 is a reflection with respect to the plane x=0.
29. The apparatus according to claim 26, wherein the 2-context includes a class of the parent node and a position of the current node at the parent node.
30. The apparatus according to claim 26, wherein the second decoding portion comprises:

a probability retrieval part for retrieving the probability of generating the current node in a context from the probability table corresponding to the context;
an octree compressing part for compressing octrees by a probability sequence containing the retrieved probability; and an updating part for updating the probability table with a predetermined increment to generation frequencies of the current node in a current context.
31. The apparatus according to claim 26, wherein the third decoding portion comprises:

a node retrieval part for retrieving a parent node of the current node;
a transform detection part for detecting a class to which the retrieved parent node belongs and detecting transform by which the parent node is transformed to a standard node of the detected class;

a position retrieval part for applying the detected transform to the parent node and retrieving the position of the current node in the transformed parent node;

a pattern acquisition part for applying the transform to the current node and acquiring a pattern as a combination of the detected class and a position index of the current node;

a probability detection part for detecting a necessary probability from entries of the probability table corresponding to combination of the detected class and position;

an octree compression part for compressing octrees by a probability sequence containing the retrieved probability; and a table updating part for updating the probability table with a predetermined increment to generation frequencies of the current node in a current context.
32. The apparatus according to claim 25, wherein the shape information includes a resolution field in which the maximum number of octree leaves along a side of the cube containing the object is recorded, an octree field in which a sequence of internal node structures is recorded, and an index field in which indices of the reference images corresponding to each internal node are recorded.
33. The apparatus according to claim 32, wherein each internal node is represented by a byte and node information recorded in a bit sequence constituting the byte represents presence or absence of children nodes belonging to the internal node.
34. The apparatus according to claim 25, wherein the reference image is a DepthImage node composed of viewpoint information and a color image corresponding to the viewpoint information.
35. The apparatus according to claim 34, wherein the viewpoint information includes a plurality of fields defining an image plane for the object, the respective fields constituting the viewpoint information include a position field having a position in which an image plane is viewed recorded therein, an orientation field having an orientation in which an image plane is viewed recorded therein, a visibility field having a visibility area from the viewpoint to the image plane recorded therein, and a projection method field having a projection method selected from an orthographic projection method in which the visibility area is represented by width and height, and a perspective projection method in which the visibility area is represented by a horizontal angle and a vertical angle.
36. A depth image-based 3D object representing method comprising:

inputting bitstreams;
extracting octree nodes from the input bitstreams;
decoding the octree nodes;
extracting shape information and reference images for a plurality of cubes constituting octrees from the decoded octree nodes; and representing an object by a combination of the extracted reference images corresponding to the shape information.
37. The method according to claim 36, wherein the decoding step comprises:

determining a context of the current octree node on the basis of the number of decoding cycles for the octree node;

firstly decoding a first predetermined number of nodes by a 0-context model and arithmetic coding while keeping a single probability table with a predetermined number of entries;

secondly decoding a second predetermined number of nodes following after the first predetermined number of nodes by a 1-context model using a parent node as a context; and thirdly decoding the remaining nodes following after the second predetermined number of nodes by a 2-context model and arithmetic decoding using parent and children nodes as contexts, the firstly decoding step being started from uniform distribution, the secondly decoding step being copying the single probability table to all of 1-context model probability tables at the switching moment from the 0-context to the 1-context model, and the thirdly decoding step being copying the 1-context model probability tables for a parent node pattern to 2-context model probability tables corresponding to the respective positions at the same parent node pattern at the switching moment from the 1-context to the 2-context model.
38. The method according to claim 37, wherein the 1-context model is a class of the parent node.
39. The method according to claim 38, wherein a total number of classes is 22, and two nodes belong to the same class when the nodes are connected by an orthogonal transform G generated by a combination of basis transforms m1, m2 and m3, where m1 and m2 are reflections with respect to the planes x=y and y=z, respectively, and m3 is a reflection with respect to the plane x=0.
40. The method according to claim 37, wherein the 2-context includes a class of the parent node and a position of the current node at the parent node.
41. The method according to claim 37, wherein the secondly decoding step comprises:

retrieving the probability of generating the current node in a context from the probability table corresponding to the context;
compressing octrees by a probability sequence containing the retrieved probability; and updating the probability table with a predetermined increment to generation frequencies of the current node in a current context.
42. The method according to claim 37, wherein the thirdly decoding step comprises:

retrieving a parent node of the current node;
detecting a class to which the retrieved parent node belongs and detecting transform by which the parent node is transformed to a standard node of the detected class;
applying the detected transform to the parent node and retrieving the position of the current node in the transformed parent node;
applying the transform to the current node and acquiring a pattern as a combination of the detected class and the position index of the current node;
detecting a necessary probability from entries of the probability table corresponding to combination of the detected class and position;
compressing octrees by a probability sequence containing the retrieved probability; and updating the probability table with a predetermined increment to generation frequencies of the current node in the current context.
43. The method according to claim 36, wherein the shape information includes a resolution field in which the maximum number of octree leaves along a side of the cube containing the object is recorded, an octree field in which a sequence of internal node structures is recorded, and an index field in which indices of the reference images corresponding to each internal node are recorded.
44. The method according to claim 43, wherein each internal node is represented by a byte and node information recorded in a bit sequence constituting the byte represents presence or absence of children nodes belonging to the internal node.
45. The method according to claim 36, wherein the reference image is a DepthImage node composed of viewpoint information and a color image corresponding to the viewpoint information.
46. The method according to claim 45, wherein the viewpoint information includes a plurality of fields defining an image plane for the object, the respective fields constituting the viewpoint information include a position field having a position in which an image plane is viewed recorded therein, an orientation field having an orientation in which an image plane is viewed recorded therein, a visibility field having a visibility area from the viewpoint to the image plane recorded therein, and a projection method field having a projection method selected from an orthographic projection method in which the visibility area is represented by width and height, and a perspective projection method in which the visibility area is represented by a horizontal angle and a vertical angle.
47. A computer-readable recording medium recording a program for executing the depth image-based 3D object representing method defined in claim 29 on a computer.
48. A computer-readable recording medium recording a program for executing the depth image-based 3D object representing method defined in claim 54 on a computer.
CA2514655A 2001-11-27 2002-11-27 Apparatus and method for depth image-based representation of 3-dimensional object Expired - Fee Related CA2514655C (en)

Applications Claiming Priority (11)

Application Number Priority Date Filing Date Title
US33316701P 2001-11-27 2001-11-27
US60/333,167 2001-11-27
US36254502P 2002-03-08 2002-03-08
US60/362,545 2002-03-08
US37656302P 2002-05-01 2002-05-01
US60/376,563 2002-05-01
US39530402P 2002-07-12 2002-07-12
US60/395,304 2002-07-12
KR10-2002-0067970A KR100446635B1 (en) 2001-11-27 2002-11-04 Apparatus and method for depth image-based representation of 3-dimensional object
KR2002-67970 2002-11-04
CA002413056A CA2413056C (en) 2001-11-27 2002-11-27 Apparatus and method for depth image-based representation of 3-dimensional object

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CA002413056A Division CA2413056C (en) 2001-11-27 2002-11-27 Apparatus and method for depth image-based representation of 3-dimensional object

Publications (2)

Publication Number Publication Date
CA2514655A1 (en) 2003-05-27
CA2514655C CA2514655C (en) 2010-05-11

Family

ID=35206795

Family Applications (1)

Application Number Title Priority Date Filing Date
CA2514655A Expired - Fee Related CA2514655C (en) 2001-11-27 2002-11-27 Apparatus and method for depth image-based representation of 3-dimensional object

Country Status (1)

Country Link
CA (1) CA2514655C (en)


Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8745515B2 (en) 2004-04-07 2014-06-03 Nokia Corporation Presentation of large pages on small displays
CN109313820A (en) * 2016-06-14 2019-02-05 松下电器(美国)知识产权公司 Three-dimensional data coding method, coding/decoding method, code device, decoding apparatus
CN109313820B (en) * 2016-06-14 2023-07-04 松下电器(美国)知识产权公司 Three-dimensional data encoding method, decoding method, encoding device, and decoding device
CN110892725A (en) * 2017-07-13 2020-03-17 交互数字Vc控股公司 Method and apparatus for encoding/decoding a point cloud representing a 3D object
CN112438046A (en) * 2018-07-17 2021-03-02 华为技术有限公司 Prediction type signaling and time sequence signaling in Point Cloud Coding (PCC)
CN113574540A (en) * 2019-09-16 2021-10-29 腾讯美国有限责任公司 Point cloud compression method and device
CN113574540B (en) * 2019-09-16 2023-10-31 腾讯美国有限责任公司 Point cloud encoding and decoding method and device and electronic equipment
CN112950753A (en) * 2019-12-11 2021-06-11 腾讯科技(深圳)有限公司 Virtual plant display method, device, equipment and storage medium
CN112950753B (en) * 2019-12-11 2023-09-26 腾讯科技(深圳)有限公司 Virtual plant display method, device, equipment and storage medium
CN111291625B (en) * 2020-01-16 2023-04-18 广东工业大学 Friend recommendation method and system based on face retrieval
CN111291625A (en) * 2020-01-16 2020-06-16 广东工业大学 Friend recommendation method and system based on face retrieval
CN111583348A (en) * 2020-05-09 2020-08-25 维沃移动通信有限公司 Image data encoding method and device, display method and device, and electronic device
CN111583348B (en) * 2020-05-09 2024-03-29 维沃移动通信有限公司 Image data encoding method and device, image data displaying method and device and electronic equipment

Also Published As

Publication number Publication date
CA2514655C (en) 2010-05-11

Similar Documents

Publication Publication Date Title
RU2002131792A (en) DEVICE AND METHOD FOR SUBMITTING A THREE-DIMENSIONAL OBJECT BASED ON IMAGES WITH DEPTH
JP4629005B2 (en) 3D object representation device based on depth image, 3D object representation method and recording medium thereof
RU2237283C2 (en) Device and method for presenting three-dimensional object on basis of images having depth
RU2237284C2 (en) Method for generating structure of assemblies, meant for presenting three-dimensional objects with use of images having depth
US8217941B2 (en) Apparatus and method for depth image-based representation of 3-dimensional object
JP4832975B2 (en) A computer-readable recording medium storing a node structure for representing a three-dimensional object based on a depth image
EP1566769B1 (en) Method and apparatus for encoding and decoding 3D data
RU2267161C2 (en) Method for encoding and decoding given three-dimensional objects and device for realization of said method
JP2011028757A (en) Method and apparatus for representing and searching for an object in an image, and computer-readable storage medium for storing computer-executable process step for executing the method
RU2001118222A (en) Hierarchical image-based representation of a fixed and animated three-dimensional object, method and apparatus for using this representation to visualize an object
WO1997032281A1 (en) Wavelet based data compression
CN113518226A (en) G-PCC point cloud coding improvement method based on ground segmentation
CA2514655A1 (en) Apparatus and method for depth image-based representation of 3-dimensional object
Levkovich-Maslyuk et al. Depth image-based representation and compression for static and animated 3-D objects
Marshall Application of image contours to three aspects of image processing. compression, shape recognition and stereopsis
Lyness et al. Low-cost model reconstruction from image sequences
Nordland Compression of 3D media for internet transmission
WO2022025770A2 (en) Method for compressing image data having depth information
WO2022131946A2 (en) Devices and methods for spatial quantization for point cloud compression
Wang et al. A Space Carving Based Reconstruction Method Using Discrete Viewing Edges
Ning Applications of data compression to three-dimensional scalar field visualization
WO2003056518A1 (en) Image compression

Legal Events

Date Code Title Description
EEER Examination request
MKLA Lapsed

Effective date: 20171127