WO2019070778A1 - METHOD AND DEVICE FOR GENERATING POINTS OF A 3D SCENE - Google Patents
METHOD AND DEVICE FOR GENERATING POINTS OF A 3D SCENE
- Publication number
- WO2019070778A1 (PCT/US2018/054057)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- depth
- point
- points
- scene
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/529—Depth or shape recovery from texture
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
- G06T15/205—Image-based rendering
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/56—Particle system, point based geometry or rendering
Definitions
- a volume unit being associated with a point of the 3D scene, the depth difference corresponding to a number of volume units, the number of generated points corresponding to the depth difference minus 1.
- attributes to be associated with the at least one additional point are determined, the attributes being determined from attributes associated with the current point and with the adjacent pixel.
- the points of the 3D scene are part of a point cloud.
- FIG. 13 shows an example of a process for decoding a bitstream to obtain the decoded point cloud representing the 3D object of figure 1, in accordance with a non-limiting embodiment of the present principles.
- the part of each image that receives attributes from the point cloud is shown as a grey area while the part of the image that does not receive attributes from the point cloud is shown as a white area; said white area may be filled with a default value, like the free space between images.
- the data associated with the pixels of the images 211 to 21n may correspond to texture information and/or depth information.
- a first picture 21 is used to store the texture information (e.g. 3 components RGB or YUV) and a second picture 21 with the same arrangement of images 211 to 21m is used to store the depth information, both pictures representing the point cloud at time 't'.
- LLE (Locally-Linear Embedding), which corresponds to a mathematical operation of dimension reduction, here applied to convert/transform from 3D to 2D, the parameters representative of the LLE comprising the transformation coefficients.
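Such an LLE-based 3D-to-2D projection can be illustrated with an off-the-shelf implementation. The sketch below uses scikit-learn's LocallyLinearEmbedding purely as a stand-in: the library, the parameter values and the variable names are assumptions, since the passage above only states that the transformation coefficients of the LLE parameterise the projection.

```python
# A minimal sketch of LLE as a 3D-to-2D dimension reduction, assuming
# scikit-learn is an acceptable stand-in for the (unspecified) implementation.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

points_3d = np.random.rand(1000, 3)  # a toy patch of 3D points

# n_neighbors and n_components=2 are illustrative choices; the resulting
# 2D coordinates would index pixels of an image packed into the picture.
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2)
points_2d = lle.fit_transform(points_3d)
```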
- Each image advantageously has a rectangular shape to ease the packing process on the picture 21.
- volume elements different from the cube may be associated with the points of the 3D scene, e.g. a sphere.
- the expression "volume unit" will be used to express the volume element associated with a point, a volume unit corresponding for example to a voxel of size 1 by 1 by 1, e.g. 1 mm by 1 mm by 1 mm (with a volume of 1 mm³), or 1 cm by 1 cm by 1 cm (with a volume of 1 cm³) or any other dimensions.
- the cubes/points 802 to 809 correspond to the neighborhood of the cube/point 601 as their corresponding pixels 612 to 619 of the associated depth image correspond to the adjacent pixels of the depth image surrounding the pixel 611 corresponding to the cube/point 801.
- the depth image associated with the 3D scene may be used to determine where hole(s) may be located in area(s) of the 3D scene.
- the part 8B of the depth image associated with (and obtained from) the part 6A of the points of the 3D object 10 is processed and analysed as explained hereinbelow to obtain the location of the hole(s).
- the depth information associated with the pixels 811 to 819 is used to obtain the location of the hole(s) 6001, 6002.
- the block 6C of Figure 6B shows the depth information which is associated with the pixels 811 to 819.
- each point of the 3D scene may be processed as a current point and its depth compared with the depth of its neighborhood (i.e. in the space of the associated depth image).
- additional cubes/points may be generated between two cubes/points having a depth difference d (in the depth image) fulfilling equation 1.
- the additional cubes/points may be generated by computing their associated depth and texture from the depth and texture associated with the cubes used to determine the hole (e.g. by interpolation of the points/cubes used to determine the presence of a hole).
- the number of generated additional points may be a function of the depth difference, for example equal to d minus 1 (d − 1), when the depth difference is expressed as a number of volume units.
- the weight associated with a texture value when interpolating a texture value to be associated with a generated additional point may be inversely proportional to the distance (depth) separating the generated additional point from the point used to generate it.
- a weight equal to 2 may be associated with the texture of the point 601 and a weight equal to 1 may be associated with the texture of the point 604, the distance (depth difference) between the additional point 6001 and the point 601 being equal to 1 volume unit while the distance (depth difference) between the additional point 6001 and the point 604 is equal to 2 volume units.
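Under these example weights, a normalized weighted average (a sketch; the text only specifies the weights, not the normalization) would give the additional point 6001 the texture T(6001) = (2·T(601) + 1·T(604)) / (2 + 1), so the nearer point 601 contributes twice as much as the farther point 604.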
- the greatest depth difference dmax (that is less than or equal to Th2) is selected among all depth differences d612 to d619 of the block of pixels 6B and only the adjacent pixel 614 corresponding to the greatest depth difference dmax among all adjacent pixels 612 to 619 is considered with the current pixel 611 to generate additional points/cubes (from the corresponding points/cubes 601 and 604).
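Taken together, the bullets above describe a simple hole-filling pass over the depth image: for each pixel, find the greatest admissible depth difference among its eight neighbours, then generate d − 1 additional points with linearly interpolated depth and inverse-distance-weighted texture. The sketch below is a minimal, hedged illustration: the thresholds th1/th2, the array layout and all names are hypothetical, and "equation 1" (not reproduced here) is approximated by the band th1 < d ≤ th2 suggested by the bullet above.

```python
import numpy as np

# 8-connected neighbourhood of a pixel in the depth image.
NEIGHBOURS = [(-1, -1), (-1, 0), (-1, 1),
              ( 0, -1),          ( 0, 1),
              ( 1, -1), ( 1, 0), ( 1, 1)]

def generate_additional_points(depth, tex, th1=1, th2=8):
    """Sketch of the described pass: depth is an (h, w) integer array in
    volume units, tex an (h, w, 3) texture image. Returns the generated
    points as (row, col, depth, rgb) tuples."""
    h, w = depth.shape
    extra = []
    for r in range(h):
        for c in range(w):
            best = None  # (d, rr, cc) of the greatest admissible difference
            for dr, dc in NEIGHBOURS:
                rr, cc = r + dr, c + dc
                if not (0 <= rr < h and 0 <= cc < w):
                    continue
                d = int(depth[rr, cc]) - int(depth[r, c])
                # Assumed stand-in for "equation 1": a hole is signalled
                # when th1 < d <= th2.
                if th1 < d <= th2 and (best is None or d > best[0]):
                    best = (d, rr, cc)
            if best is None:
                continue
            d, rr, cc = best
            # Generate d - 1 additional points, one per intermediate
            # volume unit between the two existing points.
            for k in range(1, d):
                # Texture weights are inversely proportional to the distance
                # to each generating point, i.e. (d - k) : k, which for
                # d = 3, k = 1 reproduces the 2 : 1 example above.
                rgb = ((d - k) * tex[r, c].astype(np.float64)
                       + k * tex[rr, cc].astype(np.float64)) / d
                extra.append((r, c, int(depth[r, c]) + k, rgb))
    return extra
```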
- the apparatus 9 comprises the following elements, which are linked together by a data and address bus 91:
- a power supply, e.g. a battery.
- a local memory, e.g. a video memory or a RAM (or Random Access Memory).
- the point cloud 103 may be represented in a picture or in one or more groups of temporally successive pictures, each picture comprising a representation of the point cloud at a determined time 't'.
- the one or more groups of temporally successive pictures may form a video representative of at least a part of the point cloud 103.
- the encoded data of the picture 20 is decoded by a decoder DEC1.
- the decoder DEC1 is compliant with the encoder ENC1, for example a legacy decoder such as:
- AVC, also named MPEG-4 AVC or H.264
- the attributes, encoded at operation 120, are decoded and retrieved, at operation 121, for example stored in a buffer memory, for use in the generation of a reference picture 125 associated with the picture 20.
- a reference picture 135 (that may be identical to the reference picture 125 of figure 12) may be obtained from the picture by fusing the decoded first attributes obtained from the operation 121 with the second attributes obtained from the operation 123.
- the reference picture may comprise the same structure as the picture, i.e. the same spatial arrangement of the set of images but with different data, i.e. with the decoded first attributes and the obtained second attributes.
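As a rough illustration of this fusing step, the sketch below fills a reference picture by keeping the decoded first attributes wherever the picture is occupied by projected point-cloud data and taking the second attributes elsewhere. The occupancy mask and all names are assumptions; the text above does not specify how operations 121 and 123 delimit the two attribute sets.

```python
import numpy as np

def fuse_reference_picture(first_attrs, second_attrs, occupied):
    """Sketch: first_attrs/second_attrs are (h, w, c) attribute planes,
    occupied an (h, w) boolean mask of pixels that received point-cloud
    data. Occupied pixels keep the decoded first attributes; the rest
    are filled with the second attributes."""
    return np.where(occupied[..., np.newaxis], first_attrs, second_attrs)
```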
- implementations may produce a variety of signals formatted to carry information that may be, for example, stored or transmitted.
- the information may include, for example, instructions for performing a method, or data produced by one of the described implementations.
- a signal may be formatted to carry as data the rules for writing or reading the syntax of a described embodiment, or to carry as data the actual syntax-values written by a described embodiment.
- Such a signal may be formatted, for example, as an electromagnetic wave (for example, using a radio frequency portion of spectrum) or as a baseband signal.
- the formatting may include, for example, encoding a data stream and modulating a carrier with the encoded data stream.
- the information that the signal carries may be, for example, analog or digital information.
- the signal may be transmitted over a variety of different wired or wireless links, as is known.
- the signal may be stored on a processor-readable medium.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computing Systems (AREA)
- Image Generation (AREA)
- Processing Or Creating Images (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Priority Applications (8)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| RU2020115158A RU2788439C2 (ru) | 2017-10-06 | 2018-10-03 | Method and device for generating points of a three-dimensional (3D) scene |
| CN201880076258.6A CN111386556B (zh) | 2017-10-06 | 2018-10-03 | Method and apparatus for generating points of a 3D scene |
| JP2020518785A JP7407703B2 (ja) | 2017-10-06 | 2018-10-03 | Method and device for generating points of a 3D scene |
| US16/753,787 US11830210B2 (en) | 2017-10-06 | 2018-10-03 | Method and device for generating points of a 3D scene |
| KR1020207012438A KR102537420B1 (ko) | 2017-10-06 | 2018-10-03 | Method and device for generating points of a 3D scene |
| EP18786924.3A EP3692509B1 (en) | 2017-10-06 | 2018-10-03 | Method and device for generating points of a 3d scene |
| DK18786924.3T DK3692509T3 (da) | 2017-10-06 | 2018-10-03 | Method and device for generating points of a 3D scene |
| BR112020006530-7A BR112020006530A2 (pt) | 2017-10-06 | 2018-10-03 | Method and device for generating points of a 3D scene |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| EP17306345.4A EP3467782A1 (en) | 2017-10-06 | 2017-10-06 | Method and device for generating points of a 3d scene |
| EP17306345.4 | 2017-10-06 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2019070778A1 (en) | 2019-04-11 |
Family
ID=60143653
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2018/054057 Ceased WO2019070778A1 (en) | METHOD AND DEVICE FOR GENERATING POINTS OF A 3D SCENE | 2017-10-06 | 2018-10-03 |
Country Status (9)
| Country | Link |
|---|---|
| US (1) | US11830210B2 (en) |
| EP (2) | EP3467782A1 (en) |
| JP (1) | JP7407703B2 (en) |
| KR (1) | KR102537420B1 (en) |
| CN (1) | CN111386556B (en) |
| BR (1) | BR112020006530A2 (en) |
| DK (1) | DK3692509T3 (en) |
| HU (1) | HUE061036T2 (en) |
| WO (1) | WO2019070778A1 (en) |
Families Citing this family (12)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10907954B2 (en) * | 2018-09-28 | 2021-02-02 | Hand Held Products, Inc. | Methods and systems for measuring dimensions of a 2-D object |
| GB2584119B (en) * | 2019-05-22 | 2022-11-02 | Sony Interactive Entertainment Inc | Content coding system and method |
| US20230164353A1 (en) * | 2020-04-22 | 2023-05-25 | Lg Electronics Inc. | Point cloud data processing device and processing method |
| US12423871B2 (en) | 2020-09-03 | 2025-09-23 | Lg Electronics Inc. | Point cloud data transmission device, point cloud data transmission method, point cloud data reception device, and point cloud data reception method |
| US11823327B2 (en) | 2020-11-19 | 2023-11-21 | Samsung Electronics Co., Ltd. | Method for rendering relighted 3D portrait of person and computing device for the same |
| WO2022119304A1 (ko) * | 2020-12-01 | 2022-06-09 | Hyundai Motor Company | Point cloud coding apparatus and method using adaptive dead-zone quantization |
| CN116724556A (zh) * | 2021-01-06 | 2023-09-08 | LG Electronics Inc. | Point cloud data transmission device and method, and point cloud data reception device and method |
| KR102665543B1 (ko) | 2021-02-22 | 2024-05-16 | Electronics and Telecommunications Research Institute | Apparatus and method for generating a depth map from multi-view images |
| CN113781653B (zh) * | 2021-08-17 | 2022-09-23 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Object model generation method and apparatus, electronic device, and storage medium |
| US11756281B1 (en) * | 2023-03-14 | 2023-09-12 | Illuscio, Inc. | Systems and methods for splat filling a three-dimensional image using semi-measured data |
| US11935209B1 (en) | 2023-07-24 | 2024-03-19 | Illuscio, Inc. | Systems and methods for dynamic backfilling of a three-dimensional object |
| JP2025058733A (ja) * | 2023-09-28 | 2025-04-09 | Canon Inc. | Generation device, playback device, information processing method, and program |
Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20090174710A1 (en) * | 2008-01-08 | 2009-07-09 | Samsung Electronics Co., Ltd. | Modeling method and apparatus |
Family Cites Families (34)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7015926B2 (en) * | 2004-06-28 | 2006-03-21 | Microsoft Corporation | System and process for generating a two-layer, 3D representation of a scene |
| US7576737B2 (en) * | 2004-09-24 | 2009-08-18 | Konica Minolta Medical & Graphic, Inc. | Image processing device and program |
| RU2407224C2 (ru) | 2005-04-19 | 2010-12-20 | Koninklijke Philips Electronics N.V. | Depth perception |
| CN101657825B (zh) * | 2006-05-11 | 2014-02-19 | PrimeSense Ltd. | Modeling of humanoid forms from depth maps |
| US9645240B1 (en) * | 2010-05-10 | 2017-05-09 | Faro Technologies, Inc. | Method for optically scanning and measuring an environment |
| US20110304618A1 (en) * | 2010-06-14 | 2011-12-15 | Qualcomm Incorporated | Calculating disparity for three-dimensional images |
| JP5858380B2 (ja) * | 2010-12-03 | 2016-02-10 | Nagoya University | Virtual viewpoint image synthesis method and virtual viewpoint image synthesis system |
| KR101210625B1 (ko) * | 2010-12-28 | 2012-12-11 | KT Corporation | Hole-filling method and 3D video system performing the same |
| CN102055982B (zh) | 2011-01-13 | 2012-06-27 | Zhejiang University | 3D video encoding and decoding method and device |
| US9053571B2 (en) * | 2011-06-06 | 2015-06-09 | Microsoft Corporation | Generating computer models of 3D objects |
| US9471988B2 (en) * | 2011-11-02 | 2016-10-18 | Google Inc. | Depth-map generation for an input image using an example approximate depth-map associated with an example similar image |
| US9282915B2 (en) * | 2011-11-29 | 2016-03-15 | St. Jude Medical, Atrial Fibrillation Division, Inc. | Method and system for generating and/or repairing a surface model of a geometric structure |
| WO2013111994A1 (en) * | 2012-01-26 | 2013-08-01 | Samsung Electronics Co., Ltd. | Image processing method and apparatus for 3d video |
| US8712147B2 (en) * | 2012-02-03 | 2014-04-29 | Harris Corporation | Fractal method for detecting and filling data gaps within LiDAR data |
| WO2014016895A1 (ja) * | 2012-07-23 | 2014-01-30 | Fujitsu Limited | Shape data generation program, shape data generation method, and shape data generation device |
| US9811880B2 (en) * | 2012-11-09 | 2017-11-07 | The Boeing Company | Backfilling points in a point cloud |
| KR20150117662A (ko) * | 2013-02-12 | 2015-10-20 | Thomson Licensing | Method and device for enriching the content of a depth map |
| US9756359B2 (en) * | 2013-12-16 | 2017-09-05 | Qualcomm Incorporated | Large blocks and depth modeling modes (DMM'S) in 3D video coding |
| US9171403B2 (en) * | 2014-02-13 | 2015-10-27 | Microsoft Technology Licensing, Llc | Contour completion for augmenting surface reconstructions |
| US9292961B1 (en) * | 2014-08-26 | 2016-03-22 | The Boeing Company | System and method for detecting a structural opening in a three dimensional point cloud |
| US9792531B2 (en) * | 2015-09-16 | 2017-10-17 | Siemens Healthcare Gmbh | Intelligent multi-scale medical image landmark detection |
| GB2543749A (en) * | 2015-10-21 | 2017-05-03 | Nokia Technologies Oy | 3D scene rendering |
| CN105825544B (zh) * | 2015-11-25 | 2019-08-20 | Vivo Mobile Communication Co., Ltd. | Image processing method and mobile terminal |
| US20170186223A1 (en) * | 2015-12-23 | 2017-06-29 | Intel Corporation | Detection of shadow regions in image depth data caused by multiple image sensors |
| US10192103B2 (en) * | 2016-01-15 | 2019-01-29 | Stereovision Imaging, Inc. | System and method for detecting and removing occlusions in a three-dimensional image |
| WO2017162594A1 (en) * | 2016-03-21 | 2017-09-28 | Thomson Licensing | Dibr with depth map preprocessing for reducing visibility of holes by locally blurring hole areas |
| TW201805894A (zh) * | 2016-05-06 | 2018-02-16 | National Taiwan University | Three-dimensional rendering method and three-dimensional graphics processing device |
| US10074160B2 (en) * | 2016-09-30 | 2018-09-11 | Disney Enterprises, Inc. | Point cloud noise and outlier removal for image-based 3D reconstruction |
| US9972067B2 (en) * | 2016-10-11 | 2018-05-15 | The Boeing Company | System and method for upsampling of sparse point cloud for 3D registration |
| WO2018073829A1 (en) * | 2016-10-20 | 2018-04-26 | Robo-Team Home Ltd. | Human-tracking robot |
| US10176589B2 (en) * | 2017-01-31 | 2019-01-08 | Mitsubishi Electric Research Labroatories, Inc. | Method and system for completing point clouds using planar segments |
| CN108694740A (zh) * | 2017-03-06 | 2018-10-23 | Sony Corporation | Information processing device, information processing method, and user equipment |
| US10803561B2 (en) * | 2017-06-02 | 2020-10-13 | Wisconsin Alumni Research Foundation | Systems, methods, and media for hierarchical progressive point cloud rendering |
| US10509413B2 (en) * | 2017-09-07 | 2019-12-17 | GM Global Technology Operations LLC | Ground reference determination for autonomous vehicle operations |
2017
- 2017-10-06 EP EP17306345.4A patent/EP3467782A1/en not_active Withdrawn
2018
- 2018-10-03 CN CN201880076258.6A patent/CN111386556B/zh active Active
- 2018-10-03 KR KR1020207012438A patent/KR102537420B1/ko active Active
- 2018-10-03 JP JP2020518785A patent/JP7407703B2/ja active Active
- 2018-10-03 HU HUE18786924A patent/HUE061036T2/hu unknown
- 2018-10-03 US US16/753,787 patent/US11830210B2/en active Active
- 2018-10-03 WO PCT/US2018/054057 patent/WO2019070778A1/en not_active Ceased
- 2018-10-03 BR BR112020006530-7A patent/BR112020006530A2/pt unknown
- 2018-10-03 EP EP18786924.3A patent/EP3692509B1/en active Active
- 2018-10-03 DK DK18786924.3T patent/DK3692509T3/da active
Patent Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20090174710A1 (en) * | 2008-01-08 | 2009-07-09 | Samsung Electronics Co., Ltd. | Modeling method and apparatus |
Non-Patent Citations (7)
| Title |
|---|
| BRIAN CURLESS ET AL: "A volumetric method for building complex models from range images", COMPUTER GRAPHICS PROCEEDINGS. SIGGRAPH '96, ACM, NEW YORK, US, August 1996 (1996-08-01), pages 303 - 312, XP058220108, ISBN: 978-0-89791-746-9, DOI: 10.1145/237170.237269 * |
| DAEYOUNG KIM ET AL: "High-quality depth map up-sampling robust to edge noise of range sensors", IMAGE PROCESSING (ICIP), 2012 19TH IEEE INTERNATIONAL CONFERENCE ON, IEEE, 30 September 2012 (2012-09-30), pages 553 - 556, XP032333235, ISBN: 978-1-4673-2534-9, DOI: 10.1109/ICIP.2012.6466919 * |
| G.H. TARBOX ET AL: "IVIS: an integrated volumetric inspection system", PROCEEDINGS OF THE 1994 SECOND CAD-BASED VISION WORKSHOP: FEBRUARY 8 - 11, 1994, CHAMPION, PENNSYLVANIA, January 1994 (1994-01-01), Piscataway, NJ, USA, pages 220 - 227, XP055480240, ISBN: 978-0-8186-5310-0, DOI: 10.1109/CADVIS.1994.284498 * |
| JIANXIONG XIAO ET AL: "Reconstructing the World's Museums", 7 October 2012, COMPUTER VISION ECCV 2012, SPRINGER BERLIN HEIDELBERG, BERLIN, HEIDELBERG, PAGE(S) 668 - 681, ISBN: 978-3-642-33717-8, XP047018590 * |
| JULIEN RICARD ET AL: "CGI-based dynamic point cloud test content", 117. MPEG MEETING; 16-1-2017 - 20-1-2017; GENEVA; (MOTION PICTURE EXPERT GROUP OR ISO/IEC JTC1/SC29/WG11),, no. m40050, 12 January 2017 (2017-01-12), XP030068395 * |
| MATTHEW BERGER ET AL.: "State of the Art in Surface Reconstruction from Point Clouds", STATE OF THE ART REPORT, 2014 |
| MATTHEW BERGER ET AL: "State of the Art in Surface Reconstruction from Point Clouds", EUROGRAPHICS STAR REPORT, April 2014 (2014-04-01), pages 161 - 185, XP055367964, Retrieved from the Internet <URL:https://hal.inria.fr/docs/01/01/77/00/PDF/star_author.pdf> [retrieved on 20170426], DOI: 10.2312/egst.20141040> * |
Also Published As
| Publication number | Publication date |
|---|---|
| HUE061036T2 (hu) | 2023-05-28 |
| BR112020006530A2 (pt) | 2020-10-06 |
| EP3467782A1 (en) | 2019-04-10 |
| US20200258247A1 (en) | 2020-08-13 |
| KR102537420B1 (ko) | 2023-05-26 |
| JP7407703B2 (ja) | 2024-01-04 |
| DK3692509T3 (da) | 2023-01-09 |
| KR20200057077A (ko) | 2020-05-25 |
| EP3692509B1 (en) | 2022-12-07 |
| EP3692509A1 (en) | 2020-08-12 |
| RU2020115158A (ru) | 2021-11-08 |
| JP2020536325A (ja) | 2020-12-10 |
| US11830210B2 (en) | 2023-11-28 |
| CN111386556B (zh) | 2024-03-12 |
| CN111386556A (zh) | 2020-07-07 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11830210B2 (en) | Method and device for generating points of a 3D scene | |
| US20200273258A1 (en) | Method and device for modifying attributes of points of a 3d scene | |
| US11019363B2 (en) | Method and device for encoding a point cloud | |
| KR102594003B1 (ko) | 볼류메트릭 비디오를 인코딩/디코딩하기 위한 방법, 장치 및 스트림 | |
| US20190108655A1 (en) | Method and apparatus for encoding a point cloud representing three-dimensional objects | |
| US11508041B2 (en) | Method and apparatus for reconstructing a point cloud representing a 3D object | |
| US20200302653A1 (en) | A method and apparatus for encoding/decoding the geometry of a point cloud representing a 3d object | |
| EP3429206A1 (en) | Method and device for encoding a point cloud | |
| US20200211232A1 (en) | A method and apparatus for encoding/decoding a point cloud representing a 3d object | |
| US20200302652A1 (en) | A method and apparatus for encoding/decoding a colored point cloud representing the geometry and colors of a 3d object | |
| US20210166435A1 (en) | Method and apparatus for encoding/decoding the geometry of a point cloud representing a 3d object | |
| US20200296427A1 (en) | A method and apparatus for encoding/decoding the colors of a point cloud representing a 3d object | |
| RU2788439C2 (ru) | Method and device for generating points of a three-dimensional (3D) scene |
| HK40036511B (en) | Method and device for generating points of a 3d scene | |
| HK40036511A (en) | Method and device for generating points of a 3d scene | |
| CN120092450A (zh) | Encoding/decoding point cloud geometry data |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18786924; Country of ref document: EP; Kind code of ref document: A1 |
| | ENP | Entry into the national phase | Ref document number: 2020518785; Country of ref document: JP; Kind code of ref document: A |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | ENP | Entry into the national phase | Ref document number: 20207012438; Country of ref document: KR; Kind code of ref document: A |
| | ENP | Entry into the national phase | Ref document number: 2018786924; Country of ref document: EP; Effective date: 20200506 |
| | REG | Reference to national code | Ref country code: BR; Ref legal event code: B01A; Ref document number: 112020006530; Country of ref document: BR |
| | ENP | Entry into the national phase | Ref document number: 112020006530; Country of ref document: BR; Kind code of ref document: A2; Effective date: 20200331 |