WO2024046849A1 - Transmission of a missing attribute value for a rendered view of a volumetric scene - Google Patents

Transmission of a missing attribute value for a rendered view of a volumetric scene

Info

Publication number
WO2024046849A1
Authority
WO
WIPO (PCT)
Prior art keywords
color
value
volumetric scene
color value
values
Prior art date
Application number
PCT/EP2023/073171
Other languages
English (en)
Inventor
Franck Thudor
Bertrand Chupeau
Remy Gendrot
Francois-Louis Tariolle
Original Assignee
Interdigital Ce Patent Holdings, Sas
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Interdigital Ce Patent Holdings, Sas filed Critical Interdigital Ce Patent Holdings, Sas
Publication of WO2024046849A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/597 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/15 Processing image signals for colour aspects of image signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/161 Encoding, multiplexing or demultiplexing different image signal components
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/172 Processing image signals image signals comprising non-image signal components, e.g. headers or format information
    • H04N13/178 Metadata, e.g. disparity information
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46 Embedding additional information in the video signal during the compression process
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/282 Image signal generators for generating image signals corresponding to three or more geometrical viewpoints, e.g. multi-view systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136 Incoming video signal characteristics or properties
    • H04N19/14 Coding unit complexity, e.g. amount of activity or edge presence estimation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2213/00 Details of stereoscopic systems
    • H04N2213/003 Aspects relating to the "2D+depth" image format

Definitions

  • The present principles generally relate to the domain of encoding, transmitting and rendering volumetric scenes, for example when rendered on end-user devices such as mobile devices or Head-Mounted Displays (HMDs) like see-through glasses.
  • The present principles relate to attributing a color value to pixels of a viewport image of a volumetric scene when this color value cannot be retrieved from the representation of the volumetric scene.
  • The present principles apply to various rendering attributes like reflectance, transparency, material or shininess.
  • A volumetric scene is a three-dimensional (3D) scene that has been captured, for example as a multi-view plus depth (MVD) image using a collection of cameras, or modeled, for example, as a point cloud or a mesh representation.
  • Technologies to acquire a volumetric scene are numerous and can be mixed together.
  • The key property of a volumetric scene is that it can be rendered from points of view different from those used for the capture (and/or the modeling).
  • Source content type, compression standard, rendering device, and synthesis method may differ from one solution to another. However, in any case, when it comes to synthesizing a viewport at the decoder side, a virtual camera located in the 3D space of the volumetric scene is used to project the available information onto a viewport image to be rendered.
  • Some pixels of the viewport image may have no projected data for a given rendering attribute (also called a pixel attribute).
  • The missing pixel attribute value may be depth, color or any other rendering attribute (e.g. reflectance, transparency or shininess).
  • According to the present principles, rendering attribute values, like color values, are transmitted to the renderer to be used as suggested values for pixels for which the attribute is missing.
  • For example, when rendering an image of a volumetric scene, some pixels may lack color information.
  • In such a case, the present principles provide a suggested default color.
  • The present principles apply to various rendering attributes like reflectance, transparency, material or shininess.
  • In an embodiment, an attribute value is associated with a region of the 3D space.
  • In another embodiment, a quality level linked to the default value helps the renderer select a filling method.
  • The present principles relate to a method comprising encoding, in a data stream, a representation of a volumetric scene and metadata comprising a color value indicating to a renderer to set missing color values to the color value when rendering a viewport image of the volumetric scene.
  • In an embodiment, the color value is associated with a quality level indicating to the renderer a visibility level of visual artifacts when setting missing color values to the color value.
  • The color value may be determined as a function of color values of pixels of a multi-view image used to generate the representation of the volumetric scene.
  • In an embodiment, the metadata comprises at least two color values, each given color value being associated with a region of the volumetric scene, indicating to the renderer to use the given color value for parts of the viewport image representing that region of the volumetric scene.
  • The present principles apply to color values and to any other rendering attribute value, like reflectance, transparency, shininess, material, etc.
  • The present principles also relate to a device comprising a memory associated with a processor configured to implement the method above.
  • The present principles also relate to a method comprising obtaining, from a data stream, a representation of a volumetric scene and metadata comprising a color value; and rendering a viewport image of the volumetric scene, setting missing color values to the color value.
  • In an embodiment, the color value is associated with a quality level indicating to the renderer a visibility level of visual artifacts when setting missing color values to the color value.
  • The color value may be determined as a function of color values of pixels of a multi-view image used to generate the representation of the volumetric scene.
  • In an embodiment, the metadata comprises at least two color values, each given color value being associated with a region of the volumetric scene, indicating to the renderer to use the given color value for parts of the viewport image representing that region of the volumetric scene.
  • The present principles apply to color values and to any other rendering attribute value, like reflectance, transparency, shininess, material, etc.
  • The present principles also relate to a device comprising a memory associated with a processor configured to implement the method above.
  • The present principles also relate to a data stream comprising a representation of a volumetric scene and metadata comprising a color value indicating to a renderer to set missing color values to the color value when rendering a viewport image of the volumetric scene.
  • In an embodiment, the color value is associated with a quality level indicating to the renderer a visibility level of visual artifacts when setting missing color values to the color value.
  • The color value may be determined as a function of color values of pixels of a multi-view image used to generate the representation of the volumetric scene.
  • In an embodiment, the metadata comprises at least two color values, each given color value being associated with a region of the volumetric scene, indicating to the renderer to use the given color value for parts of the viewport image representing that region of the volumetric scene.
  • The present principles apply to color values and to any other rendering attribute value, like reflectance, transparency, shininess, material, etc.
  • FIG. 1 illustrates a generic encoding/decoding flowchart of a volumetric scene according to the present principles
  • FIG. 2 shows a viewport image of a volumetric scene in which some color information is missing
  • FIG. 3 shows an example architecture of an engine which may be configured to implement the encoding and rendering present principles
  • FIG. 4 shows an example of an embodiment of the syntax of a data stream encoding a volumetric scene description according to the present principles.

5. Detailed description of embodiments

  • Each block represents a circuit element, module, or portion of code which comprises one or more executable instructions for implementing the specified logical function(s).
  • The function(s) noted in the blocks may occur out of the order noted. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending on the functionality involved.
  • A volumetric scene is a three-dimensional (3D) scene prepared to be rendered from points of view belonging to a region of the 3D space.
  • Figure 1 illustrates a generic encoding/decoding flowchart of a volumetric scene.
  • A 3D scene is captured at a step 11.
  • The 3D scene may be captured, for example as a multi-view plus depth (MVD) image using a collection of cameras, or modeled, for example, as a point cloud or a mesh representation.
  • Technologies to acquire a volumetric scene are numerous and can be mixed together to generate a model 12 of the 3D scene.
  • Model 12 of the 3D scene is encoded as a volumetric scene, that is, a 3D scene meant to be observed from different points of view, for example from any location within a determined region of the 3D space (or from the entire 3D space).
  • Encoding step 13 generates a bitstream 14 which may be stored in a memory or transmitted at a step 15 over a network by a transmitter. A bitstream 16 is then received by a receiver. It is well known that bitstreams 14 and 16 may slightly differ because of transmission issues (for instance, some packets may have been lost or received too late to be decoded in real time). At a step 17, received bitstream 16 is decoded to generate a model 18. Model 18 may differ from model 12 because of transmission issues and/or because of decoding problems. For example, the compression method at encoding step 13 may introduce approximated values or may have deleted some information to ensure a given bit rate for bitstream 14. The decoder may also decode only a part of bitstream 16 because of memory or processing resource limitations, and/or it may only decode the part of the model needed to render a viewport image 10 at a step 19 according to the rendering point of view.
  • The source content type (i.e. model 12), encoding and compression methods 13, and synthesis method 19 may vary.
  • In any case, some pixels of the viewport might not be filled with information (for example, color).
  • Figure 2 shows a viewport image 20 of a volumetric scene in which some color information is missing. As shown in viewport image 20 of Figure 2, color information is missing for pixels in areas 21. The color information of these pixels may not have been retrieved at step 19, for instance because of the sparsity of model 12 or 18.
  • Missing color information may also be due to encoding step 13. Indeed, the encoder may discard some regions of input MVD 12 or displace some points within a point cloud 12. Noise on the geometry may also be introduced during video encoding step 13, for example when the volumetric scene is encoded as patch atlases, potentially leading to displacement of points. In the example of Figure 2, missing color is set by default to a grey value by the renderer.
  • Inpainting may also be performed at the decoder side.
  • Some pixels might remain unfilled in the viewport, leading to smaller or larger unfilled zones.
  • An additional inpainting step is often performed after the synthesis.
  • This solution has the advantage of being independent of the transmitted data, as it only relies on post-processing of the viewport image. However, it requires significant processing resources from the rendering device.
  • Inpainting at the encoder side is also possible. As the entire captured data is available at the encoder side, these data can be used to build a virtual view that captures the background of the scene, anticipating that some pixels will be missing at rendering. Additional elements (patch pictures, for example) are added to the encoded bitstream. This solution has two major drawbacks. First, it is time-consuming at the encoder side because creating such a patch requires un-projecting/re-projecting all pixels of all source views from the MVD. Second, these additional patches do not guarantee that no pixel remains unfilled at the rendering stage.
  • According to the present principles, metadata comprising a color value, indicating to the renderer to set missing color values to this color value when rendering a viewport image of the volumetric scene, are encoded in data stream 14 in association with a representation of the volumetric scene.
  • This color value is decoded from received bitstream 16 and used as a suggested default color value by the renderer to fill missing pixels of viewport image 20 at step 19.
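As an illustration of this filling step, here is a minimal sketch in Python; the names `viewport`, `filled_mask` and `default_color` are hypothetical and not part of any standard:

```python
import numpy as np

def fill_missing_pixels(viewport: np.ndarray,
                        filled_mask: np.ndarray,
                        default_color: tuple) -> np.ndarray:
    """Set every viewport pixel that received no projected color
    to the suggested default color decoded from the metadata."""
    out = viewport.copy()
    out[~filled_mask] = default_color  # broadcast the RGB triple over missing pixels
    return out

# viewport: H x W x 3 image produced by view synthesis at step 19;
# filled_mask: H x W booleans, True where a color value was projected.
viewport = np.zeros((480, 640, 3), dtype=np.uint8)
filled_mask = np.zeros((480, 640), dtype=bool)
rendered = fill_missing_pixels(viewport, filled_mask, (0, 0, 0))  # suggested black
```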
  • In an embodiment, the color value is transmitted encapsulated in a Supplemental Enhancement Information (SEI) message.
  • The metadata may also comprise suggested default values for pixel attributes other than the color attribute, for example transparency, material, reflectance or shininess.
  • In an embodiment, the metadata are encoded in an SEI message of the volumetric video standard V3C.
  • The SEI table in section F.2.1 "General SEI message syntax" of V3C would be updated accordingly.
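Purely as an illustration — the field names, sizes and the payloadType value below are hypothetical and do not reproduce the ISO/IEC 23090-5 syntax — such an SEI payload could carry the suggested value roughly as follows:

```python
import struct

def build_default_color_sei(r: int, g: int, b: int, quality_level: int) -> bytes:
    """Serialize a hypothetical 'suggested default color' SEI payload:
    one byte per color component plus one byte of quality level,
    preceded by a payload type byte and a payload size byte."""
    payload = struct.pack("BBBB", r, g, b, quality_level)
    payload_type = 200  # hypothetical payloadType, not one assigned by V3C
    return bytes([payload_type, len(payload)]) + payload

sei = build_default_color_sei(0, 0, 0, quality_level=3)  # suggest black
```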
  • Transmitting such information, for example a suggested default color set to black, would have helped limit the annoyance caused by the presence of grey pixels.
  • The color value may be selected by an operator or automatically computed, for example as an average value of the images or points of model 12, or as the minimum value over the different views or points of model 12.
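A minimal sketch of such an automatic computation, assuming the source views are available as arrays (function and parameter names are illustrative):

```python
import numpy as np

def suggested_default_color(views, mode="average"):
    """Derive a suggested default color from the source views of an MVD
    (each view: an H x W x 3 array). 'average' and 'minimum' follow the
    two strategies mentioned above; other heuristics are possible."""
    pixels = np.concatenate([v.reshape(-1, 3) for v in views], axis=0)
    if mode == "average":
        return tuple(int(c) for c in pixels.mean(axis=0))
    if mode == "minimum":
        return tuple(int(c) for c in pixels.min(axis=0))
    raise ValueError(f"unknown mode: {mode}")
```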
  • In an embodiment, the 3D space is divided into regions and a color (or other attribute) value is associated with each region of the space.
  • The color value may be determined according to the color of points in this region of the 3D space.
  • In this case, the metadata associated with the volumetric scene in the data stream comprise several pairs, each associating a color (or other attribute) value with the description of a region of the 3D space of the volumetric video.
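For instance, such per-region defaults could be kept in a structure like the following sketch, where each region is an axis-aligned box (the field names are illustrative, not a normative metadata syntax):

```python
from dataclasses import dataclass

@dataclass
class RegionDefault:
    region_min: tuple   # one corner of an axis-aligned box in the 3D space
    region_max: tuple   # the opposite corner
    color: tuple        # suggested default color for this region

def default_for_point(p, regions, fallback=(128, 128, 128)):
    """Return the default color of the first region containing 3D point p."""
    for r in regions:
        if all(lo <= c <= hi for lo, c, hi in zip(r.region_min, p, r.region_max)):
            return r.color
    return fallback

regions = [RegionDefault((0, 0, 0), (1, 1, 1), (0, 0, 0))]
print(default_for_point((0.5, 0.2, 0.9), regions))  # -> (0, 0, 0)
```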
  • In an embodiment, an attribute value (that may be associated with a region of the 3D space) is associated with a quality level indicating to the renderer a visibility level of visual artifacts when setting missing values to the attribute value.
  • For example, a default color value may be determined as the average color of the points of the point cloud representing model 12. If the variance of these color data is above a given threshold, the quality level is low; if the variance of the considered region is low, the quality level is set to a high value. This is only an example; numerous and various computations may be used to determine a quality level.
  • Depending on this quality level, the renderer may use the given attribute value as a default value or select another filling method instead, for example filtering the depth or inpainting missing parts.
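The two sides of this mechanism could look like the following sketch; the variance threshold and the 1..5 scale are arbitrary choices for illustration:

```python
import numpy as np

def quality_level(point_colors: np.ndarray, threshold: float = 500.0) -> int:
    """Encoder side: rate the default color high when the colors of the
    region are homogeneous (low variance), low otherwise."""
    variance = float(point_colors.reshape(-1, 3).var(axis=0).mean())
    return 1 if variance > threshold else 5

def choose_fill_method(level: int) -> str:
    """Renderer side: trust the suggested default only when its quality
    level is high; otherwise fall back to a costlier filling method."""
    return "default_color" if level >= 3 else "inpainting"
```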
  • Figure 3 shows an example architecture of an engine 30 which may be configured to implement the encoding and rendering according to the present principles.
  • A device according to the architecture of Figure 3 is linked with other devices via its bus 31 and/or via I/O interface 36.
  • Device 30 comprises the following elements, linked together by a data and address bus 31:
  • a microprocessor 32, which is, for example, a DSP (Digital Signal Processor);
  • a ROM (Read Only Memory) 33;
  • a RAM (Random Access Memory) 34;
  • an I/O interface 36;
  • a power supply (not represented in Figure 3), e.g. a battery.
  • In a variant, the power supply is external to the device.
  • The word "register" used in the specification may correspond to an area of small capacity (a few bits) or to a very large area (e.g. a whole program or a large amount of received or decoded data).
  • The ROM 33 comprises at least a program and parameters. The ROM 33 may store algorithms and instructions to perform techniques in accordance with the present principles. When switched on, the CPU 32 uploads the program into the RAM and executes the corresponding instructions.
  • The RAM 34 comprises, in a register, the program executed by the CPU 32 and uploaded after switch-on of the device 30, input data in a register, intermediate data in different states of the method in a register, and other variables used for the execution of the method in a register.
  • The implementations described herein may be implemented in, for example, a method or a process, an apparatus, a computer program product, a data stream, or a signal. Even if only discussed in the context of a single form of implementation (for example, discussed only as a method or a device), the implementation of features discussed may also be implemented in other forms (for example, a program).
  • An apparatus may be implemented in, for example, appropriate hardware, software, and firmware.
  • The methods may be implemented in, for example, an apparatus such as, for example, a processor, which refers to processing devices in general, including, for example, a computer, a microprocessor, an integrated circuit, or a programmable logic device. Processors also include communication devices, such as, for example, computers, cell phones, portable/personal digital assistants ("PDAs"), and other devices that facilitate communication of information between end-users.
  • Sensors 37 may be, for example, cameras, microphones, temperature sensors, Inertial Measurement Units, GPS, hygrometry sensors, IR or UV light sensors or wind sensors.
  • Rendering devices 38 may be, for example, displays, speakers, vibrators, heaters, fans, etc.
  • The device 30 is configured to implement a method according to the present principles, and belongs to a set comprising:
  • Figure 4 shows an example of an embodiment of the syntax of a data stream encoding a volumetric scene description according to the present principles.
  • Figure 4 shows an example structure 4 of a volumetric scene representation.
  • The structure consists of a container which organizes the stream into independent elements of syntax.
  • The structure may comprise a header part 41, which is a set of data common to every element of syntax of the stream.
  • The header part comprises metadata about the elements of syntax, describing the nature and the role of each of them.
  • The structure also comprises a payload comprising an element of syntax 42 and an element of syntax 43.
  • Element of syntax 42 comprises metadata representative of the media content items, including at least a rendering attribute value, for example a color value indicating to a renderer to set missing color values to the color value when rendering a viewport image of the volumetric scene.
  • In an embodiment, an attribute value is associated with a region of the 3D space of the volumetric scene represented in element of syntax 43.
  • In an embodiment, an attribute value is associated with a quality level indicating to the renderer a visibility level of visual artifacts when setting missing color values to the color value.
  • Element of syntax 43 is a part of the payload of the data stream and comprises data encoding the representation of the volumetric scene according to the present principles. Various formats may be used for this representation.
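A possible serialization of this container, sketched under the assumption of simple length-prefixed elements (the layout is illustrative; actual streams follow the syntax of the chosen standard, e.g. V3C):

```python
import struct

def pack_stream(header: bytes, metadata: bytes, scene_payload: bytes) -> bytes:
    """Concatenate header 41, element of syntax 42 (metadata, including the
    suggested default color) and element of syntax 43 (the encoded scene),
    each prefixed with its 32-bit big-endian length."""
    parts = []
    for element in (header, metadata, scene_payload):
        parts.append(struct.pack(">I", len(element)))
        parts.append(element)
    return b"".join(parts)

stream = pack_stream(b"hdr", b"default_color=0,0,0", b"...scene data...")
```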
  • The methods may be implemented in, for example, an apparatus such as, for example, a processor, which refers to processing devices in general, including, for example, a computer, a microprocessor, an integrated circuit, or a programmable logic device. Processors also include communication devices, such as, for example, smartphones, tablets, computers, mobile phones, portable/personal digital assistants ("PDAs"), and other devices that facilitate communication of information between end-users.
  • Implementations of the various processes and features described herein may be embodied in a variety of different equipment or applications, particularly, for example, equipment or applications associated with data encoding, data decoding, view generation, texture processing, and other processing of images and related texture information and/or depth information.
  • Examples of such equipment include an encoder, a decoder, a post-processor processing output from a decoder, a pre-processor providing input to an encoder, a video coder, a video decoder, a video codec, a web server, a set-top box, a laptop, a personal computer, a cell phone, a PDA, and other communication devices.
  • The equipment may be mobile and even installed in a mobile vehicle.
  • The methods may be implemented by instructions being performed by a processor, and such instructions (and/or data values produced by an implementation) may be stored on a processor-readable medium such as, for example, an integrated circuit, a software carrier or another storage device such as, for example, a hard disk, a compact diskette ("CD"), an optical disc (such as, for example, a DVD, often referred to as a digital versatile disc or a digital video disc), a random access memory ("RAM"), or a read-only memory ("ROM").
  • The instructions may form an application program tangibly embodied on a processor-readable medium. Instructions may be, for example, in hardware, firmware, software, or a combination.
  • A processor may be characterized, therefore, as, for example, both a device configured to carry out a process and a device that includes a processor-readable medium (such as a storage device) having instructions for carrying out a process. Further, a processor-readable medium may store, in addition to or in lieu of instructions, data values produced by an implementation.
  • Implementations may produce a variety of signals formatted to carry information that may be, for example, stored or transmitted.
  • The information may include, for example, instructions for performing a method, or data produced by one of the described implementations.
  • A signal may be formatted to carry as data the rules for writing or reading the syntax of a described embodiment, or to carry as data the actual syntax values written by a described embodiment.
  • Such a signal may be formatted, for example, as an electromagnetic wave (for example, using a radio frequency portion of spectrum) or as a baseband signal.
  • The formatting may include, for example, encoding a data stream and modulating a carrier with the encoded data stream.
  • The information that the signal carries may be, for example, analog or digital information.
  • The signal may be transmitted over a variety of different wired or wireless links, as is known.
  • The signal may be stored on a processor-readable medium.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Library & Information Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Methods, a device and a data stream are provided for generating, transmitting and decoding volumetric scenes. According to the present principles, rendering attribute values, such as color values, are transmitted to the renderer to be used as suggested attribute values for pixels for which this attribute is missing. For example, when rendering an image of a volumetric scene, some pixels may lack color information. In such a case, the present principles provide a suggested default color. These principles apply to various rendering attributes such as reflectance, transparency, material or shininess. In one embodiment, an attribute value is associated with a region of the 3D space. In another embodiment, a quality level linked to the default value helps the renderer select a filling method.
PCT/EP2023/073171 2022-08-29 2023-08-23 Transmission of a missing attribute value for a rendered view of a volumetric scene WO2024046849A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP22306274.6 2022-08-29
EP22306274 2022-08-29

Publications (1)

Publication Number Publication Date
WO2024046849A1 (fr) 2024-03-07

Family

ID=83271374

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2023/073171 WO2024046849A1 (fr) 2022-08-29 2023-08-23 Transmission of a missing attribute value for a rendered view of a volumetric scene

Country Status (1)

Country Link
WO (1) WO2024046849A1 (fr)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100135379A1 (en) * 2008-12-02 2010-06-03 Sensio Technologies Inc. Method and system for encoding and decoding frames of a digital image stream
US20220159297A1 (en) * 2019-03-19 2022-05-19 Nokia Technologies Oy An apparatus, a method and a computer program for volumetric video
WO2021262936A1 (fr) * 2020-06-26 2021-12-30 Qualcomm Incorporated Attribute parameter coding for geometry-based point cloud compression

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"Test Model 14 for MPEG immersive video", no. n21853, 30 July 2022 (2022-07-30), XP030303276, Retrieved from the Internet <URL:https://dms.mpeg.expert/doc_end_user/documents/139_OnLine/wg11/MDS21853_WG04_N00242.zip WG04N0242_TMIV14.docx> [retrieved on 20220730] *
"Text of ISO/IEC DIS 23090-5 Visual Volumetric Video-based Coding and Video-based Point Cloud Compression 2nd Edition", no. n20761, 23 July 2021 (2021-07-23), XP030296513, Retrieved from the Internet <URL:https://dms.mpeg.expert/doc_end_user/documents/135_OnLine/wg11/MDS20761_WG07_N00188.zip WG07N0188_ISO_IEC_23090-5_DIS_2ed.pdf> [retrieved on 20210723] *
FLYNN (APPLE) D ET AL: "G-PCC: Signalling of default attribute values", no. m53681, 15 April 2020 (2020-04-15), XP030287361, Retrieved from the Internet <URL:http://phenix.int-evry.fr/mpeg/doc_end_user/documents/130_Alpbach/wg11/m53681-v1-m53681_v1.zip m53681.pdf> [retrieved on 20200415] *

Similar Documents

Publication Publication Date Title
CN111742549B (zh) Method and apparatus for encoding a three-dimensional scene in, and decoding it from, a data stream
CN117768653A (zh) Method and device for encoding and decoding volumetric video
CN111742548B (zh) Method and apparatus for encoding a three-dimensional scene in, and decoding it from, a data stream
US11979546B2 (en) Method and apparatus for encoding and rendering a 3D scene with inpainting patches
US11721044B2 (en) Method and apparatus for decoding three-dimensional scenes
CN111742547B (zh) Method and apparatus for encoding a three-dimensional scene in, and decoding it from, a data stream
CN114945946A (zh) Volumetric video with auxiliary patches
US20220191544A1 (en) Radiative Transfer Signalling For Immersive Video
US20230362409A1 (en) A method and apparatus for signaling depth of multi-plane images-based volumetric video
WO2024046849A1 (fr) Transmission d&#39;une valeur d&#39;attribut manquante pour une vue rendue d&#39;une scène volumétrique
US20220368879A1 (en) A method and apparatus for encoding, transmitting and decoding volumetric video
US20220345681A1 (en) Method and apparatus for encoding, transmitting and decoding volumetric video
US20230224501A1 (en) Different atlas packings for volumetric video
US20230379495A1 (en) A method and apparatus for encoding mpi-based volumetric video
WO2023194109A1 Color-depth alignment with assistance metadata for volumetric video transcoding

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 23757310

Country of ref document: EP

Kind code of ref document: A1