WO2023180844A1 - Mesh zippering - Google Patents

Mesh zippering Download PDF

Info

Publication number
WO2023180844A1
Authority
WO
WIPO (PCT)
Prior art keywords
implementation
zippering
per
mesh
implementations
Prior art date
Application number
PCT/IB2023/052107
Other languages
French (fr)
Inventor
Danillo GRAZIOSI
Alexandre ZAGHETTO
Ali Tabatabai
Original Assignee
Sony Group Corporation
Sony Corporation Of America
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US17/987,847 external-priority patent/US20230306687A1/en
Application filed by Sony Group Corporation, Sony Corporation Of America filed Critical Sony Group Corporation
Priority to CN202380013353.2A priority Critical patent/CN117897731A/en
Publication of WO2023180844A1 publication Critical patent/WO2023180844A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00Image coding
    • G06T9/001Model-based coding, e.g. wire frame
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/117Filters, e.g. for pre-processing or post-processing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136Incoming video signal characteristics or properties
    • H04N19/14Coding unit complexity, e.g. amount of activity or edge presence estimation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/154Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/172Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Ways to improve mesh reconstruction by modifying the positions of vertices at the border of patches so that neighboring patches do not have a gap between them, also known as zippering, are described herein. Six different methods to implement the post-processing operation, as well as syntax elements and semantics for transmission of the filter parameters, are disclosed. A hierarchical method indicates the geometry distortion that can generate gaps between patches. The value is sent per frame, per patch, or per boundary object. The number of bits used to encode the values also depends on the previous geometry distortion. Another method sends index matches instead of geometry distortion. The matching index is sent per boundary vertex, but a method to send only one index of the pair is implemented as well.

Description

MESH ZIPPERING
CROSS-REFERENCE TO RELATED APPLICATION(S)
This application claims priority under 35 U.S.C. § 119(e) of the U.S. Provisional Patent Application Ser. No. 63/269,911, filed March 25, 2022 and titled, “MESH ZIPPERING,” which is hereby incorporated by reference in its entirety for all purposes.
FIELD OF THE INVENTION
The present invention relates to three dimensional graphics. More specifically, the present invention relates to coding of three dimensional graphics.
BACKGROUND OF THE INVENTION
Recently, a novel method to compress volumetric content, such as point clouds, based on projection from 3D to 2D is being standardized. The method, also known as V3C (visual volumetric video-based compression), maps the 3D volumetric data into several 2D patches, and then further arranges the patches into an atlas image, which is subsequently encoded with a video encoder. The atlas images correspond to the geometry of the points, the respective texture, and an occupancy map that indicates which of the positions are to be considered for the point cloud reconstruction.
In 2017, MPEG issued a call for proposals (CfP) for compression of point clouds. After evaluation of several proposals, MPEG is currently considering two different technologies for point cloud compression: 3D native coding technology (based on octree and similar coding methods), or 3D to 2D projection followed by traditional video coding. In the case of dynamic 3D scenes, MPEG is using a test model software (TMC2) based on patch surface modeling, projection of patches from 3D to a 2D image, and coding of the 2D image with video encoders such as HEVC. This method has proven to be more efficient than native 3D coding and is able to achieve competitive bitrates at acceptable quality. Due to the success of the projection-based method (also known as the video-based method, or V-PCC) for coding 3D point clouds, the standard is expected to include further 3D data, such as 3D meshes, in future versions. However, the current version of the standard is only suitable for the transmission of an unconnected set of points; there is no mechanism to send the connectivity of the points, as is required in 3D mesh compression.
Methods have been proposed to extend the functionality of V-PCC to meshes as well. One possible way is to encode the vertices using V-PCC and then the connectivity using a mesh compression approach, such as TFAN or Edgebreaker. The limitation of this method is that the original mesh has to be dense, so that the point cloud generated from the vertices is not sparse and can be efficiently encoded after projection. Moreover, the order of the vertices affects the coding of connectivity, and different methods to reorganize the mesh connectivity have been proposed. An alternative way to encode a sparse mesh is to use the RAW patch data to encode the vertex positions in 3D. Since RAW patches encode (x,y,z) directly, in this method all the vertices are encoded as RAW data, while the connectivity is encoded by a similar mesh compression method, as mentioned before. Notice that in the RAW patch the vertices may be sent in any preferred order, so the order generated from connectivity encoding can be used. This method can encode sparse point clouds; however, RAW patches are not efficient for encoding 3D data, and further data, such as the attributes of the triangle faces, may be missing from this approach.
SUMMARY OF THE INVENTION
Ways to improve mesh reconstruction by modifying the positions of vertices at the border of patches so that neighboring patches do not have a gap between them, also known as zippering, are described herein. Six different methods to implement the post-processing operation, as well as syntax elements and semantics for transmission of the filter parameters, are disclosed. A hierarchical method indicates the geometry distortion that can generate gaps between patches. The value is sent per frame, per patch, or per boundary object. The number of bits used to encode the values also depends on the previous geometry distortion. Another method sends index matches instead of geometry distortion. The matching index is sent per boundary vertex, but a method to send only one index of the pair is implemented as well.
In one aspect, a method programmed in a non-transitory memory of a device comprises finding a plurality of border points, selecting a zippering implementation from a plurality of mesh zippering implementations and merging vertices based on the selected mesh zippering implementation. The plurality of mesh zippering implementations comprise: a fixed value per sequence implementation, a max distortion per sequence implementation, a max distortion per frame implementation, a max distortion per patch implementation, a per boundary point implementation and a matched patch/vertex index implementation. The fixed value per sequence implementation includes limiting a scope of a search for a matching border point based on distance. The per boundary point implementation includes receiving distortion information without performing a search. The matched patch/vertex index implementation includes matching indices. Selecting the zippering implementation from the plurality of mesh zippering implementations is programmed. Selecting the zippering implementation from the plurality of mesh zippering implementations is adaptively selected based on a set of detected criteria.
In another aspect, an apparatus comprises a non-transitory memory for storing an application, the application for: finding a plurality of border points, selecting a zippering implementation from a plurality of mesh zippering implementations and merging vertices based on the selected mesh zippering implementation and a processor coupled to the memory, the processor configured for processing the application. The plurality of mesh zippering implementations comprise: a fixed value per sequence implementation, a max distortion per sequence implementation, a max distortion per frame implementation, a max distortion per patch implementation, a per boundary point implementation and a matched patch/vertex index implementation. The fixed value per sequence implementation includes limiting a scope of a search for a matching border point based on distance. The per boundary point implementation includes receiving distortion information without performing a search. The matched patch/vertex index implementation includes matching indices. Selecting the zippering implementation from the plurality of mesh zippering implementations is programmed. Selecting the zippering implementation from the plurality of mesh zippering implementations is adaptively selected based on a set of detected criteria.
In another aspect, a system comprises an encoder configured for encoding content and a decoder configured for: finding a plurality of border points of the content, selecting a zippering implementation from a plurality of mesh zippering implementations and merging vertices based on the selected mesh zippering implementation. The plurality of mesh zippering implementations comprise: a fixed value per sequence implementation, a max distortion per sequence implementation, a max distortion per frame implementation, a max distortion per patch implementation, a per boundary point implementation and a matched patch/vertex index implementation. The fixed value per sequence implementation includes limiting a scope of a search for a matching border point based on distance. The per boundary point implementation includes receiving distortion information without performing a search. The matched patch/vertex index implementation includes matching indices. Selecting the zippering implementation from the plurality of mesh zippering implementations is programmed. Selecting the zippering implementation from the plurality of mesh zippering implementations is adaptively selected based on a set of detected criteria.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 illustrates a flowchart of a method of mesh zippering according to some embodiments.
Figure 2 illustrates images of aspects of zippering according to some embodiments.
Figure 3 illustrates images showing advantages and disadvantages of each zippering implementation according to some embodiments.
Figure 4 illustrates a block diagram of an exemplary computing device configured to implement the mesh zippering method according to some embodiments.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
Ways to improve mesh reconstruction by modifying the positions of vertices at the border of patches so that neighboring patches do not have a gap between them, also known as zippering, are described herein. Six different methods to implement the post-processing operation, as well as syntax elements and semantics for transmission of the filter parameters, are disclosed. A hierarchical method indicates the geometry distortion that can generate gaps between patches. The value is sent per frame, per patch, or per boundary object. The number of bits used to encode the values also depends on the previous geometry distortion. Another method sends index matches instead of geometry distortion. The matching index is sent per boundary vertex, but a method to send only one index of the pair is implemented as well.
As described in U.S. Patent App. Ser. No. 17/161,300, filed January 28, 2021, titled, “PROJECTION-BASED MESH COMPRESSION” and U.S. Provisional Patent Application Ser. No. 62/991,128, filed March 18, 2020 and titled, “PROJECTION-BASED MESH COMPRESSION,” which are hereby incorporated by reference in their entireties for all purposes, zippering addresses the issue of misaligned vertices.
Figure 1 illustrates a flowchart of a method of mesh zippering according to some embodiments. In the step 100, border points are found. The border points are able to be found in any manner. After the border points are found, mesh zippering is implemented. Mesh zippering includes determining neighbors of the bordering vertices and merging specific neighboring bordering vertices. The mesh zippering is able to be implemented using one or more different implementations. Mesh zippering is utilized to find points/vertices that match in order to remove any gaps in a mesh. To find the matching points, a search is performed in the 3D space by searching the neighboring points of a point. The search is able to be limited in scope (e.g., based on a fixed value such as a maximum distance of 5, or based on a maximum distortion). Therefore, if the distance is larger than 5, the point will never find its match. The search is also able to be limited based on a maximum distortion, and the maximum distortion for each point may be different. Mesh zippering per sequence is able to use a distance or a maximum distortion to limit the search. Since searching based on the maximum distortion may be too time consuming or computationally expensive for an entire sequence, searching on a per frame basis may be better. For example, most frames are searched based on a fixed value (e.g., maximum distance), but one specific frame is searched based on the maximum distortion. The maximum distortion is also able to be implemented on a per patch basis. For example, there are patches that are large, and the distortion may be smaller; in another example, there are patches that are small, and the distortion may be larger. The distortion is also able to be sent on a per border/boundary point basis. No search is performed with this implementation; rather, the distortion is applied as received. However, more distortion information is sent, so the bitrate is higher, but the mesh reconstruction is better (e.g., more accurate).
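As an illustration of the distance-limited search described above, the following Python sketch pairs the border points of one patch with the nearest border points of a neighboring patch and discards any candidate farther away than a maximum distance. The function name, the brute-force nearest-neighbor loop and the default limit of 5 are illustrative assumptions, not the implementation required by the embodiments.

    import math

    def match_border_points(borders_a, borders_b, max_distance=5.0):
        # Pair each border point of patch A with its nearest border point of
        # patch B, but only when the two points lie within max_distance of
        # each other; points farther apart than max_distance never find a match.
        # borders_a, borders_b: lists of (x, y, z) tuples.
        matches = {}
        for ia, pa in enumerate(borders_a):
            best_ib, best_dist = None, max_distance
            for ib, pb in enumerate(borders_b):
                dist = math.dist(pa, pb)  # Euclidean distance in 3D
                if dist <= best_dist:
                    best_ib, best_dist = ib, dist
            if best_ib is not None:
                matches[ia] = best_ib
        return matches

A practical decoder would typically replace the quadratic loop with a spatial index (e.g., a k-d tree) and would substitute a signaled per-frame, per-patch or per-border-point maximum distortion for the fixed limit where those values are available.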
In the step 102, zippering per frame is implemented. As described, the zippering performs a search for each point in a frame using a maximum distortion. By performing zippering per frame instead of for an entire sequence, some processing is performed without distortion information, and only frames that are more distorted use the zippering based on a maximum distortion. In the step 104, zippering per patch is implemented. By performing zippering per patch, some processing is performed without distortion information, and only patches that are more distorted use the zippering based on a maximum distortion. In the step 106, zippering per border point is implemented. No search is performed with zippering per border point; rather, the distortion is applied as received. However, more distortion information is sent, so the bitrate is higher, but the mesh reconstruction is better (e.g., more accurate). In the step 108, zippering by border point match is implemented. Indices that are matched to each other are sent. The decoder will determine where the patches go in the 3D space based on the matching vertices (e.g., by averaging the positions of two matched points or selecting one of the points). The zippering implementation is able to be selected in any manner, such as being programmed in or adaptively selected based on a set of detected criteria (e.g., detecting that a frame or patch includes a distortion amount higher than a threshold). In the step 110, vertices are merged. Merging the vertices is able to be performed in any manner. In some embodiments, fewer or additional steps are implemented. In some embodiments, the order of the steps is modified. The zippering implementations are performed on the decoder side.
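The hierarchical signaling and the merge step can be pictured with the following Python sketch, which resolves the search limit from the most specific value that was received and merges two matched vertices by averaging them. The function names and the averaging choice are assumptions for illustration only; as noted above, the decoder could equally select one of the two points.

    def resolve_max_distance(sequence_value, frame_value=None,
                             patch_value=None, point_value=None):
        # Use the most specific value that was signaled: a per-border-point
        # value overrides a per-patch value, which overrides a per-frame
        # value, which overrides the sequence-level default.
        for value in (point_value, patch_value, frame_value):
            if value is not None:
                return value
        return sequence_value

    def merge_matched_vertices(vertices, index_a, index_b):
        # Replace both matched border vertices with their average position
        # so that the two patches meet without a gap.
        xa, ya, za = vertices[index_a]
        xb, yb, zb = vertices[index_b]
        merged = ((xa + xb) / 2.0, (ya + yb) / 2.0, (za + zb) / 2.0)
        vertices[index_a] = merged
        vertices[index_b] = merged
        return merged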
Figure 2 illustrates images of aspects of zippering according to some embodiments. An image 200 is able to have gaps between border points. In image 202, zippering is applied to border vertices to narrow or eliminate the gaps. As described, zippering involves: classifying vertices as bordering vertices or non-bordering vertices, determining neighbors of the bordering vertices and merging the neighboring bordering vertices. Image 204 shows a decoded image without gaps as a result of utilizing zippering.
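Since the border points are able to be found in any manner, one common option, shown below as a Python sketch rather than as the method required by the embodiments, is to classify a vertex as a bordering vertex when it lies on an edge that is used by only one triangle of the patch.

    from collections import defaultdict

    def find_border_vertices(triangles):
        # triangles: list of (i, j, k) vertex-index tuples for one patch.
        # Count how many triangles use each undirected edge; an edge used by
        # a single triangle is an open (border) edge, and both of its
        # endpoints are classified as bordering vertices.
        edge_count = defaultdict(int)
        for a, b, c in triangles:
            for u, v in ((a, b), (b, c), (c, a)):
                edge_count[(min(u, v), max(u, v))] += 1
        border = set()
        for (u, v), count in edge_count.items():
            if count == 1:
                border.update((u, v))
        return border

For example, for two triangles (0, 1, 2) and (1, 3, 2) that share the edge (1, 2), all four vertices are classified as bordering vertices, since only the shared edge is interior.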
Figure 3 illustrates images showing advantages and disadvantages of each zippering implementation according to some embodiments. Image 300 is the original image. Image 302 shows the result without zippering at 12.172 Mbps. Image 304 shows zippering at 12.222 Mbps. Image 306 shows zippering at 13.253 Mbps. Image 308 shows zippering at 13.991 Mbps. By zippering, gaps are able to be filled, such as in the face, hair and ear.
The updated zippering syntax is described herein:

geometry_smoothing( payloadSize ) {                                                            Descriptor
    gs_persistence_flag                                                                        u(1)
    gs_reset_flag                                                                              u(1)
    gs_instances_updated                                                                       u(8)
    for( i = 0; i < gs_instances_updated; i++ ) {
        gs_instance_index[ i ]                                                                 u(8)
        k = gs_instance_index[ i ]
        gs_instance_cancel_flag[ k ]                                                           u(1)
        if( gs_instance_cancel_flag[ k ] != 1 ) {
            gs_method_type[ k ]                                                                ue(v)
            if( gs_method_type[ k ] == 1 ) {
                gs_filter_eom_points_flag[ k ]                                                 u(1)
                gs_grid_size_minus2[ k ]                                                       u(5)
                gs_threshold[ k ]                                                              u(8)
            }
            if( gs_method_type[ k ] == 2 ) {
                gs_zippering_max_match_distance[ k ]                                           ue(v)
                if( gs_zippering_max_match_distance_per_frame[ k ] != 0 ) {
                    gs_zippering_send_border_point_match[ k ]                                  u(1)
                    if( gs_zippering_send_border_point_match[ k ] ) {
                        gs_zippering_number_of_patches[ k ]                                    ue(v)
                        numPatches = gs_zippering_number_of_patches[ k ]
                        for( p = 0; p < numPatches; p++ )
                            gs_zippering_number_of_border_points[ k ][ p ]                     ue(v)
                        for( p = 0; p < numPatches; p++ ) {
                            numBorderPoints = gs_zippering_number_of_border_points[ k ][ p ]
                            for( b = 0; b < numBorderPoints; b++ ) {
                                if( zipperingBorderPointMatchIndexFlag[ k ][ p ][ b ] == 0 ) {
                                    gs_zippering_border_point_match_patch_index[ k ][ p ][ b ]             u(v)
                                    patchIndex = gs_zippering_border_point_match_patch_index[ k ][ p ][ b ]
                                    if( patchIndex != numPatches ) {
                                        gs_zippering_border_point_match_border_point_index[ k ][ p ][ b ]  u(v)
                                        borderIndex = gs_zippering_border_point_match_border_point_index[ k ][ p ][ b ]
                                        if( patchIndex > p )
                                            zipperingBorderPointMatchIndexFlag[ k ][ patchIndex ][ borderIndex ] = 1
                                    }
                                }
                            }
                        }
                    } else {
                        gs_zippering_send_distance_per_patch[ k ]                              u(1)
                        gs_zippering_send_distance_per_border_point[ k ]                       u(1)
                        if( gs_zippering_send_distance_per_patch[ k ] ) {
                            gs_zippering_number_of_patches[ k ]                                ue(v)
                            numPatches = gs_zippering_number_of_patches[ k ]
                            for( p = 0; p < numPatches; p++ ) {
                                gs_zippering_max_match_distance_per_patch[ k ][ p ]            u(v)
                                if( gs_zippering_max_match_distance_per_patch[ k ][ p ] != 0 ) {
                                    if( gs_zippering_send_distance_per_border_point[ k ][ p ] == 1 ) {
                                        gs_zippering_number_of_border_points[ k ][ p ]         ue(v)
                                        numBorderPoints = gs_zippering_number_of_border_points[ k ][ p ]
                                        for( b = 0; b < numBorderPoints; b++ )
                                            gs_zippering_border_point_distance[ k ][ p ][ b ]  i(v)
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
gs_zippering_max_match_distance[ k ] specifies the value of the variable zipperingMaxMatchDistance[ k ] used for processing the current mesh frame for geometry smoothing instance with index k when the zippering filtering process is used.
gs_zippering_send_border_point_match[ k ] equal to 1 specifies that zippering by transmitting matching indices is applied to border points for the geometry smoothing instance with index k. gs_zippering_send_border_point_match[ k ] equal to 0 specifies that zippering by transmitting matching indices is not applied to border points for the geometry smoothing instance with index k. The default value of gs_zippering_send_border_point_match[ k ] is equal to 0.
gs_zippering_number_of_patches[ k ] indicates the number of patches that are to be filtered by the current SEI message. The value of gs_zippering_number_of_patches shall be in the range from 0 to MaxNumPatches[ frameIdx ], inclusive. The default value of gs_zippering_number_of_patches is equal to 0.
gs_zippering_number_of_border_points[ k ][ p ] indicates the number of border points numBorderPoints[ p ] of a patch with index p.
gs_zippering_border_point_match_patch_index[ k ][ p ][ b ] specifies the value of the variable zipperingBorderPointMatchPatchIndex[ k ][ p ][ b ] used for processing the current border point with index b, in the current patch with index p, in the current mesh frame for geometry smoothing instance with index k when the zippering filtering process is used.
gs_zippering_border_point_match_border_point_index[ k ][ p ][ b ] specifies the value of the variable zipperingBorderPointMatchBorderPointIndex[ k ][ p ][ b ] used for processing the current border point with index b, in the current patch with index p, in the current mesh frame for geometry smoothing instance with index k when the zippering filtering process is used.
gs_zippering_send_distance_per_patch[ k ] equal to 1 specifies that zippering by transmitting matching distance per patch is applied to border points for the geometry smoothing instance with index k. gs_zippering_send_distance_per_patch[ k ] equal to 0 specifies that zippering by matching distance per patch is not applied to border points for the geometry smoothing instance with index k. The default value of gs_zippering_send_distance_per_patch[ k ] is equal to 0.
gs_zippering_send_distance_per_border_point[ k ] equal to 1 specifies that zippering by transmitting matching distance per border point is applied to border points for the geometry smoothing instance with index k. gs_zippering_send_distance_per_border_point[ k ] equal to 0 specifies that zippering by matching distance per border point is not applied to border points for the geometry smoothing instance with index k. The default value of gs_zippering_send_distance_per_border_point[ k ] is equal to 0.
gs_zippering_max_match_distance_per_patch[ k ][ p ] specifies the value of the variable zipperingMaxMatchDistancePerPatch[ k ][ p ] used for processing the current patch with index p in the current mesh frame for geometry smoothing instance with index k when the zippering filtering process is used.
gs_zippering_border_point_distance[ k ][ p ][ b ] specifies the value of the variable zipperingMaxMatchDistancePerBorderPoint[ k ][ p ][ b ] used for processing the current border point with index b, in the current patch with index p, in the current mesh frame for geometry smoothing instance with index k when the zippering filtering process is used.
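For orientation only, the Python sketch below walks the gs_method_type == 2 branch of the table above. The BitReader class, the fixed bits_per_index width and the simplified control flow (it omits the zipperingBorderPointMatchIndexFlag bookkeeping and the per-border-point distances) are assumptions for illustration, not the normative decoding process.

    class BitReader:
        # Hypothetical reader over a string of '0'/'1' characters; u(n) and
        # ue() stand in for the fixed-length and Exp-Golomb reads named in
        # the Descriptor column.
        def __init__(self, bits):
            self.bits, self.pos = bits, 0

        def u(self, n):
            value = int(self.bits[self.pos:self.pos + n], 2)
            self.pos += n
            return value

        def ue(self):
            zeros = 0
            while self.bits[self.pos] == "0":
                zeros += 1
                self.pos += 1
            self.pos += 1  # consume the terminating '1' of the prefix
            return (1 << zeros) - 1 + (self.u(zeros) if zeros else 0)

    def parse_zippering(reader, bits_per_index=8):
        # Follow the control flow of the zippering branch of geometry_smoothing.
        info = {"max_match_distance": reader.ue()}  # gs_zippering_max_match_distance
        # The table conditions on gs_zippering_max_match_distance_per_frame;
        # this simplified sketch reuses the value just read.
        if info["max_match_distance"] != 0:
            if reader.u(1):                         # gs_zippering_send_border_point_match
                num_patches = reader.ue()           # gs_zippering_number_of_patches
                points_per_patch = [reader.ue() for _ in range(num_patches)]
                info["matches"] = [
                    [(reader.u(bits_per_index), reader.u(bits_per_index))
                     for _ in range(points_per_patch[p])]  # (patch index, border point index)
                    for p in range(num_patches)]
            else:
                send_per_patch = reader.u(1)        # gs_zippering_send_distance_per_patch
                reader.u(1)                         # gs_zippering_send_distance_per_border_point (ignored here)
                if send_per_patch:
                    num_patches = reader.ue()
                    info["patch_distances"] = [reader.u(bits_per_index)
                                               for _ in range(num_patches)]
        return info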
As described, a trade-off is able to be achieved by choosing among the different zippering methods. Sending a single distance for the entire sequence uses just one SEI message, while sending the distance per frame, per patch or per border point involves sending SEI messages every frame. However, the subjective impact may be significant, since holes may or may not be visible, depending on the zippering method chosen.
Figure 4 illustrates a block diagram of an exemplary computing device configured to implement the mesh zippering method according to some embodiments. The computing device 400 is able to be used to acquire, store, compute, process, communicate and/or display information such as images and videos including 3D content. The computing device 400 is able to implement any of the encoding/decoding aspects. In general, a hardware structure suitable for implementing the computing device 400 includes a network interface 402, a memory 404, a processor 406, I/O device(s) 408, a bus 410 and a storage device 412. The choice of processor is not critical as long as a suitable processor with sufficient speed is chosen. The memory 404 is able to be any conventional computer memory known in the art. The storage device 412 is able to include a hard drive, CDROM, CDRW, DVD, DVDRW, High Definition disc/drive, ultra-HD drive, flash memory card or any other storage device. The computing device 400 is able to include one or more network interfaces 402. An example of a network interface includes a network card connected to an Ethernet or other type of LAN. The I/O device(s) 408 are able to include one or more of the following: keyboard, mouse, monitor, screen, printer, modem, touchscreen, button interface and other devices. Mesh zippering application(s) 430 used to implement the mesh zippering implementation are likely to be stored in the storage device 412 and memory 404 and processed as applications are typically processed. More or fewer components shown in Figure 4 are able to be included in the computing device 400. In some embodiments, mesh zippering hardware 420 is included. Although the computing device 400 in Figure 4 includes applications 430 and hardware 420 for the mesh zippering implementation, the mesh zippering method is able to be implemented on a computing device in hardware, firmware, software or any combination thereof. For example, in some embodiments, the mesh zippering applications 430 are programmed in a memory and executed using a processor. In another example, in some embodiments, the mesh zippering hardware 420 is programmed hardware logic including gates specifically designed to implement the mesh zippering method.
In some embodiments, the mesh zippering application(s) 430 include several applications and/or modules. In some embodiments, modules include one or more sub-modules as well. In some embodiments, fewer or additional modules are able to be included.
Examples of suitable computing devices include a personal computer, a laptop computer, a computer workstation, a server, a mainframe computer, a handheld computer, a personal digital assistant, a cellular/mobile telephone, a smart appliance, a gaming console, a digital camera, a digital camcorder, a camera phone, a smart phone, a portable music player, a tablet computer, a mobile device, a video player, a video disc writer/player (e.g., DVD writer/player, high definition disc writer/player, ultra high definition disc writer/player), a television, a home entertainment system, an augmented reality device, a virtual reality device, smart jewelry (e.g., smart watch), a vehicle (e.g., a self-driving vehicle) or any other suitable computing device.
To utilize the mesh zippering method, a device acquires or receives 3D content (e.g., point cloud content). The mesh zippering method is able to be implemented with user assistance or automatically without user involvement.
In operation, the mesh zippering method enables more efficient and more accurate 3D content decoding compared to previous implementations.
SOME EMBODIMENTS OF MESH ZIPPERING
1. A method programmed in a non-transitory memory of a device comprising: finding a plurality of border points; selecting a zippering implementation from a plurality of mesh zippering implementations; and merging vertices based on the selected mesh zippering implementation.
2. The method of clause 1 wherein the plurality of mesh zippering implementations comprise: a fixed value per sequence implementation; a max distortion per sequence implementation; a max distortion per frame implementation; a max distortion per patch implementation; a per boundary point implementation; and a matched patch/vertex index implementation. 3. The method of clause 2 wherein the fixed value per sequence implementation includes limiting a scope of a search for a matching border point based on distance.
4. The method of clause 2 wherein the per boundary point implementation includes receiving distortion information without performing a search.
5. The method of clause 2 wherein the matched patch/vertex index implementation includes matching indices.
6. The method of clause 1 wherein selecting the zippering implementation from the plurality of mesh zippering implementations is programmed.
7. The method of clause 1 wherein selecting the zippering implementation from the plurality of mesh zippering implementations is adaptively selected based on a set of detected criteria.
8. An apparatus comprising: a non-transitory memory for storing an application, the application for: finding a plurality of border points; selecting a zippering implementation from a plurality of mesh zippering implementations; and merging vertices based on the selected mesh zippering implementation; and a processor coupled to the memory, the processor configured for processing the application.
9. The apparatus of clause 8 wherein the plurality of mesh zippering implementations comprise: a fixed value per sequence implementation; a max distortion per sequence implementation; a max distortion per frame implementation; a max distortion per patch implementation; a per boundary point implementation; and a matched patch/vertex index implementation.
10. The apparatus of clause 9 wherein the fixed value per sequence implementation includes limiting a scope of a search for a matching border point based on distance.
11. The apparatus of clause 9 wherein the per boundary point implementation includes receiving distortion information without performing a search.
12. The apparatus of clause 9 wherein the matched patch/vertex index implementation includes matching indices.
13. The apparatus of clause 8 wherein selecting the zippering implementation from the plurality of mesh zippering implementations is programmed.
14. The apparatus of clause 8 wherein selecting the zippering implementation from the plurality of mesh zippering implementations is adaptively selected based on a set of detected criteria.
15. A system comprising: an encoder configured for encoding content; and a decoder configured for: finding a plurality of border points of the content; selecting a zippering implementation from a plurality of mesh zippering implementations; and merging vertices based on the selected mesh zippering implementation.
16. The system of clause 15 wherein the plurality of mesh zippering implementations comprise: a fixed value per sequence implementation; a max distortion per sequence implementation; a max distortion per frame implementation; a max distortion per patch implementation; a per boundary point implementation; and a matched patch/vertex index implementation.
17. The system of clause 16 wherein the fixed value per sequence implementation includes limiting a scope of a search for a matching border point based on distance.
18. The system of clause 16 wherein the per boundary point implementation includes receiving distortion information without performing a search.
19. The system of clause 16 wherein the matched patch/vertex index implementation includes matching indices.
20. The system of clause 15 wherein selecting the zippering implementation from the plurality of mesh zippering implementations is programmed.
21. The system of clause 15 wherein selecting the zippering implementation from the plurality of mesh zippering implementations is adaptively selected based on a set of detected criteria.
The present invention has been described in terms of specific embodiments incorporating details to facilitate the understanding of principles of construction and operation of the invention. Such reference herein to specific embodiments and details thereof is not intended to limit the scope of the claims appended hereto. It will be readily apparent to one skilled in the art that other various modifications may be made in the embodiment chosen for illustration without departing from the spirit and scope of the invention as defined by the claims.

Claims

CLAIMS
What is claimed is:
1. A method programmed in a non-transitory memory of a device comprising: finding a plurality of border points; selecting a zippering implementation from a plurality of mesh zippering implementations; and merging vertices based on the selected mesh zippering implementation.
2. The method of claim 1 wherein the plurality of mesh zippering implementations comprise: a fixed value per sequence implementation; a max distortion per sequence implementation; a max distortion per frame implementation; a max distortion per patch implementation; a per boundary point implementation; and a matched patch/vertex index implementation.
3. The method of claim 2 wherein the fixed value per sequence implementation includes limiting a scope of a search for a matching border point based on distance.
4. The method of claim 2 wherein the per boundary point implementation includes receiving distortion information without performing a search.
5. The method of claim 2 wherein the matched patch/vertex index implementation includes matching indices.
6. The method of claim 1 wherein selecting the zippering implementation from the plurality of mesh zippering implementations is programmed.
7. The method of claim 1 wherein selecting the zippering implementation from the plurality of mesh zippering implementations is adaptively selected based on a set of detected criteria.
8. An apparatus comprising: a non-transitory memory for storing an application, the application for: finding a plurality of border points; selecting a zippering implementation from a plurality of mesh zippering implementations; and merging vertices based on the selected mesh zippering implementation; and a processor coupled to the memory, the processor configured for processing the application.
9. The apparatus of claim 8 wherein the plurality of mesh zippering implementations comprise: a fixed value per sequence implementation; a max distortion per sequence implementation; a max distortion per frame implementation; a max distortion per patch implementation; a per boundary point implementation; and a matched patch/vertex index implementation.
10. The apparatus of claim 9 wherein the fixed value per sequence implementation includes limiting a scope of a search for a matching border point based on distance.
11. The apparatus of claim 9 wherein the per boundary point implementation includes receiving distortion information without performing a search.
12. The apparatus of claim 9 wherein the matched patch/vertex index implementation includes matching indices.
13. The apparatus of claim 8 wherein selecting the zippering implementation from the plurality of mesh zippering implementations is programmed.
14. The apparatus of claim 8 wherein selecting the zippering implementation from the plurality of mesh zippering implementations is adaptively selected based on a set of detected criteria.
15. A system comprising: an encoder configured for encoding content; and a decoder configured for: finding a plurality of border points of the content; selecting a zippering implementation from a plurality of mesh zippering implementations; and merging vertices based on the selected mesh zippering implementation.
16. The system of claim 15 wherein the plurality of mesh zippering implementations comprise: a fixed value per sequence implementation; a max distortion per sequence implementation; a max distortion per frame implementation; a max distortion per patch implementation; a per boundary point implementation; and a matched patch/vertex index implementation.

17. The system of claim 16 wherein the fixed value per sequence implementation includes limiting a scope of a search for a matching border point based on distance.

18. The system of claim 16 wherein the per boundary point implementation includes receiving distortion information without performing a search.

19. The system of claim 16 wherein the matched patch/vertex index implementation includes matching indices.

20. The system of claim 15 wherein selecting the zippering implementation from the plurality of mesh zippering implementations is programmed.

21. The system of claim 15 wherein selecting the zippering implementation from the plurality of mesh zippering implementations is adaptively selected based on a set of detected criteria.
PCT/IB2023/052107 2022-03-25 2023-03-07 Mesh zippering WO2023180844A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202380013353.2A CN117897731A (en) 2022-03-25 2023-03-07 Grid zipper fastener

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202263269911P 2022-03-25 2022-03-25
US63/269,911 2022-03-25
US17/987,847 US20230306687A1 (en) 2022-03-25 2022-11-15 Mesh zippering
US17/987,847 2022-11-15

Publications (1)

Publication Number Publication Date
WO2023180844A1 true WO2023180844A1 (en) 2023-09-28

Family

ID=85724600

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2023/052107 WO2023180844A1 (en) 2022-03-25 2023-03-07 Mesh zippering

Country Status (1)

Country Link
WO (1) WO2023180844A1 (en)

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DANILLO B GRAZIOSI (SONY) ET AL: "[V-CG] Sony's Dynamic Mesh Coding Call for Proposal Response", no. m59284, 24 April 2022 (2022-04-24), XP030301436, Retrieved from the Internet <URL:https://dms.mpeg.expert/doc_end_user/documents/138_OnLine/wg11/m59284-v2-m59284_Sony_Dynamic_Mesh_CfP_Response.zip m59284_Sony_Dynamic_Mesh_CfP_Response.docx> [retrieved on 20220424] *
DANILLO GRAZIOSI (SONY) ET AL: "[V-PCC][EE2.6-related] Mesh Geometry Smoothing Filter", no. m55374, 13 October 2020 (2020-10-13), XP030291885, Retrieved from the Internet <URL:https://dms.mpeg.expert/doc_end_user/documents/132_OnLine/wg11/m55374-v1-m55374_mesh_geometry_smoothing.zip m55374_mesh_geometry_smoothing.docx> [retrieved on 20201013] *

Similar Documents

Publication Publication Date Title
US11334969B2 (en) Point cloud geometry padding
US20210174551A1 (en) Mesh compression via point cloud representation
US11190803B2 (en) Point cloud coding using homography transform
US20210407139A1 (en) Decoded tile hash sei message for v3c/v-pcc
US20220383552A1 (en) Point cloud data transmission device, point cloud data transmission method, point cloud data reception device, and point cloud data reception method
US10735766B2 (en) Point cloud auxiliary information coding
US11908169B2 (en) Dense mesh compression
Alface et al. V3c-based coding of dynamic meshes
US20230306687A1 (en) Mesh zippering
WO2023180844A1 (en) Mesh zippering
US20240177355A1 (en) Sub-mesh zippering
US20230306643A1 (en) Mesh patch simplification
US20230306683A1 (en) Mesh patch sub-division
US20230306642A1 (en) Patch mesh connectivity coding
US20240233189A1 (en) V3c syntax extension for mesh compression using sub-patches
WO2023180842A1 (en) Mesh patch simplification
US20240153147A1 (en) V3c syntax extension for mesh compression
US20230025378A1 (en) Task-driven machine learning-based representation and compression of point cloud geometry
US20230334712A1 (en) Chart based mesh compression
CN117897731A (en) Grid zipper fastener
WO2023180841A1 (en) Mesh patch sub-division
US20240127489A1 (en) Efficient mapping coordinate creation and transmission
WO2023180840A1 (en) Patch mesh connectivity coding
US20230306644A1 (en) Mesh patch syntax
WO2024074961A1 (en) Orthoatlas: texture map generation for dynamic meshes using orthographic projections

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23712607

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 202380013353.2

Country of ref document: CN