WO2022133569A1 - Methods and system for textured mesh reconstruction from point cloud data - Google Patents

Methods and system for textured mesh reconstruction from point cloud data

Info

Publication number
WO2022133569A1
WO2022133569A1 (PCT/CA2020/051787)
Authority
WO
WIPO (PCT)
Prior art keywords
texture
mesh
point
point cloud
image
Prior art date
Application number
PCT/CA2020/051787
Other languages
English (en)
Inventor
Maxime HERPIN
Original Assignee
Prevu3D Inc.
Priority date
Filing date
Publication date
Application filed by Prevu3D Inc. filed Critical Prevu3D Inc.
Priority to PCT/CA2020/051787 priority Critical patent/WO2022133569A1/fr
Publication of WO2022133569A1 publication Critical patent/WO2022133569A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/04Texture mapping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/56Particle system, point based geometry or rendering

Definitions

  • the present invention relates to software and apparatuses for editing virtual three-dimensional spaces that are accessible through a computing device. More specifically, the present invention relates to methods and systems for extracting and editing data from virtual representations of three- dimensional spaces that are based on scanned visual data.
  • Real world environments can be digitally captured using many technologies, such as laser scanners, structured light sensors, or photogrammetry.
  • the resulting captured data can often be visually represented in the form of three-dimensional point cloud data that is comprised of individual, colored points.
  • This type of visual representation is useful for engineering purposes but is often of little interest for other applications, such as three-dimensional visual content creation or three-dimensional visualization of scanned environments, which both typically use polygonal meshes to represent the scanned three-dimensional environment and physical assets located within the environment.
  • scanned environments are therefore often processed into polygonal meshes using various methods that can be automatic or user driven.
  • the resulting polygonal mesh has to not only present the same geometric features as the environment, but also the same colors that are present in the environment.
  • photos of the environment taken by the scanning device are used to color the point cloud in a process that varies according to the scanning technology used.
  • the photos and their localizations can be included in the point cloud data file where the scanned point cloud data is stored.
  • colors can be represented in multiple ways on a polygonal mesh.
  • Two known methods of representing colors in a polygonal mesh are vertex colors and textures.
  • each vertex of each polygon that comprises the three-dimensional mesh can be assigned a color, and the colors inside each polygon can be interpolated from the colors of these boundary vertices of the polygon.
  • the vertex colors can be determined from the colors of the point cloud used to generate the mesh. For example, the colors of each vertex can be assigned to the color of the closest corresponding point in the point cloud.
  • vertex colors are a simple way of coloring a mesh but require a high resolution of polygons (or in other words, must use a large number of polygons) in the mesh in order to provide sufficient spatial resolution of the colors that reflect the real- world environment that the mesh represents in a three-dimensional format.
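  • By way of illustration only, the following Python sketch shows one way to assign vertex colors from the nearest point of the point cloud; the array shapes, the use of scipy's cKDTree, and the function name are assumptions made for the example and are not part of the described method.

        import numpy as np
        from scipy.spatial import cKDTree

        def color_vertices_from_point_cloud(vertices, cloud_points, cloud_colors):
            """Assign each mesh vertex the color of the closest point in the point cloud.

            vertices:     (M, 3) array of mesh vertex positions
            cloud_points: (N, 3) array of point cloud positions
            cloud_colors: (N, 3) array of RGB colors, one per point
            """
            tree = cKDTree(cloud_points)        # spatial index over the scanned points
            _, nearest = tree.query(vertices)   # index of the closest cloud point per vertex
            return cloud_colors[nearest]        # (M, 3) per-vertex colors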
  • each three-dimensional polygon of the mesh is given corresponding coordinates in two-dimensional space, creating a mapping between the two-dimensional representation of the environment and the three-dimensional surface of the mesh.
  • a two-dimensional texture can therefore be mapped onto the mesh, giving each point of each polygon of the mesh a color that corresponds to the corresponding pixel of the two-dimensional representation of the environment.
  • the pixels of the texture are usually colored according to the colors of the point cloud data of the environment, in an analogous manner as discussed above.
  • the point cloud data file also contains photos of the environment
  • photos can be used to provide color data that can be applied to the textures mapped to the resulting mesh, thereby avoiding loss of resolution in situations where there is a low density of points.
  • changes of colorimetry between separate yet overlapping photos can create visible seams in the resulting texture.
  • the present invention provides methods and systems for texturing a polygonal mesh created from colored point cloud data that, in some embodiments, can contain image data.
  • the present invention can provide a method for applying a texture to at least one polygon of an input mesh of an environment, the method comprising the steps of simplifying the input mesh to result in a proxy input mesh, parametrizing the proxy input mesh to create a parameterized proxy mesh, the parametrized proxy mesh having at least one polygon, transferring the parametrized proxy mesh onto the input mesh, transferring the parametrized proxy mesh onto the input mesh comprising the step of defining at least one of the at least one polygon of the parametrized proxy mesh that overlaps with at least one corresponding polygon of the input mesh, texturing the input mesh, texturing the input mesh comprising the steps of generating a texture, and applying the texture to at least one polygon of the parametrized proxy mesh.
  • the present invention can provide a method for applying a texture to at least one polygon of an input mesh of an environment, the method comprising the steps of texturing the input mesh, texturing the input mesh comprising the steps of generating a texture, applying the texture to at least one polygon of an input mesh, wherein the input mesh is derived from point cloud data of the environment, each point of the point cloud data of the environment belonging to a first station, at least a second point of the point cloud belonging to a second station, and correcting at least one of each point of the point cloud data of the first station using at least the at least a second point of the point cloud.
  • the present invention can provide a method for applying a texture to at least one polygon of an input mesh of an environment, the method comprising the steps of texturing the input mesh, the input mesh derived from point cloud data of the environment, the point cloud data obtained from a scanning device that has scanned the environment, the point cloud data further including image data, texturing the input mesh further comprising the steps of color correcting the image data, color correcting the image data comprising the steps of generating at least one image from at least one of the input mesh and the input point cloud; and transferring color information from at least one generated image to at least one image of the input data, generating a texture, and applying the texture to at least one polygon of the input mesh.
  • FIGURE 1 is an illustration of a simplification and a parametrization process applied to an input mesh in accordance with at least one embodiment of the present invention
  • FIGURE 2A is an illustration of a sample image containing a first station and a second sample image containing a second station in accordance with at least one embodiment of the present invention
  • FIGURE 2B is an illustration of a color corrected point cloud image based on an original point cloud image in accordance with at least one embodiment of the present invention
  • FIGURE 3A is a first image destined for color correction in accordance with at least one embodiment of the present invention.
  • FIGURE 3B is a second image that is obtained from the same viewpoint as the image shown in Figure 3A;
  • FIGURE 3C is a resulting color-corrected image in accordance with at least one embodiment of the present invention
  • FIGURE 4 is an illustration of a suitable system for use in accordance with at least one embodiment of the present invention
  • FIGURE 5 is an illustration of a suitable user device for use in accordance with at least one embodiment of the present invention.
  • FIGURE 6 is a diagram of a suitable method for applying a texture to at least one polygon of an input mesh of an environment in accordance with at least one embodiment of the present invention
  • FIGURE 7 is a diagram of a suitable method for applying a color corrected texture to at least one polygon of an input mesh of an environment in accordance with at least one embodiment of the present invention.
  • FIGURE 8 is a diagram of another suitable method for applying a color corrected texture to at least one polygon of an input mesh of an environment in accordance with at least one embodiment of the present invention.
  • the present invention can provide methods and systems for texturing a polygonal mesh created from a colored point cloud that can contain image data.
  • a “user device” can be any suitable electronic device with appropriate capabilities, including but not limited to a smartphone, laptop, tablet, desktop, server or a wearable device (such as a virtual, augmented or extended reality device), as required by the end user application of the present invention.
  • a suitable user device will have both visual display means and user input means that permit a user to access, edit, navigate and manipulate an interactive and editable three- dimensional map as required.
  • a suitable user device will be in electronic communication with suitable data storage means as discussed herein.
  • the user device has local data storage means and in other embodiments it is contemplated that the user device additionally or alternatively will be in electronic communication with remotely located data storage means over an electronic communication network.
  • a suitable user device has suitable processing means in electronic communication with a suitable radio communication module such that the user device can be in electronic communication with a larger electronic communication network, as discussed herein.
  • the present invention can be executed over an electronic communication network such as a local or wide area network as required by the end-user application.
  • suitable data storage will be provided that can be located remotely (i.e. in the cloud and electronically accessed via typical wireless communication protocols) or in a locally oriented server stored onsite or in local storage on the user device and electronically accessed by way of standard wired or wireless communication protocols, as required by the end user application of the present invention.
  • a suitable user device can be adapted and configured to run a suitable graphics engine that is suitable for rendering and displaying an interactive and editable three-dimensional map in accordance with the present invention.
  • the present invention can be accessed by a suitable user device through a web browser having access to a suitable electronic communication network, such as the Internet or a local network.
  • a suitable electronic communication network such as the Internet or a local network.
  • a suitable “scanner” or “scanning device” is a user device and includes any suitable three-dimensional scanning device that is adapted to convert visual data into a suitable format of digital data including point cloud data.
  • a suitable scanning device includes but is not limited to a digital camera, a structured light scanner, a photogrammetry workflow, a structure-from-motion capture scanner, a simultaneous localization and mapping (SLAM) scanner, a light field camera and a LIDAR scanner among other suitable and available three-dimensional scanning devices that will be readily appreciated by the skilled person.
  • Other suitable devices could include a suitably equipped unmanned aerial vehicle (“UAV”, i.e.: a drone), as will be appreciated by the skilled person.
  • a “scanner” or “scanning device” can include any suitable scanning device for capturing a visual representation of a real-world environment in a digital format which can include but is not limited to a polygonal mesh or as point cloud data, as will be readily appreciated by the skilled person.
  • a suitable scanning device can be in electronic communication with suitable data storage means over the electronic communication network.
  • the suitable scanning device has suitable local storage, as will be appreciated by the skilled person.
  • a “point cloud” or “point cloud data” will be understood to mean a collection of “data points” that are arranged in a visual manner to represent a three-dimensional space of a real-world environment, as will be readily understood by the skilled person. It will be further appreciated that a “point cloud” is comprised of a set of individual “data points” or “points” that collectively comprise the “point cloud” or “point cloud data”.
  • an “image” will be understood to mean a digital photographic image stored in any suitable digital format as “image data”.
  • a “real world environment”, “scanned environment” or “environment” can include any interior or exterior space and the physical assets that are contained within that space, as will be appreciated by the skilled person.
  • a “polygonal mesh” or “mesh” is a three-dimensional visual representation of a real-world environment that is comprised of a plurality of adjoining polygons, as will be readily appreciated by the skilled person.
  • a “polygon” can include any polygon that is defined by edges, vertices and a surface, such as but not limited to a triangle.
  • a “polygon” and a “triangle” can be considered synonymously interchangeable in the context of the present invention.
  • a “texture” means the overlying visual detail that is applied to the surface of a polygon that comprises the mesh, which can include color information, geometric detail, or any other visual detail that is required by the end user application of the present invention, as will be appreciated by the skilled person. In at least one embodiment, the texturing procedure is comprised of the following steps:
  • the input polygonal mesh is mapped in two dimensions
  • a mapping between a two-dimensional image of the environment and the surface of each polygon that makes up the mesh must be established.
  • this step will be referred to as “mesh parametrization” or “parametrizing the mesh”.
  • the input mesh can be decimated in order to reduce its number of polygons using any known method that will be appreciated by the skilled person, including but not limited to an edge collapse method with error quadrics and angle-based decimation.
  • the decimated mesh will be referred as the “decimated proxy” or a “decimated proxy mesh”.
  • the resulting decimated proxy mesh can then subsequently be parametrized using a known method, thereby optimizing the coverage of the two-dimensional polygons of the decimated proxy in the parametrized space.
  • the parametrization of the input mesh can subsequently be created from the parametrization of the decimated proxy mesh that is created in the previous step. More specifically, the parametrized decimated proxy mesh can be projected onto the input mesh to result in a parametrized input mesh.
  • the input mesh can be parametrized as follows:
  • for each polygon pi of the input mesh, find the polygon pp on the parametrized mesh such that pp is the polygon(s) that overlap(s) the most with pi.
  • the coordinates of parametrization of each vertex v of pi are then the points that have the same barycentric coordinates with respect to pp in the parametrized space as the barycentric coordinates of v with respect to pp in the three- dimensional space.
  • the measure of overlap between two polygons can be defined as follows: p1 overlaps p2 with ratio r if p1 intersects p2, where r is the average of the ratio of the area of p1 projected on p2 to the area of p2, and the ratio of the area of p2 projected on p1 to the area of p1.
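  • As a non-limiting illustration of the vertex transfer described above, the following Python sketch computes the parametrization coordinates of a vertex v of an input polygon pi from the most-overlapping proxy polygon pp; the triangle and UV array layouts and the helper names are assumptions made for the example.

        import numpy as np

        def barycentric_coords(v, tri):
            """Barycentric coordinates of point v with respect to the 3D triangle tri (3, 3)."""
            e0, e1, e2 = tri[1] - tri[0], tri[2] - tri[0], v - tri[0]
            d00, d01, d11 = e0 @ e0, e0 @ e1, e1 @ e1
            d20, d21 = e2 @ e0, e2 @ e1
            denom = d00 * d11 - d01 * d01
            b1 = (d11 * d20 - d01 * d21) / denom
            b2 = (d00 * d21 - d01 * d20) / denom
            return np.array([1.0 - b1 - b2, b1, b2])

        def transfer_vertex_uv(v, pp_xyz, pp_uv):
            """UVs of v: the same barycentric weights, applied to pp in the parametrized space.

            pp_xyz: (3, 3) positions of the proxy polygon pp in three-dimensional space
            pp_uv:  (3, 2) coordinates of pp in the parametrized (two-dimensional) space
            """
            b = barycentric_coords(v, pp_xyz)   # weights of v with respect to pp in 3D
            return b @ pp_uv                    # (2,) parametrization coordinates for v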
  • the original input mesh can be simplified by a decimation process in order to result in a decimated proxy mesh that is simpler than the original input mesh.
  • if a suitable parametrization process is applied to the original input mesh, a relatively complex projected parametrization mesh results, while on the other hand if a suitable parametrization process is applied to the decimated proxy mesh, the resulting proxy parametrization mesh derived from the decimated proxy mesh is far simpler than the projected parametrization mesh of the original input mesh, as can be seen in Figure 1.
  • the colors provided in the point cloud data can be used to create a texture for the resulting parametrized mesh.
  • a number of common ways to obtain point cloud data and how colors are determined and assigned are discussed below:
  • the procedure is as follows: a) The scanning device is placed at a given position in the environment; b) The environment is captured using the corresponding scanner in order to generate a three-dimensional point cloud representation of the environment; c) A 360 degree photo of the environment is captured using a suitable device and this photo is used to color the points taken in step (b); and d) Steps (a) to (c) are repeated in different locations until the environment is scanned to a predetermined degree of coverage.
  • the procedure can be as follows: a) Images of the environment are captured from multiple viewpoints using a suitable device until a predetermined degree of coverage is obtained for the real- world environment; b) The images are processed into a point cloud representation of the environment; and c) For each point in the point cloud data that comprises the point cloud representation, a corresponding image is associated with the point and the assigned color of the point is the color of the corresponding pixel on the corresponding image.
  • a “station” can be defined as a subset of the point cloud data that is all the points of the point cloud data that have been colored using the same image;
  • the point cloud data can be considered a collection of stations
  • the images may represent the same parts of the environment, but with different lighting environments, causing noise in the colors of the point cloud.
  • point cloud data is comprised of multiple overlapping stations. If:
  • Ci(p) is the color of the point cloud from the station i taken at position p;
  • G(p) is the ground truth color of the point cloud at the position p.
  • G represents the color of the desired point cloud.
  • G is supposed to be close to Ci and the goal is to correct Ci so that it equals G at all points.
  • Ci(p) = G(p) + N(p), where N is the noise caused by the changes in the lighting when the station was colored.
  • N has low frequency since the lighting that caused it was also at a low frequency.
  • the skilled person will appreciate that, in practice, there may be other factors that contribute to N (such as moving objects), but these factors will not have a strong impact over the present results for the purposes of the present invention.
  • all stations can be color corrected by subtracting from the colors of each point p of a particular station the difference between the smoothed colors of the station and the smoothed average colors of all stations at point p.
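  • A minimal Python sketch of this correction is given below; the k-nearest-neighbor averaging used as the smoothing (low-pass) filter, the neighborhood size k, and the array layout are assumptions made for the example rather than a prescribed implementation.

        import numpy as np
        from scipy.spatial import cKDTree

        def knn_smooth(query_points, support_points, support_colors, k=50):
            """Low-frequency estimate: average the colors of the k nearest support points."""
            tree = cKDTree(support_points)
            kk = min(k, len(support_points))
            _, idx = tree.query(query_points, k=kk)
            idx = idx.reshape(len(query_points), -1)
            return support_colors[idx].mean(axis=1)

        def correct_station_colors(points, colors, station_ids, k=50):
            """Apply Ci(p) - (smoothed Ci(p) - smoothed average of all stations at p)."""
            colors = colors.astype(float)
            corrected = colors.copy()
            # smoothed colors over all stations together, used as the reference trend
            smooth_all = knn_smooth(points, points, colors, k)
            for s in np.unique(station_ids):
                m = station_ids == s
                # smoothed colors of this station only, evaluated at its own points
                smooth_station = knn_smooth(points[m], points[m], colors[m], k)
                corrected[m] = colors[m] - (smooth_station - smooth_all[m])
            return corrected

  • In this sketch the neighborhood averaging plays the role of the low-frequency (smoothing) filter discussed above; any other suitable low-pass filter over the station colors could be substituted.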
  • a texture can be created and mapped onto the parametrized mesh. It is contemplated that, in at least one embodiment, the resolution of the texture can be inferred from the mean distance between each point and the nearest neighboring point.
  • each pixel of a texture falling inside a polygon of the parametrized mesh has a corresponding position in three dimensions. This position can be used to interpolate the colors of neighboring points to further reduce the noise and provide a smooth result, especially when the point cloud data cannot be colored using the method presented above due to a lack of information concerning each station that makes up the point cloud data.
  • a space partitioning method can be used (such as but not limited to a kd-tree or an octree), yielding a complexity on the order of R · K · C(N), where R is the resolution (number of pixels) of the texture, K is the average number of points used to color a pixel, and C(N) is the complexity of selecting a point within a distance of a target in a set of N points.
  • C(N) is near constant and thus the complexity of the procedure scales linearly with the number of pixels in the texture and is quasi-constant with respect to the number of points in the point cloud.
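  • The following Python sketch illustrates such a pixel-coloring pass with a kd-tree; the fixed search radius and the inverse-distance weighting are assumptions made for the example, and pixel_positions is assumed to hold the three-dimensional position of each texture pixel that falls inside a polygon, as described above.

        import numpy as np
        from scipy.spatial import cKDTree

        def color_texture_pixels(pixel_positions, cloud_points, cloud_colors, radius):
            """Color each texture pixel from the point cloud points lying within `radius` of it."""
            tree = cKDTree(cloud_points)                       # the space partitioning structure
            neighborhoods = tree.query_ball_point(pixel_positions, r=radius)
            out = np.zeros((len(pixel_positions), 3))
            for i, idx in enumerate(neighborhoods):
                if not idx:                                    # no point nearby: leave the pixel empty
                    continue
                d = np.linalg.norm(cloud_points[idx] - pixel_positions[i], axis=1)
                w = 1.0 / (d + 1e-6)                           # closer points contribute more
                out[i] = (cloud_colors[idx] * w[:, None]).sum(axis=0) / w.sum()
            return out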
  • each point of the point cloud can be colored by at least one image.
  • this step creates a texture that blends the colors from a neighborhood of points, effectively averaging the potential multitude of images coloring the neighborhood of points.
  • a neighborhood of points can be defined as a set of at least two points of the point cloud data that are spatially close to one another, as will be readily appreciated by the skilled person.
  • each pixel of the texture can be calculated as an average of the corresponding pixels from each image that contains a particular represented visual detail.
  • errors in the relative position and orientation of the images would cause the represented visual detail to be blurred in the resulting texture.
  • each pixel can only be colored by one image, and in order to avoid neighboring pixels coming from different images having a different colorimetry, it is contemplated that the images must be corrected such that:
  • Property (1) states that the visual details shared between images are corrected to look the same on all images, while property (2) makes sure that any corrections are not local to shared visual details but also affect their neighborhoods in order to avoid introducing visible seams.
  • a texture Tgt is created such that each pixel p is an average of corresponding pixels in the images that contain the visual detail on which p is mapped on the mesh.
  • Tgt can then be rendered from the same positions, orientations, and camera properties as each image to be corrected.
  • the result is a set of images that comply with property (1) and may comply with (2) and (3). It is further contemplated that the render may be performed using rasterization on a GPU to accelerate the process, as will be readily appreciated by the skilled person.
  • images can be grouped into couples (I1, I2) such that I1 is from the set of provided images as seen in Figure 3A, and I2 is the render of the mesh from the same position, orientation, and camera properties as I1, as can be seen in Figure 3B.
  • I1 is the original image that needs to be color corrected and I2 is a render of the environment that is used to color correct I1.
  • the colorimetry of I2 is considered ground truth, and I1 is modified such that the local average of the difference between I1 and I2 is 0, satisfying the property (1). Moreover, once this constraint is smoothly applied, it will be appreciated that after I1 is modified, I1 also complies with properties (2) and (3).
  • this operation produces modified images as can be seen in Figure 3C, such that every pixel of the texture(s) can be colored by any image containing the corresponding visual detail without any visible seams caused by colorimetry differences between adjacent or overlapping images that include the same visual detail.
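  • A minimal Python sketch of one such modification is shown below: the low-frequency part of the difference between the captured image I1 and the render I2 is estimated with a wide Gaussian blur and subtracted from I1, so that the local average of the difference becomes approximately zero; the blur width and the 8-bit image format are assumptions made for the example.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def color_correct_image(i1, i2, sigma=50.0):
            """Correct captured image i1 against the render i2 (both (H, W, 3), 8-bit RGB)."""
            diff = i1.astype(float) - i2.astype(float)
            # low-frequency colorimetry offset between the photo and the render
            low_freq = np.stack(
                [gaussian_filter(diff[..., c], sigma) for c in range(3)], axis=-1
            )
            return np.clip(i1.astype(float) - low_freq, 0, 255).astype(np.uint8)

  • Any comparably wide low-pass filter could be substituted for the Gaussian blur; the only constraint stated above is that the local average of the difference between the two images become zero.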
  • Creating a Texture Using the Corrected Image Data: it will be appreciated that these corrected images may be used to create a texture without noticeable seams. In at least one embodiment, it is contemplated that a texture can be created using a reverse process of rasterization.
  • a viewpoint is chosen. Every polygon that comprises the mesh can then be projected onto a view plane of the resulting two-dimensional image. These projected polygons can then be subdivided into fragments that only cover at most one pixel of the rendered image and subsequently the color of each fragment of the rendered image can be evaluated and matched to the color of the underlying pixel covered by the fragment, thereby filling the rendered image with the colors of the underlying input mesh.
  • a viewpoint can be selected that corresponds to the viewpoint that the existing image was taken from relative to the mesh.
  • the polygons of the mesh can then be projected onto that viewpoint and subdivided into fragments.
  • the pixels of the existing image are considered ground truth, and each fragment can subsequently be colored by the underlying pixel it lies on top of. Each fragment then gives its colors to the pixels covering it in the texture.
  • a shader may be used to give every point p of the mesh a color (r, g, b) such that r and g are the parametrized coordinates of p, and b is proportional to the distance between p and the viewpoint.
  • each pixel of Ir represents a mapping from the same pixel in Ic to the parametrized space, and a mapping between the texture space and Ic can thus be established. Pixels of the texture can then be directly mapped from those in Ic.
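  • The following Python sketch illustrates one way to exploit this mapping, assuming Ir is rendered so that its red and green channels hold the parametrized (UV) coordinates of the surface visible at each pixel, Ic is the image taken from the same viewpoint, and a validity mask marks the render pixels that hit the mesh; the square texture and nearest-texel rounding are assumptions made for the example.

        import numpy as np

        def splat_image_into_texture(ir_uv, ic, texture, valid_mask):
            """Copy colors of Ic into the texture at the UV positions stored in the render Ir.

            ir_uv:      (H, W, 2) UV coordinates rendered per pixel (the r and g channels of Ir)
            ic:         (H, W, 3) image taken from the same viewpoint
            texture:    (T, T, 3) texture being filled, modified in place
            valid_mask: (H, W) boolean mask of render pixels that actually hit the mesh
            """
            t = texture.shape[0]
            ys, xs = np.nonzero(valid_mask)                    # render pixels covering the mesh
            uv = ir_uv[ys, xs]                                 # their parametrized coordinates
            tx = np.clip((uv[:, 0] * (t - 1)).round().astype(int), 0, t - 1)
            ty = np.clip((uv[:, 1] * (t - 1)).round().astype(int), 0, t - 1)
            texture[ty, tx] = ic[ys, xs]                       # texture pixels take the photo colors
            return texture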
  • a user device 2, a scanning device 4 and data storage 6 are in electronic communication by way of an electronic communication network 8. It is contemplated that user device 2 has visual display means and user interface means, as discussed herein. It is further contemplated that a scanning device 4 can be, for example, a digital camera, a LIDAR scanner or a UAV-based scanner, and that data storage 6 is a remotely located server.
  • user device 2, scanning device 4 and data storage 6 are in electronic communication with each other through an electronic communication network 8 that is a wireless communication network operated through remote servers, also known as a cloud-based network, although other arrangements such as hard-wired local networks are also contemplated as discussed herein.
  • Referring to FIG. 5, at least one embodiment of a suitable user device 2 is illustrated.
  • user device 2 has a suitable radio communication module 3, local data storage 5, input means 7, display means 9 and processing means 11. It is contemplated that each of radio communication module 3, local data storage 5, input means 7, display means 9 and processing means 11 are all in electronic communication with one another through a suitable bus 13.
  • a suitable user device 2 is in electronic communication with a suitable electronic network by way of radio communication module 3 in electronic communication with processing means 11.
  • the method starts and proceeds to the step where the input mesh is simplified 20 in order to create a simplified proxy input mesh.
  • the input mesh is a three-dimensional representation of a real- world environment that can be captured in a number of ways, as discussed herein.
  • the input mesh is comprised of a plurality of polygons that in some embodiments are triangles.
  • the input mesh can be derived from point cloud data of the environment that is captured by a suitable scanning device.
  • the input mesh can be simplified to result in a proxy input mesh in a number of ways, including applying an edge collapse method to the input mesh and applying an angle based decimation method to the input mesh, as will be appreciated by the skilled person. In this way, it is contemplated that the input mesh can be simplified in order to perform further processes on the resulting simplified proxy input mesh, as discussed in further detail herein.
  • the proxy input mesh is parametrized 22 to create a proxy parametrized mesh. It is contemplated that the proxy parametrized mesh is similarly comprised of a plurality of polygons that in some embodiments are triangles.
  • this resulting parametrized proxy mesh is transferred onto the input mesh 24.
  • This step further involves the step of defining at least one polygon of the parametrized proxy mesh that at least somewhat overlaps with a corresponding underlying polygon of the input mesh 26. In some embodiments, this can further include the step of calculating a ratio of the degree of overlap between the overlapping polygon of the parametrized proxy mesh and the corresponding underlying polygon of the input mesh.
  • the input mesh can be subsequently textured 28 with a generated texture 30.
  • the step of generating the texture 30 further involves the step of calculating a resolution of the texture. It is contemplated that a resolution of the texture can be calculated by calculating a mean distance between each of the points that are included in the point cloud data and the nearest neighboring point in the point cloud data to each point in the point cloud data under consideration.
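  • By way of example only, the following Python sketch infers a square texture side length from the mean distance between each point and its nearest neighboring point; the use of the mesh surface area to convert that spacing into a pixel count is an assumption made for the example.

        import numpy as np
        from scipy.spatial import cKDTree

        def infer_texture_resolution(cloud_points, surface_area):
            """Choose a texture side length giving roughly one texel per mean point spacing."""
            tree = cKDTree(cloud_points)
            d, _ = tree.query(cloud_points, k=2)   # k=2: the nearest neighbor other than the point itself
            mean_spacing = d[:, 1].mean()          # mean distance to the nearest neighboring point
            side = int(np.ceil(np.sqrt(surface_area) / mean_spacing))
            return max(side, 1)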
  • each pixel of the generated texture can be color corrected based on color data obtained from the point cloud data.
  • a color can be interpolated for each pixel in the texture by obtaining a color from a point in the point cloud data that has a similar or identical position to the particular pixel of the texture under consideration.
  • each pixel of the generated texture can be color corrected based on the color of a point in the point cloud data that positionally corresponds to the pixel under consideration.
  • the generated texture can be applied to the input mesh 32 to result in a textured input mesh.
  • a method is provided for texturing an input mesh where the input mesh can be simplified to result in a proxy input mesh, parametrized to result in a parametrized proxy mesh, and then subsequently textured using a generated texture that, in some embodiments, can be generated based on point cloud data that corresponds to the pixels of the generated texture.
  • Referring to FIG. 7, a method in accordance with another embodiment of the present invention is depicted.
  • the method starts and proceeds to the step where an input mesh is to be textured 28 with a generated texture 30.
  • the input mesh is a three-dimensional representation of a real-world environment that can be captured in a number of ways, as discussed herein.
  • the input mesh is comprised of a plurality of polygons that in some embodiments are triangles.
  • the input mesh can be derived from point cloud data of the environment that is captured by a suitable scanning device.
  • the point cloud data further includes image data.
  • image data includes at least one image that visually corresponds to a portion of the point cloud data.
  • each point in the point cloud data belongs to a station. As discussed herein, it is contemplated that a station is a subset of points in the point cloud that have an associated color that has been derived from the same image.
  • the generated texture 30 can subsequently be applied to at least one polygon of the input mesh 132. It is further contemplated that the colors of a point in the point cloud data can be corrected 134 using a first point that belongs to a first station and a second point that belongs to a second station.
  • the colors of a point in the point cloud data can be corrected 134 by subtracting from the color of the point under consideration (belonging to a first station) the difference between the smoothed colors of the neighborhood of that point under consideration (belonging to the first station) and the smoothed average colors of all stations that include a neighborhood of the point under consideration.
  • a single point in the point cloud may be close to other points from different stations and may correspond to different images each having different coloration.
  • all colors of all points in a first station can be smoothed to result in a smoothed color for a particular point that belongs to that particular station.
  • average colors can be determined based on the average color of all colors of the neighborhood to that particular point in all stations of the neighborhood.
  • a color can be corrected by taking the original color of the point and subtracting from that original color the difference between the smoothed color (derived from a single station) and the smoothed average color of that point (derived from a plurality of stations).
  • the step of generating the texture 30 further involves the step of calculating a resolution of the texture. It is contemplated that a resolution of the texture can be calculated by calculating a mean distance between each of the points that are included in the point cloud data and the nearest neighboring point in the point cloud data to each point in the point cloud data under consideration.
  • each pixel of the generated texture can be color corrected based on color data obtained from the point cloud data.
  • a color can be interpolated for each pixel in the texture by obtaining a color from a point in the point cloud data that has a similar or identical position to the particular pixel of the texture under consideration.
  • each pixel of the generated texture can be color corrected based on the color of a point in the point cloud data that positionally corresponds to the pixel under consideration.
  • an input mesh is to be textured 28.
  • the input mesh is a three- dimensional representation of a real-world environment that can be captured in a number of ways, as discussed herein.
  • the input mesh is comprised of a plurality of polygons that in some embodiments are triangles.
  • the input mesh can be derived from point cloud data of the environment that is captured by a suitable scanning device.
  • the point cloud data can further include image data. It will be appreciated that image data includes at least one image that visually corresponds to a portion of the point cloud data.
  • the image data contained in the point cloud data can be color corrected 200.
  • the image data can be color corrected 200 by generating a new image 202 based on the input mesh or the point cloud data that the input mesh is based on.
  • color information can be transferred 204 from the newly generated image to a corresponding image that is contained in the image data contained in the point cloud data.
  • a texture can be generated 30 and the texture can be applied to the polygons of the input mesh 32.
  • the new image can be generated 202 by generating a proxy texture for the input mesh and subsequently applying this proxy texture to the input mesh in order to create a textured mesh.
  • a viewpoint can be identified that corresponds to an image that is included in the image data of the point cloud data.
  • the textured mesh can be rendered from a viewpoint that corresponds to the viewpoint identified to the corresponding image that is included in the image data of the point cloud data.
  • the new image can be generated 202 based on the input mesh or the point cloud data that the input mesh is based on.
  • this image can be generated by identifying a viewpoint of an image that is included in the image data of the point cloud data and the points of the point cloud can be rendered from a corresponding viewpoint to the viewpoint of the identified image that is included in the image data.
  • color information can be transferred 204 from the newly generated image to a corresponding image that is contained in the image data contained in the point cloud data by averaging the difference between the colors of the newly generated image and the colors of the corresponding image that is contained in the image data contained in the point cloud data.
  • this averaged difference in color between the new image and the original image is obtained, this averaged difference of color can be subtracted from each color in the original image included in the image data in order to correct the color of the original image included in the image data.
  • the texture can be generated 30 by identifying a viewpoint from an image included in the image data and projecting this image on the input mesh.
  • projecting this image on the input mesh can include projecting a polygon of the input mesh onto the plane of the identified viewpoint and separating this projected polygon into a plurality of fragments.
  • this process can be accelerated by using a graphics processing unit (GPU), as will be understood by the skilled person.
  • a fragment can be associated with a pixel of the original image included in the image data and a pixel of the generated texture.
  • a color can subsequently be assigned to the pixel of the texture associated with the fragment using a color that has been derived from the pixel of the original image associated with the fragment.
  • projecting the image of at least one viewpoint onto the input mesh further involves applying a shader to a polygon of the mesh. It is further contemplated that the shader can execute a number of additional steps depending on the particular end user application.
  • the shader can generate screen space coordinates for at least one vertex of the polygon of the input mesh. It is also contemplated that the shader can use these coordinates to map an image onto the polygon. It is also contemplated that the shader can move a vertex of the input mesh such that its position on the render corresponds to coordinates in UV space. It is also contemplated that the shader can render a polygon of the mesh onto at least one pixel of the generated texture.
  • projecting the image of a viewpoint onto the input mesh can further include rendering a depth map of a polygon of the input mesh. It is also contemplated that a distance between the vertices of a polygon of the input mesh and the camera that captured the image data can be obtained using a vertex shader. It is also contemplated that a distance between the coordinates of a fragment and the camera that captured the image data can be obtained using a fragment shader and compared to the distance in the depth map at the UV coordinates of the fragment.
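  • A minimal Python sketch of such a depth comparison is given below; the normalized addressing of the depth map and the tolerance value are assumptions made for the example.

        import numpy as np

        def fragment_is_visible(depth_map, frag_uv, frag_distance, tolerance=1e-2):
            """Compare a fragment's camera distance to the rendered depth map at its UV coordinates."""
            h, w = depth_map.shape
            x = int(np.clip(frag_uv[0] * (w - 1), 0, w - 1))
            y = int(np.clip(frag_uv[1] * (h - 1), 0, h - 1))
            # the fragment is kept only if it is not significantly behind the closest surface
            return frag_distance <= depth_map[y, x] + tolerance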
  • At least one proxy texture for the input mesh can be generated by determining the position of a pixel of the texture and subsequently identifying a corresponding point in the point cloud data that has the same position as the position of the pixel of the texture.
  • the step of generating the texture further involves the step of calculating a resolution of the texture.
  • a resolution of the texture can be calculated by calculating a mean distance between each of the points that are included in the point cloud data and the nearest neighboring point in the point cloud data to each point in the point cloud data under consideration.
  • the colors of a point in the point cloud data can be corrected by subtracting from the color of the point under consideration (belonging to a first station) the difference between the smoothed colors of the neighborhood of that point under consideration (belonging to the first station) and the smoothed average colors of all stations that include points from the neighborhood of the point under consideration.
  • a single point in the point cloud may be close to other points from different stations and may correspond to different images each having different coloration.
  • all colors of all points in a first station can be smoothed to result in a smoothed color for a particular point that belongs to that particular station.
  • average colors can be determined based on the average color of all colors of that particular point derived from all stations to which that particular point under consideration belongs.
  • a color can be corrected by taking the original color of the point and subtracting from that original color the difference between the smoothed color (derived from a single station) and the smoothed average color of that point (derived from a plurality of stations).
  • the present invention provides methods and systems for applying a texture to at least one polygon of an input mesh of an environment, the method comprising the steps of texturing the mesh, texturing the mesh comprising the steps of generating a texture, and applying the texture to at least one polygon of the mesh.
  • the embodiments described herein are intended to be illustrative of the present compositions and methods and are not intended to limit the scope of the present invention. Various modifications and changes consistent with the description as a whole and which are readily apparent to the person of skill in the art are intended to be included.
  • the appended claims should not be limited by the specific embodiments set forth in the examples but should be given the broadest interpretation consistent with the description as a whole.

Abstract

In at least one embodiment, the present invention provides methods and systems for applying a texture to at least one polygon of an input mesh of an environment, the method comprising the steps of texturing the mesh, texturing the mesh comprising the steps of generating a texture, and applying the texture to at least one polygon of the mesh.
PCT/CA2020/051787 2020-12-22 2020-12-22 Methods and system for textured mesh reconstruction from point cloud data WO2022133569A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CA2020/051787 WO2022133569A1 (fr) 2020-12-22 2020-12-22 Methods and system for textured mesh reconstruction from point cloud data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CA2020/051787 WO2022133569A1 (fr) 2020-12-22 2020-12-22 Methods and system for textured mesh reconstruction from point cloud data

Publications (1)

Publication Number Publication Date
WO2022133569A1 (fr)

Family

ID=82156902

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA2020/051787 WO2022133569A1 (fr) 2020-12-22 2020-12-22 Methods and system for textured mesh reconstruction from point cloud data

Country Status (1)

Country Link
WO (1) WO2022133569A1 (fr)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013029232A1 (fr) * 2011-08-30 2013-03-07 Technicolor (China) Technology Co., Ltd. Codage de maillage texturé 3d multi-résolution
US20140354632A1 (en) * 2012-01-13 2014-12-04 Thomson Licensing Method for multi-view mesh texturing and corresponding device
US20170104980A1 (en) * 2015-02-24 2017-04-13 HypeVR Lidar stereo fusion live action 3d model video reconstruction for six degrees of freedom 360° volumetric virtual reality video
US20190213778A1 (en) * 2018-01-05 2019-07-11 Microsoft Technology Licensing, Llc Fusing, texturing, and rendering views of dynamic three-dimensional models
US20200020155A1 (en) * 2018-07-13 2020-01-16 Nvidia Corporation Virtual photogrammetry
US20200225356A1 (en) * 2019-01-11 2020-07-16 Nurulize, Inc. Point cloud colorization system with real-time 3d visualization

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20966196

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20966196

Country of ref document: EP

Kind code of ref document: A1