LU502856B1 - Automated processing of 3d data obtained from a physical site - Google Patents


Info

Publication number
LU502856B1
Authority
LU
Luxembourg
Prior art keywords
layers
point cloud
reference surface
successive
predefined
Prior art date
Application number
LU502856A
Other languages
French (fr)
Inventor
Shahriar Agaajani
Antonino Mancuso
Jean-Fabrice Pepin
Original Assignee
Space Time S A
Priority date
Filing date
Publication date
Application filed by Space Time S A filed Critical Space Time S A
Priority to LU502856A priority Critical patent/LU502856B1/en
Priority to PCT/EP2023/077031 priority patent/WO2024068915A1/en
Application granted granted Critical
Publication of LU502856B1 publication Critical patent/LU502856B1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/176 Urban or other man-made structures

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Length Measuring Devices With Unspecified Measuring Means (AREA)

Abstract

The invention is directed to a method of processing 3D data of a physical site of a given type, comprising the following steps: obtaining a 3D point cloud of an area of the physical site; identifying in the 3D point cloud at least one reference surface (S0); analysing the 3D point cloud, comprising scanning said 3D point cloud in successive first layers being transversal (T1, T2, …Tn) or parallel (P1, P2, …Pn) to the at least one reference surface (S0) and, in each of said successive first layers (T1, T2, …Tn; P1, P2, …Pn), in successive second layers being parallel (P1, P2, …Pn) or transversal (T1, T2, …Tn), respectively, to the at least one reference surface (S0); recognizing shapes in the successive first layers (T1, T2, …Tn; P1, P2, …Pn) using a data base of predefined shapes related to the type of the physical site.

Description

Description
AUTOMATED PROCESSING OF 3D DATA OBTAINED FROM A PHYSICAL
SITE
Technical field
[0001] The invention is directed to the field of data processing, more particularly 3D data processing obtained from a physical site like a building under construction, a forest, a mine, a public area, or the like.
Background art
[0002] Patent document published WO 2022/069665 A1 discloses a method of data management of the construction of a building, comprising the following steps: (a) optically 3D scanning the building with a laser scanner so as to obtain 3D data of the building; (b) storing the 3D data; (c) iterating steps (a) and (b) at different points in time T1, T2, T3, T4, ...Tn; and (d) formatting the 3D data obtained at the different points in time T1, T2, T3, T4, ...Tn so as to display said 3D data with a point of time selector enabling, upon selection, to display the 3D data at any of the different points in time T1, T2, T3, T4, ...Tn. This teaching does not however detail how to process the 3D data for facilitating their use and enhancing the information contained therein.
[0003] Patent document published US 2022/0092291 A1 discloses a system and a method for processing point cloud data by labelling objects. The method uses a 2D image of a scene for identifying objects of interest and superposes that image on the 3D point cloud of the same scene at the same time, so as to thereby label the objects of interest in the 3D point cloud. This approach is based on the consideration that automatic object recognition in 2D images is a well-understood and well-performing process, contrary to automatic object recognition in 3D data, which requires intensive hardware resources and is thereby slower and less accurate. This process is intended for autonomous vehicles where high reactivity and therefore short processing times are required.
[0004] Patent documents published US 2019/0180105 A1 and US 2020/0412926 A1 disclose methods for providing information based on construction site images and for determining image capturing parameters in construction sites from electronic records. They fail however to detail how the 3D data are analysed for determining building quality indications. Also, these methods are limited to determining building quality indications, so they fail to provide fully consultable information of the building over its construction time.
Summary of invention Technical Problem
[0005] The invention has for technical problem to overcome at least one drawback of the above cited prior art. More specifically, the invention has for technical problem to provide an automated or at least partially automated method for processing 3D data.
Technical solution
[0006] The invention is directed to a method of processing 3D data of a physical site of a given type, comprising the following steps: obtaining a 3D point cloud of an area of the physical site; identifying in the 3D point cloud at least one reference surface; analysing the 3D point cloud, comprising: scanning said 3D point cloud in successive layers being transversal or parallel to the at least one reference surface; and recognizing shapes in the successive layers using a data base of predefined shapes related to the type of the physical site.
[0007] According to a preferred embodiment, scanning the 3D point cloud in the successive layers comprises, for each of said successive layers, detecting points contained in, or located at a maximum distance from, said layer.
[0008] According to a preferred embodiment, detecting points contained in, or located at a maximum distance from, each of the successive layers comprises: subdividing the corresponding layer into tiles; for each of said tiles, detecting a number of points contained in, or located at a maximum distance from said tile; subdividing in tiles each of the tiles of the preceding step where the number of points is greater than a predetermined limit; for each of said subdivided tiles, detecting a number of points contained in, or located at a maximum distance from said subdivided tile; iterating the two preceding steps.
[0009] The predetermined limit of number of points can be dependent on, e.g. proportional to, the size of the corresponding tile. The criterion that the number of points is greater than the predetermined limit can then be concentration-based.
[0010] According to a preferred embodiment, the successive layers being transversal or parallel are successive first layers, the method further comprising scanning the 3D point cloud, in each of said first successive layers, in successive second layers being parallel or transversal, respectively, to the at least one reference surface.
[0011] According to a preferred embodiment, recognizing shapes in the successive layers comprises: detecting, in each of the successive layers, a potential shape showing a minimum degree of irregularity relative to the reference surface; comparing said detected shape with the data base of predefined shapes; in case the detected shape shows a minimum degree of similarity with one of the predefined shapes, replacing the detected shape by the predefined shape.
[0012] According to a preferred embodiment, replacing the detected shape by the predefined shape comprises replacing the points by a geometrical definition of the predefined shape and by at least one reference point of said predefined shape.
[0013] According to a preferred embodiment, the predefined shapes replacing the detected shapes are grouped in layers by categories of the predefined shapes.
[0014] According to a preferred embodiment, identifying the at least one reference surface comprises detecting points comprised between two parallel surfaces and with a density that is larger than a minimum density level, the two parallel surfaces determining the said at least one reference surface.
[0015] According to a preferred embodiment, the at least one reference surface is generally horizontal or vertical.
[0016] According to a preferred embodiment, the at least one reference surface is generally planar.
[0017] According to a preferred embodiment, obtaining a 3D point cloud of an area of the physical site comprises obtaining several 3D point sub clouds of the area and merging said several 3D point sub clouds.
[0018] According to a preferred embodiment, obtaining several 3D point sub clouds of the area comprises using a LIDAR at different locations in said area.
[0019] According to a preferred embodiment, each of the several 3D point sub clouds is obtained with at least one reference mark in the area, and merging said several 3D point sub clouds comprises matching said at least one reference mark between said several 3D point sub clouds.
Advantages of the invention
[0020] The invention is particularly interesting in that it provides an automated or at least greatly automated method for recognizing the shapes of the objects or components present in the physical site. The replacement of the recognized shapes by modelled predefined shapes requires less computer memory and less hardware capacity, resulting also in faster consultation and manipulation. It also facilitates determining geometric relationships like distances or relative orientations between the objects or components. Also, the arrangement of the recognized shapes in layers, based on categories, provides enhanced information consultation to any user.
Brief description of the drawings
[0021] Figure 1 is a schematic perspective view of a physical site being scanned and analysed, according to a first embodiment of the invention.
[0022] Figure 2 is a representation of a 3D point cloud obtained from the physical site of figure 1, of a reference surface and of transversal first layers.
[0023] Figure 3 is a representation of the 3D point cloud of figure 2 in one of the transversal first layers, showing successive parallel second layers.
[0024] Figure 4 is a representation of the 3D point cloud of figure 3 after recognition and replacement of a shape by a predefined shape.
[0025] Figure 5 is a representation of the 3D point cloud of figure 2 after analysis.
[0026] Figure 6 is a schematic perspective view of a physical site being scanned and analysed, according to a second embodiment of the invention.
[0027] Figure 7 is a representation of a 3D point cloud obtained from the physical site of figure 6, of a reference surface and of parallel first layers.
[0028] Figure 8 is a representation of the 3D point cloud of figure 7 in one of the parallel first layers, showing successive transversal second layers.
[0029] Figure 9 is a schematic perspective view of a physical site being scanned and analysed, according to a third embodiment of the invention.
[0030] Figure 10 is a representation of a 3D point cloud obtained from the physical site of figure 9, of a reference surface and of parallel first layers, and also of the 3D point cloud in one of the parallel first layers, showing successive transversal second layers.
[0031] Figure 11 is a representation of a 3D point cloud obtained from the physical site of figure 9, of a reference surface and of transversal first layers, and also of the 3D point cloud in one of the transversal first layers, showing successive parallel second layers.
Description of an embodiment
[0032] Figures 1 to 5 illustrate a first embodiment of the invention.
[0033] Figure 1 is a schematic perspective view of a physical site to be 3D scanned and processed according to the invention. The physical site is an area of a building under construction, for instance a room 2. The room 2 comprises a floor 4 and walls made of construction blocks 6 and comprising a door passage 8. The room 2 further comprises an electric tube 10 containing electrical cables and lying horizontally on the floor 4, and a corresponding groove 12 formed in one of the walls and receiving a vertical portion of the electric tube 10 for reaching an electrical block in the wall. These different components of a room of a building under construction are exemplary and simplified for the sake of clarity. It is therefore to be understood that a room of a building under construction can comprise substantially more and/or different components. The present embodiment will be focused on the analysis and identification of the electric tube 10, being however understood that the invention applies to any other component of the room or to other types of physical site, as will be exemplified with the second and third embodiments.
[0034] Thanks to a LIDAR 14, the room 2 is 3D scanned, resulting in a 3D point cloud. In many cases, it is necessary to proceed to different scans from different points of view in the room or next to the room, for covering all wanted or useful portions of the room 2. In figure 1, the LIDAR 14 is represented at two distinct and opposed positions corresponding to two scanning positions. The resulting two 3D point sub clouds are then merged together, e.g. by stitching. To that end, one or several reference marks 16 can be provided in the room, for instance on the walls, for providing reference marks in the thus obtained 3D point sub clouds, and for superimposing said corresponding mark(s) when merging the 3D point sub clouds together.
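The merging of the 3D point sub clouds via matched reference marks can be implemented, for instance, with a rigid alignment. The sketch below is a minimal illustration, assuming the 3D coordinates of the same reference marks 16 are available in both sub clouds and in corresponding order; it uses the Kabsch algorithm (a standard rigid-registration technique, not mandated by the present description) to compute the rotation and translation that superimpose the marks before stacking the sub clouds. Function and parameter names are illustrative.

```python
import numpy as np

def rigid_transform_from_marks(marks_src: np.ndarray, marks_dst: np.ndarray):
    """Compute rotation R and translation t aligning matched reference marks
    (Kabsch algorithm). marks_src and marks_dst are (N, 3) arrays of the same
    marks seen in two sub clouds, in corresponding order."""
    c_src, c_dst = marks_src.mean(axis=0), marks_dst.mean(axis=0)
    H = (marks_src - c_src).T @ (marks_dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

def merge_sub_clouds(cloud_a, cloud_b, marks_a, marks_b):
    """Bring cloud_b into the frame of cloud_a using the shared marks, then stack."""
    R, t = rigid_transform_from_marks(marks_b, marks_a)
    return np.vstack([cloud_a, cloud_b @ R.T + t])
```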
[0035] Figure 2 illustrates in a simplified and schematic manner how the 3D point cloud, obtained for example as detailed here above in connection with figure 1, is processed in an automated or at least partially automated manner, using computing means, i.e. one or several computers.
[0036] The 3D point cloud visible in figure 2 corresponds essentially to the floor 4 and the electric tube 10 in figure 1. It is understood that the 3D point cloud obtained as detailed in connection with figure 1 can contain substantially more elements, like the blocks 6 of the wall, the door opening 8 and the groove 12 in the wall.
[0037] As this is apparent, the 3D point cloud comprises many points contained between two parallel surfaces S1 and S2 distant from each other by a limited distance, for instance less than 100 mm, preferably less than 80 mm. These points correspond to the floor 4, being for instance essentially planar. The surfaces S1 and S2 are for instance planar, being understood they need not be planar, i.e. they can be curved, preferably slightly curved. A reference surface S0 is defined between the two parallel surfaces S1 and S2. Depending on input parameters, a distance between the two parallel surfaces S1 and S2 can be predefined and the process can then iterate by trial and error with different positions and orientations relative to the 3D point cloud until a minimum concentration of points is found between the two parallel surfaces S1 and S2. Such a concentration can be a surface and/or volume concentration. The planar nature of the reference surface S0 to be identified can also be provided as an input parameter, depending on the type of physical site to be analysed and processed.
[0038] The reference surface S0 is for instance defined at a median position between the two parallel surfaces S1 and S2, being however understood that other geometric relationships between the reference surface S0 and the two parallel surfaces S1 and S2 can be used.
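By way of illustration, the trial-and-error search for the reference surface S0 described here above can be sketched as follows, under simplifying assumptions: the candidate surfaces are planes, only a few candidate orientations are tested, and the concentration criterion is a minimum fraction of points between the two parallel surfaces S1 and S2. The function name and parameters are illustrative, not part of the claimed method.

```python
import numpy as np

def find_reference_plane(points, gap=0.08, min_fraction=0.2,
                         normals=None, offsets=None):
    """Search for a pair of parallel planes separated by `gap` (e.g. 80 mm)
    that enclose a high concentration of points; the reference surface S0 is
    taken halfway between them. Returns (normal, offset of S0) or None."""
    if normals is None:                        # a few candidate orientations
        normals = [np.array([0.0, 0.0, 1.0]),  # horizontal (floor)
                   np.array([1.0, 0.0, 0.0]),  # vertical (walls)
                   np.array([0.0, 1.0, 0.0])]
    best = None
    for n in normals:
        d = points @ n                         # signed distance along the normal
        candidates = np.arange(d.min(), d.max(), gap / 2) if offsets is None else offsets
        for o in candidates:
            inside = np.count_nonzero((d >= o) & (d <= o + gap))
            frac = inside / len(points)
            if frac >= min_fraction and (best is None or frac > best[0]):
                best = (frac, n, o + gap / 2)  # S0 at the median position
    return None if best is None else (best[1], best[2])
```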
[0039] After identification of the reference surface S0, the 3D point cloud is analysed in successive first layers T1, T2, ...Tn being transversal, i.e. layers being for instance perpendicular to the reference surface S0 and distant from each other by a distance forming an incremental distance between the successive first layers T1, T2, ...Tn. Similarly to the distance between the two parallel surfaces S1 and S2, the incremental distance between the successive transversal first layers T1, T2, ...Tn can be predefined or be adjusted during processing of the 3D point cloud, for example based on the density of the points, which can vary much along the transversal direction.
[0040] In each transversal first layer T1, T2, ...Tn, the points are scanned in successive second layers being parallel to the reference surface S0, as will be detailed in connection with figure 3. The scanning process of the points along the successive transversal first layers T1, T2, ...Tn and the successive parallel second layers consists in identifying the points located in these layers, within a given tolerance, being for instance a tolerance distance relative to the corresponding layer.
[0041] Still with reference to figure 2, points corresponding to the electric tube of figure 1 are visible in the different transversal first layers T1, T2, ...Tn.
A progressive scanning via the successive transversal first layers T1, T2, ...Tn and the parallel second layers allows an automated analysis of the points and recognition of specific shapes by comparison with predefined shapes available in a database.
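A minimal sketch of this layer-wise scanning follows, assuming planar layers and a tolerance distance as described here above; points are grouped into successive first layers along one direction and, within each first layer, into second layers along the direction normal to the reference surface S0. All names and numeric values are illustrative.

```python
import numpy as np

def slice_into_layers(points, direction, step, tolerance):
    """Group points into successive layers perpendicular to `direction`
    (a unit vector), spaced by `step`; a point belongs to a layer when its
    distance to the layer plane is at most `tolerance`."""
    coord = points @ direction
    start = coord.min()
    n_layers = int(np.ceil((coord.max() - start) / step)) + 1
    layers = []
    for k in range(n_layers):
        centre = start + k * step
        layers.append(points[np.abs(coord - centre) <= tolerance])
    return layers

# Example: transversal first layers T1..Tn along x, then, inside each first
# layer, parallel second layers P1..Pn along the reference-surface normal z.
# first_layers = slice_into_layers(cloud, np.array([1.0, 0.0, 0.0]), 0.05, 0.01)
# second_layers = [slice_into_layers(L, np.array([0.0, 0.0, 1.0]), 0.02, 0.005)
#                  for L in first_layers]
```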
[0042] Figure 3 illustrates one of the transversal first layers T1, T2, ...Tn of figure 2 and the corresponding successive parallel second layers P1, P2, ...Pn briefly mentioned here above. The successive parallel second layers P1, P2, ...Pn are parallel to the reference surface S0, previously identified as described here above. The first parallel second layer P1 is the closest to the reference surface S0, whereas P2 is the next parallel second layer after the parallel second layer P1, and so on. In each of the parallel second layers, the point cloud is scanned, i.e. the points located in the corresponding parallel second layer or at given proximity thereto are identified. As this is apparent in figure 3, the point cloud in the transversal first layer T1, T2, ...or Tn forms an approximate circle, which can be easily detected by scanning the point cloud via the successive parallel second layers P1, P2, ...Pn, for instance by comparing the shape of the points with predefined shapes consulted in a database related with the nature of the physical site. For instance, the database can comprise the geometrical shapes of usual building construction materials, like electrical tubes with predetermined diameters, for example 16 mm, 22 mm and 25 mm.
[0043] While scanning the point cloud through the successive parallel second layers P1, P2, ...Pn, the points identified as forming an approximate circle are considered as a potential predefined shape because of the relative proximity of the points, i.e. successively around the approximate circumference, and because of a minimum average distance from the reference surface S0.
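The comparison with predefined circular shapes can, for instance, be performed by fitting a circle to the points of one layer and snapping the detected diameter to the nearest standard tube diameter (16 mm, 22 mm, 25 mm). The sketch below uses an algebraic (Kåsa) least-squares circle fit, which is one possible technique among others; the tolerance and the assumption of coordinates in metres are illustrative.

```python
import numpy as np

STANDARD_DIAMETERS_MM = [16.0, 22.0, 25.0]     # from the database of predefined shapes

def fit_circle(xy):
    """Algebraic least-squares circle fit (Kasa method) on (N, 2) points.
    Returns (centre, radius)."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(xy))])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    return np.array([cx, cy]), r

def match_predefined_tube(xy, tol_mm=2.0):
    """Fit a circle and snap it to the nearest standard tube diameter when the
    detected diameter lies within `tol_mm` of a predefined shape."""
    centre, r = fit_circle(xy)
    detected_d = 2 * r * 1000.0                # assuming coordinates in metres
    best = min(STANDARD_DIAMETERS_MM, key=lambda d: abs(d - detected_d))
    if abs(best - detected_d) <= tol_mm:
        return centre, best / 2000.0           # matched radius, back in metres
    return None
```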
[0044] As an alternative to scanning the point cloud through the successive parallel second layers P1, P2, ...Pn, the transversal first layer T1, T2, ...Tn illustrated in figure 2 can be divided into tiles (i), (ii), (iii), (iv), and each tile can be analysed by detecting the number of points contained in, or located at a maximum distance from, said tile. Tiles where the number of points is not greater than a predetermined limit can be left unscanned or not further analysed, whereas tiles where the number of points is greater than the predetermined limit are subdivided into further tiles. For each of these subdivided tiles, the number of points contained in, or located at a maximum distance from, said subdivided tile is determined. The above two steps can then be iterated, i.e. further subdividing those tiles where the number of points is greater than the predetermined limit and leaving unscanned or not further analysed those tiles where the number of points is not greater than the predetermined limit. Such an approach is particularly interesting for large point clouds where only restricted areas thereof are relevant, that relevancy being selected based on the point concentration. The predetermined limit is therefore advantageously related, e.g. proportional, to the size of the tile whose number of points is determined and compared.
Various criteria can be applied for stopping the above subdivision, like when the size of the tile gets less than a predetermined minimum size.
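The tile subdivision described here above can be sketched as a simple recursive procedure, assuming square tiles, a point-count limit proportional to the tile area, and a minimum tile size as stopping criterion; the function and parameter names are illustrative.

```python
import numpy as np

def subdivide(points, x0, y0, size, density_limit, min_size):
    """Recursively subdivide a square tile into four sub-tiles, keeping only
    tiles whose point count exceeds a limit proportional to the tile area.
    Returns the list of terminal tiles (x, y, size, points) worth analysing."""
    inside = points[(points[:, 0] >= x0) & (points[:, 0] < x0 + size) &
                    (points[:, 1] >= y0) & (points[:, 1] < y0 + size)]
    limit = density_limit * size * size        # concentration-based criterion
    if len(inside) <= limit:
        return []                              # left unscanned / not further analysed
    if size <= min_size:
        return [(x0, y0, size, inside)]        # stop criterion reached
    half = size / 2
    tiles = []
    for dx in (0, half):
        for dy in (0, half):
            tiles += subdivide(inside, x0 + dx, y0 + dy, half,
                               density_limit, min_size)
    return tiles
```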
[0045] With reference to figure 2, the transversal first layer T1, T2, ...Tn illustrated therein can be subdivided into four tiles (i), (ii), (iii) and (iv), where, for instance, each of the tiles (i), (ii) and (iii) contains no points or at least a number of points that is less than a given limit or threshold, whereas tile (iv) contains a number of points that is greater than that limit. That tile (iv) can then be further divided, for instance into four tiles (iv)-(1), (iv)-(2), (iv)-(3) and (iv)-(4) (delimited by the dashed lines), where only the top and lower left tiles (iv)-(1) and (iv)-(3) contain points, the top and lower right tiles (iv)-(2) and (iv)-(4) being not further analysed.
[0046] Once the subdividing into tiles is stopped, those tiles containing points or at least a minimum number of points can be analysed, either by scanning through the successive parallel second layers P1, P2, ...Pn as detailed above, or by directly analysing the points contained therein.
[0047] After recognition of a predefined shape, for instance taken from the database, the points are replaced by the predefined shape, being for instance a circle with a given radius r or diameter.
[0048] Figure 4 illustrates the result of the above scanning of one of the transversal first layers T1, T2, ...or Tn, consisting in replacing the points, or superimposing said points, with the recognized predefined shape. Advantageously, that predefined shape is provided with a geometrical definition, e.g. the coordinates of its centre and the radius r, so as to occupy a minimum memory space and be thereby very quickly retrievable.
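As an illustration of such a geometrical definition, a recognized circle can be stored as a small record holding only its reference point and radius instead of the raw points of the layer; the field names and the category label below are assumptions made for the example, not prescribed by the description.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class RecognizedCircle:
    """Memory-light replacement for the point cluster of one layer:
    a predefined circular shape given by its geometric definition."""
    centre: Tuple[float, float, float]             # reference point of the predefined shape
    radius: float                                  # e.g. a standard tube radius, in metres
    layer_index: int                               # transversal first layer it was found in
    categories: Tuple[str, ...] = ("electrical equipment",)
```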
[0049] Figure 5 illustrates the result of scanning the point cloud through the successive transversal first layers T1, T2, ...Tn and through the successive parallel second layers P1, P2, ...Pn. The result is a recognition and modelling of the electric tube 10, being for instance a standardized electrical tube containing electrical wires, which is modelled by a circular envelope 10.1 with the same radius r as the effective electrical tube 10 in the point cloud and with a centre that extends along a main axis 10.2 that is not straight, i.e. curved. That main axis 10.2 can be modelled by fitting a curve on the various centres obtained by the shape recognition and replacement by predefined shapes. The curve fitting can be for example fitting a polynomial function with least squares. It is however understood that other curve fittings, as such well known to the skilled person, can be used.
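The polynomial least-squares curve fitting of the main axis 10.2 through the circle centres can be sketched as follows, assuming one polynomial per coordinate over a simple curve parameter; the degree and the parameterisation are illustrative choices.

```python
import numpy as np

def fit_axis(centres, degree=3):
    """Fit the (possibly curved) main axis through the circle centres found in
    the successive transversal first layers, using a least-squares polynomial
    per coordinate as a function of a parameter s in [0, 1]."""
    centres = np.asarray(centres)                   # (n_layers, 3)
    s = np.linspace(0.0, 1.0, len(centres))         # simple parameterisation
    polys = [np.polyfit(s, centres[:, k], degree) for k in range(3)]

    def axis(t):
        """Evaluate the fitted axis at parameter t in [0, 1]."""
        return np.array([np.polyval(p, t) for p in polys])

    return axis

# axis = fit_axis(circle_centres)
# midpoint_of_tube = axis(0.5)
```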
[0050] It results that the recognized and modelled electrical tube 10, for instance by a circular predefined envelope 10.1 extending along a main axis 10.2, shows a standardized shape and is a close approximation of the effective electrical tube. That predefined-shape-based model is particularly advantageous in that it requires a very limited memory capacity and can be more easily exploited later on for determining distances relative to other objects or components of the physical site.
[0051] The above description of the recognition and replacement of objects or components of the physical site illustrated in figure 1 was focused on the electrical tube 10. It is however understood that the same reasoning applies to the other objects or components, like the groove 12 in the wall, intended to receive an extension of the electrical tube 10 running on the floor 4. For example, the wall that comprises the electrical groove 12 can be identified as a reference surface, being for instance perpendicular to the reference surface S0 referred to in figures 2 to 5 and corresponding to the floor 4. The successive transversal first layers can be along a vertical direction corresponding to the main direction of the electrical groove 12, and the successive parallel second layers can then be along a direction that extends along the depth of the electrical groove 12, perpendicular to the wall and reference surface. The result can then be a predefined U-shaped envelope extending along a main axis being for instance essentially vertical and straight, being however understood that it can be non-vertical and also non-straight.
[0052] Also, the blocks 6 and the mortar joints between the blocks can be detected and modelled by the above method. Similarly to the analysis of the electrical groove 12, each wall can be identified as a reference surface and the point cloud forming the wall can be analysed as detailed here above, i.e. by scanning the point cloud through successive transversal first layers and, in each of said transversal first layers, through successive parallel second layers. In each transversal first layer, the recess formed by the mortar joint will be recognized as a predefined shape and replaced by said predefined shape so as to model said mortar joints.
Incidentally, the presence of construction blocks of a given size will be recognized as corresponding to a predefined shape and replaced by said predefined shape. Consequently, the walls will be modelled based on predefined shapes of construction blocks and mortar joints.
[0053] In the above description, the identification of different reference surfaces has been considered, for instance a first one S0 corresponding to the floor and others corresponding to the different walls. This is however not mandatory. The whole 3D point cloud can be analysed from a single reference surface, e.g. the reference surface S0 as illustrated in figures 2-5. In that case, the scanning through the successive transversal first layers T1, T2, ...Tn as illustrated in figure 2 reaches the wall that comprises the electrical groove 12 (figure 1). When reaching that wall, many points are present in the corresponding transversal first layer, which can then be recognized as a predefined shape being for instance a plane. However, through the following transversal first layers, the points forming the relief of the wall, i.e. the mortar joints between the construction blocks, if any, and the electrical groove 12, will be present and can be recognized as predefined shapes being for instance recesses or grooves. The accuracy of the shape recognition can be less than when the transversal first layers cross-cut a main direction of the shape to be recognized.
[0054] During analysis of the point cloud, the selection of the identified reference surfaces, for instance those corresponding to the floor and the walls in the exemplary physical site of figure 1, can be made manually depending on the various parameters. Also, the same point cloud can be analysed several times i.e., iteratively, starting from each of the different identified reference surfaces. The results of these iterated analyses can be merged, meaning that if an object or component of the physical site could not be recognized by one of the analyses, based on one of the reference surfaces, it might be recognized by another one of the analyses, based on another one of the reference surfaces.
[0055] Figures 6 to 8 illustrate another embodiment of the invention, where the physical site is a forest.
[0056] As illustrated in figure 6, the physical site, being a forest or piece of forest 102, comprises essentially a ground 104, which is potentially uneven and even rough, and several trees 110.
[0057] Similarly to the first embodiment where the physical site is a construction site, the forest 102 is scanned by a LIDAR 114 so as to obtain a point cloud. As this is apparent, the LIDAR 114 is carried by a robot, for instance a quadruped robot. Such an autonomous device is particularly suitable for moving over rough terrain like in a forest. It is to be understood that other autonomous devices carrying or integrating the LIDAR can be considered. Similarly, the point cloud can be obtained by scanning the forest 102 under different scanning sectors, by changing the orientation of the LIDAR 114 before each new scanning. The resulting different 3D point sub clouds can then be merged to form a final or complete 3D point cloud. Depending on the extent of the area to be scanned, a single scanning process can be sufficient in certain cases whereas in others, several scans are necessary.
[0058] In figure 7, the 3D point cloud is analysed by first identifying at least one reference surface, for instance the reference surface S0 that is comprised between the two parallel surfaces S1 and S2. As this is apparent, the two parallel surfaces S1 and S2 and the reference surface S0 are not planar.
They correspond to the ground 104 of the forest 102 (figure 6). The reference surface S0 corresponds to a high concentration of points between the two parallel surfaces S1 and S2. Such an approach is advantageous for avoiding perturbations by irregularities like bushes and young trees.
[0059] Still with reference to figure 7, the analysis of the 3D point cloud is made by scanning through successive parallel first layers P1, P2, ...Pn, i.e. layers that are locally parallel to the reference surface S0. As this is apparent, the parallel first layers P1, P2, ...Pn cross-cut the trunks of the trees 110 (figure 6).
[0060] As illustrated in figure 8, each of the successive parallel first layers P1, P2, ...Pn is scanned through successive transversal second layers T1, T2, ...Tn, i.e. layers that are locally transversal to the reference surface S0. This will reveal points in the point cloud forming a shape resembling a circle. The distances between the different points forming the contour of the tree trunk are detected and compared with predefined shapes in a database which can be related to the type of physical site. The approximate circular shape is recognized and can be replaced by a modelled shape like a circle, or a more complex shape as a tree trunk can show, notably at locations where there is a connection to one or several branches. Also, a tree trunk can show an elongate recess or different other complex but specific shapes.
[0061] The successive scanning through the parallel first layers P1, P2, ...Pn and, for each of said parallel first layers, through the successive transversal second layers T1, T2, ...Tn, enables the ground 104 and the trunks of the trees 110 (figure 6) to be modelled by mathematical functions, which have the advantage of being substantially lighter in size (for memory) and being better manipulable for detecting various information, like the volume, mass, relative distance, growth, etc. of the trees.
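As an example of information that becomes easy to derive from such modelled trunks, the volume of a trunk can be approximated by stacking circular discs, one per parallel first layer, using the fitted radii and the layer spacing; the sketch below is illustrative and assumes metres as unit.

```python
import numpy as np

def trunk_volume(radii_per_layer, layer_spacing):
    """Approximate trunk volume by summing circular discs, one per parallel
    first layer: V = sum(pi * r_k**2) * spacing."""
    radii = np.asarray(radii_per_layer)
    return float(np.sum(np.pi * radii**2) * layer_spacing)

# e.g. trunk_volume([0.21, 0.20, 0.19, 0.18], layer_spacing=0.5)  # result in m^3
```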
[0062] The alternative or preliminary tiling described in connection with the first embodiment applies here also.
[0063] Figures 9 to 11 illustrate a third embodiment of the invention, where the physical site is a mine.
[0064] As illustrated in figure 9, the physical site being a mine, for instance a mine gallery 202, comprises essentially a ground 204 which is potentially uneven and even rough, and a series of arches 210.
[0065] Similarly to the first embodiment where the physical site is a construction site, the mine gallery 202 is scanned by a LIDAR 214 so as to obtain a point cloud. As in figure 6, the LIDAR 214 is carried by an autonomous device or robot, being for instance a quadruped robot, being understood that other autonomous devices, like a drone for example, can be used.
Similarly, the point cloud can be obtained by scanning the mine gallery 202 under different scanning sectors, by changing the orientation of the
LIDAR 214 before each new scanning. The resulting different 3D point sub clouds can then be merged to form a final or complete 3D point cloud.
Depending on the extent of the area to be scanned, a single scanning process can be sufficient in certain cases whereas in others, several scans are necessary.
[0066] In figure 10, the 3D point cloud is analysed by first identifying at least one reference surface, for instance the reference surface S0 that is comprised between the two parallel surfaces S1 and S2. As this is apparent, the two parallel surfaces S1 and S2 and the reference surface S0 are not planar.
They correspond to the ground 204 of the mine gallery 202 (figure 9). The reference surface S0 corresponds to a high concentration of points between the two parallel surfaces S1 and S2. Such an approach is advantageous for avoiding perturbations by irregularities like very locally protruding stones or the like.
[0067] In figure 10, the analysis of the 3D point cloud is made by scanning through successive parallel first layers P1, P2, ...Pn, i.e. layers that are locally parallel to the reference surface S0. As this is apparent, the parallel first layers P1, P2, ...Pn cross-cut the essentially vertical pillars of the arches 210 (figure 9). Each of the successive parallel first layers P1, P2, ...Pn is scanned through successive transversal second layers T1, T2, ...Tn, i.e. layers that are locally transversal to the reference surface S0. This will reveal points in the point cloud forming a shape resembling a rectangle and corresponding to the cross-section of the essentially vertical pillars of the arch 210.
[0068] Figure 11 illustrates an alternative way of scanning the 3D point cloud of the mine gallery. The identification of the reference surface is identical to that of figure 10. Contrary to the scanning illustrated in figure 10, the scanning is achieved through successive transversal first layers T1, T2, ...Tn, i.e. layers that are locally transversal to the reference surface S0. Further, in each of the transversal first layers T1, T2, ...Tn, the point cloud is scanned through successive parallel second layers P1, P2, ...Pn. This reveals points in the point cloud forming a shape corresponding to the profile of the essentially vertical pillars of the arch 210.
[0069] The alternative or preliminary tiling described in connection with the first embodiment applies here also.
[0070] In a general manner, the shapes recognised and replaced by predefined shapes are arranged in layers per category. For example, in the case of the physical site being a construction site like a room under construction, as illustrated in figure 1, the detected shapes and corresponding objects can be organized in different layers based on categories like electrical equipment, masonry, flooring, etc. It is important to note that each recognized shape can be assigned to different categories. As a matter of example, the electric tube 10 in figure 1 can be categorised as being electrical equipment and, in parallel, as belonging to the flooring. On the contrary, the electrical groove 12 in figure 2 would belong to the electrical equipment and to the masonry, but not to the flooring. This arrangement in layers is particularly advantageous in that it provides enhanced access to information of the physical site, notably over time, since the above obtention and analysis of 3D point clouds can be done at different times, so as to provide particularly useful and exploitable information about the physical site.
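A minimal sketch of this arrangement in layers per category follows; the shape records and category names are illustrative, and a single shape can appear in several category layers, as for the electric tube 10 categorised both as electrical equipment and flooring.

```python
from collections import defaultdict

def group_by_category(recognized_shapes):
    """Arrange recognized shapes in layers per category; a shape may belong to
    several categories (e.g. 'electrical equipment' and 'flooring')."""
    layers = defaultdict(list)
    for shape in recognized_shapes:
        for category in shape["categories"]:
            layers[category].append(shape)
    return dict(layers)

# Example, with hypothetical shape records:
# tube   = {"name": "electric tube 10",    "categories": ["electrical equipment", "flooring"]}
# groove = {"name": "electrical groove 12", "categories": ["electrical equipment", "masonry"]}
# layers = group_by_category([tube, groove])
# layers["electrical equipment"]  # -> both shapes
```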

Claims (13)

Claims
1. Method of processing 3D data of a physical site (2; 102; 202) of a given type, comprising the following steps: obtaining a 3D point cloud of an area of the physical site (2; 102; 202); identifying in the 3D point cloud at least one reference surface (S0); analysing the 3D point cloud, comprising: scanning said 3D point cloud in successive layers being transversal (T1, T2, ...Tn) or parallel (P1, P2, ...Pn) to the at least one reference surface (S0); and recognizing shapes in the successive layers (T1, T2, ...Tn; P1, P2, ...Pn) using a data base of predefined shapes related to the type of the physical site.
2. Method according to claim 1, wherein scanning the 3D point cloud in the successive layers (T1, T2, ...Tn; P1, P2, ...Pn) comprises, for each of said successive layers (T1, T2, ...Tn; P1, P2, ...Pn), detecting points contained in, or located at a maximum distance from, said layer (T1, T2, ...Tn; P1, P2, ...Pn).
3. Method according to claim 2, wherein detecting points contained in, or located at a maximum distance from, each of the successive layers (T1, T2, ...Tn; P1, P2, ...Pn) comprises: subdividing the corresponding layer into tiles ((i), (ii), (iii), (iv)); for each of said tiles ((i), (ii), (iii), (iv)), detecting a number of points contained in, or located at a maximum distance from said tile; subdividing in tiles ((iv)-(1), (iv)-(2), (iv)-(3), (iv)-(4)) each of the tile(s) ((iv)) of the preceding step where the number of points is greater than a predetermined limit; for each of said subdivided tiles ((iv)-(1), (iv)-(2), (iv)-(3), (iv)-(4)), detecting a number of points contained in, or located at a maximum distance from said subdivided tile; iterating the two preceding steps.
4. Method according to one of claims 1 and 2, wherein the successive layers (T1, T2, ...Tn; P1, P2, ...Pn) being transversal or parallel are successive first layers, the method further comprising scanning the 3D point cloud, in each of said successive first layers (T1, T2, ...Tn; P1, P2, ...Pn), in successive second layers being parallel (P1, P2, ...Pn) or transversal (T1, T2, ...Tn), respectively, to the at least one reference surface (S0).
5. Method according to any one of claims 1 to 4, wherein recognizing shapes in the successive layers (T1, T2, ...Tn; P1, P2, ...Pn) comprises: detecting, in each of the successive layers (T1, T2, ...Tn; P1, P2, ...Pn), a potential shape showing a minimum degree of irregularity relative to the reference surface (S0); comparing said detected shape with the data base of predefined shapes; in case the detected shape shows a minimum degree of similarity with one of the predefined shapes, replacing the detected shape by the predefined shape.
6. Method according to claim 5, wherein replacing the detected shape by the predefined shape comprises replacing the points by a geometrical definition of the predefined shape and by at least one reference point of said predefined shape.
7. Method according to one of claims 5 and 6, wherein the predefined shapes replacing the detected shapes are grouped in layers by categories of the predefined shapes.
8. Method according to any one of claims 1 to 7, wherein identifying the at least one reference surface (S0) comprises detecting points comprised between two parallel surfaces (S1, S2) and with a density that is larger than a minimum density level, the two parallel surfaces determining the said at least one reference surface (S0).
9. Method according to any one of claims 1 to 8, wherein the at least one reference surface (S0) is generally horizontal or vertical.
10. Method according to any one of claims 1 to 9, wherein the at least one reference surface (S0) is generally planar.
11. Method according to any one of claims 1 to 10, wherein obtaining a 3D point cloud of an area of the physical site comprises obtaining several 3D point sub clouds of the area and merging said several 3D point sub clouds.
12. Method according to claim 11, wherein obtaining several 3D point sub clouds of the area comprises using a LIDAR at different locations in said area.
13. Method according to one of claims 11 and 12, wherein each of the several 3D point sub clouds is obtained with at least one reference mark (16) in the area, and merging said several 3D point sub clouds comprises matching said at least one reference mark between said several 3D point sub clouds.
LU502856A 2022-09-29 2022-09-29 Automated processing of 3d data obtained from a physical site LU502856B1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
LU502856A LU502856B1 (en) 2022-09-29 2022-09-29 Automated processing of 3d data obtained from a physical site
PCT/EP2023/077031 WO2024068915A1 (en) 2022-09-29 2023-09-29 Automated processing of 3d data obtained from a physical site

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
LU502856A LU502856B1 (en) 2022-09-29 2022-09-29 Automated processing of 3d data obtained from a physical site

Publications (1)

Publication Number Publication Date
LU502856B1 (en) 2024-04-02

Family

ID=83448042

Family Applications (1)

Application Number Title Priority Date Filing Date
LU502856A LU502856B1 (en) 2022-09-29 2022-09-29 Automated processing of 3d data obtained from a physical site

Country Status (2)

Country Link
LU (1) LU502856B1 (en)
WO (1) WO2024068915A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050216237A1 (en) * 2004-03-10 2005-09-29 Adachi Jeffrey M Identification of 3D surface points using context-based hypothesis testing
US20190180105A1 (en) 2018-02-17 2019-06-13 Constru Ltd System and method for providing information based on construction site images
US20200412926A1 (en) 2019-09-14 2020-12-31 Ron Zass Determining image capturing parameters in construction sites from electronic records
US20220092291A1 (en) 2020-09-24 2022-03-24 Argo AI, LLC Methods and systems for labeling lidar point cloud data
WO2022069665A1 (en) 2020-09-30 2022-04-07 Space Time S.A. Data management of a building construction over time

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LAFARGE FLORENT ET AL: "A Hybrid Multi-View Stereo Algorithm for Modeling Urban Scenes", IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 18 April 2013 (2013-04-18), XP093036213, Retrieved from the Internet <URL:https://hal.inria.fr/hal-00759261/document> [retrieved on 20230330], DOI: 10.1109/TPAMI.2012.84 *
XIA SHAOBO ET AL: "Geometric Primitives in LiDAR Point Clouds: A Review", IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, IEEE, USA, vol. 13, 30 January 2020 (2020-01-30), pages 685 - 707, XP011773696, ISSN: 1939-1404, [retrieved on 20200220], DOI: 10.1109/JSTARS.2020.2969119 *

Also Published As

Publication number Publication date
WO2024068915A1 (en) 2024-04-04

Legal Events

Date Code Title Description
FG Patent granted

Effective date: 20240402