WO2006027339A2 - Method and system for 3D scene change detection


Info

Publication number
WO2006027339A2
Authority
WO
WIPO (PCT)
Prior art keywords
means
comprises
data
step
model
Application number
PCT/EP2005/054335
Other languages
French (fr)
Other versions
WO2006027339A3 (en)
Inventor
Vitor Sequeira
Original Assignee
The European Community, Represented By The European Commission
Priority to LU91099
Priority to LU91177 (LU91177A2)
Priority to US60/688,642
Application filed by The European Community, Represented By The European Commission filed Critical The European Community, Represented By The European Commission
Publication of WO2006027339A2
Publication of WO2006027339A3


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G06T 7/001 Industrial image inspection using an image reference approach
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06K RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 9/00 Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K 9/00624 Recognising scenes, i.e. recognition of a whole field of perception; recognising scene-specific objects
    • G06K 9/00664 Recognising scenes such as could be captured by a camera operated by a pedestrian or robot, including objects at substantially different ranges from the camera
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G06T 7/521 Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light

Abstract

A method for 3D scene change detection comprising the steps of scanning a scene to be verified by means of a laser scanning device of a data acquisition system; constructing a 3D model from said scanned data; and comparing the constructed 3D model with a reference model. A system for carrying out the above method.

Description

Method and System for 3D Scene Change Detection

The present invention relates to a method and system for 3D scene change detection.

Such scene change detection is usually carried out to detect changes made in a given installation, or to track the progression of construction work in a new plant.

A general problem in various industries is to detect changes in scenarios, including but not limited to:
- inspection of scenarios;
- change detection of items;
- volume changes of items;
- area changes, changes of a selected area;
- movement of objects from one scan to another; and
- any movements or changes in arbitrary scenarios:
  o small scale (rooms, objects, etc.);
  o large scale (open areas, city squares, interiors of buildings); and
  o huge scale (entire city centers, manufacturing plant areas, nuclear plant areas, open areas).

The objects or volumes that need to be analyzed or modeled range from a few centimeters in size, i.e. small scale, to several square kilometers, i.e. urban areas, industrial plant areas or the like.

Accurate scene change detection is of particular importance in Nuclear Security. Indeed, in order to ensure adherence to the Nuclear Non-Proliferation Treaty (NPT) obligations, countries are required to declare design information on all new and modified facilities which are under safeguards, and to ensure that the accuracy and completeness of the declaration is maintained for the life of the facility. It is the obligation of the United Nations' International Atomic Energy Agency (IAEA) to verify that the design and purpose of the "as-built" facility are as declared and that they continue to be correct. These activities are referred to as Design Information Examination and Verification (DIE/DIV) and can be divided into three steps: 1) examination of the declared design documents;

2) collection of information on the "as-built" facility using various methodologies; and

3) comparison of the "as-built" facility with the declared information.

Although methodologies have been available for DIV, they have not provided the level of continuity of knowledge needed for the lifetime of the facility.

The DIV task is one of the main challenges that International Nuclear Safeguards organizations have to face. The size of some facilities as well as the complexity of their design and process poses an insurmountable challenge when considering 100% verification before the facility comes into operation.

As an in-depth verification of all areas has to date been beyond the Inspectorates' resources, a structured, methodical approach has been taken, prioritizing equipment, structures and activities and randomizing the lower-priority items. Even with prioritized tasks and the application of a random approach, the verification activities, especially cell and piping verification, remain tedious and costly. The fact that DIV activities must take place over several years presents additional problems, the issue of maintaining continuity of knowledge of previously verified equipment and structures being without doubt the most important one, not only during the construction phase but also for the entire life of the plant.

Some systems exist for detecting changes in objects. However, such systems are designed for small objects that are moved into a laboratory for analysis. It is clear that such systems cannot be used for 3D change detection of scenes. In the context of the present document, a "scene" should be understood to be a large area, such as a room, a building, a landscape or a city.

It is therefore an object of the present invention to propose an improved method and system for 3D scene change detection, wherein the detection can preferably be carried out faster and more accurately. It is a further object of the invention to provide an accurate presentation of a 3D model of a scene and of the detected changes.

The present invention concerns a method and a system for 3D scene change detection of small to large areas, including urban areas, industrial plants and other large areas. The described system is an integrated system: the scanning part comprises a number of hardware components, and a software scan management system is included that is capable of processing and hosting the acquired data.

The method according to the present invention can be divided into three distinctive phases:

• Building a 3D reference model by acquiring multiple scans of the scene. The quality of the DIV activities is highly dependent on how accurately and realistically the 3D model documents the "as-built" plant. As such, the reference model should preferably be acquired under the best possible conditions, including a) high spatial resolution, b) low measurement noise and c) multiple views to cover possible occlusions. The number of required scans depends on the complexity of the scene.

• Initial verification of the created 3D model cells against the CAD models or the engineering drawings provided by the plant operator. The process can be fully automatic. If CAD models are unavailable, the software can be provided with tools allowing the measurement of distances for verification of lengths, heights, pipe diameters, etc.

• Re-verification: new scans of the selected area are taken and compared with the reference model in order to detect any differences with respect to the initial verification. The automatically detected changes can be further analysed by an operator. The re-verification phase can occur at any time after the reference model is constructed.

The re-verification stage can be divided into three substages, comprising input, inspection and report.

The input is a previously created reference model, where the reference model can be externally imported data or can derive from previously acquired scans, together with the newly acquired 3D data.

The inspection is preferably based on searching for the closest triangle of the reference model from each point of the scan. A naive search in the reference model has linear time complexity in the number of triangles. To speed up the process, all triangles in a reference model are added to a spatial search tree (e.g. an octree). This allows pre-computing regions where triangles are localized in a hierarchical tree, with which the closest-triangle search becomes of logarithmic time complexity. During inspection, for each 3D point of the model under inspection, the distance to the closest triangle of the reference model is computed: the closest triangle among all of them is found, and the closest point and shortest distance, d, are computed. The result is a file that stores the shortest distance found for each point, so it is not necessary to re-compute the inspection each time.
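The following sketch illustrates this inspection step in Python. It is a minimal illustration, not the patented implementation: a KD-tree over triangle centroids (querying a fixed number of candidate triangles per point) stands in for the octree described above, and all function names are ours.

```python
import numpy as np
from scipy.spatial import cKDTree

def point_triangle_distance(p, a, b, c):
    """Exact Euclidean distance from point p to triangle (a, b, c)."""
    ab, ac, ap = b - a, c - a, p - a
    d1, d2 = ab @ ap, ac @ ap
    if d1 <= 0 and d2 <= 0:
        return np.linalg.norm(p - a)                        # vertex region a
    bp = p - b
    d3, d4 = ab @ bp, ac @ bp
    if d3 >= 0 and d4 <= d3:
        return np.linalg.norm(p - b)                        # vertex region b
    vc = d1 * d4 - d3 * d2
    if vc <= 0 and d1 >= 0 and d3 <= 0:
        return np.linalg.norm(p - (a + d1 / (d1 - d3) * ab))   # edge region ab
    cp = p - c
    d5, d6 = ab @ cp, ac @ cp
    if d6 >= 0 and d5 <= d6:
        return np.linalg.norm(p - c)                        # vertex region c
    vb = d5 * d2 - d1 * d6
    if vb <= 0 and d2 >= 0 and d6 <= 0:
        return np.linalg.norm(p - (a + d2 / (d2 - d6) * ac))   # edge region ac
    va = d3 * d6 - d5 * d4
    if va <= 0 and (d4 - d3) >= 0 and (d5 - d6) >= 0:
        t = (d4 - d3) / ((d4 - d3) + (d5 - d6))
        return np.linalg.norm(p - (b + t * (c - b)))        # edge region bc
    v, w = vb / (va + vb + vc), vc / (va + vb + vc)         # face region
    return np.linalg.norm(p - (a + v * ab + w * ac))

def inspect(points, vertices, faces, k=8):
    """Shortest distance d from each scan point to the reference mesh."""
    tris = vertices[faces]                                  # (m, 3, 3) triangles
    tree = cKDTree(tris.mean(axis=1))                       # centroids as search keys
    k = min(k, len(tris))
    dists = np.empty(len(points))
    for i, p in enumerate(points):
        _, idx = tree.query(p, k=k)                         # k candidate triangles
        dists[i] = min(point_triangle_distance(p, *tris[j])
                       for j in np.atleast_1d(idx))
    return dists
```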

The report of the inspection results can be visualized in different ways to aid an operator: e.g. pseudo-coloring based on distance, color-coding with an alarm level, or logging of information.
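A minimal sketch of such a pseudo-coloring report stage; the green-to-red ramp and the 5 cm alarm level are arbitrary example choices, not values from the patent.

```python
import numpy as np

def pseudo_color(dists, alarm=0.05):
    """Green-to-red ramp by distance d; pure red at or above the alarm level."""
    t = np.clip(dists / alarm, 0.0, 1.0)[:, None]
    colors = np.hstack([t, 1.0 - t, np.zeros_like(t)])   # RGB in [0, 1]
    changed = dists > alarm                              # mask for the log/report
    return colors, changed
```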

According to an embodiment of the present invention, the reconstructed model is based on a weighted integration of all available data, based on sensor-specific parameters such as noise level, accuracy, inclination and reflectivity of the target, and spatial distribution of points. The geometry is robustly reconstructed with a volumetric approach. Once registered and weighted, all data is re-sampled in a multi-resolution distance field using out-of-core techniques. The final mesh is extracted by contouring the iso-surface with a feature-preserving dual contouring algorithm. In the process of registering data for large areas, a number of approaches have been investigated. Large outdoor areas have been reconstructed based on laser range finders mounted in cars [H. Zhao, R. Shibasaki, "Surface Modeling of Urban 3D Objects From Vehicle-borne Laser Range Data", Proc. of Photogrammetric Computer Vision, Sept. 2002, Graz, Austria; G. Bostrom, M. Fiocco, D. Puig, A. Rossini, J. G. M. Goncalves, V. Sequeira, "Acquisition, Modelling and Rendering of Very Large Urban Environments", in Proc. 2nd Int. Symposium on 3D Data Processing, Visualization & Transmission (3DPVT 2004), Thessaloniki, Greece, Sept. 2004]. Merging this acquired facade information with DSMs has been investigated by Früh et al. [C. Früh and A. Zakhor, "An Automated Method for Large-Scale, Ground-Based City Model Acquisition", International Journal of Computer Vision 60(1), 5-24, 2004]. Their approach is to use an aerial scan/image as a master to which the terrestrial information is registered. Iavarone et al. [A. Iavarone and D. Vagners, "Sensor Fusion: Generating 3D by combining airborne and tripod-mounted LIDAR data", International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXIV-5/W10, 2003] have presented techniques for merging tripod mounted laser range scans with aerial scans to reconstruct an outdoor area. The present approach differs from the previously mentioned contributions in that we make use of an accurate terrestrial model on which we register the aerial scan. This approach is more general in that it does not require dense matching between the terrestrial model and the aerial data.

Previous approaches to the fusion of data were based on mesh stitching of facades and rooftops [C. Früh and A. Zakhor, "Constructing 3D City Models by Merging Ground-Based and Airborne Views", in IEEE Computer Graphics and Applications, Nov/Dec 2003, pp. 52-61], but that approach is very specialized for that task. Among all, the volumetric techniques are the best, but they are very resource demanding because of their cubic complexity in space and time. For the fusion part according to the present method, a general technique has been developed that robustly merges the data from any source with an appropriate weight and produces a seamless mesh.

The work of Zhao et al. [H. Zhao, R. Shibasaki, "Surface Modeling of Urban 3D Objects From Vehicle-borne Laser Range Data", Proc. of Photogrammetric Computer Vision, Sept. 2002, Graz, Austria] is an example of volumetric techniques applied to city modeling. They use a conservative, space-carving-like approach, based on a uniform grid and mesh extraction with Marching Cubes (MC). This technique is inspired by the volumetric method of [B. Curless, M. Levoy: "A Volumetric Method for Building Complex Models from Range Images", SIGGRAPH 1996: 303-312]. The Adaptive Distance Fields of [S. Frisken, R. Perry, A. Rockwood, T. Jones: "Adaptively sampled distance fields: a general representation of shape for computer graphics", SIGGRAPH 2000: 249-254] sample the data on an adaptive grid rather than a uniform one, exploiting multi-resolution and thus concentrating the samples only where really needed. These techniques are robust in terms of noise sensitivity. However, unless high grid resolutions are used, sharp features of the surface cannot be reconstructed, because the normal of the original surface is not considered. Directed distance fields sample the distance function on the edges of the grid rather than on the corners. The Extended MC [L. Kobbelt, M. Botsch, U. Schwanecke, Hans-Peter Seidel: "Feature sensitive surface extraction from volume data", SIGGRAPH 2001: 57-66] is a contouring algorithm that uses Hermite data (intersection plus normal) sampled on the edges to create additional feature vertexes inside the cells of the grid, but the underlying MC still produces dense meshes. Dual Contouring (DC) [T. Ju, F. Losasso, S. Schaefer, J. Warren: "Dual contouring of hermite data", SIGGRAPH 2002: 339-346] also uses Hermite data to place representative feature vertexes inside the cells, and creates a mesh with a connectivity that is dual to that of MC, which tends to be better. All these latter techniques have been demonstrated to work well, however only with noiseless models, like CAD models.

The present invention, however, proposes the implementation of a directed distance field octree with DC meshing applied to real-world 3D scans. To treat the different scan resolutions and spatial extents, the present method uses a direct multi-resolution voxelization of the range surfaces to sample the data adaptively, instead of a uniform grid.

The present method and system for 3D scene change detection proves to be effective in advanced surveillance and in design information examination and verification of very large plants. Typical plants are located on sites of a few square kilometres, with tens of buildings housing process and storage facilities. The method and system have been successfully tested in a large reprocessing plant, where they allowed rapid and accurate Design Information Examinations (DIE) and Design Information Verifications (DIV) to be carried out in a relatively complex facility faster and more accurately than had been possible in the past. Outdoor applications, such as verification of the facility buildings or of modifications to a site, have also been successfully tested. The present invention also relates to advantageous uses of such a method or system. The system can be used in many situations to perform 3D scene change detection. Objects moved in, removed from or added to a scene are detected. Objects deformed, scaled or modified in other ways can be detected. The system can be used to create a reference model that can be used for performing measurements, analysis, forensic usage and clash detection.

The system can be used for analysis of plant or site constructions. For new constructions the system can aid in verifying that a construction is built according to the specifications and for other cases the system can be used for verifying reconstructions.

The system can also aid rescue teams in actual rescue operations and in the planning for a rescue operation as well as used for efficient simulations and training of rescue operations.

Other features and advantages of the invention will become apparent in the following detailed description of non-limiting preferred embodiments.


Preferred embodiments of the invention and additional uses thereof will now be described with reference to the accompanying drawings, in which:

Fig. 1: is a schematic representation of the tripod mounted data acquisition system;
Fig. 2: is a 3D view of a car with a vehicle mounted data acquisition system;
Fig. 3: is a schematic representation of the data acquisition system of Fig. 2;
Fig. 4: is a plan view of a car equipped with a data acquisition system, illustrating management of occlusions for the main vertical laser range scanner;
Fig. 5: is a schematic diagram illustrating the concept of a timeline;
Fig. 6: is a schematic block diagram of the modeling steps;
Fig. 7: is a schematic block diagram of the distance field creation;
Fig. 8: is a diagram of the voxelization step;
Fig. 9: is a schematic block diagram of the sequence of inspection activities;
Fig. 10: are three pictures illustrating the visualization of automatically detected differences (in bold);
Fig. 11: is a picture illustrating pseudo-coloring of a performed change detection;
Fig. 12: is a schematic top view of the tripod mounted data acquisition system of Fig. 1; and
Fig. 13: is a schematic side view of the tripod mounted data acquisition system of Fig. 1, with laser rays.


The system in accordance with the present invention is an integrated system both for the scanning part and the scan management part. The scanning part comprises a number of hardware components for acquiring the necessary data. The scan management part is capable of processing and storing the acquired data, models, scans and other information imported into the system. The scan management part preferably comprises scan modeling functionality, external data import functionality, inspection functionality and presentation functionality.

1. Data acquisition

For acquiring the data necessary to compile 3D models, a tripod mounted or a vehicle mounted data acquisition system (DAS) is proposed.

1.1. Tripod mounted DAS

1.1.1. Hardware

The tripod mounted DAS comprises a portable scanning device that can be used in various locations. Such a portable scanning device (10) is shown in Fig. 1 and preferably comprises a laser range scanner LRS (12) mounted on a tripod (14), data storage means (16) and a power supply (18) for providing the individual components with sufficient power.

In Fig. 12 and Fig. 13, the tripod mounted DAS is shown in top view and in side view respectively. In Fig. 13, a section of the measurement rays is depicted. The rays sweep consecutively through 360 degrees of tilt and do so repeatedly during the scanning phase.

Optionally, the portable scanning device (10) can further comprise at least one, preferably two, digital color cameras (20). The LRS (12) provides range data for half the hemisphere or for the parts of interest. By means of the data storage means (16), the data acquired from the LRS (12) can be stored in a preferably non-volatile memory, such as a computer hard disk, flash card or similar. The digital color cameras (20) can be used to provide coloring to the scanned regions. Color data can then be stored in association with the data acquired from the LRS (12). In addition, the portable scanning device (10) may comprise a positioning system for automatic registration of the scans. In these cases, the positioning system may be composed of an inertial orientation reference system (23) and a GPS receiver (24) for global positioning, or of a GPS receiver only. Positioning accuracy may be further improved by utilizing a network of GPS reference stations distributed over the territory. These GPS reference stations transmit correction data, which are received by the scanner-mounted GPS unit by means of a GSM modem (25).

All the individual data acquisition devices, i.e. the GPS receiver, the inertial orientation reference system, the laser range scanner and the digital color cameras, should preferably be calibrated with respect to each other. In other words, the rigid mechanical transformation between all these devices should be known. The sensors and equipment of the portable scanning device (10) are preferably interchangeable and based on commercial off-the-shelf components.

1.1.2. Data acquisition process

In order to acquire the data necessary to compile 3D models, the LRS (12), and optionally the digital color camera (20), are mounted on a tripod (14), which is then positioned in a suitable location so that the LRS (12) can "see" as much as possible. The LRS (12) should have a view as clear as possible onto the object to be scanned or modeled. A scan is then performed and the data acquired from the LRS (12) is transmitted via adequate wiring (22) to the data storage means (16) and stored therein. If a digital color camera (20) is provided, the latter then also takes images of the object that can be used to colorize the scanned object.

The above operation can be repeated from as many different locations as needed in order to cover the entire area to be modeled. This is often needed in case the object or area to be scanned is occluded by another object. By acquiring data of the object to be modeled from different locations, it is possible to "see" through the occluding object.

1.2. Vehicle mounted DAS

1.2.1. Hardware

The vehicle mounted DAS, as shown in Fig. 2, can comprise a portable scanning device (30) that is compact enough to be mounted on the roof of a personal car (32). Such a vehicle mounted portable DAS can be used for larger area modeling, e.g. urban modeling or similar.

The portable scanning device (30) of the vehicle mounted portable DAS is shown in more detail in Fig. 3. The portable scanning device (30) preferably comprises, in a basic configuration, a main vertical laser range scanner LRS (34) and a position logging unit (36a, 36b), and is linked to one or more computers (38) for maintaining the hardware components.

The main vertical LRS (34) provides the vertical data used for reconstructing a model. The LRS (34) measures the distance from the LRS (34) to the objects inspected. A full scan is performed several times per second and each scan measures several vertical angle-directions. For example, an 80 degrees LRS (34) with a range of 150 meters and an accuracy of 25 millimeters over 150 meters can be used. The position logging unit (36a, 36b) can be composed of a GPS unit and/or an inertial system (36a) with an antenna (36b). The position logging unit (36a, 36b) is used by the DAS to measure the global coordinates for the antenna (36b) and thereby for the portable scanning device (30).

Each computer (38), preferably a touch-screen tablet PC, acts as a controller for one or more hardware units. The controller stores the real-time data of its attached hardware units on disk for later processing. Depending on configuration, the DAS can be equipped with one or several interacting computers.

The DAS preferably runs on batteries, which provide autonomy of several hours.

Optionally for better performance and for some applications the DAS can be equipped with the additional components as described herebelow.

Sometimes, in specific environments where the GPS of the position logging unit (36a, 36b) is unable to provide sufficient position accuracy, such as indoor environments or areas under bridges, there is a need for other means of registering consecutive vertical scans together. In these cases a DAS-mounted horizontal LRS (40) can be used to aid in the reconstruction of the path the DAS takes.

To better manage the problem of occluded areas caused by objects such as persons, cars, trees etc., the DAS can optionally be equipped with one or two auxiliary LRS (42). These auxiliary LRS (42) are mounted at a known and fixed angle relative to the main LRS (34). The auxiliary LRS (42) register areas which are occluded by objects for the main LRS (34), as illustrated in Fig. 4. Part of the object (46) to be scanned is occluded, as shown in Fig. 4, by an obstacle (48), e.g. a pedestrian. The part of the object (46) situated behind the obstacle (48) cannot be seen or scanned by the main LRS (34). As the car (32) continues its journey along the object (46), the area behind the obstacle (48) is captured by the auxiliary LRS (42). The data collected from the auxiliary LRS (42) can then be used to fill the gaps in the data collected from the main LRS (34). This allows the DAS to "see" through obstacles.

For instant coloring of the model, one or more digital color cameras (44), preferably calibrated color CCD cameras, can be connected to the DAS. Continuously during the acquisition, the cameras (44) acquire the electromagnetic response in the visible range or in another range of interest. The data is stored on the hosting computer for later use in the re-calibration step.

1.2.2. Data acquisition process

In order to acquire the data necessary to compile 3D models, the car (32) with the portable scanning device (30) mounted thereon is driven along the road. While the portable scanning device (30) is moved along a path past the object or area to be modeled, the main LRS (34) scans, preferably at short regular intervals, the object or area to be modeled and transmits the acquired data to the computer (38). Data from the other sensors (40, 42, 44) of the DAS are stored in the same way.

The final resolution of the 3D model depends not only on the resolution of the sensors of the DAS but also on the driving speed of the car (32). With a speed of around 7 km/h, a resolution of 10 cm in the driving direction can be achieved.
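As a back-of-envelope check of that figure (assuming a profile rate of about 20 profiles per second, which is not stated in the patent):

```python
# Along-track resolution = vehicle speed / scanner profile rate.
speed_m_s = 7.0 * 1000 / 3600        # 7 km/h is about 1.94 m/s
profile_rate_hz = 20.0               # assumed profiles per second
spacing_m = speed_m_s / profile_rate_hz
print(f"{spacing_m:.3f} m")          # about 0.097 m, i.e. roughly 10 cm
```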

Depending on the type of vertical LRS (34, 42) used, the spatial resolution in the driving direction varies. Fig. 5 shows the raw data, where each vertical line strip is colored with a DAS mounted digital color camera (44). The upper parts in Fig. 5 are left with the original reflectance to illustrate the coloring of the model. The reflectance is acquired by the main vertical LRS (34).

2. Scan Modeling

Depending on the scanner type used, different scan modeling steps are needed. The acquisition hardware collects data from a number of different sources. In order to form a model, these input sources need to be treated in consecutive steps, as described in Fig. 6 a) for the tripod mounted DAS and b) for the vehicle mounted DAS. In order to facilitate quick and efficient rendering of large models, the scan is, according to the invention, converted to an octree, and the opacity of each leaf needs to be calculated. More details can be found herebelow.

2.1. Vertical profiles re-calibration (for vehicle mounted DAS)

The vertical profiles re-calibration problem consists of estimating the vehicle position and orientation for each vertical scan. All mounted vertical LRS take scan profiles in a 2D plane; the movement of the vehicle provides the third dimension necessary to obtain the 3D model.

One of the problems involved in the use of a GPS receiver is that it computes the position from the signals received from several satellites that are not geostationary. For that reason, the number and position of the available satellites can change over time, influencing the system's precision. In order to increase the precision of the GPS, the data from the horizontal LRS can be merged with previously acquired data and a global optimization can be performed.

With this kind of configuration it is possible to have centimeter-level positioning accuracy. The GPS receiver provides positions at a different rate from the scanners.

For synchronization, the position and the orientation of each scan are interpolated.

The interpolation can be based on an interpolation technique such as the Catmull-Rom local interpolating spline or Kalman filtering. The interpolation enables estimation of the vehicle position and orientation at any position along the trajectory, and thus for every vertical scan profile. The points are interpolated and averaged for each GPS point, which results in a smoothed vehicle trajectory.
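A minimal sketch of the Catmull-Rom variant, assuming evenly timed GPS fixes; timestamps and orientation interpolation are omitted, and all names are illustrative.

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Point on the Catmull-Rom segment between p1 and p2, t in [0, 1]."""
    t2, t3 = t * t, t * t * t
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t3)

def smooth_trajectory(gps_fixes, samples_per_segment=10):
    """Densify a polyline of GPS fixes into a smoothed vehicle trajectory."""
    pts = np.asarray(gps_fixes, dtype=float)
    out = []
    for i in range(1, len(pts) - 2):               # interior segments only
        for t in np.linspace(0.0, 1.0, samples_per_segment, endpoint=False):
            out.append(catmull_rom(pts[i - 1], pts[i], pts[i + 1], pts[i + 2], t))
    return np.array(out)
```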

2.2. Scan registration

In order to extend an acquired model or to cope with occlusions, alignment and fusion of data from several models is often the solution. In order to merge the data from two or more scans, one scan, denoted "MOD", needs to be aligned with the reference scan, denoted "REF". In doing so, the coordinate system of MOD is transformed into the coordinate system of REF, whereby the areas acquired of the same volume overlap.

Several methods and variations for these types of registration operations are known. In the literature (see Chen, Y. and Medioni, G., "Object Modeling by Registration of Multiple Range Images", Proc. IEEE Conf. on Robotics and Automation, 1991, and Besl, P. and McKay, N., "A Method for Registration of 3-D Shapes", Trans. PAMI, Vol. 14, No. 2, 1992), the fundamental and by far most popular technique, Iterative Closest Point (ICP), is defined and discussed.

A short description of the ICP algorithm is as follows:

1. Select a random and evenly spaced sample point cloud, r_i, from REF.

2. For all points r_i in the reference sample set, find, by some means, corresponding points, c_i, in MOD. The detection of corresponding points can be performed with different methods, e.g. closest Euclidean distance, normal projection from the reference model surface or similar.

3. For all point pairs (r_i, c_i), find the optimal rigid registration by minimizing the error function

err = Σ_i || R c_i + T - r_i ||^2

Here R denotes the searched rotation matrix and T denotes the searched translation vector.

4. Apply the currently found transformation (R, T) to MOD.

The loop from 2 to 4 is performed iteratively until a stop criterion is reached. Normally a combination of the number of iterations, total error, average error and convergence is used as stop criterion. The different techniques discussed and proposed in the literature mostly relate to minimizing the number of iterative loops needed to reach the stop criterion, as well as to finding methods to avoid local minima, which could degrade the overall result. For an ideal or near-ideal case this basic ICP works very well. When the models put into the iterative procedure are well behaved, with a small position error, or when the error over all points or surfaces is random, the result comes as close to the truth as it can: temporary errors in points are averaged out and the overall error is minimized. Situations where the above ICP algorithm yields poorer results include:

• Models with known regions having larger uncertainty.

• Models with non-uniform distribution of accuracy, for example fused models from different sensor types having different point accuracy.
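Before turning to those modifications, here is a minimal sketch of the basic (unweighted) ICP loop of steps 1 to 4, under two simplifying assumptions: the correspondence search is flipped (points are sampled from MOD and matched against a KD-tree built once over REF, which avoids rebuilding the tree each iteration), and the rigid transform is solved with the SVD-based Kabsch method rather than the quaternion method described further below.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(ref, mod):
    """Least-squares (R, T) minimizing sum_i ||R c_i + T - r_i||^2 (Kabsch)."""
    r_av, c_av = ref.mean(axis=0), mod.mean(axis=0)
    H = (mod - c_av).T @ (ref - r_av)            # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    return R, r_av - R @ c_av

def icp(ref, mod, n_samples=2000, max_iter=50, tol=1e-7, seed=0):
    """Iteratively align the point cloud MOD to the reference cloud REF."""
    rng = np.random.default_rng(seed)
    tree = cKDTree(ref)                          # static tree over REF
    mod = np.array(mod, dtype=float)
    prev_err = np.inf
    for _ in range(max_iter):
        pick = rng.choice(len(mod), min(n_samples, len(mod)), replace=False)
        sample = mod[pick]                       # step 1: random sample
        _, idx = tree.query(sample)              # step 2: closest Euclidean points
        R, T = best_rigid_transform(ref[idx], sample)   # step 3: optimal (R, T)
        mod = mod @ R.T + T                      # step 4: apply (R, T) to MOD
        err = np.mean(np.linalg.norm(ref[idx] - (sample @ R.T + T), axis=1))
        if abs(prev_err - err) < tol:            # stop criterion: convergence
            break
        prev_err = err
    return mod
```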

For these cases the ICP algorithm needs to be slightly modified. Knowing the model accuracy at every point may not always be guaranteed. However, for the algorithm to operate, a quality measure needs to be assigned to each vertex/part-surface, where a quality value q = 1.0 can be considered ideal quality and a quality value of q = 0.0 represents vertexes/surfaces which are totally randomly distributed in the model.

Taking the quality measure into account for a model represented by (r_i, q_i), the ICP algorithm, and specifically the rigid transformation calculation, can be modified as follows:

1. Select a random and evenly spaced reference sample set, (r_i, q_ri), from REF. This can be performed under the sub-requirement that the points need to be above a lowest accepted q-value, q_th. All vertexes below this value are neglected. For the rest, divide the model into a hierarchical spatial structure where each structural element, i.e. sub-volume, contains the points in its bounding box. Select evenly from all volumes, under the criterion of picking points with the highest possible q-value within each volume, but also of picking from all available volumes.

2. For all points in the reference sample set (r_i, q_ri), find, by some means, the corresponding points (c_i, q_ci) in MOD. The quality values q are denoted q_ri for the i-th point of the REF point cloud and q_ci for the MOD point cloud respectively. The selection criterion for the corresponding points is based on the Euclidean distance under the influence of the point weights.

3. For all point pairs ((r_i, q_ri), (c_i, q_ci)), find the optimal rigid registration by minimizing the error function (see below for details of finding the rotation):

e = Σ_i min(q_ri, q_ci) || (r_i - R c_i) - T ||^2

4. Find the searched translation:

T = Σ_i w_i (r_i - R c_i) / Σ_i w_i, with pair weights w_i = min(q_ri, q_ci)

5. Apply the currently found transformation (R, T) to the model to be registered, MOD.

The loop from 2 to 5 is performed iteratively until a stop criterion is reached.

Finding the rotation for a given point set having quality values:

The quality values, q, are all defined in the range [0, 1], which means that the quality for a point pair can be considered as the weight for this pair.

As described in the publication by Horn [B.K.P. Horn, "Closed-form solution of absolute orientation using unit quaternions", Journal of the Optical Society of America, Series A, Vol. 4, No. 4, pp. 629-642, Apr. 1987], the closed-form solution for finding (R, T) is obtained by minimizing the error, e, defined by

e = Σ_i min(q_ri, q_ci) || (r_i - R c_i) - T ||^2

The solution to this problem is based on finding the rotation of the relative points around the weighted average positions r_av and c_av; thereby the translation is removed from the problem and can be found in a second step.

The weighted average positions and the relative points are defined, with pair weights w_i = min(q_ri, q_ci), as:

r_av = (Σ_i w_i r_i) / (Σ_i w_i), c_av = (Σ_i w_i c_i) / (Σ_i w_i)

r'_i = r_i - r_av, c'_i = c_i - c_av, T' = T - r_av + R c_av

When working with the weighted averages and the relative points, the solution can be found by solving an eigenvalue problem for the quaternion representing the rotation. The equation to solve becomes:

[N - λ I] e_m = 0

where the correlation sums are S_xy = Σ_i min(q_ci, q_ri) c'_x,i r'_y,i, and so on for S_yy etc.

N is the symmetric 4x4 matrix

N = [ S_xx+S_yy+S_zz    S_yz-S_zy          S_zx-S_xz          S_xy-S_yx
      S_yz-S_zy         S_xx-S_yy-S_zz     S_xy+S_yx          S_zx+S_xz
      S_zx-S_xz         S_xy+S_yx          -S_xx+S_yy-S_zz    S_yz+S_zy
      S_xy-S_yx         S_zx+S_xz          S_yz+S_zy          -S_xx-S_yy+S_zz ]

The highest eigenvalue λ_m, with corresponding quaternion e_m = (e_w, e_x, e_y, e_z), represents the best found rotation. The rotation matrix can then be obtained from the quaternion by the equation

R = [ 1-2(e_y^2+e_z^2)        2(e_x e_y - e_w e_z)    2(e_x e_z + e_w e_y)
      2(e_x e_y + e_w e_z)    1-2(e_x^2+e_z^2)        2(e_y e_z - e_w e_x)
      2(e_x e_z - e_w e_y)    2(e_y e_z + e_w e_x)    1-2(e_x^2+e_y^2) ]

The missing translation T is then given by T = r_av - R c_av.
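A sketch of this weighted closed-form solution, following the equations above with pair weights w_i = min(q_ri, q_ci); this is our reading of the reconstructed formulas (the original patent drawings were not available), and the eigenvector of N with the largest eigenvalue is taken as the rotation quaternion.

```python
import numpy as np

def weighted_absolute_orientation(r, c, qr, qc):
    """R, T minimizing sum_i min(qr_i, qc_i) * ||(r_i - R c_i) - T||^2."""
    w = np.minimum(qr, qc)                            # pair weights
    r_av = (w[:, None] * r).sum(0) / w.sum()          # weighted centroids
    c_av = (w[:, None] * c).sum(0) / w.sum()
    rr, cc = r - r_av, c - c_av                       # relative points
    # S[a, b] = sum_i w_i * c'_a,i * r'_b,i  (Horn's left-to-right convention)
    S = (w[:, None, None] * cc[:, :, None] * rr[:, None, :]).sum(0)
    Sxx, Sxy, Sxz = S[0]
    Syx, Syy, Syz = S[1]
    Szx, Szy, Szz = S[2]
    N = np.array([
        [Sxx + Syy + Szz, Syz - Szy,        Szx - Sxz,         Sxy - Syx],
        [Syz - Szy,       Sxx - Syy - Szz,  Sxy + Syx,         Szx + Sxz],
        [Szx - Sxz,       Sxy + Syx,        -Sxx + Syy - Szz,  Syz + Szy],
        [Sxy - Syx,       Szx + Sxz,        Syz + Szy,         -Sxx - Syy + Szz]])
    vals, vecs = np.linalg.eigh(N)                    # N is symmetric
    ew, ex, ey, ez = vecs[:, np.argmax(vals)]         # quaternion (w, x, y, z)
    R = np.array([
        [1 - 2*(ey*ey + ez*ez), 2*(ex*ey - ew*ez),     2*(ex*ez + ew*ey)],
        [2*(ex*ey + ew*ez),     1 - 2*(ex*ex + ez*ez), 2*(ey*ez - ew*ex)],
        [2*(ex*ez - ew*ey),     2*(ey*ez + ew*ex),     1 - 2*(ex*ex + ey*ey)]])
    T = r_av - R @ c_av                               # translation in second step
    return R, T
```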

2.2.1. Finding quality points for a scan

This section discusses the techniques used to find the quality values for various scan types. The quality measure for a point in a model should give a relative or absolute measure of the position accuracy with which the point is represented. If the quality value is to be defined in the range [0, 1], then an absolute measure is more suitable, since different scan types can then be treated together and points can always be managed regardless of the technique used to acquire them.

2.2.2. Definition of the non-direction-specific quality value q

We hereby define the quality measure to be a function of the accuracy of a point in 3D Euclidean space as follows:

The position accuracy is written as r_measured = r_true + Δr. Here, Δr stands for the unknown error around the true point, and |Δr| is the radius of the directionless uncertainty sphere, measured in meters.

The quality, q(r), is assigned a value of q = e^(-|Δr| / S), where S is a scale function.

This means that if the uncertainty of a point measurement is 5 centimetres (with the scale S = 1 m), the q-value of this point becomes q = e^(-0.05) ≈ 0.9512.

2.2.3. Calculating quality values

Different scan types acquire their points in 3D in different manners due to the physical nature of the acquisition method. Therefore, different techniques need to be applied to different scan acquisition methods. Below, the techniques to calculate and assign quality measures for the various scan types are discussed.

The scan types discussed below are scans coming from:

(a) a tripod based laser range finder operating with time-of-flight measurement

In this case, all measurements come from the same position. If we assign a couple of values to the laser scanner measurement itself and also assign parameters for the movement of the rotating mirrors, devices and so on, we get the function:

Δr_tripod = [d_err^2 + (φ_err * d)^2 + (θ_err * d)^2]^0.5 + e_reg

wherein:
d_err is the average error in a single distance measurement (taken from the scanner datasheet);
φ_err is the uncertainty in the pan angle in radians (taken from the scanner datasheet);
θ_err is the uncertainty in the tilt angle in radians (taken from the scanner datasheet); and
e_reg is a vector describing the resulting registration error for a registered scan,

and

q_tripod = e^(-|Δr_tripod| / S)
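An illustrative computation of q_tripod; the datasheet numbers below are placeholders, not values from the patent, and the scale S = 1 m matches the 5 cm example given earlier.

```python
import numpy as np

def q_tripod(d, d_err=0.010, phi_err=0.0002, theta_err=0.0002,
             e_reg=0.005, S=1.0):
    """Quality of a tripod LRS point at measured range d (meters, radians)."""
    dr = np.sqrt(d_err**2 + (phi_err * d)**2 + (theta_err * d)**2) + e_reg
    return np.exp(-dr / S)

print(q_tripod(10.0))   # quality decreases with range as angular errors grow
```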

(b) a sheet of light triangulated laser range finder mounted on a translation stage

In this case, the scan comes from a triangulation sensor mounted on a translation stage; the errors are described as follows:

Δr_sol = [d_err^2 + (φ_err * d)^2 + T_err^2]^0.5 + e_reg

wherein:
d_err is the average error in a single point distance measurement (taken from the scanner datasheet);
φ_err is the uncertainty in the pan angle in radians (taken from the scanner datasheet);
T_err is the uncertainty in the translation stage position accuracy; and
e_reg is a vector describing the resulting registration error for a registered scan,

and

q_sol = e^(-|Δr_sol| / S)

(c) a laser range scanner mounted on a car with attached equipment for movement and trajectory reconstruction

A model constructed from a car mounted scanner is more difficult to characterize. A number of errors are introduced, and they depend largely on the technique used to reconstruct the trajectory from which the model is built. If the model was acquired using an inertial system as the source for trajectory and temporal alignment, a certain set of errors is defined; if the scan was acquired with a GPS position sensor only, other errors apply. Below is a description of the errors when the trajectory position inaccuracy d_err, the pan error φ_err and the roll error γ_err (in radians) are already known:

Δr_car = [d_err^2 + (φ_err * d)^2 + (γ_err * d)^2]^0.5 + e_reg

wherein:
d_err is the average error in a single distance measurement (taken from the scanner datasheet);
φ_err is the uncertainty in the pan angle in radians for the trajectory;
γ_err is the uncertainty in the roll angle in radians; and
e_reg is a vector describing the resulting registration error for a registered scan,

and

q_car = e^(-|Δr_car| / S)

(d) either satellite or airborne images which are treated with stereoscopic techniques

Based on the spatial resolution of the input images, the output is subject to errors in all dimensions. If known, the value for each point is used to calculate the quality measure. If not, the approach to finding a quality value of these types of scans is to use the grid spacing as input for the inaccuracy as follows:

Δr_image = [Δx^2 + Δy^2 + Δh^2]^0.5 + e_reg

wherein:
Δx is the grid spacing in meters in the x-direction;
Δy is the grid spacing in meters in the y-direction;
Δh is the uncertainty in the h-direction, and

q_image = e^(-|Δr_image| / S)

(e) satellite based radar observations

Based on the spatial resolution of the input images, the output is subject to errors in all dimensions. If known, the value for each point is used to calculate the quality measure. If not, the approach to finding a quality value of these types of scans is to use the grid spacing as input for the inaccuracy as follows:

Δr_radar = [Δx^2 + Δy^2 + Δh^2]^0.5 + e_reg

wherein:
Δx is the grid spacing in meters in the x-direction;
Δy is the grid spacing in meters in the y-direction;
Δh is the uncertainty in the h-direction, and

q_radar = e^(-|Δr_radar| / S)

(f) airborne-mounted laser range finders

The accuracy of a scan coming from an airborne-mounted laser range finder depends on a number of parameters, such as the technique used, flight speed, trajectory information and so on. It is therefore not a simple task to assign a quality value to the vertex points. However, an approximation can be made as in the case of the satellite-based radar observations, i.e.:

Δr_airlaser = [Δx^2 + Δy^2 + Δh^2]^0.5 + e_reg

wherein:
Δx is the grid spacing in meters in the x-direction;
Δy is the grid spacing in meters in the y-direction;
Δh is the uncertainty in the h-direction, and

q_airlaser = e^(-|Δr_airlaser| / S)

3. External data import

Various types of data can be imported into the system in order to improve or construct a reference model. The types of external sources that can be imported into the system are listed below. This list, however, is non-exhaustive:

- Externally acquired Models;

- Externally acquired tripod based 3D laser range scans;

- Images from CCD/CMOS cameras or similar digital cameras;

- Outputs from Total stations;
- Vehicle borne laser scanners (laser, GPS, inertial system);

- Airplane/helicopter/other airborne means acquired height maps of areas (using a suitable technique to generate height information, e.g. stereoscopic imaging, LIDAR, ...);

- Airplane/helicopter/other airborne means acquired images;
- Satellite acquired height maps of areas;

- Satellite acquired images of areas;

- Output from CAD systems;

- Data from optical devices for measuring 3D, including:
- structured light techniques;
- laser-sheet triangulation techniques.

A fusion of any combination of the data from these information sources can be used to construct a model.

3.1. Techniques for rendering preparations

This section describes the actions that should be taken in order to efficiently render a large model. A model acquired with the system easily extends beyond the 1 GByte level of raw data, having far more points than a normal visualization system can cope with. The technique specified by the system is to keep the data in hierarchical spatial structures, e.g. octrees. This enables the rendering part to consider more intelligently which parts of the model will be visible in the viewport, and thereby further investigated, and which parts of the model will not be visible and can be neglected. The two subsections below discuss the preparation steps before rendering which simplify visualization. These subsections are:

- Out-of-core octree subdivision: dividing the model of any size to an octree.

- Octant-Division and opacity calculations: calculating the denseness of an octant.

3.1.1. Out-of-core octree Subdivision

The amount of data to be processed, from point-cloud vertexes, structured grid vertexes or triangles, can by no means be kept in memory. The present system works with a combination of memory-mapped data segments on disk and in-core local segments for fast processing.

The process is recursive and treated in depth-first order, where all vertexes or triangles, hereafter generally named primitives, to be treated are added to the root node, which has a bounding box enclosing the entire desired model. Recursively, the model is subdivided when the number of primitives is larger than a specified value (MAX_OBJECTS_IN_LEAF). Since the model can be very big, the usage of RAM needs to be limited. For each leaf node that has been finalized, the data is directly flushed to a final data file. At the same time, the corresponding leaf-node structure keeps track of the number of primitives for the leaf, the start index in the data file and other useful information. In an arbitrary recursive step, all nodes having a number of primitives above the MAX_OBJECTS_IN_LEAF threshold, and thus needing further recursion, are stored temporarily on disk for later memory-mapped usage. An exception for speed optimization has been added whereby leaf data can remain in RAM over recursive levels in order to eliminate unnecessary temporary file creations and deletions. As the recursion continues, temporary files are memory mapped and the primitives are further divided down to child nodes. A primitive is added to a child node if its bounding box overlaps the child node's bounding box. For vertexes the number of potential child nodes is always equal to 1, but several child nodes can contain a single triangle due to the triangle's extension in space.

When the entire recursion has been performed, all primitives have their information stored in the final data structure, where each leaf node knows its corresponding data position and amount.
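A minimal in-core sketch of this recursive subdivision; the out-of-core machinery (memory-mapped temporary files) is reduced to flushing finalized leaves to a single data file, primitives are simplified to point vertexes, and points on the global upper boundary are ignored for brevity.

```python
import numpy as np

MAX_OBJECTS_IN_LEAF = 5000

class Node:
    """Octree node over a half-open axis-aligned bounding box [lo, hi)."""
    def __init__(self, lo, hi):
        self.lo, self.hi = np.asarray(lo, float), np.asarray(hi, float)
        self.children = None                 # None for leaves
        self.count, self.offset = 0, None    # payload location in the data file

def build(node, points, data_file, depth=0, max_depth=21):
    """Depth-first subdivision; finalized leaves are flushed to data_file."""
    if len(points) <= MAX_OBJECTS_IN_LEAF or depth == max_depth:
        node.count, node.offset = len(points), data_file.tell()
        points.astype(np.float32).tofile(data_file)      # flush leaf payload
        return
    mid = 0.5 * (node.lo + node.hi)
    node.children = []
    for octant in range(8):                              # the 8 child boxes
        upper = np.array([(octant >> axis) & 1 for axis in range(3)], bool)
        lo = np.where(upper, mid, node.lo)
        hi = np.where(upper, node.hi, mid)
        inside = np.all((points >= lo) & (points < hi), axis=1)
        child = Node(lo, hi)
        node.children.append(child)
        build(child, points[inside], data_file, depth + 1, max_depth)
```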

3.1.2. Octant opacity calculations

The opacity of an octant is pre-computed for a number of view angles (e.g. 26 view angles) and stored in a table inside the octant's node for later use during rendering. The system uses the occlusion query extension of OpenGL to calculate a measure of the opacity by projecting the primitives of a leaf node on a virtual screen; by means of the occlusion extension, the system can read back the coverage in relation to the virtual screen size.

4. Rendering

Generally, the model does not fit into RAM, or is bigger than the accepted RAM allocation. Techniques for managing models bigger than the available RAM have been designed and reported.

The implementation of a system based on front-to-back tree traversal with sparse viewport ray-tracing and speculative pre-fetching has been chosen. The technique of using a priority queue for traversal, where the priority measure is calculated based on the model density distribution and the current viewport (cPLP and PLP), has been defined by J. T. Klosowski and C. T. Silva, "Efficient conservative visibility culling using the prioritized-layered projection algorithm", in IEEE Transactions on Visualization and Computer Graphics, 7(4): pp. 365-379, 2001, and extended by W. Correa, "New Techniques for Out-Of-Core Visualization of Large Datasets", PhD Thesis, Princeton University, USA, 2004. With this technique, it is possible to predict the probability of a leaf node being visible or not. Since the models are primarily based on vertexes, the density of primitives constructing a model is fairly large. This means that an efficient and intelligent way of rendering in front-to-back order is well suited. With the cPLP/PLP technique, it is possible to predict which leaf nodes are likely to be rendered in the near future. The general idea of the rendering system is to maintain a multi-threaded environment where a look-ahead part, the pre-fetcher(s), continuously traverses the tree structure and, knowing only the current and an extrapolated viewport, CameraView (T > Tcurr), estimates which leaf nodes will soon need to be available in RAM. A rendering thread maintains the current viewport and, by some means (preferably the OpenGL occlusion query extension), estimates which leaf nodes are to be sent for rendering, doing so in a prioritized order.
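A sketch of such a prioritized front-to-back traversal with a primitive budget; the priority function here (distance of the octant center to the camera) is a simplification of the PLP solidity estimate, and the Node class is the one from the subdivision sketch above.

```python
import heapq
import numpy as np

def plp_traverse(root, camera_pos, budget):
    """Yield leaf nodes in estimated front-to-back priority order."""
    queue = [(0.0, 0, root)]          # (priority, tiebreaker, node)
    tiebreak, used = 1, 0
    while queue and used < budget:
        _, _, node = heapq.heappop(queue)
        if node.children is None:     # leaf: hand over for fetching/rendering
            used += node.count
            yield node
            continue
        for child in node.children:
            center = 0.5 * (child.lo + child.hi)
            prio = np.linalg.norm(center - camera_pos)   # nearer = drawn sooner
            heapq.heappush(queue, (prio, tiebreak, child))
            tiebreak += 1
```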

The models can be constructed from real-world data containing building facades with natural variations and construction details, and natural objects such as cars, trees and even people. Such a model has a large extent of natural variation and will not easily be simplified to a mesh without losing some of the fine details. It is an aim of the present system to provide an unbroken chain of accuracy from the acquisition to the model. For this reason, the model is based on raw vertexes, either in a subsampled version due to level-of-detail simplification or at its highest degree of resolution: the amount of primitives is large.

Based on the amount of data needed to provide a high-quality representation for each screen frame, use is made of the conservative PLP traversal technique for the rendering part, implemented in Windows together with a single pre-fetcher thread using the overlapped read operations provided by the operating system.

A generic traversal algorithm, used for both the pre-fetcher and the renderer, has been implemented. For the pre-fetching, however, one cannot make use of the hardware-accelerated occlusion query. Instead, the pre-fetcher works with an estimate of the presence of the octant in the viewport by means of the view frustum. Our implementation of the PLP traversal algorithm uses sparse ray-tracing with 225 rays equally spread over the viewport.

4.1. Pre-fetching

The pre-fetcher is based on the generic cPLP/PLP traverser and executes in a 10 Hz loop. For each node the PLP algorithm finds, the node is checked for non-loaded data. If the data is not loaded and not already scheduled for reading, the tree node is attached to a list of read tasks to be issued, the FetchList.

When the traversal is finalized, i.e. when the number of touched primitives in the ongoing traversal exceeds the maximal allowed amount, the FetchList contains leaf nodes with primitives that need to be loaded. The order of the FetchList is automatically prioritized with the most important leaf first (based on the front-to-back traversal).

Read operations can now be issued for all the entries in the FetchList. This is performed with overlapped read operations, in which an operation, if not finalized immediately, is detached from the currently executing thread and can be queried asynchronously for completion.

For all read operations which do not finalize immediately, the outstanding read request is stored in a DelayedList. Regularly, after each tree traversal, the pre-fetch thread queries in a loop for finalized read operations. For each finalized operation, the corresponding leaf node is updated with a bit status signaling that its data is loaded.

When the pre-fetcher executes in PLP mode, the first loops of the pre-fetcher in a model traversal will miss a substantial amount of data. However, as time goes by, the pre-fetcher catches up, and the pre-fetch budget will in most cases be enough to work in advance of the renderer.
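A sketch of the pre-fetcher loop, reusing the traversal and leaf layout from the sketches above; Python threads stand in for the overlapped Windows read operations, and bookkeeping is deliberately simplified (no locking, no DelayedList polling).

```python
import threading
import time
import numpy as np

def prefetch_loop(root, get_camera_pos, data_path, budget=200_000,
                  stop=threading.Event()):
    loaded, scheduled = {}, set()                  # leaf -> in-RAM payload

    def read_leaf(leaf):                           # one asynchronous read task
        with open(data_path, "rb") as f:
            f.seek(leaf.offset)
            loaded[leaf] = np.fromfile(f, np.float32,
                                       leaf.count * 3).reshape(-1, 3)

    while not stop.is_set():                       # roughly 10 Hz loop
        fetch_list = [leaf
                      for leaf in plp_traverse(root, get_camera_pos(), budget)
                      if leaf not in loaded and leaf not in scheduled]
        for leaf in fetch_list:                    # most important leaf first
            scheduled.add(leaf)
            threading.Thread(target=read_leaf, args=(leaf,), daemon=True).start()
        time.sleep(0.1)
    return loaded
```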

4.2. Rendering engine

The rendering engine uses the same generic PLP traversal implementation in conservative mode, and for each leaf that needs to be drawn, it draws the data with OpenGL vector draw operations. Currently, for point clouds, colors based on three unsigned chars and vertex positions based on three float values are used. There is one important difference between the PLP traversal of the pre-fetcher and that of the renderer: the renderer can make use of the hardware-accelerated occlusion query. With this query, the renderer knows the number of pixels on the screen that would be modified by a theoretical draw of the bounding box, i.e. the number of pixels which would shine through the previously drawn parts of the model. This information is used to discard the drawing of a leaf node and to sub-sample the vertex data based on its visibility effect on the screen. Consequently, an octant far away is rendered with less data than if it were in the closest foreground, as a level-of-detail treatment.

5. Reconstruction: Distance field creation

After preprocessing and creating trivial range surfaces (or surface estimations of unstructured point clouds by local plane fitting), the data can be robustly integrated and triangulated with volumetric techniques like distance fields.

Volumetric techniques produce good results in terms of robust reconstruction of the topology of the underlying geometry, but there are two problems to address: the computational resources can be exhausted rapidly, and texture mapping is not straightforward. A trivial implementation allows managing only very small models. Some tricks must be used to keep the resources under control in order to run the software on the most common machines.

The distance field definition by which this work is inspired is similar to that of Tao Ju, Frank Losasso, Scott Schaefer, Joe Warren: "Dual contouring of hermite data", SIGGRAPH 2002: 339-346. Various modifications are described later, though. Dual Contouring generates multi-resolution triangulations with triangles of better aspect ratio than Marching Cubes.

The generation process is subdivided into different phases that can be iterated. At each step, the data is saved. This allows preserving the expensively computed data and trying different parameter configurations. It is also possible to integrate new data incrementally without re-computing everything from scratch.

The generation process is schematically shown in Fig. 7. Boxes 50, 51, 52, 54 indicate data that can be integrated in the distance field, and boxes 56, 58, 60, 62, 64, 66 indicate the stages of the distance field generation. The final stage is shown in box 68 because it is both part of the distance field generation and data that can be integrated in another distance field.

The generation process is outlined here; texture mapping is discussed later:

(1) Volume size definition: defined by the user or just the union of the bounding boxes of all the scans to integrate

(2) Octree scan conversion: the volume is subdivided into cells in a hierarchical rather than uniform way. For each smallest element of input data (leaves of input trees), find the smallest enclosing distance field leaf, then fill it down to the smallest desired cell size (minimum feature size parameter)

(3) Enumerate minimal edges: use Dual Contouring to find edges shared by the cells

(4) Grid edge intersection with:
  o the triangle octree of a range surface, or an external triangulation;
  o the local planes of an unstructured point cloud (accelerated by a point cloud octree);
  o the mesh of another distance field (which is already structured in an octree).
For all of these, the information stored per intersection is:
  o the intersection position along the edge;
  o the normal of the passing surface;
  o the interpolated color of the surface;
  o an intersection id to uniquely identify it;
  o a surface id to know from which surface the intersection originates;
  o an interpolated weight parameter to perform weighted averaging of data during integration;
  o a direction value that tells whether the edge stabs the surface from inside to outside or the opposite

(5) Check whether it is necessary to refine the octree to a higher resolution in order to capture more detail

(6) Intersections are used to compute feature vertexes inside the cells: find connected components to interpret topology inside the cell

(7) Feature vertexes are computed

(8) Save for optimized out-of-core access

(9) Meshing: the topology is reconstructed by connecting neighbour feature vertexes with multi-resolution Dual Contouring.

Several scans can be integrated in the same distance field in order to complete the model, close holes or include higher resolution/accuracy in some regions. This can be done in the integration process, which is similar to the previous process, except that multiple intersections are merged coherently. The final mesh can be reduced to fewer triangles by the simplification process, which collapses the distance field tree where the reconstruction accuracy exceeds a user-defined threshold.
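A sketch of the per-intersection record listed in step (4), together with the weighted merge used when several scans contribute intersections to the same edge; field names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class EdgeIntersection:
    t: float             # position along the edge, in [0, 1]
    normal: tuple        # normal of the passing surface
    color: tuple         # interpolated surface color
    intersection_id: int
    surface_id: int      # which scan/surface produced it
    weight: float        # for weighted averaging during integration
    direction: int       # +1 inside-to-outside, -1 outside-to-inside

def merge(a: EdgeIntersection, b: EdgeIntersection) -> EdgeIntersection:
    """Weighted average of two concordant intersections on the same edge."""
    assert a.direction == b.direction, "only concordant intersections merge"
    w = a.weight + b.weight
    mix = lambda x, y: tuple((a.weight * xi + b.weight * yi) / w
                             for xi, yi in zip(x, y))
    return EdgeIntersection((a.weight * a.t + b.weight * b.t) / w,
                            mix(a.normal, b.normal), mix(a.color, b.color),
                            a.intersection_id, a.surface_id, w, a.direction)
```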

The system uses parallelization techniques on multiprocessor machines because the octants of the octree are independent once the data has been split.

5.1. Auto-refining

The structure of each triangle octree is embedded in the octree of the distance field volume according to the desired grid resolution. This process is called "octree voxelization". The global volume is defined by the user or automatically as the union of the bounding boxes of the scans with their registration transform.

Each scan is voxelized inside this volume creating the cells of the distance field octree with a per-scan specified minimum cell size. This is decided by the triangle octree cell size or forced by the user. This procedure is shown in Fig.8.

For each triangle octree leaf T, the smallest distance field octree cell D that encloses T is created (or found, if it already exists). Then D is uniformly voxelized down to the desired resolution (filled with cells equal to the minimum size). In theory each triangle octree can be voxelized with different resolutions yielding a multi-resolution octree.

The distance field can be computed on a uniform grid, but if one wants to capture fine detail, the memory required to store the grid increases rapidly (cubic complexity). We use a multi-resolution structure like the octree to encode with higher resolution only the regions that really need it. Rather than capturing coarse detail, it is better to capture the maximum detail possible from the beginning and then simplify the final mesh. In this way, the reconstruction algorithm is able to interpret the correct topology. Having fixed a minimum grid resolution and a minimum feature size, the scan conversion subdivides the octree where the topology is most likely too complex to be reconstructed at the current resolution, by looking for example at the number of intersections along the cell edges: if there are more than 2, then subdivide. Other topology tests may be used, like the complex cell test and the star-shaped test as in Gokul Varadhan, Shankar Krishnan, T.V.N. Sriram, Dinesh Manocha, "Topology Preserving Surface Extraction Using Adaptive Subdivision", Second Eurographics Symposium on Geometry Processing, Nice, France, 2004.

5.2. Enumerate minimal edges and compute intersections

The minimal edge is the shortest common segment shared by the cells. If it is intersected by the original surface, then the feature vertexes of the four sharing cells will be connected with a quad during meshing with DC. We enumerate minimal edges with a DC recursive traversal [T. Ju, F. Losasso, S. Schaefer, J. Warren: "Dual Contouring of Hermite Data", SIGGRAPH 2002: 339-346]. When a minimal edge is reached we know the four sharing cells. If the edge has not already been created, it is inserted in a list along with its endpoints. Its pointer is then copied to the sharing cells.

In this way, intersections of range surfaces with each edge are computed only once. We compute the intersections of each edge with the triangle octrees. The edge is quickly rejected or intersected by taking advantage of the octree spatial subdivision: when an octree leaf that intersects the edge is found, all the triangles it contains are intersected with the edge, and the triangle attributes are then interpolated. All intersections are kept in a list and sorted along the edge. In multi-resolution grids the 3-cell case is recognized and the common cell is ignored. The longest edge collinear to the minimal edge is determined among the sharing cells and is the one effectively stored, so disk space is reduced because the edges are less fragmented. Fewer edges also mean fewer triangle octree traversals when computing the intersections. Even though the longest edge is more likely to intersect data, the hierarchical structure of the octree culls early those edges that will not find anything.
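
The per-edge intersection test itself can be sketched with the classical Möller–Trumbore algorithm restricted to a segment. This is an illustrative stand-in (numpy), not necessarily the exact test of the disclosed implementation; the octree leaf culling described above would simply wrap calls to it.

```python
import numpy as np

def segment_triangle_intersection(p0, p1, v0, v1, v2, eps=1e-9):
    """Möller-Trumbore test between segment p0->p1 and triangle (v0, v1, v2).
    Returns the parametric position t along the segment (0..1) and the
    barycentric coordinates (u, v), or None if there is no hit.  The
    barycentric coordinates can be reused to interpolate per-vertex
    attributes (normal, color, confidence) at the hit point."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    v0, v1, v2 = (np.asarray(v, float) for v in (v0, v1, v2))
    d = p1 - p0
    e1, e2 = v1 - v0, v2 - v0
    h = np.cross(d, e2)
    a = np.dot(e1, h)
    if abs(a) < eps:                     # segment parallel to triangle plane
        return None
    f = 1.0 / a
    s = p0 - v0
    u = f * np.dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = f * np.dot(d, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * np.dot(e2, q)
    if t < 0.0 or t > 1.0:               # hit lies outside the segment
        return None
    return t, (u, v)
```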

Some leaf cells allocated during the voxelization may be useless because they do not intersect any data. Optionally, we remove them with their edges to save some space. The information stored per intersection is for example:

- intersection position along the edge

- normal of the passing surface

- an intersection id to uniquely identify it

- interpolated weight parameter to perform a weighted average of data during integration

- a direction value that tells whether the edge stabs the surface from inside to outside or the opposite; it is the projection of the surface normal along the edge.

5.3. Noise removal and weighted average along edges

A preprocess is recommended to find collinear edges. Each axis is examined in turn: all edges parallel to the axis are found, and virtual edges are created by concatenating all collinear edges with their intersections. This is followed by a weighted average of consecutive intersections of each virtual edge. The average is done only if they are concordant and the distance between them is lower than a maximum (the maximum merge distance). This process also merges redundant intersections from the same scan. The resulting average is equivalent to a surface deformation that could create new intersections with the two complementary axes, as pointed out in C. Rocchini, P. Cignoni, F. Ganovelli, C. Montani, P. Pingi, R. Scopigno: "Marching Intersections: An Efficient Resampling Algorithm for Surface Management", Shape Modeling International 2001: 296-305. High-frequency geometry (noise) is removed if consecutive intersections are discordant (opposite direction) and their distance is lower than a minimum (the minimum separable distance); this removes spikes that intersect the grid, for all scans. Finally, the merged intersections are redistributed to the original edges.
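
A minimal sketch of the two cleaning rules on one virtual edge follows. The `(t, sign, weight)` tuple layout is an assumption made for the example, not the disclosed data layout.

```python
def remove_spikes(hits, min_separable):
    """Drop pairs of consecutive, opposite-direction intersections that
    are closer than `min_separable` (high-frequency noise)."""
    hits = sorted(hits)                 # sort by position along the edge
    out, i = [], 0
    while i < len(hits):
        if (i + 1 < len(hits)
                and hits[i][1] != hits[i + 1][1]
                and hits[i + 1][0] - hits[i][0] < min_separable):
            i += 2                      # discard the discordant pair
        else:
            out.append(hits[i])
            i += 1
    return out

def merge_concordant(hits, max_merge):
    """Weighted-average consecutive, same-direction intersections that are
    closer than `max_merge` (redundant samples of the same surface)."""
    out = []
    for t, sign, w in sorted(hits):
        if out and out[-1][1] == sign and t - out[-1][0] <= max_merge:
            t0, _, w0 = out[-1]
            out[-1] = ((w0 * t0 + w * t) / (w0 + w), sign, w0 + w)
        else:
            out.append((t, sign, w))
    return out

# Example: two concordant hits to merge, plus a discordant noise pair.
hits = [(0.30, +1, 1.0), (0.31, +1, 2.0), (0.60, +1, 1.0), (0.602, -1, 1.0)]
cleaned = merge_concordant(remove_spikes(hits, min_separable=0.01),
                           max_merge=0.02)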

5.4. Finding connected components

In order to correctly reconstruct the topology inside a cell, the connectivity of the intersections along the edges has to be found. The technique adopted is similar to Nan Zhang, "Multiresolution Isosurface Modeling and Rendering", PhD dissertation, Stony Brook University. The idea is to treat the cell corners and the intersections with the edges as a closed graph. We start from a cell corner and, with a depth-first walk, group the first intersections found. These form a component; they are then peeled from the cell and the walk continues, eventually changing the starting point. The starting corner is important: the corner with the best score is chosen, the score being computed taking into account the normal deviation and the number of intersections encountered first. Each component has a representative vertex (feature vertex) that also encodes the connectivity with the neighbor cells by storing the intersection id used on each edge to compute the feature vertex. This connectivity is stored in a CET (Connectivity Encoding Table). The Dual Contouring algorithm will reconstruct the mesh by connecting vertexes that share the same intersection id along the same edge, looking up the CET.

5.5. Compute feature vertexes

The Hermite data in a cell (intersections p plus normals n) are put in a system to solve for the point x that has the minimum error E

E[x] = \sum_i \big( n_i \cdot (x - p_i) \big)^2

This is done through SVD, with a QR decomposition used as an optimization to improve numerical stability. A minSingularValue parameter controls the sharpness of the feature vertexes during the SVD. If the generated vertex falls outside the cell, it is projected onto the nearest side of the cell.
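
A minimal numpy sketch of this least-squares solve follows. The truncation rule (singular values below a fraction `min_singular` of the largest one are zeroed) and the box clamp used in place of a projection onto the nearest cell side are simplifications for illustration, not the disclosed implementation.

```python
import numpy as np

def feature_vertex(points, normals, cell_min, cell_max, min_singular=0.1):
    """Minimise E[x] = sum_i (n_i . (x - p_i))^2 with a truncated-SVD
    pseudo-inverse; returns the vertex and its feature dimension
    (1 = flat region, 2 = edge, 3 = corner)."""
    A = np.asarray(normals, float)              # one row n_i per sample
    p = np.asarray(points, float)
    b = np.einsum("ij,ij->i", A, p)             # b_i = n_i . p_i
    c = p.mean(axis=0)                          # solve relative to centroid
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > min_singular * s[0]              # drop unstable directions
    s_inv = np.where(keep, 1.0 / np.where(keep, s, 1.0), 0.0)
    x = c + Vt.T @ (s_inv * (U.T @ (b - A @ c)))
    x = np.minimum(np.maximum(x, cell_min), cell_max)  # keep inside cell
    return x, int(keep.sum())

# Example: samples from the planes x = 0 and z = 0 meeting in an edge.
pts = [(0, 0.3, 0.1), (0, 0.7, 0.4), (0.2, 0.5, 0), (0.8, 0.1, 0)]
nrm = [(1, 0, 0), (1, 0, 0), (0, 0, 1), (0, 0, 1)]
x, dim = feature_vertex(pts, nrm, (0, 0, 0), (1, 1, 1))  # dim == 2 (edge)
```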

5.6. Storage

To prepare for the treatment of large datasets, the present invention proposes an algorithm that is a hybrid of multi-resolution isosurface extraction and marching intersections.

The octree is stored in depth first order to improve search performance. The data is organized in different files:

- octree_struct: the octree structure is encoded in a different file than the leaf data in order to decouple the file windowing during out-of-core access. This should improve cache coherency because the two windows do not jump around too much.

- octree_leaves: data for each leaf cell, i.e. housekeeping information and pointers to edges and feature vertexes.

- edges: list of unique edges. Each edge is shared by four cells, so it is not necessary to store the edge four times; each cell has pointers to its edges, which are stored once.

- vertexes: list of representative feature vertexes for each connected component of a cell.

The file separation is beneficial for out-of-core access in the case of very large models. The independent file windows allow better cache coherency because the windows do not jump back and forth too much to access the data, since there is always spatial locality during the processing operations.
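
By way of illustration, the two-file, depth-first layout can be sketched as follows; the one-byte child mask and the length-prefixed leaf payloads are illustrative choices, not the disclosed file format.

```python
import struct
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    children: List[Optional["Node"]] = field(
        default_factory=lambda: [None] * 8)
    payload: bytes = b""                # leaf data (edges, vertexes, ...)

def save_octree(root: Node, struct_path: str, leaves_path: str) -> None:
    """Depth-first serialization into two independent files so that each
    can be memory-mapped with its own window during out-of-core access."""
    with open(struct_path, "wb") as fs, open(leaves_path, "wb") as fl:
        def visit(node: Node) -> None:
            mask = sum(1 << i for i, ch in enumerate(node.children) if ch)
            fs.write(struct.pack("B", mask))    # one child-mask byte per node
            if mask == 0:                       # leaf: dump its payload
                fl.write(struct.pack("I", len(node.payload)))
                fl.write(node.payload)
            else:
                for ch in node.children:
                    if ch is not None:
                        visit(ch)
        visit(root)

root = Node()
root.children[0] = Node(payload=b"leaf-A")
root.children[5] = Node(payload=b"leaf-B")
save_octree(root, "octree_struct.bin", "octree_leaves.bin")
```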

5.7. Mesh extraction by Dual Contouring and export

The mesh is extracted with Dual Contouring, which is a well-known multi-resolution isosurface mesh extraction algorithm. The mesh can be directly exported in any triangulation format like X3D, 3DC or VG3.

5.8. Automatic detection of features and normal computation

When extracting the mesh, each triangle stores a normal that is the cross product of its edges, but in our framework normals are always associated with vertexes, and the normal used depends on the type of feature vertex.

Each vertex is computed by solving a system that may be ill-conditioned, so we compute the pseudo-inverse by SVD. The Z diagonal elements that must be zeroed during the pseudo-inverse computation are decided by a threshold parameter. The formula 3 − Z gives the feature dimension of the vertex.

If the feature dimension (computed during the SVD) is higher than 1 (i.e. 2 for edges or 3 for corners), the normal used for the vertex during mesh extraction is the normal of the triangle that shares it; otherwise it is the average normal of the intersections that generated it. This makes it possible to distinguish sharp corners from smooth or flat surfaces.

5.9. Simplification

The eight leaves of a cell are examined to check whether it is possible to collapse them into a single cell without losing too much detail (controlled by an error threshold) and without topology ambiguity, since otherwise the connectivity with the neighbor cells cannot be reconstructed. In practice the QEFs (quadric error functions) of the feature vertexes of the leaves are merged; if the merged feature vertex has a residual error lower than the threshold and the topology is unambiguous, the leaves are collapsed.
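
Because the quadric error E[x] = xᵀAᵀAx − 2xᵀAᵀb + bᵀb is additive in the triple (AᵀA, Aᵀb, bᵀb), child QEFs can be merged by summation and the residual of the merged minimiser tested against the threshold. A minimal numpy sketch follows (the topology-ambiguity test is omitted for brevity):

```python
import numpy as np

class QEF:
    """Quadric error E[x] = x^T A^T A x - 2 x^T A^T b + b^T b, stored in
    the additive form that lets several QEFs be merged by summation."""
    def __init__(self, normals=None, points=None):
        self.ata, self.atb, self.btb = np.zeros((3, 3)), np.zeros(3), 0.0
        if normals is not None:
            A = np.asarray(normals, float)
            b = np.einsum("ij,ij->i", A, np.asarray(points, float))
            self.ata, self.atb, self.btb = A.T @ A, A.T @ b, float(b @ b)

    def merge(self, other):
        out = QEF()
        out.ata = self.ata + other.ata
        out.atb = self.atb + other.atb
        out.btb = self.btb + other.btb
        return out

    def minimise(self):
        """Return (x, residual) using a pseudo-inverse solve."""
        x = np.linalg.pinv(self.ata) @ self.atb
        residual = float(x @ self.ata @ x - 2.0 * x @ self.atb + self.btb)
        return x, residual

def try_collapse(children_qefs, error_threshold):
    """Collapse eight leaves into one cell if the merged feature vertex
    keeps the residual error below the threshold."""
    merged = children_qefs[0]
    for q in children_qefs[1:]:
        merged = merged.merge(q)
    x, residual = merged.minimise()
    return (merged, x) if residual <= error_threshold else None
```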

5.10. Weighted integration

Data is triangulated first; each vertex has a confidence based on the normal-to-sensor angle, reflectance, depth and density. When computing the intersections of these triangulations on the field grid, the confidence is interpolated inside the triangles. This value is used for a weighted average of the intersections to be merged along a grid line, with a process similar to Claudio Rocchini, Paolo Cignoni, Fabio Ganovelli, Claudio Montani, P. Pingi, Roberto Scopigno: "Marching Intersections: An Efficient Resampling Algorithm for Surface Management", Shape Modeling International 2001: 296-305. Having defined a volume size (which could be that of a previous distance field), the triangulated scans are scan-converted into this grid, which can be uniform or octree-encoded. The final step is to examine the edges of each grid cell and check whether it is possible to merge the intersections, according to parameters such as maxIntegrationDistance and whether the intersections have concordant direction. From this distance field the integrated mesh can be extracted by finding the connected components, computing feature vertexes and connecting them with Dual Contouring.

5.11. Distribute octants to multiple processors for parallelization.

The octree creation can be parallelized. Once the data has been subdivided into octants, each octant can be further subdivided independently of the others, so it can be assigned to a different processor.
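
A minimal sketch of this octant-level parallelism with Python's multiprocessing module; `build_octant` is a hypothetical placeholder for the actual per-octant scan conversion.

```python
from multiprocessing import Pool

def build_octant(octant_data):
    """Process one octant; octants share no state once the input data has
    been split, so each one can safely run in a separate process."""
    # ... per-octant distance-field construction would go here ...
    return len(octant_data)             # placeholder result

def build_distance_field(split_octants, workers=8):
    with Pool(processes=workers) as pool:
        return pool.map(build_octant, split_octants)

if __name__ == "__main__":
    print(build_distance_field([[1, 2], [3], [4, 5, 6]] + [[]] * 5))
```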

5.12. Texturing

The final mesh no longer has a one-to-one correspondence with the originating scans; the vertexes are the result of a merge process over many scans, so it is not possible to use any 2D map that was associated with a scan in a direct way. Two techniques can be used to paint the model:

- Unwrapping of triangles: by projecting triangles with some user-selected projection (spherical, cylindrical, cubic, camera...) on a fixed canvas we find their texture coordinates. In particular, given an undistorted image (thanks to a calibrated camera), we can re-project the photo onto the model.

- Octextures (texture octrees): volumetric textures allow painting 3D data without the parameterization problem of 2D textures.

6. Scene change analysis

Scene change analysis is divided into two phases: i) Creation of 3D reference model; and ii) Verification.

6.1. Creation of 3D reference model

The creation of the 3D reference model is divided into the following steps: pre-processing, registration, integration and triangulation. (For more details reference is made to the following publication: V. Sequeira, K. Ng, E. Wolfart, J. G. M. Gonçalves, and D. Hogg, "Automated Reconstruction of 3D Models from Real Environments", ISPRS Journal of Photogrammetry and Remote Sensing (Elsevier), vol. 54, pp. 1-22, 1999.) The main objective of pre-processing is to filter the raw data. To build a complete model from multiple viewpoints, all scans must share a common global coordinate system and then be merged together. Registration starts with a coarse alignment, either using the global positions acquired during the scans or manually by identifying corresponding points in the individual scans. An automatic optimizing algorithm then refines the alignment. Redundant points are discarded during data integration. Finally, triangulation of the range points is used to estimate the original surface by linearly interpolating neighboring points in the mesh. It further simplifies the model by fitting planes among the 3D points, thereby reconstructing the surfaces of the scanned objects.
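
The text does not name the optimizing algorithm; one classical choice for such a refinement step is the Iterative Closest Point (ICP) loop, sketched below with brute-force nearest-neighbour matching and the Kabsch solve for the optimal rigid transform. This is illustrative only and is suitable, as written, only for small point clouds.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ~ src @ R.T + t
    (Kabsch algorithm)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iterations=30):
    """Refine a coarse alignment by alternating closest-point matching
    and the optimal rigid transform."""
    src = np.asarray(src, float).copy()
    dst = np.asarray(dst, float)
    for _ in range(iterations):
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matches = dst[d2.argmin(axis=1)]  # closest dst point per src point
        R, t = best_rigid_transform(src, matches)
        src = src @ R.T + t
    return src
```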

The result of this process is a collection of unstructured triangles (triangle soup) that will be used to measure the distance of a point to the closest surface.

This simple structure without cross references is convenient for out-of-core access, considering that the reference model may be too large to be loaded into memory.

6.2. Verification

In the verification phase, an object, room or area is checked for changes relative to a reference model. Fig.9 outlines the verification process in three sections: input, inspection and report. The input is a previously created reference model, which can be externally imported data (as described hereinbefore) or derived from previously acquired scans, together with the newly acquired 3D data. During inspection, for each 3D point to verify, the distance to the closest triangle of the reference model is computed. This may imply comparisons against several models at the same time. An internal list keeps track of all changed areas.

The closest triangle search in the triangle soup has linear time complexity in the number of triangles. To speed up the process, all triangles in a reference are added to a spatial search tree (e.g. an octree). This makes it possible to pre-compute regions where triangles are localized in a hierarchical tree, with which the closest triangle search becomes of logarithmic time complexity.

A leaf of the tree actually contains references to the cluster of triangles intersected by the leaf volume, so copies of some triangles may exist when they span the boundaries of several leaf volumes. The tree is stored on disk in two files, one encoding only the structure of the tree and one containing the per-leaf clusters of triangles. This decoupling improves file mapping because the file windows are independent, thus avoiding too frequent jumps. The tree is also stored in depth-first order, which improves processing operations like searches, which need to reach the bottom of the tree as fast as possible. Two parameters control the tree creation: the maximum number of triangles per leaf (density of the leaf) and the maximum distance one wants to measure (spatial size of the leaf). The more triangles per leaf, the shorter the tree (less recursion) but the more distance computations per leaf. The lower the maximum distance, the smaller the leaves; this improves the culling of leaves lying farther than the maximum distance from the surface and makes fewer triangles per leaf probable, at the cost of a deeper tree. The best performance is a trade-off between these two parameters, and they should preferably be adapted each time a reference model is created; this holds in particular for the maximum search distance, which is also the easier of the two for the end user to understand.

Once the reference trees are created, during the inspection they are opened with file mapping and queried for each 3D point to measure. A bounding box is constructed around the point, with side length equal to two times the maximum distance to measure. All the leaf octants that intersect the bounding box are considered, and the triangles inside are used to compute the distance. The closest triangle among all of them is found, and the closest point and shortest distance are computed.
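
A sketch of the per-point distance query follows. The caller is assumed to have already gathered `candidate_triangles` from the leaf octants intersecting the bounding box, and the closest-point-on-triangle routine is the standard one from Ericson, "Real-Time Collision Detection" (section 5.1.5), not necessarily the disclosed variant.

```python
import numpy as np

def closest_point_on_triangle(p, a, b, c):
    """Closest point to p on triangle (a, b, c), by Voronoi-region tests."""
    p, a, b, c = (np.asarray(v, float) for v in (p, a, b, c))
    ab, ac, ap = b - a, c - a, p - a
    d1, d2 = ab @ ap, ac @ ap
    if d1 <= 0 and d2 <= 0:
        return a                                    # vertex region a
    bp = p - b
    d3, d4 = ab @ bp, ac @ bp
    if d3 >= 0 and d4 <= d3:
        return b                                    # vertex region b
    vc = d1 * d4 - d3 * d2
    if vc <= 0 and d1 >= 0 and d3 <= 0:
        return a + (d1 / (d1 - d3)) * ab            # edge region ab
    cp = p - c
    d5, d6 = ab @ cp, ac @ cp
    if d6 >= 0 and d5 <= d6:
        return c                                    # vertex region c
    vb = d5 * d2 - d1 * d6
    if vb <= 0 and d2 >= 0 and d6 <= 0:
        return a + (d2 / (d2 - d6)) * ac            # edge region ac
    va = d3 * d6 - d5 * d4
    if va <= 0 and (d4 - d3) >= 0 and (d5 - d6) >= 0:
        return b + ((d4 - d3) / ((d4 - d3) + (d5 - d6))) * (c - b)  # edge bc
    denom = 1.0 / (va + vb + vc)
    return a + ab * (vb * denom) + ac * (vc * denom)  # face region

def inspect_point(p, candidate_triangles, max_distance):
    """Shortest distance from p to the candidate triangles; returns
    (max_distance, None) when nothing lies within the search range."""
    best, best_q = max_distance, None
    for a, b, c in candidate_triangles:
        q = closest_point_on_triangle(p, a, b, c)
        d = float(np.linalg.norm(np.asarray(p, float) - q))
        if d < best:
            best, best_q = d, q
    return best, best_q
```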

The result is a file that stores the shortest distance found for each point, so it is not necessary to re-compute the inspection each time. These values can be encoded quickly with any kind of graduated pseudo-colors to help the user grasp the distance differences. In this viewing stage it is also possible to filter points based on their pre-processed confidence and to lower the maximum distance of the scale in order to better discriminate smaller objects. Optionally a report file is created, where for each point the closest point and its distance are listed.
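
One possible graduated pseudo-color ramp is sketched below; the particular color scale is an illustrative assumption (the implementation is free to choose any), and the optional alarm color anticipates the alarm-level display described in section 6.3.2.

```python
def distance_to_pseudo_color(d, max_distance, alarm=None):
    """Map an inspection distance to an RGB triple on a blue->green->red
    ramp over [0, max_distance], with an optional distinct alarm color
    for distances above a threshold."""
    if alarm is not None and d > alarm:
        return (255, 0, 255)                  # magenta: alarm level exceeded
    t = min(max(d / max_distance, 0.0), 1.0)  # clamp to [0, 1]
    if t < 0.5:                               # blue -> green
        return (0, int(510 * t), int(255 * (1 - 2 * t)))
    return (int(510 * (t - 0.5)), int(255 * (2 - 2 * t)), 0)  # green -> red
```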

6.3. Presentations

6.3.1. Presentation of model

The system can be used to visualize models; references and other related information (see Fig.10):

> By visualizing the reference model and the inspection model in 3D with true coloring, a user gets a good understanding of the model inspected.

> Projectors: calibrated cameras can be used to project colors in real time directly onto the model (any geometry in front of the projector is actually colored, both points and triangles) without any pre-processing, for quick preview, by means of hardware-accelerated projective shaders: the fragment shader distorts the projected coordinate of a point according to the calibration parameters loaded in the graphics card, and this corrected texture coordinate is used to fetch the right pixel in the photo. This makes it possible to check the quality of the calibration in real time just by changing the parameters in the shader. Optionally it is possible to avoid coloring the parts of the model that the photo does not cover. This is done by computing the shadows of the geometry from the camera point of view with an additional rendering step. For example, with shadow mapping the model is initially rendered from the camera point of view and the resulting depth buffer is used, by means of projective mapping, to check whether a pixel from the current viewpoint must be colored or not. If the test passes, the distortion of the texture coordinates is actually performed to read the texture.

> Texture mapping: a video stream from a vehicle scanner or camcorder can be used to colorize the point clouds and generate a texture for the triangulated model: the texture coordinates are computed by projecting the triangles on the undistorted video frame (thanks to the calibrated camera).

The visualization can be performed in 2D and in 3D where applicable.

6.3.2. Presentation of inspection results

For Change-detection, the result can be visualized as follows:

> By using pseudo coloring or color-coding for the inspection model, where the coloring is based on calculated distance from the reference model, the user can easily understand and interpret the result (see Fig.11).

> By using coloring or color-coding with an alarm level, where all distances above a threshold are colored in a distinct color, the user can quickly detect specific movements (see Fig.10).

> The result can be treated in such a way that areas or volumes whose distance changes are above a threshold are saved to an inspection log which contains a globally unique location reference (xyz location and extension).

6.4. Surveying

In any model the following operations can be performed (sometimes pre-processing is required):

> Compute distances: return the Euclidean distance between two points

> Compute angles: given three points A, B, C, the angle between AB and BC is given by

\varphi = \arccos\left( \frac{\overrightarrow{AB} \cdot \overrightarrow{BC}}{\lVert \overrightarrow{AB} \rVert \, \lVert \overrightarrow{BC} \rVert} \right)

> Compute areas: having selected a region over the surface of the triangulated model, the total area is the sum of the areas of all triangles, or parts of them.

Planes can be defined or estimated from a region of interest of a model, i.e. the plane is fitted by least squares to the selected point cloud (a least-squares plane fit and a plane cross-section are sketched after this list). These planes can be used as references to compute:

> Isolated volumes: having defined a bounding box of the point cloud (or, better, of the triangulation), the data can be integrated in the domain of the plane. The plane is subdivided into tiles of user-defined size, which form the bases of lozenges capped by the surface closest to the plane.

> Cross sections:

- Vector format: the plane intersects the triangulated model, each triangle that passes through the plane contributes a segment connecting the two intersection points of the triangle edges that intersect the plane.

- Raster format: hardware accelerated by setting near and far clipping planes close to the crossing plane and then reading back the result of the depth buffer.
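
The least-squares plane fit and the vector-format cross section mentioned above can be sketched as follows (numpy); for brevity, vertices lying exactly on the plane are ignored, and this is an illustrative sketch rather than the disclosed implementation.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a selected point cloud: returns the
    centroid and the unit normal (smallest principal direction)."""
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(pts - centroid)
    return centroid, Vt[-1]

def cross_section(triangles, origin, normal):
    """Intersect a triangulated model with the plane (origin, normal);
    every triangle that straddles the plane contributes one segment
    connecting the two crossing points on its edges."""
    segments = []
    for tri in triangles:
        tri = np.asarray(tri, float)
        d = (tri - origin) @ normal            # signed vertex distances
        pts = []
        for i in range(3):
            j = (i + 1) % 3
            if d[i] * d[j] < 0:                # edge crosses the plane
                t = d[i] / (d[i] - d[j])
                pts.append(tri[i] + t * (tri[j] - tri[i]))
        if len(pts) == 2:
            segments.append((pts[0], pts[1]))
    return segments
```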

7. Virtual scanner

Having defined a new viewpoint, it is possible to create virtual scans of the available data according to a specific projection. This converts the 3D data back to a 2D grid with depth (2½D), to which standard image processing algorithms can be applied to extract more information. Applications include:

> Orthophotos: the model (along with its texture) is projected orthogonally in the direction of the plane normal in the viewport selected by the user (extension and resolution). This is done through direct hardware rendering and reading back the result from the framebuffer, if necessary subdividing the viewport into several parts when it does not fit in the maximum viewport size allowed by the graphics hardware. The result is a calibrated image of known unit size.

> Unwrapping/unfolding of triangles: by projecting triangles (with projections like spherical, cylindrical, cubic, camera...) on a fixed canvas we find their texture coordinates. On the canvas it is possible to load an image or paint on it; this creates a texture map for the projected triangles.

8. Model editing

The final model is the result of an automatic process that is not always perfect. Some manual post-processing capability is sometimes desired, such as:

- Add/remove triangles

- Flip normals

- Change vertex order CW/CCW

- Tweak texture coordinates

- Volumetric selection of a region, in order to use the distance field to mesh the contained point cloud, or to remesh: keep track of the triangle ids that intersect the boundary of the volume, so that outer vertexes can be connected to boundary feature vertexes; remove the previous boundary triangles T; compute the distance field inside the defined volume; give each outer vertex V a reference to the cell(s) that its triangle intersected, so that each V knows which feature vertex(es) it has to connect to.

- Fill holes, which can be done in two ways: A. Find the minimal edges E inside the hole compatible with the neighbor cells N; compute virtual cells and feature vertexes V; connect V with N by triangles and find the intersections with E. A new triangulation can then be generated inside the hole with Dual Contouring. B. Resample the data in a uniform volume grid and do region growing [James Davis, Steve Marschner, Matt Garr, and Marc Levoy, "Filling holes in complex surfaces using volumetric diffusion", First International Symposium on 3D Data Processing, Visualization, and Transmission, June 2002].

- Cut triangles: the algorithm is the same as in the JRC Reconstructor.

9. Applications

Potential applications for the system are listed below. It is to be noted that this list is not meant to be exhaustive:

9.1. Design information verification/3D plant verification of industrial and nuclear facilities

The system can be used to verify that a plant has been built or re-built according to, or currently conforms to, a master or blueprint specification. Inputs to the system for the reference can be previously acquired 3D scans, CAD models or other relevant information.

Example 1

A Processing Plant has been constructed. In order to verify its construction the owner/operator wants to make a quick analysis by comparing the original designs with the as-built model. This can be performed by utilizing the system described in this invention.

9.2. Nuclear safeguards and non-proliferation activities

The system can be used as a tool for safeguards to facilitate annual or repeated inspections of interesting areas. The system can perform a change detection of a plant or parts of a plant area to verify that no non-authorized changes or modifications have been performed.

Example 2

A nuclear power plant has been modified according to a strict modification plan. Several areas have been changed. With this system it is possible to perform a change detection and to verify and analyze the changes performed.

Example 3

To ensure adherence to their Nuclear Non-Proliferation Treaty (NPT) obligations, countries are required to declare design information on all new and modified facilities which are under safeguards, and to ensure that the accuracy and completeness of the declaration is maintained for the life of the facility. It is the obligation of the United Nations' International Atomic Energy Agency (IAEA) to verify that the design and purpose of the "as-built" facility are as declared and that they continue to be correct. These activities are referred to as Design Information Examination and Verification (DIE/DIV) and can be divided into three steps: 1) examination of the declared design documents; 2) collection of information on the "as-built" facility using various methodologies; and 3) comparison of the "as-built" facility with the declared information.

A newly applied methodology for DIV tasks was developed, making use of 3D laser scanning and associated processing and analysis software. The main components of the system are: a) 3D data acquisition systems, and the tools to b) create realistic "as-built" 3D reference models, c) detect and verify the changes between the current 3D reconstructed model and a reference model, and d) track and document changes in successive inspections. The system is capable of accepting both CAD and acquired 3D models of the "as-built" facility, from either indoor or outdoor areas. The procedure is divided into three distinct phases:

- Building a 3D reference model by acquiring multiple scans of the scene. The quality of the DIV activities is highly dependent on how accurately and realistically the 3D model documents the "as-built" plant. As such, the reference model should be acquired under the best possible conditions, including a) the highest possible spatial resolution, b) the lowest possible measurement noise and c) multiple views to cover possible occlusions. The number of required scans depends on the complexity of the scene.

- Initial verification of the 3D models against the CAD models or the engineering drawings provided by the plant operator. The process is fully automatic. In the case of unavailability of CAD models, the software provides tools allowing the measurement of distances for verification of lengths, heights, pipe diameters, etc.

- Re-verification. At that time, new scans of the selected area are taken and compared with the reference model in order to detect any differences relative to the initial verification. The automatically detected changes can be further analysed by an operator. The re-verification phase can occur at any time after the reference model is constructed.

9.3. Industrial automated production line: clash detection

For the automated production line industry, including the automotive industry, the system can be used in several applications. As described above for design information verification, in design and re-design the system aids in verifying the construction according to specifications. When new models or versions of existing models are to be introduced and produced in the plant, the system can be used for clash detection, i.e. verifying that the existing facilities can accommodate the new models.

9.4. Analysis of vehicle accidents

The system can be used to register, on site, car accidents and their surroundings. The crash scenario is registered in the model with a true coordinate system, which means that measurements can be performed offline directly on the model. The car itself is also registered, which means that the impact of the crash on the car can be measured and inspected.

9.5. Analysis of the car deformations

Just as with vehicle accidents, the system can be used for analyzing the impact on a car after an accident. The change detection capabilities of the system can instantly present the deformed zones to an operator.

9.6. City, forest and landscape assessment, cadastral mapping (initial, detect changes and update)

Governments of isolated communities can use the scanning capabilities of the system to acquire photo-realistic facades of the surroundings. This can be used in conjunction with other resources such as GIS data and maps. An existing reference can be used at any time, together with later scans, to perform a change detection in order to supervise building modifications, authorized or not, and afterwards update the records.

Example 4

A county within a country has an area that is considered valuable for future preservation. The area mostly contains private houses. The restrictions on modifications are such that all external modifications to the houses must be declared in advance and accepted by the county architect. In this specific case, the county architect can request a model of the area to be constructed once a year and perform a change detection of the facades. All changes detected by the system relative to a previous scan are presented and logged by the system. By verifying and analyzing the changes, the county architect can easily and quickly detect whether some property owners have made modifications without previous notification and proper acceptance.

9.7. Environment studies (e.g., city pollution)

In specific cases an external sensor can be added to the system, providing measurement data on a topic such as pollution content. The data is stored by the system for the specific location, and also the direction where feasible. This scan can be compared directly against a threshold or similar, or the scan can be used as a reference. Further scans after a period of time, with the same sensor attached, can be used for change detection relative to the previous scan.

9.8. Management of city infrastructures

Scanned urban areas can be used to keep a reference for analyzing the effect of potential modifications.

9.9. Preparation and training of rescue operations

Rescue operations can be planned off-site if the areas to be operated in have previously been scanned by the system. The rescue team can prepare for operations and, in advance, detect potential problems with narrow areas, cable lengths, and the position of personnel, equipment and other objects.

9.10. Security planning

It is possible to use the system to create 3D models of relevant areas to plan the visits of important people. It is possible to simulate threat/attack scenarios and design security schemes, including the installation of sensors. The same technology can be used for major events, e.g. major sports and/or entertainment events with the participation of large crowds. Within this system it is possible to automatically distinguish vehicles according to their major type of use, e.g. sedan, van, minivan, truck, etc.

9.11. Disaster assessment

In case of serious disasters, a previously acquired reference model of an area can be compared to post-disaster scans. The system can perform a change detection and present to a user the consequences of building movements, areas being restructured and so forth.

9.12. Spatial forensics

In the field of spatial forensics, the method and system for 3D scene change detection can also be of advantage.

Claims
1. A method for 3D scene change detection comprising the steps of:
- scanning a scene to be verified by means of a laser scanning device of a data acquisition system;
- constructing a 3D model from said scanned data; and
- comparing the constructed 3D model with a reference model.
2. The method according to claim 1, wherein said reference model is a model constructed from previously scanned data.
3. The method according to claim 1, wherein said reference model is a CAD model or a model based on drawings by an engineer.
4. The method according to any of claims 1 to 3, wherein said data acquisition system comprises a laser range scanner and said step of scanning a scene to be verified comprises the step of scanning said scene by said laser range scanner.
5. The method according to any of claims 1 to 3, wherein said data acquisition system comprises a main laser range scanner and at least one auxiliary laser range scanner, said at least one auxiliary laser range scanner being arranged at a fixed angle with respect to said main laser range scanner and said step of scanning a scene to be verified comprises the step of scanning said scene by both said main laser range scanner and said at least one auxiliary laser range scanner.
6. The method according to any of claims 1 to 5, wherein said data acquisition system comprises a vertical laser range scanner and a horizontal laser range scanner and said step of scanning a scene to be verified comprises the step of scanning said scene by both said vertical laser range scanner and said horizontal laser range scanner.
7. The method according to any of claims 1 to 6, wherein said data acquisition system comprises a digital color camera and said step of scanning a scene to be verified comprises the step of acquiring digital color photographs by means of said digital color camera.
8. The method according to any of claims 1 to 7, wherein said data acquisition system comprises a positioning system and said step of scanning a scene to be verified comprises the further step of acquiring positional data by means of said positioning system.
9. The method according to claim 8, wherein said positioning system comprises a GPS receiver and/or an inertial orientation reference system.
10. The method according to any of claims 1 to 9, comprising the further steps of providing data storage means and storing said scanned data in said data storage means.
11. The method according to any of claims 1 to 10, wherein said step of constructing a 3D model from said scanned data comprises the step of providing model building means.
12. The method according to claim 11, wherein said model building means comprises means for re-calibrating vertical profiles of the scanned scene and said step of constructing a 3D model comprises the step of re-calibrating vertical profiles of said scanned scene by using freshly acquired data and previously acquired data.
13. The method according to any of claims 11 to 12, wherein said model building means comprises scan registration means for merging acquired data from at least two different sources and said step of constructing a 3D model comprises the steps of acquiring data from at least two different sources and merging said data.
14. The method according to claim 13, wherein said scan registration means comprises means for determining a quality value of a point of said acquired data, and said step of constructing a 3D model comprises the step of determining a quality value of a point of said acquired data.
15. The method according to any of claims 11 to 14, wherein said model building means comprises import means and combining means and said step of constructing a 3D model comprises the steps of importing data from external sources and combining said scanned data with imported data.
16. The method according to any of claims 11 to 15, wherein said model building means comprises means for dividing said scans into octrees and means for calculating the denseness of an octant and said step of constructing a 3D model comprises the steps of dividing said scans into octrees and calculating the denseness of an octant.
17. The method according to any of claims 11 to 16, wherein said model building means comprises reconstruction means for integrating and triangulating acquired data with volumetric techniques and said step of constructing a 3D model comprises the steps of integrating and triangulating said acquired data.
18. The method according to claim 17, wherein said reconstruction means comprises voxelization means and said step of constructing a 3D model comprises the step of voxelizing said scanned data.
19. The method according to any of claims 17 to 18, further comprising the step of using a multi-resolution structure to encode said acquired data with higher resolution only in regions where necessary.
20. The method according to any of claims 17 to 19, wherein said model building means comprises means for enumerating minimal edges and computing intersections and said step of constructing a 3D model comprises the steps of enumerating minimal edges and computing intersections.
21. The method according to any of claims 17 to 20, wherein said model building means comprises means for finding the connectivity of the intersections along the edges and said step of constructing a 3D model comprises the step of finding the connectivity of the intersections along the edges.
22. The method according to any of claims 11 to 21, wherein said model building means comprises octree storage means and said step of constructing a 3D model comprises the step of storing data in separate files.
23. The method according to any of claims 11 to 22, wherein said step of constructing a 3D model comprises the step of computing intersections of triangulation by interpolating a confidence value into the triangles.
24. The method according to any of claims 11 to 23, wherein said step of constructing a 3D model comprises the step of texturing the created model.
25. The method according to any of claims 1 to 24, wherein said step of comparing the constructed 3D model with a reference model comprises providing inspection means.
26. The method according to claim 25, wherein said inspection means comprises distance computing means and said step of comparing the constructed 3D model with a reference model comprises the steps of constructing a bounding box for each 3D point to check; considering the leaf octants intersecting said bounding box; and using the triangles inside said bounding box to compute distance.
27. The method according to claim 26, wherein said step of comparing the constructed 3D model with a reference model further comprises the steps of finding the closest triangle among all of the triangles in said bounding box; computing closest point; and computing shortest distance.
28. The method according to claim 27, wherein said step of comparing the constructed 3D model with a reference model further comprises the step of storing the shortest distance that has been found for each point in a file.
29. The method according to any of claims 1 to 28, wherein said step of comparing the constructed 3D model with a reference model comprises providing presentation means for presenting the detected changes between said constructed 3D model and a reference model.
30. A system for 3D scene change detection comprising means for carrying out the method of any of the previous claims.
31. A system for 3D scene change detection comprising a data acquisition system comprising a laser scanning device for scanning the scene to be verified, model building means for constructing a 3D model from the scanned data, and inspection means for comparing the constructed 3D model with a reference model.
32. The system according to claim 31, wherein said data acquisition system is tripod mounted or vehicle mounted.
33. The system according to any of claims 31 to 32, wherein said data acquisition system comprises a laser range scanner.
34. The system according to any of claims 31 to 32, wherein said data acquisition system comprises a main laser range scanner and at least one auxiliary laser range scanner, said at least one auxiliary laser range scanner being arranged at a fixed angle with respect to said main laser range scanner.
35. The system according to any of claims 31 to 34, wherein said data acquisition system comprises a vertical laser range scanner and a horizontal laser range scanner.
36. The system according to any of claims 31 to 35, wherein said data acquisition system further comprises at least one digital color camera.
37. The system according to any of claims 31 to 36, wherein said data acquisition system further comprises a positioning system.
38. The system according to claim 37, wherein said positioning system comprises a GPS receiver and/or an inertial orientation reference system.
39. The system according to any of claims 31 to 38, further comprising a computer including said data storage means and said model building means.
40. The system according to any of claims 31 to 39, wherein said model building means comprises means for re-calibrating vertical profiles of the scanned scene.
41. The system according to claim 40, wherein said means for re-calibrating vertical profiles of the scanned scene comprises means for using freshly acquired data and previously acquired data.
42. The system according to any of claims 31 to 41, further comprising scan registration means for merging acquired data from at least two different sources.
43. The system according to claim 42, wherein said scan registration means comprises means for determining a quality value of a point of said acquired data.
44. The system according to any of claims 31 to 43, further comprising import means for importing data from external sources.
45. The system according to claim 44, wherein said model building means comprises means for combining said scanned data with imported data.
46. The system according to any of claims 31 to 45, wherein said model building means comprises
- means for dividing the scans into octrees; and
- means for calculating the denseness of an octant.
47. The system according to any of claims 31 to 46, wherein said model building means comprises
- reconstruction means for integrating and triangulating acquired data with volumetric techniques
48. The system according to claim 47, wherein said reconstruction means comprises voxelization means.
49. The system according to any of claims 47 to 48, wherein said reconstruction means comprises means for encoding acquired data in a multi-resolution structure.
50. The system according to any of claims 47 to 49, wherein said reconstruction means comprises means for enumerating minimal edges and computing intersections.
51. The system according to any of claims 47 to 50, wherein said reconstruction means comprises octree storage means for storing data in separate files.
52. The system according to any of claims 47 to 51, wherein said reconstruction means comprises means for computing intersections of triangulation.
53. The system according to any of claims 47 to 52, wherein said reconstruction means comprises texturing means for texturing the created model.
54. The system according to any of claims 47 to 53, wherein said reconstruction means comprises means for computing intersections of triangulation.
55. The system according to any of claims 31 to 54, further comprising presentation means for displaying the identified changes.
