AU2010200144A1 - Extraction processes - Google Patents
- Publication number
- AU2010200144A1
- Authority
- AU
- Australia
- Legal status
- Abandoned
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
        - G06T17/05—Geographic models
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
      - G06V20/00—Scenes; Scene-specific elements
        - G06V20/10—Terrestrial scenes
          - G06V20/13—Satellite images
Abstract
EXTRACTION PROCESSES

A method and apparatus for classifying an extracted object (8, 10) or terrain feature (6), comprising: measuring values of a parameter in a plurality of cells; identifying cells corresponding to a particular object (8, 10) or terrain feature (6) using the measured values; determining parameter values at a set of points for classes of objects; for each class, aligning the measured values of the parameter corresponding to a particular object (8, 10) or terrain feature (6) with the determined parameter values corresponding to the class; for each class, determining a value of an error between the aligned measured values and the determined parameter values corresponding to the class; and classifying the particular object (8, 10) or terrain feature (6) as an object in the class corresponding to a minimum error value. Aligning measured values of the parameter with parameter values corresponding to the class may be performed using an Iterative Closest Point algorithm. (Figure 1)

2165961_1 (GHMatters) 14/01/10
Description
AUSTRALIA
Patents Act 1990
COMPLETE SPECIFICATION
Standard Patent
Applicant(s): University of Sydney
Invention Title: Extraction processes
The following statement is a full description of this invention, including the best method for performing it known to me/us:

EXTRACTION PROCESSES

FIELD OF THE INVENTION
The present invention relates to extraction, extraction processes, extraction algorithms, and the like.

BACKGROUND
Data corresponding to the geometry of an area of terrain and any natural and/or artificial features or objects of the area may be generated. For example, a laser scanner, such as a Riegl laser scanner, may be used to scan the area of terrain and generate 3D point cloud data corresponding to the terrain and the features.
Various algorithms for processing 3D point cloud data of a terrain area are known. Such algorithms are typically used to construct 3D terrain models of the terrain area for use in, for example, path planning or analysing mining environments. The terrain models conventionally used include the Mean Elevation Map, the Min-Max Elevation Map, the Multi-Level Elevation Map, the Volumetric Density Map, Ground Modelling via Plane Extraction, and Surface Based Segmentation.
Mean Elevation Maps are commonly classified as 2½D models because the third dimension (height) is only partially modelled. In these models the terrain is represented by a grid having a number of cells. The heights of the laser scanner returns falling in each grid cell are averaged to produce a single height value for each cell. An advantage of averaging the heights of the laser returns is that noisy returns can be filtered out. However, this technique cannot capture overhanging structures, such as tree canopies.
Min-Max Elevation Maps are also used to capture the height of the returns in each grid cell. The difference between the maximum and the minimum height of the laser scanner returns falling in a cell is computed.
A cell is declared occupied if its calculated height difference exceeds a pre-defined threshold. These height differences provide a computationally efficient approximation to the terrain gradient in a cell. Cells which contain too steep a slope or are occupied by an object will be characterised by a strong gradient and can be identified as occupied. An advantage of this technique is that approximations are not made, i.e. averaging is avoided. However, this technique is more sensitive to noise than a Mean Elevation Map.
Multi-Level Elevation Maps are an extension of elevation maps. Such algorithms are capable of capturing overhanging structures by discretising the vertical dimension. They also allow for the generation of large scale 3D maps by recursively registering local maps. Typically, however, the discrete classes chosen for the vertical dimension may not facilitate segmentation. Also, typically the ground is not used as a reference for vertical height.
Volumetric Density Maps discriminate between soft and hard obstacles. This technique breaks the terrain area into a set of voxels and counts the number of sensor hits and misses in each voxel. A hit corresponds to a return that terminates in a given voxel. A miss corresponds to a laser beam going through a voxel. Regions containing soft obstacles, such as vegetation, correspond to a small ratio of hits over misses. Regions containing hard obstacles correspond to a large ratio of hits over misses. While this technique does allow the identification of soft obstacles (the canopy of the trees, for instance), segmenting a scene based on the representation it provides would not be straightforward, since parts of objects would be disregarded (windows in buildings or patches of vegetation, for instance).
A Ground Modelling via Plane Extraction approach is suitable for extracting multi-resolution planar surfaces.
This involves discretising the terrain area into two superimposed 2D grids of different resolutions, i.e. one grid has larger cells than the other. Each grid cell in each of the two grids is represented by a plane fitted to the corresponding laser returns via least square regression. A least square error for each plane in each grid is computed. By comparing the different error values, several types of regions can be identified. In particular, both values are small in sections corresponding to the ground. Also, the error value of the larger-celled plane is small while the error value of the smaller-celled plane is large in areas containing a flat surface with a spike (a thin pole, for instance). Also, both error values are large in areas containing an obstacle. This method is able to identify the ground while not averaging out thin vertical obstacles (unlike a Mean Elevation Map). However, it is not able to represent overhanging structures.
Surface Based Segmentation performs segmentation of 3D point clouds based on the notion of surface continuity. Surface continuity is evaluated using a mesh built from the data. The mesh is generated by exploiting the physical ordering of the measurements, which implies that longer edges in the mesh or more acute angles formed by two consecutive edges directly correspond to surface discontinuities. While this approach performs 3D segmentation, it does not identify the ground surface.
Thus, there is a need for an algorithm for performing segmentation of 3D point cloud data that jointly provides a representation of the ground and representations of objects.
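The per-cell test at the heart of the plane-extraction approach described above can be sketched as follows. The function name and the regression form z = a*x + b*y + c are illustrative assumptions; the patent only states that a plane is fitted by least square regression and its error compared across the two grid resolutions.

```python
import numpy as np

def plane_fit_error(points):
    """Fit z = a*x + b*y + c to the returns of one grid cell by least
    squares and report the mean squared residual. Comparing this error
    between the coarse and fine grids is the region test sketched in
    the text (small/small: ground; small/large: spike; large/large:
    obstacle)."""
    pts = np.asarray(points, dtype=float)
    # Design matrix [x, y, 1] for the affine plane model.
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    residuals = pts[:, 2] - A @ coeffs
    return float(np.mean(residuals ** 2))
```

For returns lying on a plane (e.g. z = x + 2y + 3) the error is essentially zero, while a cell containing a spike or obstacle yields a large residual.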
SUMMARY OF THE INVENTION
In a first aspect the present invention provides a classification process for classifying an extracted object or terrain feature, the classification process comprising: measuring values of a parameter in a plurality of cells; identifying cells corresponding to a particular object or terrain feature using the measured values of the parameter; determining parameter values at a set of points for each of a plurality of classes of objects; for each of the plurality of classes of objects, aligning the measured values of the parameter corresponding to a particular object or terrain feature with the determined parameter values corresponding to the class of objects; for each of the plurality of classes of objects, determining a value of an error between the aligned measured values of the parameter corresponding to a particular object or terrain feature and the determined parameter values corresponding to the class of objects; and classifying the particular object or terrain feature as an object in the class of objects corresponding to a minimum of the determined error values.
The step of aligning the measured values of the parameter corresponding to a particular object or terrain feature with the determined parameter values corresponding to the class of objects may comprise performing an Iterative Closest Point algorithm on the measured values of the parameter and the determined parameter values.
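The Iterative Closest Point alignment named above can be sketched as a minimal point-to-point ICP. The patent only names the algorithm, so the nearest-neighbour correspondence rule, the SVD-based (Kabsch) transform solve, and the fixed iteration count are all assumptions of this sketch.

```python
import numpy as np

def icp_align(source, target, iterations=20):
    """Minimal point-to-point ICP sketch: repeatedly match each source
    point to its nearest target point, then solve for the rigid
    transform that best aligns the matched pairs."""
    src = np.asarray(source, float).copy()
    tgt = np.asarray(target, float)
    for _ in range(iterations):
        # Nearest-neighbour correspondences (brute force).
        d = np.linalg.norm(src[:, None, :] - tgt[None, :, :], axis=2)
        matched = tgt[d.argmin(axis=1)]
        # Best rigid transform for the current matches (Kabsch).
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:  # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        src = (src - mu_s) @ R.T + mu_t
    return src
```

Applied to a point set that is a small rigid displacement of the class set, the loop converges to the aligned position, after which the error of the following aspect can be evaluated.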
The process may further comprise: identifying which of the set of points corresponding to the particular object or terrain feature, or the set of points corresponding to the class of objects that the particular object or terrain feature is classified as, comprises the largest number of points for which a value of the parameter has been determined; for each of the points in the identified largest set, performing the following: fitting a plane to the points in the same cell as that point to produce a tangent plane; determining two planes, the two planes being orthogonal to the tangent plane, orthogonal to each other, and containing that point; identifying the point as a fit only if each of the four quadrants defined by the two orthogonal planes contains a data point from the set not identified as the largest; and rejecting the classification of the particular object or terrain feature as an object in the class of objects corresponding to a minimum of the determined error values if a certain proportion of points in the identified largest set are not identified as a fit.
The certain proportion of points may be one half.
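The quadrant test above can be sketched for a single point of the larger set, given its tangent-plane normal. The function name, the construction of the two in-plane axes, and testing all points of the other set (rather than a local neighbourhood) are simplifying assumptions; the claim leaves the construction of the two orthogonal planes open.

```python
import numpy as np

def quadrant_fit(point, normal, other_points):
    """Accept a point only if every one of the four quadrants defined
    by two directions orthogonal to the tangent-plane normal (and to
    each other) contains at least one point of the other set."""
    n = np.asarray(normal, float)
    n /= np.linalg.norm(n)
    # Any vector not parallel to n seeds the first in-plane axis.
    seed = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(n, seed)
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    quadrants = set()
    for q in np.asarray(other_points, float) - np.asarray(point, float):
        quadrants.add((np.dot(q, u) >= 0, np.dot(q, v) >= 0))
    return len(quadrants) == 4
```

A point surrounded on all four sides by points of the other set is a fit; a point with an empty quadrant (e.g. on the boundary of the overlap) is not.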
The step of determining a value of an error may comprise calculating the following formula:

E_i = \sum_{k=1}^{N_{\mathrm{object}}} \left\lVert P_k^{\mathrm{object}} - P_{\mathrm{closest}}^{i} \right\rVert + \sum_{k=1}^{N_i} \left\lVert P_k^{i} - P_{\mathrm{closest}}^{\mathrm{object}} \right\rVert

where: E_i is the value of the error between the aligned measured values of the parameter corresponding to a particular object or terrain feature and the determined parameter values corresponding to the class of objects; N_object is the number of points corresponding to the measured values of the parameter corresponding to a particular object or terrain feature; N_i is the number of points in the set of points for the ith class of objects; P_k^object is the kth point in the set of points corresponding to the measured values of the parameter corresponding to a particular object or terrain feature; P_closest^i is the point in the set of points for the ith class of objects closest to the kth point in the set of points corresponding to the measured values of the parameter corresponding to a particular object or terrain feature; P_k^i is the kth point in the set of points for the ith class of objects; and P_closest^object is the point in the set of points corresponding to the measured values of the parameter corresponding to a particular object or terrain feature closest to the kth point in the set of points for the ith class of objects.
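Reconstructed from the symbol definitions above, the two-sided nearest-neighbour error can be computed as follows. This is a brute-force sketch: the patent does not prescribe a nearest-neighbour data structure, and whether the two sums are normalised by the point counts is not recoverable from the text, so unnormalised sums are used here.

```python
import numpy as np

def classification_error(object_pts, class_pts):
    """Sum, over each object point, of the distance to its closest
    class point, plus the same sum in the reverse direction. The class
    with the minimum value wins the classification."""
    obj = np.asarray(object_pts, float)
    cls = np.asarray(class_pts, float)
    # Pairwise distance matrix between the two point sets.
    d = np.linalg.norm(obj[:, None, :] - cls[None, :, :], axis=2)
    return float(d.min(axis=1).sum() + d.min(axis=0).sum())
```

Identical point sets give an error of zero; the error grows with the residual misalignment and shape mismatch, which is what makes it usable as a class score.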
The steps of measuring values of a parameter in a plurality of cells, and identifying cells corresponding to a particular object or terrain feature using the measured values of the parameter, may in combination comprise: defining an area to be processed; dividing the area into a plurality of cells; measuring a value of a parameter at a plurality of different locations in each cell; for each cell, determining a value of a function of the measured parameter values in that cell; identifying a cell as corresponding only to a particular object or terrain feature if the determined function value for that cell is in a range of values that corresponds to the particular object or terrain feature; defining, for the cells that are not identified as corresponding only to a particular object or terrain feature, one or more sub-cells, each sub-cell having in it at least one of the plurality of different locations; and identifying a sub-cell as corresponding at least in part to the particular object or terrain feature if one or more of the measured parameter values for the at least one of the plurality of different locations in that sub-cell is in the range of values.
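The coarse-to-fine identification above can be sketched as follows. `in_range` and `subdivide` are hypothetical callables standing in for the range test and the sub-cell construction, and the mean is used as the cell function (as one later aspect permits); none of these names come from the patent.

```python
def classify_cells(cell_points, in_range, subdivide):
    """Coarse-to-fine labelling sketch: a cell whose aggregate (here,
    mean) value falls in the target range is labelled as the feature
    outright; otherwise its sub-cells are labelled from the individual
    measurements they contain."""
    labels = {}
    for cell, values in cell_points.items():
        if in_range(sum(values) / len(values)):
            labels[cell] = "feature"  # whole cell is the feature
        else:
            for sub, sub_values in subdivide(cell, values).items():
                hit = any(in_range(v) for v in sub_values)
                labels[(cell, sub)] = "part-feature" if hit else "other"
    return labels
```

For example, with `in_range = lambda v: v < 1.0`, a cell whose measurements average below 1.0 is labelled in one step, while a mixed cell is split and only the sub-cells containing an in-range measurement are labelled as (partly) the feature.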
The steps of measuring values of a parameter in a plurality of cells, and identifying cells corresponding to a particular object or terrain feature using the measured values of the parameter, may in combination comprise: defining an area to be processed; dividing the area into a plurality of cells; during a first time period, measuring a value of a parameter at a first plurality of different locations in the area; storing in a database the values of the parameter measured in the first time period; for each cell in which a parameter value has been measured, determining a value of a function of parameter values measured in that cell and stored in the database; identifying a cell in which a parameter value has been measured as corresponding only to a particular object or terrain feature if the determined function value for that cell is in a range of values that corresponds to the particular object or terrain feature; defining, for the cells in which a parameter value has been measured and that are not identified as corresponding only to a particular object or terrain feature, one or more sub-cells, each sub-cell having in it at least one of the plurality of different locations; identifying a sub-cell as corresponding at least in part to the particular object or terrain feature if one or more of the measured parameter values for the at least one of the plurality of different locations in that sub-cell is in the range of values; during a second time period, measuring a value of a parameter at a second plurality of different locations in the area; storing the values of the parameter measured in the second time period in the database; for each cell in which a parameter value has been measured in the second time period but not the first time period, determining a value of a function of parameter values measured in that cell and stored in the database; for each cell in which a parameter value has been measured in the second time period and the first time period, updating the value of the function using parameter values measured in that cell in the second time period and stored in the database; identifying a cell in which a parameter value has been measured as corresponding only to a particular object or terrain feature if the determined function value for that cell is in a range of values that corresponds to the particular object or terrain feature; defining, for the cells in which a parameter value has been measured and that are not identified as corresponding only to a particular object or terrain feature, one or more sub-cells, each sub-cell having in it at least one of the plurality of different locations; and identifying a sub-cell as corresponding at least in part to the particular object or terrain feature if one or more of the measured parameter values for the at least one of the plurality of different locations in that sub-cell is in the range of values.
The step of identifying a sub-cell as corresponding at least in part to the particular object or terrain feature may comprise: identifying a sub-cell as corresponding only to the particular object or terrain feature if the measured parameter value for each of the at least one of the plurality of different locations in that sub-cell is in the range of values; and identifying a sub-cell as corresponding in part to the particular object or terrain feature if one or more of the measured parameter values for the at least one of the plurality of different locations in that sub-cell is in the range of values and if one or more of the measured parameter values for the at least one of the plurality of different locations in that sub-cell is not in the range of values.
The process may further comprise identifying a sub-cell as corresponding at least in part to a different object or terrain feature if one or more of the measured parameter values for the at least one of the plurality of different locations in that sub-cell is not in the range of values.
The process may further comprise identifying a sub-cell as corresponding only to a different object or terrain feature if each of the measured parameter values for the at least one of the plurality of different locations in that sub-cell is not in the range of values.
The step of determining a value of a function of the measured parameter values in that cell may comprise: determining an average value of the values of a parameter measured at the plurality of different locations in each cell.
In a further aspect the present invention provides an apparatus for classifying an extracted model of an object or terrain feature, the apparatus comprising scanning and measuring apparatus for measuring the plurality of values of a parameter, and one or more processors arranged to perform the processing steps of any of the above aspects.
In a further aspect the present invention provides a computer program or plurality of computer programs arranged such that when executed by a computer system it/they cause the computer system to operate in accordance with the process of any of the above aspects of the present invention.
In a further aspect the present invention provides a machine readable storage medium storing a computer program or at least one of the plurality of computer programs according to the above aspect.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 is a schematic illustration of an embodiment of a terrain modelling scenario in which a laser scanner is used to scan a terrain area;
Figure 2 is a process flowchart showing certain steps of a terrain modelling algorithm performed by a processor;
Figure 3 is a process flowchart showing certain steps of the ground extraction process of step s2 of the algorithm;
Figure 4 is a schematic illustration of three cells of a grid of a Mean Elevation Map;
Figure 5 is a process flowchart showing certain steps of the object segmentation process of step s4 of the algorithm;
Figure 6 is a schematic illustration of a first cell, a second cell, and a third cell of the grid and the range of height values assigned to each of these cells in a Min-Max Elevation Map;
Figure 7 is a schematic illustration of the first cell, the second cell, and the third cell of the Mean Elevation Map grid, after performing step s28;
Figure 8 is a schematic illustration of the laser scanner scanning an object that is hidden behind a further object;
Figure 9 is a process flow chart showing certain steps of an iterative segmentation algorithm;
Figure 10 is a schematic block diagram showing certain details of a further implementation of the process of Figure 9;
Figure 11 is a process flow chart showing certain steps of a ground model updating process;
Figure 12 is a process flow chart showing certain steps of a process for determining whether a new measurement corresponds to the ground; and
Figure 13 is a process flow chart showing certain steps of an embodiment of a classification process.

DETAILED DESCRIPTION
The terminology "terrain" and "terrain features" is used herein to refer to a geometric configuration of an underlying supporting surface of an environment or a region of an environment. The terminology "object" is used herein to refer to any objects or structures that exist above (or below) this surface.
The underlying supporting surface may, for example, include surfaces such as the underlying geological terrain in a rural setting, or the artificial support surface in an urban setting, either indoors or outdoors. The geometric configuration of other objects or structures above this surface may, for example, include naturally occurring objects such as trees or people, or artificial objects such as buildings or cars.
Some examples of terrain and objects are as follows: rural terrain having hills, cliffs, and plains, together with objects such as rivers, trees, fences, buildings, and dams; outdoor urban terrain having roads and footpaths, together with buildings, lampposts, traffic lights, cars, and people; outdoor urban terrain such as a construction site having partially laid foundations, together with objects such as partially constructed buildings, people, and construction equipment; and indoor terrain having a floor, together with objects such as walls, ceiling, people, and furniture.
Figure 1 is a schematic illustration of an embodiment of a terrain modelling scenario in which a laser scanner 2 is used to scan a terrain area 4. In this scenario, the laser scanner 2 used to scan the terrain area 4 is a Riegl laser scanner.
The laser scanner 2 generates dense 3D point cloud data for the terrain area 4 in a conventional way. This data is sent from the laser scanner 2 to a processor 3. In this embodiment, the terrain area 4 comprises an area of ground 6 (or terrain surface), and two objects, namely a building 8 and a tree 10.
The generated 3D point cloud data for the terrain area 4 is processed by the processor 3 using an embodiment of a terrain modelling algorithm, hereinafter referred to as the "segmentation algorithm", useful for understanding the invention.
The segmentation algorithm advantageously tends to provide a representation of the ground 6, as well as representations of the various objects 8, 10 above the ground 6, and also allows refinements to be made to the representation of the ground 6 using the representations of the objects 8, 10, as described in more detail later below.
Figure 2 is a process flowchart showing certain steps of an embodiment of a process implemented by the segmentation algorithm performed by the processor 3.
At step s2, a ground extraction process is performed on the 3D point cloud data. The ground extraction process explicitly separates 3D point cloud data corresponding to the ground 6 from that corresponding to the other objects, i.e. here the building 8 and the tree 10, and is described in more detail later below with reference to Figure 3.
At step s4, an object segmentation process is performed on the 3D point cloud data. The object segmentation process segments the 3D point cloud data such that each segment of data corresponds to a single object, as described in more detail later below with reference to Figure 5.
Figure 3 is a process flowchart showing certain steps of the ground extraction process of step s2 of the segmentation algorithm.
At step s6, a Mean Elevation Map of the terrain area 4 is computed. This is a conventional Mean Elevation Map. The resolution of a grid underlying the map may be any appropriate value.
In this embodiment, the Mean Elevation Map is a grid having a plurality of cells. Each cell has assigned to it a height value determined from height values corresponding to laser sensor returns from that cell. In this embodiment, the height value for a cell is the average of the height values corresponding to laser sensor returns from that cell.
At step s8, a surface gradient value is computed for each cell in the grid.
A surface gradient value for a particular cell is obtained by first computing the gradients between that cell and each of the surrounding cells. The gradient with the largest absolute value is retained as the gradient at the particular cell.
At step s10, cells corresponding to relatively flat surfaces are identified. In this embodiment, this is achieved by selecting cells having a surface gradient value below a gradient-threshold value. In this embodiment, the gradient-threshold value is 0.5. This corresponds to a slope angle of 27 degrees. However, in other embodiments a different gradient-threshold value is used.
At step s12, the cells identified as corresponding to the relatively flat surfaces, i.e. the cells that have a surface gradient value below the gradient-threshold, are grouped together with any adjacent cells having a surface gradient value below the gradient-threshold value. This forms clusters of cells that correspond to relatively flat areas.
At step s14, the largest cluster of cells that correspond to a relatively flat area, i.e. the cluster formed at step s12 containing the largest number of cells, is identified.
At step s16, the identified largest cluster is used as a reference cluster with respect to which it can be determined whether the other smaller clusters formed at step s12 correspond to the ground 6 of the terrain area 4. The reference cluster is used because locally smooth clusters that do not correspond to the ground 6 may exist. Thus, these cases are filtered out using the reference to the ground 6 provided by the largest ground cluster.
In this embodiment, the identified largest cluster is assumed to correspond to the ground 6. Thus, any of the smaller clusters of cells, the cells of which have substantially smaller or larger height values than those of the largest cluster, are assumed not to correspond to the ground 6.
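Steps s8 to s16 can be sketched on a mean-elevation grid as follows. Using the height difference to an 8-neighbour as the gradient assumes unit cell spacing, and the flood-fill clustering is one possible implementation of the cell grouping; both are assumptions of this sketch rather than details fixed by the description.

```python
import numpy as np

def extract_ground(height, grad_threshold=0.5):
    """Per-cell gradient = largest absolute height difference to any
    8-neighbour (step s8); flat cells fall below the threshold (s10);
    flat cells are clustered by adjacency (s12); the largest cluster is
    taken as the ground reference (s14, s16)."""
    rows, cols = height.shape

    def neighbours(i, j):
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if (di, dj) != (0, 0) and 0 <= i + di < rows and 0 <= j + dj < cols:
                    yield i + di, j + dj

    grad = np.zeros_like(height)
    for i in range(rows):
        for j in range(cols):
            grad[i, j] = max(abs(height[i, j] - height[n]) for n in neighbours(i, j))
    flat = grad < grad_threshold
    # Connected components over flat cells via flood fill.
    seen, clusters = set(), []
    for i in range(rows):
        for j in range(cols):
            if flat[i, j] and (i, j) not in seen:
                stack, comp = [(i, j)], []
                seen.add((i, j))
                while stack:
                    c = stack.pop()
                    comp.append(c)
                    for n in neighbours(*c):
                        if flat[n] and n not in seen:
                            seen.add(n)
                            stack.append(n)
                clusters.append(comp)
    return max(clusters, key=len) if clusters else []
```

On a flat grid with one tall cell, the tall cell and its immediate neighbours acquire large gradients, and the remaining flat cells form the single largest cluster returned as ground.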
In other words, in this embodiment the cells corresponding to the ground 6 are defined to be the union of the largest cluster of cells, with a surface gradient value below the gradient-threshold, and the other clusters of cells, also having a surface gradient value below the gradient-threshold, in which the absolute value of the average height of the cells minus the average height of the cells in the largest cluster is smaller than a height-threshold. In this embodiment, this height-threshold is 0.2m.
At step s18, a correction of errors generated during the computations of the surface gradient values is performed. One source of such errors, and the correction of those errors, will now be explained with reference to Figure 4.
Figure 4 is a schematic illustration of three cells of the grid of the Mean Elevation Map, namely the first cell 12, the second cell 14, and the third cell 16.
The height value for the first cell 12, i.e. the average of the height values corresponding to laser sensor returns from the first cell 12, is hereinafter referred to as the "first height value 18".
The height value for the second cell 14, i.e. the average of the height values corresponding to laser sensor returns from the second cell 14, is hereinafter referred to as the "second height value 20".
The height value for the third cell 16, i.e. the average of the height values corresponding to laser sensor returns from the third cell 16, is hereinafter referred to as the "third height value 22".
In this embodiment the first height value 18 and the second height value 20 are substantially equal. Also, the third height value 22 is substantially greater than the first height value 18 and the second height value 20.
The surface gradient value for the second cell 14, which is determined at step s8 as described above, is obtained by first computing the gradients between that cell and each of the surrounding cells.
The gradient with the largest absolute value is retained as the gradient at the particular cell. Thus, in this embodiment the surface gradient value for the second cell 14 is the slope between the height levels of the second cell 14 and the third cell 16 (since the gradient between the first cell 12 and the second cell 14 is zero). This gradient is indicated in Figure 4 by the reference numeral 24. Thus, in this embodiment the second cell has a relatively large surface gradient value. In particular, the surface gradient value of the second cell 14 is above the gradient-threshold. Thus, the second cell 14 is not included in the same cluster of cells as the first cell 12, despite the second height value 20 being substantially equal to the first height value 18.
Such errors are corrected at step s18 of the ground extraction process as follows. Each cell identified as not belonging to the ground is inspected. The neighbour cells of the cell being inspected that correspond to the ground 6 are identified, and their average height is computed. If the absolute value of the difference between this average height and the height in the inspected cell is less than a correction-threshold value, the inspected cell is identified as corresponding to the ground 6. For example, returning to Figure 4, the first cell 12 corresponds to the ground 6, whereas the third cell 16 corresponds to an object 8, 10. The difference between the height of the first cell 12, i.e. the first height value 18, and the height of the second cell 14, i.e. the second height value 20, is zero. In this embodiment the correction-threshold is 0.1m. Thus, since zero is less than 0.1m, the second cell 14 is identified as corresponding to the ground 6.
At step s20, the steps s12 and s14 as described above are repeated. The correction of errors carried out at step s18 modifies the clusters of cells that correspond to the ground.
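The step s18 relabelling can be sketched as follows. This is a sketch that assumes 8-connected neighbourhoods and the 0.1 m correction threshold of this embodiment; the function name is illustrative.

```python
import numpy as np

def correct_ground(height, is_ground, correction_threshold=0.1):
    """Step s18 sketch: an off-ground cell is relabelled as ground when
    its height is within the correction threshold of the average height
    of its ground-labelled neighbours (the original labels are used, so
    the correction does not cascade within one pass)."""
    rows, cols = height.shape
    corrected = is_ground.copy()
    for i in range(rows):
        for j in range(cols):
            if is_ground[i, j]:
                continue
            neigh = [height[i + di, j + dj]
                     for di in (-1, 0, 1) for dj in (-1, 0, 1)
                     if (di, dj) != (0, 0)
                     and 0 <= i + di < rows and 0 <= j + dj < cols
                     and is_ground[i + di, j + dj]]
            if neigh and abs(height[i, j] - np.mean(neigh)) < correction_threshold:
                corrected[i, j] = True
    return corrected
```

In the Figure 4 situation, a cell at the same height as its ground neighbour is relabelled as ground, while the genuinely elevated cell is not.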
Thus, the operations carried out at steps s12 and s14, i.e. the forming of clusters of cells that correspond to areas of relatively flat terrain and the identification of the largest cluster of cells, are repeated after performing the function of step s18 to accommodate the changes made.
The correction steps s18 and s20 allow a larger portion of the ground 6 of the terrain area 4 to be reconstructed. This is because a reconstruction of the ground 6 obtained without this correction comprises a number of "holes" that are not identified as either the ground 6 or an obstacle 8, 10. The performance of the correction steps s18 and s20 advantageously tends to remove these holes. This may, for example, allow a path planner to find paths going through areas of the map previously marked as containing obstacles.
Thus, the ground extraction process of step s2 is performed. Returning to Figure 2, this ground extraction process is followed by the object segmentation process of step s4.
Figure 5 is a process flowchart showing certain steps of the object segmentation process of step s4 of the segmentation algorithm.
At step s22, a Min-Max Elevation Map of the terrain area 4 is computed. This is a conventional Min-Max Elevation Map. The resolution of a grid underlying the map may be any appropriate value. In this embodiment, the grid of the Min-Max Elevation Map is the same as that of the Mean Elevation Map. This Min-Max Elevation Map of the terrain area 4 is hereinafter referred to as the global map.
In this embodiment, the Min-Max Elevation Map is a grid having a plurality of cells. Each cell has assigned to it a range of height values. The range of height values assigned to a particular cell ranges from the minimum to the maximum height values corresponding to laser sensor returns from that cell.
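The global map construction of step s22 can be sketched as follows; `minmax_map` is an illustrative name, and indexing cells by floor division of the coordinates is an assumption about the grid layout.

```python
def minmax_map(points, cell_size):
    """Step s22 sketch: each cell stores the (min, max) height range of
    the laser returns (x, y, z) falling in it."""
    ranges = {}
    for x, y, z in points:
        cell = (int(x // cell_size), int(y // cell_size))
        lo, hi = ranges.get(cell, (z, z))
        ranges[cell] = (min(lo, z), max(hi, z))
    return ranges
```

A cell that receives returns at heights 1.0 and 3.0 ends up with the range (1.0, 3.0); a cell with a single return has a degenerate range.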
Figure 6 is a schematic illustration of the first cell 12, the second cell 14, and the third cell 16 of the grid and the range of height values assigned to each of these cells in the Min-Max Elevation Map.

In this embodiment, the first cell 12 is assigned a range of height values, hereinafter referred to as the "first range 26". The first range 26 has a minimum, indicated in Figure 6 by the reference numeral 260, and a maximum, indicated in Figure 6 by the reference numeral 262.

Also, the second cell 14 is assigned a range of height values, hereinafter referred to as the "second range 28". The second range 28 has a minimum, indicated in Figure 6 by the reference numeral 280, and a maximum, indicated in Figure 6 by the reference numeral 282.

Also, the third cell 16 is assigned a range of height values, hereinafter referred to as the "third range 30". The third range 30 has a minimum, indicated in Figure 6 by the reference numeral 300, and a maximum, indicated in Figure 6 by the reference numeral 302.

Thus, the first, second, and third cells 12, 14, 16 each have a volume assigned to them that represents the range of the heights corresponding to the laser returns from that cell.

At step s24, adjacent cells corresponding to an object 8, 10, i.e. the sets of cells not identified as corresponding to the ground 6 at step s16 of the ground extraction process, are connected together to form clusters of object cells.

At step s26, for each identified object cluster a second Min-Max Elevation Map is built from the laser returns contained in that cluster. These second Min-Max Elevation Maps are hereinafter referred to as "local maps". The local maps have a higher resolution than the global map generated at step s22. For example, the cell size in the local maps is 0.2m by 0.2m, whereas the cell size in the global map is 0.4m by 0.4m.
At step s28, for each local map, the range of height values of each cell in the local map is divided into segments, or voxels. Each voxel for a cell corresponds to a sub-range of the range of height values. In this embodiment, the height of each voxel is 0.2m. However, in other embodiments a different voxel height is used. Each voxel contains the laser returns from that cell whose height values fall within the corresponding sub-range. Voxels that do not contain any laser returns are disregarded. Also, voxels of a particular cell are merged with other voxels of that cell if they are in contact with those other voxels.

Figure 7 is a schematic illustration of the first cell 12, the second cell 14, and the third cell 16 of the Mean Elevation Map grid, after performing step s28.

In this embodiment, the third cell 16 was identified as corresponding to an object, i.e. not corresponding to the ground. Thus, a higher resolution grid is defined over the third cell 16, and the range of values of laser returns in each of the cells of the higher resolution grid is divided into voxels, as shown in Figure 7. In this embodiment, each of the cells of the higher resolution grid of the third cell 16 contains the same data. Also, only the voxels corresponding to the higher height values in the third range 30 and the lower height values in the third range 30 contain any laser scanner returns. Voxels in the middle of the third range 30 do not contain any laser scanner returns. Thus, in this embodiment, each of the cells of the higher resolution grid of the third cell 16 contains two voxels, one containing laser scanner returns corresponding to relatively lower height values, and the other containing laser scanner returns corresponding to relatively higher height values. The voxels corresponding to lower height values are hereinafter referred to as the "lower voxels" and are indicated in Figure 7 by the reference numeral 36.
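The voxel construction of step s28 (fixed-height bands, empty voxels dropped, contacting voxels merged) can be sketched as follows. Binning relative to the lowest return and the list-of-lists output are assumptions made for illustration; the patent does not specify a data layout.

```python
def voxelise(return_heights, voxel_height=0.2):
    """Divide the laser-return heights of one cell into fixed-height
    voxels (0.2 m in the described embodiment), drop empty voxels, and
    merge voxels that are in contact, i.e. occupy adjacent height
    bands.  Returns a list of merged voxels, each a sorted list of the
    return heights it contains."""
    if not return_heights:
        return []
    base = min(return_heights)
    # Bin each return into a voxel index counted upwards from the lowest return.
    bins = {}
    for h in return_heights:
        bins.setdefault(int((h - base) / voxel_height), []).append(h)
    # Merge runs of adjacent (contacting) non-empty voxel indices.
    merged, current, previous = [], [], None
    for idx in sorted(bins):
        if previous is not None and idx - previous > 1:
            merged.append(sorted(current))
            current = []
        current.extend(bins[idx])
        previous = idx
    merged.append(sorted(current))
    return merged
```

For the third cell 16 described above, returns near the top and bottom of the range with an empty gap in between would yield two merged voxels, corresponding to the lower and upper voxels of Figure 7.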
The voxels corresponding to higher height values are hereinafter referred to as the "upper voxels" and are indicated in Figure 7 by the reference numeral 38.

At step s30, the voxels corresponding to the ground 6 are identified. The identification of these voxels is implemented as follows. For a given cell, a number of the closest cells corresponding to the ground 6 in the grid are identified. If the absolute value of the difference between the mean height value of the lowest voxel in the given cell and the mean of the heights of the closest cells is less than a voxel-threshold, then that voxel is marked as corresponding to the ground 6. For example, the lowest voxels in the third cell 16 are the lower voxels 36. The second cell 14 may be identified as one of the closest cells to the third cell 16 that correspond to the ground 6. In this embodiment, the mean heights of the second range 28 and the lower voxels 36 are substantially the same, i.e. the difference between these values is below a voxel-threshold value of, for example, 0.2m. Thus, the lower voxels 36 are identified as corresponding to the ground 6.

This process advantageously tends to allow for the reconstruction of the ground 6 under overhanging structures, for example the canopy of the tree 10. This process also advantageously allows the reconstruction of the ground 6 that was generated at step s2, as described above with reference to Figures 2 and 3, to be refined. This is carried out at step s32.

At step s32, the reconstruction of the ground 6 is refined. At this step the fact that a voxel from a local map corresponds to the ground 6 is used to update the Mean Elevation Map generated in the ground extraction process of step s2. In particular, the cell in the Mean Elevation Map which most closely corresponds to the cell in the local map that contains the voxel corresponding to the ground 6 is identified.
The identified cell is then updated by re-computing the mean height in that cell using only the laser returns that fall into the voxel corresponding to the ground 6. Thus, the reconstruction of the ground 6 under overhanging structures is performed. This process advantageously exploits interaction between the Mean Elevation Map of the ground extraction process of step s2 and the Min-Max Elevation Map of the object segmentation process of step s4.

At step s34, contacting voxels are grouped together to form voxel clusters. In this embodiment, voxels identified as belonging to the ground are interpreted as separators between clusters.

At step s36, noisy laser scanner returns are identified. In this embodiment, voxels which contain noisy returns are assumed to satisfy the following conditions. Firstly, the voxel belongs to a cluster which is not in contact with a cell or voxel corresponding to the ground 6. Secondly, the size of the cluster (in each of the x-, y-, and z-directions) that the voxel belongs to is smaller than a predetermined noise-threshold. In this embodiment, the noise threshold is 0.1m.

At step s38, the identified noisy returns are removed or withdrawn from the map.

This completes the segmentation algorithm performed by the processor 3.

The reconstruction of the terrain area 4 produced by performing the segmentation algorithm advantageously reconstructs portions of the ground that are under overhanging structures, for example the canopy of the tree 10. This is achieved by the steps s28 to s30 as described above.

A further advantage of the segmentation algorithm is that fine details tend to be conserved. For example, the frames of windows of the building 8 are conserved by the segmentation algorithm.

The segmentation algorithm advantageously tends to benefit from the advantages of the Mean Elevation Map approach.
In particular, the segmentation algorithm tends to be able to generate smooth surfaces by filtering out noisy returns.

Also, the segmentation algorithm advantageously tends to benefit from the advantages of the Min-Max Elevation Map approach. In particular, the segmentation algorithm does not make an approximation of the height corresponding to the laser scanner return when separating the objects above the ground. Also, the local maps have a higher resolution than the global map, which tends to allow efficient reasoning in the ground extraction process at a lower resolution, yet provides a fine resolution object model.

A further advantage provided by the above described segmentation algorithm is that it tends to be able to achieve the following tasks. Firstly, the explicit extraction of the surface of the ground 6 is performed, as opposed to extracting 3-dimensional surfaces without explicitly specifying which of those surfaces correspond to the ground 6. Secondly, overhanging structures, such as the canopy of the tree 10, are represented. Thirdly, full 3-dimensional segmentation of the objects 8, 10 is performed. Conventional algorithms do not jointly perform all of these tasks.

A further advantage of the segmentation algorithm is that errors that occur when generating 3-dimensional surfaces corresponding to the ground 6 tend to be minimised. This is due to the ability of the ground-object approach implemented by the segmentation algorithm to separate the objects above the ground.

A further advantage is that, by separately classifying terrain features, the terrain model produced by performing the segmentation algorithm tends to reduce the complexity of, for example, path planning operations. Also, high-resolution terrain navigation and obstacle avoidance, particularly of obstacles with overhangs, is provided.
Moreover, the segmentation algorithm tends to allow for planning operations to be performed efficiently in a reduced, i.e. 2-dimensional, workspace. Also, the provided segmentation algorithm allows a path planner to take advantage of the segmented ground model. For example, clearance around obstacles with complex geometry can be determined. This allows for better navigation through regions with overhanging features.

In the above embodiments, the average value of the measured plurality of values of cells is used to determine clusters, to further process those clusters, and in various other processes. However, this need not be the case, and instead, in other embodiments, other functions may be used instead of the average value, for example an average value of parameter values that remain after certain extreme values have been filtered out, or for example statistical measures other than an average as such.

In the above embodiments, the measured parameter is the height of the terrain and/or objects above the ground. However, this need not be the case, and in other embodiments any other suitable parameter may be used instead, for example colour/texture properties, optical density, reflectivity, and so on.

Apparatus, including the processor 3, for implementing the above arrangement, and performing the method steps described above, may be provided by configuring or adapting any suitable apparatus, for example one or more computers or other processing apparatus or processors, and/or providing additional modules. The apparatus may comprise a computer, a network of computers, or one or more processors, for implementing instructions and using data, including instructions and data in the form of a computer program or plurality of computer programs stored in or on a machine readable storage medium such as computer memory, a computer disk, ROM, PROM etc., or any combination of these or other storage media.
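One example of the alternative mentioned above, an average of the parameter values that remain after extreme values are filtered out, is a trimmed mean. The trim fraction below is an illustrative choice, not a value from the disclosure.

```python
def trimmed_mean(values, trim_fraction=0.1):
    """Average the values that remain after discarding the most extreme
    `trim_fraction` of values at each end of the sorted list."""
    ordered = sorted(values)
    k = int(len(ordered) * trim_fraction)
    kept = ordered[k:len(ordered) - k] if k else ordered
    return sum(kept) / len(kept)
```

Such a statistic makes the per-cell height estimate less sensitive to a few spurious laser returns than the plain average.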
In the above embodiments, the 3-dimensional point cloud data for the terrain area was provided by a Riegl laser scanner. However, in other embodiments the laser data is provided by a different means, for example by SICK and Velodyne sensors. Moreover, in other embodiments the data on the terrain area is not laser scanner data and is instead a different appropriate type of data, for example data generated by an infrared camera.

In the above embodiments, the terrain area is outdoors and comprises a building and a tree. However, in other embodiments the terrain area is a different appropriate area comprising any number of terrain features. In particular, the terrain features are not limited to trees and buildings.

In the above embodiments, the segmentation algorithm is performed by performing each of the above described method steps in the above provided order. However, in other embodiments certain method steps may be omitted. For example, steps s36 and s38 of the segmentation algorithm may be omitted; however, the resulting terrain model would tend to be less accurate than if these steps were included.

In the above embodiments, the segmentation algorithm does not take into account occluded, or partially hidden, objects. However, in other embodiments provision is made for partially hidden objects, as will now be described in more detail with reference to Figure 8.

Figure 8 is a schematic illustration of the laser scanner 2 scanning an object that is hidden behind a further object. The object being scanned by the laser scanner is hereinafter referred to as the "hidden object 40", and the object partially hiding, or occluding, the hidden object 40 is hereinafter referred to as the "non-hidden object 42". In this embodiment, the hidden object 40 can only be partially imaged by the laser scanner 2.
Thus, a height of the hidden object observed by the laser scanner, hereinafter referred to as the "observed height 44", does not correspond to the actual object height 46.

Accurate estimation of the ground height ideally considers occlusions such as these. Thus, an estimation of the ground height is preferably based on non-occluded cells. A cell can be assessed as non-occluded using a ray-tracing process. In a ray-tracing process a set of cells, or a trace, is computed to best approximate a straight line joining two given cells. If any of the cells in the trace do not correspond to the ground, the end cell of the trace is occluded. Using a ray-tracing process tends to allow for occlusions to be taken into account and reliable estimates of the ground height to be computed.

In order to avoid resorting to an explicit ray-tracing process and to decrease the amount of computation, the following approach may be adopted. As described above, the ground is extracted by applying a threshold on the computed surface gradients. Thus, there is a "smoothness constraint" between neighbour cells identified as belonging to the ground. The terminology "smoothness constraint" is used to mean that the variation of height between two neighbour ground cells is limited. Thus, the closest ground cell to an obstacle will provide a reliable local estimate of the ground height, i.e. a given ground cell is connected (via "smoothness constraints") to the rest of the ground cells, which implies that this cell not only provides a local estimate of the ground height but in fact provides a globally constrained local estimate. This approach advantageously tends to avoid the use of ray-tracing while providing reliable estimates of the ground height. Standard ray-tracing techniques do not use this reasoning simply because extracting the ground is not always possible.

An embodiment of an iterative segmentation algorithm will now be described.
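Before turning to the iterative algorithm, the explicit ray-tracing test described above can be sketched as follows. The patent does not name a specific line-approximation method; Bresenham's algorithm is used here as one conventional choice, and the function names are illustrative.

```python
def trace(a, b):
    """Cells approximating the straight line from cell a to cell b,
    computed with Bresenham's line algorithm on the grid."""
    (x0, y0), (x1, y1) = a, b
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    cells = []
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return cells

def is_occluded(sensor_cell, target_cell, is_ground):
    """The end cell of the trace is occluded if any intermediate cell
    on the trace does not correspond to the ground."""
    for cell in trace(sensor_cell, target_cell)[1:-1]:
        if not is_ground.get(cell, False):
            return True
    return False
```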
In this embodiment, data is incorporated and the terrain model is updated as the data is collected. This advantageously tends to allow a model of the terrain to be generated in real-time, as the data is collected.

Figure 9 is a process flow chart showing certain steps of an iterative segmentation algorithm.

At step s40, the laser scanner 2 generates dense 3D point cloud data for the terrain area 4 in the same way as in the above described embodiments.

At step s42, the generated data is stored in a database. Newly generated data is added to the database as it is generated. Also, in this embodiment data may be deleted from the database, for example if it is replaced by newly generated data or if it is deemed to be unnecessary or irrelevant at some point in time. In this embodiment, a policy or a filter that encodes the definition of 'irrelevant' is utilised. For example, a filter may be used to remove data older than a certain number of seconds. Another filter may be used to filter data such that a maximum data density in a region of space is maintained, e.g. if there are more than a certain number of data points per cell, the oldest data points are deleted in order to maintain a maximum density. An example policy is to discard data below a certain accuracy or quality.

At step s44, a Mean Elevation Map of the terrain area 4 is computed as described above at step s6 of Figure 3. In this embodiment, the Mean Elevation Map is a grid having a plurality of cells. Each cell has assigned to it an average height determined from the height values stored in the database corresponding to that cell. In this embodiment, the height values stored in the database are iteratively updated as new data is generated and irrelevant data is deleted. Thus, the average heights that form the Mean Elevation Map are iteratively updated.
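The relevance filters described at step s42 can be sketched as follows. The point layout (a dict with 'cell', 'height' and 'timestamp' keys), the parameter values, and the function name are assumptions made for illustration.

```python
def apply_filters(points, now, max_age_s=10.0, max_per_cell=100):
    """Illustrative relevance filters for the streaming database:
    drop points older than `max_age_s` seconds, then cap the number of
    points per cell, discarding the oldest points first."""
    fresh = [p for p in points if now - p['timestamp'] <= max_age_s]
    by_cell = {}
    for p in sorted(fresh, key=lambda p: p['timestamp']):
        by_cell.setdefault(p['cell'], []).append(p)
    kept = []
    for cell_points in by_cell.values():
        kept.extend(cell_points[-max_per_cell:])  # newest points survive
    return kept
```

In a streaming setting such filters would run as data arrives, so the database never holds stale or over-dense data.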
At step s46, a surface gradient value is computed for each cell in the grid as described above for step s8 of Figure 3. As described in more detail above, the gradient values are determined using the average heights in the Mean Elevation Map. Thus, since the average heights of the Mean Elevation Map are iteratively updated as newly generated data is added to the database and irrelevant data is deleted from the database, the determined gradient values are iteratively updated.

In other embodiments, different appropriate metrics may be determined in addition to or instead of the surface gradient values. These different metric values may then be used in the determination of the ground model. For example, a value of the residual from a horizontal plane of a cell, or a plane fit metric for a cell, may be used. Such metrics may be used to determine a value relating to how 'flat' a plane calculated using some or all of the data points in a cell is, e.g. the deviation of the plane from a horizontal plane. A cell may be identified as corresponding to the ground if it is suitably flat. Otherwise, the cell may be identified as corresponding to an object. These metrics may be calculated incrementally, or rapidly recalculated iteratively, as new data is provided.

At step s48, a model of the ground 6 is determined.
In this embodiment the model of the ground 6 is determined by performing the following: identifying cells corresponding to relatively flat surfaces (as described above with reference to step s10 of Figure 3); forming clusters of cells that correspond to relatively flat areas (as described above with reference to step s12 of Figure 3); identifying the largest cluster of cells that correspond to a relatively flat area (as described above with reference to step s14 of Figure 3); using the identified largest cluster as a reference cluster with respect to which it can be determined whether the other smaller clusters correspond to the ground 6 of the terrain area 4 (as described above with reference to step s16 of Figure 3); correcting errors (as described above with reference to step s18 of Figure 3); and repeating certain of the steps to generate a ground model (as described above with reference to step s20 of Figure 3). The model of the ground 6 is iteratively updated as the gradient values (and/or any other determined metrics) are iteratively updated.

Thus, the model of the ground 6 is iteratively updated as data generated by the laser scanner 2 is added to the database, and/or as data is deleted from the database. In this embodiment, the model of the ground 6 is updated at the rate at which data is input to or removed from the database, for example continuously.
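The residual-from-a-horizontal-plane flatness metric mentioned at step s46 can be sketched as follows. The specific threshold value is an illustrative assumption, not a figure from the disclosure.

```python
from math import sqrt

def horizontal_residual(heights):
    """Residual of a cell's height values from the best-fitting
    horizontal plane, i.e. the RMS deviation from the mean height."""
    mean = sum(heights) / len(heights)
    return sqrt(sum((h - mean) ** 2 for h in heights) / len(heights))

def is_flat(heights, flatness_threshold=0.05):
    """A cell may be treated as ground when its residual is below a
    threshold (0.05 m here is purely illustrative)."""
    return horizontal_residual(heights) < flatness_threshold
```

Because the residual depends only on the running count, sum, and sum of squares of the heights, it can be maintained incrementally as points are added to or deleted from a cell, matching the iterative updating described above.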
At step s50, the ground 6 and objects 8, 10 are segmented by performing the following steps, which are described in more detail above at steps s22 to s32 of Figure 5: generating a Min-Max Elevation Map of the terrain area 4 using the data contained in the database (as described above with reference to step s22 of Figure 5); forming clusters of cells corresponding to objects (as described above with reference to step s24 of Figure 5); forming local Min-Max Elevation Maps for the object clusters (as described above with reference to step s26 of Figure 5); dividing each cell in each local map into voxels (as described above with reference to step s28 of Figure 5); identifying the voxels corresponding to the ground 6 (as described above with reference to step s30 of Figure 5); and refining the reconstruction of the ground 6 (as described above with reference to step s32 of Figure 5).

In this embodiment, the Min-Max Elevation Maps are grids having a plurality of cells. Each cell has assigned to it a range of heights determined from the height values stored in the database corresponding to that cell. In this embodiment, the height values stored in the database are iteratively updated as new data is generated and irrelevant data is deleted. Also, the model of the ground 6 is iteratively updated as data generated by the laser scanner 2 is added to the database, and/or as data is deleted from the database, as described above. Thus, the segmentation of the ground and the objects, which depends on the local Min-Max Elevation Maps and the ground model, is iteratively updated. The segmentation of the ground and the objects is iteratively updated at a rate that depends upon the processing power of the processor 3 which performs the segmentation.

In this embodiment, a Min-Max Elevation Map is generated for each iteration of the method.
However, in other embodiments a Min-Max map is not generated as such for each iteration of the method. For example, in other embodiments the structure of the database is such that it corresponds to that of a Min-Max Elevation Map. In such cases, the database structure contains at least the information of a min-max map. Also, the direct calculation of ground cells removes the need to explicitly determine a Min-Max Elevation Map (and perform steps s22 to s32) at each iteration of the method. In particular, the initial process carried out when the data arrives means that it has already been accurately determined which voxels correspond to the ground and which voxels correspond to objects, including any voxels corresponding to the ground under object overhangs. Thus, there is no requirement for the explicit determination of the Min-Max Elevation Map (i.e. steps s22 to s26), nor for the overhang correction process (i.e. steps s30 and s32). Such steps may be "combined" in the form of a relatively efficient algorithm, as described in more detail below with reference to Figure 11.

At step s52, models of the objects 8, 10 are determined. In this embodiment the models of the objects 8, 10 are determined by performing the following: forming voxel clusters (as described above with reference to step s34 of Figure 5); and identifying and removing noisy laser scanner returns (as described above with reference to steps s36 and s38 of Figure 5). The models of the objects 8, 10 are iteratively updated as the data points in the voxels are iteratively updated. The object models are iteratively updated at a rate that depends upon the processing power of the processor 3 which performs the segmentation.

This completes the iterative segmentation algorithm.

The iterative segmentation algorithm advantageously allows streaming data from the laser scanner to be incrementally processed.
This tends to provide a terrain model during data collection which is updated and refined as more data is collected. The iterative segmentation algorithm tends to be advantageous over non-iterative segmentation algorithms, in which all of the data is collected before a single iteration of a process of forming a terrain model is performed. The iterative segmentation algorithm tends to allow for real-time generation and updating of a terrain model.

Figure 10 is a schematic block diagram showing certain details of a further embodiment implementing the process of Figure 9. Figure 10 represents the process in terms of the following functional modules: a database 500, an elevation map 502, a ground model 504, a ground/object segmenter 506, and an object model 508.

Streaming input data 510, i.e. 3D point cloud data generated by the laser scanner 2, is input into the database 500.

The elevation map 502, which in this embodiment is a mean elevation map, is updated based on data that has been added to the database 500 (hereinafter referred to as "added data 512") and/or data that has been removed from the database 500 (hereinafter referred to as "deleted data 514").

The ground model 504, which is determined using gradient values (and/or any other determined metrics) computed from the elevation map 502 as described in more detail above at step s8 of Figure 3, is updated based on the updated elevation map 502. In particular, the ground model 504 is updated using gradient values that have been changed as a result of the streaming input data 510 (hereinafter referred to as "changed gradients 516") and/or gradient values that have been deleted (hereinafter referred to as "deleted gradients 518"). The formation of the ground model 504 comprises forming cell clusters, removing certain clusters having a height above a threshold, and correcting overhangs/ground artefacts.
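The per-cell updating of the elevation map 502 from added data 512 and deleted data 514 can be sketched with a running mean that supports both insertion and removal; the class name and interface are illustrative assumptions.

```python
class RunningMean:
    """Incremental mean height for one elevation-map cell, supporting
    both the addition of newly streamed height values and the removal
    of values deleted from the database."""
    def __init__(self):
        self.count = 0
        self.total = 0.0

    def add(self, h):
        self.count += 1
        self.total += h

    def remove(self, h):
        self.count -= 1
        self.total -= h

    def mean(self):
        return self.total / self.count if self.count else None
```

Maintaining only a count and a sum per cell lets the map be updated at the streaming rate, without revisiting stored points.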
In this embodiment, when the ground model 504 has been determined using the latest updated changed gradients 516 and deleted gradients 518, an indication 520 that the ground model 504 has been determined is generated so that the ground/object segmenter 506 may perform segmentation of the ground and objects. Also, in this embodiment the ground model 504 uses data stored in the database 500. This is indicated in Figure 10 by the dotted arrow indicated by the reference numeral 501.

In this embodiment, the database 500 and the elevation map 502 are each updated at the rate of the streaming of the data, i.e. as data is streamed. The rate of completely updating the ground model 504, i.e. the rate and/or frequency with which an indication 520 is generated, depends on the power of the processor 3, i.e. central processing unit power.

The ground/object segmenter 506 separates the ground model 504 from the models of the objects. The ground/object segmenter 506 performs this function each time an indication 520 is generated. The segmentation of the ground and the objects is updated using data stored in the database 500 (this is indicated in Figure 10 by the dotted arrow indicated by the reference numeral 503) and the ground model 504 (this is indicated in Figure 10 by the dotted arrow indicated by the reference numeral 522). The object model 508 is updated using segmented object voxels 524 that are updated by the ground/object segmenter 506 using data stored in the database 500 and the ground model 504. Generation of the segmented voxels is described in more detail above with reference to step s28 of Figure 5.

In this embodiment, the rate of the updating of the ground model 504, the rate at which the ground and objects are segmented, and the rate at which the object model is updated, depend on the power of the processor 3, i.e. central processing unit power.
In this embodiment, the various updated items used by and/or determined by the functional modules, i.e. the streaming input data 510 (indicated by "A1" in Figure 10), the added data 512 and the deleted data 514 (indicated by "A2" in Figure 10), the changed gradients 516 and the deleted gradients 518 (indicated by "A3" in Figure 10), the forming of clusters, removal of certain clusters, correcting of overhangs and generation of an indication 520 (indicated by "A4" in Figure 10), the indication 520 and the access to the ground model (indicated by "A5" in Figure 10), and the updated segmented object voxels (indicated by "A6" in Figure 10), are related to each other as follows. The A1 updated items are used to determine the A2 updated items. The A2 updated items are used to determine the A3 updated items. The A3 updated items are used to determine the A4 updated items. The A4 updated items are used to determine the A5 updated items. The A5 updated items are used to determine the A6 updated items.

In the above embodiment, the functional modules (i.e. the database 500, the elevation map 502, the ground model 504, the ground/object segmenter 506, and the object model 508) are updated using a distinct section of code for each functional module. However, in other embodiments the functional modules may be implemented in a different appropriate way. In other embodiments, two or more functional modules may be implemented by a single block of code. For example, in a further embodiment the process of updating the Min-Max Elevation Map with new data points (measured height values), updating the ground model using updated gradient values, and refining the reconstruction of the ground under overhanging objects, are performed by a single iterative process, as described below with reference to Figure 11.

Figure 11 is a process flow chart showing certain steps of a ground model updating process.
The process of updating the ground model will be described for a single new measurement of the height parameter. However, it will be appreciated that the process may be utilised for updating the ground model for any number of new measurements, for example by performing the process iteratively.

At step s54, a new sensor measurement within the terrain area 4 is performed. In this embodiment, this new sensor measurement is a measurement of the height of the terrain.

At step s56, the voxel to which the new measurement corresponds is identified. In other words, the voxel of the terrain area 4 in which the sensor measurement is performed is identified.

At step s58, it is determined whether or not there exists an empty voxel below the voxel identified at step s56. In this embodiment, an empty voxel is defined as a voxel that contains less than a certain number of data points. In other words, a voxel is defined as empty if the number of measurements that have been made in that voxel is below a threshold value. Equivalently, in this embodiment a voxel is defined as non-empty if the number of measurements that have been made in that voxel is equal to or above that threshold value.

If it is determined that there exists an empty voxel below the voxel identified at step s56, the ground model updating process proceeds to step s60. However, if it is determined that there is no empty voxel below the voxel identified at step s56, the ground model updating process proceeds to step s65.

At step s60, the first empty voxel directly below the voxel identified at step s56 is identified.

At step s62, it is determined whether or not there exists a non-empty voxel below the empty voxel identified at step s60. If it is determined that there exists a non-empty voxel below the empty voxel identified at step s60, the ground model updating process proceeds to step s64.
However, if it is determined that there is no non-empty voxel below the empty voxel identified at step s60, the ground model updating process proceeds to step s65.

At step s64, the voxel corresponding to the new measurement is identified as not corresponding to the ground. This is because there exists a non-empty voxel below the voxel in which the new measurement was made, and these voxels are separated by one or more empty voxels. Thus, the voxel identified at step s56 corresponds to an overhanging structure, i.e. an object above the ground 6, and the new sensor measurement is of the object above the ground.

At step s65, the voxel corresponding to the new measurement is identified as corresponding to the ground. This is because there are no non-empty voxels below the voxel in which the new measurement was made that are separated from it by empty voxels. Thus, the voxel in which the new measurement was made corresponds to the ground 6.

In this embodiment, at step s66, the average height of the ground in the cell in which the new sensor measurement was made (i.e. the cell containing the voxel in which the new sensor measurement was made) is updated using the new sensor measurement. Thus step s66 represents one possible use of the new information obtained as a result of step s65. It will be appreciated that in other embodiments the information obtained at step s65 may be used in other ways instead of or in addition to the use made in this embodiment at step s66.

In this embodiment the average height of the ground is updated, i.e. the reconstruction of the ground 6 is refined, in the same way as described above at step s32 of Figure 5.
In particular, the cell in the Mean Elevation Map which most closely corresponds to the cell in the local map that contains the voxel in which the new measurement was made is identified, and this identified cell is then updated by re-computing the mean height in that cell using the new measurement value as well as the values of previous measurements taken in that cell. In other embodiments, a different appropriate method of updating the average height of the ground may be used, such as utilising only measurement values that are measured in the uppermost layer of voxels that correspond to the ground surface. Following the ground model updating process, the gradient values may be recalculated to produce a 'finalised' ground model. This advantageously tends to provide that new measurements made of objects connected to the ground (but not the ground itself) are not used to update the ground model. In the above described ground model updating process, whether the height of the ground model in a particular voxel is updated is assessed based on height measurements made in voxels below it. However, in other embodiments a different criterion for deciding whether the height of the ground model in a particular voxel is updated may be used in addition to or instead of the above described process. For example, a process of determining whether or not a new measurement corresponds to the ground (described below with reference to Figure 12) may be advantageously incorporated into the ground model updating process of Figure 11. Figure 12 is a process flow chart showing certain steps of a process for determining whether a new measurement corresponds to the ground. In this embodiment, the process shown in Figure 12 is performed between steps s56 and s58 of Figure 11, i.e. after performing step s56, but before performing step s58. The remaining steps of Figure 11 (i.e.
steps s58 to s66) are then performed on measurements not identified as not corresponding to the ground surface (i.e. in effect identified as possibly corresponding to the ground). This advantageously tends to improve the efficiency of the ground model updating process of Figure 11. At step s100, the number of non-empty voxels connected to the voxel to which the new measurement corresponds (i.e. the voxel identified at step s56) and in the same column as the voxel to which the new measurement corresponds (i.e. corresponding to the same cell or sub-cell) is determined. In this embodiment, one voxel is referred to as 'connected' to another voxel if there is no empty voxel between the two voxels in question. At step s102, if the number of voxels in the column is less than three, the new measurement is identified as possibly corresponding to the ground. In other words, the new measurement is identified either as possibly corresponding to the ground or as not corresponding to the ground. This completes the process of Figure 12. In this embodiment, when the new measurement is identified as possibly corresponding to the ground, the process of Figure 11 continues on to step s58; whereas if the new measurement is identified as not corresponding to the ground, the process of Figure 11 is terminated, i.e. there is no need to perform steps s58 to s66. Thus, in this embodiment the new measurement is identified as possibly corresponding to the ground if the number of connected non-empty voxels in the same column as the voxel corresponding to the new measurement is less than three. Also, the new measurement is not identified as corresponding to the ground if the number of connected non-empty voxels in the same column as the voxel corresponding to the new measurement is three or more.
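By way of illustration only, the column test of steps s100 and s102 may be sketched as follows; the list-of-counts representation and the function names are assumptions made for this sketch.

```python
# Illustrative sketch of steps s100/s102: count the run of connected
# non-empty voxels containing the measured voxel, then decide whether the
# measurement may correspond to the ground.

def connected_nonempty(column_counts, index, empty_threshold=1):
    """Size of the run of non-empty voxels (no empty voxel in between)
    that contains the voxel at `index`; 0 if that voxel is itself empty."""
    if column_counts[index] < empty_threshold:
        return 0
    total = 1
    i = index - 1                      # extend the run downwards
    while i >= 0 and column_counts[i] >= empty_threshold:
        total += 1
        i -= 1
    i = index + 1                      # extend the run upwards
    while i < len(column_counts) and column_counts[i] >= empty_threshold:
        total += 1
        i += 1
    return total

def possibly_ground(column_counts, index, max_run=3):
    # Step s102: fewer than three connected non-empty voxels in the column.
    return connected_nonempty(column_counts, index) < max_run
```

A run of three or more connected non-empty voxels marks the column as a non-ground object, so only shorter runs pass on to steps s58 to s66.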
A column of three connected non-empty voxels may be labelled as a non-ground object (and thus measurements made in these connected voxels are not used to update the ground model) without the need to identify empty voxels below them. This is because measurements corresponding to the (two-dimensional) ground surface typically lie in a single voxel in a column. However, the ground surface may lie at the boundary between two voxels. Thus, measurements corresponding to the ground surface may lie within two voxels in a column. Having three connected voxels being non-empty tends to require that the surface being measured is at least one voxel thick. This is typically thicker than the two-dimensional ground surface, and thus a column of three or more connected non-empty voxels is assumed to correspond to an object. In this example, measurements made in columns of fewer than three connected non-empty voxels are processed, for example using the ground model updating process described above with reference to Figure 11. In this embodiment, the number of connected non-empty voxels required to be in the column in order for the column to be classified as an object is three. However, in other embodiments a different number of voxels may be required. Thus, a sufficiently thick (e.g. three voxels thick) column of connected non-empty voxels is too thick to be the two-dimensional ground surface, and so must correspond to an object. There is no need to calculate the mean height of the connected voxel column and then the surrounding gradients in this case. The process of determining whether a new measurement corresponds to the ground described above with reference to Figure 12 may then be followed by the remaining steps of the ground model updating process of Figure 11 (i.e. steps s58 to s66), which may be performed on those measurements not identified as corresponding to the ground.
In a further embodiment, the above described method of determining whether a new measurement corresponds to the ground may be performed without performing steps s58 to s66 of Figure 11 (i.e. steps s54 and s56 of Figure 11 may be performed, followed by steps s100 and s102 of Figure 12) and the updating of the ground model may be performed using just the measurements identified as corresponding to the ground in this method. Here, the new measurements not identified as corresponding to the ground may be identified as not corresponding to the ground. The above described ground model updating process (described above with reference to Figure 11 and, optionally, Figure 12) tends to advantageously allow for the efficient updating of the ground model. A further advantage of the above described ground model updating process is that its complexity is the same for each new measurement value, i.e. if the process is used iteratively to update the ground model with a series of new data points, the complexity of the algorithm is the same for each iteration. Also, the ground model updating process advantageously comprises a limited number of operations which are performed on floating point numbers. In the above embodiments, the ground model updating process is used to iteratively update the ground model generated using an above described segmentation process. The ground model is updated using selected new height measurements (i.e. only those measurements that correspond to the ground). However, in other embodiments the ground model updating process may be used to update a ground model that is generated using a different appropriate method. For example, the ground model updating process may be used to update ground models generated using the Mean Elevation Map, the Min-Max Elevation Map, the Multi-Level Elevation Map, the Volumetric Density Map, Ground Modelling via Plane Extraction, and Surface Based Segmentation.
In other words, the ground model updating process for updating the average height value of the ground in a cell or voxel with a new height measurement (which includes identifying the voxel within which the new height measurement lies, and updating the average height of the ground only if the voxel within which the measured parameter value lies coincides with the voxels corresponding to the ground surface, or the voxel within which the measured parameter value lies and the voxels corresponding to the ground surface are not separated by non-empty voxels) may be used to update a ground model generated by any appropriate process. The above described embodiments of a segmentation algorithm produce segmented 3-dimensional point cloud data-sets of the ground and the objects. A classification process is then performed on certain of the data-sets corresponding to objects. The classification process is performed to classify the identified object as a particular object type. An embodiment of a classification process will be described in more detail below with reference to Figure 13. The classification process is referred to as a "feature-less" classification algorithm because in this algorithm the whole of an object is used to match that object to a particular class of object, as opposed to using only certain features of the object to match that object to a particular class of object. This feature-less approach tends to be advantageous over a classification utilising object features. One advantage of the feature-less approach is that any feature extraction processes may be bypassed in order to directly classify object models. Also, the feature-less approach tends to be more easily deployable than a classification technique that uses object features. Robust classification tends to be provided for by a feature-less approach.
The classification process will be described in terms of classifying a single object model (determined using an above described segmentation algorithm) as either one of a plurality of object classes, or as none of the plurality of object classes. In this embodiment, an object class is a relatively broad description of a type of object, for example "a car", "a tree" or "a building". However, in other embodiments an object class may be more specific, for example certain makes or models of a car. Each object class is represented by a template 3-dimensional model. 3-dimensional template models of certain objects are generally available (for example, on the Internet). Figure 13 is a process flow chart showing certain steps of this embodiment of a classification process. At step s70, an alignment process is performed. The alignment process is a technique that geometrically aligns a three-dimensional model of the object to be classified with a three-dimensional template model. In this embodiment, the alignment process is a conventional Iterative Closest Point (ICP) algorithm. There are many appropriate variants of ICP algorithms that may be used to align the model of the object to be classified with the template model. In this embodiment, the models are aligned according to the following variables: the x-displacement, the y-displacement, and the rotation around the z axis. Although the ICP algorithm is performed on two three-dimensional point clouds (the object to be classified and the template), the ICP optimisation is two-dimensional. This is because the above described segmentation algorithm provides an explicit representation of the ground surface. Thus, the position of the ground underneath each segmented object is known. Therefore, the two point clouds to be aligned can be shifted so that the height of the ground underneath them is at the same height (for example zero).
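By way of illustration only, the vertical shift just described may be sketched as follows; the function name and the tuple representation of points are assumptions made for this sketch.

```python
# Sketch of the pre-alignment step: translate a point cloud vertically so the
# ground beneath it lies at height zero, leaving only x, y and rotation about
# z to be optimised by the (two-dimensional) ICP alignment.

def shift_to_ground(points, ground_height):
    """Return the (x, y, z) points with the ground moved to z = 0."""
    return [(x, y, z - ground_height) for (x, y, z) in points]
```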
This means the alignment of two three-dimensional shapes located above a common ground surface effectively corresponds to a two-dimensional alignment. This advantageously tends to allow for the encoding of contextual constraints. Moreover, the computations for two-dimensional alignment tend to be easier to perform than those for three-dimensional alignment. At step s72, a value of the error between the model of the object to be classified and the template model is determined. The value of this error metric is indicative of the similarity between the model of the object to be classified and the template model. In this embodiment, the error metric is determined using the following formula:

$$E_i = \frac{1}{N_{object}} \sum_{k=1}^{N_{object}} \left\| p_k^{object} - p_{closest}^{i} \right\| + \frac{1}{N_i} \sum_{k=1}^{N_i} \left\| p_k^{i} - p_{closest}^{object} \right\|$$

where: $E_i$ is the value of the error between the model of the object to be classified and the ith template model; $N_{object}$ is the number of points in the three-dimensional point cloud model of the object being classified; $N_i$ is the number of points in the three-dimensional ith template model; $p_k^{object}$ is the kth point of the three-dimensional point cloud model of the object being classified; $p_{closest}^{i}$ is the point in the ith template model closest to the kth point of the three-dimensional point cloud model of the object being classified; $p_k^{i}$ is the kth point of the three-dimensional ith template model; and $p_{closest}^{object}$ is the point in the three-dimensional point cloud model of the object being classified closest to the kth point of the three-dimensional ith template model. In this embodiment, a point is referred to as the "closest" to another point if, of all the points in question, it has the smallest Euclidean distance in three dimensions between it and the other point. However, in other embodiments "closeness" may be a function involving other parameters.
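By way of illustration only, a direct reading of this error metric may be sketched in code. The per-cloud normalisation shown here is an assumption, made so that the error comes out in the measurement units (e.g. metres); a brute-force closest-point search is used for clarity; and `template_error` and `classify` are illustrative names only.

```python
import math

def closest_distance(p, cloud):
    """Smallest 3D Euclidean distance from point p to any point in cloud."""
    return min(math.dist(p, q) for q in cloud)

def template_error(object_pts, template_pts):
    """Symmetric closest-point error E_i between two (aligned) point clouds."""
    term_obj = sum(closest_distance(p, template_pts) for p in object_pts)
    term_tpl = sum(closest_distance(q, object_pts) for q in template_pts)
    return term_obj / len(object_pts) + term_tpl / len(template_pts)

def classify(object_pts, templates):
    """Score each template and return the (name, error) pair with the
    minimum error, as in the template-selection step."""
    return min(
        ((name, template_error(object_pts, pts)) for name, pts in templates.items()),
        key=lambda pair: pair[1],
    )
```

The two terms make the metric symmetric, so neither a larger nor a smaller template cloud dominates the error.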
For example, "closeness" may be determined by a function that incorporates other parameters instead of or in addition to the Euclidean distance between points, e.g. colour or reflectivity, or any other parameters associated in some way with the data. This error metric advantageously tends to provide an accurate estimate of the error between a template model and an object model regardless of whether the point cloud of the template is larger (i.e. contains more points) or smaller (i.e. contains fewer points) than the point cloud of the object being classified. This advantage is provided at least in part by the two terms in the above equation. Moreover, the error value may be in the units in which the data was measured, e.g. metres. Thus, the error value tends to be easily interpretable. For example, an error value of 2m may suggest that the objects being matched are not of the same shape, while an error of 0.4m may suggest a good fit. At step s74, steps s70 and s72 as described above are repeated for each template. Thus, a value of the error between the model of the object to be classified and each of the template models is determined. In other words, values $E_i$ for $i = 1, \ldots, M$ are determined for each of the M templates. At step s76, the template model corresponding to the minimum determined error value is identified. The error between the model of the object being classified and the identified template model is the smallest error of all the templates for which the above described process steps have been performed. Thus, the object most closely corresponds to the object-class represented by the identified template. At step s78, the identified template is either accepted (i.e. the object being classified is classified as the object class of the identified template) or rejected (i.e.
the object being classified is not classified as any of the object classes corresponding to the templates) if certain criteria are not satisfied. In this embodiment, this acceptance/rejection process is implemented by evaluating an amount of overlap between the two models being compared (i.e. the model of the object being classified and the model of the identified template, which are being compared after alignment). In this embodiment, the acceptance/rejection process is performed as follows. Considering the unidentified object model to be classified and the template matched to the unidentified model (determined as described above), the larger of these two models is identified. In this embodiment, the largest model is that with the largest number of data points. The determination is based on the sum of the eigenvalues of the point cloud data of each of the models. For example, the largest point cloud may be that of the identified template. The following operations are then carried out for each of the points in the identified largest point cloud, for example for each of the points $p_k$, $k = 1, \ldots, N$. A plane is fitted to the points belonging to the same voxel as a point in the identified largest point cloud. For example, a plane is fitted to the points belonging to the same voxel as a point $p_k$. This approximates a plane tangent to the 3D surface at that point. Two additional planes orthogonal to the tangent plane, containing the point in the identified largest point cloud (e.g. the point $p_k$) and orthogonal to each other are determined. If the four (three-dimensional) quadrants defined by the two orthogonal planes each contain at least one data point from the model that is not the largest model (e.g. the model of the object being classified), the point in the identified largest point cloud (e.g.
the point $p_k$) is identified as belonging to the zone of overlap between the surface of the template model and the surface of the model of the object being classified. If more than a certain proportion of the points in the identified largest model are identified as belonging to the zone of overlap, the template model and the model of the object being classified are assumed to match. Otherwise, the template model and the model of the object being classified are assumed not to match, i.e. the template is rejected. In this embodiment, if more than a half of the points in the identified largest model are identified as belonging to the zone of overlap, the template model and the model of the object being classified are assumed to match. Thus an embodiment of a classification process is provided. An advantage of the above described classification process is that an error metric is computed using only a relatively simple two-step (alignment and comparison) process. No learning of a metric from various datasets is required. Moreover, the classification process can be used to identify any class of objects using a relatively small number of template models belonging to that class. Furthermore, the classification process can be performed on three-dimensional laser data without requiring any of the templates to be made of the laser data. In the above embodiments, 3-dimensional template models of certain objects that are generally available (for example, on the Internet) are used. However, in other embodiments other templates may be used. For example, in other embodiments some of the object models generated by implementing an above described segmentation algorithm may be used as template models. This is advantageous in scenarios for which template models are not generally available, or in which the terrain area comprises many similar (known) features.
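By way of illustration only, the accept/reject decision may be sketched with a deliberately simplified overlap test: instead of the tangent-plane and quadrant construction described above, this sketch counts a point of the larger cloud as overlapped when the other cloud has a point within a fixed radius. The radius, the names, and this substitution are all assumptions; only the "more than half overlapped" acceptance rule is taken from the description.

```python
import math

def accept_match(larger_cloud, other_cloud, radius=0.5):
    """Accept the classification when more than half of the points of the
    larger cloud lie in the (simplified) zone of overlap with the other
    cloud; otherwise reject the identified template."""
    overlapped = sum(
        1
        for p in larger_cloud
        if any(math.dist(p, q) <= radius for q in other_cloud)
    )
    return overlapped > len(larger_cloud) / 2
```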
Also, in other embodiments generated object models used for templates may first be sampled via ray-tracing in order to simulate the fact that the occluded sides of an object are not observed with a laser. This advantageously tends to allow for the generation of additional template surfaces, which tends to provide more accurate matching when classifying 3D laser data. In the above embodiments, an ICP process is used to align a template model with a model of an object to be classified. However, in other embodiments a different appropriate alignment process is used. In the above embodiments, the acceptance/rejection process described above at step s78 is used to determine whether to accept or reject a particular classification. However, in other embodiments a different appropriate acceptance/rejection process is used. For example, in other embodiments a classification of an object as a particular template is rejected if the error value corresponding to that template is above a particular threshold value. In the above embodiments, step s78, i.e. the decision to accept or reject the identified template, is performed to accept or reject the template identified by performing steps s70 to s76 as described above. However, in other embodiments the acceptance/rejection process of step s78 may be performed in conjunction with any other appropriate classification process, i.e. using step s78, it may be decided whether to accept or reject a classification that is determined using any appropriate classification technique. Thus it will be appreciated that the acceptance/rejection process of step s78 in itself provides an embodiment of the present invention. In the above embodiments, the feature-less classification process described above with reference to Figure 13 was used to classify an object model produced by an above described segmentation algorithm.
However, in other embodiments the feature-less classification process may be used to classify a model, e.g. a three-dimensional object model in the form of point cloud data, generated using any other appropriate method of generating an object model. Thus it will be appreciated that the feature-less classification processes described with reference to Figure 13 in themselves provide embodiments of the present invention. In the claims which follow and in the preceding description of the invention, except where the context requires otherwise due to express language or necessary implication, the word "comprise" or variations such as "comprises" or "comprising" is used in an inclusive sense, i.e. to specify the presence of the stated features but not to preclude the presence or addition of further features in various embodiments of the invention. It is to be understood that, if any prior art publication is referred to herein, such reference does not constitute an admission that the publication forms a part of the common general knowledge in the art, in Australia or any other country. 2165961_1 (GHMatters)
Claims (14)
1. A classification process for classifying an extracted object (8, 10) or terrain feature (6), the classification process comprising: measuring values of a parameter in a plurality of cells; identifying cells corresponding to a particular object (8,10) or terrain feature (6) using the measured values of the parameter; determining parameter values at a set of points for each of a plurality of classes of objects; for each of the plurality of classes of objects, aligning the measured values of the parameter corresponding to a particular object (8,10) or terrain feature (6) with the determined parameter values corresponding to the class of objects; for each of the plurality of classes of objects, determining a value of an error between the aligned measured values of the parameter corresponding to a particular object (8,10) or terrain feature (6) and the determined parameter values corresponding to the class of objects; and classifying the particular object (8,10) or terrain feature (6) as an object in the class of objects corresponding to a minimum of the determined error values.
2. A process according to claim 1, wherein the step of aligning the measured values of the parameter corresponding to a particular object (8,10) or terrain feature (6) with the determined parameter values corresponding to the class of objects comprises performing an Iterative Closest Point algorithm on the measured values of the parameter and the determined parameter values.
3. A process according to claim 1 or 2 further comprising: identifying which of the set of points corresponding to the particular object (8,10) or terrain feature (6) or the set of points corresponding to the class of objects that the particular object (8,10) or terrain feature (6) is classified as comprises the largest number of points for which a value of the parameter has been determined; for each of the points in the identified largest set, performing the following: fitting a plane to the points in the same cell as that point to produce a tangent plane; determining two planes, the two planes being orthogonal to the tangent plane, orthogonal to each other, and containing that point; identifying the point as a fit only if each of the four quadrants defined by the two orthogonal planes contains a data point from the set not identified as the largest; and rejecting the classification of the particular object (8,10) or terrain feature (6) as an object in the class of objects corresponding to a minimum of the determined error values if a certain proportion of points in the identified largest set are not identified as a fit.
4. A process according to claim 3, wherein the certain proportion of points is one half.
5. A process according to any of claims 1 to 4, wherein the step of determining a value of an error comprises calculating the following formula:

$$E_i = \frac{1}{N_{object}} \sum_{k=1}^{N_{object}} \left\| p_k^{object} - p_{closest}^{i} \right\| + \frac{1}{N_i} \sum_{k=1}^{N_i} \left\| p_k^{i} - p_{closest}^{object} \right\|$$

where: $E_i$ is the value of the error between the aligned measured values of the parameter corresponding to a particular object (8,10) or terrain feature (6) and the determined parameter values corresponding to the class of objects; $N_{object}$ is the number of points corresponding to the measured values of the parameter corresponding to a particular object (8,10) or terrain feature (6); $N_i$ is the number of points in the set of points for the ith class of objects; $p_k^{object}$ is the kth point in the set of points corresponding to the measured values of the parameter corresponding to a particular object (8,10) or terrain feature (6); $p_{closest}^{i}$ is the point in the set of points for the ith class of objects closest to the kth point in the set of points corresponding to the measured values of the parameter corresponding to a particular object (8,10) or terrain feature (6); $p_k^{i}$ is the kth point in the set of points for the ith class of objects; and $p_{closest}^{object}$ is the point in the set of points corresponding to the measured values of the parameter corresponding to a particular object (8,10) or terrain feature (6) closest to the kth point in the set of points for the ith class of objects.
6. A process according to any of claims 1 to 5, wherein the steps of measuring values of a parameter in a plurality of cells, and identifying cells corresponding to a particular object (8,10) or terrain feature (6) using the measured values of the parameter, in combination comprise: defining an area (4) to be processed; dividing the area (4) into a plurality of cells (12, 14, 16); measuring a value of a parameter at a plurality of different locations in each cell (12, 14, 16); for each cell (12, 14, 16), determining a value of a function of the measured parameter values in that cell; identifying a cell as corresponding only to a particular object (8,10) or terrain feature (6) if the determined function value for that cell is in a range of values that corresponds to the particular object (8,10) or terrain feature (6); defining, for the cells that are not identified as corresponding only to a particular object (8, 10) or terrain feature (6), one or more sub-cells, each sub-cell having in it at least one of the plurality of different locations; and identifying a sub-cell as corresponding at least in part to the particular object (8, 10) or terrain feature (6) if one or more of the measured parameter values for the at least one of the plurality of different locations in that sub-cell is in the range of values.
7. A process according to any of claims 1 to 5, wherein the steps of measuring values of a parameter in a plurality of cells, and identifying cells corresponding to a particular object (8,10) or terrain feature (6) using the measured values of the parameter, in combination comprise: defining an area (4) to be processed; dividing the area (4) into a plurality of cells (12, 14, 16); during a first time period, measuring a value of a parameter at a first plurality of different locations in the area (4); storing in a database the values of the parameter measured in the first time period; for each cell in which a parameter value has been measured, determining a value of a function of parameter values measured in that cell and stored in the database; identifying a cell in which a parameter value has been measured as corresponding only to a particular object (8,10) or terrain feature (6) if the determined function value for that cell is in a range of values that corresponds to the particular object (8,10) or terrain feature (6); defining, for the cells in which a parameter value has been measured and that are not identified as corresponding only to a particular object (8, 10) or terrain feature (6), one or more sub-cells, each sub-cell having in it at least one of the plurality of different locations; identifying a sub-cell as corresponding at least in part to the particular object (8, 10) or terrain feature (6) if one or more of the measured parameter values for the at least one of the plurality of different locations in that sub-cell is in the range of values; during a second time period, measuring a value of a parameter at a second plurality of different locations in the area (4); storing the values of the parameter measured in the second time period in the database; and for each cell in which a parameter value has been measured in the second time period but not the first time period, determining a value of a
function of parameter values measured in that cell and stored in the database; for each cell in which a parameter value has been measured in the second time period and the first time period, updating the value of the function using parameter values measured in that cell in the second time period and stored in the database; identifying a cell in which a parameter value has been measured as corresponding only to a particular object (8,10) or terrain feature (6) if the determined function value for that cell is in a range of values that corresponds to the particular object (8,10) or terrain feature (6); defining, for the cells in which a parameter value has been measured and that are not identified as corresponding only to a particular object (8, 10) or terrain feature (6), one or more sub-cells, each sub-cell having in it at least one of the plurality of different locations; and identifying a sub-cell as corresponding at least in part to the particular object (8, 10) or terrain feature (6) if one or more of the measured parameter values for the at least one of the plurality of different locations in that sub-cell is in the range of values.
8. A process according to claim 6 or claim 7, wherein the step of identifying a sub-cell as corresponding at least in part to the particular object (8, 10) or terrain feature (6) comprises: identifying a sub-cell as corresponding only to the particular object (8, 10) or terrain feature (6) if the measured parameter value for each of the at least one of the plurality of different locations in that sub-cell is in the range of values; and identifying a sub-cell as corresponding in part to the particular object (8, 10) or terrain feature (6) if one or more of the measured parameter values for the at least one of the plurality of different locations in that sub-cell is in the range of values and if one or more of the measured parameter values for the at least one of the plurality of different locations in that sub-cell is not in the range of values.
9. A process according to any of claims 6 to 8 further comprising identifying a sub-cell as corresponding at least in part to a different object or terrain feature if one or more of the measured parameter values for the at least one of the plurality of different locations in that sub-cell is not in the range of values.
10. A process according to any of claims 6 to 9, further comprising identifying a sub-cell as corresponding only to a different object or terrain feature if each of the measured parameter values for the at least one of the plurality of different locations in that sub-cell is not in the range of values.
11. A process according to any of claims 6 to 10, wherein the step of determining a value of a function of the measured parameter values in that cell comprises:
determining an average value of the values of a parameter measured at the plurality of different locations in each cell (12, 14, 16).
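The cell and sub-cell classification of claims 6 to 11 can be sketched as follows. This is a minimal illustration under stated assumptions: the function of claim 11 is the per-cell mean, the sub-cell split is a toy partition of the measurements, and all names (`classify_cells`, `value_range`, etc.) are hypothetical, not drawn from the specification:

```python
def split_into_subcells(measurements, n=2):
    """Toy sub-cell split: partition a cell's measurements into n groups.

    A real implementation would split the cell spatially; yields
    (sub_id, values) pairs, each with at least one measured location.
    """
    for i in range(n):
        chunk = measurements[i::n]
        if chunk:
            yield (f"sub{i}", [v for _, v in chunk])


def classify_cells(cells, value_range):
    """Classify cells, refining ambiguous ones into sub-cells.

    `cells` maps a cell id to a list of (location, value) measurements;
    `value_range` is the (lo, hi) interval corresponding to the
    particular object or terrain feature.
    """
    lo, hi = value_range
    in_range = lambda v: lo <= v <= hi
    result = {}
    for cell_id, measurements in cells.items():
        values = [v for _, v in measurements]
        # Claim 11: the function of the measured values is their average.
        if in_range(sum(values) / len(values)):
            result[cell_id] = "object only"
            continue
        # Claims 7-10: otherwise define sub-cells and classify each one
        # from its individual measured values.
        for sub_id, sub_values in split_into_subcells(measurements):
            hits = [in_range(v) for v in sub_values]
            if all(hits):
                result[f"{cell_id}/{sub_id}"] = "object only"       # claim 8
            elif any(hits):
                result[f"{cell_id}/{sub_id}"] = "object in part"    # claims 8, 9
            else:
                result[f"{cell_id}/{sub_id}"] = "other feature only"  # claim 10
    return result
```

For example, a cell whose mean falls in the range is kept whole ("object only"); a cell whose mean falls outside is subdivided, and each sub-cell is labelled from whether all, some, or none of its individual values lie in the range.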
12. An apparatus for classifying an extracted model of an object (8, 10) or terrain feature (6), the apparatus comprising scanning and measuring apparatus (2) for measuring the plurality of values of a parameter, and one or more processors (3) arranged to perform the processing steps recited in claims 1 to 11.
13. A computer program or plurality of computer programs arranged such that when executed by a computer system it/they cause the computer system to operate in accordance with the process of any of claims 1 to 11.
14. A machine readable storage medium storing a computer program or at least one of the plurality of computer programs according to claim 13.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2010200144A AU2010200144A1 (en) | 2010-01-14 | 2010-01-14 | Extraction processes |
PCT/AU2011/000014 WO2011085435A1 (en) | 2010-01-14 | 2011-01-07 | Classification process for an extracted object or terrain feature |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2010200144A AU2010200144A1 (en) | 2010-01-14 | 2010-01-14 | Extraction processes |
Publications (1)
Publication Number | Publication Date |
---|---|
AU2010200144A1 (en) | 2011-07-28 |
Family
ID=44303718
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
AU2010200144A Abandoned AU2010200144A1 (en) | 2010-01-14 | 2010-01-14 | Extraction processes |
Country Status (2)
Country | Link |
---|---|
AU (1) | AU2010200144A1 (en) |
WO (1) | WO2011085435A1 (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2894600B1 (en) * | 2014-01-14 | 2018-03-14 | HENSOLDT Sensors GmbH | Method of processing 3D sensor data to provide terrain segmentation |
GB2532948B (en) | 2014-12-02 | 2021-04-14 | Vivo Mobile Communication Co Ltd | Object Recognition in a 3D scene |
CN106407985B (en) * | 2016-08-26 | 2019-09-10 | 中国电子科技集团公司第三十八研究所 | A kind of three-dimensional human head point cloud feature extracting method and its device |
US11195324B1 (en) | 2018-08-14 | 2021-12-07 | Certainteed Llc | Systems and methods for visualization of building structures |
CN109583520B (en) * | 2018-12-27 | 2023-04-07 | 云南电网有限责任公司玉溪供电局 | State evaluation method of cloud model and genetic algorithm optimization support vector machine |
CN113219439B (en) * | 2021-04-08 | 2023-12-26 | 广西综合交通大数据研究院 | Target main point cloud extraction method, device, equipment and computer storage medium |
CN114612627B (en) * | 2022-03-11 | 2023-03-03 | 广东汇天航空航天科技有限公司 | Processing method and device of terrain elevation map, vehicle and medium |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1049030A1 (en) * | 1999-04-28 | 2000-11-02 | SER Systeme AG Produkte und Anwendungen der Datenverarbeitung | Classification method and apparatus |
US7110604B2 (en) * | 2001-06-26 | 2006-09-19 | Anoto Ab | Processing of digital images |
DE10330011B4 (en) * | 2003-07-03 | 2005-05-12 | Eads Deutschland Gmbh | Procedure for obstacle detection and terrain classification |
JP4991317B2 (en) * | 2006-02-06 | 2012-08-01 | 株式会社東芝 | Facial feature point detection apparatus and method |
2010
- 2010-01-14: AU application AU2010200144A filed; published as AU2010200144A1 (not active: Abandoned)
2011
- 2011-01-07: WO application PCT/AU2011/000014 filed; published as WO2011085435A1 (active: Application Filing)
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113366532A (en) * | 2019-12-30 | 2021-09-07 | 深圳元戎启行科技有限公司 | Point cloud based segmentation processing method and device, computer equipment and storage medium |
CN113366532B (en) * | 2019-12-30 | 2023-03-21 | 深圳元戎启行科技有限公司 | Point cloud based segmentation processing method and device, computer equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
WO2011085435A1 (en) | 2011-07-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110570428B (en) | Method and system for dividing building roof sheet from large-scale image dense matching point cloud | |
Sohn et al. | Using a binary space partitioning tree for reconstructing polyhedral building models from airborne lidar data | |
Lafarge et al. | Creating large-scale city models from 3D-point clouds: a robust approach with hybrid representation | |
AU2010200144A1 (en) | Extraction processes | |
Forlani et al. | Complete classification of raw LIDAR data and 3D reconstruction of buildings | |
Brolly et al. | Algorithms for stem mapping by means of terrestrial laser scanning | |
CN114332366B (en) | Digital urban single house point cloud elevation 3D feature extraction method | |
Lee et al. | Perceptual organization of 3D surface points | |
Wolf et al. | Automatic extraction and delineation of single trees from remote sensing data | |
CN114332291B (en) | Method for extracting outline rule of oblique photography model building | |
Huber et al. | Fusion of LIDAR data and aerial imagery for automatic reconstruction of building surfaces | |
CN111667574A (en) | Method for automatically reconstructing regular facade three-dimensional model of building from oblique photography model | |
CN111861946B (en) | Adaptive multi-scale vehicle-mounted laser radar dense point cloud data filtering method | |
CN111950589B (en) | Point cloud region growing optimization segmentation method combined with K-means clustering | |
WO2011085433A1 (en) | Acceptation/rejection of a classification of an object or terrain feature | |
Li et al. | New methodologies for precise building boundary extraction from LiDAR data and high resolution image | |
WO2011085434A1 (en) | Extraction processes | |
He | Automated 3D building modelling from airborne LiDAR data | |
WO2011085437A1 (en) | Extraction processes | |
Sohn et al. | A data-driven method for modeling 3D building objects using a binary space partitioning tree | |
AU2009243521A1 (en) | Extraction process | |
Rottensteiner | Status and further prospects of object extraction from image and laser data | |
CN115661398A (en) | Building extraction method, device and equipment for live-action three-dimensional model | |
WO2011085436A1 (en) | Extraction processes | |
Tóvári | Segmentation based classification of airborne laser scanner data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| MK1 | Application lapsed | Section 142(2)(a) - no request for examination in relevant period |