WO2020141257A1 - A method to measure visibility from point cloud data - Google Patents

A method to measure visibility from point cloud data

Info

Publication number
WO2020141257A1
WO2020141257A1 (PCT/FI2019/050924)
Authority
WO
WIPO (PCT)
Prior art keywords
visibility
point
analysis
sight
line
Prior art date
Application number
PCT/FI2019/050924
Other languages
French (fr)
Inventor
Vesa Leppänen
Tuomo PUUMALAINEN
Original Assignee
Oy Arbonaut Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oy Arbonaut Ltd filed Critical Oy Arbonaut Ltd
Priority to EP19906605.1A priority Critical patent/EP3906502A4/en
Publication of WO2020141257A1 publication Critical patent/WO2020141257A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G06T7/0008 Industrial image inspection checking presence/absence
    • F MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F41 WEAPONS
    • F41G WEAPON SIGHTS; AIMING
    • F41G3/00 Aiming or laying means
    • F41G3/02 Aiming or laying means using an independent line of sight
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/02 Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4802 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4808 Evaluating distance, position or velocity data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/17 Terrestrial scenes taken from planes or by drones
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/176 Urban or other man-made structures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/188 Vegetation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30181 Earth observation
    • G06T2207/30188 Vegetation; Agriculture

Definitions

  • the model can be calibrated to improve the precision of the measurement of visibility.
  • an optional Calibrated Visibility Measurement can be produced from the statistic with this calibrated model.
  • the visibility may be calculated to a continuous layer, for example to a raster or a grid layer.
  • the calibration may be done, for example, using a similar target as presented in 32 and selecting view points and directions. The target may be moved from the selected view point in the defined direction, facing towards the view point, until it becomes invisible. The minimum distance between the target and the view point where the target is totally invisible is recorded.
  • the point cloud statistic may be taken from all points inside each grid cell that land on the analyzed elevation above ground. For example, if visibility at eye level is of interest, the point statistic may be taken from points whose elevation is 0.5 m to 3 m above ground level.
  • the statistics may be summarized to the line of sight from all cells that are in the desired proximity of the line of sight; in one embodiment, the statistic may be a sum over the cells that the line of sight touches, each cell's point count weighted by the length of the line of sight inside that cell (a code sketch of this grid-based summary is given at the end of this section).
  • the model may be fit between the distance from the view point to the point of interest and the point statistic derived for the line of sight. With this model, the visibility may be predicted as an attribute of each analysis cell. One practical way may be to present the visibility in meters; however, other kinds of metrics, like percentage of cover per meter of line of sight, may also be used, indicating the percentage of the target that becomes covered per each meter of line of sight. With the visibility information as an attribute of a continuous information layer, it is possible to judge the visibility between any pair of locations.
  • the visibility is analyzed within a given analysis volume as a partially or totally isotropic measurement, not being presented on any line of sight.
  • in an isotropic measurement, there is no defined direction of the measurement.
  • in a partially isotropic measurement, the visibility may be defined, for example, in any horizontal direction or in the direction of any surface, but not in the vertical direction. The isotropic measurement may become useful if the visibility is measured through partially covering objects, like vegetation, fog, rainfall or smoke, a group of humans or any set of objects that can partially cover visibility, but the actual expected viewing direction is not known.
  • the analysis volume is a volume between two surfaces, following the contour of some surface (the "Original Surface") but set some distance apart, allowing measurement of visibility in the plane between the surfaces.
  • the space between the surfaces may be very narrow, still considering that the density of the point cloud needs to produce a reasonable number of points if there are visibility-blocking objects present.
  • if the Original Surface is the surface of the earth, this embodiment is similar to the horizontal visibility described above. This embodiment may be described, for example, as "visibility at human eye level above ground as described earlier" or "visibility at machine operator's eye level above the ground".
  • the space between the two surfaces is divided into volumes, for example by slicing it with a horizontal grid or honeycomb shape to define volumes that are isotropic in the direction of the surfaces but not isotropic in the vertical direction.
  • a more continuous measurement can be performed by producing overlapping volumes when slicing the area.
  • Such overlapping volumes may be produced by introducing any set of points, where horizontal circles are drawn around these points and the volumes are produced by slicing the space between the top and bottom surfaces with these horizontal, vertically extruded circles.
  • we describe these analysis volumes as "Surface-Following Volumes".
  • the point cloud statistic is formed within each volume in a similar way as in other embodiments, as presented in Figure 1.
  • Performing an Analysis of the Visibility 18 in an isotropic or partially isotropic embodiment is also possible by performing model prediction to produce a visibility metric, as presented in Figure 1.
  • Some Field Reference Measurements of Visibility 19 may be done, and a visibility model produced as described above. As can be seen, the visibility distance may be defined in some given direction or, alternatively, in any direction within a given analysis volume.
  • the observation ability of an observer is measured.
  • the observer is placed into a position in the point cloud and the observer's ability to observe objects is estimated.
  • the visibility model may be produced the same way as described in this document, but the analysis is not targeted at any particular target; rather, it addresses the observer's ability to see such targets in a given direction or in an isotropic or partially isotropic setting.
  • the observation ability may be measured also as a continuous measurement, as described before.
  • the volume can be described as a function of linearly referenced distance along a sightline that originates from the view point and that creates a closed surface around the sightline.
  • These sightlines can point in any direction from the viewer, and thus the visibility can be analyzed in multiple directions from the view point simultaneously.
  • the visibility is not necessarily connected to any object visible in the point cloud. Only the blocking object needs to be present in the point cloud data.
  • the collection time of the cloud data can be arbitrary; however, the blocking objects need to be present in the data.
  • the analysis can use point clouds that may have been collected at a different time than the actual analysis.
  • Different object types obstruct visibility differently. For example, vegetation has a tendency to partially cover visibility, and is especially suitable to be analyzed. Other features, like buildings or ground, may be of a nature where the existence of a building totally covers the visibility. Some structures, like traffic signs, power line towers or poles and some other structures, may present partial cover. Whatever the types of visibility obstructions are, it is useful to make different kinds of visibility metrics for the different object types that may be present in the point cloud. It is a common practice in the point cloud analysis business to classify points in the cloud based on the type of object they represent, like ground, vegetation, structures, buildings etc. Having the point cloud classified, the visibility analysis is possible using different point classes as different object types, and using a different visibility obstruction effect for each of the classes. The total visibility obstruction effect may be computed as a summary of the different obstruction types. In another embodiment, the total obstruction may be modelled as a combined function of multiple predictors (statistics of the points in each class as separate predictors).
  • a point cloud is a set of points, presented as a set of coordinates in some coordinate system and spatial reference.
  • a set of geometric coordinates, typically three coordinates (in the case of three dimensional data), is presented; often, other attributes, like a class, color information or any other description, may be joined to the coordinate information.
  • the value of point clouds is often their ability to present geometrically distributed information in an efficient and practical manner.
  • Point clouds have been generated using LiDAR sensing and stereogrammetry on different kinds of images, but in some cases even using survey equipment.
  • a point cloud is a set of records, written in a format of a file, a database, a table or any other form to present geometric coordinates.
  • the point clouds may be transferred via some electronic media (for example a hard drive or other kind of electronic memory), or over email, FTP or some other electronic information transfer.
  • a view point is a location where a spectator's eye, an observing objective or an imaginary observer is located. For analysis purposes, it is not necessary to have access to a view point; it may be any location in the same coordinate system and spatial reference where the point cloud is located.
  • Modern positioning technologies include global navigation satellite systems (GNSS), but other kinds of means to measure positions of the equipment and the personnel can also be used. It is possible to utilize these technologies to produce view points in practical life. For example, a person may use a GPS device in her cell phone to acquire her location and store it as a view point.
  • Point of interest is used in this disclosure as a location that is observed from the view point. To perform the analysis presented in this document, it is not necessary to have any object at the point of interest. Rather, it may be imagined as an arbitrary coordinate in the same coordinate system and spatial reference where the point cloud and the view point are presented.
  • the point of interest is placed at a road crossing, at a position where a person, vehicle or other object ("passing traffic") needs to be observed for safe traffic.
  • the view point can be placed at a position where the observer needs to be when he/she observes the passing traffic.
  • the visibility analysis may be performed as presented in this disclosure, to analyze if the necessary visibility is available.
  • a point cloud, a view point and a point of interest are received.
  • the received point cloud is acquired substantially from a direction different to the line formed by the view point and the point of interest.
  • vegetation near countryside roads is imaged from a flying vehicle, such as an airplane, a helicopter, an unmanned aerial vehicle or a satellite.
  • the view point may be chosen to represent a vehicle such as a car travelling along a road and arriving at a crossing.
  • the point of interest, or a plurality of points of interest may be located on the crossing road.
  • supplementary information may be used to complement the point cloud.
  • This supplementary information, which is not essential, may also be acquired from a car's point of view.
  • a benefit of this approach is that the person driving the car that collects the points does not need to decide if there is a need, for example, to cut trees for traffic safety reasons.
  • the above arrangements provide many advantages for determining a need for cutting trees or managing vegetation to improve traffic safety. For example, they provide a possibility to do cost-efficient analysis based on already acquired information that may have been collected by the forest or topographic mapping industry for other reasons. Another possibility is that a person inspecting the safety now has tools for fast and reliable analysis by using an unmanned aerial vehicle, such as a drone.
  • the visibility analysis is performed on a previously collected point cloud from an airborne sensor covering a forest area, to decide whether a timber harvester operator has a sufficient view of a forest stand from his/her harvester cabin to operate the machine. If the view is not sufficient, an operation called pre-harvest clearing can be performed to remove the small vegetation obstructing the operator's view.
  • in both the harvester operator view analysis and the road crossing visibility analysis, the decision on the need or timing of a vegetation management operation is made based on the analysis result.
  • the vegetation cutting, trimming, removal or application of herbicide may be performed by the same organization as the analysis or by another party, such as a subcontractor.
  • Analysis width describes the radius around the line of sight vector 15 that is considered when the point cloud statistic is computed.
  • with point cloud data of 10-100 points per square meter, which may be acquired for example from a LiDAR sensor on an unmanned or manned aircraft, a land or sea vehicle or a stationary location, an analysis width of 0.5-1 m around the line of sight may be used.
  • values significantly different from these may also be successfully used.
  • the wider the width, the higher the number of points included in the statistic, making the statistic analytically more reliable and smaller changes in blocking object density measurable.
  • the true width of clear visibility around the line of sight needed for a human eye to see from the view point to the point of interest is not very wide.
  • the wider the analysis width, the more the blocking objects around the true line of sight may cause error in the analysis. In practice, for example, analysis widths between 0.5 and 1 meter have yielded reliable analyses, but other values may be used.
  • the analysis width may be considered as a vertical distance above and below a given surface or a plane.
  • Field reference measurements may be made to empirically calibrate a model between the values of the point cloud statistics and the visibility.
  • a target is taken to the point of interest, while an observer (the observer being a person, a camera or any other device responding to visibility) is placed at the view point, observing the visibility of the target.
  • the visibility may be recorded either as yes/no visibility, as a percentage or by using other metrics.
  • the corresponding line of sight is produced and the point cloud statistic is taken.
  • a model may be fit between the visibility metrics and the point cloud statistics, to predict the visibility as a function of the statistics value. Multiple statistics or any relations of different statistics may be used as predictors in the model.
  • the model may be of parametric or non-parametric type or any other method to predict a set of dependent values with a set of independent values.
  • a target is moved along a line that is drawn from the view point.
  • the location where the visibility of the target ends/starts is recorded, as well as the distance between the view point and the target.
  • the line of sight is drawn to the location where the visibility of the target ends/starts and the point cloud statistics are taken for that line of sight.
  • Line of sight is a straight line drawn from the view point to the point of interest.
  • this line can be produced as a digital vector from a point to another, and is stored to the computer memory.
  • Analysis volume is the three dimensional space in the same spatial reference and coordinate system as the point cloud.
  • the analysis volume is defined as a space in the proximity of the line of sight.
  • the point cloud statistic is made from some or all of the points that are located inside the analysis volume.
  • the analysis volume may be of any shape, a cylindrical shape and a cuboid being just some examples.
  • it may be useful to select an analysis volume shape that is slightly less than optimal in terms of the analysis but provides efficient computational properties; for example, voxel type statistics may be produced from the point cloud, selecting and summarizing the analysis results to a line of sight from the voxels that are in the proximity of the line of sight.
  • the Analysis Volume 41 is of cylindrical shape.
  • the volume, and thus the number of points in the Analysis Volume, may be altered by changing the radius of the cylinder (the analysis width).
  • the optimal size of the Analysis Volume is defined depending on the density of the point cloud 21 and, potentially, other criteria, like computing power, desired geometric precision or analytical precision.
  • the analysis volume is of conical shape or the shape of a cone where the top end is truncated.
  • the top angle of the cone may be defined as a fixed parameter, or the cone may be formed differently for each target size and distance. This does not mean that the cone bottom would need to be exactly the size of the target; it may still be useful to scale the cone or truncated cone depending on conditions, like the point density of the point cloud, to include enough point information for effective analysis.
  • Producing the analysis volume may be done as a vector operation, creating an actual volume, or as a search of points within a given radius from the line of sight or some analysis surface. Some operations that may prove useful include a spatial join or a point-to-line distance, but many other approaches may be taken as well.
  • One approach to identify the points for the point statistics, effectively creating the analysis volume type search, is three dimensional voxel analysis: the voxels that are within the analysis width from the line of sight are identified and selected, and the points within the selected voxels are identified as points in the analysis volume.
  • analysis volume is defined as a space within given limits above ground level.
  • the analysis volume may be defined as the space that is higher than 0.5 meters above ground but less than 3 meters above ground, as described below.
  • One practical embodiment includes production of visibility information to a grid.
  • a systematic grid is produced over the analysis area, which may be very large in some cases.
  • Analysis volumes are defined for each grid cell, for example including the space above some given minimum distance from the ground level but below some maximum distance. The example mentioned above, defining the analysis volume as the space higher than 0.5 meters above ground but less than 3 meters above ground level, focuses on the space at the eye level of a standing human.
  • Point cloud analysis is an established field of science and practice. In point cloud analysis, it is common to use many kinds of statistics of the points within a given area, volume, etc. In this case, the point cloud statistics are taken from the points that are in the proximity of the line of sight.
  • the proximity may be defined as a maximum distance within which the points are included in the statistic.
  • a person skilled in point cloud statistics could name a large number of ways to draw statistics from a point cloud around any given line of sight.
  • the point cloud points close to the view point (observer) have a different weight in the statistic than points closer to the point of interest. This may simulate the effect of obstructing objects on human eyesight.
  • the statistics can be taken and recorded, for example, by slicing the analysis volume with planes perpendicular to the line of sight, recording the statistics for each sliced part of the analysis volume separately. This may be useful, for example, if the analysis volume is of a cone or truncated cone shape.
  • the process of taking a statistic includes the production of an additional statistic about the actual points that were identified in the analysis.
  • the actual points that were selected to be inside the analysis volume may be laid on a grid and a statistic produced for the grid. This kind of statistic may be useful to indicate the location of the analyzed points in the grid cells. For example, in the case of a road crossing visibility analysis, the location of the cells where the visibility-blocking vegetation is located may be of interest.
  • the analysis may include performing model prediction to produce a visibility metric. Performing an analysis of visibility may be done in many ways. Probably the simplest method would be to assign a minimum number of point cloud points within the proximity of the line of sight that forms an obstruction of visibility.
  • a model is produced and used to predict a measurable visibility metric from the point cloud statistics, the independent variable being the statistic mentioned above and the dependent variable a measured distance of visibility in the analysis cells. This kind of analysis has proven to be quite efficient in practical tests.
  • the statistics are produced on sliced analysis volumes and different analysis parameters are used for different parts of the analysis volume. If the analysis volume is divided into concentric shapes with a common center on the line of sight, a different analysis or analysis weight may be given to the points in the inner volume part than to the points in the outer part of the analysis volume.
  • Locations where the reference measurements are taken may be sampled or drawn by any other criteria.
  • a view point 12 is placed at each of these sampled locations.
  • a Target is placed at some point of interest 32, 13 at some distance from a view point.
  • Target may be, for example, a plate or object of a standard size and of a distinct color.
  • a vest of a distinct color, worn by a member of the measurement team, has been used.
  • Distance from the view point to the target is measured; for example, a Haglof Vertex ultrasound range finder (included in a tree height measurement device) has been used, but a tape measure works as well.
  • the target is moved away from the view point, with an observer positioned at the view point observing the visibility of the target.
  • the Statistics refer to one or multiple predictors. Performing an analysis of visibility at other locations on a similar point cloud can be done using this visibility prediction model, yielding the desired visibility metrics. In the case of grid analysis, the cells that land under the line of sight in the reference data are selected. The model predictors may be taken as a weighted average of the statistics calculated for each cell.
  • the distance of the line of sight inside each analysis cell can be used as a weight to calculate the predictor value for each cell.
  • the model fit to this data is capable of producing a visibility metric for each analysis cell, yielding a continuous layer of visibility. This kind of layer is quite practical in the planning of operations related to visibility, for example in pre-harvest clearing of forest understory or clearing of road crossings.
  • Figures 2-4 illustrate an example of how point cloud data derived from a LiDAR sensor is used according to the method shown in figure 1.
  • a two-dimensional cross-section of a three dimensional point cloud is shown.
  • Reflected laser pulses from vegetation 21, ground 22 and a structure 23 are shown as black points in the space.
  • Figure 3 discloses an example of an embodiment of a line of sight 33 from a viewing point 31 to a point of interest 32.
  • Figure 4 shows an example of an embodiment of an analysis volume 41, drawn around the line of sight 42 from a view point 43.
  • the Analysis Volume 41 is of cylindrical shape around a line of sight 42.
  • the volume, and thus the number of points in the Analysis Volume, may be altered by changing the area of the end surfaces; in this example, the radius of the cylinder.
  • the optimal volume of the Analysis Volume is defined depending on the density of the point cloud 21 and, potentially, other criteria, like computing power, desired geometric precision or analytical precision.
  • the above described methods may be implemented as computer software which is executed in a computing device comprising at least one memory and at least one processor and that can be connected to the Internet.
  • When the software is executed in a computing device, it is configured to perform a method described above.
  • the software is embodied on a computer readable medium, so that it can be provided to the computing device.
  • the components of the exemplary embodiments can include a computer readable medium or memories for holding instructions programmed according to the teachings of the present embodiments and for holding data structures, tables, records, and/or other data described herein.
  • the computer readable medium can include any suitable medium that participates in providing instructions to a processor for execution.
  • Computer-readable media can include, for example, a floppy disk, a flexible disk, a hard disk, magnetic tape, any other suitable magnetic medium, a CD-ROM, CD±R, CD±RW, DVD, DVD-RAM, DVD±RW, DVD±R, HD DVD, HD DVD-R, HD DVD-RW, HD DVD-RAM, a Blu-ray Disc, any other suitable optical medium, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other suitable memory chip or cartridge, a carrier wave or any other suitable medium from which a computer can read.
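As an illustration of the grid-based, continuous-layer analysis described above, the following is a minimal sketch under assumptions of our own, not the patented implementation: blocking-object points with a height above ground between 0.5 m and 3 m are binned into square grid cells, and the line-of-sight statistic is approximated by sampling the sight line at a fine step so that each traversed cell contributes its point count weighted by the path length inside that cell. The function and parameter names (cell_counts, los_statistic, step) are illustrative.

```python
import numpy as np

def cell_counts(points_xy, height_above_ground, cell_size, z_min=0.5, z_max=3.0):
    """Count points per grid cell whose height above ground is within [z_min, z_max]."""
    points_xy = np.asarray(points_xy, dtype=float)
    keep = (height_above_ground >= z_min) & (height_above_ground <= z_max)
    ij = np.floor(points_xy[keep] / cell_size).astype(int)   # cell index per point
    counts = {}
    for key in map(tuple, ij):
        counts[key] = counts.get(key, 0) + 1
    return counts

def los_statistic(counts, view_xy, target_xy, cell_size, step=0.1):
    """Sum of per-cell point counts weighted by approximate path length in each cell."""
    view_xy, target_xy = np.asarray(view_xy, float), np.asarray(target_xy, float)
    length = np.linalg.norm(target_xy - view_xy)
    n = max(int(length / step), 1)
    total = 0.0
    for t in np.linspace(0.0, 1.0, n, endpoint=False):
        p = view_xy + t * (target_xy - view_xy)              # sample along the sight line
        key = tuple(np.floor(p / cell_size).astype(int))
        total += counts.get(key, 0) * (length / n)           # count x path-length increment
    return total
```

The fine-step sampling stands in for an exact cell-traversal computation; for cell sizes much larger than the step, it approximates the length-weighted sum well.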

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Astronomy & Astrophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Electromagnetism (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A method for measuring visibility with three dimensional point data is disclosed. The method utilizes a point cloud (11) and an analysis volume (13), presenting an effective way to quantify the visibility from one point to another or on a continuous layer. Using the method, safety and productivity can be improved in many areas, including traffic, advertising, logging work and military operations.

Description

A METHOD TO MEASURE VISIBILITY FROM POINT CLOUD DATA
DESCRIPTION OF BACKGROUND
The following disclosure relates to measuring visibility. Particularly, the disclosure relates to a system and method for measuring visibility in the presence of blocking objects. Visibility may be described as the ability of a viewer to see or visually detect an object from a view point. If an object in a certain location may be visually detected from a view point without any object blocking the view, it can be said that visibility exists between the two points. In many cases, the visibility may be partially or totally blocked by objects. Visibility-blocking objects may be, for example, vegetation, traffic signs, structures or any other objects that may be capable of obstructing the view totally or partially.
Measuring visibility is important in many areas of life. There are many reasons why organizations and individuals are interested in measuring visibility, including but not limited to:
- Traffic safety overall
- Road or railway crossing safety
- Estimating ability to see scenery
- Planning and maintaining advertisements
- Planning logging work (pre-harvest clearing need, for example)
- Planning and conducting military operations
- etc.
Measuring visibility is a challenge in some operations. It is costly to go to the location of interest and verify by visual inspection the ability to see from one point to another. Additionally, the field inspection may be dangerous, for example if the location in question is on a highway. Remote sensing is an art that has been known for centuries. Remote sensing may be performed using, for example, satellite or airborne sensors, operated from manned or unmanned vessels. Sensors most commonly used include spectral sensors (cameras, spectrometers etc.), LiDAR sensors and radar sensors, but other kinds are known to be used as well.
Remote sensing has the capability to produce information about the objects that may block visibility. This information may be geographically two-dimensional or three dimensional, but can also include more dimensions, like time.
Three dimensional point clouds have been used quite extensively in sensing. Point clouds can be produced using multiple techniques, LiDAR, photogrammetry and radargrammetry being just a few. A brief presentation of some of the techniques is given here.
Photogrammetry is the science of making measurements from photographs. Stereogrammetry is the science of making measurements from a pair or group of images; the images may be photos or other kinds of images, like radar images. In stereogrammetry, the images have to be taken from different viewpoints, presenting objects at different distances from the observing sensors and at different locations on the imaging sensor. Corresponding features are identified in the different images and their relative locations on the images are interpreted to extract the 3D locations of the objects in real life. The sensor locations may be given to the algorithm or, alternatively, deduced from the analysis. Stereophotogrammetry is the methodology of applying photogrammetry to photographic images.
LiDAR, known also as laser scanning, has been used for forest inventories since approximately the 1990s. A LiDAR sensor is an active instrument that uses laser ranging, combined with devices measuring the position and attitude of the sensor, to produce 3D location measurements of objects. The sensor emits a laser beam in a known direction from a known position and records the distance to the surfaces where the beam is reflected back. Additionally, LiDAR may have the capability to record the intensity of the returning signal, indicating the reflectivity and size of the reflecting surfaces. The laser beam is projected to the object through a mirror or prism system or another kind of optical setup (the "LiDAR Optic") that causes the laser beam to scan the target area, recording the precise direction where the beam was sent each time to allow construction of the 3D measurements.
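As a minimal illustration of the georeferencing step described above (a sketch of our own, not part of the disclosure): a single return becomes a 3D point by adding the measured range times the unit beam direction to the sensor position. Real systems additionally apply sensor attitude and calibration corrections, which are omitted here, and all names are illustrative.

```python
import numpy as np

def lidar_return_to_point(sensor_pos, beam_dir, measured_range):
    """Georeference one LiDAR return: point = sensor position + range * unit beam direction."""
    d = np.asarray(beam_dir, dtype=float)
    d /= np.linalg.norm(d)                      # normalize the beam direction
    return np.asarray(sensor_pos, dtype=float) + measured_range * d

# Example: a beam fired 30 degrees below horizontal from 100 m altitude, 50 m range
point = lidar_return_to_point([0.0, 0.0, 100.0],
                              [np.cos(np.radians(-30)), 0.0, np.sin(np.radians(-30))],
                              50.0)
```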
Traditionally, LiDAR has been used to produce attributes for areas of land. For example, LiDAR-derived attributes have been assigned to timber stands, making management or inventory units.
Because of its capability to measure vegetation height and canopy densities, LiDAR has been widely accepted for forest inventory purposes (1). LiDAR inventories represent a significant economic value; the volume of LiDAR inventory in Finland alone is about 3-4 million hectares annually (2013), replacing approximately 45-60 million euros worth of fieldwork. Other Scandinavian countries, Sweden and Norway, being at the forefront of operational forest inventory from LiDAR, represent corresponding volumes.
Term of "Radargrammetry" is commonly used for stereogrammetry performed from radar imagery. Radargrammetry is the technology of extracting geometric object information from radar images. The output of the radargrammetric analysis may be for example a geometric three dimensional point cloud. Like stereophotogrammetry and LiDAR, also radargrammetry can be used from airborne or satellite, ground and water vessel platforms. SUMMARY
In this disclosure, a method for measuring visibility with three dimensional point data is disclosed. The method utilizes a point cloud and an analysis volume, presenting an effective way to quantify the visibility from one point to another or on a continuous layer. Using the method, safety and productivity can be improved in many areas of interest, including traffic, advertising, logging work and military operations.
In an aspect, a method of measuring visibility in the presence of blocking objects is disclosed. The method comprises receiving three dimensional point data of the blocking objects; receiving a three dimensional analysis volume; and analyzing visibility in the presence of blocking objects by analyzing the three dimensional point data representing the blocking objects inside the received analysis volume and estimating the visibility from the point analysis.
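A minimal sketch of the aspect above, under assumptions of our own: the analysis volume is taken to be a cylinder around the line of sight, and the visibility estimate is reduced to a simple count threshold. The function names and the threshold value are illustrative placeholders, not the claimed method.

```python
import numpy as np

def points_in_cylinder(points, a, b, radius):
    """Boolean mask of points within `radius` of the segment a-b (a cylindrical analysis volume)."""
    points, a, b = np.asarray(points, float), np.asarray(a, float), np.asarray(b, float)
    ab = b - a
    t = np.clip((points - a) @ ab / (ab @ ab), 0.0, 1.0)   # parameter of closest point on segment
    closest = a + t[:, None] * ab
    return np.linalg.norm(points - closest, axis=1) <= radius

def estimate_visibility(points, view_point, point_of_interest,
                        analysis_width=0.5, blocked_threshold=50):
    """Crude visibility estimate: 'blocked' when enough blocking-object points fall
    inside the analysis volume; the threshold is an assumed placeholder."""
    inside = points_in_cylinder(points, view_point, point_of_interest, analysis_width)
    n = int(inside.sum())
    return {"points_in_volume": n, "visible": n < blocked_threshold}
```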
The method provides an ability to quantify or qualify visibility. Visibility is a very important factor in areas like road safety, crossing safety, animal safety, railway-road crossing safety, the advertising business, forest logging operations, military operations and similar. Additionally, there are applications in search and rescue missions, as well as in aerial search of objects, humans, vehicles or animals, where the quantification of visibility provides added value.
The observation ability, described as one embodiment of visibility analysis, is particularly useful in applications where objects are observed from a given point of view. This takes place in many applications mentioned in this disclosure, but particularly in aerial search and logging operations.
In an implementation, the analysis volume is produced around a line of sight vector, and the method is further configured to produce the line of sight vector by receiving a view point of the viewer and producing a three dimensional line of sight from the view point to a location. This provides an ability to quantify or qualify the visibility from a particular point of view in a particular direction.
In an implementation, said analyzing visibility in the presence of blocking objects comprises selecting the three dimensional points indicating the blocking objects in the proximity of the three dimensional line of sight and estimating the visibility from the returns. This improves the ability to quantify or qualify visibility. Visibility is a very important factor in areas like road safety, crossing safety, animal safety, railway-road crossing safety, the advertising business, forest logging operations, military operations and similar.
In an implementation, the three dimensional point data is data from a LiDAR sensor. This provides an ability to perform the process of the aspect with a practically and economically effective dataset.
In an implementation, the three dimensional point data is received from stereogrammetry. This provides an ability to perform the process of the aspect with a practically and economically effective dataset.
In an implementation, the blocking objects are pieces of vegetation. This provides the possibility to identify vegetation that is blocking the visibility either partially or totally.
In an implementation, the method further comprises making a model between a statistic from the three dimensional points indicating the blocking objects in the proximity of the line of sight and the visibility from the view point along the line of sight; and using said model to predict the visibility. This improves the methods described above by using empirical evidence on the actually experienced visibility.
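One way such a model could be realized, sketched under assumptions of our own: a least-squares fit between the point statistic and field-measured visibility distances. The logarithmic predictor is an illustrative, not prescribed, model form, and the reference arrays are hypothetical example data.

```python
import numpy as np

# Hypothetical field reference data: point statistic per line of sight versus the
# distance (m) at which a standard target became invisible.
point_stats = np.array([12, 40, 75, 130, 220], dtype=float)
measured_visibility_m = np.array([60.0, 35.0, 22.0, 14.0, 8.0])

# Fit visibility ~ a * log(statistic) + b; the log form is an assumed choice,
# reflecting that visibility drops quickly with the first blocking points.
a, b = np.polyfit(np.log(point_stats), measured_visibility_m, deg=1)

def predict_visibility(statistic):
    """Predict visibility distance (m) from a point cloud statistic."""
    return a * np.log(statistic) + b

print(predict_visibility(90.0))
```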
In an implementation, the method further comprises measuring the visibility of the point of interest from the view point by estimating the visibility of a standard target placed at the point of interest; and using said visibility metric as the reference measurement for visibility. This improves the method by producing empirical evidence about visibility to work as modeling data.
In an implementation, the method further comprises presenting the visibility in units of distance at which a target may be detected or viewed at a desired level of detail. This provides an ability to present the results of the analysis as a visibility value in units of distance.
In an aspect, a system comprising at least one processor and a data communication connection is disclosed. The system is configured to perform a method as described above. It is beneficial to implement the method as a system.
In an aspect, a computer program is disclosed. The computer program comprises computer program code, wherein the computer program is configured to cause a computing device to perform a method as described above when the computer program is executed in the computing device.
The above described methods, systems and computer programs provide an improved way to measure visibility in various applications. The benefit of the aspects and implementations is that they provide reliable results and the measurement of visibility is easy to make even in difficult terrain.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are included to provide a further understanding of the method to measure visibility and constitute a part of this specification, illustrate embodiments and together with the description help to explain the principles of the system and method of measuring visibility. In the drawings:
Fig. 1 a block diagram of an example embodiment of the process of measuring visibility,
Fig. 2 an example of an embodiment of point cloud data derived from LiDAR sensor, presented as a two-dimensional cross-section of a three dimensional point cloud. Reflected laser pulses from vegetation, ground and a structure are shown as black points in the space,
Fig. 3 an example of an embodiment of a line of sight 33 from a viewing point 31 to a point of interest 32,
Fig. 4 an example of an embodiment of an Analysis Volume, drawn around the Line of Sight 33. In this case, the Analysis Volume 41 is of cylindrical shape around a line of sight 42. The volume, and thus the number of points in the Analysis Volume, may be altered by changing the area of the end surfaces; in this example, the radius of the cylinder. The optimal volume of the Analysis Volume is defined depending on the density of the point cloud 21 and, potentially, other criteria, like computing power, desired geometric precision or analytical precision.
DETAILED DESCRIPTION
Reference will now be made in detail to the embodiments, examples of which are illustrated in the accompanying drawings. In the following description, a method to measure visibility in the presence of blocking objects is disclosed.
Figure 1 presents the process of measuring visibility. The process receives as inputs a view point 12, a point of interest 13, the point cloud 11 and, optionally, an analysis width 14. The point cloud is also presented in figure 2, where the laser beam has reflected from vegetation 21, ground 22 and structure objects 23, correspondingly, and the sensor has produced points into the point cloud from these object types. The point cloud may be classified, where a class attribute is assigned to each point, representing the object type it was reflected from.
Producing the line of sight 15 is a two dimensional or three dimensional analysis that produces a direct line vector between the view point and the point of interest. A three dimensional line of sight is the preferred method. The line of sight is used to select the point cloud points that are on or near it. The judgment of a point being in the proximity of the line of sight is made by running a spatial operation that detects all points within a given proximity of the line of sight vector. The selection area can be imagined as a three dimensional volume, for example a cylinder, with its main axis set to the line of sight and the analysis width 14 set as the selected proximity. Other kinds of three dimensional features, for example a rectangular three dimensional feature (sometimes called a "cuboid"), may be used instead of a cylinder. For simplicity, all such features are called "Analysis Volume" in this disclosure. The size of the Analysis Volume is described by the "Analysis Width" in this document for simplicity; a corresponding dimension can be applied in case of other Analysis Volume shapes. The selection of the analysis width may be done empirically, considering the point density of the input point cloud and the expected volume of the Analysis Volume. It is practical to target a radius that yields dozens or hundreds of point cloud points on a line of sight that is blocked, but other numbers may be used as well. The smaller the analysis volume or search radius, the better the analysis result corresponds, in some cases, to the actual visibility from the exact view point. However, the higher the number of point cloud points that represent a minimally blocked visibility, the higher the reliability of the analysis.
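As an illustration, the cylindrical Analysis Volume selection may be implemented as a point-to-segment distance test. The following sketch, written in Python under the assumption that the point cloud is held as an N x 3 NumPy array, shows one possible realization; the function name select_analysis_volume and its parameters are illustrative only and do not form part of the disclosed method.

    import numpy as np

    def select_analysis_volume(points, view_point, poi, analysis_width):
        # points: (N, 3) array of x, y, z coordinates
        # view_point, poi: (3,) arrays; the view point and the point of interest
        # analysis_width: radius of the cylinder around the line of sight
        a = np.asarray(view_point, dtype=float)
        b = np.asarray(poi, dtype=float)
        ab = b - a
        # Parameter t of the projection of each point onto the line of sight,
        # clamped to [0, 1] so the volume is bounded by the two end points.
        t = np.clip((points - a) @ ab / np.dot(ab, ab), 0.0, 1.0)
        nearest = a + t[:, None] * ab  # closest point on the segment
        dist = np.linalg.norm(points - nearest, axis=1)
        return points[dist <= analysis_width]

A point is kept when its distance from the line of sight vector is at most the analysis width, which corresponds to the cylindrical Analysis Volume 41 of figure 4.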
Optionally, if Field Reference Measurements of Visibility 19 are available, a statistic 17 can be used as an independent variable in a model, setting the empirically measured visibility as the dependent variable. The visibility may be presented as a percentage at a given distance or line of sight or, alternatively, as a distance measurement. If the distance measurement is desired, a target of standardized size and distinct color is set at some location along the line of sight, facing directly towards the corresponding view point. The target is moved, centered on the line of sight, until the visibility is totally blocked. The minimum distance of the blocked visibility is recorded, as well as the view point and the line of sight direction. The recording of the locations may be done in practice, for example, by utilizing a GPS device yielding a practical positioning accuracy (for example 0.3 or 1 meter in x, y and z at a 90% confidence level). The line of sight vector is produced from the GPS points at the view point and at the target location. In one practical embodiment, the diameter of the visibility target is set to be the same as the selected diameter of the Analysis Volume around the line of sight (in case of a non-cylindrical Analysis Volume, the area of the target may be the same as the area of the Analysis Volume end surface). A point statistic may be produced from the point cloud; a suitable point statistic may be, for example, the number of points inside the Analysis Volume or the ratio of LiDAR points inside the Analysis Volume to all points in the cloud reflected from the laser pulses that were pointed towards the same area. Other statistics may be used as well; drawing different statistics from point clouds is an art of point cloud analysis well known to persons skilled in the art.
In an embodiment the model can be calibrated to improve the precision of the measurement of visibility. An optional Calibrated Visibility Measurement can be produced from the statistic with this calibrated model.
In another embodiment, the visibility may be calculated to a continuous layer, for example to a raster or a grid layer. To apply a model for continuous layer production, the calibration may be done, for example, using a similar target as presented in figure 3 (32) and selecting view points and directions. The target may be moved from the selected view point in the defined direction, facing towards the view point, until it becomes invisible. The minimum distance between the target and the view point where the target is totally invisible is recorded. Instead of calculating the number of points within the volume around the line of sight, the point cloud statistic may be taken from all points inside each grid cell that land on the analyzed elevation above ground. For example, if visibility at eye level is of interest, the point statistic may be taken from points whose elevation is 0.5 m - 3 m above ground level. The statistics may be summarized to the line of sight from all cells that are in the desired proximity of the line of sight; in one embodiment, the statistic may be a sum over all cells the line of sight touches of each cell's point count weighted by the length of the line of sight inside that cell. The model may be fit between the distance from the view point to the point of interest and the point statistic derived for the line of sight. With this model, the visibility may be predicted as an attribute of each analysis cell. One practical way may be to present the visibility in meters; however, other kinds of metrics, like percentage of cover per meter of line of sight, may also be used, indicating the percentage of the target that becomes covered per each meter of line of sight. With the visibility information as an attribute of a continuous information layer, it is possible to judge the visibility between any pair of locations.
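One possible realization of the grid-based line-of-sight summary is sketched below, assuming a per-cell point statistic has already been computed. The dense sampling used to approximate the length of the line inside each cell, the function name and the parameters are illustrative assumptions, not requirements of the method.

    import numpy as np

    def line_weighted_statistic(cell_stats, view_point, poi, cell_size, step=0.05):
        # cell_stats: dict mapping (col, row) grid indices to a point statistic
        # view_point, poi: (2,) arrays of horizontal coordinates
        # cell_size: grid cell edge length, in the coordinate units
        # step: sampling step along the line, small relative to cell_size
        a = np.asarray(view_point, dtype=float)
        b = np.asarray(poi, dtype=float)
        total = np.linalg.norm(b - a)
        n = max(int(total / step), 1)
        samples = a + np.linspace(0.0, 1.0, n, endpoint=False)[:, None] * (b - a)
        weighted = 0.0
        for xy in samples:
            idx = tuple((xy // cell_size).astype(int))
            # Each sample stands for a segment of length total / n in its cell.
            weighted += cell_stats.get(idx, 0.0) * (total / n)
        return weighted

The accumulated value equals, to the sampling accuracy, the sum of each touched cell's statistic weighted by the length of the line of sight inside that cell, as described above.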
As described in the embodiments above, in some embodiments the visibility is analyzed within a given analysis volume as a partially or totally isotropic measurement, not being tied to any line of sight. In an isotropic measurement, no direction of the measurement is defined. In a partially isotropic measurement, the visibility may be defined, for example, in any horizontal direction or any direction along a surface, but not in the vertical direction. The isotropic measurement may become useful if the visibility is measured through partially covering objects, like vegetation, fog, rainfall or smoke, a group of humans or any set of objects that can partially cover visibility, but the actual expected viewing direction is not known. In one embodiment of partially isotropic visibility measurement, the analysis volume is a volume between two surfaces, following the contour of some surface ("Original Surface") but set some distance apart, allowing measurement of visibility in the plane between the surfaces. The space between the surfaces may be very narrow, still considering that the density of the point cloud needs to produce a reasonable number of points if visibility-blocking objects are present. If the Original Surface is the surface of the earth, this embodiment is similar to the horizontal visibility described above. This embodiment may be described, for example, as "visibility at human eye level above ground" or "visibility at machine operator's eye level above the ground". In one practical embodiment, the space between the two surfaces is divided into volumes, for example by slicing it with a horizontal grid or honeycomb shape, to define volumes that are isotropic in the direction of the surfaces but not in the vertical direction. Alternatively, a more continuous measurement can be performed by producing overlapping volumes when slicing the area. Such overlapping volumes may be produced by introducing any set of points, drawing horizontal circles around these points and producing the volumes by slicing the space between the top and bottom surfaces with these horizontal, vertically extruded circles. For simplicity, in this text, we describe these analysis volumes as "Surface-Following Volumes". In Surface-Following Volumes, the point cloud statistic is formed within each volume in a similar way as in the other embodiments, as presented in Figure 1. Performing an Analysis of the Visibility 18 in an isotropic or partially isotropic embodiment is also possible by performing model prediction to produce a visibility metric as presented in Figure 1. Some Field Reference Measurements of Visibility 19 may be done, and a visibility model produced as described above. As we see, the visibility distance may be defined for some given direction or, alternatively, for any direction within a given analysis volume.
In one practical embodiment of the invention, the observation ability of an observer is measured. The observer is placed into a position in the point cloud and the observer's ability to observe objects is estimated. The visibility model may be produced the same way as described in this document, but the analysis is not targeted at any particular target; rather, it addresses the observer's ability to see such targets in a given direction or in an isotropic or partially isotropic setting. The observation ability may also be measured as a continuous measurement, as described before.
The volume can be described as a function of linearly referenced distance along a sightline that originates from the view point and that creates a closed surface around the sightline. These sightlines can point in any direction from the viewer and thus the visibility can be analyzed in multiple directions from the view point simultaneously.
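A fan of such sightlines may be generated, for example, as in the sketch below; the names and the choice of a fixed horizontal analysis distance are illustrative assumptions. Each end point can then be fed, together with the view point, to an Analysis Volume selection such as the one sketched earlier.

    import numpy as np

    def radial_sightline_ends(view_point, distance, n_directions=36):
        # End points of n_directions horizontal sightlines of equal length,
        # spaced evenly in azimuth around the view point.
        az = np.linspace(0.0, 2.0 * np.pi, n_directions, endpoint=False)
        x = view_point[0] + distance * np.cos(az)
        y = view_point[1] + distance * np.sin(az)
        z = np.full(n_directions, view_point[2])
        return np.column_stack((x, y, z))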
As we see in this description, the visibility is not necessarily connected to any object visible in the point cloud. Only the blocking objects need to be present in the point cloud data. The collection time of the cloud data can be arbitrary; however, the blocking objects need to be present in the data.
The analysis can use point clouds that may have been collected at a different time than the actual analysis.
There are different kinds of objects obstructing visibility. For example, vegetation has a tendency to partially cover visibility, and is especially suitable to be analyzed. Other features, like buildings or ground, may be of a nature where the existence of a building totally covers the visibility. Some structures, like traffic signs, power line towers or poles and some other structures, may present partial cover. Whatever the types of visibility obstructions are, it is useful to make different kinds of visibility metrics for the different object types that may be present in the point cloud. It is a common practice in the point cloud analysis business to classify points in the cloud based on the type of object they represent, like ground, vegetation, structures, buildings etc. Having the point cloud classified, the visibility analysis is possible using different point classes as different object types, and using a different visibility obstruction effect for each of the classes. The total visibility obstruction effect may be computed as a summary of the different obstruction types. In another embodiment, the total obstruction may be modelled as a combined function of multiple predictors (statistics of the points in each class as separate predictors).
Receiving Point Cloud 11
Characteristically, a point cloud is a set of points, presented as a set of coordinates in some coordinate system and spatial reference. In a point cloud, a set of geometric coordinates, typically three coordinates (in case of three dimensional data), is presented; often, other attributes, like a class, color information or any other description, may be joined to the coordinate information. The value of point clouds is often their ability to present geometrically distributed information in an efficient and practical manner.
Acquiring information in the form of point clouds is a common practice in the industry. For example, in forest management processes, information about forest stands has been collected with LiDAR sensors and statistics of the point cloud shape and density have been produced, to be used as predictors of tree vegetation characteristics (see for example: Naesset, E. 2002. Predicting forest stand characteristics with airborne scanning laser using a practical two-stage procedure and field data. Remote Sens. Environ. 80(1): 88-99. doi:10.1016/S0034-4257(01)00290-5).
Point clouds have been generated using LiDAR sensing, stereogrammetry on different kinds of images and, in some cases, even survey equipment. A point cloud is a set of records, written in the format of a file, a database, a table or any other form to present geometric coordinates.
Some practical formats for storing point clouds are files, where las and laz, shp or ASCII text are a few commonly used examples. It is possible to store similar information content in a database.
The point clouds may be transferred via some electronic media (for example a hard drive or other kind of electronic memory), or by email, ftp or some other electronic information transfer.
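As an illustration of receiving a point cloud, a classified las file can be read with, for example, the open-source laspy library; the file name below is hypothetical and the snippet is a sketch rather than a required implementation.

    import numpy as np
    import laspy  # reads the las and laz formats mentioned above

    las = laspy.read("area_of_interest.las")  # hypothetical file name
    # Stack the coordinates into an (N, 3) array for geometric operations.
    points = np.column_stack((las.x, las.y, las.z))
    # Per-point class attribute (ground, vegetation, structures, ...), if present.
    classes = np.asarray(las.classification)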
Optionally, receiving a view point 12
A view point is a location where a spectator's eye, an observing objective or an imaginary observer is located. For analysis purposes, it is not necessary to have access to the view point; it may be any location in the same coordinate system and spatial reference where the point cloud is located.
Modern positioning technologies include global navigation satellite systems (GNSS), but other kinds of means to measure the positions of equipment and personnel can also be used. It is possible to utilize these technologies to produce view points in practical life. For example, a person may use a GPS device in her cell phone to acquire her location and store it as a view point.
Optionally, Receiving A Point of Interest 13
Point of interest is used in this disclosure as a location that is observed from the view point. To perform the analysis presented in this document, it is not necessary to have any object at the point of interest. Rather, it may be imagined as an arbitrary coordinate in the same coordinate system and spatial reference where the point cloud and the view point are presented. In one embodiment, the point of interest is placed at a road crossing at a position where a person, vehicle or other object ("passing traffic") needs to be observed for safe traffic. Similarly, the view point can be placed at a position where the observer needs to be when he/she observes the passing traffic. The visibility analysis may be performed as presented in this disclosure, to analyze if the necessary visibility is available.
In the example above, a point cloud, a view point and a point of interest are received. In one particularly beneficial example embodiment the received point cloud is acquired substantially from a direction different from the line formed by the view point and the point of interest. For example, in a simple implementation vegetation near countryside roads is imaged from a flying vehicle, such as an airplane, helicopter, unmanned aerial vehicle or a satellite. The view point may be chosen to represent a vehicle such as a car travelling along a road and arriving at a crossing. The point of interest, or a plurality of points of interest, may be located on the crossing road. Thus, from the point cloud, which has been acquired from above, it can be determined if the cars approaching the crossing can see each other.
In a more advanced embodiment supplementary information may be used to complement the point cloud. This supplementary information, which is not essential, may also be acquired from a car's point of view. A benefit of this approach is that the person driving the car that collects the points does not need to decide whether, for example, trees need to be cut for traffic safety reasons.
The above arrangements provide many advantages for determining a need for cutting trees or managing vegetation to improve traffic safety. For example, they provide a possibility to do cost efficient analysis based on already acquired information that may have been collected by the forest or topographic mapping industry for other reasons. Another possibility is that a person inspecting the safety now has tools for fast and reliable analysis by using an unmanned aerial vehicle, such as a drone.
In another beneficial embodiment, the visibility analysis is performed on a previously collected point cloud from an airborne sensor covering a forest area, to decide if a timber harvester operator has a sufficient view of a forest stand from his/her harvester cabin to operate the machine. If the view is not sufficient, an operation called pre-harvest clearing can be performed to remove the small vegetation obstructing the operator's view. In this application, there is a significant economic benefit in avoiding a field visit to the site.
In both embodiments, the harvester operator view analysis and the road crossing visibility analysis, the decision on the need or timing of a vegetation management operation is made based on the analysis result. The vegetation cutting, trimming, removal or application of herbicide may be performed by the same organization as the analysis or by another party, such as a subcontractor.
Optionally, Receiving Analysis Width 14
Analysis width describes the radius around the line of sight vector 15 that is considered when the point cloud statistic is computed. For example, using point cloud data of 10-100 points per square meter, which may be acquired for example from a LiDAR sensor on an unmanned or manned aircraft, a land or sea vehicle or a stationary location, an analysis width of 0.5-1 m around the line of sight may be used. However, values significantly different from these may be successfully used. The wider the width, the higher the number of points included in the statistic, making the statistic analytically more reliable and smaller changes in blocking object density measurable. However, as an observant reader may note, the true width of clear visibility around the line of sight needed for a human eye to see from the view point to the point of interest is not very wide. The wider the analysis width, the more the blocking objects around the true line of sight may cause error in the analysis. In practice, for example, analysis widths between 0.5 and 1 meter have yielded reliable analyses, but other values may be used.
In one possible embodiment, the case of grid analysis, mentioned later in this document in more detail, the analysis width may be considered as a vertical distance above and below a given surface or plane.
Optionally, Receiving Field Reference Measurements of Visibility 19
Field reference measurements may be made to empirically calibrate a model between the values of the point cloud statistics and the visibility.
In one embodiment, a target is taken to the point of interest, while an observer (the observer being a person, a camera or any other device responding to visibility) is placed at the view point, observing the visibility of the target. The visibility may be recorded either as Yes/No visibility, as a percentage or by using other metrics. The corresponding line of sight is produced and the point cloud statistic is taken. With a sufficient number of such empirical observations taken, a model may be fit between the visibility metrics and the point cloud statistics, to predict the visibility as a function of the statistics value. Multiple statistics or any relations of different statistics may be used as predictors in the model. The model may be of parametric or non-parametric type or any other method to predict a set of dependent values with a set of independent values.
In another embodiment, a target is moved along a line that is drawn from the view point. The location where the visibility of the target ends/starts is recorded and the distance between the view point and the target is recorded. The line of sight is drawn to the location where the visibility of the target ends/starts and the point cloud statistics are taken for that line of sight. With this kind of setup, it is possible to make a prediction model where the distance between the view point and the point of interest at the location where the visibility ceases to exist is the dependent variable, with the point statistics used as independent variables. Multiple statistics or any relations of different statistics may be used as predictors in the model. The model may be of parametric or non-parametric type or any other method to predict a set of dependent values with a set of independent values.
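For the Yes/No type of reference observation described above, one reasonable modeling choice is a logistic model. The sketch below uses scikit-learn, and the reference values shown are invented purely for illustration.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Illustrative reference data: one point cloud statistic per observed line
    # of sight (here, the point count inside the Analysis Volume) and the
    # Yes/No visibility recorded in the field (1 = target visible).
    statistic = np.array([[3], [11], [42], [7], [85], [120], [1], [55]])
    visible = np.array([1, 1, 0, 1, 0, 0, 1, 0])

    model = LogisticRegression().fit(statistic, visible)

    # Predicted probability that a target is visible on a new line of sight
    # whose Analysis Volume contains 30 points.
    p_visible = model.predict_proba([[30]])[0, 1]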
Optionally, Producing a Line of Sight from the View Point to the Point of Interest 15
Line of sight is a straight line drawn from the view point to the point of interest.
In one embodiment, this line can be produced as a digital vector from one point to another, and stored in computer memory.
Producing an Analysis Volume 16
The analysis volume is a three dimensional space in the same spatial reference and coordinate system as the point cloud. In one embodiment, the analysis volume is defined as a space in the proximity of the line of sight. The point cloud statistic is made from some or all of the points that are located inside the analysis volume.
The analysis volume may be of any shape, cylindrical shape and cuboid being just some examples. For computational simplicity, it may be practical to use an analysis volume shape that is slightly less than optimal in terms of the analysis but provides efficient computational properties; for example, voxel type statistics may be produced from the point cloud, selecting and summarizing the analysis results to a line of sight from the voxels that are in the proximity of the line of sight.
In one embodiment, the Analysis Volume 41 is of cylindrical shape. The volume, and thus, the number of points in the Analysis Volume may be altered by changing the radius of the cylinder (the analysis width). The optimal size of the Analysis Volume is defined depending on the density of the point cloud 21 and, potentially, other criteria, like computing power, desired geometric precision or analytical precision.
In another embodiment, the analysis volume is of conical shape or the shape of a cone whose top end is truncated. In a conical or truncated conical shape, the top angle of the cone may be defined as a fixed parameter, or the cone may be formed differently for each target size and distance. This does not mean that the cone bottom would need to be exactly the size of the target; it may still be useful to scale the cone or truncated cone depending on conditions, like the point density of the point cloud, to include enough point information for effective analysis.
Producing the analysis volume may be done as a vector operation, creating an actual volume, or as a search of points within a given radius from the line of sight or some analysis surface. Some operations that may prove useful include spatial join or distance from point to line, but many other approaches may be taken as well. One approach to identify the points for the point statistics, effectively creating an analysis volume type search, is three dimensional voxel analysis: the voxels that are within the analysis width of the line of sight are identified and selected, and the points within the selected voxels are identified as points in the analysis volume.
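A sketch of the voxel-based variant follows; the names are illustrative, and the padding of the acceptance radius by half the voxel diagonal is one possible way to avoid missing qualifying points at voxel borders.

    import numpy as np

    def voxel_analysis_volume(points, view_point, poi, analysis_width, voxel=0.5):
        # Points are bucketed into voxels first, and whole voxels are accepted
        # or rejected by the distance of their center from the line of sight.
        a = np.asarray(view_point, dtype=float)
        b = np.asarray(poi, dtype=float)
        ab = b - a
        # Voxel index of each point, and the center coordinate of each voxel.
        idx = np.floor(points / voxel).astype(int)
        uniq, inverse = np.unique(idx, axis=0, return_inverse=True)
        centers = (uniq + 0.5) * voxel
        # Distance from each voxel center to the line-of-sight segment.
        t = np.clip((centers - a) @ ab / np.dot(ab, ab), 0.0, 1.0)
        dist = np.linalg.norm(centers - (a + t[:, None] * ab), axis=1)
        # Accept voxels within the analysis width, padded by half the voxel
        # diagonal so that qualifying points at voxel borders are not missed.
        ok = dist <= analysis_width + 0.5 * voxel * np.sqrt(3.0)
        return points[ok[inverse]]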
Also more complex analysis volumes may be used. In one embodiment of grid type visibility analysis, the analysis volume is defined as a space within given limits above ground level. For example, analysis volume may be defined as space that is higher than 0.5 meters above ground but less than 3 meters above ground as described below.
One practical embodiment includes production of visibility information to a grid. A systematic grid is produced over the analysis area, which may be very large in some cases. Analysis volumes are defined for each grid cell, for example including space above some given minimum distance from the ground level but below some maximum distance. The example mentioned above, defining the analysis volume as space that is higher than 0.5 meters above ground but less than 3 meters above ground level, focuses on the space at the eye level of a standing human.
Taking a Statistic of the Point Cloud within the Analysis Volume 17
Point cloud analysis is an established field of science and practice. In point cloud analysis, it is common to use many kinds of statistics of the points within a given area, volume, etc. In this case, the point cloud statistics are taken from the points that are in the proximity of the line of sight.
Many kinds of statistics may be used, but some examples are mentioned here:
A. Number of points within a proximity of the line. The proximity may be defined as a maximum distance within which points are included in the statistic.
B. Number of points within the proximity of the line, divided by the number of points that landed in the ground class from the same laser pulses. Note that the ground points may be outside of the analysis volume.
C. Proportion of points within the analysis volume, of the outgoing pulses that touch the analysis volume.
D. Number of points (in the vegetation class or altogether) within the analysis volume, divided by the number of points that are classified to the ground class in the 2D projection of the analysis volume to the ground level. For example, the point count in the vegetation class of classified LiDAR data inside the analysis volume is divided by the number of points in the ground class on the ground level directly below the analysis volume. (This statistic tends to be an effective one in predicting visibility in grid cells.)
E. Number of points within the analysis volume that are identified as some class X in the point class attributes.
Any combinations, derivatives and alterations of statistics may be used as well. A person skilled in the art of point cloud statistics could name a large number of ways to draw statistics from a point cloud around any given line of sight. In one embodiment, the point cloud points close to the view point (observer) have a different weight in the statistic than points closer to the point of interest. This may simulate the effect of obstructing objects on human eye sight. The statistics can be taken and recorded, for example, by slicing the analysis volume with planes perpendicular to the line of sight, recording the statistics for each sliced part of the analysis volume separately. This may be useful for example if the analysis volume is of cone or truncated cone shape.
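As a concrete example, statistic D above may be computed from classified point data as in the sketch below. The class codes used (2 for ground, 3-5 for vegetation) follow the common ASPRS LAS convention, and the function name is illustrative.

    import numpy as np

    GROUND_CLASSES = (2,)
    VEGETATION_CLASSES = (3, 4, 5)

    def statistic_d(classes_in_volume, classes_below_volume):
        # classes_in_volume: class codes of the points inside the analysis volume
        # classes_below_volume: class codes of the points in the volume's 2D
        # projection on the ground level
        veg = np.isin(classes_in_volume, VEGETATION_CLASSES).sum()
        ground = np.isin(classes_below_volume, GROUND_CLASSES).sum()
        # Undefined where no ground returns exist below the volume.
        return veg / ground if ground > 0 else float("inf")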
One practical analysis has been done including statistics from a LiDAR point cloud to a grid, including all points that are between 0.5 m and 3 m above ground level, as mentioned above. This statistic tends to relate well to visibility at human eye level or forest machine operator eye level and can provide practical information for forest machine operators, hunters, military operators and people managing scenery.
In one embodiment, the process of taking a statistic includes production of an additional statistic about the actual points that were identified in the analysis. For example, the actual points that were selected to be inside the analysis volume may be laid on a grid and a statistic produced for the grid. This kind of statistic may be useful to indicate the location of the analyzed points on the grid cells. For example, in case of road crossing visibility analysis, the location of the cells where the visibility blocking vegetation is located may be of interest.
Performing an Analysis of the Visibility 18, Based on the Point Cloud Statistic 17
Optionally, the Analysis May Include Performing Model Prediction to Produce a Visibility Metric
Performing an analysis of visibility may be done in many ways. Probably the simplest method would be to assign a minimum number of point cloud points within the proximity of the line of sight that forms an obstruction of visibility.
In one embodiment, a model is produced and used to predict a measurable visibility metric from the point cloud statistics; the independent variable being the statistic D mentioned above and the dependent variable a measured distance of visibility in the analysis cells. This kind of analysis has proven to be quite efficient in practical tests.
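A minimal sketch of such a model fit is given below; the calibration numbers are invented for illustration, and the log-linear model form is just one plausible choice, not the form prescribed by the method.

    import numpy as np

    # Illustrative calibration data: statistic D per reference line of sight
    # and the field-measured visibility distance in meters.
    stat_d = np.array([0.05, 0.10, 0.22, 0.40, 0.65, 0.90])
    distance = np.array([55.0, 38.0, 24.0, 15.0, 9.0, 6.0])

    # Visibility tends to fall off steeply with increasing obstruction, so a
    # linear fit on the logarithm of distance is one reasonable first choice.
    slope, intercept = np.polyfit(stat_d, np.log(distance), 1)

    def predict_visibility(d):
        # Predicted visibility distance (m) for a cell with statistic D = d.
        return float(np.exp(intercept + slope * d))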
In one embodiment, the statistics are produced on sliced analysis volumes and different analysis parameters are used for different parts of the analysis volume. If the analysis volume is divided into concentric shapes with a common center on the line of sight, a different analysis or analysis weight may be given to the points in the inner part of the volume than to the points in the outer part of the analysis volume.
Optionally, Receiving Field Reference Measurements of Visibility 19
To use empirical models in the visibility analysis, it is useful to receive some practical reference information. Locations where the reference measurements are taken may be sampled or drawn by any other criteria. In one embodiment, a view point 12 is placed at each of these sampled locations. A target is placed at a point of interest 32, 13 at some distance from the view point. The target may be, for example, a plate or object of a standard size and of a distinct color. A vest of a distinct color, worn by a member of the measurement team, has been used. The distance from the view point to the target is measured; for example a Haglof Vertex ultrasound range finder (included in a tree height measurement device) has been used, but a tape measure works as well. The target is moved away from the view point, with an observer positioned at the view point observing the visibility of the target. It may be practical to use a standard direction, like due north, or to sample a direction and use a navigation device to head in the desired direction. When the visibility of the target ends, the distance is recorded. The line of sight 15 of the maximum visibility in each sampled location is placed on the point cloud 11. The desired analysis volume 16 is produced, the statistic 17 is taken and a model is fit between the measurements of visibility 19 and the statistics 17, predicting the visibility with the statistics. The statistics here refer to one or multiple predictors. Performing an analysis of visibility at other locations on a similar point cloud can be done using this visibility prediction model, yielding the desired visibility metrics. In case of grid analysis, the cells that land under the line of sight in the reference data are selected. The model predictors may be taken as a weighted average of the statistics calculated for each cell. The distance of the line of sight inside each analysis cell can be used as a weight to calculate the predictor value for each cell. The model fit to this data is capable of producing a visibility metric for each analysis cell, yielding a continuous layer of visibility. This kind of layer is quite practical in planning of operations related to visibility, for example in pre-harvest clearing of forest understory or clearing of road crossings.
Figures 2 - 4 illustrate an example of how point cloud data derived from a LiDAR sensor is used according to the method shown in figure 1. In the figures a two-dimensional cross-section of a three dimensional point cloud is shown. Reflected laser pulses from vegetation 21, ground 22 and a structure 23 are shown as black points in the space. Figure 3 discloses an example of an embodiment of a line of sight 33 from a viewing point 31 to a point of interest 32. Figure 4 shows an example of an embodiment of an analysis volume 41, drawn around the line of sight 42 from a view point 43. In this case, the Analysis Volume 41 is of cylindrical shape around the line of sight 42. The volume, and thus, the number of points in the Analysis Volume may be altered by changing the area of the end surfaces; in this example, the radius of the cylinder. The optimal volume of the Analysis Volume is defined depending on the density of the point cloud 21 and, potentially, other criteria, like computing power, desired geometric precision or analytical precision.
The above described methods may be implemented as computer software which is executed in a computing device comprising at least one memory and at least one processor and that can be connected to the Internet. When the software is executed in a computing device it is configured to perform a method described above. The software is embodied on a computer readable medium, so that it can be provided to the computing device. As stated above, the components of the exemplary embodiments can include a computer readable medium or memories for holding instructions programmed according to the teachings of the present embodiments and for holding data structures, tables, records, and/or other data described herein. The computer readable medium can include any suitable medium that participates in providing instructions to a processor for execution. Common forms of computer-readable media can include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, any other suitable magnetic medium, a CD-ROM, CD±R, CD±RW, DVD, DVD-RAM, DVD±RW, DVD±R, HD DVD, HD DVD-R, HD DVD-RW, HD DVD-RAM, Blu-ray Disc, any other suitable optical medium, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other suitable memory chip or cartridge, a carrier wave or any other suitable medium from which a computer can read.
It is obvious to a person skilled in the art that with the advancement of technology, the basic idea of the method to measure visibility may be implemented in various ways. The method to measure visibility and its embodiments are thus not limited to the examples described above; instead they may vary within the scope of the claims.

Claims

1. A method of measuring visibility in the presence of blocking objects, the method comprising:
Receiving (11) three dimensional point data of the blocking objects;
Receiving (16) a three dimensional analysis volume;
Analyzing (17, 18) visibility in presence of blocking objects by analyzing the three dimensional point data representing the blocking objects inside the received analysis volume and estimating the visibility from the point analysis.
2. The method according to claim 1, wherein the method further comprises producing an analysis volume around a line of sight vector; and
the producing of the line of sight vector further comprises:
Receiving a view point of the viewer; and
Producing a three dimensional line of sight (15) from the view point to a location (13) .
3. A method according to claim 2:
wherein said analyzing visibility in the presence of blocking objects comprises selecting the three dimensional points indicating the blocking objects in the proximity of the three dimensional line of sight and estimating the visibility from the returns.
4. A method according to claim 2 or 3, wherein at least a portion of the received three dimensional point data is received from a direction different from the line of sight.
5. A method according to any of preceding claims, wherein the three-dimensional point data has been acquired from the air.
6. The method according to any of preceding claims 1 - 5, wherein the three dimensional point data is data from a LiDAR sensor.
7. The method according to any of preceding claims 1 - 6, wherein the three dimensional point data is received from stereogrammetry .
8. The method according to any of preceding claims 1 - 7, wherein the blocking objects are pieces of vegetation.
9. The method according to any of preceding claims 1 - 8, comprising:
Making a model between a statistic from the three dimensional points indicating the blocking objects in the proximity of the line of sight and the visibility from the view point along the line of sight; and
Using the said model to predict the visibility.
10. The method according to any of preceding claims 1 - 9, the method further comprising:
Measuring (19) the visibility of the point of interest from the view point by estimating the visibility of a standard target, placed at the point of interest;
and using the said visibility metric in the reference measurement for visibility.
11. The method according to any of preceding claims 1 - 10, the method further comprising:
Presenting the visibility in units of distance where a target may be detected or viewed at a desired level of detail.
12. The method according to any of preceding claims 1 - 11, wherein the method further comprises deciding on performing vegetation cutting, trimming, removal or application of herbicide.
13. The method according to claim 12, wherein the method further comprises performing the vegetation cutting, trimming, removal or application of herbicide in accordance with the decision.
14. A system comprising at least one processor, at least one memory and a data communication connection, wherein the system is configured to perform the method according to any of preceding claims 1 - 12.
15. A computer program comprising computer program code, wherein the computer program is configured to cause a computing device to perform a method according to any of claims 1 - 12 when the computer program is executed in a computing device.