EP3739291A1 - Procédé de détermination automatique de la position et de l'orientation pour un balayeur laser terrestre - Google Patents


Info

Publication number
EP3739291A1
Authority
EP
European Patent Office
Prior art keywords
scan
image
images
panorama
location
Prior art date
Legal status
Pending
Application number
EP19175241.9A
Other languages
German (de)
English (en)
Inventor
Bernhard Metzler
Current Assignee
Hexagon Technology Center GmbH
Original Assignee
Hexagon Technology Center GmbH
Priority date
Filing date
Publication date
Application filed by Hexagon Technology Center GmbH filed Critical Hexagon Technology Center GmbH
Priority to EP19175241.9A priority Critical patent/EP3739291A1/fr
Priority to US16/876,041 priority patent/US11604065B2/en
Publication of EP3739291A1 publication Critical patent/EP3739291A1/fr
Priority to US17/857,758 priority patent/US11740086B2/en

Classifications

    • G01C15/002 Active optical surveying means
    • G01C1/04 Theodolites combined with cameras
    • G01C11/02 Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
    • G01C21/20 Instruments for performing navigational calculations
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/337 Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods involving reference images or patches
    • G06T7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G06V20/10 Terrestrial scenes
    • G06T2200/04 Indexing scheme for image data processing or generation, in general, involving 3D image data
    • G06T2207/10024 Color image
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Definitions

  • The invention relates to a position and orientation determination method for a terrestrial surveying device with scanning functionality according to claim 1, a corresponding surveying device as claimed in claim 14, and a corresponding computer program product as claimed in claim 15.
  • 3D scanning is a very effective technology to produce millions of spatial measuring points of objects within minutes or seconds.
  • Typical measurement tasks are the recording of objects or their surfaces such as industrial plants, house facades or historical buildings, but also of accident sites and crime scenes.
  • Measurement devices with scanning functionality are, for example, laser scanners such as the Leica P20, or total stations or multi-stations such as the Leica Multi Station 50, which are used to measure or create 3D coordinates of surfaces. For this purpose, they must be able to guide the measuring beam of a distance measuring device over a surface and at the same time record the direction and distance to the measuring point. From the distance and the correlated directional information for each point, a so-called 3D point cloud is generated by means of data processing.
  • Such terrestrial scanners are thus designed to detect a distance to an object point, as a measuring point, with a (mostly electro-optical and laser-based) distance meter.
  • A directional deflection unit, which is also present, is designed in such a way that the measuring beam of the rangefinder is deflected in at least two independent spatial directions, whereby a spatial measurement area can be recorded.
  • The range finder can be designed, for example, according to the principles of time-of-flight (TOF), phase, wave-form digitizer (WFD) or interferometric measurement. For fast and accurate scanners, a short measuring time combined with high measuring accuracy is required, for example a distance accuracy in the µm range or below, with measuring times for the individual points in the range from sub-microseconds to milliseconds.
  • The measuring range extends from a few centimeters to a few kilometers.
  • The deflection unit can be implemented in the form of a moving mirror or, alternatively, by other elements suitable for the controlled angular deflection of optical radiation, such as rotatable prisms, movable light guides, deformable optical components, etc.
  • The measurement is usually carried out by determining distances and angles, i.e. in spherical coordinates, which can also be transformed into Cartesian coordinates for display and further processing.
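The spherical-to-Cartesian transformation mentioned above can be sketched as follows; this is a minimal illustration, and the axis and angle conventions (vertical angle measured from the zenith) are assumptions, since real instruments differ in their conventions:

```python
import math

def spherical_to_cartesian(distance, h_angle_rad, v_angle_rad):
    """Convert a scanner measurement (distance, horizontal angle,
    vertical angle from the zenith) to Cartesian x, y, z.
    Axis convention is illustrative; real instruments differ."""
    x = distance * math.sin(v_angle_rad) * math.cos(h_angle_rad)
    y = distance * math.sin(v_angle_rad) * math.sin(h_angle_rad)
    z = distance * math.cos(v_angle_rad)
    return x, y, z

# A point 10 m away, straight ahead at instrument height:
print(spherical_to_cartesian(10.0, 0.0, math.pi / 2))  # ≈ (10.0, 0.0, 0.0)
```

Applied per scan point, this turns the raw (angle, angle, distance) tuples into the point cloud coordinates used for display and further processing.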
  • In order to set the measured object point coordinates in relation to a known reference coordinate system or, in other words, to obtain absolute point coordinates, the location of the surveying device must be known absolutely. Basically, the station coordinates of the surveying device can be determined on site as so-called free stationing by capturing points that have already been absolutely referenced. For example, several target points of known position present in the measurement environment are recorded during the scan, whereby the orientation or alignment of the device can also be determined. Such referencing, however, usually requires at least partially manual implementation, which is time-consuming and prone to errors.
  • Alternatively, the live position of the surveying device can be determined with a GNSS receiver attached to the surveying device, using individual satellites as reference points.
  • The disadvantage is the lower resolution, in particular with regard to the height determination, compared with referencing by means of geodetic surveying.
  • The alignment of the scanner can only be determined with several receivers, and even then with a still lower resolution.
  • In addition, the process depends on the reception of GNSS signals, which is hardly or not at all available, especially indoors or in narrow, angled measurement environments.
  • Image-based referencing methods are also known.
  • EP 2199828 A2 proposes a method, within the scope of mobile scanning, with which the position of the surveying device relative to a (movement) reference system is determined from two 3D images of a surrounding area recorded at different angular positions.
  • The disadvantage here is that the creation of at least two 3D images from different angular positions still requires a great deal of effort.
  • Also known is a generic laser scanner which has an inertial measurement unit (IMU) for the purpose of tracking a change in position between two measurements in a measurement environment.
  • The disadvantage of this solution is the need to install a correspondingly sensitive inertial measuring unit in the laser scanner, as well as the limitation that only an uninterrupted change between two locations can be tracked; a determination of the current position in the measuring environment is not possible if, for example after a longer period of time such as hours, days, weeks or longer, scanning is to take place again in the measuring environment.
  • The object of the present invention is therefore to provide a simplified, fully automatic position determination method for a terrestrial scanning surveying device.
  • The present invention relates to a fully automatic method for determining the current georeferenced position and alignment of a terrestrial surveying device with scan functionality at the current location, using a set of stored georeferenced 3D scan panorama images, i.e. panorama images that also represent 3D coordinate data obtained by means of a scan.
  • The 3D scan panorama images are stored, for example, in a computer memory of the surveying device itself or on an external storage medium, for example a cloud, from which they are retrieved by the surveying device, for example via Bluetooth, Internet or WLAN.
  • The method is preferably carried out on a computer in the surveying device itself; however, processing can also take place externally, for example on an external computer or in a cloud, from where the calculated data are transmitted to the surveying device.
  • The method includes recording a panorama image with the surveying device from the current location, a plurality of object points being mapped in the recorded panorama image.
  • A panorama image can be recorded, for example, by means of a super wide-angle lens, created by panning the camera all around, or composed of several individual images recorded in different directions.
  • At least one 3D scan panorama image is identified with object points that match the recorded panorama image.
  • Matching object points are optionally determined using feature matching algorithms, in particular SIFT, SURF, ORB or FAST algorithms.
  • The respective georeferenced 3D coordinates of the corresponding object points are determined (e.g. retrieved or calculated) on the basis of the identified 3D scan panorama image, and the current georeferenced position and alignment of the surveying device are calculated, e.g. by means of resection, based on the positions of the matching object points in the recorded panorama image and their determined georeferenced 3D coordinates.
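The correspondence search between the recorded panorama image and a stored 3D scan panorama image could, for example, look as follows, assuming feature descriptors (e.g. from SIFT or ORB) have already been extracted from both images. The descriptor values below are toy data and the ratio-test threshold is illustrative, not taken from the patent:

```python
def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour descriptor matching with a Lowe-style ratio test.
    desc_a, desc_b: lists of descriptor vectors (lists of floats);
    desc_b needs at least two entries. Returns index pairs (i, j)."""
    def dist2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))

    matches = []
    for i, d in enumerate(desc_a):
        ranked = sorted(range(len(desc_b)), key=lambda j: dist2(d, desc_b[j]))
        best, second = ranked[0], ranked[1]
        # Accept only if the best match is clearly better than the runner-up.
        if dist2(d, desc_b[best]) < (ratio ** 2) * dist2(d, desc_b[second]):
            matches.append((i, best))
    return matches

# Toy descriptors: point 0 of image A clearly corresponds to point 1 of B.
a = [[1.0, 0.0], [0.0, 5.0]]
b = [[0.0, 5.1], [1.0, 0.1]]
print(match_descriptors(a, b))  # → [(0, 1), (1, 0)]
```

The accepted pairs are the "matching object points" whose 3D coordinates are then read from the identified 3D scan panorama image for the resection.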
  • The stored 3D scan panorama images represent 3D point clouds embedded in or linked to respective 2D panorama images, for example color images with a "resolution" of 360*180 pixels, whereby the 2D panorama images can also be designed as individual 2D images that are e.g. put together by stitching to produce the panorama view.
  • The stored 3D scan panorama images are stored, for example, in the form of cube or spherical projections, for example with a color channel and a combined depth channel.
  • The panorama image recorded at the current location is preferably a "simple" 2D (digital camera) image, which enables a comparatively simple, quick and inexpensive image acquisition with the surveying device on site.
  • The set of stored 3D scan panorama images preferably comprises images that are spatially linked to one another and represent a surrounding area that belongs together or forms a unit, the surrounding area being e.g. an industrial plant, a building or a building complex.
  • The set of 3D scan panorama images thus represents a collection of reference images that belong together, all of which relate to a specific environment or a specific object, e.g. a plant, a building cluster, a building floor or a contiguous site.
  • The stored 3D panorama images can, for example, be called up from a database, e.g. stored in a cloud, which is specifically assigned to the present measurement environment or the present object.
  • Optionally, the surveying device can be moved from the current location along a path to another location, with a series of images being continuously recorded during the movement with the surveying device as part of a SLAM process (Simultaneous Localization And Mapping), the identified 3D scan panorama image being used as an, in particular first, image of the series. The SLAM process is completed at the other location by taking a last image of the image series, recording a 3D point cloud at the other location and registering the 3D point cloud, based on the SLAM process, relative to the identified 3D scan panorama image.
  • In other words, the point cloud which is generated by laser scanning at a further stationing is registered with respect to the 3D scan panorama image using camera images linked by means of a SLAM algorithm.
  • Using the 3D scan panorama image as part of the SLAM process offers the advantage, among other things, that image points linked with 3D data are available from the outset.
  • Optionally, the number of stored (and identified) 3D scan panorama images to be used for determining the location is adapted to the present measurement situation.
  • For example, the number of stored 3D scan panorama images to be used for determining the location is set depending on a degree of similarity between the recorded panorama image and the identified 3D scan panorama image, the degree of similarity being based on a number of matching object points.
  • The degree of similarity represents, for example, a measure of the difference in position between the current location and the recording location of the identified 3D scan panorama image.
  • Alternatively or additionally, the number of stored 3D scan panorama images to be used to determine the location is set depending on the nature of the surroundings of the location determined on the basis of the recorded panorama image, in particular a minimum distance or average distance to object points.
  • Alternatively or additionally, the number of stored 3D scan panorama images to be used for determining the location is set depending on a desired degree of precision for the current location. The number of reference images on which the position or orientation determination is based is therefore adapted to the required level of accuracy.
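The adaptive choice of the number of reference images described in the preceding bullets could be realized, for instance, with a simple rule of the following kind; all thresholds are invented for illustration and are not taken from the patent:

```python
def num_reference_images(n_matches, precision="normal", max_images=5):
    """Heuristic choice of how many stored 3D scan panorama images to use.
    n_matches: matching object points with the best identified reference
    image (a proxy for the degree of similarity). Thresholds illustrative."""
    if n_matches >= 100:      # very similar: one reference image suffices
        n = 1
    elif n_matches >= 30:
        n = 2
    else:                     # few matches: draw on more reference images
        n = 3
    if precision == "high":   # a higher accuracy requirement adds references
        n += 1
    return min(n, max_images)

print(num_reference_images(150))         # → 1
print(num_reference_images(10, "high"))  # → 4
```

The same pattern could incorporate the other criteria named above (e.g. minimum or average distance to object points) as additional inputs.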
  • The invention further relates to a terrestrial surveying device with scanning functionality with a control and evaluation unit which is designed such that the method according to the invention can be carried out with it.
  • The invention also relates to a computer program product that is stored on a machine-readable carrier, in particular of a terrestrial surveying device with scanning functionality, or a computer data signal, embodied by an electromagnetic wave, with program code that is suitable for performing the method according to the invention.
  • The method according to the invention offers the advantage of a simple procedure with which the position and orientation of a scanning measuring device can be determined fully automatically, directly at the location, in real time and even before the start of the actual scanning process, when re-stationing in a measuring environment that has already been at least partially scanned.
  • The method makes it possible to calculate one's own location using only a panorama image recorded on site, for example with a conventional digital camera of the scanner. This enables successive setups on site even if the measurement activity is interrupted between two setups.
  • Advantageously, accurate location determination is also possible inside buildings or, for example, in tunnels.
  • The on-site effort can be kept low in particular because a particularly rich data source is "tapped" with the georeferenced 3D scan panorama images, which enables a robust and precise calculation of the current position and also the orientation already on the basis of a simple live image, e.g. a digital camera panorama image.
  • the data "load” has been shifted to the database to be used and the laser scanner itself advantageously only has to take a "light” image of the measurement environment.
  • In addition, scaling is given or provided from the outset, which advantageously means that no additional effort is required to determine a scaling factor, e.g. by measuring, scanning or capturing an object of known size (e.g. a measuring stick).
  • Furthermore, the method can be tailored to actual needs. This results in a tailor-made optimization of the process effort and results, without the need for user intervention. For example, this allows the process to be kept as lean as possible.
  • Figure 1 shows a terrestrial surveying device of the generic type, which can be designed, for example, as a total station with scanning functionality or a scanning module or, as shown, as a laser scanner 1, for recording (scanning) object surfaces from a stationing, with a position P and an alignment or orientation O which can be determined by means of the method explained below.
  • The device 1 has a, for example intensity-modulated, e.g. pulsed, radiation source (not shown), e.g. a laser source, and optics (not shown), so that a measuring beam L can be emitted into free space onto a target object in an emission direction, the emission direction defining a measuring axis; one or more position/angle detectors (not shown) each measure the present direction of the emission or the measuring axis in the internal reference system of the scanner 1 (i.e. relative to an internal zero direction).
  • The optics are designed, for example, as combined transmitting and receiving optics, or each comprise separate transmitting optics and receiving optics.
  • Light pulses reflected from the target object are received by the surveying device 1 and recorded by an opto-electronic detector (not shown).
  • Up to a million or more light pulses per second, and thus scan points 98, can be recorded.
  • For this purpose, the measuring radiation L or emission direction is continuously pivoted and measured, and at least one measured value is recorded successively at short time intervals per object point, including at least a distance value to the respective scan point in the internal reference system, so that a large number of scan points are generated which, as a three-dimensional point cloud, form a 3D image of the object or the measurement environment. Scan data are therefore generated for a respective object point, which contain at least angle or direction and distance information.
  • The surveying device 1 has an electronic control unit (not shown) which includes an evaluation functionality for measuring the respective distance value, for example according to the time-of-flight principle.
  • The pivoting takes place by means of a beam deflector 3, as shown, on the one hand by rotating an attachment or top part A of the surveying device 1 relative to a base B, relatively slowly, about a first, vertical axis V, step by step or continuously, so that the measurement radiation L is pivoted in the horizontal and the plurality of emission directions differ from one another in their horizontal alignment, and on the other hand by a pivotable optical component, for example a pivoting or rotating mirror, being rotated relatively quickly about a horizontal axis H, so that the measurement radiation L is pivoted in the vertical and the plurality of emission directions also differ from one another in their vertical alignment.
  • The scanning takes place within a predetermined angular range, the limits of which are defined by a horizontal and vertical pivoting width.
  • The angular range is, for example, 360° in the horizontal and 270° in the vertical, so that a spherical scan area is present which maps almost the entire surrounding area in all spatial directions.
  • However, any other angle ranges are also possible.
  • Alternatively, the vertical resolution is not implemented by an additional axis of rotation, but by several simultaneously operating transmitting and receiving units that have a certain constant angular offset in the vertical direction.
  • The laser scanner 1 also has at least one image camera 2, with which optical images of the object or the measurement environment can be recorded, preferably color images in RGB format.
  • A (2D) panorama image can be generated by means of the camera 2.
  • A panorama image can, for example, be generated in that several images are recorded and merged while pivoting through 360° with the pivoting device 3.
  • Alternatively, the laser scanner 1 has several cameras 2 which are aligned in different viewing directions and whose respective images can be combined to form a panorama image.
  • A georeferenced 3D scan panorama image, which according to the invention serves as the basis for the subsequent determination of position and orientation when re-stationing, arises in that the result of the 360° scan, i.e. the generated 3D point cloud, and the camera-based 360° (2D) panorama image of a georeferenced stationing are combined so that a 3D scan panorama image or a textured 3D point cloud is created, with optionally also a brightness or intensity value (e.g. a gray level value) of the measuring radiation L itself being recorded during the scanning and taken into account when creating the image.
  • The precise linking of the camera image data and the scan data thus provides a three-dimensional image of the object that also contains color information about the object, or a panorama image that additionally has depth information by means of the scan data.
  • In addition, a georeference is available.
  • Figure 2 shows an example of a 3D scan panorama image 4.
  • The panorama image 4 is stored as a so-called cube map, i.e. as a cube representation with six partial images, for example partial image C1 or D1.
  • The six partial images represent the forward, backward, right, left, top and bottom views.
  • Alternative examples of image formats or such image maps are spherical or dome views or projections.
  • the cube display is preferably free of distortion, so that a geometrically correct display of the scan and image data takes place.
  • The 3D scan panorama image 4 is divided into the color image component C obtained by means of the camera (top in Figure 2) and the 3D component D obtained by scanning (bottom in the figure).
  • The respective cube areas belong together, for example the cube area C1 of the color component and the cube area D1 of the 3D component.
  • In other words, the 3D scan panorama image 4 is described by four image channels: red, green, blue (C) and depth (D). Such a 4-channel image format can be stored and processed with comparatively little effort.
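Such a 4-channel cube map could be held in memory, for example, as six face arrays with red, green, blue and depth values per pixel; the face naming and data layout below are assumptions for illustration, not the patent's own format:

```python
import numpy as np

FACES = ["front", "back", "left", "right", "top", "bottom"]

def make_cube_map(face_size):
    """Allocate a 3D scan panorama image as a cube map: six faces,
    each face_size x face_size pixels with 4 channels (R, G, B, depth)."""
    return {f: np.zeros((face_size, face_size, 4), dtype=np.float32)
            for f in FACES}

cube = make_cube_map(512)
# Store a scan point seen on the front face at pixel (10, 20):
cube["front"][10, 20] = [0.8, 0.5, 0.3, 12.34]  # RGB colour + depth in metres
# The depth channel (index 3) returns the linked scan distance:
print(cube["front"][10, 20, 3])
```

Keeping colour and depth in one array per face reflects the "4-channel image" idea: every pixel carries both optical and distance information, so scaling is available immediately.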
  • Such a 3D scan panorama image 4 thus represents an information-rich image of measurement objects or an information-rich all-round view of a measurement environment, which provides not only optical image data but also positionally precisely linked distance data, whereby a scaling is automatically and immediately given or available.
  • Such a 3D scan panorama image 4, or a set of such images, thus also represents a very "powerful" referencing library which, as shown below, is advantageously used to determine the position and orientation of a scanning surveying device at an unknown, new location or when it is re-stationed.
  • Figure 3 shows a measurement environment 100, for example a building complex or a factory site. Further examples of such a coherent surrounding area are industrial plants such as refineries or factory buildings, a delimited terrain or also a single building or a single floor.
  • The building complex 100 is scanned from three locations S1-S3 as described above, with the camera of the laser scanner 1 also taking panorama images in each case.
  • 3D scan panorama images are generated as described, so that in the example there are three 3D scan panorama images 4a-4c, which are stored in a database, for example.
  • The 3D scan panorama images 4a-4c are also georeferenced, that is, they are related to an external coordinate system.
  • The coordinate system to which reference is made can be a local system, which is defined, for example, by the setup/stationing of the laser scanner 1, or a global, higher-level system, for example a geodetic coordinate system such as WGS 84.
  • This georeferencing can take place for a location S1-S3 according to methods known from the prior art, for example by scanning several known reference objects in the measurement environment 100.
  • Figure 4 shows a bird's eye view of a measuring environment 100.
  • It shows the three reference locations S1-S3, to which the three stored georeferenced 3D scan panorama images 4a-4c belong; these are all-round images (360°), form an image reference set for the measuring environment 100 and can be called up by the laser scanner, for example from a database via WLAN etc.
  • The laser scanner is now positioned at the new location S, which is to be determined and which is to serve as a station for another scan.
  • A 360° panorama image I is recorded in this position and orientation by means of the device camera.
  • The panorama image I can be a monochrome image (1-channel) or, by performing a scan at the location S and combining the scan data with the camera image data, a 3D scan panorama image (RGBD image).
  • Alternatively, the current panorama image is an image that, in contrast to the 3D scan panorama image, does not cover the full circle, but "only" a large viewing or image angle of at least 90° (e.g. 180°), or is at least a super wide-angle image with typically 92°-122° diagonal image angle or similar.
  • For this purpose, the camera 2 has, for example, a correspondingly wide field of view, for example by means of a super wide-angle or fisheye lens, or the current image is composed of several individual images recorded in different viewing directions.
  • Using 3D scan panorama images 4a-4c, that is, panorama images with depth information, as the comparison/referencing basis makes it possible to minimize the effort for creating the data to be referenced, i.e. the on-site or live image, as far as possible.
  • The current camera image I is matched with the set of stored 3D scan images 4a-4c.
  • The image matching can be used to identify one or more of the stored 3D scan images 4a-4c which have matching surrounding object points P1, P2, P3.
  • The matching of the 3D scan images 4a-4c with the panorama image I can be based, for example, on "classic" feature or keypoint matching, in particular through the use of a bag-of-words approach.
  • Alternatively or additionally, machine or deep learning is used, in particular for learning a latent representation of the image or the image features, for example an encoding.
  • An image is the input for a neural network, which supplies a so-called encoding (a vector with n elements) as output.
  • An image similarity is then determined based on the difference between the respective vectors/encodings: there is a match if the difference is small.
  • Alternatively, the image similarity can be determined by using a so-called siamese network, into which the images are input and which provides a degree of similarity as output.
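The encoding-based identification can be sketched as follows, assuming each image has already been mapped by some (hypothetical) network to an n-element encoding vector; identifying the matching stored image then reduces to a nearest-vector search:

```python
import math

def l2(u, v):
    """Euclidean distance between two encoding vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def most_similar(query_encoding, stored_encodings):
    """Return index and distance of the stored 3D scan panorama image whose
    encoding is closest to the query image's (small distance = match)."""
    dists = [l2(query_encoding, e) for e in stored_encodings]
    best = min(range(len(dists)), key=dists.__getitem__)
    return best, dists[best]

# Toy encodings for three stored images 4a-4c and the current image I:
stored = [[0.9, 0.1, 0.0], [0.1, 0.8, 0.2], [0.0, 0.1, 0.9]]
query = [0.12, 0.75, 0.25]
print(most_similar(query, stored))  # the second stored image (index 1) wins
```

A distance threshold on the returned value could additionally serve as the "degree of similarity" mentioned above for deciding how many reference images to use.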
  • In any case, at least one of the stored images 4a-4c is determined which is similar or has corresponding object points P1, P2, P3.
  • The identification of matching points P1, P2, P3 can be based on feature matching, e.g. by using a SIFT, SURF, ORB and/or FAST algorithm, and/or on feature encoding based on deep learning.
  • Optionally, said image encoding vectors, or also coarsened/scaled-down versions (thumbnails) of the images 4a-4c or I, form the image pool for matching instead of the "original" image files.
  • Using downscaled or coarsened image representations has the advantage of a lower data volume, which enables faster data transmission and/or processing.
  • The lower amount of data is used, for example, in such a way that the database of the 3D scan panorama images is stored in the memory of the surveying device in the form of small image files, and the uncompressed reference images (only) on an external memory (so that a relatively small memory space on the device itself is sufficient), from where individually matched images (identified on the basis of the small 3D scan panorama images) can be downloaded in their original format/size.
  • The matching can also take place as a multi-stage process, by first making a first selection from the stored 3D panorama images 4a-c using such images of small data size, and then carrying out a further matching using the complete, uncompressed 3D panorama images 4a-c of the selection, in which mismatches are sorted out.
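The multi-stage matching with coarsened images could be sketched like this; the downscaling (average pooling), the pixel-difference measure and all sizes are illustrative assumptions, and the second, full-resolution stage is only indicated:

```python
def downscale(img, factor):
    """Coarsen a grayscale image (list of rows) by average pooling."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(0, h - h % factor, factor):
        row = []
        for x in range(0, w - w % factor, factor):
            block = [img[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

def img_dist(a, b):
    """Mean absolute pixel difference between two equally sized images."""
    n = len(a) * len(a[0])
    return sum(abs(p - q) for ra, rb in zip(a, b) for p, q in zip(ra, rb)) / n

def two_stage_select(query, references, factor=2, keep=2):
    """Stage 1: pre-select `keep` candidates on thumbnails only.
    Stage 2 (matching on the full-size images to sort out mismatches)
    would then run on this short list."""
    q = downscale(query, factor)
    ranked = sorted(range(len(references)),
                    key=lambda i: img_dist(q, downscale(references[i], factor)))
    return ranked[:keep]

# Toy 4x4 images: reference 0 is identical to the query, reference 2 close.
flat = lambda v: [[v] * 4 for _ in range(4)]
print(two_stage_select(flat(5.0), [flat(5.0), flat(9.0), flat(5.5)]))  # → [0, 2]
```

Only the short-listed candidates would then need to be fetched in their original format/size from the external memory, matching the storage scheme described above.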
  • The matching object points P1-P3 are then used to calculate the position and orientation at the location S, as described below by way of example.
  • Figure 6 shows schematically the current location S with the 2D panoramic image I recorded there and the two reference locations S1 and S2, for which the reference 3D scan panoramic images 4a and 4b are available. Also shown are object points P1-P3 that correspond to the current image I, the reference image 4a having three object points P1-P3 corresponding to the current image I and the reference image 4b having a point P1.
  • Since the reference images 4a-4c are 3D scan images, the 3D coordinates of all object points P1-P3 can be determined from the respective image data.
  • The horizontal and vertical angles of a respective point P1-P3 are calculated from its position in image 4a, from which the (georeferenced) 3D coordinates (X, Y, Z) of the respective point P1-P3 are calculated (indicated in Figure 6 by the arrows 5).
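For an equirectangular panorama, this derivation can be sketched as follows. This is a simplified model, with a linear pixel-to-angle mapping and a known measured distance per pixel; the patent does not prescribe a particular panorama projection:

```python
import math

def panorama_pixel_to_angles(u, v, width, height):
    """Equirectangular panorama: column u -> horizontal angle (azimuth,
    0..2*pi), row v -> vertical angle (elevation, +pi/2 at the top row,
    -pi/2 at the bottom row)."""
    az = 2.0 * math.pi * u / width
    el = math.pi / 2.0 - math.pi * v / height
    return az, el

def angles_to_xyz(az, el, dist, station):
    """Polar-to-Cartesian conversion: georeferenced 3D point coordinates from
    the station position, the horizontal / vertical angles and the measured
    distance."""
    x0, y0, z0 = station
    return (x0 + dist * math.cos(el) * math.cos(az),
            y0 + dist * math.cos(el) * math.sin(az),
            z0 + dist * math.sin(el))
```

A point in the middle row of the panorama, a quarter-turn from the zero column, thus maps to azimuth 90°, elevation 0°, and its 3D coordinates lie at the measured distance along that direction from the station.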
  • Alternatively, the respective georeferenced 3D point coordinates can be determined, for example, by looking them up in a table in which 3D coordinates are assigned to each image point.
  • The position and orientation of the laser scanner are then calculated from the matching points. This calculation is done, for example, by resection, as indicated by way of example in Figure 6 by the arrows 6.
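For the simplified 2D (horizontal) case, such a resection can be sketched as a Gauss-Newton adjustment that recovers the station position and orientation from directions observed to known points. This is an illustrative implementation under simplifying assumptions, not the patent's method:

```python
import numpy as np

def wrap(a):
    """Wrap angle(s) to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def resect_2d(points, bearings, x0, y0, h0, iters=20):
    """2D resection by Gauss-Newton: recover the station position (x, y) and
    orientation h from directions observed (relative to the instrument's zero
    direction) to known points.

    points: (n, 2) array of known 2D point coordinates, n >= 3.
    bearings: (n,) array of observed directions in radians.
    x0, y0, h0: initial guess for position and orientation."""
    x, y, h = float(x0), float(y0), float(h0)
    for _ in range(iters):
        dx = points[:, 0] - x
        dy = points[:, 1] - y
        d2 = dx ** 2 + dy ** 2
        # Residuals: predicted bearing minus observed bearing.
        r = wrap(np.arctan2(dy, dx) - h - bearings)
        # Jacobian of the predicted bearing w.r.t. (x, y, h).
        J = np.column_stack((dy / d2, -dx / d2, -np.ones_like(dx)))
        delta = np.linalg.lstsq(J, -r, rcond=None)[0]
        x += delta[0]; y += delta[1]; h += delta[2]
    return x, y, h
```

With three or more well-distributed points the adjustment is overdetermined, so the least-squares solution also averages out small matching errors.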
  • In one option, the position and orientation of the laser scanner at the location S are determined on the basis of only the one reference image 4a of reference location S1, without using the likewise identified second 3D scan image 4b of the further reference location S2.
  • Optionally, all identified 3D scan panorama images 4a, 4b, or the available corresponding points P1-P3 of all identified reference images 4a, 4b, are used to calculate the position and orientation, for example up to a defined maximum number.
  • In the situation shown, the current location S is close to the reference location S1.
  • A comparatively large number of matching points is therefore determined, for example one hundred or more, especially since all-round images are available from both locations S and S1.
  • This large number of points, of which three (P1-P3) are shown as an example, allows a robust and precise calculation of the current position and orientation, especially since points whose 3D coordinates differ significantly can be selected.
  • One advantage of automatically adapting the reference data set used for location determination to the circumstances or requirements is that an optimally adapted effort is made. In other words, it can be ensured that as much (process) effort is expended as necessary and at the same time as little as possible, which is particularly advantageous for mobile devices such as a laser scanner with limited electrical and computational capacity. Exactly as many 3D scan panorama images are used as the measurement situation requires. For example, the method can thus be adapted to whether the position and orientation of the scanner are determined in narrow, angled rooms or in free, wide halls or open areas.
  • Figure 7 further explains the method with the above-described automatic adaptation of the number of stored 3D scan panorama images used for location determination to the current measurement situation.
  • In step 7, a panorama image is recorded at the current location, whose current position and orientation are to be determined.
  • This recorded panorama image is matched against the set of stored 3D scan panorama images 4, whereby in the example several of the stored 3D scan panorama images are identified (step 9) that have points or image features matching the recorded image.
  • In step 10, using a predetermined first criterion K1, the number of reference images to be used for the actual position determination is set as an adaptation to the specific measurement situation.
  • The criterion K1 is, for example, a measure of the similarity of the current panorama image to one or more of the stored panorama images. If, for example, a high degree of similarity is found, the number of reference images used for location determination is kept small, e.g. limited to one image. If, on the other hand, a low degree of correspondence is found, a comparatively large number of reference images is used, e.g. the maximum number of matches available.
  • The degree of similarity is based, for example, on the number of corresponding object points of a respective reference image that match the recorded image.
  • Alternatively or additionally, the type of the matching features is used as a criterion to determine the similarity.
  • For example, image descriptors that describe dominant lines in the image can be used.
  • A degree of similarity can also be determined on the basis of properties that describe the respective images as a whole, or of statistical image properties, for example gray-value or color histograms, gradients, or functions of brightness or surface normals.
  • The similarity measure K1 can also be based on a deep-learning-based encoding of the image, or on a similarity measure determined with a Siamese network.
  • Such a measure of similarity K1, or, seen the other way round, a measure of difference, can also serve as a measure of the positional difference between the current location and a reference location.
  • The degree of similarity K1 is then specified in such a way that it can be used, at least roughly, to classify whether or not the current location is close to a (respective) reference location.
  • If a nearby reference location is found, the number of 3D scan images to be used can in turn be limited; for example, only the image (or the associated identified 3D point coordinates) of the nearby reference location is used for the location calculation. If, on the other hand, it is determined on the basis of the similarity measure K1 that no nearby reference location is available, the number of reference images to be used is set, for example, to 3, 5 or 10.
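The adaptation rule based on K1 might be sketched as a simple threshold function; the thresholds hi and lo and the middle value of five images are illustrative assumptions, not values from the patent:

```python
def num_reference_images(similarity, max_available, hi=0.8, lo=0.3):
    """Adapt the number of reference images to criterion K1
    (similarity normalized to [0, 1]):
    high similarity  -> one nearby reference image suffices;
    medium similarity -> a few images;
    low similarity   -> use the maximum number of matches available."""
    if similarity >= hi:
        return 1
    if similarity >= lo:
        return min(5, max_available)
    return max_available
```

The same structure works for other choices of K1, e.g. a distance-derived measure, as long as it can be mapped to a normalized score.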
  • After adapting the number of reference images in step 10, the position and orientation of the current stationing are calculated in step 11 as described above, using the 3D coordinates of the object points of the selected reference image(s) and their positions in the images, and the method is ended (step 13; for step 12 see below).
  • A further possible adaptation criterion K1 is a property of the location environment, that is to say a criterion K1 describing the nature of the measurement environment.
  • For example, the distance to object points is checked (e.g. the smallest measured distance that occurs, or a mean distance), and a greater number of reference images to be used is set for large distances than for small distances.
  • The distances to the scan points or to scanned objects can, for example, indicate whether the measurement environment is open terrain or a wide area, in which a relatively large number of identified images are possible or necessary for referencing, or an angled terrain or a narrow room, in which few images are possible or necessary for referencing.
  • Such a property of the respective location environment can already be annotated in a respective 3D scan panorama image and retrieved directly, for example as metadata.
  • Distances to object points can also be related to the current location, for example evaluated after the current position and orientation have been calculated; the number of reference images is then not adapted until after step 11, if it is determined on the basis of criterion K1 that, due to the nature of the measurement environment, an optimal result is only possible with an increased number of reference images.
  • Such a regulation of the number of reference images used for the location calculation can, as shown in Figure 7, optionally also be based on a further criterion K2.
  • A further criterion K2 is, for example, a measure of the precision of the position and / or orientation calculated in step 11. If it is determined that the accuracy does not meet criterion K2 (step 12), the method returns to step 10 and the number of images to be used is increased and thus adapted. With this option, at least one second 3D scan panorama image is used if necessary. If the current location is determined with sufficient precision thanks to the use of a further reference location or of further 3D object point coordinates, the location determination is ended with step 13, so that, for example, a scan can then be carried out automatically from the now known location.
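The control loop of steps 10 to 13 with criterion K2 can be sketched as follows, where solve_pose stands in for the actual position calculation of step 11. It is a hypothetical callback for illustration, not an API defined in the patent:

```python
def locate_with_precision_target(solve_pose, n_start, n_max, k2_threshold):
    """Steps 10-13 as a control loop: compute the pose with n reference
    images, check the precision estimate against criterion K2, and if it is
    not met, increase n and recompute.

    solve_pose(n) -> (pose, precision_estimate), where a smaller
    precision_estimate is better (e.g. a standard deviation in metres)."""
    n = n_start
    while True:
        pose, precision = solve_pose(n)
        if precision <= k2_threshold or n >= n_max:
            return pose, precision, n
        n += 1
```

Capping the loop at n_max keeps the effort bounded even when the environment never allows the precision target to be reached.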
  • Figure 8 shows a further development of the method according to the invention.
  • In this development, a 3D scan panorama image 4a identified for the location S is used as a starting point in order to register a point cloud PC recorded at a further location S' by means of a SLAM process (Simultaneous Localization And Mapping).
  • The laser scanner 1 is moved from the current location S, for which at least one 3D scan panorama image 4a is identified as described above, along a path (symbolized by arrow 15) to the further location S'.
  • A series of images 14 is recorded along the way by means of the camera 2 of the surveying device 1, for example through continuous photography or in the form of a video.
  • The 3D scan panorama image 4a is integrated as part of the image series 14.
  • The image data of the image series 14 are processed in a manner known per se using a SLAM algorithm in such a way that they are spatially linked to one another, whereby, based on the 3D scan panorama image 4a as the start image, the last image recorded at the further location S' can be set in spatial reference (position and orientation) to the start image 4a.
  • Points P1-P3 of the 3D scan panorama image 4a that are also recognized in the first subsequent camera image are used for the SLAM process, as are points V1-V3, which correspond between successive images, and finally points C1-C3.
  • These last-mentioned points C1-C3 also have correspondences in the point cloud PC, which, in addition to the camera image, is recorded by means of a scan at the location S' (symbolized by lines 16).
  • The point cloud PC can thus also be set in spatial relation to the 3D scan panorama image 4a, i.e. registered relative to it.
  • The correspondences between the images of the image series 14, including the 3D scan panorama image 4a, are determined, for example, by means of feature matching.
  • The assignment of the points C1-C3 of the point cloud PC to the last camera image of the series (the camera image of the location S') can also be done "more easily", without image matching, if a calibration of camera 2 to the scan module of the surveying device 1 exists, which is normally the case.
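The chaining of the image series and the final registration of the point cloud PC can be sketched with homogeneous 4x4 transforms; the SLAM estimation of the relative poses between successive images is assumed to have been done elsewhere:

```python
import numpy as np

def chain(relative_poses):
    """Compose the relative 4x4 poses estimated between successive images of
    the series 14 into the pose of the last image (location S') expressed in
    the frame of the start image, i.e. the identified 3D scan panorama
    image 4a."""
    T = np.eye(4)
    for T_rel in relative_poses:
        T = T @ T_rel
    return T

def register_point_cloud(points, T_last):
    """Transform a point cloud given in the frame of the last image
    (location S') into the georeferenced frame of the start image 4a.

    points: (n, 3) array; T_last: 4x4 pose from chain()."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])
    return (T_last @ homo.T).T[:, :3]
```

Because the start image 4a is georeferenced, composing the chain once is enough to georeference the entire point cloud recorded at S'.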

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Automation & Control Theory (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)
EP19175241.9A 2019-05-17 2019-05-17 Procédé de détermination automatique de la position et de l'orientation pour un balayeur laser terrestre Pending EP3739291A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP19175241.9A EP3739291A1 (fr) 2019-05-17 2019-05-17 Procédé de détermination automatique de la position et de l'orientation pour un balayeur laser terrestre
US16/876,041 US11604065B2 (en) 2019-05-17 2020-05-16 Fully automatic position and alignment determination method for a terrestrial laser scanner and method for ascertaining the suitability of a position for a deployment for surveying
US17/857,758 US11740086B2 (en) 2019-05-17 2022-07-05 Method for ascertaining the suitability of a position for a deployment for surveying

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP19175241.9A EP3739291A1 (fr) 2019-05-17 2019-05-17 Procédé de détermination automatique de la position et de l'orientation pour un balayeur laser terrestre

Publications (1)

Publication Number Publication Date
EP3739291A1 true EP3739291A1 (fr) 2020-11-18

Family

ID=66625016

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19175241.9A Pending EP3739291A1 (fr) 2019-05-17 2019-05-17 Procédé de détermination automatique de la position et de l'orientation pour un balayeur laser terrestre

Country Status (1)

Country Link
EP (1) EP3739291A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114332402A (zh) * 2021-12-23 2022-04-12 中交第二公路勘察设计研究院有限公司 融合地面式、手持式激光扫描的钢桥模拟预拼装方法

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2199828A2 (fr) 2008-11-26 2010-06-23 Riegl Lasermeasurement Systems GmbH Procédé de détermination de la position relative d'un scanner laser par rapport à un système de référence
DE102013110581A1 (de) 2013-09-24 2015-03-26 Faro Technologies, Inc. Vorrichtung zum optischen Abtasten und Vermessen einer Umgebung
US20160146604A1 (en) * 2013-07-04 2016-05-26 Hexagon Technology Center Gmbh Positioning method for a surveying instrument and said surveying instrument
US20160187130A1 (en) * 2014-12-19 2016-06-30 Leica Geosystems Ag Method for determining a position and orientation offset of a geodetic surveying device and such a surveying device
US20170301132A1 (en) * 2014-10-10 2017-10-19 Aveva Solutions Limited Image rendering of laser scan data
US20180158200A1 (en) * 2016-12-07 2018-06-07 Hexagon Technology Center Gmbh Scanner vis

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2199828A2 (fr) 2008-11-26 2010-06-23 Riegl Lasermeasurement Systems GmbH Procédé de détermination de la position relative d'un scanner laser par rapport à un système de référence
US20160146604A1 (en) * 2013-07-04 2016-05-26 Hexagon Technology Center Gmbh Positioning method for a surveying instrument and said surveying instrument
DE102013110581A1 (de) 2013-09-24 2015-03-26 Faro Technologies, Inc. Vorrichtung zum optischen Abtasten und Vermessen einer Umgebung
US20170301132A1 (en) * 2014-10-10 2017-10-19 Aveva Solutions Limited Image rendering of laser scan data
US20160187130A1 (en) * 2014-12-19 2016-06-30 Leica Geosystems Ag Method for determining a position and orientation offset of a geodetic surveying device and such a surveying device
US20180158200A1 (en) * 2016-12-07 2018-06-07 Hexagon Technology Center Gmbh Scanner vis

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114332402A (zh) * 2021-12-23 2022-04-12 中交第二公路勘察设计研究院有限公司 融合地面式、手持式激光扫描的钢桥模拟预拼装方法
CN114332402B (zh) * 2021-12-23 2024-04-02 中交第二公路勘察设计研究院有限公司 融合地面式、手持式激光扫描的钢桥模拟预拼装方法

Similar Documents

Publication Publication Date Title
EP3017275B1 (fr) Procédé de détermination d'une position pour un appareil de mesure et appareil de mesure afférent
EP3034995B1 (fr) Procédé de détermination d'un décalage d'orientation ou de position d'un appareil de mesure géodésique et appareil de mesure correspondant
DE69831181T2 (de) Positionsbestimmung
DE112010004767B4 (de) Punktwolkedaten-Verarbeitungsvorrichtung, Punktwolkedaten-Verarbeitungsverfahren und Punktwolkedaten-Verarbeitungsprogramm
DE102012112321B4 (de) Vorrichtung zum optischen Abtasten und Vermessen einer Umgebung
DE102012112322B4 (de) Verfahren zum optischen Abtasten und Vermessen einer Umgebung
DE69624550T2 (de) Gerät und Verfahren zur Extraktion von dreidimensionalen Formen
EP2044573B1 (fr) Caméra de surveillance, procédé d'étalonnage et d'utilisation de la caméra de surveillance
EP3396409B1 (fr) Procédé d'étalonnage d'une caméra et d'un balayeur laser
EP2880853B1 (fr) Dispositif et procédé destinés à déterminer la situation d'une caméra de prise de vue
EP2918972A2 (fr) Procédé et appareil de mesure d'éloignement portatif pour la génération d'un modèle spatial
WO2002103385A1 (fr) Procede pour preparer des informations imagees
DE10308525A1 (de) Vermessungssystem
WO2006075017A1 (fr) Procede et appareil geodesique pour mesurer au moins une cible
WO2005026767A1 (fr) Procede de mesure directionnelle par rapport a un objet a mesurer
CN116778094B (zh) 一种基于优选视角拍摄的建筑物变形监测方法及装置
DE102019216548A1 (de) Verfahren und mobile Erfassungsvorrichtung zur Erfassung von Infrastrukturelementen eines unterirdischen Leitungsnetzwerks
EP1460454A2 (fr) Procédé de traitement combiné d'images à haute définition et d'images vidéo
DE112019004963T5 (de) Optikbasiertes mehrdimensionales Ziel- und Mehrfachobjekterkennungs- und verfolgungsverfahren
DE202010017889U1 (de) Anordnung zur Aufnahme geometrischer und photometrischer Objektdaten im Raum
AT511460B1 (de) Verfahren zur bestimmung der position eines luftfahrzeugs
DE4416557A1 (de) Verfahren und Vorrichtung zur Stützung der Trägheitsnavigation eines ein entferntes Ziel autonom ansteuernden Flugkörpers
EP3539085B1 (fr) 3d localisation
EP3739291A1 (fr) Procédé de détermination automatique de la position et de l'orientation pour un balayeur laser terrestre
EP3584536A1 (fr) Appareil d'observation terrestre à fonctionnalité de détermination de pose

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20210517

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20221207