WO2021083915A1 - Method and mobile detection device for detecting infrastructure elements of an underground line network - Google Patents
Method and mobile detection device for detecting infrastructure elements of an underground line network
- Publication number
- WO2021083915A1 (PCT/EP2020/080210)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- detection device
- information
- mobile detection
- image data
- point cloud
- Prior art date
Classifications
- G01C7/06—Tracing profiles of cavities, e.g. tunnels
- G01C15/00—Surveying instruments or accessories not provided for in groups G01C1/00-G01C13/00
- G01C15/002—Active optical surveying means
- G01C21/1652—Inertial navigation (dead reckoning by integrating acceleration or speed) combined with non-inertial navigation instruments, with ranging devices, e.g. LIDAR or RADAR
- G01C21/1656—Inertial navigation combined with non-inertial navigation instruments, with passive imaging devices, e.g. cameras
- G01S17/86—Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
- G01S17/894—3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
- G01C21/005—Navigation with correlation of navigation data from several sources, e.g. map or contour matching
- G01C21/3848—Electronic maps; creation or updating of map data from both position sensors and additional sensors
- G01S13/90—Radar or analogous systems for mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar (SAR)
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06T19/006—Mixed reality
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
Definitions
- The invention relates to a method for the positionally correct detection of exposed infrastructure elements arranged in the ground, in particular in an open excavation, with a mobile detection device.
- The invention further relates to a mobile detection device for the positionally correct detection of exposed infrastructure elements arranged in the ground, in particular in an open excavation.
- The exposed infrastructure elements are in particular infrastructure elements of distribution networks.
- Underground infrastructure elements are usually found in large numbers in so-called line networks. These line networks are divided into so-called transmission and distribution networks, which differ in their network structure, in the way they are laid and in their regulatory framework. While transmission networks consist of superordinate, large individual long-distance lines with straight courses for national and international transport, the distribution networks, with their high degree of intermeshing of several infrastructure elements and their small-scale, highly branched structures, handle the regional redistribution to the end consumers. Infrastructure elements of transmission networks are also laid significantly deeper than those of distribution networks.
- The client currently does not receive a georeferenced, three-dimensional model of the installed infrastructure elements, which could be used, e.g., for quality checks of compliance with guidelines or for later management information.
- For reasons of cost and time, the construction companies on site often only produce hand-drawn sketches using a tape measure. Some of these are very error-prone and also imprecise.
- In the documentation drawings, a line as an infrastructure element is usually shown only as a sequence of polygons; the actual geometric course of the line is consequently disregarded.
- US 2014 210 856 A1 describes a method for recording and visualizing infrastructure elements of a line network that are concealed in a wall or floor element of a building.
- The infrastructure elements are recorded by means of a laser scanner.
- In addition, a control point with known coordinates is recorded.
- A 3D model of the infrastructure elements is created whose coordinates are defined relative to the control point.
- A marker is placed in a visible location. To visualize the infrastructure elements, which are now concealed, this marker is captured by a camera of a mobile display unit and the 3D model of the infrastructure elements is displayed superimposed on the camera image in the display unit.
- WO 2018/213 927 A1 describes a method for detecting exposed infrastructure elements of a large supra-regional long-distance line ("pipeline") in a transmission network, which pursues the goal of checking the minimum cover depth required by the regulatory authorities.
- A platform mounted on a vehicle outside the excavation is moved forward along the exposed pipeline at constant speed.
- A local point cloud is generated using a conventional LIDAR measuring device that is connected to the mobile platform via a mechanical device.
- A geometric feature, for example the longitudinal axis of the pipeline, is identified in the local point cloud with the aid of an edge detection algorithm.
- The geometric feature can be linked to absolute position data obtained via a global navigation satellite system.
- This system is designed to check the laying depths of pipelines that are exposed for longer periods in rural areas, with comparatively large diameters of approx. 1 m and straight, predictable courses.
- This method is therefore not suitable for the positionally correct detection of infrastructure elements in underground distribution networks, such as glass fiber lines with small cross-sections and branched courses, especially in urban areas.
- The processes of civil engineering projects in urban and suburban distribution networks are more fragmented than in pipeline construction, given the traffic regulations in the street space and the often limited underground space available, and the excavation pits are typically between 0.3 and 2 m deep. In such civil engineering projects, it is necessary to record the infrastructure elements with an absolute accuracy in the range of a few centimeters.
- The mobile platform known from WO 2018/213 927 A1, which is mounted on a vehicle, a robot or an unmanned flight system, is not suitable for detecting infrastructure elements of underground line networks in the distribution network and can pose an additional risk for construction site workers and/or surrounding passers-by. From a technical point of view, this method is also inadequate, especially in urban areas: when local point clouds are generated with LIDAR alone, sensor-typical and undesirable drift effects and the resulting inaccuracies occur. These make it impossible to achieve the absolute accuracy in the single-digit centimeter range that is required when mapping exposed infrastructure elements of underground line networks in the distribution network.
- US Pat. No. 9 230 453 B2 describes a method for detecting an exposed infrastructure element in which a QR code manually attached to the infrastructure element is read out using a LIDAR scanner or one or more cameras in order to determine its attributes.
- A method for recording exposed infrastructure elements with absolute georeferencing is not described.
- Instead, environment-relevant objects whose coordinates are already known in advance in the respective official coordinate system must be provided with target markers and recorded with one or more cameras or LIDAR. These environment-relevant objects must therefore in turn be measured in a further, preceding step by experts with additional, conventional and expensive GNSS surveying devices or total station equipment.
- The system known from US Pat. No. 9 230 453 B2 consists of several separate components:
- The data are first recorded by a device such as, for example, a LIDAR system or a camera system with several cameras, and then sent to a data processing system via a communication network.
- The separate data processing device converts the data into a 3D point cloud using the "AutoCAD" software; the "Photo Soft" software and additional software are then used to recognize QR codes and target markers. This data has to be imported and exported manually between the programs. If absolute georeferencing is required, a surveying system and a target marker must also be used.
- The object of the invention is therefore to enable the positionally correct detection of infrastructure elements of an underground line network, especially in a distribution network, with an absolute accuracy of a few centimeters, with a reduced number of work steps, without expert knowledge and with compensation of almost all interference and sensor-typical measurement uncertainties.
- Image data and/or depth data of a scene that contains at least one exposed infrastructure element arranged in the ground are acquired by means of a 3D reconstruction device of the mobile detection device, and a 3D point cloud with several points is generated on the basis of this image data and/or depth data;
- signals from one or more global navigation satellite systems are received by means of one or more receivers of the mobile detection device, and a first position indication of the position of the detection device in a global reference system is determined;
- several second position indications of the position of the detection device in a local reference system and several orientation indications of the orientation of the detection device in the local reference system are determined, wherein a. one of the second position indications and one of the orientation indications is determined by means of an inertial measurement unit of the mobile detection device, which detects linear accelerations of the mobile detection device along three orthogonal main axes of the local reference system and angular velocities of the rotation of the mobile detection device about these main axes; and/or b. the 3D reconstruction device has one or more 2D cameras by means of which the image data and/or the depth data of the scene are recorded, and one of the second position indications and one of the orientation indications is determined by means of visual odometry on the basis of the image data and/or the depth data; and/or c. the 3D reconstruction device has a LIDAR measuring device by means of which the depth data of the scene are recorded, and one of the second position indications and one of the orientation indications is determined by means of visual odometry on the basis of the depth data;
- a georeference is assigned to the points of the 3D point cloud on the basis of the first position indication and several of the second position indications as well as several of the orientation indications.
- The mobile detection device can be carried by a person and can be held with both hands, preferably with one hand; it has a housing whose largest edge length is less than 50 cm, wherein the receiver(s), the inertial measurement unit and the 3D reconstruction device are arranged in the housing.
- Another object of the invention is a mobile detection device for the positionally correct detection of exposed infrastructure elements arranged in the ground, in particular in an open excavation, comprising:
- a 3D reconstruction device for capturing image data and / or depth data of a scene which contains at least one exposed infrastructure element arranged in the ground, and for generating a 3D point cloud with several points on the basis of this image data and / or depth data;
- one or more receivers for receiving signals from one or more global navigation satellite systems and for determining a first position indication of the position of the detection device in a global reference system;
- an inertial measurement unit for determining a second position indication of the position of the detection device in a local reference system and an orientation indication of the orientation of the detection device in the local reference system, wherein the inertial measurement unit is set up to detect linear accelerations of the mobile detection device along three orthogonal main axes of the local reference system and angular velocities of the rotation of the mobile detection device about these main axes; and wherein the 3D reconstruction device has one or more 2D cameras by means of which image data of the scene can be captured, a second position indication of the position of the detection device in the local reference system and the orientation indication being determinable by means of visual odometry on the basis of the image data; and wherein the 3D reconstruction device has a LIDAR measuring device by means of which depth data of the scene can be acquired, a second position indication of the position of the detection device in the local reference system and the orientation indication being determinable by means of visual odometry on the basis of the depth data;
- the detection device is set up to assign a georeference to the points of the 3D point cloud on the basis of the first position indication and several of the second position indications as well as several of the orientation indications;
- the mobile detection device can be carried by a person and can be held with both hands, preferably with one hand; it has a housing whose largest edge length is less than 50 cm, wherein the receiver(s), the inertial measurement unit and the 3D reconstruction device are arranged in the housing.
- The exposed infrastructure elements are detected by means of the mobile detection device, which includes one or more receivers for receiving signals from one or more global navigation satellite systems as well as the 3D reconstruction device and the inertial measurement unit.
- This combination of the receiver or receivers for the signals of one or more global navigation satellite systems with the 3D reconstruction device and the inertial measurement unit enables the position and orientation of the infrastructure elements to be detected easily and with high accuracy in a geodetic reference system.
- A 3D point cloud of the recorded scene, which contains the given infrastructure element or elements, is generated. A georeference is assigned to each of its points.
- A georeference is understood to be a position specification of a point of the 3D point cloud in a geodetic reference system, preferably in an official position reference system, for example ETRS89/UTM, in particular together with a geometric and/or physical height reference.
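- For illustration only (an editorial sketch, not part of the application): such a georeference can be computed from geographic coordinates with a standard projection library. The Python snippet below uses pyproj and assumes ETRS89 geographic input (EPSG:4258) and ETRS89/UTM zone 32N (EPSG:25832) as the official target system; the coordinates are hypothetical.

```python
from pyproj import Transformer

# ETRS89 geographic coordinates (EPSG:4258) -> ETRS89 / UTM zone 32N (EPSG:25832),
# an official position reference system used in Germany.
transformer = Transformer.from_crs("EPSG:4258", "EPSG:25832", always_xy=True)

lon, lat = 6.9603, 50.9375          # hypothetical coordinates near Cologne
easting, northing = transformer.transform(lon, lat)
print(f"E = {easting:.3f} m, N = {northing:.3f} m")
```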
- The georeference is assigned to the points of the 3D point cloud on the basis of the first position indication - i.e. the determined position of the mobile detection device in the global reference system -, on the basis of the several second position indications - i.e. the estimated positions of the detection device in the local reference system - and on the basis of the orientation indications - i.e. the estimated orientations of the detection device in the local reference system.
- The image data can thus be assigned a position indication that is independent of reference points in the area of the respective infrastructure elements or of the excavation. As a result, the georeference can be determined with increased accuracy and reliability.
- A compact, robust mobile detection device suitable for construction sites can thus be provided for detecting the exposed infrastructure elements. It can be used both next to an open excavation pit and within it, a person holding the mobile detection device in one or both hands to detect the exposed infrastructure element or elements.
- The method according to the invention and the mobile detection device according to the invention can therefore be used particularly advantageously for detecting exposed infrastructure elements of distribution networks that are arranged in the ground, in particular in an urban environment.
- Underground infrastructure elements are understood to include, in particular, line or cable elements such as fiber optic, gas, district heating or water pipes, power or telecommunication cables and associated conduits, cable ducts and connecting elements.
- the connecting elements can be designed, for example, as connectors for exactly two line or cable elements, as distributors for connecting three or more line or cable elements, or as amplifier elements.
- the underground infrastructure elements to be detected are preferably those underground infrastructure elements that are part of a distribution network, in particular part of a fiber optic, power or telecommunications cable distribution network.
- the underground infrastructure elements preferably have a diameter of less than 30 cm, preferably less than 20 cm, particularly preferably less than 10 cm, for example less than 5 cm.
- Image data and/or depth data of several frames of a scene that contains several exposed infrastructure elements arranged in the ground are preferably acquired, and a 3D point cloud with several points is generated on the basis of this image data and/or depth data.
- The receiver or receivers are preferably set up to receive and process signals from a global navigation satellite system. It is particularly preferred if the receiver or receivers are set up to simultaneously acquire and process signals from several global navigation satellite systems (GNSS), in particular signals from satellites of various global navigation satellite systems and on several frequency bands.
- the global navigation satellite systems can be, for example, GPS, GLONASS, Galileo or Beidou.
- The receiver or receivers can alternatively or additionally be set up to receive signals, in particular reference or correction signals, from land-based reference stations, for example via a cellular network.
- the correction signals can be, for example, SAPOS correction signals (German satellite positioning service) or signals from the global HxGN SmartNet.
- One or more of the following methods are preferably used to determine the position of the detection device: real-time kinematics (RTK), precise point positioning (PPP), post-processed kinematics (PPK).
- the accuracy in determining the position of the detection device can be reduced to a range of less than 10 cm, preferably less than 5 cm, particularly preferably less than 3 cm, for example less than 2 cm.
- a quality investigation of the georeferencing can be carried out.
- One or more quality parameters of the global navigation satellite systems, for example the DOP (dilution of precision), are preferably monitored.
- The inertial measurement unit is preferably set up to record a translational movement in three mutually orthogonal spatial directions - e.g. along an x-axis, a y-axis and a z-axis - and a rotary movement about each of these three spatial directions - i.e. about the x-axis, the y-axis and the z-axis - and in particular to repeat this data acquisition several times at time intervals.
- As observation variables, the inertial measurement unit can detect three linear acceleration values for the translational movement and three angular velocities for the rotation rates of the rotary movement. These observations can be derived from the proportional ratios of measured voltage differences. With the help of further methods such as the strapdown algorithm (SDA), changes in position, velocity and orientation can be deduced from the measured specific force and the rotation rates.
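- To make the strapdown idea concrete, here is a minimal single-step sketch in Python (an editorial example, not the application's implementation), assuming the IMU delivers body-frame specific force and angular rate and that numpy and scipy are available:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def strapdown_step(R, v, p, gyro, accel, dt, g=np.array([0.0, 0.0, -9.81])):
    """One strapdown update. R: body-to-navigation rotation, v: velocity,
    p: position; gyro [rad/s] and accel [m/s^2] are body-frame measurements."""
    R_new = R * Rotation.from_rotvec(gyro * dt)   # attitude update from rotation rates
    a_nav = R_new.apply(accel) + g                # specific force -> nav frame, add gravity
    v_new = v + a_nav * dt                        # velocity update
    p_new = p + v * dt + 0.5 * a_nav * dt**2      # position update
    return R_new, v_new, p_new
```

In practice such a step runs at the IMU rate and is continuously corrected by the sensor data fusion described further below, since pure integration drifts.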
- The 3D reconstruction device can comprise a time-of-flight camera, a structured-light camera, a stereo camera, a LIDAR measuring device, a RADAR measuring device and/or a combination of these with one another, in particular with one or more 2D cameras.
- The LIDAR measuring device of the 3D reconstruction device is preferably designed as a solid-state LIDAR measuring device (solid-state or flash LIDAR).
- Solid-state LIDAR measuring devices of this type offer the advantage that they can be designed without mechanical components.
- Another advantage of the solid-state LIDAR measuring device is that it can capture image and / or depth information from several points at the same time, so that distortion effects due to moving objects in the field of view cannot occur with the solid-state LIDAR measuring device. Measures to correct such distortions that occur in scanning LIDAR measuring devices with a rotating field of view can therefore be dispensed with.
- The mobile detection device comprises a housing in which the receiver or receivers, the inertial measurement unit and the 3D reconstruction device are arranged. It is advantageous if the mobile detection device does not have a frame on which the receiver or receivers, the inertial measurement unit and the 3D reconstruction device are arranged in an exposed manner. Thanks to the common housing, a compact, robust, mobile and construction-site-compatible detection device for detecting the exposed infrastructure elements can be provided.
- The mobile detection device can be carried by a person and held with both hands, preferably with one hand, so that the user can carry the mobile detection device to an open excavation and use it there to capture exposed infrastructure elements.
- the mobile detection device has a housing whose largest edge length is smaller than 50 cm, preferably smaller than 40 cm, particularly preferably smaller than 30 cm, for example smaller than 20 cm.
- the invention provides in particular that the mobile detection device is not designed as an unmanned aircraft.
- The invention provides, in particular, that the mobile detection device is not designed to be attached to a ground machine or a ground vehicle, and preferably is not attached to one.
- the georeference is preferably determined exclusively by means of the mobile detection device - for example by means of the one or more receivers for signals from one or more global navigation satellite systems, the inertial measuring unit and the 3D reconstruction device.
- points of the 3D point cloud preferably include a position specification in a geodetic reference system as a result of the georeferencing.
- the geodetic reference system can be identical to the global reference system.
- the points of the 3D point cloud are each assigned color or gray value information, the color or gray value information preferably being captured by means of the one or more 2D cameras of the 3D reconstruction device.
- the color or gray value information can be present, for example, as RGB color information in the RGB color space or HSV color information in the HSV color space.
- a textured grid model (“mesh”) is generated on the basis of the 3D point cloud and the image data of the one or more 2D cameras.
- The captured image data and/or the captured depth data and/or the captured linear accelerations of the mobile detection device along three orthogonal main axes of the local reference system and the angular velocities of the rotation of the mobile detection device about these main axes are stored in a time-synchronized manner, in particular in a memory unit of the detection device.
- a common time stamp and / or a common frame designation is stored.
- The mobile detection device preferably has a memory unit which is set up to store, in a time-synchronized manner, the first position indication of the position in the global reference system and/or raw data assigned to this position indication; the one or more second position indications; the one or more orientation indications; the captured image data and/or depth data; and the captured linear accelerations of the mobile detection device along three orthogonal main axes of the local reference system together with the angular velocities of the rotation of the mobile detection device about these main axes.
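- A minimal sketch of such a time-synchronized per-frame record (editorial example; all field names are hypothetical):

```python
from dataclasses import dataclass
from typing import Optional, Tuple
import numpy as np

@dataclass
class FrameRecord:
    """All observations of one frame, stored under a common time stamp."""
    timestamp: float                  # common time stamp for synchronization
    frame_id: int                     # common frame designation
    image: np.ndarray                 # 2D camera image data
    depth: np.ndarray                 # depth data (e.g. from the LIDAR measuring device)
    accel: np.ndarray                 # linear accelerations along the three main axes
    gyro: np.ndarray                  # angular velocities about the three main axes
    gnss_fix: Optional[Tuple[float, float, float]] = None  # first position indication, if available
```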
- The one or more second position indications are transformed from the respective local reference system into the global reference system, preferably by means of a rigid-body transformation or Helmert transformation, or by means of a principal axis transformation.
- the first position information in the global reference system and the one or more second position information in the respective local reference system can be transformed into a further reference system.
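- For illustration, a 7-parameter Helmert (similarity) transformation applied to a batch of points might look as follows in Python; the small-angle rotation form shown is common in geodetic datum transformations, and all parameter values would come from the estimation described here (editorial sketch, not the application's implementation):

```python
import numpy as np

def helmert_transform(points, t, scale, rotvec):
    """Apply a 7-parameter Helmert (similarity) transformation:
    x' = t + (1 + scale) * R @ x, with small-angle rotation vector rotvec.
    points: (N, 3) array, t: (3,) translation, scale: unitless, rotvec: (rx, ry, rz) [rad]."""
    rx, ry, rz = rotvec
    # small-angle rotation matrix, as commonly used in datum transformations
    R = np.array([[1.0, -rz,  ry],
                  [ rz, 1.0, -rx],
                  [-ry,  rx, 1.0]])
    return t + (1.0 + scale) * points @ R.T
```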
- One of the second position indications and one of the orientation indications is determined by means of visual odometry on the basis of the image data and/or the depth data, and/or by means of an inertial measurement unit through simultaneous localization and mapping. Determining the one or more second position indications and the orientation indications contributes to improved georeferencing of the points of the 3D point cloud by enabling a more precise determination of the trajectory of the detection device.
- The assignment of the georeference to the points of the 3D point cloud takes place by means of sensor data fusion, a factor graph as graphical model and/or an estimation method preferably being applied for optimization, preferably using the first position indications of the position in the global reference system.
- drift effects and deviations between the second position information and the first position information can be recognized and corrected in the global reference system of the detection device.
- The acquisition of the first position indication by one or more integrated receivers in a global reference system can compensate for the merely short-term stability of the relative sensors and, with the help of a transformation into the superordinate coordinate system, leads to a georeferencing of the mobile detection device.
- An embodiment is advantageous in which the sensor data fusion is based on a non-linear system of equations, on the basis of which the position and the orientation of the mobile detection device are estimated.
- Preferably, on the basis of the non-linear system of equations, the trajectory, that is to say the time profile of the position of the mobile detection device, and the time profile of the orientation of the mobile detection device are estimated.
- At least one infrastructure element, in particular a line or a connecting element, is detected and classified using the image data and/or depth data captured with the 3D reconstruction device, and the estimation of the position and orientation of the mobile detection device on the basis of the non-linear system of equations additionally takes place on the basis of the results of the detection and classification of the infrastructure element, in particular on the basis of result information containing color information and/or line diameter and/or a course and/or a bending radius and/or a georeference.
- A factor graph, which depicts the complex relationships between the various variables and factors, is preferably used for the sensor data fusion.
- The movement information (angular velocities, orientation indications, etc.) added sequentially for each frame can be merged with carrier-phase observations (GNSS factors) in a bundle block adjustment.
- The GNSS factors represent direct observations of the georeferenced position of a frame.
- The relative pose factors provide information about the changes in pose between the frames, and feature point factors link the local references detected in the images (e.g. recognizable structures and/or objects) and establish the spatial relationship to the environment.
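- The following toy example illustrates the principle of such an adjustment on a 1D trajectory: absolute GNSS factors and relative pose (odometry) factors are stacked into one weighted least-squares system. It is a deliberately simplified editorial sketch with made-up measurements; real systems are non-linear, six-degree-of-freedom and typically built on a factor-graph library:

```python
import numpy as np

# Toy 1D factor-graph adjustment: estimate positions x_0..x_3 of four frames from
# absolute GNSS factors (direct observations of georeferenced positions) and
# relative pose factors (odometry increments between frames).
gnss_obs = {0: 100.02, 3: 103.05}                      # frame -> absolute position [m]
odo_obs = {(0, 1): 1.01, (1, 2): 0.98, (2, 3): 1.02}   # frame pair -> increment [m]
sigma_gnss, sigma_odo = 0.03, 0.01                     # measurement std devs [m]

n = 4
rows, rhs, weights = [], [], []
for i, z in gnss_obs.items():                          # GNSS factors: x_i = z
    row = np.zeros(n); row[i] = 1.0
    rows.append(row); rhs.append(z); weights.append(1.0 / sigma_gnss)
for (i, j), dz in odo_obs.items():                     # relative factors: x_j - x_i = dz
    row = np.zeros(n); row[i], row[j] = -1.0, 1.0
    rows.append(row); rhs.append(dz); weights.append(1.0 / sigma_odo)

A = np.array(rows) * np.array(weights)[:, None]        # weighted design matrix
b = np.array(rhs) * np.array(weights)
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)  # fused, georeferenced trajectory estimate
```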
- The results of the detection, classification and/or segmentation of infrastructure elements (color information and geometric, application-specific features such as line diameter, course or bending radius) can additionally be incorporated into the sensor data fusion.
- The result is a coherent, globally complete, realigned 3D point cloud of the recorded frames of a scene, on the basis of which all infrastructure elements can be extracted three-dimensionally, georeferenced with an absolute accuracy of a few centimeters.
- Even if the one or more receivers of the mobile detection device receive signals from no more than three navigation satellites of the global navigation satellite systems, the points of the 3D point cloud are each preferably assigned a georeference with an accuracy in the range of less than 10 cm, preferably less than 5 cm, particularly preferably less than 3 cm. Thanks to the use of several sensor data sources, three-dimensional absolute geographic coordinates of infrastructure elements can be determined to within a few centimeters even in environments with limited satellite visibility and/or poor cellular coverage.
- The second position indications of the position of the detection device and/or the orientation indications of the mobile detection device determined by means of the inertial measurement unit support, as prior information, the resolution of ambiguities in differential carrier-phase measurements, in order to georeference infrastructure elements even when the receiver reports an outage or only briefly determines usable position information.
- An advantageous embodiment provides that the plausibility of a temporal sequence of first position indications of the position of the detection device in the global reference system is determined, preferably by determining a first speed indication on the basis of the temporal sequence of first position indications, calculating a second speed indication on the basis of the recorded linear accelerations and angular velocities, and comparing it with the first speed indication. For this purpose, a comparison can be made with the time integral of the linear accelerations. In this way, the reliability of the georeference determined for or assigned to the points can be increased.
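- A plausibility check of this kind could be sketched as follows (editorial example; function and parameter names are hypothetical): the first speed indication comes from two successive GNSS positions, the second from the time integral of the measured along-track acceleration.

```python
import numpy as np

def speed_plausible(p0, p1, dt, v0, accel_along_track, imu_dt, tol=0.5):
    """Plausibility check of successive first position indications: compare the
    speed from two GNSS positions with the speed obtained by integrating the
    measured along-track linear acceleration, starting from speed v0."""
    v_gnss = np.linalg.norm(np.asarray(p1) - np.asarray(p0)) / dt  # first speed indication
    v_imu = v0 + np.trapz(accel_along_track, dx=imu_dt)            # second speed indication
    return abs(v_gnss - v_imu) < tol
```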
- A georeference is therefore preferably assigned to the points of the 3D point cloud on the basis of one or more first position indications, one or more of the second position indications, one or more of the orientation indications, the measured accelerations of the mobile detection device along the main axes of the local reference system and the measured angular velocities of the rotations of the mobile detection device about these main axes.
- An advantageous embodiment provides that at least one infrastructure element, in particular a line or a connecting element, is detected and / or classified and / or segmented on the basis of the 3D point cloud and / or on the basis of the image data.
- One or more methods of image segmentation are used for the detection, classification and/or segmentation of an infrastructure element, such as threshold value methods, in particular histogram-based methods, texture-oriented methods, region-based methods, or pixel-based methods such as support vector machines.
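- As an illustration of a pixel-based method, a support vector machine can classify HSV color vectors into infrastructure element vs. background. The sketch below uses scikit-learn with made-up training pixels (editorial example only, not the application's implementation):

```python
import numpy as np
from sklearn.svm import SVC

# Pixel-wise classification sketch: HSV color vectors -> conduit (1) vs background (0).
# Training data would in practice come from annotated frames; values here are invented.
X_train = np.array([[0.08, 0.9, 0.8], [0.10, 0.8, 0.7],    # e.g. orange conduit pixels
                    [0.30, 0.2, 0.4], [0.55, 0.3, 0.5]])   # e.g. soil/background pixels
y_train = np.array([1, 1, 0, 0])

clf = SVC(kernel="rbf").fit(X_train, y_train)

hsv_pixels = np.array([[0.09, 0.85, 0.75], [0.50, 0.25, 0.45]])
print(clf.predict(hsv_pixels))    # -> [1 0]: first pixel assigned to the conduit
```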
- color information of the captured image data can be compared with predetermined color information.
- Color information and/or geometric information of the captured image data can be compared with predetermined color information and/or geometric information stored, for example, in a database, in order, on the one hand, to distinguish the infrastructure elements from their surroundings in the scene and, on the other hand, to recognize the type of infrastructure element, for example whether it is a fiber optic line or a district heating pipeline.
- Color information of the points of the 3D point cloud is preferably compared with predetermined color information so that points of the 3D point cloud can be assigned directly to a recognized infrastructure element.
- For the detection, classification and/or segmentation, at least one histogram of color and/or gray value information and/or saturation information and/or brightness information and/or of an electromagnetic wave spectrum of several points of the 3D point cloud is generated.
- The generation of a histogram of the color or gray value information enables, in a first step, the identification of those points of the point cloud that most closely match the predefined color and/or gray value information and/or saturation information and/or brightness information and/or electromagnetic wave spectrum, and thus lays the basis for improved detection of infrastructure elements in a scene.
- a histogram of color or gray value information of the image data is preferably generated in the HSV color space, for example after a previous transformation of the image data into the HSV color space.
- The histogram of the hue, which is also referred to as the color angle, is particularly preferably generated.
- Local maxima are preferably detected in the histogram or histograms, and among them those maxima with the smallest distances to predetermined color, saturation and brightness threshold values of an infrastructure element are determined.
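- A minimal sketch of this histogram analysis (editorial example; `target_hue` stands for the predetermined hue value of the infrastructure element, and hue is assumed normalized to [0, 1]):

```python
import numpy as np

def candidate_hues(hue_values, target_hue, bins=180):
    """Histogram of the hue channel; among the local maxima, return those
    closest to the predetermined hue of the infrastructure element, nearest first."""
    hist, edges = np.histogram(hue_values, bins=bins, range=(0.0, 1.0))
    centers = 0.5 * (edges[:-1] + edges[1:])
    # local maxima: bins strictly higher than both neighbors
    is_max = (hist[1:-1] > hist[:-2]) & (hist[1:-1] > hist[2:])
    peaks = centers[1:-1][is_max]
    return peaks[np.argsort(np.abs(peaks - target_hue))]
```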
- A preferred-direction distance threshold value can preferably be specified for a preferred spatial direction that corresponds to the direction of movement of the mobile detection device when detecting the infrastructure element.
- The preferred-direction distance threshold value can be larger than the distance threshold value for other spatial directions, since it can be assumed that, when detecting the infrastructure elements in the excavation pit, the user will move the mobile detection device in a direction that corresponds to the main direction of extent of the infrastructure elements.
- An advantageous embodiment of the invention provides that, for the detection, classification and/or segmentation of the infrastructure elements and/or for improved distance measurement and/or for initializing the absolute orientation, a light point of a laser pointer of the detection device is detected and/or displayed in the viewing direction.
- The mobile detection device preferably has a laser pointer for the optical marking of infrastructure elements, with which a laser beam can be generated that is directed toward the scene detected with the 3D reconstruction device.
- With it, a user of the detection device can mark a point in the recorded scene that represents part of the infrastructure element.
- The point marked by means of the laser pointer can be identified in the captured image data, and points within a certain geometric distance of the marked point can represent candidate points that are presumably also part of the infrastructure element.
- the color values of the candidate points can be compared with one another, for example by means of one or more histograms. From this, the local maxima with the smallest distances to the previously defined color value, saturation and brightness values of the infrastructure element can be detected.
- An advantageous embodiment of the method according to the invention provides that, for the detection, classification and/or segmentation of the infrastructure elements, color or gray value information of the captured image data, in particular color or gray value information of the points of the 3D point cloud, and/or the acquired depth data and associated label information are fed to one or more artificial neural networks for training.
- the image data can be used as training data for the artificial neural network, with additional correction data being provided by a user of the detection device in order to train the artificial neural network.
- The artificial neural network can be designed as part of a data processing device of the mobile detection device, in particular as software and/or hardware.
- the artificial neural network is provided as part of a server with which the mobile detection device is connected via a wireless communication link.
- In this way, detection, classification and/or segmentation of infrastructure elements can be made possible with reduced computing effort.
- An advantageous embodiment provides that an associated 3D object is generated for each detected infrastructure element, in particular on the basis of the 3D point cloud.
- the 3D object is preferably generated on the basis of the 3D point cloud in the geodetic reference system and is thus georeferenced.
- the 3D object can have a texture.
- the mobile detection device preferably comprises a graphics processor (GPU) which is set up to display the 3D object corresponding to the detected infrastructure element.
- For various reasons, the situation can arise that part of the infrastructure element arranged in the ground cannot be optically detected by the mobile detection device due to occlusion. Optical defects thus arise in the 3D point cloud or in the network defined by the 3D objects.
- Such a situation can arise, for example, when the infrastructure element is covered by a plate that extends over the construction pit, for example a steel plate, which forms a path over the construction pit.
- This can also occur when the exposed infrastructure element is connected to another infrastructure element that was laid in closed construction, e.g. through a press bore.
- Likewise, careless movements of a user of the mobile detection device can result in infrastructure elements or parts thereof being covered by sand or earth, or leaves can fall from surrounding trees and cause occlusions.
- measures can be taken which are presented below.
- An advantageous embodiment of the invention provides that an optical defect between two 3D objects is detected and a connecting 3D object, in particular a 3D spline, is generated to close the optical defect.
- To recognize the optical defect, a feature of a first end of a first 3D object and the same feature of a second end of a second 3D object are preferably determined and compared with one another, the features being a diameter and/or a color and/or an orientation and/or a georeference. Particularly preferably, several such features of the two ends are determined and compared with one another.
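- Closing the defect with a 3D spline could look as follows; this editorial sketch (with hypothetical inputs) uses a clamped cubic spline whose end derivatives follow the detected line orientations at both ends:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def bridge_defect(end_a, dir_a, end_b, dir_b, n=20):
    """Connect two 3D object ends across an optical defect with a cubic spline.
    end_a/end_b: (3,) endpoints; dir_a/dir_b: (3,) end tangents from the line courses."""
    t = np.array([0.0, 1.0])
    pts = np.vstack([end_a, end_b])
    # clamped spline: first derivatives at both ends taken from the detected orientations
    cs = CubicSpline(t, pts, bc_type=((1, np.asarray(dir_a)), (1, np.asarray(dir_b))))
    return cs(np.linspace(0.0, 1.0, n))   # sampled connecting 3D polyline
```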
- To close the optical defect, the mobile detection device is put into an optical defect mode and, starting from the first end, is moved to the second end.
- The optical defect mode can be activated by an operating element of the detection device.
- the mobile detection device has a device for voice control.
- An auditory input of commands and / or information can take place via the device for voice control.
- An auditory input can prevent unwanted shaking caused by the actuation of operating elements during the detection of infrastructure elements, which contributes to improved detection results.
- an acoustic output of input requests and / or information, in particular feedback and / or warnings, can take place by means of the device for voice control.
- the device for voice control can comprise one or more microphones and / or one or more loudspeakers.
- auditory information is recognized by means of the device for voice control and the georeference is additionally assigned to the points of the 3D point cloud on the basis of the auditory information.
- the auditory information is particularly preferably used, in particular during the sensor data fusion, to estimate the position and the orientation of the mobile detection device.
- The auditory information can also be used to detect and classify the infrastructure elements. For example, by means of the device for voice control, auditory information from a user regarding the type of infrastructure element to be recognized ("the line is a fiber optic cable") and/or the number of infrastructure elements to be recognized ("three lines are laid") and/or the arrangement of the infrastructure elements ("on the left is a gas pipe, on the right a fiber optic cable") can be recognized.
- By means of a display device of the mobile detection unit, a representation of the 3D point cloud and/or of 3D objects corresponding to infrastructure elements is or will be displayed. This offers the advantage that the user of the mobile detection device can view and, if necessary, check the 3D point cloud and/or the 3D objects corresponding to infrastructure elements on site, for example immediately after the infrastructure elements have been detected in the excavation pit.
- a textured grid model generated on the basis of the 3D point cloud and the image data of the one or more 2D cameras can be displayed by means of the display device.
- a 2D site plan is displayed.
- the 2D site plan can be generated by means of a data processing device of the mobile detection device, for example on the basis of the particularly georeferenced 3D point cloud.
- The 2D site plan can preferably be saved in a file, for example in the .dxf file format or as shape files with individual attributes.
- The creation of such a 2D site plan serves to digitally integrate the infrastructure elements into the individual geographic information systems of the responsible owners.
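- An export of such a 2D site plan to the .dxf file format can be sketched with the ezdxf library (editorial example; layer name and coordinates are hypothetical):

```python
import ezdxf

# Write detected line courses into a 2D site plan (.dxf), one layer per line network.
doc = ezdxf.new(dxfversion="R2010")
msp = doc.modelspace()
doc.layers.add(name="FIBER_OPTIC", color=1)

# georeferenced course of a detected line, projected to 2D (e.g. ETRS89/UTM easting, northing)
course = [(363220.12, 5621840.33), (363224.80, 5621842.10), (363229.55, 5621843.02)]
msp.add_lwpolyline(course, dxfattribs={"layer": "FIBER_OPTIC"})

doc.saveas("site_plan.dxf")
```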
- By means of a display device of the mobile detection device, a parts list of infrastructure elements, in particular of line elements and connecting elements, is displayed.
- the parts list can be generated by means of a data processing device of the mobile detection device on the basis of the detected, classified and / or segmented infrastructure elements and can be adapted manually by the user.
- the parts list can include, for example, infrastructure elements from different line networks.
- the parts list can for example contain information about the number of the respective infrastructure elements and / or the number of laid length units of the respective infrastructure elements and / or the position information of the respective infrastructure element in a geodetic reference system and / or the construction progress.
- a display device of the mobile detection device is used to display an overlay of image data from a 2D camera of the detection device with a projection of one or more 3D objects that correspond to an infrastructure element.
- the orientation of the camera viewing direction of the mobile detection device must first be initialized.
- To do this, the user must, for example, move the mobile detection device over an area of a few meters at the location, or carry out a certain movement pattern, in order to obtain sufficient sensor data from the mobile detection device for orientation in space.
- An overlay of the image data of the 2D camera provided as part of the 3D reconstruction device is preferably displayed with several projections of the 3D objects which correspond to several, in particular interconnected, infrastructure elements.
- Such a representation can also be referred to as an "augmented reality" representation and enables a realistic and positionally correct representation of the concealed infrastructure elements even after the excavation has been closed.
- a display device of the mobile detection device is used to display an overlay of image data from a 2D camera of the detection device provided as part of the 3D reconstruction device with a projection of several points of the 3D point cloud. If a projection of the 3D point cloud is displayed in the display device, there is an increased computational effort in the display compared to the display of the projection of a 3D object. However, a previous generation of the 3D object can then be dispensed with.
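- The projection underlying both overlay variants is the standard pinhole camera model; a minimal sketch (editorial example, assuming known camera intrinsics K and a known world-to-camera pose from the estimated trajectory):

```python
import numpy as np

def project_points(points_world, R_wc, t_wc, K):
    """Project georeferenced 3D points into the 2D camera image (pinhole model)
    for the augmented-reality overlay.
    points_world: (N, 3); R_wc: (3, 3) world->camera rotation; t_wc: (3,); K: (3, 3)."""
    p_cam = R_wc @ points_world.T + t_wc.reshape(3, 1)   # world -> camera frame
    p_img = K @ p_cam
    uv = p_img[:2] / p_img[2]        # perspective division to pixel coordinates
    in_front = p_cam[2] > 0          # keep only points in front of the camera
    return uv.T[in_front]
```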
- The mobile detection device preferably comprises a display device for displaying display data and a data processing device which is set up to provide the display data.
- the display device can be designed as a combined display and operating device, via which inputs from a user can be recorded, for example as a touchscreen.
- The mobile detection device comprises a laser pointer for the optical marking of infrastructure elements and/or for extended distance measurement and/or for initializing the orientation in the viewing direction.
- The mobile detection device comprises a polarization filter to avoid gloss, mirroring and reflections, in order to improve quality and optimize the observation data.
- the mobile detection device comprises one or more lighting devices for improved detection, classification and / or segmentation of infrastructure elements.
- the mobile detection device has a device for voice control.
- the device for voice control is preferably set up to enable an acoustic output of input requests and / or information, in particular feedback and / or warnings.
- FIG. 1 shows an exemplary embodiment of a mobile detection device according to the invention in a schematic block diagram
- FIG. 2 shows an exemplary embodiment of a method according to the invention for detecting exposed infrastructure elements located in the subsurface in a flowchart
- FIG. 3 shows an exemplary projection of a 3D point cloud
- FIG. 7 is a block diagram to illustrate the processes involved in assigning the georeference to the points of the 3D point cloud
- FIG. 9a shows a plan view of an excavation pit with several at least partially optically concealed infrastructure elements
- FIG. 9b shows a plan view of the excavation pit according to FIG. 9a with an identified and closed optical flaw.
- the mobile detection device 1 includes, among other things, one or more receivers 2 comprising a receiving system for receiving and processing signals from one or more global navigation satellite systems and for determining a first position of the detection device in the global reference system on the basis of time-of-flight measurements of the satellite signals.
- the receiver 2, in particular its receiving system, can be connected to one or more antennas, which are preferably arranged outside the housing 9 of the mobile detection device 1, particularly preferably on an outer contour of the housing 9. Alternatively, the antenna can be arranged inside the housing 9.
- the mobile detection device 1 also contains a 3D reconstruction device 4 for acquiring image data and / or depth data of a scene, in particular a frame of a scene, that contains exposed infrastructure elements located underground.
- the mobile detection device 1 comprises an inertial measuring unit 3 for measuring the accelerations along the main axes and the angular velocities of the rotations of the mobile detection device 1.
- second position information of the position of the detection device is estimated by means of visual odometry of the image data and / or depth data and by means of the inertial measuring unit 3 through simultaneous localization and mapping.
- the multiple second position information items of the position of the detection device 1 in a local reference system and the multiple orientation information items of the orientation of the detection device 1 in the respective local reference system are determined, wherein (a minimal visual-odometry sketch follows this list):
a. one of the second position information items and one of the orientation information items is determined by means of an inertial measuring unit 3 of the mobile detection device 1, which detects linear accelerations of the mobile detection device 1 in three mutually orthogonal main axes of the local reference system and angular velocities of the rotation of the mobile detection device 1 about these main axes, and / or
b. the 3D reconstruction device 4 has one or more 2D cameras, by means of which the image data and / or the depth data of the scene are recorded, and one of the second position information items and one of the orientation information items is determined by means of visual odometry using the image data and / or the depth data, and / or
c. the 3D reconstruction device 4 has a LIDAR measuring device, by means of which the depth data of the scene are recorded, and one of the second position information items and one of the orientation information items is determined by means of visual odometry on the basis of the depth data.
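A minimal sketch of variant b., estimating the relative pose between two camera frames by visual odometry with OpenCV; the parameter values are illustrative, and for a monocular camera the translation is only determined up to scale (the scale can be supplied, for example, by the inertial measuring unit).

```python
import cv2
import numpy as np

def relative_pose(gray0: np.ndarray, gray1: np.ndarray, K: np.ndarray):
    """Relative rotation R and (unit-scale) translation t between two frames."""
    # Detect corner features in the first frame and track them into the second.
    p0 = cv2.goodFeaturesToTrack(gray0, maxCorners=500,
                                 qualityLevel=0.01, minDistance=7)
    p1, status, _err = cv2.calcOpticalFlowPyrLK(gray0, gray1, p0, None)
    good0 = p0[status.ravel() == 1]
    good1 = p1[status.ravel() == 1]
    # Robustly estimate the essential matrix, then decompose it into R, t.
    E, _mask = cv2.findEssentialMat(good0, good1, K, method=cv2.RANSAC,
                                    prob=0.999, threshold=1.0)
    _n, R, t, _mask = cv2.recoverPose(E, good0, good1, K)
    return R, t
```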
- the receiver or receivers 2, the inertial measuring unit 3 and the 3D reconstruction device 4 are arranged in a common housing 9.
- the housing 9 has dimensions which enable the mobile detection device 1 to be held by a user with both hands, preferably in one hand.
- the housing 9 has a largest edge length which is smaller than 50 cm, preferably smaller than 40 cm, particularly preferably smaller than 30 cm, for example smaller than 20 cm.
- Further components of the mobile detection device 1, which are also arranged in the housing 9, are a laser pointer 5, a data processing device 6, a storage unit 7, a communication device 10 and a display device 8.
- the laser pointer 5 can be used for the optical marking of infrastructure elements and / or for supplementary distance measurement and is arranged in the housing or frame 9 in such a way that it generates a laser beam pointing in the direction of the scene captured by the 3D reconstruction device 4, for example into the middle of that scene.
- the data processing device 6 is connected to the receiver or receivers 2, the inertial measuring unit 3 and the 3D reconstruction device 4, so that the individual measured and estimated data and the image data can be fed to the data processing device 6. Furthermore, the laser pointer 5, the storage unit 7 and the display device 8 are connected to the data processing device 6.
- the detection device 1 contains a communication device 10, which is designed in particular as a communication device for wireless communication, for example by means of Bluetooth, WLAN or cellular radio.
- the display device 8 is used to visualize the infrastructure elements detected by means of the detection device 1.
- the display device 8 is preferably designed as a combined display and operating device, for example in the manner of a touch-sensitive screen.
- the mobile detection device 1 shown in FIG. 1 can be used in a method for detecting exposed infrastructure elements located underground.
- An exemplary embodiment of such a method 100 is to be explained below with reference to the illustration in FIG. 2.
- in a detection step 101, signals from one or more global navigation satellite systems are received and processed by means of one or more receivers 2 of the mobile detection device 1, and one or more first position information items of the position of the detection device 1 in the global reference system are determined.
- the mobile detection device 1 acquires image data of a scene which contains exposed infrastructure elements located in the subsurface.
- a LIDAR measuring device of the 3D reconstruction device records image data and / or depth data of the scene.
- second position information of the position of the detection device is estimated by means of visual odometry of the image data and / or depth data and by means of the inertial measuring unit 3 through simultaneous localization and mapping.
- the inertial measuring unit 3 is set up to detect linear accelerations of the mobile detection device 1 in three orthogonal main axes of the local reference system and angular velocities of the rotation of the mobile detection device 1 about these main axes.
- the detection device 1 is carried by a person, preferably with both hands of a person, particularly preferably with one hand of a person.
- the estimated second position information in the local reference system, the estimated orientation information in the local reference system, the measured first positions in the global reference system, the measured accelerations along the main axes, the measured angular velocities of the rotations of the mobile detection device 1 about the main axes, and the captured image data are stored synchronously in the storage unit 7 of the detection device 1.
- the user can move with the detection device 1 during the detection step 101, for example along an exposed infrastructure element.
- the synchronized storage of this data ensures that the data can be processed correctly in the subsequent process steps.
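One conceivable realization of such synchronized storage is sketched below; the record layout is an illustrative assumption, the essential point being that every measurement is stamped against one common device clock.

```python
import time
from dataclasses import dataclass
from typing import Any, List

@dataclass
class SensorRecord:
    timestamp: float   # common monotonic device clock
    source: str        # e.g. "gnss", "imu", "camera", "lidar"
    payload: Any       # raw measurement or a reference to a frame

class SynchronizedLog:
    def __init__(self) -> None:
        self.records: List[SensorRecord] = []

    def append(self, source: str, payload: Any) -> None:
        # Stamping all sensors against the same clock lets the subsequent
        # processing steps associate the measurements correctly.
        self.records.append(SensorRecord(time.monotonic(), source, payload))
```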
- the image data acquired by the 3D reconstruction device are processed in a subsequent reconstruction step 102 in such a way that a 3D point cloud with several points and color information for the points is generated; this is referred to as a colored 3D point cloud.
- in a georeferencing step 103, the points of the 3D point cloud are then assigned a first position specification in a geodetic reference system, for example an officially recognized coordinate system, on the basis of the estimated second position information of the 3D reconstruction device 4 in the local reference system, the estimated orientations of the 3D reconstruction device 4 in the local reference system, the measured first positions of the mobile detection device 1 in the global reference system, the measured accelerations of the mobile detection device 1 along the main axes, and the measured angular velocities of the rotations of the mobile detection device 1 about the main axes.
- in this way, a colored, georeferenced 3D point cloud is calculated and provided.
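One common way to realize such a georeferencing is to estimate a rigid transformation that maps the locally estimated (second) positions onto the globally measured (first) positions in a least-squares sense and to apply it to the point cloud. The sketch below uses the Kabsch/Umeyama method; it is an assumption about one possible implementation, not the patent's prescribed estimator.

```python
import numpy as np

def fit_local_to_global(local_xyz: np.ndarray, global_xyz: np.ndarray):
    """Least-squares rigid transform (R, t) mapping local SLAM positions
    onto the GNSS positions measured at approximately the same times."""
    mu_l, mu_g = local_xyz.mean(axis=0), global_xyz.mean(axis=0)
    H = (local_xyz - mu_l).T @ (global_xyz - mu_g)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the SVD solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_g - R @ mu_l
    return R, t

# The same transform is then applied to every point of the local 3D point
# cloud, yielding the colored, georeferenced point cloud of step 103:
# global_points = (R @ local_points.T).T + t
```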
- in a recognition step 104, infrastructure elements are then detected on the basis of the color information in the data. For the detection, classification and / or segmentation of the infrastructure elements, color information of the captured image data is compared with predetermined color information. Alternatively or additionally, the infrastructure elements can be marked by the user by means of the laser pointer 5 when the scene is captured. The marking by the laser pointer 5 can be detected in the image data and used to detect the infrastructure elements.
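A minimal sketch of such a color comparison, using an HSV threshold; the color range shown is a hypothetical example of the predetermined color information (e.g. for an orange conduit).

```python
import cv2
import numpy as np

# Hypothetical color range for an orange conduit; in practice the range
# would come from the predetermined color information mentioned above.
LOWER_HSV = np.array([5, 120, 80])
UPPER_HSV = np.array([20, 255, 255])

def detect_by_color(image_bgr: np.ndarray) -> np.ndarray:
    """Return a binary mask of pixels matching the predetermined color."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
    # Remove speckle so that connected regions correspond to elements.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return mask
```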
- several image points of the image data are each assigned to a common infrastructure element, for example a line element or a line connection element.
- FIG. 3 shows an exemplary representation of a recognized infrastructure element in a 2D projection.
- in a subsequent data processing step 105, the data generated in the recognition step, and the infrastructure elements detected therein, are processed further.
- the processing can take place by means of the data processing device 6.
- Various types of processing are possible here, which can be carried out alternatively or cumulatively:
- 3D objects can be generated that correspond to the captured infrastructure elements, so that a 3D model of the underground pipeline network is generated.
- a projection of the 3D point cloud can also be calculated. It is also possible to generate a 2D site plan in which the detected infrastructure elements are reproduced.
- a parts list of the identified infrastructure elements can be generated (a minimal aggregation sketch follows this list).
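Such a parts list could be generated, for example, by aggregating the classified elements; the dictionary keys in this sketch are illustrative assumptions, not the device's actual data model.

```python
from collections import Counter
from typing import Dict, List

def parts_list(elements: List[Dict]) -> Counter:
    """Aggregate classified infrastructure elements by (class, diameter)."""
    return Counter((e["class"], e["diameter_mm"]) for e in elements)

# Example result: Counter({("fiber_conduit", 50): 12, ("junction", 110): 3})
```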
- in a visualization step 106, the display device 8 of the mobile detection device 1 can display an overlay of image data from a 2D camera of the detection device with a projection of one or more 3D objects that correspond to an infrastructure element, and / or an overlay of image data from a 2D camera of the detection device with a projection of several points of the 3D point cloud.
- FIG. 4 visualizes an application of the method according to the invention and the device according to the invention.
- Several frames of a recorded scene are shown which contain a large number of infrastructure elements 200, 200 'of a distribution network.
- the infrastructure elements 200, 200' are fiber-optic lines and telecommunication cables, some of which are laid in a common excavation without any spacing from one another.
- the diameter of these infrastructure elements 200, 200' is less than 30 cm, sometimes less than 20 cm.
- Some infrastructure elements 200' have a diameter of less than 10 cm.
- a person 201 stands in the opened excavation and uses a mobile detection device 1 (not visible in FIG. 4) to detect the exposed infrastructure elements 200, 200 'with the method according to the invention.
- FIGS. 5 and 6 show typical construction sites for laying infrastructure elements of underground distribution networks in an urban environment. These construction sites are located in the urban street space and are characterized by construction pits with a depth of 30 cm to 2 m. Space around the excavation pit is limited and accessibility to the excavation pit is partially restricted by parked cars and ongoing road traffic. The urban environment of the excavation is often characterized by shading of the GNSS signals and cell phone reception.
- the mobile detection device 1 comprises the inertial measuring unit 3, the receiver 2 for the signals of the global navigation satellite system including mobile radio interface 302, a LIDAR measuring device 303 of the 3D reconstruction device 4, which is designed here as a solid-state LIDAR measuring device, and a first 2D camera 304 of the 3D reconstruction device 4 and optionally a second 2D camera 305 of the 3D reconstruction device 4.
- the data provided by these data sources or sensors are stored in a synchronized manner in the memory unit 7 of the mobile detection device (step 306).
- the depth data of the scene are recorded by means of the LIDAR measuring device 303, and one of the second position information items and one of the orientation information items is determined by means of visual odometry on the basis of the depth data.
- a local 3D point cloud with several points is generated, see block 307.
- the image data and / or the depth data of the scene 350 are recorded, and one of the second position information items and one of the orientation information items is determined by means of visual odometry on the basis of the respective image data and / or depth data of the 2D camera 304 and, if applicable, 305.
- feature points are extracted, see block 308 and, if applicable, block 309.
- At least one infrastructure element is detected and classified and, if necessary, segmented using the image data and / or depth data acquired with the 3D reconstruction device 4, cf. block 310.
- One or more of the following items of information are obtained: color of an infrastructure element, diameter of an infrastructure element, course of an infrastructure element, bending radius of an infrastructure element, and first and second position information of the mobile detection device.
- the detection, classification and, if necessary, segmentation can take place by means of an artificial neural network which is designed as part of a data processing device of the mobile detection device, in particular as software and / or hardware.
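A sketch of how such a network could be applied per image frame, assuming a DeepLabV3 segmentation model that has been fine-tuned on infrastructure classes; the checkpoint file and the class list are hypothetical.

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

# Hypothetical class list and checkpoint of a fine-tuned model.
CLASSES = ["background", "fiber_conduit", "power_cable", "junction"]
model = deeplabv3_resnet50(num_classes=len(CLASSES))
model.load_state_dict(torch.load("pipe_segmentation.pt"))
model.eval()

@torch.no_grad()
def segment(image_chw: torch.Tensor) -> torch.Tensor:
    """Per-pixel class ids for a normalized 3xHxW image tensor."""
    logits = model(image_chw.unsqueeze(0))["out"]   # 1 x C x H x W
    return logits.argmax(dim=1).squeeze(0)          # H x W class map
```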
- the mobile detection device can optionally have a device for voice control. Via the device for voice control, auditory information can be recorded that is used to detect and classify the infrastructure elements and / or to assign the georeference to the points of the 3D point cloud.
- the output data of blocks 307, 308, 309 and 310 that are present as local 2D data are first transformed into 3D data (block 311), in particular by back projection.
- the data of a plurality of frames 350, 351, 352 of a scene transformed in this way are then fed to a sensor data fusion 312, which estimates the position and the orientation of the mobile detection device 1 on the basis of a non-linear system of equations.
- a factor graph is preferably used, which represents the complex relationships between various variables and factors.
- the movement information (angular velocities, orientation information, etc.) is included in the factor graph
- GNSS factors (carrier phase observations) are included as well
- the GNSS factors represent direct observations of the georeferenced position of a frame
- the relative pose factors provide information about the changes in pose between the frames, and feature point factors link the local references detected in the images (e.g. recognizable structures and / or objects) and establish the spatial relationship to the environment.
- the results of the detection, classification and / or segmentation of infrastructure elements can also be included in the aforementioned sensor data fusion.
- the result of the sensor data fusion 312 is a coherent, globally complete, realigned 3D point cloud of all frames of a scene, on the basis of which all infrastructure elements can be extracted three-dimensionally and georeferenced with an absolute accuracy of a few centimeters.
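The following simplified 2D sketch indicates how GNSS factors and relative pose factors can be combined in a non-linear least-squares optimization. It is a toy illustration of the factor-graph idea, not the actual estimator of the sensor data fusion 312; orientation factors, feature point factors and carrier-phase modeling are omitted.

```python
import numpy as np
from scipy.optimize import least_squares

def fuse_poses(initial: np.ndarray, gnss_xy: np.ndarray,
               rel_xy: np.ndarray) -> np.ndarray:
    """initial: (N,3) poses [x, y, yaw]; gnss_xy: (N,2) GNSS observations;
    rel_xy: (N-1,2) odometric displacements in each frame's own axes."""
    def residuals(flat: np.ndarray) -> np.ndarray:
        p = flat.reshape(-1, 3)
        res = [(p[:, :2] - gnss_xy).ravel()]        # GNSS factors
        for i in range(len(p) - 1):                 # relative pose factors
            c, s = np.cos(p[i, 2]), np.sin(p[i, 2])
            pred = p[i, :2] + np.array([[c, -s], [s, c]]) @ rel_xy[i]
            res.append(p[i + 1, :2] - pred)
        return np.concatenate(res)
    return least_squares(residuals, initial.ravel()).x.reshape(-1, 3)
```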
- FIG. 8 shows a plan view of a section of a distribution network with a plurality of infrastructure elements 200 that were recorded by means of the method according to the invention and the device according to the invention.
- areas that were recorded as part of a common scene, i.e. as part of a coherent sequence of multiple frames, are marked by a box 360.
- the scenes are recorded one after the other, for example whenever the respective section of the distribution network is exposed.
- some overlap areas 361 are contained in two different scenes and are thus recorded twice.
- the chronological sequence of the scenes can extend over several days.
- These scenes are merged within the scope of the sensor data fusion, so that a single, common 3D point cloud of the distribution network is generated that does not contain any areas recorded twice. It is advantageous if, with the aid of the sensor data fusion, areas of infrastructure elements recorded several times or at different times, such as overlaps between two recordings, are recognized and reduced to the most recently recorded areas of the infrastructure elements.
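A minimal sketch of reducing an overlap to the most recent recording: points of the older scene that lie within an assumed overlap radius of the newer scene are discarded before merging. The radius value is illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def merge_scenes(older_xyz: np.ndarray, newer_xyz: np.ndarray,
                 radius: float = 0.05) -> np.ndarray:
    """Merge two scene point clouds; in overlap areas the points of the
    more recent recording are kept. radius is an assumed threshold in m."""
    tree = cKDTree(newer_xyz)
    # Distance from every old point to its nearest new point.
    dist, _ = tree.query(older_xyz, k=1)
    keep_old = older_xyz[dist > radius]   # old points outside the overlap
    return np.vstack([keep_old, newer_xyz])
```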
- FIG. 9a shows a plan view of part of a distribution network which was partially laid in a closed construction, for example through a press bore.
- part of the infrastructure element 200 arranged underground cannot be visually detected by the mobile detection device 1 because it is concealed, cf. concealed area 400.
- a total of four such partially covered infrastructure elements are shown in FIG. 9a.
- An optical flaw thus arises in the 3D point cloud or in the network defined by the 3D objects.
- the optical flaw between two 3D objects 401, 402, which correspond to a first infrastructure element 200, is detected, and a connecting 3D object 403, in particular a 3D spline, is generated to close the optical flaw, see FIG. 9b.
- one or more features of a first end of a first 3D object 401 and the same feature or features of a second end of a second 3D object 402 are determined.
- the features of the two ends are compared with one another.
- the features can be, for example, the diameter and / or the color and / or the orientation and / or position information.
- the operator can move the mobile detection device above the concealed infrastructure element, starting from the end of the infrastructure element that corresponds to the first end of the first 3D object 401, along an optical flaw trajectory to the end of the infrastructure element 200 that corresponds to the second end of the second 3D object 402.
- the mobile detection device 1 can then generate a connecting 3D object 403, shown in FIG. 9b, which connects the first end of the first 3D object 401 to the second end of the second 3D object 402.
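The following sketch indicates both steps: comparing the end features with assumed tolerances, and generating the connecting 3D object as a cubic Hermite spline between the two ends. Feature names, tolerances and the spline choice are illustrative assumptions.

```python
import numpy as np

def ends_match(end_a: dict, end_b: dict,
               d_tol: float = 5.0, c_tol: float = 30.0) -> bool:
    """Compare end features (diameter in mm, RGB color); the feature
    names and tolerances are hypothetical."""
    return (abs(end_a["diameter_mm"] - end_b["diameter_mm"]) < d_tol and
            np.linalg.norm(np.subtract(end_a["rgb"], end_b["rgb"])) < c_tol)

def connecting_spline(p0, m0, p1, m1, n: int = 50) -> np.ndarray:
    """Cubic Hermite curve from end point p0 (tangent m0) to end point p1
    (tangent m1), one way to realize the connecting 3D object 403."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1
```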