CN115932882A - System for providing 3D detection of an environment through an autonomous robotic vehicle - Google Patents


Info

Publication number
CN115932882A
Authority
CN
China
Prior art keywords
trajectory
pattern
environment
lidar
path
Prior art date
Legal status
Pending
Application number
CN202211000680.8A
Other languages
Chinese (zh)
Inventor
L·海泽尔
A·格斯特
P·戈尔
Current Assignee
Hexcon Earth System Service Public Co ltd
Original Assignee
Hexcon Earth System Service Public Co ltd
Priority date
Filing date
Publication date
Application filed by Hexcon Earth System Service Public Co ltd
Publication of CN115932882A
Legal status: Pending

Classifications

    • G01C15/002 Active optical surveying means
    • G01S7/4808 Evaluating distance, position or velocity data
    • B62D57/032 Vehicles with ground-engaging propulsion means, e.g. walking members, with alternately or sequentially lifted supporting base and legs; with alternately or sequentially lifted feet or skid
    • G01S17/42 Simultaneous measurement of distance and other co-ordinates
    • G01S17/58 Velocity or trajectory determination systems; Sense-of-movement determination systems
    • G01S17/66 Tracking systems using electromagnetic waves other than radio waves
    • G01S17/89 Lidar systems specially adapted for mapping or imaging
    • G01S17/931 Lidar systems specially adapted for anti-collision purposes of land vehicles
    • G01S7/4817 Constructional features, e.g. arrangements of optical elements, relating to scanning
    • G05D1/0094 Control of position, course, altitude or attitude involving pointing a payload, e.g. camera, weapon, sensor, towards a fixed or moving target
    • G05D1/0217 Control of position or course in two dimensions with means for defining a desired trajectory in accordance with energy consumption, time reduction or distance reduction criteria
    • G05D1/0236 Control of position or course in two dimensions using optical markers or beacons in combination with a laser
    • G05D1/0248 Control of position or course in two dimensions using a video camera in combination with image processing means in combination with a laser
    • G05D1/0274 Control of position or course in two dimensions using internal positioning means, using mapping information stored in a memory device
    • G06T7/579 Depth or shape recovery from multiple images from motion
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/30204 Marker

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Automation & Control Theory (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Optics & Photonics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention relates to a system for providing 3D detection of an environment by means of an autonomous robotic vehicle. The system comprises: a SLAM unit for performing a simultaneous localization and mapping process, a path planning unit for determining a path to be taken by the autonomous robotic vehicle, and a lidar device specifically intended to be mounted on the autonomous robotic vehicle. The lidar device is configured to have a field of view of 360 degrees about a first axis and 130 degrees about a second axis perpendicular to the first axis, and to generate lidar data at a point acquisition rate of at least 300,000 points per second, which allows the SLAM unit to receive the lidar data as part of the perception data for the SLAM process. The path planning unit is configured to determine the path to be taken by evaluating a further trajectory within the environment map with respect to an estimated point distribution map of an estimated 3D point cloud, namely the point distribution that would be provided by the lidar device on the further trajectory, projected onto the environment map.

Description

System for providing 3D detection of an environment through autonomous robotic vehicles
Technical Field
The present invention relates to a system for providing 3D exploration of an environment by means of an autonomous robotic vehicle.
Background
For example, three-dimensional exploration is used to assess the actual condition of an area of interest (e.g., a confined or hazardous area such as a construction site, industrial plant, commercial complex, or cave). The results of the 3D detection can be used to efficiently plan the next work step or appropriate action to react to the determined actual condition.
The decision making and planning of the working steps is further assisted by means of dedicated digital visualization of the actual state (e.g. in the form of a point cloud or vector file model) or by means of augmented reality functions with 3D probe data.
3D detection typically involves optically scanning and measuring the environment by means of a laser scanner, which emits a measuring laser beam, for example using pulsed electromagnetic radiation. By receiving echoes backscattered from surface points of the environment, the distance to each surface point is derived and correlated with the angular emission direction of the associated measuring laser beam. In this way, a three-dimensional point cloud is generated. The distance measurement may be based, for example, on the time of flight, shape, and/or phase of the pulse.
For the additional information, the laser scanner data can be combined with the camera data, for example by means of an RGB camera or an infrared camera, in particular to provide high-resolution spectral information.
However, acquiring 3D data can be cumbersome and in some cases even dangerous to human workers. Typically, certain areas are prohibited or strictly restricted from access by human workers.
Nowadays, robotic vehicles (particularly autonomous robotic vehicles) are increasingly being used to facilitate data acquisition and reduce the risk to human workers. A 3D detection device used in combination with such a robotic vehicle is typically configured to provide detection data during movement of the robotic vehicle, wherein referencing data provides information about the trajectory of the data acquisition unit, e.g. position and/or attitude data, such that detection data acquired from different positions of the data acquisition unit can be combined into a common coordinate system.
This 3D probe data may then be analyzed by means of a feature identification algorithm (e.g., by using shape information provided by virtual object data from the CAD model) to automatically identify semantic and/or geometric features captured by the probe data. Such feature identification, in particular for identifying geometric primitives, is nowadays widely used for analyzing 3D data.
Many different types of autonomous robotic vehicles are known. For example, a ground-based robotic vehicle may have wheels for propelling the robot, typically with complex suspensions to cope with different kinds of terrain. Another widely used type is the legged robot, e.g. a four-legged robot, which is generally able to handle harsh terrain and steep slopes. Aerial robotic vehicles (e.g. quad-rotor drones) offer further versatility for detecting areas that are difficult to access, but often at the expense of shorter detection time and/or reduced sensor complexity due to their typically limited payload capacity and battery power.
Unmanned aerial vehicles and unmanned ground vehicles are state-of-the-art platforms for versatile use. Equipped with imaging sensors and lidar sensors, these platforms provide acquisition units with autonomous path planning and autonomous movement for acquiring 3D detection and reality capture data.
For mobile control and path planning, autonomous robotic vehicles are typically configured to: data from sensors of the robotic vehicle are used to autonomously create a 3D map of the new environment, for example by means of a simultaneous localization and mapping (SLAM) function.
In the prior art, motion control and path planning of probe activities are mainly managed by using built-in visual perception sensors of autonomous robots. The acquisition and use of 3D detection data is typically separate from the acquisition and use of control data for the mobile robot.
In prior art robotic vehicles, a compromise must typically be made between field of view and viewing distance on the one hand and reactivity (e.g. for obstacle detection and initiating evasive maneuvers) on the other hand, which limits the speed of movement of the robot. Typically, the robot only "sees" its immediate surroundings, which provides efficient reactivity against obstacles and terrain changes, whereas larger-scale path control is provided by predefined environmental models and guiding instructions. This limits the applicability of autonomous robotic vehicles, for example for mobile 3D detection in unknown terrain. In known terrain, following a predefined path is cumbersome and often requires a skilled person to take into account various measurement requirements, such as a desired point density, measurement speed or measurement accuracy.
Disclosure of Invention
It is therefore an object of the present invention to provide an improved system for mobile 3D detection with increased applicability.
Another object is to provide a mobile 3D detection system which is easier to operate and which can be used by a wide range of operators, as well as by operators who are not specially trained.
These objects are achieved by implementing at least part of the features of the independent claims. Features which further develop the invention are described in the dependent claims in an alternative or advantageous manner.
The present invention relates to a system for providing 3D detection of an environment by an autonomous robotic vehicle. The system includes a simultaneous localization and mapping unit (referred to as a SLAM unit) configured to perform a simultaneous localization and mapping process (referred to as a SLAM process). The SLAM process includes: receiving perception data providing a representation of the environment surrounding the autonomous robotic vehicle at its current location, generating a map of the environment using the perception data, and determining a trajectory of the path that the autonomous robotic vehicle has traversed within the map of the environment. The system also includes a path planning unit configured to determine a path to be taken by the autonomous robotic vehicle based on the map of the environment. The system further includes a lidar device, specifically intended to be mounted on the autonomous robotic vehicle, which is configured to generate lidar data to provide a cooperative scanning of the environment relative to the lidar device, wherein the system is configured to generate the lidar data during movement of the lidar device and to provide a referencing of the lidar data relative to a common coordinate system for determining a 3D detection point cloud of the environment.
According to one aspect of the invention, the lidar device is configured to have a field of view of 360 degrees about a first axis and 130 degrees about a second axis perpendicular to the first axis, and to generate lidar data at a point acquisition rate of at least 300,000 points per second. The SLAM unit is configured to receive the lidar data as part of the perception data, to generate the map of the environment based on the perception data, and to determine the trajectory of the path that the autonomous robotic vehicle has traversed within the map of the environment. In order to determine the path to be taken, the path planning unit is configured to perform an evaluation of a further trajectory within the map of the environment in relation to an estimated point distribution map of an estimated 3D point cloud, namely the point distribution that would be provided by the lidar device on the further trajectory, projected onto the map of the environment.
Since the lidar data is used both as perception data and to generate a 3D detection point cloud, the system allows continuous capture of the 3D detection data while providing enhanced field of view and range for path planning.
In one embodiment, the lidar device is implemented as a laser scanner configured to generate the lidar data by means of a rotation of a laser beam about two rotation axes. The laser scanner comprises a rotator configured to rotate about one of the two rotation axes and to provide a variable deflection of the exiting and returning portions of the laser beam, thereby providing the rotation of the laser beam about that axis, which is commonly referred to as the fast axis. The rotator rotates about the fast axis at a rate of at least 50 Hz, while the laser beam rotates about the other of the two rotation axes (commonly referred to as the slow axis) at a rate of at least 0.5 Hz. The laser beam is emitted as a pulsed laser beam, for example comprising 1.5 million pulses per second. For the rotation of the laser beam about the two axes, the field of view is 130 degrees about the fast axis and 360 degrees about the slow axis.
For example, the laser scanner is intended to be mounted on the autonomous robotic vehicle with the slow axis substantially vertical, such that the 130-degree field of view (FoV) about the fast axis covers the front, the ground and the rear of the autonomous robotic vehicle.
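Purely by way of a non-limiting illustration, the following Python sketch works out the angular sampling that results from the example figures given above (50 Hz fast axis, 0.5 Hz slow axis, 1.5 million pulses per second, 130-degree fast-axis field of view); all numbers are the example values from the description, and the simple geometry ignores timing jitter and pulse losses.

```python
# Sketch: angular sampling of a two-axis scanner, using only the example figures
# given above (illustrative values from the text, not device specifications).

FAST_AXIS_HZ = 50.0     # rotator revolutions per second (fast axis)
SLOW_AXIS_HZ = 0.5      # laser-beam revolutions per second about the slow axis
PULSE_RATE_HZ = 1.5e6   # emitted pulses per second
FAST_FOV_DEG = 130.0    # usable field of view about the fast axis

points_per_fast_rev = PULSE_RATE_HZ / FAST_AXIS_HZ     # 30 000 points per fast revolution
fast_step_deg = 360.0 / points_per_fast_rev            # 0.012 deg between consecutive points
fast_revs_per_slow_rev = FAST_AXIS_HZ / SLOW_AXIS_HZ   # 100 scan lines per slow revolution
slow_step_deg = 360.0 / fast_revs_per_slow_rev         # 3.6 deg between adjacent scan lines

# Only the 130-degree part of each fast-axis revolution lies inside the field of
# view, i.e. roughly 1.5e6 * 130/360 ~ 540 000 pulses per second, which is
# consistent with a point acquisition rate of at least 300,000 points per second.
pulses_in_fov_per_s = PULSE_RATE_HZ * FAST_FOV_DEG / 360.0

print(f"fast-axis step: {fast_step_deg:.4f} deg, line spacing: {slow_step_deg:.1f} deg, "
      f"pulses in FoV per second: {pulses_in_fov_per_s:.0f}")
```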
For example, the evaluation of the further trajectory in relation to the estimated point distribution map comprises voxel occupancy grid navigation and a probabilistic robotics framework for path planning, which is fed directly with the lidar data and with the trajectory points of the determined trajectory.
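As a non-limiting illustration of such an evaluation, the following sketch scores a candidate trajectory by the point distribution it would be expected to produce, projected onto a 2D grid of the environment map; the cell size, sensor range, per-pose point budget and the uniform-spread visibility model are assumptions made only for this example.

```python
import numpy as np

# Sketch: score a candidate trajectory by the estimated lidar point distribution
# it would produce, projected onto a 2D grid of the environment map.
# Cell size, sensor range and the point-rate model are illustrative assumptions.

CELL_SIZE = 0.25          # metres per grid cell
SENSOR_RANGE = 10.0       # assumed usable lidar range in metres
POINTS_PER_POSE = 30000   # assumed points accumulated per evaluated pose

def estimated_coverage(trajectory_xy, occupied, origin=(0.0, 0.0)):
    """Accumulate an estimated point count per free cell for a list of poses.

    trajectory_xy : (N, 2) array of candidate poses in map coordinates
    occupied      : (H, W) boolean occupancy grid of the environment map
    """
    h, w = occupied.shape
    counts = np.zeros((h, w), dtype=float)
    ys, xs = np.mgrid[0:h, 0:w]
    cell_centers = np.stack([origin[0] + (xs + 0.5) * CELL_SIZE,
                             origin[1] + (ys + 0.5) * CELL_SIZE], axis=-1)
    for pose in np.asarray(trajectory_xy):
        d = np.linalg.norm(cell_centers - pose, axis=-1)
        visible = (d < SENSOR_RANGE) & ~occupied
        # crude model: points spread uniformly over visible cells (ignores occlusion)
        n_vis = visible.sum()
        if n_vis:
            counts[visible] += POINTS_PER_POSE / n_vis
    return counts

def coverage_score(counts, min_points_per_cell=50):
    """Fraction of reached cells meeting a minimum estimated point count."""
    touched = counts > 0
    return (counts[touched] >= min_points_per_cell).mean() if touched.any() else 0.0
```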
In a further embodiment, the path planning unit is configured to: receiving evaluation criteria defining different measurement specifications of the system (e.g., different target values of the probe point cloud); and the evaluation criterion is taken into account to evaluate further trajectories.
As an example, the evaluation criterion defines at least one of: a point density of the detection point cloud projected onto the map of the environment, e.g. at least one of a minimum point density, a maximum point density and an average point density of the detection point cloud projected onto the map of the environment; an energy consumption threshold, e.g. a maximum allowed energy consumption, for the system to complete the further trajectory and provide the detection point cloud; a time consumption threshold, e.g. a maximum allowed time, for the system to complete the further trajectory and provide the detection point cloud; a path length threshold for the further trajectory, e.g. a minimum path length and/or a maximum allowed path length; a minimum area to be covered by the trajectory; a minimum spatial volume covered by the detection point cloud; and a minimum or maximum horizontal angle between the travel direction at the end of the trajectory of the path that the autonomous robotic vehicle has traversed and the travel direction at the start of the further trajectory.
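The following sketch shows, purely as a non-limiting example, how a subset of these measurement specifications might be bundled into a single feasibility check for a candidate trajectory; all field names, default thresholds and the estimate inputs are assumptions made for illustration only.

```python
from dataclasses import dataclass

# Sketch: combine several of the measurement specifications listed above into a
# single feasibility check. Field names and default values are illustrative.

@dataclass
class EvaluationCriteria:
    min_point_density: float = 100.0   # points per m^2 on the projected map
    max_energy_wh: float = 200.0       # maximum allowed energy consumption
    max_time_s: float = 1800.0         # maximum allowed acquisition time
    max_path_length_m: float = 500.0
    min_covered_area_m2: float = 50.0

@dataclass
class TrajectoryEstimate:
    mean_point_density: float
    energy_wh: float
    duration_s: float
    path_length_m: float
    covered_area_m2: float

def meets_criteria(est: TrajectoryEstimate, crit: EvaluationCriteria) -> bool:
    return (est.mean_point_density >= crit.min_point_density
            and est.energy_wh <= crit.max_energy_wh
            and est.duration_s <= crit.max_time_s
            and est.path_length_m <= crit.max_path_length_m
            and est.covered_area_m2 >= crit.min_covered_area_m2)
```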
In a further embodiment, the path planning unit is configured to receive a path of interest and to optimize and/or expand the path of interest in order to determine the path to be taken. For example, the path of interest is generated and provided by another detection device, e.g. a mobile reality capture device having a SLAM functionality.
To explore unknown regions, for example along a path of interest, the path planning may include a boundary following mode, in which the further trajectory follows a boundary (e.g. given by a wall) at a defined distance. The further trajectory may also comprise regular or random movements or changes of direction within the boundary, e.g. random loops. Vertical movement may be limited, for example, to ensure that the autonomous robotic vehicle stays on a particular floor to explore everything on that floor.
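As a non-limiting illustration of such a boundary following mode, the following sketch keeps a wall detected on the robot's right-hand side at a defined standoff distance using a simple proportional control law; the control law, the gains and the standoff value are assumptions chosen only for this example.

```python
# Sketch of a boundary-following step: keep a wall on the robot's right-hand
# side at a defined standoff distance. Gains and standoff are illustrative.

def boundary_following_step(dist_to_wall_m: float, heading_to_wall_rad: float,
                            standoff_m: float = 1.0,
                            k_dist: float = 0.8, k_head: float = 1.5) -> float:
    """Return a yaw-rate command (rad/s, positive = turn left).

    dist_to_wall_m      : current lateral distance to the wall on the right
    heading_to_wall_rad : deviation from wall-parallel travel, positive when
                          the robot is turning away from the wall
    """
    dist_error = dist_to_wall_m - standoff_m
    # too far from the wall -> turn right (negative yaw rate), too close -> turn left;
    # the heading term damps the correction so the robot settles parallel to the wall
    return -k_dist * dist_error - k_head * heading_to_wall_rad
```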
As an example, a decision tree is built that involves a defined decision basis (e.g. a random left/right decision, or always selecting left or always selecting right), where the exploration returns to the top node after a defined number of child nodes and then continues with another decision basis. For example, the decision may be based on a likelihood estimate of the already scanned path/environment associated with the followed path portion of the further trajectory.
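The following sketch illustrates, as a non-limiting example only, how such a decision basis could be applied down to a limited depth before returning to the top node; the junction data structure and the choice of bases are assumptions made for illustration.

```python
import random

# Sketch: apply one decision basis ("left", "right" or "random") for a limited
# depth, then return to the top node and continue with another basis.
# The junction model (nested dicts with "branches") is purely illustrative.

def choose(basis: str) -> str:
    if basis == "random":
        return random.choice(["left", "right"])
    return basis  # "left" or "right"

def explore(junction: dict, basis: str, max_depth: int):
    """Follow one decision basis down to max_depth, collecting visited junctions."""
    visited = []
    node = junction
    for _ in range(max_depth):
        visited.append(node)
        branches = node.get("branches", {})
        if not branches:
            break
        node = branches.get(choose(basis)) or next(iter(branches.values()))
    return visited

# After max_depth child nodes the caller returns to the top node and may restart
# with a different basis, e.g.:
#   explore(top_node, "left", 4); explore(top_node, "random", 4)
```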
In order to provide sufficient data processing capacity, the system may have connection means for data exchange with a data cloud providing cloud computing, for example to determine the 3D detection point cloud or to perform at least part of the processing for evaluating further trajectories. The system may also benefit from onboard computing, for example by means of a dedicated computing unit provided with the lidar device or by means of a computing unit of the autonomous robotic vehicle, which significantly extends the available computing power in case the connection to the cloud is lost or the data transfer rate is limited. Another possibility is to include connectivity with a companion device (e.g. a tablet computer), which, similarly to the cloud processing, may be configured to determine the 3D detection point cloud or to perform at least part of the processing for evaluating further trajectories. The local companion device may then take over processing for areas with limited or no connectivity to the cloud, or it may serve as a cloud interface in the sense of a relay between onboard and cloud computing. For example, switching between onboard computing, cloud processing and companion device processing is performed dynamically based on the connectivity between the three processing locations.
In one embodiment, the system includes an onboard computing unit specifically intended to be located on the autonomous robotic vehicle and configured to perform at least part of the system processing, wherein the system processing comprises performing the SLAM process, providing the referencing of the lidar data, and performing the evaluation of further trajectories. The system also includes an external computing unit configured to perform at least part of the system processing. A communication module of the system is configured to provide communication between the onboard computing unit and the external computing unit, wherein the system includes a workload selection module configured to: monitor the available bandwidth of the communication module for communication between the onboard computing unit and the external computing unit; monitor the available power of the onboard computing unit, the lidar device, the SLAM unit and the path planning unit; and dynamically change the assignment of at least part of the system processing between the onboard computing unit and the external computing unit depending on the available bandwidth and the available power.
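Purely as a non-limiting illustration of such a workload selection, the following sketch assigns processing tasks to the onboard unit or to an external unit depending on monitored bandwidth and battery level; the thresholds, task names and the split policy are assumptions made for this example.

```python
# Sketch of a workload selection policy: assign processing steps to the onboard
# unit or an external unit depending on monitored bandwidth and battery level.
# Thresholds and task names are illustrative assumptions.

def assign_processing(bandwidth_mbps: float, onboard_battery_pct: float,
                      tasks=("slam", "referencing", "trajectory_evaluation")):
    assignment = {}
    for task in tasks:
        if bandwidth_mbps < 1.0:
            # poor link: keep everything onboard regardless of battery level
            assignment[task] = "onboard"
        elif onboard_battery_pct < 20.0:
            # low battery: offload everything that fits through the link
            assignment[task] = "external"
        else:
            # default split: keep latency-critical SLAM onboard, offload the rest
            assignment[task] = "onboard" if task == "slam" else "external"
    return assignment

# Example: assign_processing(bandwidth_mbps=5.0, onboard_battery_pct=60.0)
```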
Another aspect of the invention relates to using predefined trajectory information associated with reference objects in the environment, e.g., specialized landmarks, specialized corners of buildings, etc. Alternatively or additionally, the reference object may be a manually marked object to provide predefined trajectory information.
For example, the reference object is located in the path planning software and is associated with predefined trajectory information. The predefined trajectory information is generated by a human operator or an intelligent optimization algorithm. If a manually marked object is used, a corresponding real marked object is (physically) generated (e.g. printed) and placed in the area to be scanned. Upon detection of the reference object in the field, the path planning unit of the system then associates the detected reference object with predefined trajectory information and uses the trajectory information as input to evaluate further trajectories by performing a coordinate transformation (frame transformation) between the real world and the "planned world".
Thus, in further embodiments, the system is configured to access identification information and assignment data for the reference object, wherein the assignment data provides assignment of the reference object to a trajectory specification in the vicinity of the reference object. For example, the trajectory specification is a further direction of travel relative to an external coordinate system or relative to a cardinal direction (cardinal direction). The system includes a reference object detector configured to use the identification information and provide detection of a reference object within the environment based on the identification information. For example, the reference object detector is configured to provide detection of the reference object by means of camera and/or lidar data and visual detection attributes associated with the reference object. Upon detection of the reference object, the path planning unit is configured to take the trajectory specification into account when evaluating the further trajectory.
In a further embodiment, the system is configured to access a 3D reference model of the environment, for example in the form of a CAD model, wherein the trajectory specification is provided in relation to the 3D reference model, in particular wherein the trajectory specification provides a planned path within the 3D reference model. The assignment data provides an assignment of the reference object to a location within the 3D reference model, and the system is configured to determine a coordinate transformation between the map of the environment and the 3D reference model by taking into account the assignment of the reference object to a location within the 3D reference model.
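As a non-limiting illustration of determining such a coordinate transformation, the following sketch estimates a rigid transform between the SLAM map frame and the 3D reference model frame from reference objects whose positions are known in both frames, using a standard least-squares (Kabsch/Umeyama-style) fit; at least three non-collinear correspondences are assumed, and the function name is illustrative.

```python
import numpy as np

# Sketch: rigid transform between the SLAM map frame and the 3D reference model
# frame from corresponding reference-object positions (at least three,
# non-collinear), via a standard SVD-based least-squares fit.

def map_to_model_transform(p_map: np.ndarray, p_model: np.ndarray):
    """Return rotation R and translation t such that p_model ~ R @ p_map + t."""
    c_map = p_map.mean(axis=0)
    c_model = p_model.mean(axis=0)
    H = (p_map - c_map).T @ (p_model - c_model)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_model - R @ c_map
    return R, t
```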
As an example, the planning map used in the path planning software is based on previously recorded real-world capture scans (e.g., 3D detection of the environment by the system or another mobile detection system, such as the Leica BLK2GO device). The planning map may be a digital model that the system can match by converting it through a simulator into a "machine readable" map consisting of images and lidar features. For example, lidar 3D points are directly converted to voxel-based occupancy maps, which can be used for path planning and collision avoidance.
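Purely as a non-limiting example of converting lidar 3D points into such a voxel-based occupancy map, the following sketch bins the points into voxels and marks a voxel occupied once it has collected a minimum number of hits; the voxel size and hit threshold are assumptions, and no free-space ray casting is modelled.

```python
import numpy as np

# Sketch: convert lidar 3D points into a voxel-based occupancy map. Voxel size
# and the minimum hit count marking a voxel occupied are illustrative.

def points_to_voxel_map(points: np.ndarray, voxel_size: float = 0.1,
                        min_hits: int = 3):
    """points: (N, 3) array of 3D points in the map frame."""
    idx = np.floor(points / voxel_size).astype(np.int64)
    voxels, counts = np.unique(idx, axis=0, return_counts=True)
    occupied = {tuple(v) for v, c in zip(voxels, counts) if c >= min_hits}
    return occupied  # set of (i, j, k) voxel indices usable for collision checks
```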
Another aspect of the invention relates to the use of fiducial markers (fiducials) to control the movement of the system, particularly of the autonomous robotic vehicle. For example, the system includes a list of known (i.e. taught) fiducial markers with corresponding actions. If a fiducial marker is detected, the corresponding action is triggered. For example, a user may dynamically indicate an action to be taken by the system without having direct access to the system (no physical contact) or to the main system controls.
In one embodiment, the system includes a fiducial marker configured to provide an indication of a local trajectory direction associated with the fiducial marker, for example a visible marker providing a visual determination of the local trajectory direction. The system includes a fiducial marker detector configured to detect the fiducial marker and determine the local trajectory direction, for example by identifying a visual attribute (e.g. a line or arrow indicating the local trajectory direction), or wherein the local trajectory direction is encoded in a visible code (e.g. a barcode or matrix barcode) read by the fiducial marker detector. The path planning unit is then configured to take the local trajectory direction into account when evaluating the further trajectory.
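As a non-limiting illustration, the following sketch maps the payload decoded from such a marker (e.g. a matrix barcode) to either a local trajectory direction or a corresponding action; the payload strings, the lookup table and the return format are assumptions, and the decoding of the code itself (camera- or lidar-intensity-based) is outside this sketch.

```python
import math

# Sketch: map a decoded fiducial payload to a local trajectory direction or an
# action. Payload strings and the lookup table are illustrative assumptions.

KNOWN_MARKERS = {
    "DIR:045":      {"type": "direction", "heading_deg": 45.0},
    "ACT:STOP":     {"type": "action", "action": "stop"},
    "ACT:NO_ENTRY": {"type": "action", "action": "avoid_area_near_marker"},
}

def handle_marker_payload(payload: str):
    entry = KNOWN_MARKERS.get(payload)
    if entry is None:
        return None                                   # unknown marker: ignore
    if entry["type"] == "direction":
        heading = math.radians(entry["heading_deg"])
        # unit vector of the local trajectory direction for the path planning unit
        return ("local_trajectory_direction", (math.cos(heading), math.sin(heading)))
    return ("trigger_action", entry["action"])
```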
For example, the fiducial marker is configured to provide a visual indication of the directions of at least two of the three main axes of the common coordinate system, in particular of all three main axes, wherein the system is configured to provide said referencing of the lidar data relative to the common coordinate system by determining the directions of the three main axes using the fiducial marker detector and taking these directions into account.
In a further embodiment, the fiducial marker comprises a reference value indication providing position information (e.g. 3D coordinates) in the common coordinate system or in an external coordinate system (e.g. a world coordinate system) with respect to a set pose of the fiducial marker. The set pose is a 6DoF pose (i.e. the position and orientation of the fiducial marker) and indicates the desired 6DoF pose for the marker. Thus, when correctly placed in the environment, the marker may serve as a so-called probe control point, e.g. for loop closure of the SLAM process and/or as an absolute reference in the world coordinate system or in a local site coordinate system.
Here, the system is configured to derive the set pose and to take the set pose into account for determining the local trajectory direction, in particular by determining the pose of the fiducial marker in the common or world coordinate system and performing a comparison of the determined pose of the fiducial marker with the set pose. For example, the comparison is taken into account for providing the referencing of the lidar data relative to the common coordinate system, which may result in an improved determination of the local trajectory direction.
In another embodiment, the fiducial marker is configured to provide an indication of a corresponding action to be performed by the system, wherein the system is configured to determine the corresponding action by using the fiducial marker detector, e.g. wherein the indication of the corresponding action is provided by a visible code, in particular a barcode, more in particular a matrix barcode.
In one embodiment, the corresponding action is at least one of: a stop operation of the system; a pause operation of the system; a restart operation of the system, e.g. starting a new capture/job (segment); returning to the start of the measurement task; not entering the area near the fiducial marker; and entering the area near the fiducial marker only in a time-controlled manner. In particular, the path planning unit is configured to take the corresponding action into account when evaluating the further trajectory.
In further embodiments, the fiducial marker comprises a visually detectable pattern, for example a pattern provided by areas of different reflectivity, different gray levels and/or different colors. The system is configured to determine the 3D orientation of the pattern by determining geometric features in an intensity image of the pattern and performing a plane fitting algorithm to determine the orientation of the pattern plane. The intensity image of the pattern is acquired by scanning the pattern with a lidar measurement beam of the lidar device and detecting the intensity of the returning lidar measurement beam. The 3D orientation of the pattern can then be determined by analyzing the appearance of the geometric features in the intensity image of the pattern.
Optionally, the distance to the pattern is derived (e.g. by using a lidar device), which may for example help to determine the 6DoF pose of the pattern, i.e. the 3D position and 3D orientation of the pattern plane.
In further embodiments, the pattern includes circular features and the system is configured to identify an image of the circular features within the intensity image of the pattern. A plane fitting algorithm is configured to fit an ellipse to the image of the circular feature and, based on the fit, determine an orientation of the pattern plane.
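As a non-limiting illustration of recovering the pattern plane orientation from the elliptical image of the circular feature, the following sketch exploits the fact that a circle tilted by an angle theta relative to the viewing direction projects to an ellipse with a minor/major axis ratio of cos(theta); the covariance-based axis estimate stands in for a full conic fit, and the edge-point sampling is an assumption.

```python
import numpy as np

# Sketch: estimate the tilt of the pattern plane from the ellipse that the
# circular feature produces in the intensity image. A circle tilted by theta
# projects to an ellipse with minor/major axis ratio cos(theta). The
# covariance-based axis estimate below is a stand-in for a full conic fit.

def tilt_from_circle_image(edge_points_2d: np.ndarray) -> float:
    """edge_points_2d: (N, 2) image points sampled on the projected circle edge."""
    centered = edge_points_2d - edge_points_2d.mean(axis=0)
    cov = np.cov(centered.T)
    eigvals = np.linalg.eigvalsh(cov)                 # ascending: [minor-ish, major-ish]
    axis_ratio = np.sqrt(eigvals[0] / eigvals[1])     # b / a
    return float(np.degrees(np.arccos(np.clip(axis_ratio, 0.0, 1.0))))
```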
As an example, furthermore, the center of the ellipse is determined and aiming information for aiming the center of the ellipse with a lidar measurement beam is derived. This aiming information can then be used as an aiming point reference for aiming the lidar measurement beam in order to derive a distance to the pattern, e.g. for determining a 6DoF pose of the pattern.
In a particular embodiment, the pattern comprises an internal geometric feature, in particular a rectangular feature surrounded by said circular feature, more in particular wherein said internal geometric feature is configured to provide said indication of said local trajectory direction, and said system is configured to determine said local trajectory direction by analyzing said intensity image of said pattern and by taking into account said 3D orientation of said pattern.
Another aspect of the invention relates to a calibration of the lidar device, in order to assess the alignment of the optics of the lidar device, for example after the autonomous robotic vehicle has suffered an impact or a fall.
Thus, in a further embodiment, the system is configured to determine a first geometry of the pattern by scanning the pattern with the lidar device and detecting the intensity of the returned lidar measurement beam; to perform a comparison of the first geometry with an expected shape of the pattern, in particular by taking into account the orientation of the pattern plane, more particularly the 3D orientation of the pattern; and, based on the comparison, to perform an evaluation, in particular a determination, of an optical calibration of the optics of the lidar device.
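Purely as a non-limiting example of such a calibration check, the following sketch compares the pattern geometry recovered from the lidar intensity scan with its expected (tilt-compensated) shape and flags the optics if the residual exceeds a tolerance; the residual metric and the tolerance value are assumptions made for illustration.

```python
import numpy as np

# Sketch: compare the pattern geometry recovered from the lidar intensity scan
# with its expected (tilt-compensated) shape; a large residual hints at a
# mis-calibration of the optics. Metric and tolerance are illustrative.

def calibration_residual(measured_pts: np.ndarray, expected_pts: np.ndarray) -> float:
    """RMS distance between corresponding measured and expected pattern points."""
    d = np.linalg.norm(measured_pts - expected_pts, axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

def optics_within_calibration(measured_pts, expected_pts, tol_m: float = 0.005) -> bool:
    return calibration_residual(np.asarray(measured_pts),
                                np.asarray(expected_pts)) <= tol_m
```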
In further embodiments, the system includes a camera specifically foreseen to be mounted on the autonomous robotic vehicle and configured to generate camera data during movement of the camera. The system is configured to image the pattern with the camera and determine a second geometry of the pattern; performing a comparison of the second geometry with an expected shape of the pattern, in particular by taking into account an orientation of the pattern plane, more in particular a 3D orientation of the pattern; and taking into account a comparison of the second geometry with an expected shape of the pattern when evaluating, in particular determining, an optical calibration of optics of the lidar device.
In a further embodiment, the system is configured to perform a system monitoring comprising a measurement of a jerk and/or a vibration of the lidar means and to automatically perform an evaluation of, in particular a determination of, an optical calibration of optics of the lidar means in dependence on the system monitoring.
In further embodiments, the system is configured to use data of the inertial measurement unit and/or data of the image pickup unit to provide a reference of the lidar data relative to a common coordinate system and/or to generate a map of the environment and to determine a trajectory of a path that the autonomous vehicle has traversed within the map of the environment.
The system may also be configured to utilize a network of autonomous robotic devices, for example, to provide gap filling and to provide additional information for probe tasks and/or path planning. For example, in a further embodiment, the system is configured to receive an additional environmental map generated by means of a further SLAM process associated with a further autonomous robot vehicle, and the evaluation of the further trajectory takes into account the additional environmental map by evaluating an estimated point distribution map of an estimated 3D point cloud, the estimated point distribution map being provided by the lidar apparatus on a trajectory segment of the further trajectory within the additional environmental map and projected onto the additional environmental map.
Drawings
In the following, the system according to different aspects of the invention is described or explained in more detail, by way of example only, with reference to working examples schematically shown in the drawings. Like elements in the drawings are denoted by like reference numerals. The embodiments are generally not shown to scale and should not be construed as limiting the invention. In detail:
FIG. 1: an exemplary embodiment of an autonomous robotic vehicle equipped with a lidar apparatus according to the present disclosure;
FIG. 2: an exemplary workflow using an autonomous robotic vehicle according to the present disclosure;
FIG. 3: exemplary embodiments of a lidar apparatus as a dual-axis laser scanner;
FIG. 4: exemplary communication schemes between different components of a system according to the present invention;
FIG. 5: an exemplary schematic diagram of path planning software utilizing reference objects in accordance with the present invention;
FIG. 6: exemplary use of fiducial markers according to the present invention;
FIG. 7: exemplary embodiments of a fiducial marker according to the present invention.
Detailed Description
Fig. 1 depicts an exemplary embodiment of an autonomous robotic vehicle equipped with a lidar device to be used in a system providing 3D detection according to the present disclosure. Here, the robotic vehicle 1 is implemented as a four-legged robot. For example, such robots are often used in unknown terrain having different surface characteristics, with debris and steep slopes. The robot 1 has sensors 2 and processing capabilities that provide simultaneous localization and mapping, including: receiving perception data providing a representation of the surroundings of the autonomous robot 1 at the current position; generating a map of the environment using the perception data; and determining the trajectory of the path that the robot 1 has traversed within the map of the environment.
According to one aspect of the invention, the robot is equipped with a lidar device 3 having a field of view of 360 degrees about a vertical axis 4 and a vertical field of view 5 of at least 130° about a horizontal axis (see fig. 3), wherein the lidar device is configured to generate lidar data at a point acquisition rate of at least 300,000 points per second. Here, the lidar device is embodied exemplarily as a so-called two-axis laser scanner (see fig. 3), wherein the vertical axis 4 is also referred to as the slow axis and the horizontal axis is also referred to as the fast axis.
The SLAM unit is configured to receive the lidar data as perception data, which e.g. provides an improved field of view and range, and thereby an improved larger-scale path determination. This is particularly beneficial, for example, for exploring unknown terrain. Another benefit is the omnidirectional horizontal field of view about the vertical axis 4 and the 130-degree vertical field of view 5 about the horizontal axis, which provides the ability to cover the front, the rear, and the ground substantially simultaneously.
The system comprising the legged robot 1 and the lidar device 3 further comprises a path planning unit configured to perform an evaluation of a further trajectory within a map of the environment in relation to an estimated point distribution map of the estimated 3D point cloud, which estimated point distribution map is provided by the lidar device 3 on the further trajectory and projected onto the map of the environment.
As an example, the potential further trajectory is provided by an external source, and the system is configured to optimize and/or extend the potential further trajectory (e.g. to explore more rooms), for example to provide a desired point distribution on the optimized further trajectory when generating the lidar data. The further trajectory can also be determined "from scratch", for example by using algorithms configured to optimize the distance to the walls and/or by implementing optimization principles based on so-called watertight probabilistic occupancy maps and dense maximum-likelihood occupancy voxel maps.
Fig. 2 depicts an exemplary workflow using an autonomous robotic vehicle according to the present invention, schematically showing a building floor to be probed, where a path of interest (30) (e.g., a potential additional trajectory) is provided by a mobile reality capturing device (top of the figure).
A potential further trajectory is, for example, a recorded trajectory of a mobile detection device that has previously measured the environment, or a trajectory of points set out by means of a total station, e.g. where the total station comprises a camera and a SLAM functionality for determining the path along which the total station has been moved.
In the exemplary workflow depicted in this figure, a user walks through a building and thereby roughly explores the environment by using a hand-held mobile mapping device (e.g., the BLK2GO reality capture device of Leica Geosystems), defining a path of interest 30, i.e., a trajectory taken by the BLK2GO device.
As depicted at the bottom of the figure, the autonomous robot then follows the path of interest 30 (after the mission, or live while the user is guiding with the BLK2GO) on an optimized trajectory 31, which provides the best point coverage of the lidar device, for example where the distance to walls and objects within the environment is optimized, and where open spaces and additional rooms along the path of interest are explored.
The optimized trajectory 31 includes a portion associated with an exploration area 32 (e.g. an area that the user has ignored or failed to enter when detecting the building with the mobile reality capture device). Other portions of the optimized trajectory 31 are associated with a room 33 in which the user had chosen a trajectory too poor to generate a point cloud of the desired quality. For example, the optimized trajectory 31 differs from the originally provided path of interest 30 in that the optimized trajectory improves point density and room coverage by reducing hidden areas due to line-of-sight obstructions.
Fig. 3 shows an exemplary embodiment of the lidar device 3 from fig. 1 in the form of a so-called two-axis laser scanner. The laser scanner comprises a base 7 and a support 8, the support 8 being mounted on the base 7 rotatably about the vertical axis 4. In general, the rotation of the support 8 about the vertical axis 4 is also referred to as an azimuthal rotation, irrespective of whether the laser scanner or the vertical axis 4 is perfectly vertically aligned.
The heart of the laser scanner is an optical ranging unit 9, which is arranged in the support 8 and is configured to perform distance measurements by emitting a pulsed laser beam 10 (for example comprising 1.5 million pulses per second) and by detecting the returning portion of the pulsed laser beam by means of a receiving unit comprising a photosensitive sensor. Thus, a pulse echo is received from a backscattering surface point of the environment, wherein the distance to the surface point can be derived based on the time of flight, shape and/or phase of the transmitted pulse.
The scanning movement of the laser beam 10 is performed by rotating the support 8 relative to the base 7 about the vertical axis 4 and by means of a rotator 11, which is rotatably mounted on the support 8 and rotates about the horizontal axis 6. For example, both the transmitted laser beam 10 and the returning portion of the laser beam are deflected by a reflective surface that is integral with, or applied to, the rotator 11. Alternatively, the transmitted laser radiation comes from the side facing away from the reflective surface (i.e. from the inside of the rotator 11) and is emitted into the environment via a passage region within the reflective surface.
In order to determine the direction of transmission of the ranging beam 10, many different angle determination units are known in the prior art. For example, the emission direction may be detected by means of an angular encoder configured to acquire angular data for detecting the absolute angular position and/or the relative angular change of the support 8 or of the rotary body 11, respectively. Another possibility is to determine the angular position of the support 8 or the rotating body 11 by detecting only full revolutions (revolution) and using knowledge of the set rotational frequency, respectively.
The visualization of the data may be based on well known data processing steps and/or display options, for example, wherein the acquired data is presented in the form of a 3D point cloud, or wherein a 3D vector file model is generated.
The laser scanner is configured such that its total field of view for the measurement operation is 360 degrees in the azimuthal direction defined by the rotation of the support 8 about the vertical axis 4, and at least 130 degrees in the oblique direction defined by the rotation of the rotator 11 about the horizontal axis 6. In other words, the laser beam 10 can cover the vertical field of view 5, which extends in the oblique direction over an opening angle of at least 130 degrees, regardless of the azimuth angle of the support 8 about the vertical axis 4.
For example, the total field of view generally refers to the central reference point 12 of the laser scanner defined by the intersection of the vertical axis 4 and the horizontal axis 6.
Fig. 4 illustrates schematically different communication and data processing schemes implemented by different embodiments of the system.
The processing may be performed on an onboard computing unit, for example a dedicated computing unit 13 installed on the autonomous robot 1 specifically for this purpose or a computing unit provided by the robot 1 itself. The processing may also be performed by means of cloud computing 14 and on a companion device 15, such as a tablet computer.
For example, as depicted by the two scenarios in the left part of the figure, the dedicated onboard computing unit 13 extends the local computing capabilities, while at the same time it may be connected to a local operator companion device 15 for areas where the system has no connectivity (top left of the figure), or it may serve as a cloud interface to the data cloud 14 to enable cloud computing (bottom left of the figure). Alternatively, the lidar device 3 is configured to perform at least part of the processing, e.g. to calculate the trajectory, and to communicate locally with the companion device 15 acting as a cloud interface and/or performing further processing steps (top right of the figure). The lidar device 3 may also be linked directly to the cloud 14 (bottom right of the figure), wherein the processing is dynamically distributed by the cloud 14.
Switching between onboard computing, cloud processing, lidar device processing, and companion device processing is performed dynamically, depending on the connectivity between computing locations and the available power on the mobile robot 1. Typically, the processing will be moved from the mobile robot to, for example, the cloud and/or companion device whenever possible, as the battery power and data storage of the mobile robot 1 and the devices located on the robot are limited.
Fig. 5 schematically depicts path planning software utilizing a reference object 16. Only one reference object 16 is needed for the principle to work. However, multiple reference objects 16 allow for more accurate positioning.
The reference object 16 is virtually introduced into a planning software 17, the planning software 17 comprising a digital model 18 of the environment, for example a CAD model. A physical counterpart 19 (e.g. in the form of a matrix barcode) of the virtual reference object 16 is generated and placed in the real environment. In the planning software, a further path 20 within the digital model 18 is associated with the virtual reference object 16 such that control data of the robot 1 can be derived therefrom, which allows to position the robot 1 in the real environment and to instruct the robot 1 to follow the further path 20 in the real environment. Thus, the planned path may be used as input to the robot control software 21.
For example, upon visual detection of the real reference object 19 (here in the form of a matrix barcode), the path planning unit associates the detected reference object 19 with a predefined further path 20 and uses the predefined trajectory information as input to evaluate the further trajectory, for example by coordinate transformation between the real world and the "planned world".
Fig. 6 schematically depicts an exemplary use of a fiducial marker according to the present invention. For example, the robot is equipped with a known list of fiducial markers 22 with corresponding actions. If a reference mark 22 is detected, a corresponding action is triggered. For example, the operator loads a particular fiducial marker 220 on the smartphone 23 in the form of a matrix barcode, wherein the particular fiducial marker 220 is associated with stopping or pausing the system to replace the battery. The operator presents the fiducial markers 220 to a camera mounted on the robot 1, wherein the system has access to a data cloud 14 that provides an association of different actions 24 to each of a list of fiducial markers 22, the list of fiducial markers 22 including a particular fiducial marker 220. Thus, the operator is able to control the system, at least to some extent, without actually having access to the control of the robot 1. This may be useful, for example, for emergency situations, as non-operators are allowed to interact with the robot, e.g., to prevent motion, damage, etc.
Alternatively or additionally, another specific reference marker 221 is fixedly placed within the environment, e.g. to ensure that the robot 1 does not enter a specific area. Additional markers (not shown) may be used as coded probe control points (combined targets). Other markers may provide time-gated (time-gated) rules and actions, such as "do not go between 10 and 11 am".
FIG. 7 depicts an exemplary embodiment of the fiducial marker 40. On the left, the fiducial marks 40 are shown in a front view. On the right, the fiducial marker 40 is shown in an angled view.
The fiducial marker comprises a visually detectable pattern, e.g. provided by areas of different reflectivity, different grey levels and/or different colors, comprising a circular feature 41 and an inner geometric feature 42 surrounded by the circular feature 41.
For example, the system is configured to determine a 6DoF (six degrees of freedom) pose of the fiducial marker. The 6DoF pose is derived by determining the 3D orientation of the pattern (i.e. the 3D orientation of the pattern plane) and by determining the 3D position of the pattern. For example, the corners 43 of the marker (at least three) are analyzed to provide the angles that determine the pattern plane. The corners 43 of the marker may be determined using a camera on the UGV or UAV, respectively.
The circular feature 41 provides an improved determination of the 3D orientation of the pattern plane. For example, the system is configured to generate an intensity image of the pattern by scanning the pattern with a lidar measurement beam of the lidar device, wherein the intensity image is generated by detecting the intensity of the returned lidar measurement beam. The 3D orientation of the pattern plane is determined with improved accuracy by identifying an image of a circular feature within the intensity image of the pattern and running a plane fitting algorithm to fit an ellipse to the image of the circular feature. Additionally, the center of the ellipse may be determined and used as the aiming point of the lidar device to determine the 3D position of the pattern, allowing the 6DoF pose of the pattern to be determined.
Then, the 3D orientation of the pattern, in particular the 6DoF pose of the pattern, is taken into account for determining the local trajectory direction and/or in the evaluation of the further trajectory. For example, the 6DoF pose is considered in order to provide an improved reference of the lidar data relative to a common coordinate system.
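Re-referencing lidar data with such a pose amounts to a single rigid transform. The following sketch assumes the pose relating the sensor frame to the common coordinate system is available as a rotation matrix and translation vector (e.g. chained from the marker pose); this interface is an illustrative assumption:

```python
import numpy as np

def pose_to_matrix(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def reference_points(points, T_sensor_to_common):
    """Re-reference lidar points (N, 3) into the common coordinate system."""
    pts = np.asarray(points, dtype=float)
    homo = np.hstack([pts, np.ones((pts.shape[0], 1))])
    return (T_sensor_to_common @ homo.T).T[:, :3]
```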
Although the invention has been exemplified above with partial reference to some preferred embodiments, it must be understood that numerous modifications and combinations of different features of these embodiments can be made. All of these modifications fall within the scope of the appended claims.

Claims (15)

1. A system for providing 3D detection of an environment by an autonomous robotic vehicle (1), the system comprising:
a simultaneous localization and mapping unit, the simultaneous localization and mapping unit being a SLAM unit configured to perform a simultaneous localization and mapping process, the simultaneous localization and mapping process being a SLAM process, the SLAM process comprising: receiving perception data providing a representation of the surroundings of the autonomous robotic vehicle (1) at a current location; generating a map of an environment using the perception data; and determining a trajectory of a path that the autonomous robotic vehicle has traversed within the map of the environment;
a path planning unit configured to: determining a path to be taken by the autonomous robotic vehicle based on the map of the environment; and
a lidar device (3) specifically foreseen to be mounted on the autonomous robotic vehicle (1) and configured to generate lidar data to provide a cooperative scanning of the environment related to the lidar device (3),
wherein the system is configured to: generating the lidar data during movement of the lidar device (3) and providing a reference of the lidar data relative to a common coordinate system to determine a 3D detection point cloud of the environment,
characterized in that
the lidar device (3) is configured to: having a field of view of 360 degrees about a first axis (4) and 130 degrees about a second axis (6) perpendicular to the first axis, and generating the lidar data with a point acquisition rate of at least 300000 points per second,
the SLAM unit is configured to: receiving the lidar data as part of the perception data and, based on the perception data, generating the map of the environment and determining the trajectory of the path that the autonomous robotic vehicle (1) has traversed within the map of the environment, and
the path planning unit is configured to: the path (31) to be taken is determined by performing an evaluation of a further trajectory (30, 31) within the map of the environment in relation to an estimated point distribution map of an estimated 3D point cloud, which estimated point distribution map is provided by the lidar means (3) on the further trajectory (30, 31) and projected onto the map of the environment.
2. The system according to claim 1,
characterized in that
the lidar device (3) is implemented as a laser scanner configured to generate the lidar data by means of a rotation of a laser beam (10) about two rotational axes (4, 6), wherein,
the laser scanner comprises a rotating body (11) configured to rotate about one (6) of the two axes of rotation and to provide a variable deflection of an exit portion and a return portion of the laser beam (10) providing a rotation of the laser beam about said one of the two axes of rotation, which is a fast axis (6),
the rotating body (11) rotates about the fast axis (6) at at least 50 Hz and the laser beam rotates about the other of the two axes of rotation, which is the slow axis (4), at at least 0.5 Hz,
the laser beam (10) is emitted as a pulsed laser beam, in particular wherein the pulsed laser beam comprises 1.5 million pulses per second, and
for the rotation of the laser beam (10) about the two axes of rotation (4, 6), the field of view about the fast axis is 130 degrees and the field of view about the slow axis is 360 degrees.
3. The system according to claim 1 or 2,
characterized in that
the path planning unit is configured to: receiving evaluation criteria defining different measurement specifications of the system, in particular different target values of the probe point cloud; and taking into account the evaluation criterion to evaluate the further trajectory (30, 31),
in particular wherein the evaluation criterion defines at least one of:
a desired path through the environment;
a point density of the detection point cloud projected onto the map of the environment, in particular at least one of a minimum point density, a maximum point density, and an average point density of the detection point cloud projected onto the map of the environment;
an energy consumption threshold, in particular a maximum allowed energy consumption, for the system to complete the further trajectory and to provide the detection point cloud;
a time consumption threshold, in particular a maximum allowed time, for the system to complete the further trajectory and provide the detection point cloud;
a path length threshold of the further trajectory, in particular a minimum path length and/or a maximum allowed path length;
a minimum area to be covered by the trajectory;
a minimum spatial volume covered by the detection point cloud; and
a minimum or maximum horizontal angle between a direction of travel at an end of the trajectory of the path that the autonomous robotic vehicle has traversed and a direction of travel at a beginning of the further trajectory.
4. The system according to any one of the preceding claims,
characterized in that
the path planning unit is configured to receive a path of interest (30) and to optimize and/or expand the path of interest (30) to determine a path (31) to take.
5. The system according to any one of the preceding claims,
characterized in that
the system is configured to: accessing identification information and assignment data of a reference object (16), wherein the assignment data provides an assignment of the reference object (16) to a trajectory specification within a vicinity of the reference object, in particular a further direction of travel with respect to an external coordinate system or with respect to a base direction,
the system comprises a reference object detector configured to use the identification information and to provide detection of the reference object (16) within the environment based on the identification information, and
upon detection of the reference object, the path planning unit is configured to take into account the trajectory specification when evaluating the further trajectory,
in particular wherein
the system is configured to access a 3D reference model (18) of the environment, wherein the trajectory specification is provided in relation to the 3D reference model, in particular wherein the trajectory specification provides a planned path (20) within the 3D reference model (18),
the assignment data provides an assignment of the reference object (16) to a position within the 3D reference model (18), and
the system is configured to: determining a coordinate transformation between the map of the environment and the 3D reference model by taking into account the assignment of the reference object to a location within the 3D reference model.
6. The system according to any one of the preceding claims,
characterized in that
the system comprises a fiducial marker (19, 40), in particular a visible marker, configured to provide an indication of a local trajectory direction in relation to the fiducial marker (19, 40), the visible marker providing a visual determination of the local trajectory direction,
the system comprises a fiducial marker detector configured to detect the fiducial marker and to determine the local trajectory direction, and
the path planning unit is configured to take into account the local trajectory direction when evaluating the further trajectory (30, 31).
7. The system according to claim 6,
characterized in that
the fiducial marker (19, 40) is configured to provide an indication, in particular a visual indication, of the directions of at least two, in particular three, of the three principal axes of the common coordinate system, wherein the system is configured to determine the directions of the three principal axes by using the fiducial marker detector, and
the system is configured to take into account the directions of the three principal axes to provide the reference of the lidar data relative to the common coordinate system.
8. The system according to claim 6 or 7,
characterized in that
the fiducial marker (19, 40) comprises a reference value indication providing position information, in particular 3D coordinates, in the common coordinate system or in an external coordinate system, in particular in a world coordinate system, with respect to a set pose of the fiducial marker (19, 40), wherein the system is configured to: the set pose is derived and taken into account to determine the local trajectory direction, in particular by determining the pose of the reference marker (19, 40) in the common coordinate system or the world coordinate system and performing a comparison of the determined pose of the reference marker with the set pose.
9. The system according to any one of claims 6 to 8,
characterized in that
the fiducial marker (19, 40) is configured to provide an indication of a corresponding action to be performed by the system, wherein the system is configured to determine the corresponding action by using the fiducial marker detector, in particular wherein the indication of the corresponding action is provided by a visible code, in particular a barcode, more in particular a matrix barcode,
in particular wherein the corresponding action is at least one of:
stopping operation of the system,
pausing operation of the system,
restarting operation of the system,
returning to the start of the measurement task,
not entering the area near the fiducial marker, and
entering the area near the fiducial marker in a time-controlled manner,
in particular wherein the path planning unit is configured to take into account the corresponding action when evaluating the further trajectory.
10. The system of any one of claims 6 to 9,
characterized in that
the fiducial marker (19, 40) comprises a visually detectable pattern, in particular provided by areas of different reflectivity, different grey levels and/or different colours, and
the system is configured to determine a 3D orientation of the pattern by:
determining geometrical features (41, 42, 43) in an intensity image of the pattern, wherein the intensity image of the pattern is acquired by scanning the pattern with a lidar measurement beam of the lidar device and detecting the intensity of the returned lidar measurement beam, and
performing a plane fitting algorithm by analyzing the appearance of the geometrical features (41, 42, 43) in the intensity image of the pattern in order to determine the orientation of the pattern plane.
11. The system according to claim 10,
characterized in that
the pattern comprises a circular feature (41),
the system is configured to identify an image of the circular feature (41) within the intensity image of the pattern, and
the plane fitting algorithm is configured to fit an ellipse to the image of the circular feature (41) and to determine the orientation of the pattern plane based on the fitting, in particular wherein a center of the ellipse is determined and aiming information for aiming at the center of the ellipse with the lidar measurement beam is derived,
in particular wherein the pattern comprises internal geometrical features (42, 43), in particular rectangular features surrounded by the circular feature (41), more in particular wherein the internal geometrical features (42, 43) are configured to provide the indication of the local trajectory direction and the system is configured to determine the local trajectory direction by analyzing the intensity image of the pattern and by considering the 3D orientation of the pattern.
12. The system according to claim 10 or 11,
characterized in that
the system is configured to:
determining a first geometry of the pattern using the scanning of the pattern by the lidar device (3) and the detection of the intensity of the returned lidar measurement beam (10),
performing a comparison of the first geometric shape with an expected shape of the pattern, in particular by taking into account the orientation of the pattern plane, more in particular the 3D orientation of the pattern, and based on the comparison,
performing an evaluation of an optical calibration of optics of the lidar device, in particular a determination of an optical calibration of optics of the lidar device,
in particular wherein the system comprises a camera (2) specifically foreseen to be mounted on the autonomous robotic vehicle and configured to generate camera data during movement of the camera, wherein the system is configured to:
imaging the pattern by means of the camera (2) and determining a second geometry of the pattern,
performing a comparison of the second geometric shape with the expected shape of the pattern, in particular by taking into account the orientation of the pattern plane, more in particular the 3D orientation of the pattern, and
taking into account the comparison of the second geometry with an expected shape of the pattern when evaluating, in particular determining, the optical calibration of the optics of the lidar device (3).
13. The system according to claim 12,
characterized in that
the system is configured to perform system monitoring including measurement of pitch and/or vibration of the lidar device (3), and
the evaluation of the optical calibration of the optics of the lidar apparatus, in particular the determination of the optical calibration of the optics of the lidar apparatus, is performed automatically in dependence on the system monitoring.
14. The system according to any one of the preceding claims,
characterized in that
the system comprises an on-board computing unit (13) specifically foreseen to be located on the autonomous robotic vehicle (1) and configured to perform at least part of a system process, wherein said system process comprises: executing a SLAM process; providing the reference to the lidar data; and performing the evaluation of the further trajectory (30, 31),
the system comprises an external computing unit (14, 15) configured to perform at least part of the system processing,
the system includes a communication module configured to provide communication between the on-board computing unit (13) and the external computing unit (14, 15), and
the system includes a workload selection module configured to:
monitoring available bandwidth of the communication module for the communication between the on-board computing unit (13) and the external computing unit (14, 15);
monitoring the available power of the onboard computing unit (13), the lidar device (3), the SLAM unit and the path planning unit; and
dynamically changing the assignment of at least a portion of the system processing to the on-board computing unit and the external computing unit depending on the available bandwidth and the available power.
15. The system according to any one of the preceding claims,
characterized in that
the system is configured to receive an additional map of the environment generated by means of another SLAM process associated with another autonomous robotic vehicle, and
the evaluation of the further trajectory (30, 31) takes into account an additional map of the environment by evaluating an estimated point distribution map of the estimated 3D point cloud, which estimated point distribution map is provided by the lidar device (3) on a trajectory segment of the further trajectory (30, 31) within the additional map of the environment and projected onto the additional map of the environment.
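As a purely illustrative complement to claim 1 above, the sketch below shows one way an estimated point distribution map could be accumulated along a candidate trajectory on a 2D grid projection of the environment map and scored against a minimum point density; the grid-based formulation and all names are assumptions, not part of the claimed system:

```python
import numpy as np

def estimated_point_distribution(map_shape, trajectory_cells, points_per_cell_fn):
    """Accumulate an estimated point distribution map for one candidate trajectory.

    map_shape:          (rows, cols) of the 2D grid projection of the environment map.
    trajectory_cells:   iterable of (row, col) cells visited along the candidate trajectory.
    points_per_cell_fn: callable returning, for a vehicle pose cell, a dict
                        {(row, col): expected_point_count} derived from the lidar
                        field of view, range and point acquisition rate.
    """
    dist = np.zeros(map_shape)
    for pose_cell in trajectory_cells:
        for (r, c), n in points_per_cell_fn(pose_cell).items():
            if 0 <= r < map_shape[0] and 0 <= c < map_shape[1]:
                dist[r, c] += n
    return dist

def score_trajectory(dist, min_density):
    """Fraction of covered cells whose estimated density meets a minimum criterion."""
    covered = dist > 0
    if not covered.any():
        return 0.0
    return float(np.mean(dist[covered] >= min_density))
```

In this reading, candidate further trajectories would be compared by their scores (or by further criteria such as those of claim 3) before the path to be taken is selected.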

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP21193139.9 2021-08-25
EP21193139.9A EP4141474A1 (en) 2021-08-25 2021-08-25 System for 3d surveying by an autonomous robotic vehicle using lidar-slam and an estimated point distribution map for path planning

Publications (1)

Publication Number Publication Date
CN115932882A 2023-04-07

Family

ID=77543311

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211000680.8A Pending CN115932882A (en) 2021-08-25 2022-08-19 System for providing 3D detection of an environment through an autonomous robotic vehicle

Country Status (3)

Country Link
US (1) US20230064071A1 (en)
EP (1) EP4141474A1 (en)
CN (1) CN115932882A (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116125995B (en) * 2023-04-04 2023-07-28 华东交通大学 Path planning method and system for high-speed rail inspection robot
CN116908810B (en) * 2023-09-12 2023-12-12 天津大学四川创新研究院 Method and system for measuring earthwork of building by carrying laser radar on unmanned aerial vehicle
CN116977328B (en) * 2023-09-19 2023-12-19 中科海拓(无锡)科技有限公司 Image quality evaluation method in active vision of vehicle bottom robot
CN117372632B (en) * 2023-12-08 2024-04-19 魔视智能科技(武汉)有限公司 Labeling method and device for two-dimensional image, computer equipment and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9864377B2 (en) * 2016-04-01 2018-01-09 Locus Robotics Corporation Navigation using planned robot travel paths
DE102016009461A1 (en) * 2016-08-03 2018-02-08 Daimler Ag Method for operating a vehicle and device for carrying out the method
EP3671261A1 (en) * 2018-12-21 2020-06-24 Leica Geosystems AG 3d surveillance system comprising lidar and multispectral imaging for object classification

Also Published As

Publication number Publication date
EP4141474A1 (en) 2023-03-01
US20230064071A1 (en) 2023-03-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination