WO2022167097A1 - Human-robot collaborative navigation - Google Patents

Human-robot collaborative navigation

Info

Publication number
WO2022167097A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
slam
sensor
sensors
garment
Prior art date
Application number
PCT/EP2021/052937
Other languages
French (fr)
Inventor
Polychronis KONTAXAKIS
Original Assignee
ABB Schweiz AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ABB Schweiz AG
Priority to PCT/EP2021/052937
Publication of WO2022167097A1

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0234 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using optical markers or beacons
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/206 Instruments for performing navigational calculations specially adapted for indoor navigation
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0248 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means in combination with a laser
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0268 Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D1/0274 Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • General Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Electromagnetism (AREA)
  • Optics & Photonics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

A garment (100) to be worn by a human operator (90), comprising: a color-depth sensor (121); a simultaneous localization and mapping, SLAM, sensor (122); and a wireless interface (130) for reporting data from the sensors, wherein the color-depth sensor and/or the SLAM sensor is configured to derive a home position from a tag and calibrate its data accordingly. A data processing device comprises: a wireless interface configured to receive point-cloud data and pose data from at least two mobile sensors (120) moving in an environment; and processing circuitry configured to generate a map by executing a collaborative SLAM algorithm using as inputs the received point-cloud data and pose data.

Description

HUMAN-ROBOT COLLABORATIVE NAVIGATION
TECHNICAL FIELD
[0001] The present disclosure relates to the field of robot navigation, especially for indoor applications, and to methods and devices for simultaneous localization and mapping (SLAM).
BACKGROUND
[0002] Various entities may employ mobile robots or other autonomously controlled devices in an indoor environment or another environment in which human operators are also present. For example, such devices may be employed to move or deliver items within the environment, to clean or inspect portions of the environment, or to operate instrumentation or equipment within the environment. Here, conventional applications may employ real-time sensing to allow the robot or a similar device to determine the presence of humans in the environment, locate them in real time and take action to avoid contacting the sensed humans. Such real-time sensing and avoidance, however, may be computationally intensive and may occupy the computational resources of the mobile robot inconveniently.
[0003] Therefore, mobile robots’ navigational capabilities and the efficiency of their implementations remain two areas in need of development.
SUMMARY
[0004] One objective of the present disclosure is to make available methods and devices for augmenting mobile robots’ navigational capability through multi-robot collaboration and/or human-robot collaboration. It is a further objective to provide an attractive operator-carried sensor arrangement. It is a still further objective to allow a workable map of an environment to be generated in a shorter time on the basis of continuous data collection by mobile sensors.
[0005] These and other aspects are achievable by the invention defined in the independent claims. The dependent claims relate to advantageous embodiments of the invention.
[0006] In a first aspect of the invention, there is provided a method and a device for generating a map of an environment, in which one or more home positions are designated with tags. The method comprises: receiving point-cloud data and pose data from at least two mobile sensors moving in the environment; and generating a map by executing a collaborative simultaneous localization and mapping (SLAM) algorithm using as inputs the received point-cloud data and pose data. At least one of the mobile sensors is configured to derive a home position from a tag and to calibrate its data accordingly. The device is configured to perform the method.
[0007] The first aspect relies on collaborative data collection by two or more mobile sensors when these move in the environment. When point-cloud data is combined with pose data, it can be accurately converted between different frames of reference. Pose data also allows registration (i.e., alignment) of point-cloud data that have been acquired from separate viewpoints. Further, the use of well-known home positions as input data will help the SLAM algorithm converge in a shorter time.
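As a purely illustrative sketch (the function names and the planar-pose simplification are assumptions made here, not features of the disclosure), the following snippet shows the kind of frame conversion that pose data enables: points acquired in a sensor's local frame are rigidly transformed into a common map frame, after which clouds acquired from separate viewpoints overlap and can be registered against each other.

```python
# Illustrative only: expressing locally acquired point clouds in a common
# map frame using each sensor's reported pose (planar pose for brevity).
import numpy as np

def yaw_to_rotation(yaw: float) -> np.ndarray:
    """Rotation matrix for a yaw-only pose; a full 6-DoF pose would use a
    quaternion or a complete rotation matrix instead."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def points_to_map_frame(points_local: np.ndarray,
                        position: np.ndarray,
                        yaw: float) -> np.ndarray:
    """Rigidly transform an (N, 3) cloud from the sensor frame into the
    map frame: p_map = R @ p_local + t."""
    return points_local @ yaw_to_rotation(yaw).T + position

# Two mobile sensors observing the same region from different viewpoints
# produce clouds that overlap once both are expressed in the map frame,
# which is what makes registration (alignment) possible.
cloud_a = points_to_map_frame(np.random.rand(100, 3), np.array([0.0, 0.0, 0.0]), 0.0)
cloud_b = points_to_map_frame(np.random.rand(100, 3), np.array([2.0, 1.0, 0.0]), np.pi / 2)
```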
[0008] Within a second aspect of the invention, a garment to be worn by a human operator is proposed. The garment is adapted to be worn on the head, foot, hand or other bodily part of the operator and comprises: a color-depth sensor, a SLAM sensor, and a wireless interface for reporting data from the sensors. The color-depth sensor and/or the SLAM sensor is configured to derive a home position from a tag and calibrate its data accordingly.
[0009] The garment is expected to be perceived as highly acceptable by the user community, not least since the garment may additionally be a functional one (e.g., cap, sunhat, protective helmet) or indicate the standing of the wearer (e.g., official uniform). The attractiveness of the garment will indirectly increase the availability and diversity of sensor data, for the benefit of the SLAM algorithm’s generation of the map of the environment.
[0010] As used herein, “point-cloud data” may be a representation of three-dimensional points corresponding to detected points in an environment, e.g., points on the surface of a light-reflecting or lidar-reflecting object. The points may be represented in cartesian, radial or polar coordinates and they may additionally carry color or chromaticity values. “Pose data” may indicate a current position and orientation of a mobile sensor. The term “map” covers any representation of an environment that supports robot navigation. A map need not be in a human-readable format; nor must it exhaustively indicate all objects and structures down to a specific length scale, like a geographic map usually does. A “SLAM algorithm” is one which constructs or updates a map of an unknown environment on the basis of sensor data while simultaneously keeping track of the sensor(s)’ location in the environment. The algorithm may include Bayesian calculations and/or Kalman-type filtering. Finally, a “home position” is normally related to the tag, such as a predefined position at which the tag is fixedly arranged, or a variable position in the vicinity of the tag where precise positioning of a mobile sensor relative to the tag is possible. Typically, the accuracy of a home position is guaranteed; it may be determined empirically or estimated based on an analysis of the performance of the hardware and software involved.
[0011] Generally, all terms used in the claims are to be interpreted according to their ordinary meaning in the technical field, unless explicitly defined otherwise herein. All references to “a/an/the element, apparatus, component, means, step, etc.” are to be interpreted openly as referring to at least one instance of the element, apparatus, component, means, step, etc., unless explicitly stated otherwise. The steps of any method disclosed herein do not have to be performed in the exact order disclosed, unless explicitly stated.
BRIEF DESCRIPTION OF THE DRAWINGS
[0012] Aspects and embodiments are now described, by way of example, with reference to the accompanying drawings, on which:
figure 1 shows a sensor-equipped garment and portable support unit intended for a human operator;
figure 2 is a block diagram of the portable support unit shown in figure 1;
figure 3 is a floor plan of a building which contains a furnished indoor environment where humans and mobile robots operate;
figure 4 depicts a heterogeneous multi-agent Bayesian graph network;
figure 5 is a flowchart of a method for generating a map of an environment;
figure 6 is a block diagram of a device for generating a map of an environment; and
figure 7 shows a mobile robot equipped with a color-depth sensor and a SLAM sensor.
DETAILED DESCRIPTION
[0013] The aspects of the present disclosure will now be described more fully hereinafter with reference to the accompanying drawings, on which certain embodiments of the invention are shown. These aspects may, however, be embodied in many different forms and should not be construed as limiting; rather, these embodiments are provided by way of example so that this disclosure will be thorough and complete, and to fully convey the scope of all aspects of the invention to those skilled in the art. Like numbers refer to like elements throughout the description.
[0014] Figure 3 shows an example industrial indoor environment 300 where the methods and devices according to the present invention may be practiced for the purpose of navigation and mapping. The invention may provide comparable advantages when applied in an outdoor context. Delimited by outer and inner building walls drawn in solid lines, the example environment 300 in figure 3 is designed for the handling and dispatch of goods 320, which may comprise items of break-bulk cargo. For these purposes, suspended, floor- and wall-mounted furniture 310 is installed in the environment 300, as well as a conveyor 311. At least parts of the furniture 310 can be repositioned to adapt the environment 300 to a different task. Human operators 90, forklifts 330 as well as autonomous mobile robots 340 circulate in the environment 300.
[0015] Because of the variability of the environment 300 and since the robots 340 generally move more efficiently when they have access to an up-to-date map of the navigable space, the mapping of the environment 300 has been facilitated by the provision of tags 350. The tags 350 may be optical, magnetic or acoustic, or may be configured for unidirectional or bidirectional radio-frequency communication, to allow contactless positioning of a nearby mobile sensor. The mobile sensor, which may be carried by an operator 90, forklift 330 or mobile robot 340, obtains a home position of the tag 350 by performing a positioning process wherein the tag 350 is used as a landmark or fiducial. Alternatively, the mobile sensor is moved past the tag 350, or is approached to the tag 350, until the tag 350 confirms that the mobile sensor has reached the home position of the tag 350, e.g., by sending the mobile sensor a timestamped message. The message may optionally state the home position explicitly. A tag of the first type may be a Wi-Fi™ access point or a cellular base station enabled for distance measurements, e.g., based on a measured round-trip time or accurately timed reference signals. A tag of the second type, which confirms that the mobile sensor has reached the home position, may be embodied as an AprilTag™ which may carry a QR code. The home positions serve to initialize or calibrate mobile sensors moving in the environment 300, or they could be fed directly to a centralized map generation process.
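The snippet below is a simplified sketch of such tag-based calibration; the data layout and the choice to overwrite the running pose estimate with the surveyed home position are assumptions made here for illustration, not details prescribed by the disclosure.

```python
# Illustrative only: using a tag's known home position to calibrate a
# drifting pose estimate once the tag confirms the sensor is at that position.
from dataclasses import dataclass

@dataclass
class Pose2D:
    x: float
    y: float
    yaw: float

# Assumed to be surveyed once per tag and stored by tag identifier.
HOME_POSITIONS = {"tag_17": Pose2D(x=12.50, y=3.20, yaw=0.0)}

def calibrate_at_tag(estimated: Pose2D, tag_id: str) -> tuple:
    """Return the corrected pose and the accumulated drift when the tag
    confirms arrival at its home position."""
    home = HOME_POSITIONS[tag_id]
    drift = Pose2D(estimated.x - home.x, estimated.y - home.y,
                   estimated.yaw - home.yaw)
    # The home position is trusted, so the running estimate is replaced;
    # the measured drift could also be reported to the map generation process.
    return home, drift
```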
[0016] Figure 1 shows an operator 90 together with a garment 100 with a sensor arrangement 120. The garment 100 comprises a helmet 110 to be worn on the operator’s 90 head. In this example, the helmet 110 is associated with a portable support unit 140 to be attached to another part of the operator’s 90 clothing or placed in a pocket. The garment 100 can also be carried while the operator 90 maneuvers a vehicle, such as the forklift 330, which extends the operator’s 90 range of operation and the speed at which the environment 300 is traversed.
[0017] The sensor arrangement 120 is mounted on a front side of the helmet 110 and comprises, in this example, one color-depth sensor 121 and one SLAM sensor 122. The SLAM sensor 122 may be configured to output a pose and a position in six degrees of freedom. For this purpose, the SLAM sensor 122 may for example perform an angular measurement, a delta-parallax measurement, an odometric or dead-reckoning measurement. In an implementation of the garment 100, an Intel® RealSense™ D435 depth camera may be used as the color-depth sensor 121, and an Intel® RealSense™ T265 tracking camera may be used as the SLAM sensor 122. The color-depth sensor 121 may be characterized as an exteroceptive sensor, whereas the capabilities of the SLAM sensor 122 may at least in part be classified as proprioceptive ones. This combination advantageously joins the obtained 3D point clouds with pose measurements, which are useful for generating and updating local maps in the framework of a graph-SLAM algorithm.
[0018] Figure 2 is a block diagram of the portable support unit 140, which comprises an energy source 141, processing circuitry 142, a signal interface 143 and a wireless interface 130. The energy source 141 (e.g., a chargeable battery) powers both the processing circuitry 142 and the sensor arrangement 120 on the helmet 110. The signal interface 143 is configured to receive sensor data from the sensor arrangement 120 and forward the sensor data, after optional processing, via the wireless interface 130. In simpler embodiments, at least some of the components of the portable support unit 140 are included in the helmet 110, though this is slightly inconvenient as it adds weight. Besides, when the batteries begin to be depleted, the helmet 110 must be taken off for recharging, which is likely to interfere with productivity rather more than simply substituting a freshly charged portable support unit 140.
[0019] To economize bandwidth and transmission resources, the processing circuitry 142 may be configured to compress the sensor data before it is forwarded over the wireless interface 130. More precisely, the processing circuitry 142 may be configured for one or more of the following operations: i) downsampling of point-cloud data; ii) removal of any zero-height cloud points; iii) removal of a height coordinate of cloud points; iv) a lossy or lossless data compression algorithm.
Operation i may be adapted to decrease the spatial resolution of the three-dimensional points. It may further equilibrate the point density by removing excess points in overpopulated regions. Operation ii rests on the understanding that zero-height (i.e., floor- or ground-level) points are generally not conducive to finding obstacles relevant to the mobile robots’ 340 navigation in the environment 300. Operation iii, which may be preceded by an application of operation ii, corresponds to an assumption that all obstacles are to be avoided regardless of their height. In other words, collisions and other inconveniences may be avoided as soon as a two-dimensional map of the environment 300 is available. Operation iv may use a generic lossless data compression algorithm; if a lossy data compression algorithm is used, it is preferably adapted for spatial data, so as to minimize the destruction of useful information. Generically applicable software routines for analyzing point-cloud data and performing operations thereon are available, for example, from the Point Cloud Library (https://pointclouds.org).
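A minimal sketch of operations i-iii is given below, under the assumptions (made here, not stated in the disclosure) that the cloud is held as an (N, 3) array with the height as the third coordinate and the floor at zero height; operation iv would then be a generic compressor applied to the result.

```python
# Illustrative only: operations i-iii applied before transmission.
import numpy as np

def downsample_voxel(points: np.ndarray, voxel: float = 0.05) -> np.ndarray:
    """Operation i: keep one point per voxel, reducing spatial resolution
    and equilibrating point density in overpopulated regions."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]

def drop_floor_points(points: np.ndarray, tol: float = 0.02) -> np.ndarray:
    """Operation ii: remove (near-)zero-height points, which generally do
    not correspond to obstacles."""
    return points[np.abs(points[:, 2]) > tol]

def drop_height(points: np.ndarray) -> np.ndarray:
    """Operation iii: discard the height coordinate, treating every
    remaining point as an obstacle regardless of its height."""
    return points[:, :2]

cloud = np.random.rand(10_000, 3) * [10.0, 10.0, 2.5]
compact = drop_height(drop_floor_points(downsample_voxel(cloud)))
```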
[0020] In one embodiment, the processing circuitry 142 is configured to generate a local map only on the basis of data collected by the local sensor arrangement 120. The local map may be generated by a SLAM algorithm. The local map may relate to those areas of the environment 300 that the operator 90 wearing the garment 100 has visited. As a data set, the local map may have a higher useful information density than the raw sensor data or preprocessed sensor data. As such, this embodiment may involve a more limited usage of the outgoing wireless interface 130.
[0021] Figure 7 shows a mobile robot 340 equipped with an analogous sensor arrangement 120, including a depth sensor 121 and a SLAM sensor 122. The sensor arrangement 120 is preferably mounted on an upper part of the mobile robot 340 to ensure visibility. As is common practice in the art, a generic mobile robot 340 may comprise propulsion means for moving over a substrate and end effectors (arms) for gripping and processing workpieces 320. The mobile robot 340 is generally self-controlled (optionally subject to centralized coordination, such as mission planning or fleet management) and self-powered, to allow it to move autonomously in the environment 300. The mobile robot 340 may for example be a YuMi™ robot manufactured by the applicant. By means of the sensor arrangement 120, the mobile robot 340 shown in figure 7 acts as a further data source (mobile agent) to a map generation process. Also unmanned aerial vehicles (UAVs, or drones) equipped with exteroceptive and/or proprioceptive sensors may be used as a further data collection platform.
[0022] The wireless interface 130 of the garment 100 and an equivalent interface (not shown) of the mobile robot 340 or UAV are operable to establish links to a wireless interface 630 of a data processing device 600 of the type shown in figure 6. The data processing device 600 may be a general-purpose computer programmed to generate a map of the environment 300. It may from a functional point of view constitute a centralized entity; this however does not foreclose distributed implementations that rely, at least in part, on networked (cloud) resources. The data processing device 600 further comprises processing circuitry 610, a memory 620 and an output interface 640.
[0023] The wireless interface 630 is configured to receive point-cloud data and pose data from at least two mobile sensors 120 moving in the environment 300. The memory 620 is preferably of a non-volatile type and may be used to offload a runtime memory of the processing circuitry 610 for the purpose of storing finished or semi-finished cartographic data. The memory 620 may furthermore store a basic description of the environment 300, e.g., its outer limits according to the floor plan, while mobile objects such as furniture 310 and goods 320 are purposefully left out. The output interface 640 may be adapted to make the resulting map available to a robot controller or robot fleet manager that plans and executes movement of the mobile robots 340 in the environment 300.
[0024] The processing circuitry 610 is configured to generate a map by executing a SLAM algorithm using as inputs the received point-cloud data and pose data. The SLAM algorithm may be a collaborative algorithm capable of multisensory integration, by which data from the at least two mobile sensors 120 are combined in a common map generation process. Alternatively, the data collected by some of the mobile sensors 120 is locally processed into local maps, which the data processing device 600 receives and combines. To allow data from the various data sources to be combined, the processing circuitry 610 is configured to pre-process the data into a compatible or interoperable common datatype, regardless of the different platforms on which the data was collected. The data is fed to the SLAM algorithm in this compatible format. The pre-processing may include collapsing three-dimensional point data collected by sensors into a common two-dimensional plane.
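As an illustration of pre-processing into a common datatype, the sketch below collapses three-dimensional point data onto a common two-dimensional plane and pairs it with the reporting agent's planar pose; the message layout and field names are assumptions made here rather than a format defined by the disclosure.

```python
# Illustrative only: normalising payloads from heterogeneous platforms
# (garment, mobile robot, UAV) into one common 2D datatype before they
# are fed to the collaborative SLAM algorithm.
from dataclasses import dataclass
import numpy as np

@dataclass
class CommonScan:
    agent_id: str             # which platform produced the data
    pose_xy_yaw: np.ndarray   # (3,) planar pose of the sensor
    points_xy: np.ndarray     # (M, 2) obstacle points in the sensor frame

def to_common_scan(agent_id: str, pose_xy_yaw: np.ndarray,
                   points_xyz: np.ndarray) -> CommonScan:
    """Collapse an (N, 3) cloud onto the ground plane, deduplicate the
    projected points, and strip any platform-specific fields."""
    planar = np.unique(np.round(points_xyz[:, :2], 2), axis=0)
    return CommonScan(agent_id, pose_xy_yaw, planar)
```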
[0025] Figure 4 illustrates an aspect of a possible implementation of the SLAM algorithm to be executed by the processing circuitry 610. The figure shows a heterogeneous multi-agent Bayesian graph network, where a mobile robot 340 and a human operator 90 wearing the garment 100 begin a mapping procedure starting from a home position defined by an AprilTag™ 350. To the extent these tags 350 have a variable appearance, this may be coordinated by a network 351 that supplies control signals to the tags 350. The network 351 may maintain a communicative link to the data processing device 600.
[0026] Solving the graph-based SLAM problem may involve the construction of a graph where the nodes represent poses of the mobile robot 340 (X(t)_R, t = 0, 1, 2, ..., T) and poses of the operator 90 (X(t)_H, t = 0, 1, 2, ..., T). The nodes are interconnected by other components that represent the robot control inputs (u(t)_R, t = 0, 1, 2, ..., T) as well as the different measurements (Z(t)_R, Z(t)_H, t = 0, 1, 2, ..., T) made with respect to the tags 350 and a current appearance of the map M and taken at those particular poses. The data provided by the mobile robot 340 is depicted within an upper dashed rectangle labeled “340”, while the data originating from the sensors carried by the operator 90 is found within the lower dashed rectangle “90”. The processing circuitry 610 may have access to a dynamic model which predicts the effect a particular control input u(t) may have on the state of the mobile robot 340. It is recalled that a SLAM algorithm generates the map M gradually while at the same time using it for navigation purposes; this is why it is justified to use the map to connect various measurements while the map generation is still in progress. An edge between two nodes represents a spatial constraint relating two robot poses X(t1)_R, X(t2)_R, two operator poses X(t1)_H, X(t2)_H, or one robot pose and one operator pose X(t1)_R, X(t2)_H. A constraint consists in a probability distribution over the relative transformations between the two poses. These transformations are either odometry measurements (e.g., wheel odometry, visual odometry) between consecutive positions, or are determined by aligning the observations acquired at two positions of the operator 90 or mobile robot 340. Having initiated the graph in figure 4 by two mobile agents from two different home positions, each agent will produce several local maps, and these local maps will be merged (and optionally optimized) when loop closures are detected between the nodes generated by each individual agent.
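A highly simplified sketch of such a pose graph is given below; the node and edge representations, the information weights and the helper names are illustrative assumptions, and a complete implementation would hand a graph of this kind to a dedicated graph-SLAM back-end (e.g., a nonlinear least-squares optimizer) to merge the agents' local maps once loop closures are found.

```python
# Illustrative only: a pose graph with robot (R) and operator (H) nodes,
# odometry edges within an agent's trajectory, and loop-closure edges
# between agents obtained by aligning their observations.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Edge:
    src: str                   # node id, e.g. "R@1"
    dst: str                   # node id, e.g. "H@0"
    relative_pose: np.ndarray  # (3,) mean of the relative transform (x, y, yaw)
    information: np.ndarray    # (3, 3) inverse covariance of the constraint

@dataclass
class PoseGraph:
    nodes: dict = field(default_factory=dict)   # node id -> (3,) pose estimate
    edges: list = field(default_factory=list)

    def add_odometry(self, src: str, dst: str, delta: np.ndarray) -> None:
        # Constraint between consecutive poses of one agent (wheel or visual odometry).
        self.edges.append(Edge(src, dst, delta, np.eye(3) * 100.0))

    def add_loop_closure(self, src: str, dst: str, delta: np.ndarray) -> None:
        # Constraint between nodes of different agents, found by aligning
        # their observations; this is what lets the local maps be merged.
        self.edges.append(Edge(src, dst, delta, np.eye(3) * 10.0))

graph = PoseGraph()
graph.nodes["R@0"] = np.zeros(3)                # robot starts at its tag's home position
graph.nodes["H@0"] = np.array([5.0, 0.0, 0.0])  # operator starts at another tag
graph.nodes["R@1"] = np.array([1.0, 0.0, 0.0])  # predicted from odometry
graph.add_odometry("R@0", "R@1", np.array([1.0, 0.0, 0.0]))
graph.add_loop_closure("R@1", "H@0", np.array([3.9, 0.1, 0.0]))
```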
[0027] Figure 5 summarizes as a method 500 a basic mode of operation of the data processing device 600. In a first step 510, the device 600 receives point-cloud data and pose data from at least two mobile sensors 120 moving in the environment 300. It is understood that at least one of the mobile sensors 120 is configured to derive a home position from a tag 350 and to calibrate its data accordingly. Within the first step 510, the data from the mobile sensor 120 may be received 511 in a processed form, e.g., as a local map M that a processor at the mobile sensor 120 has generated. In a second step 520 of the method, a map is generated by executing a collaborative SLAM algorithm using as inputs the received point-cloud data and pose data.
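The two steps of the method 500 can be summarized by the following sketch, in which receive_scan and run_collaborative_slam are placeholder callables standing in for the receiving step 510/511 and the map-generation step 520, not interfaces defined by the disclosure.

```python
# Illustrative only: orchestration of method 500.
def generate_map(receive_scan, run_collaborative_slam, min_agents: int = 2):
    """Step 510: collect point-cloud and pose data (possibly pre-processed
    into local maps, step 511) until at least two mobile sensors have
    reported; step 520: fuse the data with the collaborative SLAM algorithm."""
    scans = []
    while len({scan.agent_id for scan in scans}) < min_agents:
        scans.append(receive_scan())
    return run_collaborative_slam(scans)
```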
[0028] The aspects of the present disclosure have mainly been described above with reference to a few embodiments. However, as is readily appreciated by a person skilled in the art, other embodiments than the ones disclosed above are equally possible within the scope of the invention, as defined by the appended patent claims.

Claims

1. A method (500) for generating a map of an environment (300), in which one or more home positions are designated with tags (350), the method comprising: receiving (510) point-cloud data and pose data from at least two mobile sensors (120) moving in the environment; and generating (520) a map by executing a collaborative simultaneous localization and mapping, SLAM, algorithm using as inputs the received point-cloud data and pose data, wherein at least one of the mobile sensors is configured to derive a home position from a tag and to calibrate its data accordingly.
2. The method (500) of claim 1, wherein at least one of the mobile sensors (120) is carried by a human operator (90).
3. The method (500) of claim 1 or 2, wherein at least one of the mobile sensors (120) is robot-mounted.
4. The method (500) of any of the preceding claims, wherein at least one of the mobile sensors (120) collects the data using a color-depth sensor (121) and/or a SLAM sensor (122).
5. The method (500) of claim 4, wherein at least one of the mobile sensors (120) collects the data using a color-depth sensor (121) combined with a SLAM sensor (122).
6. The method (500) of any of the preceding claims, wherein each of the mobile sensors (120) is configured for proprioceptive measurements.
7. The method (500) of any of the preceding claims, wherein the collaborative SLAM algorithm includes graph-based SLAM.
8. The method (500) of any of the preceding claims, wherein said receiving point-cloud data and pose data includes receiving (511) a local map (M) associated with one of the mobile sensors (120).
9. The method (500) of any of the preceding claims, wherein said at least one of the mobile sensors is configured to derive the home position from an optical or radio-frequency tag.
10. A device (600) comprising: a wireless interface (630) configured to receive point-cloud data and pose data from at least two mobile sensors (120) moving in an environment (300), in which one or more home positions are designated with optical or radio-frequency tags (350); and processing circuitry (610) configured to generate a map by executing a collaborative simultaneous localization and mapping, SLAM, algorithm using as inputs the received point-cloud data and pose data, wherein at least one of the mobile sensors is configured to derive a home position from a tag and to calibrate its data accordingly.
11. A garment (100) to be worn by a human operator (90), comprising: a color-depth sensor (121); a simultaneous localization and mapping, SLAM, sensor (122); and a wireless interface (130) for reporting data from the sensors, wherein the color-depth sensor and/or the SLAM sensor is configured to derive a home position from a tag (350) and calibrate its data accordingly.
12. The garment (100) of claim 11, wherein the wireless interface (130) is arranged in a portable support unit (140) associated with the garment.
13. The garment (100) of claim 12, wherein the portable support unit further comprises an energy source (141) for powering at least the sensors and the wireless interface.
14. The garment (100) of any of claims 11 to 13, further comprising processing circuitry (142) configured to compress the data from the sensors, including at least one of the following operations: i) downsample point-cloud data; ii) remove any zero-height cloud points; iii) remove a height coordinate of cloud points; iv) execute a data compression algorithm.
15. The garment (100) of any of claims 11 to 14, further comprising processing circuitry (142) configured to generate a local map (M) on the basis of the data from the sensors.
16. The garment (100) of any of claims 11 to 15, which is a hat or helmet.
PCT/EP2021/052937 2021-02-08 2021-02-08 Human-robot collaborative navigation WO2022167097A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2021/052937 WO2022167097A1 (en) 2021-02-08 2021-02-08 Human-robot collaborative navigation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2021/052937 WO2022167097A1 (en) 2021-02-08 2021-02-08 Human-robot collaborative navigation

Publications (1)

Publication Number Publication Date
WO2022167097A1 true WO2022167097A1 (en) 2022-08-11

Family

ID=74572776

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2021/052937 WO2022167097A1 (en) 2021-02-08 2021-02-08 Human-robot collaborative navigation

Country Status (1)

Country Link
WO (1) WO2022167097A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180306587A1 (en) * 2017-04-21 2018-10-25 X Development Llc Methods and Systems for Map Generation and Alignment
WO2019000417A1 (en) * 2017-06-30 2019-01-03 SZ DJI Technology Co., Ltd. Map generation systems and methods
US20190347783A1 (en) * 2018-05-14 2019-11-14 Sri International Computer aided inspection system and methods

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180306587A1 (en) * 2017-04-21 2018-10-25 X Development Llc Methods and Systems for Map Generation and Alignment
WO2019000417A1 (en) * 2017-06-30 2019-01-03 SZ DJI Technology Co., Ltd. Map generation systems and methods
US20190347783A1 (en) * 2018-05-14 2019-11-14 Sri International Computer aided inspection system and methods

Similar Documents

Publication Publication Date Title
AU2019404207B2 (en) Collaborative autonomous ground vehicle
EP3423913B1 (en) Sensor trajectory planning for a vehicle-mounted sensor
CA3076533C (en) Multi-resolution scan matching with exclusion zones
Schneier et al. Literature review of mobile robots for manufacturing
US10278333B2 (en) Pruning robot system
US10209063B2 (en) Using sensor-based observations of agents in an environment to estimate the pose of an object in the environment and to estimate an uncertainty measure for the pose
WO2020051923A1 (en) Systems And Methods For VSLAM Scale Estimation Using Optical Flow Sensor On A Robotic Device
US11372423B2 (en) Robot localization with co-located markers
KR20180109118A (en) A method for identifying the exact position of robot by combining QR Code Tag, beacon terminal, encoder and inertial sensor
US10852740B2 (en) Determining the orientation of flat reflectors during robot mapping
Park et al. Indoor localization for autonomous mobile robot based on passive RFID
KR20210033808A (en) Method of applying heterogeneous position information acquisition mechanism in outdoor region and robot and cloud server implementing thereof
Blomqvist et al. Go fetch: Mobile manipulation in unstructured environments
CN114167866B (en) Intelligent logistics robot and control method
WO2022167097A1 (en) Human-robot collaborative navigation
CN114995459A (en) Robot control method, device, equipment and storage medium
WO2021049227A1 (en) Information processing system, information processing device, and information processing program
US20240036586A1 (en) Method for adding one or more anchor points to a map of an environment
Balasooriya et al. Development of the smart localization techniques for low-power autonomous rover for predetermined environments
KR102445846B1 (en) Obstacle tracking system and obstacle tracing method
US20240182282A1 (en) Hybrid autonomous system and human integration system and method
Wang et al. Master-Followed multiple robots cooperation SLAM adapted to search and rescue scenarios
Øvsthus et al. Mobile Robotic Manipulator Based Autonomous Warehouse Operations
RAVANKAR et al. Distributed Docking Station System for Mobile Robots
Pol Navigation Systems of Indoor Automated Guided Vehicle

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21704245

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21704245

Country of ref document: EP

Kind code of ref document: A1