WO2022128896A1 - Method and apparatus for multiple robotic devices in an environment - Google Patents


Info

Publication number
WO2022128896A1
Authority
WO
WIPO (PCT)
Prior art keywords
environment mapping
mapping data
environment
robot
aggregated
Application number
PCT/EP2021/085454
Other languages
French (fr)
Inventor
Tao Luo
Vincent Alleaume
Pierrick Jouet
Philippe Robert
Original Assignee
Interdigital Ce Patent Holdings, Sas
Application filed by Interdigital Ce Patent Holdings, Sas
Publication of WO2022128896A1

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0268 Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D1/0274 Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G01C21/3807 Creation or updating of map data characterised by the type of data
    • G01C21/383 Indoor data
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G01C21/3833 Creation or updating of map data characterised by the source of data
    • G01C21/3841 Data obtained from two or more sources, e.g. probe vehicles
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0251 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0287 Control of position or course in two dimensions specially adapted to land vehicles involving a plurality of land vehicles, e.g. fleet or convoy travelling
    • G05D1/0291 Fleet control
    • G05D1/0297 Fleet control by controlling means in a control room

Definitions

  • the present disclosure generally relates to the field of robotics, and in particular the present disclosure is related to data retrieval and extraction of environment mapping data obtained from heterogeneous robotic devices.
  • In robotics, observation of a robot’s environment is important for the efficient functioning of the robot. Therefore, the robot is equipped with one or more sensors that enable it to capture its environment.
  • a robotic vacuum cleaner or a dedicated robotic measuring device may use a laser distance measuring sensor to create a two-dimensional (2D) map (floorplan) of its environment, including walls and other obstacles, and use the 2D map in order to improve its cleaning efficiency.
  • a service robot such as a home-care robot may need visual sensors to elaborate a 3D environment mapping of its environment for improved understanding of the nature of the obstacles detected.
  • a 3D observation in a home, office or other environment may enable determining that an obstacle is an object that may be easily circumvented (e.g., furniture, a person, a book or a shoe), much like a human would do.
  • Same purpose or different purpose robots that evolve in a same complex environment such as a home, office or any other kind of environment, may take advantage of specific knowledge of the environment that each robot has, for improved efficiency of all.
  • Figure 1a is a 2D occupancy grid map of an environment, determined by a robot capable of exploring and moving in a planar space.
  • Figure 1b is a 3D point cloud 11 (or mesh) of the same example environment, determined by means of a robot capable of determining its environment in a 3D space.
  • Figure 2 is a block diagram of multiple robots communicating with a centralized device (or entity) according to an embodiment.
  • Figure 3 is a flow chart of a method 300 of gathering data from multiple (possibly heterogeneous) robots according to an embodiment, for the purpose of, for example, providing individual robot environment mapping data from environment mapping data stored in an aggregated (universal) environment mapping.
  • Figure 4 is a flowchart for a method of collecting data from multiple, possibly heterogeneous, robots, according to an embodiment, for example, for the purpose of robot localization, that may be used to perform complex tasks involving multiple robots, or for the purpose of exploring a complex environment using multiple robots.
  • Figure 5 is an example of a complex environment with multiple areas, and multiple heterogeneous robots.
  • Figures 7a-c illustrate extraction of environment mapping data for a particular robot type from the aggregated environment mapping.
  • Figures 8a-c illustrate how a similar extraction method may be used for a different type of robot.
  • Figure 9 is a graph representation of the complex environment of figures 5 and 6.
  • Figure 10 illustrates a coarse-level statistic data model of past robot locations in a complex environment.
  • Figure 11 illustrates a finer level statistic data model of past robot locations in a given area of the complex environment.
  • Figure 12 illustrates robots performing a collaborative task in a complex environment.
  • Figure 13 is an exemplary embodiment of a robotic device.
  • Figure 14 is an exemplary embodiment of a centralized entity (device).
  • Figure 15 is a flow chart of an embodiment of a method for coordinating multiple robotic devices in an environment.
  • Figure 16 is a flow chart of a further embodiment of a method for multiple robotic devices in an environment.
  • The efficiency of a set of heterogeneous types of robotic devices (‘robots’) may be improved when the environment observation data of the different types of robots can be used for their mutual benefit.
  • The embodiments described herein give non-limiting implementation examples of solutions to these and other problems.
  • a robot device may be equipped with one or more sensor(s) for measuring the robot’s environment.
  • A sensor is, for example, a camera, a depth sensor, a temperature sensor, a humidity sensor, or a radiation sensor.
  • Since the environment observation depends on the sensor location and orientation, it may be suitable to move the robot device, and thus the sensor(s), around the environment to get an exhaustive understanding of the environment.
  • a robot may observe the same object from multiple points of view (top, side) in order to obtain a full 3D representation of it.
  • the robot device may be a robotic vacuum cleaner with only a limited set of sensors that are sufficient for performing simple tasks.
  • While the robotic vacuum cleaner device may be able to overcome low obstacles such as a door sill, it is essentially a surface device that explores only a 2D plane of its environment. But, while it evolves in a 2D plane only, it may benefit from a 3D observation in order to obtain knowledge of the environment and the nature of the obstacles in the environment.
  • A service robot, such as a home-care robot, would require more sophisticated (and more expensive) sensors that enable a 3D environment mapping of its environment. The performance of the robotic vacuum cleaner may be improved if it can benefit from the 3D environment mapping of the service robot, and vice versa.
  • For example, the vacuum cleaner robot may gain from knowing that an obstacle is a temporary obstacle, such as a person or moveable furniture, and take appropriate actions, such as suspending the vacuum cleaning in the area while the person is present in the environment, or waiting for the furniture to be moved before cleaning the area.
  • Likewise, the performance of the service robot may be improved if it can benefit from a 2D map of a room the service robot has not yet explored.
  • Multiple robots that evolve in a same environment may benefit from knowing each other’s location in that environment when performing a more complex task, such as vacuum cleaning followed by mopping, or a home-care robot alerting a person not to enter a room where a robot is mopping, because the floor may be slippery.
  • the localization problem is to estimate the robot’s position and orientation within an available map of its environment.
  • Each individual robot may have its own (type of, kind of) map, i.e., its own representation of its environment, established by its own sensor(s).
  • Robots may typically need to (re-)calibrate their position in their representation of the environment by (returning to) starting from a fixed position, e.g., the location of their charging base (docking station).
  • a robot may start from an arbitrary position in the environment, if it did not move to that position by itself, and/or if it has not established a map of the environment previously.
  • the embodiments described herein try to solve, among others, some of the previously described problems.
  • Figure 1a is a 2D occupancy grid map 10 of an environment, determined by a robot (e.g., a first robot) capable of exploring and moving in a planar space, e.g., on a floor level.
  • the robot is for example equipped with a radar or lidar for range/distance sensing through a laser or ultrasound device.
  • the robot may determine, for example, a 2D occupancy grid map of its environment as depicted.
  • the 2D occupancy grid map gives information about the environment in the plane where the robot observes.
  • The 2D occupancy grid map enables the robot to detect, for example, the presence of walls / furniture 100 and doors / openings 101.
  • The 2D occupancy grid map thus obtained may be sufficient for the robot itself, and enable it to perform simple tasks, such as mopping or vacuum cleaning.
  • Figure 1b is a 3D point cloud 11 (or mesh) of the same example environment, determined by a robot (e.g., a second robot) capable of determining its environment in a 3D space.
  • The robot may, for example, be equipped with range/distance sensor(s) like the above-mentioned first robot, that may be moved, tilted or rotated as required. Even if multiple robots use the same types of sensor(s), the individual robots may generate different kinds (types, formats) of environment mapping data.
  • the 3D point cloud may be obtained by aggregation of many point cloud slices, each slice being obtained at a different vertical or horizontal position or angle of the sensor(s).
  • An observation based on (an) optical camera(s) may give even further, and/or different, information about the environment.
  • For example, information about the nature of obstacles, such as the presence of human beings, pets, or moveable furniture, is difficult to obtain from the 2D occupancy grid map or the 3D point cloud, but can be obtained from the optical camera sensor by image processing, possibly coupled to Artificial Intelligence (AI), so that the objects can be classified (e.g., as furniture, a person, a pet, a wall, a window, a door, etc.).
  • a problem of multi-robot localization exists in the case that there are multiple (heterogeneous) robots that evolve in a same environment.
  • a robot may be localized quickly and navigate in a region that is not mapped (explored) by itself using environment mapping data obtained from other robots.
  • mutual localization between robots may be obtained using their individual localization (position) in an aggregated environment mapping, that is the fusion (aggregation) of different (kinds of) sensory data according to each robot’s features and characteristics.
  • an aggregated environment mapping is built for a group of (heterogeneous) robots in an environment.
  • a centralized device may collect individual environment mapping data from each robot, e.g., (a) first environment mapping data (set) from a first device, and (a) second environment mapping data (set) from a second device.
  • The first and the second environment mapping data sets may each relate to a different part (e.g., an area, a region, a space, a room) of the same environment, or to the same part of the environment. The different areas of the environment may possibly be mapped (explored) by multiple (heterogeneous) devices.
  • The environment mapping data collected from the multiple devices is then used to build (construct, establish) an aggregated environment mapping (data, data set) of the environment.
  • the aggregated environment mapping data (set) may be used for different purposes according to embodiments.
  • the aggregated environment mapping data may be used to localize each robot in the environment, and to coordinate tasks (e.g., in the centralized device) to be executed by the different robots.
  • the aggregated environment mapping may be used to transmit the location of one robot to another robot, which may among others be used for mutual localization of robots.
  • the aggregated environment mapping may be used to extract environment mapping data for an individual robot according to the characteristics of the robot in order to complete the environment mapping data of the robot, for example, to complete partial environment mapping data of a robot with environment mapping data of parts of the environment that it did not explore itself.
  • Statistical analysis of a robot’s previous locations in the aggregated environment mapping, as well as the robot’s characteristics, may be employed to provide an initial hypothesis of the robot’s location.
  • the aggregated environment mapping is divided into subsets corresponding to individual parts of the environment, and a subset corresponding to the part of the environment where the robot is localized may be transmitted to a robot including its own localization in the subset.
  • new sensory data may be captured by that robot to update the unified data representation.
  • multiple robots may be associated to accomplish a collaborative task.
  • robot locations may be visualized using augmented or virtual reality.
  • mutual localization between associated robots e.g., for executing a task or tasks needing the association/coordination of multiple robots is obtained by their individual localization in the aggregated environment mapping.
  • the centralized device may plan/schedule such complex tasks and transmit instructions to each individual robot for executing its individual subtask according to the individual robot’s characteristics, and may possibly transmit individual environment mapping data to each or some of the individual robots, the individual environment mapping data being extracted from the aggregated environment mapping and adapted/configured to the individual robot’s characteristics.
  • the centralized device may be a server.
  • the centralized device is one of the robots, for example a robot having a supervisory role, such as a household robot ‘employing’ several other robots such as vacuum cleaner robots or kitchen robots.
  • the centralized device may be operated by a user via a user interface on the centralized device or on a distant device.
  • FIG. 2 is a block diagram of multiple robots communicating with a centralized device according to an embodiment.
  • Each robot 21, 22 has a wireless and/or wired interface for communication with a centralized entity 23. Examples of wireless interfaces are WiFi, Zigbee, or Bluetooth. Examples of wired interfaces are Ethernet or USB.
  • Each robot has a Central Processing Unit (CPU) 21a, 22a, a memory 21b, 22b and sensor(s) 21c, 22c.
  • the robots may communicate with centralized entity (device) 23.
  • Centralized entity 23 may be a dedicated server, or a robot, in which case it may also be equipped with at least one sensor (not shown).
  • Centralized entity 23 includes a CPU 23a, a memory 23b and optionally a user interface 23c.
  • Robot 21 is located in area 26 of a building.
  • Centralized entity 23 is located in area 27.
  • Robot 22 is located in area 28.
  • The robots may be of the same type, or of different types.
  • the robots may, according to their characteristics, have different features, including different types and number of environment sensors, and may, according to their features, store different types of environment mapping data.
  • Figure 3 is a flow chart of a method 300 of gathering data from multiple (possibly heterogeneous) robots according to an embodiment, for the purpose of, for example, providing individual robot environment mapping data from environment mapping data stored in an aggregated environment mapping, or for robot localization, and possibly for enabling the execution of complex tasks wherein multiple robots are associated to perform the complex tasks.
  • the method is for example implemented in centralized entity 23.
  • individual environment mapping data from multiple robots is collected (e.g., from robots 21 and 22) and is stored (e.g., in memory 23b of the centralized entity, or in the cloud).
  • An aggregated environment mapping is built (constructed) from the individual environment mapping data received from the multiple robots.
  • The individual environment mapping data may, for example, include a 3D point cloud received from robot 21, and a 2D grid map received from robot 22.
  • the aggregated environment mapping may be, for example, a 3D point cloud, or a 2D grid map, or may take a different form that is not, or that is partly not suited for visual representation, such as a matrix, vector, graph, or database.
  • The aggregated environment mapping may include other types of data retrieved from the robots, such as robot characteristics, robot capabilities, identification or description of the sensors used by each robot to observe the environment, and other types of data that will be discussed further on in the present document, such as robot localization, and statistical data.
  • An aggregated environment mapping may be constructed in step 302 that includes a full-sliced 3D point cloud for room 26 (e.g., received from robot 21) and a single 3D point cloud slice for room 28 (e.g., received from robot 22).
  • In step 303, which is executed for example on request of a robot, individual environment mapping data/localization data is extracted from the aggregated environment mapping, and is transmitted to the requesting robot in step 304.
  • robot 22 may request a 2D occupancy grid map of room 26.
  • the centralized entity 23 may then extract, in step 303, a 2D occupancy grid map from the 3D point cloud it has for room 26 in its aggregated environment mapping, and transmit the 2D occupancy grid map to robot 22.
  • the 2D occupancy grid map may for example be obtained from a 3D point cloud slice that is on the floor level, as robot 22 is a vacuum cleaner.
  • The extraction is according to the request of the robot (here, the robot requests environment mapping data for room 26 only) and the characteristics of the robot (here, the environment mapping data for robot 22 should be in the form of a 2D occupancy grid map because that is the type of map used by that particular robot).
  • a robot may request to localize another robot.
  • robot 21 may request to localize robot 22.
  • The centralized entity will extract, from the aggregated environment mapping, the location of robot 22 and transmit the location to robot 21.
  • the location will be according to the location format used by the robot according to its characteristics, e.g., a location in a 2D occupancy grid map, or a location in a 3D scene model.
  • the location may be specific to the part of the area of which the robot has an environment mapping. For example, a location may be relative to a room if the robot has environment mapping data for that room only, or to a building if the robot has environment mapping data for the whole building.
  • the centralized entity 23 or any other distant device may use the aggregated environment mapping for planning of relatively complex tasks that imply the use of multiple, possibly different, robots.
  • A system consists of a centralized entity, e.g., a server device, the server device possibly having a user interface (such as a tablet or smartphone), and multiple robots as in figure 2.
  • The server device may communicate with the user interface device or with the robots in a wireless or wired way, and handles received data and requests.
  • Each robot may send its current environment mapping data, its robot characteristics, sensor types and descriptions, local trajectories, etc., to the server device.
  • The server device may send commands (instructions) and necessary navigation data for executing a task to each robot.
  • The server device can also provide robot data to assist user interactions on a UI (user interface).
  • User commands are transmitted from the UI device to the server device for the task association of multiple robots or for interaction with each robot, or the server device may itself plan the execution of complex tasks requiring the association of multiple robots.
  • the aggregated environment mapping and history of robot locations may be stored in the server device.
  • The commands/instructions and necessary data (e.g., environment mapping data, statistics of locations) are transmitted from the server device to the robots.
  • Localization may be implemented in the server device or in an individual robot.
  • Figure 4 is a flowchart for a method of collecting data from multiple, possibly heterogeneous, robots, according to an embodiment, for example, for the purpose of robot localization, that may be used to perform complex tasks involving multiple robots, or for the purpose of exploring a complex environment using multiple robots.
  • Data is received from multiple robots, in a step 401, by a centralized entity (e.g., implemented in a server device or in one of the robots), e.g., sensory data (e.g., 2D map, 3D data) and robot characteristics (e.g., sensor type (e.g., camera, laser), sensor parameters, robot type (e.g., vacuum cleaner, mopping robot, kitchen robot, household robot), robot capabilities (clean, mop, explore, fly, hover, roll), robot configuration, robot dimensions, and possibly individual robot position if known in an area where the robot evolves, trajectories explored by the robot in the area, possibly associated with timestamps).
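  • As a purely illustrative sketch of such a per-robot report (the field names below are assumptions chosen for readability, not terms taken from this disclosure), the data a robot might transmit to the centralized entity in step 401 could be grouped as follows:

```python
from dataclasses import dataclass, field
from typing import Any, Optional

@dataclass
class RobotReport:
    """Illustrative bundle of sensory data and characteristics a robot might send in step 401."""
    robot_id: str
    robot_type: str                                  # e.g. "vacuum_cleaner", "mopping", "kitchen", "household"
    sensor_types: list[str]                          # e.g. ["2d_lidar"] or ["rgb_camera", "depth_camera"]
    sensor_parameters: dict[str, Any]                # e.g. {"lidar_height_m": 0.08, "map_resolution_m": 0.05}
    capabilities: list[str]                          # e.g. ["clean", "explore"], ["mop"], ["fly", "hover"]
    dimensions_m: tuple[float, float, float]         # robot bounding box
    mapping_data: Any                                # 2D occupancy grid, 3D point cloud, ...
    position: Optional[tuple[float, float, float]] = None       # if known in the robot's area
    trajectory: list[tuple[float, tuple[float, float, float]]] = field(default_factory=list)  # (timestamp, xyz)
```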
  • An aggregated environment mapping is built from the data received from the multiple robots.
  • individual robot environment mapping data received from the robots may be expanded (completed) in step 402b with individual environment mapping data received from other robots related to regions (areas) not explored or only partially explored by the individual robots.
  • Data may be retrieved, such as statistics of previous robot locations coupled to robot characteristics, for the purpose of robot localization. While individual localization in the area where the robot evolves may be realized in the centralized entity or on a robot, mutual localization between different robots may be obtained by their locations in the aggregated environment mapping, step 404.
  • the aggregated environment mapping may be updated by freshly captured data from the individual robots as they explore an environment (area), step 405. This updating may be a continuous process.
  • FIG. 5 is an example of a complex environment with multiple areas, and multiple heterogeneous robots.
  • Complex environment 50 includes area 1 (501), area 2 (502), area 3 (503), area 4 (504) and area 5 (505).
  • Complex environment 50 includes robot A (5010), located in area 3, robot B (5011) located in area 2, robot C (5012) located in area 5, and robot D (5013) located in area 1.
  • Robot A may be a vacuum cleaner robot.
  • Robot B may be a household (or service) robot.
  • Robot C may be a ceiling-mounted kitchen robot that can do several kinds of tasks such as loading and emptying a dishwasher, cooking, picking cups and plates.
  • Robot D may be a mop robot. In this example, each of the robots A, B, C and D are different and have different functions.
  • Robots A, B, C and D have different dimensions and different functions. While robots A, B and D may freely move to any area, their dimensions permitting, robot C has a fixed location in area 5.
  • An aggregated environment mapping of the robots’ environment mapping data may be built in a centralized entity.
  • the aggregated environment mapping can be an enriched point cloud, where a point or a set of points not only indicates its 3D position but also has some attributes related to the robot from which the data is obtained, and its type of sensors.
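  • A minimal sketch of such an enriched point cloud, using a NumPy structured array so that each 3D point carries the attributes mentioned above (the exact attribute set and field names are assumptions for illustration):

```python
import numpy as np

# One record per point: 3D position plus attributes describing who observed it and how.
enriched_point_dtype = np.dtype([
    ("xyz", np.float32, (3,)),        # 3D position
    ("rgb", np.uint8, (3,)),          # optional color
    ("robot_id", "U16"),              # robot that contributed the observation
    ("sensor_type", "U16"),           # e.g. "2d_lidar", "depth_camera"
    ("area_id", np.int32),            # area / room the point belongs to
    ("timestamp", np.float64),        # time of observation
])

def make_enriched_points(xyz, robot_id, sensor_type, area_id, timestamp, rgb=None):
    """Wrap raw 3D points into the enriched representation used by the aggregated mapping."""
    pts = np.zeros(len(xyz), dtype=enriched_point_dtype)
    pts["xyz"] = xyz
    pts["rgb"] = rgb if rgb is not None else 0
    pts["robot_id"] = robot_id
    pts["sensor_type"] = sensor_type
    pts["area_id"] = area_id
    pts["timestamp"] = timestamp
    return pts
```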
  • robots can be instructed (configured) to map a complete complex environment in a collaborative way, or the aggregated environment mapping is built by incrementally integrating environment mapping data from robots while they evolve in (the different areas of) the complex environment, or both. The different embodiments are discussed in the next sections of the present document.
  • multiple robots are activated initially in the same area of a complex environment, or in different areas.
  • the centralized entity may instruct the robots to explore the environment autonomously.
  • An algorithm for multi-robot collaborative dense scene reconstruction may be used. Given a partially scanned scene, the scanning paths (trajectories) of the multiple robots are planned by the centralized entity so that the robots efficiently coordinate with each other, maximizing the scanning coverage and reconstruction quality while minimizing the overall scanning effort.
  • a collaboration strategy for multiple robots can be employed for environment mapping of a complex environment.
  • fusion (aggregation) of (different) sensory data from (different types of) robots is described here.
  • multiple robots start in a same area of the complex environment.
  • multiple robots may start in different areas.
  • Passages (passageways, halls, door openings) detected in the robots’ environment mapping data are employed to plan (schedule) individual paths for each robot to explore a yet unexplored area, or area(s) for which it is desired to update exploration data.
  • The complex environment is mapped by the robots in a collaborative way according to the plan. For instance, in Figure 6, three heterogeneous robots A, B and D (5010, 5011 and 5013 respectively) start from area 3, and are each instructed by the centralized entity to explore a different area of the complex environment.
  • Area 3 is the starting area for all robots. They may first build an initial aggregated environment mapping of area 3 using their individual environment mapping. Then, depending on the analysis (discovery) of passages in area 3, the path planning for each robot to explore a different area can be instructed by the centralized entity; as mentioned, any exits, passages, passageways, or doors are detected in the robots’ own environment mapping data of area 3.
  • The centralized entity constructs a path plan for each robot to enter an area to explore and to map. If a criterion of nearest path is taken, for example, robot A (5010) is planned to enter area 1 (501) and map it, robot B (5011) is planned to enter area 2 (502) and map it, and robot D (5013) is planned to enter area 4 (504) and map it.
  • the robots may be instructed by the centralized entity to map any remaining area(s) according to their order of finishing of their environment mapping task. For example, robot D may enter area 5 when it has finished environment mapping area 4 and when it is the first robot to finish its environment mapping task.
  • An aggregated environment mapping of the environment may be built incrementally using the sensory data of different robots collected by the centralized entity at run time of the robots.
  • the centralized entity does not need to instruct the robots to explore an area of the complex environment nor to establish a path planning.
  • the aggregated environment mapping is built ‘on the fly’, based on environment mapping data obtained from the robots as they execute their tasks. While each robot evolves in the environment, new environment mapping data may be merged (fused, aggregated, integrated) into a(n) (existing) aggregated environment mapping.
  • the centralized entity may instruct robots to visit a ‘reference point’ or ‘meeting point’ that the centralized entity has fixed in the aggregated environment mapping.
  • This reference point may be a location in the environment that is the same for all robots, so that the centralized entity may acquire data from each robot in the same area, where there exist enough distinguishable features for the calibration of environment mapping data from each robot against the aggregated environment mapping.
  • 510 is a reference point.
  • the present location of one of the robots may be designated as a reference point and the other robots (robot B, robot D) are instructed by the centralized entity to visit this location as a calibration step to calibrate their position in their own environment mapping data before merging their environment mapping data into the aggregated environment mapping.
  • a reference point may also be a location different for each robot and may serve as a starting point for the robot’s own position calibration in the aggregated environment mapping.
  • the centralized entity may instruct robot A to go to the reference point in area 1 before going to the location in area 2, so that robot A’s position is calibrated in the environment mapping data that was extracted for it from the aggregated environment mapping.
  • The reference point in area 1 for robot A can be determined as the most frequent location of robot A in area 1 according to its statistics of past locations.
  • Each robot may define a ‘meeting point’ in one or more area(s) for which it has provided data used in the aggregated environment mapping.
  • A meeting point may be an arbitrary location (the farthest left location in the mapped area, for instance), or a location having some particular features (a specific area corner, a specific area door).
  • Meeting points may be used by the server to evaluate and check regularly the aggregated environment mapping performance: any robot having access to an area should be able to reach a related “meeting point” accurately, even if that meeting point was defined by another robot using different sensors.
  • The server may also send two or more robots (having access to the meeting point area) close to that meeting point (making the robots face each other, for instance) with the objective of checking the aggregated environment mapping accuracy, based on robot sensor information once they have reached what they consider to be the meeting point, using the path generated from the aggregated environment mapping data.
  • A similar interactive check (via a user and a UI display (e.g., an AR display), for instance) may be done, this time letting the user visually evaluate whether the robot(s) have reached the indicated meeting point with acceptable accuracy.
  • the aggregated environment mapping is in the form of an enriched point cloud, where each point not only indicates its 3D position but also has some attributes, such as its color, the type of robot that did the observation, robot/sensor parameters (features, characteristics), the corresponding image features of the corresponding keyframe if the observation was done with a camera, the corresponding pixels in a 2D grid map.
  • The environment mapping data of different robots may be registered (recorded, memorized, stored) together in the aggregated environment mapping using methods for 3D point cloud registration. Then, while the first step of building an aggregated environment mapping is collection of the individual data from the robots, a second step for building the aggregated environment mapping is conversion of the collected individual data into a 3D point cloud.
  • a 2D occupancy grid map captured by 2D Lidar can be converted into a 3D slice point cloud parallel to the ground using the height of the Lidar, the map resolution, and the location from where the 2D occupancy grid map was made.
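  • A minimal sketch of that grid-to-slice conversion, assuming the common occupancy convention (100 = occupied, 0 = free, -1 = unknown) and a known map origin; the parameter names are illustrative:

```python
import numpy as np

def occupancy_grid_to_slice_cloud(grid, resolution, origin_xy, lidar_height):
    """Convert occupied cells of a 2D occupancy grid into a 3D point-cloud slice
    lying in the horizontal plane at the Lidar height.
    grid: 2D int array, 100 = occupied, 0 = free, -1 = unknown
    resolution: cell size in meters; origin_xy: world coordinates of cell (0, 0)."""
    rows, cols = np.nonzero(grid == 100)              # occupied cells only
    xs = origin_xy[0] + (cols + 0.5) * resolution     # cell centers in world coordinates
    ys = origin_xy[1] + (rows + 0.5) * resolution
    zs = np.full_like(xs, lidar_height, dtype=float)  # slice plane parallel to the ground
    return np.column_stack([xs, ys, zs])
```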
  • Depth images captured by a depth camera may be converted into a point cloud using the intrinsics of the depth camera and the camera poses.
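  • The depth-image case is the usual pinhole back-projection; a sketch, assuming the intrinsics (fx, fy, cx, cy) and a 4x4 camera-to-world pose are known:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy, camera_to_world):
    """Back-project a depth image (meters, 0 = invalid) into a world-frame point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    z = depth[valid]
    x = (u[valid] - cx) * z / fx                      # pinhole model
    y = (v[valid] - cy) * z / fy
    pts_cam = np.column_stack([x, y, z, np.ones_like(z)])
    pts_world = (camera_to_world @ pts_cam.T).T       # apply camera pose
    return pts_world[:, :3]
```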
  • Keyframe images captured by a color camera may be employed to extract landmarks and a dense 3D reconstruction using structure-from-motion and multi-view stereo methods, respectively.
  • Through the point cloud registration in 3D, the different sensory data are fused together, and any associated attributes may complement the corresponding 3D points.
  • This enriched 3D point cloud may be encoded using standardized Point Cloud Compression (PCC) for data transmission between the centralized entity and those robots that may directly use the 3D point cloud format for their individual environment mapping; for robots using another kind of environment mapping format, data may be extracted from the enriched 3D point cloud (e.g., stored in the centralized entity), converted into the individual environment mapping format of the robot, and compressed if needed (e.g., in a Zip format).
  • Each of a plurality of robots that evolve in a same complex environment may not have traversed/explored all the areas of the environment.
  • Environment mapping data may be extracted from the aggregated environment mapping that is adapted to the different forms of environment mapping data for each robot, and transmitted to them. Given a list of robot types, the attributes of the aggregated environment mapping may therefore be checked, to determine whether a specific robot or specific robot type has provided environment mapping data for an area. If a specific robot or robot type did not provide environment mapping data for an area, the environment mapping data in this area may be extracted from the aggregated environment mapping and may be converted into a format according to the characteristics of the robot or robot type before being transmitted to the robot.
  • For example, robot A is equipped with a 2D Lidar sensor and exports a 2D occupancy grid map of area 3 and area 1.
  • Robot B is equipped with a depth sensor and exports a 3D point cloud of area 2. If robot A is placed in area 2 to perform a task, it is necessary to expand its 2D occupancy grid map (that only contains areas 3 and 1) with a 2D occupancy grid map of area 2 for localization and navigation.
  • To this end, the 3D point cloud of area 2 in the aggregated environment mapping, established based on data from robot B, may be converted into a 2D occupancy grid map compatible with the characteristics of robot A, such as its Lidar height and map parameters.
  • For a 3D point cloud, shown in Figure 7a as an example, its ground plane is first detected. Then the slice plane is determined as a plane parallel to the ground, at a distance equal to the height of the Lidar sensor(s) of robot A (5010).
  • a threshold of distance to the slice plane may also be predefined to project the data of barrier objects close to the ground into the slice point cloud, so that any collision areas may be included in the environment mapping data.
  • A slice point cloud (Figure 7b), cropped from the 3D point cloud (Figure 7a), may be converted into a 2D occupancy grid map (Figure 7c) according to the map parameters of robot A.
  • The slice point cloud is first transformed into the local coordinate system defined on the slice plane, i.e., the axes X and Y are in the slice plane, and the Z axis is along the normal direction of the slice plane. Then, using the map resolution particular to robot A, the slice point cloud is projected onto the discrete grid points of map 7c, where any occupied grid points are indicated in black. Assuming that the robot is located in the center of the slice point cloud, the free grid points between the center and the occupied grid points are indicated in white. The other grid points are unknown areas, depicted in grey.
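  • Putting the steps of Figures 7a-c together, the following sketch crops the slice and rasterizes it into an occupancy grid. It assumes the cloud is already expressed with the Z axis normal to the slice plane and the robot at the grid center, as described above; the threshold and resolution values are illustrative:

```python
import numpy as np

def slice_cloud_to_grid(points, lidar_height, slice_thickness, resolution, grid_size):
    """Crop a horizontal slice of a 3D point cloud at the robot's Lidar height and
    rasterize it into a 2D occupancy grid (0 free, 100 occupied, -1 unknown),
    assuming the robot is at the center of the grid."""
    # 1. keep points within `slice_thickness` of the slice plane (includes low obstacles)
    mask = np.abs(points[:, 2] - lidar_height) <= slice_thickness
    slice_pts = points[mask]

    # 2. rasterize into a grid centered on the robot
    grid = np.full((grid_size, grid_size), -1, dtype=np.int8)   # unknown by default
    center = grid_size // 2
    cols = np.round(slice_pts[:, 0] / resolution).astype(int) + center
    rows = np.round(slice_pts[:, 1] / resolution).astype(int) + center
    inside = (rows >= 0) & (rows < grid_size) & (cols >= 0) & (cols < grid_size)
    rows, cols = rows[inside], cols[inside]

    # 3. mark free cells along the ray from the robot (grid center) to each occupied cell
    for r, c in zip(rows, cols):
        n = max(abs(r - center), abs(c - center))
        for i in range(n):                                       # simple ray stepping
            fr = center + (r - center) * i // max(n, 1)
            fc = center + (c - center) * i // max(n, 1)
            grid[fr, fc] = 0
    grid[rows, cols] = 100                                       # occupied cells
    return grid
```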
  • The 2D occupancy grid map generated from the 3D point cloud of area 2 may complete the environment mapping of robot A for its localization and navigation in area 2, of which it did not initially have any environment mapping data.
  • the aggregated environment mapping may also complete environment mapping data for a robot, in the form of image and depth.
  • A 3D point cloud (Figure 8a) may be rendered into RGB images (Figure 8b) and a depth map (Figure 8c) by virtual cameras.
  • The intrinsics of the virtual cameras may be consistent with those of an RGB camera and a depth camera of a robot (e.g., those of robot B (5011)).
  • the camera poses may be sampled in space considering the robot characteristics such as the height and orientation of the robot camera.
  • the statistics of robot locations may also be considered, which makes the sampled positions of virtual camera appear at the more frequent locations of other robots in the area.
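  • A sketch of the virtual-camera rendering of Figures 8a-c for the depth channel: the point cloud is projected with pinhole intrinsics assumed to match the target robot's depth camera, keeping the nearest point per pixel (a simple z-buffer):

```python
import numpy as np

def render_depth_from_cloud(points_world, world_to_camera, fx, fy, cx, cy, width, height):
    """Project a 3D point cloud into a synthetic depth map seen from a virtual camera."""
    pts = np.column_stack([points_world, np.ones(len(points_world))])
    pts_cam = (world_to_camera @ pts.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0
    x, y, z = pts_cam[in_front, 0], pts_cam[in_front, 1], pts_cam[in_front, 2]
    u = np.round(fx * x / z + cx).astype(int)         # pinhole projection
    v = np.round(fy * y / z + cy).astype(int)
    valid = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    depth = np.full((height, width), np.inf)
    # keep the nearest point per pixel (simple z-buffer)
    for ui, vi, zi in zip(u[valid], v[valid], z[valid]):
        if zi < depth[vi, ui]:
            depth[vi, ui] = zi
    depth[np.isinf(depth)] = 0.0                      # 0 = no data
    return depth
```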
  • the aggregated environment mapping of a complex environment may be divided into subsets that correspond to different areas.
  • a graph may represent the organization in subsets of the data in the aggregated environment mapping, where each graph node indicates data of a different area and edges connect adjacent areas and represent passages from one area to another. See Figure 9, showing a graph representation of the complex environment of figures 5 and 6, which may be used for the robots to improve their efficiency of robot localization and navigation, and cooperation.
  • Node 903, corresponding to area 3, has edges to all the nodes that represent adjacent areas: area 1 (node 901), area 2 (node 902), area 4 (node 904) and area 5 (node 905).
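  • The subset graph of Figure 9 and the path search used later (to extract only the mapping data of the areas between a starting area and a destination area) can be sketched as a plain adjacency list with a breadth-first search; only the edges named above (area 3 adjacent to areas 1, 2, 4 and 5) are encoded, and further adjacencies, if any, would be added the same way:

```python
from collections import deque

# Adjacency of areas as in Figure 9: area 3 connects to all other areas.
area_graph = {1: [3], 2: [3], 3: [1, 2, 4, 5], 4: [3], 5: [3]}

def area_path(start, goal, graph=area_graph):
    """Breadth-first search over the area graph; returns the list of areas to traverse."""
    queue, parents = deque([start]), {start: None}
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        for nxt in graph[node]:
            if nxt not in parents:
                parents[nxt] = node
                queue.append(nxt)
    return None

# Example: a robot in area 5 sent to area 1 only needs mapping data for areas 5, 3 and 1.
print(area_path(5, 1))   # [5, 3, 1]
```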
  • robots may retrieve data from the aggregated environment mapping, e.g., for their own localization or localization of peer robots, or for retrieval of expanded (completed) environment mapping, according to robot characteristics (e.g. robot type, robot features, sensor parameters). Localization of robots may be accelerated or otherwise improved (e.g., improved preciseness), based for example, on history of past locations, and on history of executed tasks.
  • a statistical analysis of robot’s past locations may provide an initial hypothesis about a robot’s current (present) location. This analysis may be implemented in a coarse-to-fine manner.
  • a ranked list of possible areas may be provided to the robot, e.g., based on robot characteristics, past locations, past executed tasks, and the robot may retrieve for each area in the list, the corresponding environment mapping data of the area, as well as the most likely position of the robot in that area.
  • The robot may be instructed, by the centralized entity, to perform some reference measurements of its environment (for example, a distance measurement in each of the four compass directions may give a good idea of the dimensions of an area and thereby identify the area; an observation with a camera sensor may identify an object which may identify an area), and to transmit the resulting data to the centralized entity, which matches the data against the aggregated environment mapping.
  • the centralized entity may exactly determine the location of the robot in terms of area and location in the area, and may transmit the location and possibly the environment mapping for the area to the robot.
  • the aggregated environment mapping may also include the location of each charging station in each area, and for which types of robots the charging station is suited, and the centralized entity may transmit the location of the charging station in an area, or the location of a nearest charging station suited for a robot, to a robot.
  • the centralized entity may inform a robot that a particular charging station is occupied by another robot and may suggest that the robot directs itself to an alternative charging station that is not occupied.
  • the trajectories of robots are recorded in the centralized entity (possibly in the aggregated environment mapping or associated with it) when they perform a task.
  • On a coarse level, the number of times the robot has visited an area is counted.
  • In Figure 10, the coarse statistics of a robot’s past locations in complex environment 50 are illustrated by columns 1001, 1002, 1003, 1004 and 1005, each column having a height related to the robot’s past locations in the corresponding area.
  • On a finer level, the ground plane of each area may be divided into a grid of configurable mesh size.
  • The trajectory of a robot may be projected onto the complex environment and/or onto an area for finer resolution.
  • In Figure 11, an example of the statistics of locations in one area (area 3 (503)) is shown; it can be concluded that the robot has a very high probability of being at location 1105, which may correspond to its charging station.
  • Timestamps may be associated with robot locations for still improved localization; for example, based on past robot locations and the associated timestamps, it may be concluded that a robot has a very high probability of being at a particular location at a given time.
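  • A sketch of such coarse-to-fine statistics: per-area visit counts give a ranked list of candidate areas, and per-cell counts within an area give the most likely position; the binning scheme and class interface are assumptions for illustration:

```python
from collections import Counter

class LocationStatistics:
    """Coarse (per area) and fine (per grid cell) statistics of a robot's past locations."""

    def __init__(self, cell_size=0.5):
        self.cell_size = cell_size
        self.area_counts = Counter()              # coarse: visits per area
        self.cell_counts = {}                     # fine: Counter of (ix, iy) cells per area
        self.timed_counts = {}                    # optional: visits per (area, hour of day)

    def record(self, area_id, xy, timestamp_hour=None):
        self.area_counts[area_id] += 1
        cell = (int(xy[0] // self.cell_size), int(xy[1] // self.cell_size))
        self.cell_counts.setdefault(area_id, Counter())[cell] += 1
        if timestamp_hour is not None:
            key = (area_id, timestamp_hour)
            self.timed_counts[key] = self.timed_counts.get(key, 0) + 1

    def ranked_areas(self):
        """Coarse hypothesis: areas ordered by how often the robot was seen there."""
        return [area for area, _ in self.area_counts.most_common()]

    def most_likely_position(self, area_id):
        """Fine hypothesis: center of the most frequently visited cell in the area."""
        cell, _ = self.cell_counts[area_id].most_common(1)[0]
        return ((cell[0] + 0.5) * self.cell_size, (cell[1] + 0.5) * self.cell_size)

    def likely_area_at(self, hour):
        """Most frequent area for this robot at the given hour of day, if any record exists."""
        counts = {a: c for (a, h), c in self.timed_counts.items() if h == hour}
        return max(counts, key=counts.get) if counts else None
```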
  • the statistical data of past robot locations may be visualized through a user interface device, e.g., in augmented reality.
  • The UI device is first localized in the aggregated environment mapping.
  • The statistical data may be displayed in an augmented reality application using various types of presentation, such as 3D columns, heat maps, etc., that are overlaid on the real environment.
  • This kind of visualization can assist the user to select the meeting point in the process of the incremental construction of the aggregated environment mapping.
  • the user can interactively designate a location as a meeting point, where the visit frequency of the robot is higher and there are more distinguishing features.
  • The robot characteristics (e.g., robot type, robot identifier (ID), sensor types and parameters) may be associated with portions of the data in the aggregated environment mapping according to the robot which provided the environmental observations on which each portion is based.
  • A robot sends a localization request and its characteristics to the server.
  • The aggregated environment mapping is first checked for environment mapping data from the same robot type. If available, the robot’s own previous environment mapping data is retrieved for localization. If not, it is checked whether compatible environment mapping data from a similar robot (e.g., a robot of the same type or of a similar type) is available.
  • The similar robot could have the same sensor type, or the same sensor height, as the robot to be localized. If there is no appropriate raw environment mapping data available for localization, the expanded environment mapping data compatible with the robot characteristics is employed as aforementioned. If all else fails, it means that no robot has ever mapped the target area. In that case, the data currently captured by this robot is first merged into the aggregated environment mapping. The centralized entity may control this robot to explore the environment and visit a meeting point for registration validation, and then update the aggregated environment mapping. Thus, the robot can benefit from global localization in the aggregated environment mapping.
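  • The selection cascade just described (own previous data, then data from a similar robot, then data converted from the aggregated mapping, else merge the fresh capture and explore) might be sketched as follows; the methods on `aggregated_map` and `robot` are hypothetical placeholders for operations discussed elsewhere in this description:

```python
def select_reference_mapping(aggregated_map, robot):
    """Choose reference environment mapping data for localizing `robot`.
    The predicates and converters below are placeholders, not a defined API."""
    # 1. data previously contributed by this robot type (the robot's own kind of data)
    own = aggregated_map.data_from_robot_type(robot.robot_type)
    if own is not None:
        return own
    # 2. data from a similar robot (same sensor type or same sensor height)
    similar = aggregated_map.data_from_similar_robot(robot.sensor_types, robot.sensor_height)
    if similar is not None:
        return similar
    # 3. expanded data converted from the aggregated mapping into the robot's format
    converted = aggregated_map.extract_compatible(robot)
    if converted is not None:
        return converted
    # 4. no robot has mapped this area yet: merge the robot's fresh capture and explore
    aggregated_map.merge(robot.capture_surroundings())
    return None
```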
  • the reference environment mapping data is extracted from the aggregated environment mapping with respect to the aforementioned hypotheses of initial locations and related characteristics of the robot.
  • The terminal location of a robot in its last task is stored separately on the server, and can be used as the first hypothesis of its starting location in the next task. If the robot is displaced before the beginning of a new task, the last location would be invalid for the robot localization.
  • the corresponding reference data is extracted to register with the captured environment mapping data when the robot is activated. If this registration fails, it means that the robot has been moved and the (re-)localization is required before executing the task.
  • The above statistical analysis for this robot can provide a list of possible areas and the most likely location in each area.
  • The corresponding reference data is retrieved as aforementioned.
  • The robot captures partial data of its surroundings after being activated for the new task.
  • The robot localization is realized by registering the partial data against the reference environment mapping data, with the initial location retrieved from the fine statistics.
  • When the environment mapping data is in the form of a point cloud, registering the partial environment mapping data with its reference data involves two stages: initial alignment and refined registration.
  • The structural representations of both environment mapping data sets are extracted to initially align them.
  • Iterative Closest Point (ICP) related methods may be employed for refinement of the registration (ICP is an algorithm that minimizes the difference between several point clouds and may be used to reconstruct 2D or 3D surfaces from different scans, to localize robots, and to achieve optimal path planning).
  • The registration error is compared against a predefined threshold to determine whether the robot is correctly localized in the reference environment mapping data. If not, the corresponding data of a candidate area and the hypothesis of initial location are retrieved to repeat the registration procedure until the robot is localized.
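  • A minimal point-to-point ICP in NumPy/SciPy (a generic textbook formulation, not necessarily the exact method contemplated here), wrapped in the retry loop over candidate areas and initial-location hypotheses, with the registration error checked against a threshold:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, init=np.eye(4), iters=30):
    """Very small point-to-point ICP: returns a 4x4 transform and the mean residual."""
    T = init.copy()
    src = (T[:3, :3] @ source.T).T + T[:3, 3]
    tree = cKDTree(target)
    for _ in range(iters):
        _, idx = tree.query(src)                      # nearest neighbors in the reference cloud
        matched = target[idx]
        # best rigid transform between current src and matched points (Kabsch / SVD)
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_t))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                      # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = (R @ src.T).T + t
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T
    dist, _ = tree.query(src)
    return T, float(dist.mean())

def localize(partial_scan, candidates, error_threshold=0.05):
    """Try (reference_cloud, initial_pose) hypotheses until one registers within the threshold."""
    for reference, init_pose in candidates:
        T, err = icp(partial_scan, reference, init=init_pose)
        if err < error_threshold:                     # registration accepted: robot localized
            return T
    return None                                       # localization failed for all hypotheses
```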
  • When the environment mapping data is in the form of an image or depth map, 2D image matching methods can be employed to realize the robot localization.
  • The necessary environment mapping data can be obtained by searching, on the graph representation, the path between the starting area and the destination area.
  • Only the environment mapping data of the areas on that path is extracted, in a format compatible with the robot characteristics, which reduces the data load on the robot for a complex environment.
  • multiple robots may map a complex environment in a collaborative and/or incremental way and, for example, a robot may advantageously use the aggregated environment mapping to obtain environment mapping data obtained from other robots for its own localization, environment mapping, or for localization of other robots, and notably localization and environment mapping of areas that were not mapped by itself, or of which the environment mapping is incomplete.
  • the new environment mapping data captured by the robot may advantageously be employed to update the current aggregated environment mapping and its updates may help to complete, correct or adjust the aggregated environment mapping, notably when changes have taken place in the complex environment.
  • the updating of the aggregated environment mapping may be accomplished in an incremental way as described in the previous sections.
  • robot characteristics may also be considered in the updating.
  • the robot’s environment mapping data update may be compared with the environment mapping data of the area in the aggregated environment mapping, regarding their coverage zones. When the new environment mapping data covers a smaller zone of the area, the aggregated environment mapping may, for example, not be updated based on the robot’s environment mapping data update.
  • The robot’s environment mapping data update may be merged into the aggregated environment mapping.
  • The centralized entity may instruct the robot from which it has received the environment mapping data update to move to a pre-defined meeting (reference) point, in order to be able to calibrate the received update against the environment mapping data in the aggregated environment mapping.
  • The more complete environment mapping data update from the robot may then be used to update the aggregated environment mapping, replacing the less complete environment mapping data previously received from the same robot that was used to construct the aggregated environment mapping. If the environment mapping data in the aggregated environment mapping, for the area for which the robot environment mapping data update is received, was obtained from another robot, this means that the robot sending the update did not map the area, or at least did not contribute to the aggregated environment mapping for that area.
  • the aggregated environment mapping may therefore include attributes that are related to an area, such as (a) timestamp(s) (date/time).
  • the timestamp of the environment mapping data for that area in the aggregated environment mapping may be compared with the robot’s activation time (or current time), and if it follows from the comparing that the area has not been visited for more than a defined period, the environment mapping data for that area is obtained from the robot and is then used to update the aggregated environment mapping.
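  • A sketch of that update decision, combining the coverage comparison of the previous paragraphs with the timestamp comparison; the attribute names and the staleness threshold are assumptions:

```python
import time

def should_update_area(aggregated_area, new_data, max_age_seconds=7 * 24 * 3600):
    """Decide whether a robot's new mapping data should replace the stored data for an area.
    `aggregated_area` is assumed to expose a coverage measure (e.g. mapped surface in m^2)
    and the timestamp of its data; `new_data` exposes the same coverage measure."""
    stale = (time.time() - aggregated_area.timestamp) > max_age_seconds
    more_complete = new_data.coverage >= aggregated_area.coverage
    # update when the stored data is old, or when the new data covers at least as much
    return stale or more_complete
```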
  • Individual robots may be localized in their environment mapping data and in the aggregated environment mapping.
  • Mutual localization between multiple robots may be obtained from their locations in the aggregated environment mapping.
  • a user interface may enable a user to associate multiple robots to perform a complex task in a collaborative way.
  • the association can be indicated through the user interface or voice command.
  • the localization of the robots participating in the task and the collaborative path planning of the robots may be computed on the centralized server.
  • robot A (5010) is a sweeping robot with a 2D Lidar and is localized, in the aggregated environment mapping, in area 3 (503) and robot D (5013) is a mopping robot with a depth sensor that is also localized, in the aggregated environment mapping, in area 2 (502) but was then switched off and manually moved to area 5 (505).
  • the user may want, via the user interface, to associate the two robots A and D to vacuum clean (robot A) and then mop (robot D) the floor in area 1 (501 ).
  • For robot A, environment mapping data (e.g., the occupancy grid map of area 3) may be extracted from the aggregated environment mapping based on robot A’s last known location and/or robot A’s characteristics.
  • For robot D, its environment mapping data may be a slice point cloud of area 2 in the aggregated environment mapping, which was generated from a grid map obtained from robot A.
  • the environment mapping data of robot D that it obtained from the exploration of its environment when activated, does not correspond to the environment mapping data for area 2 (it rather corresponds to area 5).
  • the centralized entity when receiving the environment mapping data of area 5 from robot D, may determine that the environment mapping data obtained from robot D does not correspond to that of area 2, but rather to that of area 5, and may therefore conclude that robot D has been moved (carried) manually from area 2 to area 5. Further, based on the depth measurements done by the sensors of robot D, the exact location of robot D in area 5 may be determined, if required. However, it may be sufficient to know that robot D is in area 5.
  • a list, ranked according to probability, of areas and, if required, position in these areas may help to localize robot D.
  • the locations of robot A in area 3 and of robot D in area 5 can now be updated in the aggregated environment mapping.
  • The centralized entity may then instruct robot A and robot D to move to area 1, using the graph-node representation of Figure 9, for example, for their navigation to area 1.
  • When the robots are to start vacuum cleaning and mopping, a trajectory may be determined by the centralized entity for robot A, and robot D should follow the path of robot A.
  • the centralized entity may therefore extract environment mapping data for area 1 from the aggregated environment mapping, convert it to environment mapping data suited for each robot according to its particular features, and transmit environment mapping data of area 1 to each of the robots A and D.
  • the centralized entity may then compute the paths to follow for each robot in area 1 , indicate the paths in the environment mapping data for each robot, and transmit the instructions to follow these paths to the robots.
  • the centralized entity may collect the new environment mapping data related to these areas and update the aggregated environment mapping.
  • FIG. 13 is an exemplary embodiment of a robotic device 1300.
  • The robotic device (or robot, or device) 1300 comprises at least one processor (Processing Unit, Central Processing Unit) 1301, at least one memory 1302, a clock unit 1303, robot sensor(s) and driver logic 1304a, robot sensor(s) 1304b, a transmit/receive unit (transceiver) interface 1305, a battery 1306, robot displacement (movement) driver logic 1307a, robot displacement (electro-)mechanical elements 1307b, robot function driver logic 1308a and robot function (electro-)mechanical elements 1308b.
  • The robotic device 1300 corresponds for example to robot 21, 22 of figure 2, or to any of robots A-D (5010-5013) of figures 5, 6 or 12.
  • Figure 14 is an embodiment of a centralized entity (device) 1400.
  • Device 1400 may for example correspond to device 23 of figure 2.
  • Device 1400 may include at least one processor (Processing Unit, Central Processing Unit) 1401, at least one memory 1402, at least one clock unit 1403, at least one transmit/receive (transceiver) interface 1405, and optionally an input-output interface 1406, e.g., a display and a keyboard, or a tactile display.
  • The elements 1401-1406 are connected to an internal data and communication bus 1411.
  • the elements of the device 1400 may be configured to receive individual environment mapping data from at least two of the multiple robotic devices and relating to at least one area of the environment, the individual environment mapping data being in an environment mapping format specific to each of the at least two multiple robotic devices; to construct an aggregated environment mapping of the environment based on the individual environment mapping data received from the at least two multiple robotic devices, the aggregated environment mapping being in a format comprising information contained in the received individual environment mapping data; to extract, from the aggregated environment mapping, individual environment mapping data destined to at least one of the at least two multiple robotic devices, the extracted individual environment mapping being specific to the individual environment mapping data format used by the at least one of the at least two multiple robotic devices; and to transmit the extracted individual environment mapping data to the at least one of the at least two multiple robotic devices.
  • the device is further configured to: obtain, based on the constructed aggregated environment mapping of the environment, locations of the at least two multiple robotic devices in the environment; and to transmit, to the at least one of the at least two robotic devices, the obtained locations in the extracted individual environment mapping data for the at least one of the at least two robotic devices.
  • the aggregated environment mapping is in a 3D point cloud format.
  • the individual environment mapping data format used by the at least one of the at least two multiple robotic devices is a 2D occupancy grid map.
  • the aggregated environment mapping comprises attributes associated to the environment mapping, the attributes being any of: information representative of an area to which an item of data in the aggregated environment mapping relates; information representative of a type of robotic device that contributed to an item of data in the aggregated environment mapping and an individual environment mapping data format used by the robotic device that contributed to the item of data; information enabling identification of a robotic device that contributed to an item of data in the aggregated environment mapping; information representative of a last known location of a robotic device in an area of the environment to which the item of data in the aggregated environment mapping relates; information representative of characteristics of the robotic device that contributed to an item of data in the aggregated environment mapping; and a timestamp of creation of an item of data in the aggregated environment mapping.
  • the robot characteristics comprise any of: sensor type and number of sensors.
  • the device is further configured to transmit, to at least one of the at least two robotic devices, information representative of an area where the at least one robotic device in the environment was known to be located, based on information extracted from the aggregated environment mapping.
  • Figure 15 is a flow chart of an embodiment of a method 1500 for multiple robotic devices in an environment.
  • the method is for example implemented in a centralized entity, such as 23 of figure 2.
  • individual environment mapping data is received from at least two of the multiple robotic devices and relating to at least one area of the environment, the individual environment mapping data being in an environment mapping format specific to each of the at least two multiple robotic devices.
  • an aggregated environment mapping of the environment is constructed, based on the individual environment mapping data received from the at least two multiple robotic devices, the aggregated environment mapping being in a format comprising information contained in the received individual environment mapping data.
  • individual environment mapping data destined to at least one of the at least two multiple robotic devices is extracted from the aggregated environment mapping, the extracted individual environment mapping being specific to the individual environment mapping data format used by the at least one of the at least two multiple robotic devices; and in 1504, the extracted individual environment mapping data is transmitted to the at least one of the at least two multiple robotic devices.
  • the method includes obtaining, based on the constructed aggregated environment mapping of the environment, locations of the at least two multiple robotic devices in the environment, and transmitting, to the at least one of the at least two robotic devices, the obtained locations in the extracted individual environment mapping data for the at least one of the at least two robotic devices.
  • the aggregated environment mapping is in a 3D point cloud format.
  • the individual environment mapping data format used by the at least one of the at least two multiple robotic devices is a 2D occupancy grid map.
  • the aggregated environment mapping comprises attributes associated to the environment mapping, the attributes being any of: information representative of an area to which an item of data in the aggregated environment mapping relates; information representative of a type of robotic device that contributed to an item of data in the aggregated environment mapping and an individual environment mapping data format used by the robotic device that contributed to the item of data; information enabling identification of a robotic device that contributed to an item of data in the aggregated environment mapping; information representative of a last known location of a robotic device in an area of the environment to which the item of data in the aggregated environment mapping relates; information representative of characteristics of the robotic device that contributed to an item of data in the aggregated environment mapping; a timestamp of creation of an item of data in the aggregated environment mapping.
  • the robot characteristics comprise any of sensor type and number of sensors.
  • the method further comprises transmitting, to at least one of the at least two robotic devices, information representative of an area where the at least one robotic device in the environment was known to be located, based on information extracted from the aggregated environment mapping.
  • the extracted individual environment mapping is that of an area for which the at least one of the at least two multiple robots has no individual environment mapping.
  • Figure 16 is a flow chart of a further embodiment of a method 1600 for multiple robotic devices in an environment. The method is for example implemented in a centralized entity, such as 23 of figure 2.
  • first environment mapping data relating to an environment is received from a first robotic device, the first environment mapping data being in a first environment mapping data format specific to the first robotic device.
  • second environment mapping data relating to the environment is received, from a second robotic device, the second environment mapping data being in a second environment mapping data format specific to the second robotic device.
  • aggregated environment mapping data of the environment is constructed based on the first environment mapping data and the second environment mapping data received.
  • third environment mapping data for the first robotic device is extracted from the aggregated environment mapping data, the third environment mapping data describing at least a portion of the environment not mapped by the first robotic device.
  • the extracted third environment mapping data is transmitted to the first robotic device in the first environment mapping data format specific to the first robotic device.
  • third environment mapping data for the second robotic device is extracted from the aggregated environment mapping data, the third environment mapping data describing at least a portion of the environment not mapped by the second robotic device. Then, in 1605, the extracted third environment mapping data is transmitted to the second robotic device in the second environment mapping data format specific to the second robotic device.
  • the method further comprises obtaining, based on the aggregated environment mapping data, a location of the second robotic device in the environment; and transmitting, to the first robotic device, the obtained second location according to the first environment mapping data format.
  • the first environment mapping data relates to a first area of the environment.
  • the second environment mapping data relates to a second area of the environment.
  • the aggregated environment mapping data is in a 3D point cloud environment mapping data format.
  • the first environment mapping data format is a 2D occupancy grid map environment mapping data format.
  • the second environment mapping data format is a 3D point cloud environment mapping data format.
  • the first environment mapping data format is a 3D point cloud environment mapping data format.
  • the second environment mapping data format is a 2D occupancy grid map environment mapping data format.
  • the aggregated environment mapping data comprises information representative of an area to which an item of data in the aggregated environment mapping data relates.
  • the aggregated environment mapping data comprises information representative of a type of robotic device that contributed to an item of data in the aggregated environment mapping data.
  • the aggregated environment mapping data comprises information enabling identification of a robotic device that contributed to an item of data in the aggregated environment mapping data.
  • the aggregated environment mapping data comprises information representative of a location of a robotic device in the environment.
  • the aggregated environment mapping data comprises information representative of characteristics of the robotic device that contributed to an item of data in the aggregated environment mapping data.
  • the aggregated environment mapping data comprises a timestamp of creation of an item of data in the aggregated environment mapping data.
  • the characteristics comprise any of a sensor type, a number of sensors.
  • the present disclosure also relates to an embodiment of a centralized entity device (a device), e.g., device 1400.
  • Device 1400 may for example correspond to device 23 of figure 2, the device comprising at least one processor 1401 and at least one memory 1402.
  • the at least one processor and the at least one memory being configured to receive, from a first robotic device, first environment mapping data relating to an environment, the first environment mapping data being in a first environment mapping data format specific to the first robotic device.
  • the at least one processor and the at least one memory being configured to receive, from a second robotic device, second environment mapping data relating to the environment, the second environment mapping data being in a second environment mapping data format specific to the second robotic device.
  • the at least one processor and the at least one memory being configured to construct aggregated environment mapping data of the environment based on the first environment mapping data and the second environment mapping data received.
  • the at least one processor and the at least one memory being configured to extract, from the aggregated environment mapping data, third environment mapping data for the first robotic device, the third environment mapping data describing at least a portion of the environment not mapped by the first robotic device.
  • the at least one processor and the at least one memory being configured to transmit the extracted third environment mapping data to the first robotic device in the first environment mapping data format specific to the first robotic device.
  • the at least one processor and the at least one memory are further configured to obtain, based on the aggregated environment mapping data, a location of the second robotic device in the environment, and to transmit, to the first robotic device, the obtained second location according to the first environment mapping data format.
  • the first environment mapping data relates to a first area of the environment.
  • the second environment mapping data relates to a second area of the environment.
  • the aggregated environment mapping data is in a 3D point cloud environment mapping data format.
  • the first environment mapping data format is a 2D occupancy grid map environment mapping data format.
  • the second environment mapping data format is a 3D point cloud environment mapping data format.
  • the first environment mapping data format is a 3D point cloud environment mapping data format.
  • the second environment mapping data format is a 2D occupancy grid map environment mapping data format.
  • the aggregated environment mapping data comprises information representative of an area to which an item of data in the aggregated environment mapping data relates.
  • the aggregated environment mapping data comprises information representative of a type of robotic device that contributed to an item of data in the aggregated environment mapping data.
  • the aggregated environment mapping data comprises information enabling identification of a robotic device that contributed to an item of data in the aggregated environment mapping data.
  • the aggregated environment mapping data comprises information representative of a location of a robotic device in the environment.
  • the aggregated environment mapping data comprises information representative of characteristics of the robotic device that contributed to an item of data in the aggregated environment mapping data.
  • the aggregated environment mapping data comprises a timestamp of creation of an item of data in the aggregated environment mapping data.
  • the characteristics comprise any of a sensor type, a number of sensors.
  • aspects of the principles of the present disclosure can be embodied as a system, method or computer readable medium. Accordingly, aspects of the principles of the present disclosure can take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code and so forth), or an embodiment combining hardware and software aspects that can all generally be referred to herein as a “circuit”, “module” or “system”. Furthermore, aspects of the principles of the present disclosure can take the form of a computer readable storage medium. Any combination of one or more computer readable storage medium(s) can be utilized. Thus, for example, it is to be appreciated that the diagrams presented herein represent conceptual views of illustrative system components and/or circuitry embodying the principles of the present disclosure. Similarly, it is to be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in computer readable storage media and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
  • a computer readable storage medium can take the form of a computer readable program product embodied in one or more computer readable medium(s) and having computer readable program code embodied thereon that is executable by a computer.
  • a computer readable storage medium as used herein is considered a non-transitory storage medium given the inherent capability to store the information therein as well as the inherent capability to provide retrieval of the information therefrom.
  • a computer readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Some or all aspects of the storage medium may be remotely located (e.g., in the ‘cloud’).

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • General Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

Environment mapping data may be received from, for example, two robotic devices (robots). The environment mapping data may relate to an environment, in an environment mapping format specific to each robot. An aggregated environment mapping of the environment is built based on the environment mapping data received from the robots. Environment mapping data destined to a robot is extracted from the aggregated environment mapping, where the extracted environment mapping is specific to the environment mapping data format used by that robot. The aggregated environment mapping may be used, for example, to complete a robot's environment mapping of an area that is unexplored by that robot, to localize other robots in the environment, or to coordinate complex tasks involving multiple robots.

Description

METHOD AND APPARATUS FOR MULTIPLE ROBOTIC DEVICES IN AN ENVIRONMENT
FIELD
The present disclosure generally relates to the field of robotics, and in particular the present disclosure is related to data retrieval and extraction of environment mapping data obtained from heterogeneous robotic devices.
BACKGROUND
Any background information described herein is intended to introduce the reader to various aspects of art, which may be related to the present embodiments that are described below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present disclosure. Accordingly, it should be understood that these statements are to be read in this light.
In robotics, observation of a robot’s environment is important for efficient functioning of the robot. Therefore, the robot is equipped with one or more sensors that enable it to capture its environment. For example, a robotic vacuum cleaner or a dedicated robotic measuring device may use a laser distance measuring sensor to create a two-dimensional (2D) map (floorplan) of its environment, including walls and other obstacles, and use the 2D map in order to improve its cleaning efficiency. A service robot such as a home-care robot may need visual sensors to elaborate a 3D environment mapping of its environment for improved understanding of the nature of the obstacles detected. For example, a 3D observation in a home, office or other environment may enable determining that an obstacle is an object that may be easily circumvented (e.g., furniture, a person, a book or a shoe), much like a human would do. Same purpose or different purpose robots that evolve in a same complex environment such as a home, office or any other kind of environment, may take advantage of specific knowledge of the environment that each robot has, for improved efficiency of all.
SUMMARY
According to one aspect of the present disclosure, there are provided methods for multiple robotic devices in an environment, according to the described embodiments and appended claims. According to a further aspect of the present disclosure, embodiments of a device implementing at least one of the methods for multiple robotic devices in an environment are described in the following and claimed in the appended claims.
BRIEF DESCRIPTION OF THE DRAWINGS
More advantages of the present disclosure will appear through the description of particular, non-restricting embodiments. To describe the way the advantages of the present disclosure can be obtained, particular descriptions of the present principles are rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. The drawings depict exemplary embodiments of the disclosure and are therefore not to be considered as limiting its scope. The embodiments described can be combined to form particular advantageous embodiments. In the following figures, items with same reference numbers as items already described in a previous figure will not be described again, to avoid unnecessarily obscuring the disclosure. The embodiments will be described with reference to the following drawings in which:
Figure 1a is a 2D occupancy grid map of an environment, determined by a robot capable of exploring and moving in a planar space.
Figure 1b is a 3D point cloud 11 (or mesh) of the same example environment, determined by means of a robot capable of determining its environment in a 3D space.
Figure 2 is a block diagram of multiple robots communicating with a centralized device (or entity) according to an embodiment.
Figure 3 is a flow chart of a method 300 of gathering data from multiple (possibly heterogeneous) robots according to an embodiment, for the purpose of, for example, providing individual robot environment mapping data from environment mapping data stored in an aggregated (universal) environment mapping.
Figure 4 is a flowchart for a method of collecting data from multiple, possibly heterogeneous, robots, according to an embodiment, for example, for the purpose of robot localization, that may be used to perform complex tasks involving multiple robots, or for the purpose of exploring a complex environment using multiple robots.
Figure 5 is an example of a complex environment with multiple areas, and multiple heterogeneous robots.
Figure 6 shows three heterogeneous robots A, B and D (5010, 5011 and 5013 respectively) starting from area 3, each instructed by the centralized entity to explore a different area of the complex environment.
Figures 7a-c illustrate extraction of environment mapping data for a particular robot type from the aggregated environment mapping.
Figures 8a-c illustrate how a similar extraction method may be used for a different type of robot.
Figure 9 is a graph representation of the complex environment of figures 5 and 6.
Figure 10 illustrates a coarse-level statistic data model of past robot locations in a complex environment.
Figure 11 illustrates a finer level statistic data model of past robot locations in a given area of the complex environment.
Figure 12 illustrates robots performing a collaborative task in a complex environment.
Figure 13 is an exemplary embodiment of a robotic device.
Figure 14 is an exemplary embodiment of a centralized entity (device).
Figure 15 is a flow chart of an embodiment of a method for coordinating multiple robotic devices in an environment.
Figure 16 is a flow chart of a further embodiment of a method for multiple robotic devices in an environment.
It should be understood that the drawings are for purposes of illustrating the concepts of the disclosure and are not necessarily the only possible configuration for illustrating the disclosure.
DETAILED DESCRIPTION
The present description illustrates the principles of the present disclosure. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the principles of the disclosure and are included within its spirit and scope.
All examples and conditional language recited herein are intended for educational purposes to aid the reader in understanding the principles of the disclosure and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions.
Moreover, all statements herein reciting principles, aspects, and embodiments of the disclosure, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure.
The efficiency of a set of heterogeneous types of robotic devices (‘robots’) may be improved when the environment observation data of the different types of robots can be used for their mutual benefit. The embodiments described herein give some non-limiting implementation examples of solutions to these and other problems.
A robot device may be equipped with one or more sensor(s) for measuring the robot’s environment. A sensor is, for example, a camera, a depth sensor, a temperature sensor, a humidity sensor, or a radiation sensor. As the environment observation depends on the sensor location and orientation, it may be suitable to move the robot device, and thus the sensor(s), around the environment to get an exhaustive understanding of the environment. For example, when a camera sensor is used, a robot may observe the same object from multiple points of view (top, side) in order to obtain a full 3D representation of it. For example, the robot device may be a robotic vacuum cleaner with only a limited set of sensors that are sufficient for performing simple tasks. While the robotic vacuum cleaner device may be able to overcome low obstacles such as a door sill, it is essentially a surface device that explores a 2D plane of its environment only. But, while it evolves in a 2D plane only, it may benefit from a 3D observation in order to obtain knowledge of the environment and the nature of the obstacles in the environment. A service robot, such as a home-care robot, would require (more sophisticated, more expensive) sensors that enable a 3D environment mapping of its environment. The performance of the robotic vacuum cleaner may be improved if it can benefit from the 3D environment mapping of the service robot, and vice versa. For example, the vacuum cleaner robot may gain from knowing that an obstacle is a temporary obstacle such as a person or moveable furniture, and take appropriate actions, such as suspending the vacuum cleaning in the area while the person is present in the environment, or waiting for the furniture to be moved before cleaning the area. Likewise, the performance of the service robot may be improved if it can benefit from a 2D map of a room that the service robot has not explored yet. Multiple robots that evolve in a same environment may benefit from knowing each other’s location in that environment when performing a more complex task, such as vacuum cleaning followed by mopping, or a home-care robot alerting a person not to enter a room where a robot is mopping, because the floor may be slippery. Likewise, localization in its environment is desired for a robot, so that it can make decisions and accomplish its tasks in an autonomous way. The localization problem is to estimate the robot’s position and orientation within an available map of its environment. Each individual robot may have its own (type of, kind of) map, i.e., its own representation of its environment, established by its own sensor(s). Robots may typically need to (re-)calibrate their position in their representation of the environment by (returning to) starting from a fixed position, e.g., the location of their charging base (docking station). Typically, it is difficult to contemplate that a robot may start from an arbitrary position in the environment if it did not move to that position by itself, and/or if it has not established a map of the environment previously. The embodiments described herein try to solve, among others, some of the previously described problems.
Figure 1a is a 2D occupancy grid map 10 of an environment, determined by a robot (e.g., a first robot) capable of exploring and moving in a planar space, e.g., on a floor level. The robot is for example equipped with a radar or lidar for range/distance sensing through a laser or ultrasound device. The robot may determine, for example, a 2D occupancy grid map of its environment as depicted. The 2D occupancy grid map gives information about the environment in the plane where the robot observes. The 2D occupancy grid map enables the robot to detect, for example, the presence of walls / furniture 100, and doors / openings 101. The 2D occupancy grid map thus obtained may be sufficient for the robot itself, and enable it to perform simple tasks, such as mopping, or vacuum cleaning.
Figure 1b is a 3D point cloud 11 (or mesh) of the same example environment, determined by a robot (e.g., a second robot) capable of determining its environment in a 3D space. The robot may, for example, be equipped with range/distance sensor(s) as the above mentioned first robot, that may be moved, tilted or rotated as required. Even if multiple robots may use same types of sensor(s), the individual robots may generate different kinds (types, formats) of environment mapping data. The 3D point cloud may be obtained by aggregation of many point cloud slices, each slice being obtained at a different vertical or horizontal position or angle of the sensor(s).
An observation based on (an) optical camera(s) (not shown) may give even further, and/or different, information about the environment, notably about the nature of obstacles, such as the presence of human beings, pets, or moveable furniture. Such information is difficult to obtain from the 2D occupancy grid map or the 3D point cloud, but can be obtained from the optical camera sensor by image processing, possibly coupled to Artificial Intelligence (AI), so that the objects can be classified (e.g., as furniture, a person, a pet, a wall, a window, a door, etc.).
A problem of multi-robot localization exists in the case that there are multiple (heterogeneous) robots that evolve in a same environment. According to embodiments, a robot may be localized quickly and navigate in a region that is not mapped (explored) by itself using environment mapping data obtained from other robots. According to embodiments, mutual localization between robots may be obtained using their individual localization (position) in an aggregated environment mapping, that is the fusion (aggregation) of different (kinds of) sensory data according to each robot’s features and characteristics.
According to embodiments, an aggregated environment mapping is built for a group of (heterogeneous) robots in an environment. According to embodiments, a centralized device may collect individual environment mapping data from each robot, e.g., (a) first environment mapping data (set) from a first device, and (a) second environment mapping data (set) from a second device. The first and the second environment mapping data sets may each relate to a different part (e.g., an area, a region, a space, a room) of the same environment, or to same part of the environment. Possibly the different areas of the environment may be mapped (explored) by multiple (heterogeneous) devices. The environment mapping data is then collected from the multiple devices and is then used to build (construct, establish) an aggregated environment mapping (data, data set) of the environment. The aggregated environment mapping data (set) may be used for different purposes according to embodiments. For example, the aggregated environment mapping data may be used to localize each robot in the environment, and to coordinate tasks (e.g., in the centralized device) to be executed by the different robots. For example, the aggregated environment mapping may be used to transmit the location of one robot to another robot, which may among others be used for mutual localization of robots. For example, the aggregated environment mapping may be used to extract environment mapping data for an individual robot according to the characteristics of the robot in order to complete the environment mapping data of the robot, for example, to complete partial environment mapping data of a robot with environment mapping data of parts of the environment that it did not explore itself. According to embodiments, in order to accelerate the localization, statistical analysis of a robot’s previous locations in the aggregated environment mapping as well as robot’s characteristics may be employed to provide an initial hypothesis of a robot’s location. According to embodiments, the aggregated environment mapping is divided into subsets corresponding to individual parts of the environment, and a subset corresponding to the part of the environment where the robot is localized may be transmitted to a robot including its own localization in the subset.
According to embodiments, when a robot navigates in an un-explored area or an area that was not explored for a given lapse of time (unexplored by itself, or unexplored in the aggregated environment mapping), new sensory data may be captured by that robot to update the unified data representation. According to embodiments, multiple robots may be associated to accomplish a collaborative task. According to embodiments, robot locations may be visualized using augmented or virtual reality. According to embodiments, mutual localization between associated robots (e.g., for executing a task or tasks needing the association/coordination of multiple robots) is obtained by their individual localization in the aggregated environment mapping. According to embodiments, the centralized device may plan/schedule such complex tasks and transmit instructions to each individual robot for executing its individual subtask according to the individual robot’s characteristics, and may possibly transmit individual environment mapping data to each or some of the individual robots, the individual environment mapping data being extracted from the aggregated environment mapping and adapted/configured to the individual robot’s characteristics. According to embodiments, the centralized device may be a server. According to embodiments, the centralized device is one of the robots, for example a robot having a supervisory role, such as a household robot ‘employing’ several other robots such as vacuum cleaner robots or kitchen robots. According to embodiments, the centralized device may be operated by a user via a user interface on the centralized device or on a distant device.
Figure 2 is a block diagram of multiple robots communicating with a centralized device according to an embodiment. Each robot 21, 22 has a wireless and/or wired interface for communication with a centralized entity 23. Examples of wireless interfaces are WiFi, Zigbee, or Bluetooth. Examples of wired interfaces are Ethernet or USB. Each robot has a Central Processing Unit (CPU) 21a, 22a, a memory 21b, 22b and (a) sensor(s) 21c, 22c. The robots may communicate with centralized entity (device) 23. Centralized entity 23 may be a dedicated server, or a robot, in which case it may also be equipped with at least one sensor (not shown). Centralized entity 23 includes a CPU 23a, a memory 23b and optionally a user interface 23c. Robot 21 is located in area 26 of a building. Centralized entity 23 is located in area 27. Robot 22 is located in area 28. The robots may be of the same type, or of different types. The robots may, according to their characteristics, have different features, including different types and number of environment sensors, and may, according to their features, store different types of environment mapping data.
Figure 3 is a flow chart of a method 300 of gathering data from multiple (possibly heterogeneous) robots according to an embodiment, for the purpose of, for example, providing individual robot environment mapping data from environment mapping data stored in an aggregated environment mapping, or for robot localization, and possibly for enabling the execution of complex tasks wherein multiple robots are associated to perform the complex tasks. The method is for example implemented in centralized entity 23. In a first step 301, individual environment mapping data from multiple robots is collected (e.g., from robots 21 and 22) and is stored (e.g., in memory 23b of the centralized entity, or in the cloud). In a step 302, an aggregated environment mapping is built (constructed) from the individual environment mapping data received from the multiple robots. The individual environment mapping data may, for example, include a 3D point cloud received from robot 21, and a 2D grid map received from robot 22. The aggregated environment mapping may be, for example, a 3D point cloud, or a 2D grid map, or may take a different form that is not, or that is partly not, suited for visual representation, such as a matrix, vector, graph, or database. The aggregated environment mapping may include other types of data retrieved from the robots, such as robot characteristics, robot capabilities, identification or description of the sensors used by each robot to observe the environment, and other types of data that will be discussed further on in the present document, such as robot localization, and statistical data. Using the above examples, if the aggregated environment mapping includes a 3D point cloud, an aggregated environment mapping may be constructed in step 302 that includes a full-sliced 3D point cloud for room 26 (e.g., received from robot 21) and a single 3D point cloud slice for room 28 (e.g., received from robot 22). In a step 303, that is executed for example on request of a robot, individual environment mapping data/localization data is extracted from the aggregated environment mapping, and is transmitted to the requesting robot in step 304. Using the above examples, robot 22 may request a 2D occupancy grid map of room 26. The centralized entity 23 may then extract, in step 303, a 2D occupancy grid map from the 3D point cloud it has for room 26 in its aggregated environment mapping, and transmit the 2D occupancy grid map to robot 22. The 2D occupancy grid map may for example be obtained from a 3D point cloud slice that is on the floor level, as robot 22 is a vacuum cleaner. The extraction is according to the request of the robot (here, the robot requests environment mapping data for room 26 only) and the characteristics of the robot (here, the environment mapping data for robot 22 should be in the form of a 2D occupancy grid map because that is the type of map used by that particular robot). Likewise, a robot may request to localize another robot. For example, robot 21 may request to localize robot 22. Then, in step 303, the centralized entity will extract, from the aggregated environment mapping, the location of robot 22 and transmit the location to robot 21. The location will be according to the location format used by the robot according to its characteristics, e.g., a location in a 2D occupancy grid map, or a location in a 3D scene model. Also, the location may be specific to the part of the area of which the robot has an environment mapping.
For example, a location may be relative to a room if the robot has environment mapping data for that room only, or to a building if the robot has environment mapping data for the whole building.
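The receive / aggregate / extract flow of method 300 can be illustrated with a short sketch. The following Python snippet is a minimal, illustrative example only: the class names (CentralizedEntity, RobotInfo), the tuple-based point representation and the format strings are assumptions introduced for the example, not part of the described method, which may use any suitable data structures and conversion routines.

```python
# Minimal sketch (not the disclosed implementation) of the collect / aggregate /
# extract flow of method 300. Names and data layout are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class RobotInfo:
    robot_id: str
    map_format: str            # assumed values: "2d_occupancy_grid" or "3d_point_cloud"
    lidar_height: float = 0.0  # height of the 2D Lidar plane above the floor, in metres

@dataclass
class CentralizedEntity:
    # area_id -> list of (x, y, z, attributes) tuples forming the aggregated mapping
    aggregated: dict = field(default_factory=dict)
    robots: dict = field(default_factory=dict)

    def collect(self, robot: RobotInfo, area_id: str, points):
        """Steps 301/302: store the robot's data, enriched with its identity."""
        self.robots[robot.robot_id] = robot
        enriched = [(x, y, z, {"robot": robot.robot_id, "format": robot.map_format})
                    for (x, y, z) in points]
        self.aggregated.setdefault(area_id, []).extend(enriched)

    def extract_for(self, robot_id: str, area_id: str, tolerance=0.05):
        """Steps 303/304: return the area's data in the requesting robot's own format."""
        robot = self.robots[robot_id]
        points = self.aggregated.get(area_id, [])
        if robot.map_format == "2d_occupancy_grid":
            # keep only points near the robot's Lidar plane and drop the z coordinate
            return [(x, y) for (x, y, z, _) in points
                    if abs(z - robot.lidar_height) <= tolerance]
        return [(x, y, z) for (x, y, z, _) in points]

# Usage with hypothetical data: robot 21 contributes 3D points of room 26,
# robot 22 then requests a planar view of the same room.
entity = CentralizedEntity()
entity.collect(RobotInfo("robot21", "3d_point_cloud"), "room26",
               [(0.0, 0.0, 0.1), (1.0, 0.5, 0.1), (1.0, 0.5, 1.2)])
entity.robots["robot22"] = RobotInfo("robot22", "2d_occupancy_grid", lidar_height=0.1)
print(entity.extract_for("robot22", "room26"))   # -> [(0.0, 0.0), (1.0, 0.5)]
```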
According to embodiments, the centralized entity 23 or any other distant device may use the aggregated environment mapping for planning of relatively complex tasks that imply the use of multiple, possibly different, robots. According to embodiments, a system consists of a centralized entity, e.g., a server device, the server device possibly having a user interface (such as a tablet or smartphone), and multiple robots as in figure 2. The server device may communicate with the UI device or robots in a wireless or wired way and deal with received data and requests. Each robot may send its current environment mapping data, its robot characteristics, sensor types and descriptions, local trajectories, etc., to the server device. The server device may send commands (instructions) and necessary navigation data for executing a task to each robot. The server device can also provide robot data to assist user interactions on a UI. User commands (instructions) are transmitted from the UI device to the server device for the task association of multiple robots or interaction with each robot, or the server device may itself plan the execution of complex tasks requiring the association of multiple robots. Moreover, the aggregated environment mapping and history of robot locations may be stored in the server device. The commands/instructions and necessary data (e.g., environment mapping data, statistics of locations) may be transmitted from the server device to the corresponding robot or device. Localization may be implemented in the server device or in an individual robot.
Figure 4 is a flowchart for a method of collecting data from multiple, possibly heterogeneous, robots, according to an embodiment, for example, for the purpose of robot localization, that may be used to perform complex tasks involving multiple robots, or for the purpose of exploring a complex environment using multiple robots. Data is received from multiple robots, in a step 401, by a centralized entity (e.g., implemented in a server device or in one of the robots), e.g., sensory data (e.g., 2D map, 3D data) and robot characteristics (e.g., sensor type (e.g., camera, laser), sensor parameters, robot type (e.g., vacuum cleaner, mopping robot, kitchen robot, household robot), robot capabilities (clean, mop, explore, fly, hover, roll), robot configuration, robot dimensions, and possibly individual robot position if known in an area where the robot evolves, trajectories explored by the robot in the area, possibly associated with timestamps). In step 402, an aggregated environment mapping is built. In a step 402a, an aggregated environment mapping is built from the data received from the multiple robots. In the aggregated environment mapping, individual robot environment mapping data received from the robots may be expanded (completed) in step 402b with individual environment mapping data received from other robots related to regions (areas) not explored or only partially explored by the individual robots. In a step 403, data may be retrieved, such as statistics of previous robot locations coupled to robot characteristics for the purpose of robot localization. While individual localization in the area where the robot evolves may be realized in the centralized entity or on a robot, mutual localization between different robots may be obtained by their locations in the aggregated environment mapping, step 404. The aggregated environment mapping may be updated by freshly captured data from the individual robots as they explore an environment (area), step 405. This updating may be a continuous process.
Figure 5 is an example of a complex environment with multiple areas, and multiple heterogeneous robots. Complex environment 50 includes area 1 (501), area 2 (502), area 3 (503), area 4 (504) and area 5 (505). Complex environment 50 includes robot A (5010), located in area 3, robot B (5011) located in area 2, robot C (5012) located in area 5, and robot D (5013) located in area 1. Robot A may be a vacuum cleaner robot. Robot B may be a household (or service) robot. Robot C may be a ceiling-mounted kitchen robot that can do several kinds of tasks such as loading and emptying a dishwasher, cooking, picking cups and plates. Robot D may be a mop robot. In this example, each of the robots A, B, C and D is different and has different functions. In other examples, not shown, there may be more or fewer robots, more or fewer areas, and some of the robots may be of the same manufacture and function (i.e., not heterogeneous with at least one other robot). Robots A, B, C and D have different dimensions and different functions. While robots A, B and D may freely move to any area, their dimensions permitting, robot C has a fixed location in area 5.
- Aggregated environment mapping of the environment
As mentioned previously, an aggregated environment mapping of robot environment mapping data may be built in a centralized entity. For instance, the aggregated environment mapping can be an enriched point cloud, where a point or a set of points not only indicates its 3D position but also has some attributes related to the robot from which the data is obtained, and its type of sensors. There are several ways to build the aggregated environment mapping; for example, robots can be instructed (configured) to map a complete complex environment in a collaborative way, or the aggregated environment mapping is built by incrementally integrating environment mapping data from robots while they evolve in (the different areas of) the complex environment, or both. The different embodiments are discussed in the next sections of the present document.
- Collaborative environment mapping of the environment
According to an embodiment, multiple robots are activated initially in the same area of a complex environment, or in different areas. According to their initial environment mapping data that the robots have already obtained, the centralized entity may instruct the robots to explore the environment autonomously. An algorithm for multi-robot collaborative dense scene reconstruction may be used. Given a partially scanned scene, the scanning paths (trajectories) for multiple robots are planned by the centralized entity, to efficiently coordinate with each other so as to maximize the scanning coverage and reconstruction quality while minimizing the overall scanning effort. According to embodiments, a collaboration strategy for multiple robots can be employed for environment mapping of a complex environment. According to embodiments, fusion (aggregation) of (different) sensory data from (different types of) robots is described here.
According to the following scenario, multiple robots start in a same area of the complex environment. Of course, multiple robots may start in different areas. Passages (passageways, halls, door openings) to other areas may be detected autonomously by robot(s) or may be indicated manually by a user using a User Interface, for example. The passages are employed to plan (schedule) individual paths for each robot to explore a yet unexplored area, or area(s) of which it is desired to update exploration data. The complex environment is mapped by the robots in a collaborative way according to the plan. For instance, in Figure 6, three heterogeneous robots A, B and D (5010, 5011 and 5013 respectively) start from area 3, and are each instructed by the centralized entity to explore a different area of the complex environment. Thus, area 3 is the starting area for all robots. They may first build an initial aggregated environment mapping of area 3 using their individual environment mapping. Then, depending on the analysis (discovery) of passages in area 3, the path planning for each robot to explore a different area can be instructed by the centralized entity; as mentioned, any exits, passages, passageways, or doors are detected in their own environment mapping data of area 3. The centralized entity constructs path planning for each robot to enter an area to explore and to map. If a criterion of nearest path is taken, for example, robot A (5010) is planned to enter area 1 (501) and map it, robot B (5011) is planned to enter area 2 (502) and map it, and robot D (5013) is planned to enter area 4 (504) and map it. The robots may be instructed by the centralized entity to map any remaining area(s) according to the order in which they finish their environment mapping tasks. For example, robot D may enter area 5 when it has finished the environment mapping of area 4 and when it is the first robot to finish its environment mapping task.
- Incremental environment mapping of the environment
According to another embodiment, an aggregated environment mapping of the environment may be built incrementally using the sensory data of different robots collected by the centralized entity at run time of the robots. In this case, the centralized entity does not need to instruct the robots to explore an area of the complex environment nor to establish a path planning. The aggregated environment mapping is built ‘on the fly’, based on environment mapping data obtained from the robots as they execute their tasks. While each robot evolves in the environment, new environment mapping data may be merged (fused, aggregated, integrated) into a(n) (existing) aggregated environment mapping. According to embodiments, to ensure the accuracy of data fusion, the centralized entity may instruct robots to visit a ‘reference point’ or ‘meeting point’ that the centralized entity has fixed in the aggregated environment mapping. This reference point may be a location in the environment that is the same for all robots, so that the centralized entity may acquire data from each robot in the same area, where there exist enough and distinguishable features for the calibration of environment mapping data from each robot against the aggregated environment mapping. For example, 510 is a reference point. Alternatively, the present location of one of the robots (e.g., the location of robot A) may be designated as a reference point and the other robots (robot B, robot D) are instructed by the centralized entity to visit this location as a calibration step, to calibrate their position in their own environment mapping data before merging their environment mapping data into the aggregated environment mapping. A reference point may also be a location different for each robot and may serve as a starting point for the robot’s own position calibration in the aggregated environment mapping. For example, when environment mapping data captured by A in area 1 has been fused with the aggregated environment mapping stored in the centralized entity, and robot A is instructed to go to a location in area 2 that it has not mapped itself but that was mapped by robot B, the centralized entity may instruct robot A to go to the reference point in area 1 before going to the location in area 2, so that robot A’s position is calibrated in the environment mapping data that was extracted for it from the aggregated environment mapping. The reference point in area 1 for robot A can be determined as the most frequent location of robot A in area 1 according to its statistics of past locations.
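As a rough illustration of the incremental merging after a reference-point calibration described above, the following sketch applies the offset observed at a reference point to a robot's newly captured points before folding them into the aggregated mapping. It is a simplification under stated assumptions (translation only, hypothetical coordinates); a real registration would also estimate rotation and may use full 3D point cloud registration as discussed below.

```python
# Minimal sketch (assumptions, not the disclosed method) of merging freshly captured
# points into the aggregated mapping after calibration at a reference point.
import numpy as np

def calibrate_offset(reference_point_global, reference_point_robot):
    """Translation mapping the robot's local estimate of the reference point onto
    its known position in the aggregated mapping (rotation ignored for brevity)."""
    return np.asarray(reference_point_global) - np.asarray(reference_point_robot)

def merge_incremental(aggregated_points, new_points_robot_frame, offset):
    """Apply the calibration offset and append the corrected points."""
    corrected = np.asarray(new_points_robot_frame) + offset
    return np.vstack([aggregated_points, corrected])

# Usage with hypothetical coordinates: the reference point 510 is at (2.0, 3.0, 0.0)
# in the aggregated mapping, but robot A believes it stands at (1.8, 3.1, 0.0).
offset = calibrate_offset([2.0, 3.0, 0.0], [1.8, 3.1, 0.0])
aggregated = np.array([[0.0, 0.0, 0.0]])
new_scan = np.array([[1.0, 1.0, 0.0], [1.5, 1.0, 0.0]])
aggregated = merge_incremental(aggregated, new_scan, offset)
```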
Each robot may define a ‘meeting point’ in one or more area(s) for which it has provided data used in the aggregated environment mapping. A meeting point may be an arbitrary location (the farthest left location in the mapped area for instance), or having some particular features (a specific area corner, specific area door).
Meeting points may be used by the server to evaluate and check regularly the aggregated environment mapping performance: any robot having access to an area should be able to reach a related “meeting point” accurately, even if that meeting point was defined by another robot using different sensors.
The server may also send two or more robots (having access to the meeting point area) close to that meeting point (making robots facing each other for instance) with an objective to check the aggregated environment mapping accuracy, based on robot sensor(s) information once they reached what they consider being the meeting point, using the path generated from the aggregated environment mapping data.
A similar interactive check (via a user and UI display (on an AR display) for instance) may be done, this time letting the user visually evaluate if the robot(s) have reached the indicated meeting point with an acceptable accuracy.
If the accuracy is found to be low, a global re-mapping of the whole area using all available robots may be triggered.
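A minimal sketch of such an accuracy check is given below; the threshold value, the 2D positions and the decision to trigger a global re-mapping on the worst error are assumptions made for illustration.

```python
# Minimal sketch (assumed threshold and names) of the meeting-point accuracy check:
# each robot reports where it ended up when it believed it had reached the meeting
# point, and a large error triggers a global re-mapping.
import math

def check_meeting_point(meeting_point, reached_positions, threshold=0.2):
    """Return True if the aggregated mapping is judged accurate enough."""
    errors = [math.dist(meeting_point, p) for p in reached_positions]
    worst = max(errors)
    if worst > threshold:
        print(f"worst error {worst:.2f} m exceeds {threshold} m: trigger global re-mapping")
        return False
    return True

# Usage with hypothetical 2D positions reported by robots A and B.
check_meeting_point((2.0, 3.0), [(2.05, 3.02), (1.70, 3.40)])
```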
- Fusion of different sensory data into the aggregated environment mapping
According to an embodiment, the aggregated environment mapping is in the form of an enriched point cloud, where each point not only indicates its 3D position but also has some attributes, such as its color, the type of robot that did the observation, robot/sensor parameters (features, characteristics), the corresponding image features of the corresponding keyframe if the observation was done with a camera, or the corresponding pixels in a 2D grid map. The environment mapping data of different robots may be registered (recorded, memorized, stored) together in the aggregated environment mapping using methods for 3D point cloud registration. Then, while the first step of building an aggregated environment mapping is collection of the individual data from the robots, a second step for building the aggregated environment mapping is conversion of the collected individual data into a 3D point cloud. For example, a 2D occupancy grid map captured by a 2D Lidar can be converted into a 3D slice point cloud parallel to the ground using the height of the Lidar, the map resolution, and the location from where the 2D occupancy grid map was made. Depth images captured by a depth camera may be converted into a point cloud using the intrinsics of the depth camera and the camera poses. Keyframe images captured by a color camera may be employed to extract landmarks and perform dense 3D reconstruction, using structure-from-motion and multi-view stereo methods, respectively. Through the point cloud registration in 3D, the different sensory data are fused together, and any associated attributes may complement the corresponding 3D points. In the process of collaborative or incremental environment mapping, there may exist noisy data due to robots appearing in the scanning range of other robots. That means that the 3D points appearing along the recorded trajectories of all robots may be regarded as noise and may therefore be removed from the aggregated environment mapping. This enriched 3D point cloud may be encoded using the standardized Point Cloud Compression (PCC) for data transmission between the centralized entity and the robots, for those robots that may directly use the 3D point cloud format for their individual environment mapping; for robots using other kinds of environment mapping format, data may be extracted from the enriched 3D point cloud (e.g., stored in the centralized entity), converted into the individual environment mapping format of the robot using another kind of environment mapping format, and compressed if needed (e.g., in a Zip format).
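As an illustration of the first conversion mentioned above, the following sketch turns a 2D occupancy grid map into a 3D slice point cloud at the height of the Lidar plane. The grid value convention (1 = occupied, 0 = free, -1 = unknown) and the parameter names are assumptions made for the example.

```python
# Minimal sketch, under the stated assumptions, of converting a robot's 2D occupancy
# grid map into a 3D slice point cloud for registration in the aggregated mapping.
import numpy as np

def grid_to_slice_point_cloud(grid, resolution, origin_xy, lidar_height):
    """Place one 3D point per occupied cell, at the height of the 2D Lidar plane.

    grid         : 2D numpy array of occupancy values (1 = occupied)
    resolution   : metres per cell
    origin_xy    : (x, y) world coordinates of cell (0, 0)
    lidar_height : height of the Lidar scanning plane above the ground, in metres
    """
    rows, cols = np.nonzero(grid == 1)
    xs = origin_xy[0] + cols * resolution
    ys = origin_xy[1] + rows * resolution
    zs = np.full_like(xs, lidar_height, dtype=float)
    return np.column_stack([xs, ys, zs])

# Usage with a toy 3x3 grid: two occupied cells become two points at z = 0.15 m.
grid = np.array([[0, 1, 0],
                 [0, 0, 0],
                 [1, 0, 0]])
print(grid_to_slice_point_cloud(grid, resolution=0.05, origin_xy=(0.0, 0.0), lidar_height=0.15))
```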
- Expanding (completing) robot environment mapping data
Each of a plurality of robots that evolve in a same complex environment may not have traversed/explored all the areas of the environment. With the help of other robots’ environment mapping data, a robot may be instructed to navigate in a region that has not yet been mapped by itself. Using the robot characteristics, environment mapping data may be extracted from the aggregated environment mapping that is adapted to the different forms of environment mapping data for each robot, and transmitted to them. Given a list of robot types, the attributes of the aggregated environment mapping may therefore be checked, to determine whether a specific robot or specific robot type has provided environment mapping data for an area. If a specific robot or robot type did not provide environment mapping data for an area, the environment mapping data in this area may be extracted from the aggregated environment mapping and may be converted into a format according to the characteristics of the robot or robot type before being transmitted to the robot.
For instance, in Figure 6, robot A is equipped with a 2D Lidar sensor and robot A exports a 2D occupancy grid map of area 3 and area 1. Robot B is equipped with a depth sensor and exports a 3D point cloud of area 2. If robot A is placed in area 2 for performing a task, it is necessary to expand its 2D occupancy grid map (that only contains areas 3 and 1) with a 2D occupancy grid map of area 2 for localization and navigation. For this purpose, the 3D point cloud of area 2 in the aggregated environment mapping, established based on data from robot B, may be converted into a 2D occupancy grid map compatible with the characteristics of robot A, such as Lidar height and map parameters.
See Figure 7: taking the 3D point cloud shown in Figure 7a as an example, its ground plane is first detected. Then the slice plane is determined as a plane parallel to the ground at the same distance as the height of the Lidar sensor(s) of robot A (5010). Advantageously, a threshold of distance to the slice plane may also be predefined to project the data of barrier objects close to the ground into the slice point cloud, so that any collision areas may be included in the environment mapping data. As shown, a slice point cloud 7b, cropped from the 3D point cloud 7a, may be converted into a 2D occupancy grid map 7c according to the map parameters of robot A. The slice point cloud 7b is first transformed into the local coordinate system defined on the slice plane, i.e., the axes of X and Y are in the slice plane, and the axis of Z is along the normal direction of the slice plane. Then, using the map resolution particular to robot A, the slice point cloud is projected into the discrete grid points of map 7c, where any occupied grid points are indicated in black. Assuming that the robot is located in the center of the slice point cloud, the free grids are indicated as white between the center and the occupied grid points. The other grid points are unknown areas, depicted in grey. Thus, via the above transformations of the aggregated environment mapping, the 2D occupancy grid map generated from the 3D point cloud of area 2 may complete the environment mapping of robot A for its localization and navigation in area 2, of which it did not have, initially, any environment mapping data.
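The reverse conversion described above may be sketched as follows. The snippet crops a slice of the point cloud around the Lidar height and rasterizes the remaining points into occupied cells of a 2D grid; the free-space marking by ray casting from the robot's assumed position at the center, described in the text, is omitted for brevity, and the cell-value convention (1 = occupied, -1 = unknown) is an assumption.

```python
# Minimal sketch of cropping a slice from a 3D point cloud at the robot's Lidar
# height and rasterizing it into a 2D occupancy grid (occupied = 1, unknown = -1).
import numpy as np

def point_cloud_to_grid(points, lidar_height, slice_tolerance, resolution):
    """points: (N, 3) array in a frame whose z axis is the slice-plane normal."""
    pts = np.asarray(points)
    in_slice = pts[np.abs(pts[:, 2] - lidar_height) <= slice_tolerance]
    if len(in_slice) == 0:
        return np.full((1, 1), -1, dtype=int), (0.0, 0.0)
    min_xy = in_slice[:, :2].min(axis=0)
    cells = np.floor((in_slice[:, :2] - min_xy) / resolution).astype(int)
    shape = cells.max(axis=0) + 1
    grid = np.full((shape[1], shape[0]), -1, dtype=int)   # rows = y, cols = x
    grid[cells[:, 1], cells[:, 0]] = 1                     # occupied cells
    return grid, tuple(min_xy)                             # grid and its world origin

# Usage with three hypothetical points; only those near z = 0.15 m survive the crop.
pts = np.array([[0.0, 0.0, 0.15], [0.4, 0.2, 0.16], [0.4, 0.2, 1.50]])
grid, origin = point_cloud_to_grid(pts, lidar_height=0.15, slice_tolerance=0.05, resolution=0.1)
```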
According to another embodiment, the aggregated environment mapping (e.g., when the aggregated environment mapping is in the form of an enriched 3D point cloud) may also complete environment mapping data for a robot in the form of image and depth. As shown in Figure 8, a 3D point cloud 8a may be rendered into RGB images 8b and a depth map 8c by virtual cameras. The intrinsics of the virtual cameras may be consistent with those of an RGB camera and a depth camera of a robot (e.g., those of robot B (5011)). The camera poses may be sampled in space considering the robot characteristics, such as the height and orientation of the robot camera. The statistics of robot locations may also be considered, which makes the sampled positions of the virtual camera appear at the more frequent locations of other robots in the area.
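As an illustration, a depth map can be rendered from the point cloud with a virtual pinhole camera whose intrinsics (fx, fy, cx, cy) match those of the robot's depth camera. The sketch below is a simple point-splatting z-buffer, an assumption made for clarity rather than the renderer of the embodiment.

```python
# Hedged sketch: render a depth image from a point cloud with a virtual camera.
import numpy as np

def render_depth(points, world_to_cam, fx, fy, cx, cy, width, height):
    """Project 3D points into a depth image, keeping the nearest point per pixel."""
    homog = np.hstack([points, np.ones((points.shape[0], 1))])
    cam = (world_to_cam @ homog.T).T[:, :3]          # world -> camera coordinates
    cam = cam[cam[:, 2] > 0]                         # keep points in front of the camera
    u = np.round(fx * cam[:, 0] / cam[:, 2] + cx).astype(int)
    v = np.round(fy * cam[:, 1] / cam[:, 2] + cy).astype(int)
    depth = np.full((height, width), np.inf)
    valid = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    for ui, vi, zi in zip(u[valid], v[valid], cam[valid, 2]):
        depth[vi, ui] = min(depth[vi, ui], zi)       # z-buffer: nearest point wins
    return depth
```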
Therefore, for each robot, its environment mapping data can be completed in unexplored areas by the sensory data of other robots from the aggregated environment mapping, as described in the above embodiments. According to embodiments, the aggregated environment mapping of a complex environment may be divided into subsets that correspond to different areas. According to an embodiment, a graph may represent the organization in subsets of the data in the aggregated environment mapping, where each graph node indicates data of a different area and edges connect adjacent areas and represent passages from one area to another. See Figure 9, showing a graph representation of the complex environment of figures 5 and 6, which may be used by the robots to improve the efficiency of robot localization, navigation and cooperation. In the graph of figure 9, node 903 corresponding to area 3 has edges to all the nodes that represent adjacent areas: area 1 (node 901), area 2 (node 902), area 4 (node 904) and area 5 (node 905).
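A graph of this kind can be held as a plain adjacency structure, and a path between the area of a starting location and the area of a destination can be searched on it, as used further below for navigation. The sketch assumes only the adjacencies listed for node 903; the remaining edges and the breadth-first search are illustrative.

```python
# Minimal sketch of the area graph of Figure 9 and a shortest-path search on it.
from collections import deque

area_graph = {
    "area1": {"area3"},
    "area2": {"area3"},
    "area3": {"area1", "area2", "area4", "area5"},
    "area4": {"area3"},
    "area5": {"area3"},
}

def area_path(graph, start, goal):
    """Breadth-first search returning the list of areas to traverse."""
    queue, parents = deque([start]), {start: None}
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:     # walk back through the parent links
                path.append(node)
                node = parents[node]
            return path[::-1]
        for neighbour in graph[node]:
            if neighbour not in parents:
                parents[neighbour] = node
                queue.append(neighbour)
    return None

print(area_path(area_graph, "area5", "area1"))  # ['area5', 'area3', 'area1']
```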
- Accelerating robot localization
According to embodiments, once the aggregated environment mapping is constructed, robots may retrieve data from the aggregated environment mapping, e.g., for their own localization or the localization of peer robots, or for retrieval of expanded (completed) environment mapping, according to robot characteristics (e.g., robot type, robot features, sensor parameters). Localization of robots may be accelerated or otherwise improved (e.g., improved precision), based, for example, on the history of past locations and on the history of executed tasks.
- Analysis of robot locations
According to embodiments, to accelerate/improve robot localization, a statistical analysis of a robot’s past locations may provide an initial hypothesis about the robot’s current (present) location. This analysis may be implemented in a coarse-to-fine manner. Once a robot is activated in an arbitrary area of a complex environment which has previously been mapped in the aggregated environment mapping, a ranked list of possible areas may be provided to the robot, e.g., based on robot characteristics, past locations and past executed tasks, and the robot may retrieve, for each area in the list, the corresponding environment mapping data of the area, as well as the most likely position of the robot in that area. Alternatively, the robot may be instructed, by the centralized entity, to perform some reference measurements of its environment (for example, a distance measurement in each of the four compass directions may give a good idea of the dimensions of an area and thereby identify the area; an observation with a camera sensor may identify an object which may identify an area) and to transmit the resulting data to the centralized entity, which matches the data to the aggregated environment mapping. Based on the matching process, the centralized entity may exactly determine the location of the robot in terms of area and location in the area, and may transmit the location and possibly the environment mapping for the area to the robot.
The aggregated environment mapping may also include the location of each charging station in each area, and the types of robots for which the charging station is suited, and the centralized entity may transmit to a robot the location of the charging station in an area, or the location of the nearest charging station suited for the robot. Optionally, and based on the information stored in the centralized entity that may be part of the aggregated environment mapping, the centralized entity may inform a robot that a particular charging station is occupied by another robot and may suggest that the robot direct itself to an alternative charging station that is not occupied.
According to an embodiment, the trajectories of robots are recorded in the centralized entity (possibly in the aggregated environment mapping or associated with it) when they perform a task. On a coarse level, the number of times that the robot has visited each area is counted. For instance, in Figure 10, the coarse statistics of a robot’s past locations in a complex environment 50 are illustrated by columns 1001, 1002, 1003, 1004 and 1005, each column having a height that is related to the number of past locations of the robot in each area. On a finer level, e.g., on an area level, the ground plane of each area may be divided into a grid of configurable mesh size. According to embodiments, the trajectory of a robot may be projected into the complex environment and/or into an area for finer resolution. Figure 11 shows an example of the statistics of locations in one area (area 3 (503)); it can be concluded that the robot has a very high probability to be in location 1105, which may correspond to its charging station. According to embodiments, timestamps are associated with robot locations for still improved localization; for example, based on the past robot locations and the associated timestamps, it may be concluded that a robot has a very high probability to be at a particular location at a given time.
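One possible way to accumulate these coarse (per area) and fine (per grid cell) statistics from recorded trajectories is sketched below; the trajectory record layout and the mesh size are assumptions made for the example, and positions are assumed to lie inside the area.

```python
# Illustrative sketch of coarse-to-fine location statistics from trajectories.
import numpy as np
from collections import Counter

def coarse_statistics(trajectory):
    """trajectory: list of (area_id, x, y, timestamp) samples; visit counts per area."""
    return Counter(area for area, _, _, _ in trajectory)

def fine_statistics(trajectory, area_id, area_size, mesh=0.5):
    """Grid histogram of past positions inside one area (mesh in metres)."""
    shape = (int(np.ceil(area_size[1] / mesh)), int(np.ceil(area_size[0] / mesh)))
    hist = np.zeros(shape)
    for area, x, y, _ in trajectory:
        if area == area_id:
            hist[int(y // mesh), int(x // mesh)] += 1
    return hist / max(hist.sum(), 1)   # normalized: probability per grid cell

trajectory = [("area3", 0.2, 0.3, 1000), ("area3", 0.25, 0.35, 1200),
              ("area1", 4.0, 1.0, 1400)]
print(coarse_statistics(trajectory))                           # Counter({'area3': 2, 'area1': 1})
print(fine_statistics(trajectory, "area3", (5.0, 4.0)).max())  # 1.0 (both samples in one cell)
```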
According to an embodiment, the statistical data of past robot locations may be visualized through a user interface device, e.g., in augmented reality. For instance, to visualize the history of a robot’s past locations using a mobile device or a head-mounted display device, the UI device is first localized in the aggregated environment mapping. Then the statistical data may be displayed in an augmented reality application using various types of presentation, such as 3D columns, heat maps, etc., overlaid on the real environment. This kind of visualization can assist the user in selecting the meeting point in the process of the incremental construction of the aggregated environment mapping. For example, the user can interactively designate as a meeting point a location where the visit frequency of the robot is higher and where there are more distinguishing features.
- Combination with robot characteristics
When extracting the reference data for localization from the aggregated environment mapping, the robot characteristics (e.g., robot type, robot identifier (ID), sensor types and parameters) are also considered, besides the hypothesis of its initial location. Such characteristics may be associated with portions of the data in the aggregated environment mapping according to the robot which provided the environmental observations on which each portion is based. In an embodiment, a robot sends a localization command and its characteristics to the server. For the target area, the aggregated environment mapping is first checked to determine whether it contains environment mapping data from the same robot type. If available, the robot’s own previous environment mapping data is retrieved for localization. If not, it is checked whether compatible environment mapping data from a similar robot (e.g., a robot of the same type or of a similar type) is available. The similar robot could have the same sensor type, or the same sensor height, as the robot to be localized. If there is no appropriate raw environment mapping data available for localization, the expanded environment mapping data compatible with the robot characteristics is employed as aforementioned. If all else fails, it means that no robot has ever mapped the target area. Thus, the data currently captured by this robot is first merged into the aggregated environment mapping. The centralized entity may control this robot to explore the environment and visit a meeting point for registration validation, and then update the aggregated environment mapping. Thus, the robot can benefit from the global localization in the aggregated environment mapping.
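This fallback order might be expressed as follows; the record fields of the aggregated mapping, the 5 cm sensor-height tolerance and the returned labels are illustrative assumptions, not part of the described system.

```python
# Hedged sketch of the reference-data selection order described above.
def select_reference_data(aggregated, area_id, robot):
    """Return (data, source) used for localization of `robot` in `area_id`."""
    records = [r for r in aggregated if r["area"] == area_id]
    # 1. Data previously provided by this very robot.
    own = [r for r in records if r["robot_id"] == robot["id"]]
    if own:
        return own, "own"
    # 2. Data from a similar robot: same sensor type and comparable sensor height.
    similar = [r for r in records
               if r["sensor_type"] == robot["sensor_type"]
               and abs(r["sensor_height"] - robot["sensor_height"]) < 0.05]
    if similar:
        return similar, "similar"
    # 3. Otherwise fall back to expanded data converted from the aggregated
    #    mapping, or report that the area has never been mapped at all.
    if records:
        return records, "converted"
    return None, "unmapped"
```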
- Robot localization in the aggregated environment mapping
In the process of robot localization, the reference environment mapping data is extracted from the aggregated environment mapping with respect to the aforementioned hypotheses of initial locations and the related characteristics of the robot. In an embodiment, the terminal location of a robot in its last task is stored separately on the server and can be used as the first hypothesis of its starting location in the next task. If the robot is displaced before the beginning of a new task, the last location would be invalid for the robot localization. Using the last location of a robot, the corresponding reference data is extracted to register with the captured environment mapping data when the robot is activated. If this registration fails, it means that the robot has been moved and (re-)localization is required before executing the task. Then, the above statistical analysis for this robot can provide a list of possible areas and the most probable location in each area. According to the robot characteristics, the corresponding reference data is retrieved as aforementioned. The robot captures partial data of its surroundings after being activated for the new task. Thus, the robot localization is realized by the data registration between the partial data and the reference environment mapping data, with the initial location retrieved from the fine statistics.
In the case where the environment mapping data is in the form of a point cloud, the registration of the partial environment mapping data with its reference data comprises two stages: initial alignment and refined registration. The structural representations of both environment mapping data are extracted to initially align them. Then Iterative Closest Point (ICP) related methods may be employed for refinement of the registration (ICP is an algorithm that minimizes the difference between several point clouds and may be used to reconstruct 2D or 3D surfaces from different scans, to localize robots and to achieve optimal path planning). The registration error is compared with a predefined threshold to determine whether the robot is correctly localized in the reference environment mapping data. If not, the corresponding data of a candidate area and the hypothesis of the initial location are retrieved to repeat the registration procedure until the robot is localized. In the case where the environment mapping data is in the form of an image or depth map, 2D image matching methods can be employed to realize the robot localization.
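For reference, a compact point-to-point ICP refinement might look like the sketch below. It is a generic textbook formulation (nearest neighbours via a KD-tree, Kabsch/SVD rigid transform) given only to illustrate the refinement stage; it is not presented as the specific registration method of the embodiment.

```python
# Generic point-to-point ICP refinement sketch (initial alignment assumed done).
import numpy as np
from scipy.spatial import cKDTree

def icp_refine(source, reference, iterations=30, tolerance=1e-6):
    """Refine the registration of `source` (Nx3) onto `reference` (Mx3)."""
    tree = cKDTree(reference)
    src = source.copy()
    prev_error = np.inf
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        dists, idx = tree.query(src)               # closest reference point per source point
        matched = reference[idx]
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_m)      # cross-covariance of the pairing
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                   # guard against reflections
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = mu_m - R @ mu_s
        src = (R @ src.T).T + t                    # apply the incremental transform
        R_total, t_total = R @ R_total, R @ t_total + t
        error = dists.mean()
        if abs(prev_error - error) < tolerance:
            break
        prev_error = error
    return R_total, t_total, error                 # compare error with a threshold
```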
After a robot is localized in the aggregated environment mapping, it can navigate in the environment to perform the new task even if it has not mapped the complete environment. Once the destination of the task is designated in the aggregated environment mapping, the necessary environment mapping data can be obtained by searching, on the graph representation, the path between the area of the starting location and the area of the destination. Only the environment mapping data of the areas along the path is extracted, in a form compatible with the robot characteristics, which reduces the data load on the robot for a complex environment.
- Updating robot environment mapping data
According to embodiments, multiple robots may map a complex environment in a collaborative and/or incremental way and, for example, a robot may advantageously use the aggregated environment mapping to obtain environment mapping data obtained from other robots for its own localization, environment mapping, or for localization of other robots, and notably localization and environment mapping of areas that were not mapped by itself, or of which the environment mapping is incomplete. However, as the robot performs its tasks, it will navigate in areas that it has not mapped before. The new environment mapping data captured by the robot may advantageously be employed to update the current aggregated environment mapping and its updates may help to complete, correct or adjust the aggregated environment mapping, notably when changes have taken place in the complex environment.
According to an embodiment, the updating of the aggregated environment mapping may be accomplished in an incremental way as described in the previous sections. According to an embodiment, robot characteristics (features) may also be considered in the updating. According to an embodiment, if a robot’s environment mapping data, received as an update to the aggregated environment mapping by the centralized entity, is from the same robot that contributed to the environment mapping data of that area in the aggregated environment mapping, the robot’s environment mapping data update may be compared with the environment mapping data of the area in the aggregated environment mapping regarding their coverage zones. When the new environment mapping data covers a smaller zone of the area, the aggregated environment mapping may, for example, not be updated based on the robot’s environment mapping data update. If however it covers a larger zone, the robot’s environment mapping data update may be merged into the aggregated environment mapping. To ensure the accuracy of the data fusion, the centralized entity may instruct the robot from which it has received the environment mapping data update to move to a pre-defined meeting (reference) point, in order to be able to calibrate the update received from the robot to the environment mapping data in the aggregated environment mapping. The more complete environment mapping data update of the robot may then be used to update the aggregated environment mapping, and the less complete environment mapping data previously received from the same robot and that was used to construct the aggregated environment mapping may be replaced. If the environment mapping data in the aggregated environment mapping for the area for which the robot environment mapping data update is received was obtained from another robot, this means that the robot sending the update did not map the area, or at least did not contribute to the aggregated environment mapping for that area.
As time goes by, changes may occur in the environment (e.g., addition, removal or movement of furniture). In this case, according to an embodiment, it may be advantageous to update the aggregated environment mapping with more recent environment mapping data for that area. The aggregated environment mapping may therefore include attributes that are related to an area, such as (a) timestamp(s) (date/time). According to an embodiment, when a robot is activated in an area, the timestamp of the environment mapping data for that area in the aggregated environment mapping may be compared with the robot’s activation time (or the current time), and if it follows from the comparison that the area has not been visited for more than a defined period, the environment mapping data for that area is obtained from the robot and is then used to update the aggregated environment mapping.
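The coverage and staleness checks of the two preceding paragraphs could be combined into a small decision helper such as the one below; the record fields, the one-month staleness period and the handling of an update from a new contributor are assumptions made for illustration, not requirements of the embodiments.

```python
# Illustrative update policy: refresh an area record when the same robot reports
# larger coverage, or when the stored data is older than a configured period.
STALE_AFTER_S = 30 * 24 * 3600   # assumed staleness period, e.g. one month

def should_update(stored, update, now):
    """stored/update: dicts with 'robot_id', 'coverage_m2', 'timestamp' (seconds)."""
    if stored is None:
        return True                                   # area never mapped before
    if now - stored["timestamp"] > STALE_AFTER_S:
        return True                                   # stored data considered stale
    if update["robot_id"] == stored["robot_id"]:
        return update["coverage_m2"] > stored["coverage_m2"]   # keep the larger coverage
    # Contribution from a robot that has not mapped this area before: merged here
    # as additional data (an assumption; the exact policy is left open above).
    return True

stored = {"robot_id": "A", "coverage_m2": 12.0, "timestamp": 1_700_000_000}
update = {"robot_id": "A", "coverage_m2": 18.5, "timestamp": 1_700_050_000}
print(should_update(stored, update, now=1_700_050_000))  # True (larger coverage zone)
```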
- Associating multiple robots
Individual robots may be localized in their environment mapping data and in the aggregated environment mapping. Mutual localization between multiple robots may be obtained from their locations in the aggregated environment mapping.
According to embodiments, a user interface may enable a user to associate multiple robots to perform a complex task in a collaborative way. The association can be indicated through the user interface or voice command. The localization of the robots participating in the task and the collaborative path planning of the robots may be computed on the centralized server.
For example, in the complex environment 50 of Figure 12, robot A (5010) is a sweeping robot with a 2D Lidar and is localized, in the aggregated environment mapping, in area 3 (503), and robot D (5013) is a mopping robot with a depth sensor that was localized, in the aggregated environment mapping, in area 2 (502) but was then switched off and manually moved to area 5 (505). The user may want, via the user interface, to associate the two robots A and D to vacuum clean (robot A) and then mop (robot D) the floor in area 1 (501).
Once robot A (vacuum cleaning) and robot D (mopping) are activated (manually, or upon instruction from the centralized entity), they may first explore their surroundings and localize themselves in the complex environment. For the localization of robot A, data (e.g., the occupancy grid map of area 3) may be retrieved from the aggregated environment mapping, using, for example, robot A’s last known location and/or robot A’s characteristics. For robot D, its environment mapping data may be a slice point cloud of area 2 in the aggregated environment mapping that was generated from a grid map obtained from robot A. As robot D has been moved (e.g., carried by a person) to area 5, the environment mapping data that robot D obtained from the exploration of its environment when activated does not correspond to the environment mapping data for area 2 (it rather corresponds to area 5). According to an embodiment, the centralized entity, when receiving the environment mapping data of area 5 from robot D, may determine that the environment mapping data obtained from robot D does not correspond to that of area 2, but rather to that of area 5, and may therefore conclude that robot D has been moved (carried) manually from area 2 to area 5. Further, based on the depth measurements made by the sensors of robot D, the exact location of robot D in area 5 may be determined, if required. However, it may be sufficient to know that robot D is in area 5.
Alternatively, through statistical analysis of past locations of robot D, a list, ranked according to probability, of areas and, if required, position in these areas may help to localize robot D.
The locations of robot A in area 3 and of robot D in area 5 can now be updated in the aggregated environment mapping. To perform their task collaboratively, the centralized entity may then instruct robot A and robot D to move to area 1, using, for example, the graph-node representation of figure 9 for their navigation to area 1. When the robots arrive in area 1, they should start vacuum cleaning and mopping: a trajectory may be determined by the centralized entity for robot A, and robot D should follow the path of robot A. The centralized entity may therefore extract environment mapping data for area 1 from the aggregated environment mapping, convert it to environment mapping data suited for each robot according to its particular features, and transmit the environment mapping data of area 1 to each of the robots A and D. The centralized entity may then compute the paths to follow for each robot in area 1, indicate the paths in the environment mapping data for each robot, and transmit the instructions to follow these paths to the robots.
In the process, if robot A did not map area 1 , and robot D did not map area 5, the centralized entity may collect the new environment mapping data related to these areas and update the aggregated environment mapping.
Figure 13 is an exemplary embodiment of a robotic device 1300. The robotic device (or robot, or device) 1300 comprises at least one processor (Processing Unit, Central Processing Unit) 1301, at least one memory 1302, a clock unit 1303, robot sensor(s) and driver logic 1304a, robot sensor(s) 1304b, a transmit/receive unit (transceiver) interface 1305, a battery 1306, robot displacement (movement) driver logic 1307a, robot displacement (electro-)mechanical elements 1307b, robot function driver logic 1308a and robot function (electro-)mechanical elements 1308b. The robotic device 1300 corresponds for example to robot 21, 22 of figure 2, or to any of robots A-D (5010-5013) of figures 5, 6 or 12. Figure 14 is an embodiment of a centralized entity (device) 1400. Device 1400 may for example correspond to device 23 of figure 2. Device 1400 may include at least one processor (Processing Unit, Central Processing Unit) 1401, at least one memory 1402, at least one clock unit 1403, at least one transmit/receive (transceiver) interface 1405, and optionally an input-output interface 1406, e.g., a display and a keyboard, or a tactile display. The elements 1401-1406 are connected to an internal data and communication bus 1411. The elements of the device 1400 may be configured to receive individual environment mapping data from at least two of the multiple robotic devices and relating to at least one area of the environment, the individual environment mapping data being in an environment mapping format specific to each of the at least two multiple robotic devices; to construct an aggregated environment mapping of the environment based on the individual environment mapping data received from the at least two multiple robotic devices, the aggregated environment mapping being in a format comprising information contained in the received individual environment mapping data; to extract, from the aggregated environment mapping, individual environment mapping data destined to at least one of the at least two multiple robotic devices, the extracted individual environment mapping being specific to the individual environment mapping data format used by the at least one of the at least two multiple robotic devices; and to transmit the extracted individual environment mapping data to the at least one of the at least two multiple robotic devices.
According to an embodiment, the device is further configured to: obtain, based on the constructed aggregated environment mapping of the environment, locations of the at least two multiple robotic devices in the environment; and to transmit, to the at least one of the at least two robotic devices, the obtained locations in the extracted individual environment mapping data for the at least one of the at least two robotic devices.
According to an embodiment, the aggregated environment mapping is in a 3D point cloud format.
According to an embodiment, the individual environment mapping data format used by the at least one of the at least two multiple robotic devices is a 2D occupancy grid map.
According to an embodiment, the aggregated environment mapping comprises attributes associated to the environment mapping, the attributes being any of: information representative of an area to which an item of data in the aggregated environment mapping relates; information representative of a type of robotic device that contributed to an item of data in the aggregated environment mapping and an individual environment mapping data format used by the robotic device that contributed to the item of data; information enabling identification of a robotic device that contributed to an item of data in the aggregated environment mapping; information representative of a last known location of a robotic device in an area of the environment to which the item of data in the aggregated environment mapping relates; information representative of characteristics of the robotic device that contributed to an item of data in the aggregated environment mapping; and a timestamp of creation of an item of data in the aggregated environment mapping.
According to an embodiment, the robot characteristics comprise any of: sensor type and number of sensors.
According to an embodiment, the device is further configured to transmit, to at least one of the at least two robotic devices, information representative of an area where the at least one robotic device in the environment was known to be located, based on information extracted from the aggregated environment mapping.
Figure 15 is a flow chart of an embodiment of a method 1500 for multiple robotic devices in an environment. The method is for example implemented in a centralized entity, such as 23 of figure 2. In 1501 , individual environment mapping data is received from at least two of the multiple robotic devices and relating to at least one area of the environment, the individual environment mapping data being in an environment mapping format specific to each of the at least two multiple robotic devices. In 1502 an aggregated environment mapping of the environment is constructed, based on the individual environment mapping data received from the at least two multiple robotic devices, the aggregated environment mapping being in a format comprising information contained in the received individual environment mapping data. In 1503, individual environment mapping data destined to at least one of the at least two multiple robotic devices is extracted from the aggregated environment mapping, the extracted individual environment mapping being specific to the individual environment mapping data format used by the at least one of the at least two multiple robotic devices; and in 1504, the extracted individual environment mapping data is transmitted to the at least one of the at least two multiple robotic devices. According to an embodiment, the method includes obtaining, based on the constructed aggregated environment mapping of the environment, locations of the at least two multiple robotic devices in the environment, and transmitting, to the at least one of the at least two robotic devices, the obtained locations in the extracted individual environment mapping data for the at least one of the at least two robotic devices.
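One possible, highly simplified realization of the receive/construct/extract/transmit sequence of method 1500 is sketched below; the class, the converter registry, the data shapes and the `robot.send` call are assumptions for illustration only, not the implementation of the described centralized entity.

```python
# Minimal sketch of method 1500 as it might run on the centralized entity.
class CentralizedEntity:
    def __init__(self, converters):
        self.aggregated = []          # contributions, later enriched with point clouds
        self.converters = converters  # {(from_format, to_format): callable}

    def receive(self, robot_id, robot_format, mapping_data):          # step 1501
        self.aggregated.append(
            {"robot_id": robot_id, "format": robot_format, "data": mapping_data})

    def construct(self):                                              # step 1502
        # Convert every contribution into the common 3D point cloud representation.
        for record in self.aggregated:
            to_cloud = self.converters[(record["format"], "point_cloud")]
            record["cloud"] = to_cloud(record["data"])

    def extract_for(self, robot_format):                              # step 1503
        clouds = [r["cloud"] for r in self.aggregated]
        from_cloud = self.converters[("point_cloud", robot_format)]
        return from_cloud(clouds)

    def transmit(self, robot, robot_format):                          # step 1504
        robot.send(self.extract_for(robot_format))   # hypothetical transport call
```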
According to an embodiment, the aggregated environment mapping is in a 3D point cloud format.
According to an embodiment, the individual environment mapping data format used by the at least one of the at least two multiple robotic devices is a 2D occupancy grid map.
According to an embodiment, the aggregated environment mapping comprises attributes associated to the environment mapping, the attributes being any of: information representative of an area to which an item of data in the aggregated environment mapping relates; information representative of a type of robotic device that contributed to an item of data in the aggregated environment mapping and an individual environment mapping data format used by the robotic device that contributed to the item of data; information enabling identification of a robotic device that contributed to an item of data in the aggregated environment mapping; information representative of a last known location of a robotic device in an area of the environment to which the item of data in the aggregated environment mapping relates; information representative of characteristics of the robotic device that contributed to an item of data in the aggregated environment mapping; a timestamp of creation of an item of data in the aggregated environment mapping.
According to an embodiment, the robot characteristics comprise any of sensor type and number of sensors.
According to an embodiment, the method further comprises transmitting, to at least one of the at least two robotic devices, information representative of an area where the at least one robotic device in the environment was known to be located, based on information extracted from the aggregated environment mapping.
According to an embodiment, the extracted individual environment mapping is that of an area for which the at least one of the at least two multiple robots has no individual environment mapping.

Figure 16 is a flow chart of a further embodiment of a method 1600 for multiple robotic devices in an environment. The method is for example implemented in a centralized entity, such as 23 of figure 2. In 1601, first environment mapping data relating to an environment is received from a first robotic device, the first environment mapping data being in a first environment mapping data format specific to the first robotic device. In 1602, second environment mapping data relating to the environment is received from a second robotic device, the second environment mapping data being in a second environment mapping data format specific to the second robotic device. In 1603, aggregated environment mapping data of the environment is constructed based on the first environment mapping data and the second environment mapping data received. In 1604, third environment mapping data for the first robotic device is extracted from the aggregated environment mapping data, the third environment mapping data describing at least a portion of the environment not mapped by the first robotic device. In 1605, the extracted third environment mapping data is transmitted to the first robotic device in the first environment mapping data format specific to the first robotic device.
Alternatively, in 1604, third environment mapping data for the second robotic device is extracted from the aggregated environment mapping data, the third environment mapping data describing at least a portion of the environment not mapped by the second robotic device. Then, in 1605, the extracted third environment mapping data is transmitted to the second robotic device in the second environment mapping data format specific to the second robotic device.
According to an embodiment, the method further comprises obtaining, based on the aggregated environment mapping data, a location of the second robotic device in the environment; and transmitting, to the first robotic device, the obtained second location according to the first environment mapping data format.
According to an embodiment, the first environment mapping data relates to a first area of the environment, and the second environment mapping data relates to a second area of the environment.
According to an embodiment, the aggregated environment mapping data is in a 3D point cloud environment mapping data format.
According to an embodiment, the first environment mapping data format is a 2D occupancy grid map environment mapping data format, and the second environment mapping data format is a 3D point cloud environment mapping data format.
According to an embodiment, the first environment mapping data format is a 3D point cloud environment mapping data format, and the second environment mapping data format is a 2D occupancy grid map environment mapping data format.
According to an embodiment, the aggregated environment mapping data comprises information representative of an area to which an item of data in the aggregated environment mapping data relates.
According to an embodiment, the aggregated environment mapping data comprises information representative of a type of robotic device that contributed to an item of data in the aggregated environment mapping data.
According to an embodiment, the aggregated environment mapping data comprises information enabling identification of a robotic device that contributed to an item of data in the aggregated environment mapping data.
According to an embodiment, the aggregated environment mapping data comprises information representative of a location of a robotic device in the environment.
According to an embodiment, the aggregated environment mapping data comprises information representative of characteristics of the robotic device that contributed to an item of data in the aggregated environment mapping data.
According to an embodiment, the aggregated environment mapping data comprises a timestamp of creation of an item of data in the aggregated environment mapping data.
According to an embodiment, the characteristics comprise any of a sensor type, a number of sensors.
The present disclosure also relates to an embodiment of a centralized entity device (a device), e.g., device 1400. Device 1400 may for example correspond to device 23 of figure 2, the device comprising at least one processor 1401 and at least one memory 1402. The at least one processor and the at least one memory are configured to receive, from a first robotic device, first environment mapping data relating to an environment, the first environment mapping data being in a first environment mapping data format specific to the first robotic device; to receive, from a second robotic device, second environment mapping data relating to the environment, the second environment mapping data being in a second environment mapping data format specific to the second robotic device; to construct aggregated environment mapping data of the environment based on the first environment mapping data and the second environment mapping data received; to extract, from the aggregated environment mapping data, third environment mapping data for the first robotic device, the third environment mapping data describing at least a portion of the environment not mapped by the first robotic device; and to transmit the extracted third environment mapping data to the first robotic device in the first environment mapping data format specific to the first robotic device.
According to an embodiment of the device, the at least one processor, and at least one memory are further configured to obtain, based on the aggregated environment mapping data, a location of the second robotic device in the environment, and to transmit, to the first robotic device, the obtained second location according to the first environment mapping data format.
According to an embodiment of the device, the first environment mapping data relates to a first area of the environment, and the second environment mapping data relates to a second area of the environment.
According to an embodiment of the device, the aggregated environment mapping data is in a 3D point cloud environment mapping data format.
According to an embodiment of the device, the first environment mapping data format is a 2D occupancy grid map environment mapping data format, and the second environment mapping data format is a 3D point cloud environment mapping data format.
According to an embodiment of the device, the first environment mapping data format is a 3D point cloud environment mapping data format, and the second environment mapping data format is a 2D occupancy grid map environment mapping data format.
According to an embodiment of the device, the aggregated environment mapping data comprises information representative of an area to which an item of data in the aggregated environment mapping data relates.
According to an embodiment of the device, the aggregated environment mapping data comprises information representative of a type of robotic device that contributed to an item of data in the aggregated environment mapping data.
According to an embodiment of the device, the aggregated environment mapping data comprises information enabling identification of a robotic device that contributed to an item of data in the aggregated environment mapping data.
According to an embodiment of the device, the aggregated environment mapping data comprises information representative of a location of a robotic device in the environment.
According to an embodiment of the device, the aggregated environment mapping data comprises information representative of characteristics of the robotic device that contributed to an item of data in the aggregated environment mapping data.
According to an embodiment of the device, the aggregated environment mapping data comprises a timestamp of creation of an item of data in the aggregated environment mapping data.
According to an embodiment of the device, the characteristics comprise any of a sensor type, a number of sensors.
It is to be appreciated that some elements in the drawings may not be used or be necessary in all embodiments. Some operations may be executed in parallel. Embodiments other than those illustrated and/or described are possible. For example, a device implementing the present principles may include a mix of hard- and software.
It is to be appreciated that aspects of the principles of the present disclosure can be embodied as a system, method or computer readable medium. Accordingly, aspects of the principles of the present disclosure can take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code and so forth), or an embodiment combining hardware and software aspects that can all generally be referred to herein as a “circuit”, “module” or “system”. Furthermore, aspects of the principles of the present disclosure can take the form of a computer readable storage medium. Any combination of one or more computer readable storage medium(s) can be utilized. Thus, for example, it is to be appreciated that the diagrams presented herein represent conceptual views of illustrative system components and/or circuitry embodying the principles of the present disclosure. Similarly, it is to be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in computer readable storage media and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
A computer readable storage medium can take the form of a computer readable program product embodied in one or more computer readable medium(s) and having computer readable program code embodied thereon that is executable by a computer. A computer readable storage medium as used herein is considered a non-transitory storage medium given the inherent capability to store the information therein as well as the inherent capability to provide retrieval of the information there from. A computer readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Some or all aspects of the storage medium may be remotely located (e.g., in the ‘cloud’). It is to be appreciated that the following, while providing more specific examples of computer readable storage mediums to which the present principles can be applied, is merely an illustrative and not exhaustive listing, as is readily appreciated by one of ordinary skill in the art: a hard disk, a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

Claims

1 . A method comprising: receiving, from a first robotic device, first environment mapping data relating to an environment, the first environment mapping data being in a first environment mapping data format specific to the first robotic device; receiving, from a second robotic device, second environment mapping data relating to the environment, the second environment mapping data being in a second environment mapping data format specific to the second robotic device; constructing aggregated environment mapping data of the environment based on the first environment mapping data and the second environment mapping data received; extracting, from the aggregated environment mapping data, third environment mapping data for the first robotic device, the third environment mapping data describing at least a portion of the environment not mapped by the first robotic device; and transmitting the extracted third environment mapping data to the first robotic device in the first environment mapping data format specific to the first robotic device.
2. The method according to claim 1 , further comprising: obtaining, based on the aggregated environment mapping data, a location of the second robotic device in the environment; and transmitting, to the first robotic device, the obtained second location according to the first environment mapping data format.
3. The method according to claim 1 or 2, wherein the first environment mapping data relates to a first area of the environment, and the second environment mapping data relates to a second area of the environment.
4. The method according to any of claims 1 to 3, wherein the aggregated environment mapping data is in a 3D point cloud environment mapping data format.
5. The method according to any of claims 1 to 4, wherein the first environment mapping data format is a 2D occupancy grid map environment mapping data format, and the second environment mapping data format is a 3D point cloud environment mapping data format.
6. The method according to any of claims 1 to 4, wherein the first environment mapping data format is a 3D point cloud environment mapping data format, and the second environment mapping data format is a 2D occupancy grid map environment mapping data format.
7. The method according to any of claims 1 to 6, wherein the aggregated environment mapping data comprises information representative of an area to which an item of data in the aggregated environment mapping data relates.
8. The method according to any of claims 1 to 7, wherein the aggregated environment mapping data comprises information representative of a type of robotic device that contributed to an item of data in the aggregated environment mapping data.
9. The method according to any of claims 1 to 8, wherein the aggregated environment mapping data comprises information enabling identification of a robotic device that contributed to an item of data in the aggregated environment mapping data.
10. The method according to any of claims 1 to 9, wherein the aggregated environment mapping data comprises information representative of a location of a robotic device in the environment.
11 . The method according to any of claims 1 to 10, wherein the aggregated environment mapping data comprises information representative of characteristics of the robotic device that contributed to an item of data in the aggregated environment mapping data.
12. The method according to any of claims 1 to 11 , wherein the aggregated environment mapping data comprises a timestamp of creation of an item of data in the aggregated environment mapping data.
13. The method according to claim 11 , wherein the characteristics comprise any of: a sensor type, a number of sensors.
14. A device, the device comprising at least one processor, and at least one memory, configured to: receive, from a first robotic device, first environment mapping data relating to an environment, the first environment mapping data being in a first environment mapping data format specific to the first robotic device; receive, from a second robotic device, second environment mapping data relating to the environment, the second environment mapping data being in a second environment mapping data format specific to the second robotic device; construct aggregated environment mapping data of the environment based on the first environment mapping data and the second environment mapping data received; extract, from the aggregated environment mapping data, third environment mapping data for the first robotic device, the third environment mapping data describing at least a portion of the environment not mapped by the first robotic device ; and transmit the extracted third environment mapping data to the first robotic device in the first environment mapping data format specific to the first robotic device.
15. The device according to claim 14, wherein the at least one processor, and at least one memory are further configured to: obtain, based on the aggregated environment mapping data, a location of the second robotic device in the environment; and transmit, to the first robotic device, the obtained second location according to the first environment mapping data format.
16. The device according to claim 14 or 15, wherein the first environment mapping data relates to a first area of the environment, and the second environment mapping data relates to a second area of the environment.
17. The device according to any one of claims 14 to 16, wherein the aggregated environment mapping data is in a 3D point cloud environment mapping data format.
18. The device according to any of claims 14 to 17, wherein the first environment mapping data format is a 2D occupancy grid map environment mapping data format, and the second environment mapping data format is a 3D point cloud environment mapping data format.
19. The device according to any of claims 14 to 17, wherein the first environment mapping data format is a 3D point cloud environment mapping data format, and the second environment mapping data format is a 2D occupancy grid map environment mapping data format.
20. The device according to any of claims 14 to 19, wherein the aggregated environment mapping data comprises information representative of an area to which an item of data in the aggregated environment mapping data relates.
21 . The device according to any of claims 14 to 20, wherein the aggregated environment mapping data comprises information representative of a type of robotic device that contributed to an item of data in the aggregated environment mapping data.
22. The device according to any of claims 14 to 21 , wherein the aggregated environment mapping data comprises information enabling identification of a robotic device that contributed to an item of data in the aggregated environment mapping data.
23. The device according to any of claims 14 to 22, wherein the aggregated environment mapping data comprises information representative of a location of a robotic device in the environment.
24. The device according to any of claims 14 to 23, wherein the aggregated environment mapping data comprises information representative of characteristics of the robotic device that contributed to an item of data in the aggregated environment mapping data.
25. The device according to any of claims 14 to 24, wherein the aggregated environment mapping data comprises a timestamp of creation of an item of data in the aggregated environment mapping data.
26. The device according to claim 24, wherein the characteristics comprise any of: a sensor type, a number of sensors.
PCT/EP2021/085454 2020-12-15 2021-12-13 Method and apparatus for multiple robotic devices in an environment WO2022128896A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP20306568.5 2020-12-15
EP20306568 2020-12-15

Publications (1)

Publication Number Publication Date
WO2022128896A1 true WO2022128896A1 (en) 2022-06-23

Family

ID=74130016

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2021/085454 WO2022128896A1 (en) 2020-12-15 2021-12-13 Method and apparatus for multiple robotic devices in an environment

Country Status (1)

Country Link
WO (1) WO2022128896A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190212752A1 (en) * 2018-01-05 2019-07-11 Irobot Corporation Mobile cleaning robot teaming and persistent mapping
WO2019171916A1 (en) * 2018-03-05 2019-09-12 日本電気株式会社 Robot management system, robot management method, information processing device, information processing method and information processing program
US20210003418A1 (en) * 2018-03-05 2021-01-07 Nec Corporation Robot management system, robot management method, information processing apparatus, information processing method, and information processing program
WO2020004834A1 (en) * 2018-06-27 2020-01-02 Lg Electronics Inc. A plurality of autonomous cleaners and a controlling method for the same

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114939874A (en) * 2022-06-29 2022-08-26 深圳艾摩米智能科技有限公司 Posture planning method for mechanical arm moving along human body surface
CN114939874B (en) * 2022-06-29 2023-08-01 深圳艾摩米智能科技有限公司 Gesture planning method for mechanical arm moving along human body surface
CN116358531A (en) * 2023-06-01 2023-06-30 佛山云曼健康科技有限公司 Map construction method, device, robot and storage medium
CN116358531B (en) * 2023-06-01 2023-09-01 佛山市星曼信息科技有限公司 Map construction method, device, robot and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21836150

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21836150

Country of ref document: EP

Kind code of ref document: A1