WO2019190395A1 - Method and system for returning a displaced autonomous mobile robot to its navigational path - Google Patents

Method and system for returning a displaced autonomous mobile robot to its navigational path Download PDF

Info

Publication number
WO2019190395A1
Authority
WO
WIPO (PCT)
Prior art keywords
robot
occupancy grid
grid map
map
path
Prior art date
Application number
PCT/SG2019/050163
Other languages
French (fr)
Inventor
Miaolong Yuan
Wei Yun Yau
Original Assignee
Agency For Science, Technology And Research
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Agency For Science, Technology And Research filed Critical Agency For Science, Technology And Research
Priority to SG11202009494YA priority Critical patent/SG11202009494YA/en
Publication of WO2019190395A1 publication Critical patent/WO2019190395A1/en

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0268 Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D1/0274 Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3863 Structures of map data
    • G01C21/387 Organisation of map data, e.g. version management or database structures
    • G01C21/3881 Tile-based structures
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0248 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means in combination with a laser

Definitions

  • the present disclosure relates to mobile robot localization and navigation, and more particularly to returning a displaced autonomous mobile robot to its navigational path.
  • AMCL adaptive Monte Carlo localization approach
  • the AMCL is robust to sensor noise arising from noisy environments, since it is able to compensate for accumulated sensor errors, e.g. errors in the raw odometry data during navigation.
  • Traditional probabilistic mapping algorithms are therefore used to build the internal maps with input from perception sensors, such as a laser range finder. Coupled with an advanced localization module such as the AMCL, robot navigation and localization can be achieved reliably.
  • a robot may unexpectedly fail to track itself in its environment during navigation. This can happen as a result of abrupt wheel slippage resulting in significant errors in the odometry input, or when the robot unexpectedly collides with an obstacle, or when the robot’s sensors are blocked, or when the robot is physically moved to another place (or kidnapped).
  • the well-known robot kidnapping problem is a special global localization issue in mobile robotics. Since a kidnapped robot will fail to track its current position due to a gap in the information collected by its sensors, this will result in invalid path planning and therefore, the robot will be unable to perform unfinished navigation tasks. It is therefore important for a mobile robot to have the ability to re-localize itself after it unexpectedly loses track of its pose relative to its surroundings due to abrupt wheel slippage, unsteady movements on uneven floors, collision with obstacles, sensor blocking or kidnapping.
  • a further existing work on localization failure detection adopts a semi-globalization approach for failure recovery.
  • the approach is only suitable for abrupt wheel slippages due to uneven flooring as the algorithm requires that the robot’s pose does not change significantly.
  • the approach also requires manual operation for the robot to recover its pose.
  • Machine learning techniques have been applied to match the current laser scan to an offline database in an attempt to re-establish a robot’s position in the internal map.
  • this only works if the lost robot is in a known environment i.e. within the pre-built global map. The work also does not address the system awareness of when the robot is kidnapped.
  • a method of returning a displaced autonomous mobile robot to its planned navigational path, the mobile robot having a sensor and a camera, the method comprising, before autonomous deployment of the robot, (i) constructing an occupancy grid map based on data received from the sensor of the robot’s vicinity along the planned navigational path, the collected data including robot poses along the planned navigational path; and (ii) constructing a cognitive map of the planned navigational path based on a set of visual cues captured by the camera of the robot’s vicinity along the planned navigational path, the visual cues being associated with the robot poses in the occupancy grid map; and, after autonomous deployment of the robot, (iii) upon detection that the robot is displaced, constructing a local occupancy grid map of a vicinity of the displaced robot; (iv) selecting a destination point in the local occupancy grid map; (v) navigating the mobile robot to the destination point along a designated path; (vi) as the mobile robot travels along the designated path, searching for a location that corresponds to a visual cue of the visual cue set defined in the cognitive map; and (vii) upon finding the location, recalling the robot pose associated with the visual cue to return the robot to the planned navigational path.
  • the robot can explore its vicinity (whether within or outside a global grid map associated with the planned navigation path) so that the displaced robot has the ability to rediscover the planned navigational path by identifying visual cues in its vicinity which are associated with locations on the navigational path.
  • the method may further comprise, when the mobile robot is travelling along its planned navigation path, detecting if the robot is displaced by determining whether or not there is misalignment between the robot’s actual location and the planned navigation path as defined by the occupancy grid map.
  • the robot may be detected as being displaced only when the misalignment persists for at least 2 seconds.
  • the local occupancy grid map may be constructed when the mobile robot is stationary.
  • the method may also comprise rotating the robot 360° to construct the local occupancy grid map.
  • the destination point may be randomly selected from the local occupancy grid map.
  • the method may further comprise collecting further information of the robot’s vicinity while the robot navigates to the destination point in step (iii); and updating the local occupancy grid map with the further information.
  • if no location is found which corresponds to the visual cue, the method may then repeat steps (iii) to (v) until such a location is identified.
  • recalling the robot pose in step (vi) may comprise replacing the local occupancy grid map with the occupancy grid map.
  • the method may further comprise performing a 360° rotation of the robot to converge the robot’s laser with the occupancy grid map.
  • a method of returning a displaced autonomous mobile robot to its planned navigational path based on a cognitive-occupancy grid map, the cognitive-occupancy grid map having a set of visual cues associated with locations along the planned navigational path, the method comprising (i) upon detection that the robot is displaced, constructing a local occupancy grid map of a vicinity of the displaced robot; (ii) selecting a destination point in the local occupancy grid map; (iii) navigating the mobile robot to the destination point along a designated path; (iv) as the mobile robot travels along the designated path, searching for a location that corresponds to a visual cue of the visual cue set defined in the cognitive-occupancy grid map; and (v) upon finding the location, recalling a pose of the robot associated with the visual cue to return the robot to the planned navigational path.
  • the methods of the first and second aspects may be performed by the mobile robot.
  • a system for returning a displaced autonomous mobile robot to its planned navigational path, the mobile robot having a sensor and a camera,
  • the system comprising a global map builder configured to construct an occupancy grid map based on data received from the sensor of the robot’s vicinity along the planned navigational path, the collected data including robot poses along the planned navigational path, and construct a cognitive map of the planned navigational path based on a set of visual cues captured by the camera of the robot’s vicinity along the planned navigational path, the visual cues being associated with the robot poses in the occupancy grid map, before autonomous deployment of the robot; a local map builder configured to construct a local occupancy grid map of a vicinity of the displaced robot, after autonomous deployment of the robot and upon detection that the robot is displaced; a path planner configured to select a destination point in the local occupancy grid map; a navigator configured to navigate the mobile robot to the destination point along a designated path; a place recognition module configured to identify a location along the designated path that corresponds to a visual cue of the visual cue set defined in the cognitive map; and a recovery module configured to recall the robot pose associated with the visual cue to return the robot to the planned navigational path.
  • the system may include a lost detection module configured to detect if the robot is displaced by determining whether or not there is misalignment between the robot’s actual location and the planned navigation path as defined by the stored occupancy grid map.
  • the recovery module may further comprise a pose-reinitialization module configured to replace the local occupancy grid map with the occupancy grid map.
  • the system may be implemented in various ways and in one example, this may be by way of an autonomous mobile robot comprising the system as discussed above.
  • Figure 1a shows an exemplary mobile robot according to a preferred embodiment;
  • Figure 1b shows the mobile robot of Figure 1 with its casing removed to show its internal components;
  • Figure 2 is a block diagram of a system architecture of the mobile robot of Figure 1;
  • Figure 3 is a flow diagram illustrating actions performed by the mobile robot of Figure 1 to self-recover when the mobile robot is lost;
  • Figures 4a - 4d are grid images showing different laser alignments between the robot’s laser scans and the occupancy grid map constructed during a map pre-building stage of the system architecture of Figure 2;
  • Figure 5 is a flow diagram of an exemplary method for returning a lost robot to its navigational path performed by the robot of Figure 1;
  • Figures 6a - 6d are grid images of a local occupancy grid map with valid and invalid destination points selected during the exemplary method of Figure 5;
  • Figure 7 illustrates a cognitive-occupancy grid map constructed based on the flow diagram of Figure 3 and used during evaluation of the robot;
  • Figures 8a - 8c illustrate a scenario of how the robot of Figure 1 performs lost recovery after the robot is kidnapped;
  • Figure 1a illustrates a mobile robot 100 which can self-recover when the mobile robot 100 is lost or displaced according to a preferred embodiment.
  • the mobile robot 100 includes an outer casing 110 and Figure 1b shows the mobile robot 100 with the outer casing omitted to show a skeleton 120.
  • the mobile robot 100 includes a robot base 121 having wheels 122 equipped with wheel encoders 123 (not shown in Figure 1b, but see Figure 2) for providing raw odometry data from the wheels 122 when the mobile robot 100 travels.
  • the mobile robot 100 includes a camera in the form of an RGB-D sensor 124, and a laser sensor 125 which operates as a high resolution laser range finder.
  • the skeleton 120 includes an elongate frame 126 supported by the robot base 121 and the elongate frame 126 extends upwards from the robot base 121.
  • the RGB-D sensor 124 is mounted to and near the top of the elongate frame 126 around 90 cm from the ground, and the laser sensor 125 is mounted above the robot base 121 around 30 cm from the ground.
  • the wheel encoders 123, the RGB-D sensor 124 and the laser sensor 125 cooperate together to provide input of the robot’s vicinity or surrounding to enable robot localization and navigation.
  • the mobile robot 100 may also include other perception sensors.
  • FIG. 2 is a schematic block diagram of a system architecture of the mobile robot 100.
  • the mobile robot 100 includes a processor (or controller) 127 arranged to be communicatively coupled to the wheel encoders 123, the RGB-D sensor 124, the laser sensor 125 and also to control movement of the wheels 122.
  • the system architecture also includes a global map builder 201 for constructing a cognitive map 211 and an occupancy grid map 212 based on the output of the RGB-D sensor 124 and the laser sensor 125 respectively.
  • the term ‘cognitive-occupancy grid map’ or ‘global map’ is used to refer to both the cognitive map 211 and the occupancy grid map 212 collectively.
  • the cognitive-occupancy grid map 213 is stored in computer readable memory.
  • the processor 127 is specifically configured to execute instructions of various modules stored in computer readable memory and in particular, a lost detection module 128 and a self-exploration module 150.
  • the self-exploration module 150 includes sub-modules and these comprise a map builder 151, a path planner 152, a navigator 153, a place recognition module 154 and a recovery module 155. The operation of each of these modules will be explained next in relation to Figure 3, which illustrates a flow diagram having two stages to explain how the mobile robot 100 recovers itself when the mobile robot 100 is lost.
  • a method 200 of recovering the mobile robot 100, when it is lost or displaced, includes two stages according to this embodiment and these comprise an offline global map building stage 210, and an online navigation and lost recovery stage 220.
  • a remote controller is used to control the mobile robot 100 manually to explore the environment along a designated route or navigational path from a starting point to a destination or target so as to collect training data.
  • the RGB-D sensor/camera 124 captures images of the environment or the robot’s vicinity as the robot 100 moves along the navigational path.
  • the captured images are then used by the processor 127 to construct/build the cognitive map 21 1.
  • Visual cues are acquired, learned and stored in the cognitive map 211 during the map building process.
  • the visual cues of the cognitive map are associated with actual poses of the robot 100 in the occupancy grid map using a continuous attractor network; in this way, the cognitive map 211 is associated with or correlated to the occupancy grid map 212.
  • the laser sensor 125 also collects data of the environment which the robot 100 has explored along the navigational path and the collected data is used to construct an occupancy grid map 212 at step 222.
  • the occupancy grid map 212 is built using ROS gmapping package.
  • map building algorithms known to the skilled person may be used to construct the occupancy grid map 212.
  • the cognitive-occupancy grid map 213 is thus formed which comprises the cognitive map 211 and the occupancy grid map 212.
  • upon generation of the cognitive-occupancy grid map 213, the robot 100 is ready to be deployed for fully autonomous operation along the designated route or navigation path.
  • the robot 100 is deployed to navigate autonomously at step 224 along the navigation path from a starting point to a target location 226.
  • the occupancy grid map 212 is used for localization and obstacle avoidance during the navigation of the robot 100 to the target location 226.
  • the processor 127 also executes the lost detection module 128 at steps 225 and 227, almost immediately after deployment of the mobile robot 100, and the lost detection module 128 is configured to run continuously as the mobile robot 100 makes its way from the starting point to the target location 226 in order to detect whether or not the robot 100 is lost while navigating along the navigation path.
  • the robot 100 would continue to move along the navigation path towards its target location 226. Details of how the lost detection module 128 detects whether or not the robot 100 is lost will be described later.
  • the self-exploration module 150 is then executed by the processor 127 to execute self-exploration steps 230.
  • the self-exploration module 150 starts by constructing a local vicinity occupancy grid map at step 231 of the displaced robot’s surrounding. This enables the robot 100 to then explore its vicinity or surrounding in search for an ever-visited-place at step 232. Once the robot 100 finds an ever-visited place corresponding to a visual cue defined in the cognitive map 211 of the global map 213, an accurate pose associated with the visual cue is recalled from the cognitive map 211 and the occupancy grid map 212 is reloaded at step 233.
  • at step 234, the processor 127 switches to the recovery module 155 and executes pose re-initialization. After the robot 100 has returned to its original pose prior to being displaced, the robot 100 resends the target location and the navigation module 153 guides the robot 100 to continue to its target location 226.
  • the actions performed by the lost detection module 128 and lost recovery by the self-exploration module 150 are discussed in further detail below.
  • Figures 4(a)-(d) illustrate grid images 300 with different laser alignments between the robot’s laser scans 301, 302, 303, 304 recorded at time t_i, and the map boundary points 350 in the occupancy grid map 212.
  • the occupancy grid map 212 may be accurately learned from high-resolution, long-range laser data using the Rao-Blackwellized particle filter and thus, an accurate occupancy grid map 212 may be obtained.
  • the laser scan 301 would be properly aligned with the map boundary 350 of the occupancy grid map 212 as shown in Figure 4a.
  • Figure 4b shows the scenario where the robot 100 is pushed, which caused the laser scan 302 to be rotated.
  • in Figure 4c, the robot 100 is kidnapped from a point A to a point B.
  • the robot is unable to update its position in the map.
  • based on the laser scan 303 in Figure 4c, the robot’s position appears to not have changed, i.e. kept at point A.
  • Figure 4d shows the scenario where the laser sensor 125 was blocked. Therefore, the laser scan 304 of Figure 4d is partially obscured.
  • a laser-map matching error function is defined herein as the mean Euclidean distance from the laser scan points to the boundary points of the occupancy grid map (see Eq. (1) in the description below).
  • Eq. (1) intuitively determines whether the 2D scans from the laser range finder and the map are well aligned.
  • a Boolean parameter s_{t_i} is defined to determine whether the laser beams from the laser sensor 125 at the current time t_i are well aligned with the occupancy grid map 212; if not, and the misalignment persists, the robot 100 is identified to be lost.
  • the thresholds d and N_lost must be chosen to suit different environments. For example, in the present embodiment, d is 1.5 m and N_lost is 99.
  • the lost recovery problem formulation as addressed by the self-exploration module 150 is given below.
  • the initial configuration and obstacle spaces are built up by the robot 100 without any movement or with only a safe movement such as a 360° rotation on the spot, i.e. the robot 100 remains stationary.
  • the local vicinity occupancy grid map can be built up at step 231 by the local map builder 151 of the self-exploration module 150 and such a map may also be called an on-line local map.
  • a vicinity destination point is randomly selected in the local vicinity occupancy grid map, so that both the starting and destination points are available in the lost recovery problem.
  • a path is then set up using the initial configuration and obstacle spaces for the robot 100 to explore the unknown environment.
  • the local vicinity occupancy grid map is updated as the robot 100 navigates from the vicinity starting point to the vicinity destination point such that there are more valid candidates to be selected in the subsequent rounds.
  • when the robot 100 is lost, it loses track of its pose in the global map, i.e. occupancy grid map 212, and the occupancy grid map is no longer effective for path planning, obstacle avoidance, and localization.
  • the robot 100 may retrieve the related visual cues that have been stored in the cognitive map 211. In other words, the robot 100 needs to find a path from its current (lost) location to an ever-visited-place that is defined by the cognitive-occupancy grid map.
  • Figure 5 is a flow diagram of an exemplary method 400, elaborating the self-exploration steps 230, as performed by the mobile robot 100 to return itself to its navigational path.
  • the map builder 151 is executed by the processor 127 at step 231 to construct the local vicinity occupancy grid map of the displaced robot’s surrounding or vicinity.
  • the path planner 152 selects a vicinity destination point within the local vicinity occupancy grid map.
  • the path planner 152 includes a path planning algorithm to determine a designated path from the vicinity starting point to the vicinity destination point.
  • the navigator 153 navigates the robot 100 to the vicinity destination point along the designated path. During the navigation, the robot 100 collects more information in its vicinity. The local vicinity occupancy grid map is updated with the collected information as the robot 100 travels along the designated path ‘looking’ for ever-visited places.
  • the vicinity destination point has to be valid before the path planner 152 can determine the designated path to the vicinity destination point and the navigator 153 can navigate the robot 100 to the vicinity destination point.
  • Figures 6a and 6b are images of the local vicinity occupancy grid map with examples of valid random poses 510, 520 of the robot 100 while Figures 6c and 6d are images of the local vicinity occupancy grid map with examples of invalid random poses 530, 540. If the random pose 510, 520 is valid, the robot 100 navigates to the randomly generated pose and simultaneously updates the local vicinity occupancy map. Otherwise, another pose is randomly selected until a valid pose is generated.
  • the search area is restricted to a bounded area 500, i.e., 6×6 m² in the present embodiment.
  • the search area can be enlarged at the cost of a potentially longer search time.
  • the Rao-Blackwellized particle filter is adopted to build a new occupancy grid map, incorporating the dynamic window approach (DWA) for path planning and obstacle avoidance during the self-exploration. The purpose is to ensure that the robot 100 navigates to the target location 226 efficiently and safely.
  • the place recognition module 154 checks the ‘live’ images obtained via the RGB-D sensor 124 to survey the vicinity as the mobile robot 100 moves along the designated path to identify a marker that corresponds to a visual cue defined in the cognitive-occupancy grid map 213.
  • upon identifying the marker, the robot 100 relocates itself using the cognitive map 211. Otherwise, if the mobile robot 100 reaches the vicinity destination point without identifying the marker, a new destination point is generated in the updated local vicinity occupancy grid map.
  • the robot 100 then repeats steps 231a, 231b and 232 until the robot 100 finds an ever-visited-place. It is assumed that every point in the local vicinity occupancy grid map has the same probability of being an ever-visited place. If this is not true, a better or smarter strategy can be adopted for step 231a to select the valid destination point. For example, a cost function can be defined for the lost recovery so as to find an optimal lost recovery solution.
  • the occupancy grid map 212 is reloaded at step 233.
  • the global and local cost maps are used for obstacle avoidance during robot navigation.
  • the global and local cost maps maintain information about where the robot 100 should navigate in the form of a grid map.
  • the cost maps use both laser sensor data and information from the local occupancy grid map to store and update information about obstacles in the dynamic office environment.
  • Each cell in the cost maps can have free, occupied, or unknown status. Each status has a cost value assigned to it upon projection into the cost maps.
  • the recovery module 155 recalls a pose of the robot 100 that is associated with the visual cue.
  • the system re-starts the navigation module 153.
  • during pose re-initialization, the robot 100 rotates itself 360° in order to converge the laser beams with the occupancy grid map 212.
  • the target goal would be re-sent and the robot 100, having self-recovered, is able to continue with its unfinished tasks or continue to the target location 226.
  • the laser scans do not need to match the maps exactly. Since the robot 100 re-starts navigation, it will be able to localize itself in the rest of the way by itself.
  • pseudo-code for the proposed lost recovery algorithm is provided in the original specification; an illustrative sketch of the recovery loop is given at the end of this list.
  • the mobile robot 100 is evaluated to test its operational readiness.
  • the system architecture 200 is implemented and runs on the Robot Operating System (ROS) under Ubuntu 12.04.
  • Table 1 includes the related parameter settings used in the lost robot recovery method of Figure 3.
  • the mobile robot 100 is configured to navigate in an office environment as illustrated in Figure 7.
  • the remote controller is used to control the robot 100 to explore the office environment in which the robot 100 is meant to operate.
  • the laser sensor 125 is used to build the occupancy grid map 212 using ROS gmapping package.
  • the images captured from the RGB-D sensor 124 are used to build the cognitive map 211 using an improved version of RatSLAM.
  • the cognitive map 211 has a topological structure in which the nodes are associated with the corresponding visual cues and locations, denoted as visual experiences.
  • the visual experiences serve as a space memorization mechanism to remember all the ever-visited places that the robot 100 has been to.
  • the cognitive map 211 contains a set of spatial coordinates that the robot 100 has travelled which are actual robot poses. Therefore, the cognitive-occupancy grid map (or global map) 213 is constructed.
  • Figure 7 illustrates the constructed cognitive-occupancy grid map 213 although in Figure 7, the reference 600 is used instead.
  • the global map 600 is constructed in an office environment with an area of around 21×18 m².
  • the resulting occupancy grid map 212, built from data collected by the laser sensor 125, represents the map of the office environment. This is readily apparent when compared with the raw odometry 602 received from the robot wheel encoders 123, which shows significant drift from the actual robot poses without any drift compensation mechanisms in place.
  • the raw odometry 602 runs outside of the mapped office environment.
  • Visited visual experiences 604 are represented as square boxes in the cognitive map 211.
  • the robot pose trajectory (not shown in Figure 7) is estimated by the Rao-Blackwellized particle filter. In this evaluation, the mapping process took 780.66 seconds and the robot 100 travelled 265.11 meters around the office environment with an average speed of 0.34 m/s.
  • after completing the map building process, the mobile robot 100 is ready to be deployed and the resulting cognitive-occupancy map 600, together with the self-exploration, lost detection and self-pose recovery modules 150, 128, are used to self-recover the robot 100 if the robot 100 is lost.
  • a resting time t_resting is provided to allow sufficient time for system updating.
  • after the pre-built grid map, i.e. the occupancy grid map 212, is reloaded, the system rests for 3 seconds so that there is sufficient time for reconfiguring the system and path planning.
  • the AMCL technique for path planning, obstacle avoidance and tracking the pose of the robot 100 against the occupancy grid map 212 is used, and the lost detection module 128 is executed to detect if the robot 100 is lost as explained in relation to Figure 3.
  • the obstacle information is cleared by resetting the global and local costmaps, and the self-exploration module 150 is started to create the local vicinity occupancy grid map and to look for a marker which corresponds to a visual cue of the ever-visited place, or re-visited place.
  • the incoming visual inputs captured from the RGB-D sensor 124 are compared with the historical visual experiences 604 stored in the cognitive map 211.
  • Figures 8a - 8c depict a scenario of the robot 100 being kidnapped and the actions performed by the mobile robot 100 to self-recover.
  • the robot 100 is well-localized during navigation (see the occupancy grid map 212 in Figure 4a) to move to a target location 226 along a planned or designated navigation path 810 and it should be appreciated that the lost detection module 128 is running. The robot 100 is then kidnapped to another location outside of the planned or designated navigation path 810. After around 2 - 3 seconds, the lost detection module 128 detects that the robot 100 is lost and the self-exploration module 150 is triggered by the processor 127.
  • the local map builder 151 creates a local vicinity occupancy grid map 820 of the lost/displaced robot’s vicinity or surrounding, identifies a vicinity destination point 822 and maps out a designated path and the robot 100 searches for all possible ever-visited-places along the designated path.
  • the robot 100 finds an ever-visited-place 824 along the designated path and steps 233 and 234 of Figure 5 are executed to re-initialise the robot’s pose by rotating the robot 360°.
  • the occupancy grid map 212 is reloaded and the robot 100 has returned to its planned navigation path. Thereafter, the recovered robot 100 continues on the planned navigation path until the robot 100 reaches its goal i.e. target location 226.
  • Figures 8a-8c illustrate the effectiveness of recovering the lost robot by creating a local vicinity occupancy map of the vicinity in which the robot is lost and trying to identify an ever-visited place (or familiar place) based on the cognitive-occupancy grid map to recover itself. It should be apparent that the described embodiment is also applicable to other scenarios, such as when the robot 100 is pushed or when the laser sensor 125 is blocked, causing the robot 100 to be displaced. It is noted that in some regions of the office environment, such as a long corridor, the laser sensor 125 may not provide sufficient information for accurate laser-map matching. Even though an ever-visited-place with an accurate robot pose is found during the self-exploration process, the pose might not be successfully recovered due to a lack of sufficient laser information for laser-map matching. In this case, a subsequent trial is attempted whenever the current self-recovery fails, until the robot 100 is finally properly recovered.
  • the ever-visited-places which have resulted in pose recovery failures may be excluded for the subsequent self-exploration process.
  • the performance of the pure random exploration can be significantly improved by constraining the searching behaviours with some conditions. For example, if the robot 100 cannot find an ever-visited-place along a trajectory from position A to position B, the randomly generated poses within a band of this trajectory can be excluded. Subsequently, the efficiency of the self- exploration can be improved.
  • the lost recovery method 400 of the described embodiment seamlessly integrates map creation and updating, dynamic path planning, place recognition and localization of the mobile robot 100 using a cognitive-occupancy grid map.
  • the cognitive map 211 integrates visual cue-based episodic memory, the mechanism that endows humans and animals with the capacity to learn and recall experiences in the context of space, and thus the cognitive map 211 is suitable for path planning and navigation.
  • the cognitive map 211 in robotics acquires, stores, and maintains information about the environment as spatial knowledge which is represented by various topological relationships within the cognitive map 211.
  • the cognitive map 211 which encodes visual cues of its environments provides an intuitive way to assist the robot 100 to self-recover by itself when the robot 100 is lost.
  • the proposed method 400 may not require modeling of uncertainties of landmarks, compared to the traditional probabilistic SLAM methods. It is noted that while the cognitive map 211 is able to perform localization with a high accuracy, its ability to map the physical environment is relatively poor when closing a large loop due to odometry drift when the robot 100 has to travel long distances.
  • the cognitive map 211 also does not perform well when used for path planning and obstacle avoidance during robot navigation. However, the limitations of the cognitive map 211 are complemented by the occupancy grid map 212. Therefore, the cognitive-occupancy grid map 213, which synergizes the cognitive map 211 and the occupancy grid map 212, is designed to be implemented with the proposed method to address the lost recovery problem.
  • the proposed method is implemented on a mobile robot that performs autonomous mapping, navigation and localization, lost detection, self-exploration and pose recovery.
  • the described embodiment is particularly advantageous since the proposed method 400 includes map creation and updating and dynamic path planning via the self-exploration process.
  • the proposed algorithm can thus be applicable to more scenarios.
  • the described embodiment thus proposes a robot system that is capable of detecting if the robot 100 is displaced from its navigational path, and re- localizes to return to its navigational path.
  • the robot 100 is able to recover from a deliberate displacement from its known position or location.
  • a cognitive-occupancy grid map technique is adopted, which synergizes the cognitive map technique and the occupancy grid map technique to solve the lost robot self-recovery problem.
  • a robot system comprises an offline global map builder such as the offline pre-building stage 210.
  • the global map builder is arranged to construct an occupancy grid map that represents the location of the robot in relation to its surrounding, based on point cloud data captured using a laser source.
  • the map builder further constructs a cognitive map that captures visual cues which are acquired, classified, and stored, each visual cue being associated with the position (pose) or location of the robot at the time of capture.
  • the robotic system further comprises an online navigator for path planning and obstacle avoidance.
  • the robotic system further comprises a lost detection & recovery module for ascertaining if the robot has been displaced; the module further comprises a self-exploratory module, which is triggered if the robot is ascertained to be displaced or lost.
  • the module searches for previously traversed or visited location that is classified by the cognitive-occupancy grid map. Upon triggering or flagging that a visited location has been recognized, a visual cue associated with the pose of the robot is recalled from the cognitive map; and initializes a pose re-initialisation and recovery module; and resumes navigation. As a result, the robotic system can self-recover from a displaced or lost situation.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Databases & Information Systems (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Optics & Photonics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

Method 400 and system of returning a displaced autonomous mobile robot 100 to its planned navigational path based on a cognitive-occupancy grid map are disclosed. The cognitive-occupancy map includes a set of visual cues associated with locations along the planned navigational path. The method 400 comprises: (i) upon detection that the robot 100 is displaced, constructing a local occupancy grid map of a vicinity of the displaced robot 100 at step 231, (ii) selecting a destination point in the local occupancy grid map at step 231a, (iii) navigating the mobile robot 100 to the destination point along a designated path at step 231b, (iv) as the mobile robot 100 travels along the designated path, searching for a location that corresponds to a visual cue defined in the cognitive-occupancy map at step 232, and (v) upon finding the location, recalling a pose of the robot 100 associated with the visual cue 442 at step 234 to return the robot 100 to the planned navigational path.

Description

METHOD AND SYSTEM FOR RETURNING A DISPLACED AUTONOMOUS MOBILE ROBOT TO ITS NAVIGATIONAL PATH
TECHNICAL FIELD
The present disclosure relates to mobile robot localization and navigation, and more particularly to returning a displaced autonomous mobile robot to its navigational path.
BACKGROUND
For mobile robots, spatial localization is a fundamental competency that an autonomous robot should possess since information on the robot’s location with respect to its environment is needed as input for making decisions on further actions. Existing work introduced an adaptive Monte Carlo localization approach (AMCL) for mobile robot navigation and localization using a particle filter. Given an internal map of its environment, the algorithm estimates the position and orientation (collectively referred to as pose) of the robot in the internal map as it tracks its movement in the environment. As the robot moves, its sensors provide updated information about its surrounding, and the algorithm assigns and updates the weight to each particle/point in the configuration space according to a likelihood of the robot occupying the space. The particle filter therefore represents a probability distribution of the robot’s likely location in the configuration space. The AMCL is robust to sensor noise arising from noisy environments, since it is able to compensate for accumulated sensor errors, e.g. errors in the raw odometry data during navigation. Traditional probabilistic mapping algorithms are therefore used to build the internal maps with input from perception sensors, such as a laser range finder. Coupled with an advanced localization module such as the AMCL, robot navigation and localization can be achieved reliably.
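The particle-weighting step described above can be made concrete with a minimal Monte Carlo localization update. The sketch below is a generic illustration in Python under assumed interfaces (the `measurement_likelihood` callable stands in for a sensor model such as a laser likelihood field); it is not the AMCL implementation referenced here, which additionally adapts the particle count.

```python
import numpy as np

def mcl_update(particles, weights, measurement, measurement_likelihood):
    """One measurement update of a plain Monte Carlo localization filter.

    particles: (N, 3) array of candidate poses (x, y, theta).
    weights: (N,) array of current particle weights.
    measurement_likelihood: assumed callable returning p(z | pose, map).
    """
    # Re-weight each particle by how well its pose explains the reading.
    weights = weights * np.array(
        [measurement_likelihood(measurement, pose) for pose in particles])
    weights = weights / weights.sum()  # normalize into a distribution

    # Resample: well-matching poses are duplicated, poor ones die out,
    # concentrating the filter on the robot's likely location.
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```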
However, sometimes a robot may unexpectedly fail to track itself in its environment during navigation. This can happen as a result of abrupt wheel slippage resulting in significant errors in the odometry input, or when the robot unexpectedly collides with an obstacle, or when the robot’s sensors are blocked, or when the robot is physically moved to another place (or kidnapped). The well-known robot kidnapping problem is a special global localization issue in mobile robotics. Since a kidnapped robot will fail to track its current position due to a gap in the information collected by its sensors, this will result in invalid path planning and therefore, the robot will be unable to perform unfinished navigation tasks. It is therefore important for a mobile robot to have the ability to re-localize itself after it unexpectedly loses track of its pose relative to its surroundings due to abrupt wheel slippage, unsteady movements on uneven floors, collision with obstacles, sensor blocking or kidnapping.
Existing work on re-localization does not provide a satisfactory solution. For example, despite AMCL’s ability for robot navigation and localization, it is not able to effectively re-localize or return to its original navigational path (self-recover) within a pre-built global map because the algorithm depends on the pre-built global map which is not effective for a robot that is lost or displaced.
A further existing work on localization failure detection adopts a semi-globalization approach for failure recovery. However, the approach is only suitable for abrupt wheel slippages due to uneven flooring as the algorithm requires that the robot’s pose does not change significantly. In addition, the approach also requires manual operation for the robot to recover its pose. Machine learning techniques have been applied to match the current laser scan to an offline database in an attempt to re-establish a robot’s position in the internal map. However, this only works if the lost robot is in a known environment i.e. within the pre-built global map. The work also does not address the system awareness of when the robot is kidnapped.
Existing works require that the robot is lost within a known environment and thus, this has limited applications. The lost recovery problem then becomes just a localization problem. Therefore, it is desirable to provide ways for a displaced autonomous mobile robot to return to its planned navigational path in order to address the problems mentioned in the existing prior art and/or to provide the public with a useful choice.
SUMMARY
According to a first aspect, there is provided a method of returning a displaced autonomous mobile robot to its planned navigational path, the mobile robot having a sensor and a camera, the method comprising, before autonomous deployment of the robot, (i) constructing an occupancy grid map based on data received from the sensor of the robot’s vicinity along the planned navigational path, the collected data including robot poses along the planned navigational path; and (ii) constructing a cognitive map of the planned navigational path based on a set of visual cues captured by the camera of the robot’s vicinity along the planned navigational path, the visual cues being associated with the robot poses in the occupancy grid map; and, after autonomous deployment of the robot, (iii) upon detection that the robot is displaced, constructing a local occupancy grid map of a vicinity of the displaced robot; (iv) selecting a destination point in the local occupancy grid map; (v) navigating the mobile robot to the destination point along a designated path; (vi) as the mobile robot travels along the designated path, searching for a location that corresponds to a visual cue of the visual cue set defined in the cognitive map; and (vii) upon finding the location, recalling the robot pose associated with the visual cue to return the robot to the planned navigational path.
Advantageously, as discussed in the described embodiment, the robot can explore its vicinity (whether within or outside a global grid map associated with the planned navigation path) so that the displaced robot has the ability to rediscover the planned navigational path by identifying visual cues in its vicinity which are associated with locations on the navigational path.
Preferably, the method may further comprise, when the mobile robot is travelling along its planned navigation path, detecting if the robot is displaced by determining whether or not there is misalignment between the robot’s actual location and the planned navigation path as defined by the occupancy grid map. In one specific example, the robot may be detected as being displaced only when the misalignment persists for at least 2 seconds.
Preferably, the local occupancy grid map may be constructed when the mobile robot is stationary. The method may also comprise rotating the robot 360° to construct the local occupancy grid map. In a specific exemplary implementation, the destination point may be randomly selected from the local occupancy grid map.
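The random destination draw can be pictured with the following sketch, which assumes the local occupancy grid map is a 2D integer array with a typical free/occupied/unknown cell encoding, and uses the 6×6 m search bound of the described embodiment; the names and the encoding are assumptions for illustration, not the patent's code.

```python
import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 100, -1  # assumed occupancy-grid cell encoding

def sample_valid_destination(grid, resolution_m, robot_cell,
                             bound_m=6.0, max_tries=1000):
    """Randomly pick a free cell within a bound_m x bound_m window centred
    on the lost robot. A draw is invalid (cf. Figures 6c and 6d) if it
    lands on an occupied or unknown cell, in which case it is redrawn.
    """
    half = max(1, int(bound_m / (2 * resolution_m)))  # half-window in cells
    r0, c0 = robot_cell
    rng = np.random.default_rng()
    for _ in range(max_tries):
        r = int(rng.integers(max(r0 - half, 0),
                             min(r0 + half, grid.shape[0] - 1) + 1))
        c = int(rng.integers(max(c0 - half, 0),
                             min(c0 + half, grid.shape[1] - 1) + 1))
        if grid[r, c] == FREE:  # valid destination: known free space
            return r, c
    return None  # caller may enlarge the search area and retry
```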
Preferably, the method may further comprise collecting further information of the robot’s vicinity while the robot navigates to the destination point in step (iii); and updating the local occupancy grid map with the further information.
If no location is found which corresponds to the visual cue, the method may then repeat steps (iii) to (v) until such a location is identified. Recalling the robot pose in step (vi) may comprise replacing the local occupancy grid map with the occupancy grid map. In that case, the method may further comprise performing a 360° rotation of the robot to converge the robot’s laser with the occupancy grid map.
According to a second aspect, there is provided a method of returning a displaced autonomous mobile robot to its planned navigational path based on a cognitive-occupancy grid map, the cognitive-occupancy grid map having a set of visual cues associated with locations along the planned navigational path, the method comprising (i) upon detection that the robot is displaced, constructing a local occupancy grid map of a vicinity of the displaced robot; (ii) selecting a destination point in the local occupancy grid map; (iii) navigating the mobile robot to the destination point along a designated path; (iv) as the mobile robot travels along the designated path, searching for a location that corresponds to a visual cue of the visual cue set defined in the cognitive-occupancy grid map; and (v) upon finding the location, recalling a pose of the robot associated with the visual cue to return the robot to the planned navigational path.
The methods of the first and second aspects may be performed by the mobile robot.
According to a third aspect, there is provided a system for returning a displaced autonomous mobile robot to its planned navigational path, the mobile robot having a sensor and a camera, the system comprising a global map builder configured to construct an occupancy grid map based on data received from the sensor of the robot’s vicinity along the planned navigational path, the collected data including robot poses along the planned navigational path, and construct a cognitive map of the planned navigational path based on a set of visual cues captured by the camera of the robot’s vicinity along the planned navigational path, the visual cues being associated with the robot poses in the occupancy grid map, before autonomous deployment of the robot; a local map builder configured to construct a local occupancy grid map of a vicinity of the displaced robot, after autonomous deployment of the robot and upon detection that the robot is displaced; a path planner configured to select a destination point in the local occupancy grid map; a navigator configured to navigate the mobile robot to the destination point along a designated path; a place recognition module configured to identify a location along the designated path that corresponds to a visual cue of the visual cue set defined in the cognitive map; and a recovery module configured to recall the robot pose associated with the visual cue to return the robot to the planned navigational path.
Preferably, the system may include a lost detection module configured to detect if the robot is displaced by determining whether or not there is misalignment between the robot’s actual location and the planned navigation path as defined by the stored occupancy grid map. Preferably, the recovery module may further comprise a pose-reinitialization module configured to replace the local occupancy grid map with the occupancy grid map. The system may be implemented in various ways and in one example, this may be by way of an autonomous mobile robot comprising the system as discussed above.
It should be apparent that features relating to one aspect may be relevant to the other aspects.
BRIEF DESCRIPTION OF THE DRAWINGS
An exemplary embodiment will be described with reference to the accompanying drawings in which:
Figure 1a shows an exemplary mobile robot according to a preferred embodiment;
Figure 1b shows the mobile robot of Figure 1 with its casing removed to show its internal components;
Figure 2 is a block diagram of a system architecture of the mobile robot of Figure 1;
Figure 3 is a flow diagram illustrating actions performed by the mobile robot of Figure 1 to self-recover when the mobile robot is lost;
Figures 4a - 4d are grid images showing different laser alignments between the robot’s laser scans and the occupancy grid map constructed during a map pre-building stage of the system architecture of Figure 2;
Figure 5 is a flow diagram of an exemplary method for returning a lost robot to its navigational path performed by the robot of Figure 1;
Figures 6a - 6d are grid images of a local occupancy grid map with valid and invalid destination points selected during the exemplary method of Figure 5;
Figure 7 illustrates a cognitive-occupancy grid map constructed based on the flow diagram of Figure 3 and used during evaluation of the robot; and
Figures 8a - 8c illustrate a scenario of how the robot of Figure 1 performs lost recovery after the robot is kidnapped.
DETAILED DESCRIPTION
The following description contains specific examples for illustrative purposes. The person skilled in the art would appreciate that variations and alterations to the specific examples are possible and within the scope of the present disclosure. The figures and the following description of the particular embodiments should not take away from the generality of the preceding summary.
Figure 1a illustrates a mobile robot 100 which can self-recover when the mobile robot 100 is lost or displaced according to a preferred embodiment. The mobile robot 100 includes an outer casing 110 and Figure 1b shows the mobile robot 100 with the outer casing omitted to show a skeleton 120. As can be seen from the skeleton 120, the mobile robot 100 includes a robot base 121 having wheels 122 equipped with wheel encoders 123 (not shown in Figure 1b, but see Figure 2) for providing raw odometry data from the wheels 122 when the mobile robot 100 travels. In this embodiment, the mobile robot 100 includes a camera in the form of an RGB-D sensor 124, and a laser sensor 125 which operates as a high resolution laser range finder. The skeleton 120 includes an elongate frame 126 supported by the robot base 121 and the elongate frame 126 extends upwards from the robot base 121. The RGB-D sensor 124 is mounted to and near the top of the elongate frame 126 around 90 cm from the ground, and the laser sensor 125 is mounted above the robot base 121 around 30 cm from the ground. The wheel encoders 123, the RGB-D sensor 124 and the laser sensor 125 cooperate together to provide input of the robot’s vicinity or surrounding to enable robot localization and navigation. The mobile robot 100 may also include other perception sensors.
Figure 2 is a schematic block diagram of a system architecture of the mobile robot 100. The mobile robot 100 includes a processor (or controller) 127 arranged to be communicatively coupled to the wheel encoders 123, the RGB-D sensor 124, the laser sensor 125 and also to control movement of the wheels 122. The system architecture also includes a global map builder 201 for constructing a cognitive map 211 and an occupancy grid map 212 based on the output of the RGB-D sensor 124 and the laser sensor 125 respectively. For ease of reference, the term ‘cognitive-occupancy grid map’ or ‘global map’ is used to refer to both the cognitive map 211 and the occupancy grid map 212 collectively. The cognitive-occupancy grid map 213 is stored in computer readable memory. The processor 127 is specifically configured to execute instructions of various modules stored in computer readable memory and in particular, a lost detection module 128 and a self-exploration module 150. The self-exploration module 150 includes sub-modules and these comprise a map builder 151, a path planner 152, a navigator 153, a place recognition module 154 and a recovery module 155. The operation of each of these modules will be explained next in relation to Figure 3, which illustrates a flow diagram having two stages to explain how the mobile robot 100 recovers itself when the mobile robot 100 is lost.
As illustrated in Figure 3, a method 200 of recovering the mobile robot 100, when it is lost or displaced, includes two stages according to this embodiment and these comprise an offline global map building stage 210, and an online navigation and lost recovery stage 220.
During the offline global map building stage 210, a remote controller is used to control the mobile robot 100 manually to explore the environment along a designated route or navigational path from a starting point to a destination or target so as to collect training data. The RGB-D sensor/camera 124 captures images of the environment or the robot’s vicinity as the robot 100 moves along the navigational path. At step 221, the captured images are then used by the processor 127 to construct/build the cognitive map 211. Visual cues are acquired, learned and stored in the cognitive map 211 during the map building process. The visual cues of the cognitive map are associated with actual poses of the robot 100 in the occupancy grid map using a continuous attractor network; in this way, the cognitive map 211 is associated with or correlated to the occupancy grid map 212.
Simultaneously with step 221, during the movement, the laser sensor 125 also collects data of the environment which the robot 100 has explored along the navigational path and the collected data is used to construct an occupancy grid map 212 at step 222. In this embodiment, the occupancy grid map 212 is built using the ROS gmapping package. Alternatively, other map building algorithms known to the skilled person may be used to construct the occupancy grid map 212. At 223, the cognitive-occupancy grid map 213 is thus formed which comprises the cognitive map 211 and the occupancy grid map 212.
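The cue-to-pose association at the heart of the cognitive map can be pictured with a deliberately simplified sketch. The actual system stores RatSLAM-style visual experiences linked through a continuous attractor network; this toy version reduces place recognition to a nearest-descriptor lookup, with the feature descriptor and match threshold being assumptions for illustration.

```python
import numpy as np

class CognitiveMapSketch:
    """Toy stand-in for the cognitive map 211: stores (visual cue, pose)
    pairs learned along the navigational path and recalls the pose of the
    best-matching cue during lost recovery."""

    def __init__(self, match_threshold=0.2):
        self.cues = []    # feature descriptors of captured images
        self.poses = []   # associated robot poses (x, y, theta) in the grid map
        self.match_threshold = match_threshold

    def learn(self, descriptor, pose):
        # Offline stage 210: store the visual cue and associate it with the
        # robot pose in the occupancy grid map at the time of capture.
        self.cues.append(np.asarray(descriptor, dtype=float))
        self.poses.append(pose)

    def match_visual_cue(self, descriptor):
        # Online stage: compare a live image descriptor against the stored
        # visual experiences and recall the associated pose if close enough.
        if not self.cues:
            return None
        dists = [np.linalg.norm(np.asarray(descriptor, dtype=float) - c)
                 for c in self.cues]
        best = int(np.argmin(dists))
        return self.poses[best] if dists[best] < self.match_threshold else None
```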
Upon generation of the cognitive-occupancy grid map 213, the robot 100 is ready to be deployed for fully autonomous operation along the designated route or navigation path. At the navigation and lost recovery stage 220, the robot 100 is deployed to navigate autonomously at step 224 along the navigation path from a starting point to a target location 226. The occupancy grid map 212 is used for localization and obstacle avoidance during the navigation of the robot 100 to the target location 226. The processor 127 also executes the lost detection module 128 at steps 225 and 227, almost immediately after deployment of the mobile robot 100, and the lost detection module 128 is configured to run continuously as the mobile robot 100 makes its way from the starting point to the target location 226 in order to detect whether or not the robot 100 is lost while navigating along the navigation path. As long as the robot 100 is not lost, the robot 100 would continue to move along the navigation path towards its target location 226. Details of how the lost detection module 128 detects whether or not the robot 100 is lost will be described later. When the robot 100 is detected to be lost, the self-exploration module 150 is then executed by the processor 127 to execute self-exploration steps 230.
Broadly, the self-exploration module 150 starts by constructing, at step 231, a local vicinity occupancy grid map of the displaced robot's surroundings. This enables the robot 100 to explore its vicinity in search of an ever-visited-place at step 232. Once the robot 100 finds an ever-visited place corresponding to a visual cue defined in the cognitive map 211 of the global map 213, an accurate pose associated with the visual cue is recalled from the cognitive map 211 and the occupancy grid map 212 is reloaded at step 233.
Thereafter, at step 234, the processor 127 switches to the recovery module 155 and executes pose re-initialization. After the robot 100 has returned to its original pose prior to being displaced, the robot 100 resends the target location and the navigator 153 guides the robot 100 to continue to its target location 226.
The actions performed by the lost detection module 128, and the lost recovery performed by the self-exploration module 150, are discussed in further detail below.
Lost detection
Three scenarios are considered for lost detection: (1) the robot 100 is kidnapped, (2) the laser sensor 125 is blocked, and (3) the robot 100 is pushed. Although only three scenarios are discussed, it is noted that the lost detection module 128 may also be used in other lost robot scenarios resulting from unstable movements, abrupt wheel slippage, etc.
Figures 4(a)-(d) illustrate grid images 300 with different laser alignments between the robot's laser scans 301, 302, 303, 304 recorded at time ti, and the map boundary points 350 in the occupancy grid map 212.
The occupancy grid map 212 may be accurately learned from high-resolution and long-range laser data using the Rao-Blackwellized particle filter and thus, an accurate occupancy grid map 212 may be obtained. During navigation, if the robot 100 is well localized, the laser scan 301 is properly aligned with the map boundary 350 of the occupancy grid map 212, as shown in Figure 4a. However, if the robot 100 is not well localized, the laser scan 302, 303, 304 does not match the map boundary 350 of the occupancy grid map 212 properly, as illustrated in Figures 4b to 4d. Figure 4b shows the scenario where the robot 100 is pushed, which causes the laser scan 302 to be rotated. In Figure 4c, the robot 100 is kidnapped from a point A to a point B. However, in this instance, there is no input from the raw odometry data and therefore the robot 100 is unable to update its position in the map. As such, based on the laser scan 303 in Figure 4c, the robot's position appears not to have changed, i.e. it is kept at point A. Finally, Figure 4d shows the scenario where the laser sensor 125 is blocked; the laser scan 304 of Figure 4d is therefore partially obscured.
A laser-map matching error function is defined herein as

$$E_{t_i} = \frac{1}{N(L_{t_i})}\sum_{l \in L_{t_i}} d(l, B) \qquad (1)$$

wherein $N(\cdot)$ and $d(\cdot,\cdot)$ denote the number of points in a point set and the Euclidean distance from a point to a set, respectively, i.e. $d(l, B) = \min_{b_j \in B} \lVert l - b_j \rVert$. $L_{t_i}$ is the set of laser beam endpoints at the current time $t_i$, $B$ is the set of the boundary points of the occupancy grid map 212, and $b_j$ is any boundary point. Eq. (1) intuitively determines whether the 2D scans from the laser range finder and the map are well aligned. A Boolean parameter $s_{t_i}$ is defined to determine whether the laser beams from the laser sensor 125 at the current time $t_i$ are well aligned with the occupancy grid map 212:

$$s_{t_i} = \begin{cases} 1, & \text{if } E_{t_i} > d \\ 0, & \text{otherwise} \end{cases} \qquad (2)$$

If the accumulated value of $s_{t_i}$ reaches $N_{lost}$ within a duration of 2-3 seconds, the robot 100 is identified to be lost. The thresholds $d$ and $N_{lost}$ need to be tuned for different environments. For example, in the present embodiment, $d$ is 1.5 m and $N_{lost}$ is 99.
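A possible implementation of the test of Eqs. (1) and (2), assuming the laser endpoints and map boundary points are available as 2D point arrays and using a k-d tree for the point-to-set distances, is sketched below in Python:

    # Sketch of the laser-map matching test of Eqs. (1)-(2): the mean
    # distance from each laser endpoint to the nearest map boundary
    # point is compared against the threshold d, and the robot is
    # declared lost once N_lost misaligned scans accumulate.
    import numpy as np
    from scipy.spatial import cKDTree

    D_THRESHOLD = 1.5   # d, metres (value used in the described embodiment)
    N_LOST = 99         # N_lost (value used in the described embodiment)

    def scan_misaligned(laser_points: np.ndarray,
                        boundary_tree: cKDTree) -> bool:
        """Evaluate s_{t_i} for one scan: True if E_{t_i} > d."""
        dists, _ = boundary_tree.query(laser_points)   # d(l, B) per beam
        return dists.mean() > D_THRESHOLD              # E_{t_i} of Eq. (1)

    def is_lost(recent_scans, boundary_points: np.ndarray) -> bool:
        """Accumulate s_{t_i} over the scans of the last 2-3 seconds."""
        tree = cKDTree(boundary_points)
        return sum(scan_misaligned(s, tree) for s in recent_scans) >= N_LOST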
Lost recovery
The lost recovery problem formulation as addressed by the self-exploration module 150 is given below.
Given an initial configuration space X, an initial obstacle space Xobs, a destination map Mgoal, a starting point qstart in a vicinity of the destination map, a kinematic constraint Kcon, and a possible cost function c(·), find a (or an optimal) path from the starting point qstart to an ever-visited-place which is defined by the destination map.
The initial configuration and obstacle spaces are built up by the robot 100 without any movement, or with a safe movement such as a 360-degree rotation on the spot, i.e. while the robot 100 remains stationary. Subsequently, the local vicinity occupancy grid map can be built up at step 231 by the local map builder 151 of the self-exploration module 150, and such a map may also be called an on-line local map.
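A safe 360-degree rotation on the spot may, for example, be commanded by publishing angular velocities for one full revolution while the local map builder runs; in this assumed sketch the /cmd_vel topic name and the angular speed are illustrative choices:

    # Sketch: rotate the stationary robot through 360 degrees so that
    # its sensors sweep the vicinity while the local map builder runs.
    import math
    import rospy
    from geometry_msgs.msg import Twist

    rospy.init_node('initial_sweep')
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
    rate = rospy.Rate(10)                    # 10 Hz command rate
    turn = Twist()
    turn.angular.z = 0.5                     # rad/s, a slow, safe spin
    duration = 2.0 * math.pi / turn.angular.z
    end = rospy.Time.now() + rospy.Duration.from_sec(duration)
    while rospy.Time.now() < end and not rospy.is_shutdown():
        pub.publish(turn)
        rate.sleep()
    pub.publish(Twist())                     # stop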
The problem formulation on path planning addressed by the path planner 152 of the self-exploration module 150 is given below.
Given an initial configuration space X, an obstacle space Xobs, a vicinity starting point qstart, a vicinity destination point qgoal (or a destination region), a kinematic constraint in the map Kcon, and a possible cost function c(·), find a (or an optimal) path from the vicinity starting point qstart to the vicinity destination point qgoal.
There are two differences between the problem formulation on lost recovery and the problem formulation on path planning. They are as follows:
(1) Both the vicinity starting and destination points are exactly known in path planning, whereas only the starting point is known in lost recovery;
(2) Both the full configuration and obstacle spaces are available in path planning, while only the initial configuration and obstacle spaces are available in lost recovery.
If a vicinity destination point is randomly selected in the local vicinity occupancy grid map, then both the starting and destination points are available in the lost recovery problem. A path is then set up using the initial configuration and obstacle spaces for the robot 100 to explore the unknown environment. The local vicinity occupancy grid map is updated as the robot 100 navigates from the vicinity starting point to the vicinity destination point, such that there are more valid candidates to be selected in the subsequent rounds. In addition, it is unknown whether the randomly selected vicinity destination point is an ever-visited-place. It is thus also necessary to perform place recognition using the destination map Mgoal during the navigation.
As such, self-exploration can be extended from path planning by adding three new components:
(1) a new component to randomly select the destination point in the on-line local map (i.e. the local vicinity occupancy grid map);
(2) a new component to update the on-line local map;
(3) a new component to verify whether any point in the path is an ever-visited-place as defined by the destination map.
When the robot 100 is lost, the robot 100 loses track of its pose in the global map, i.e. the occupancy grid map 212, and the occupancy grid map is no longer effective for path planning, obstacle avoidance and localization. However, similar to a human being's spatial cognition and navigation ability, if the robot 100 can find an ever-visited place by self-exploring the environment, the robot 100 may retrieve the related visual cues that have been stored in the cognitive map 211. In other words, the robot 100 needs to find a path from its current (lost) location to an ever-visited-place that is defined by the cognitive-occupancy grid map.
Figure 5 is a flow diagram of an exemplary method 400 which elaborates further on the self-exploration steps 230, as performed by the mobile robot 100 to return itself to its navigational path.
Upon detection that the robot 100 is displaced or lost as illustrated in Figure 3, the map builder 151 is executed by the processor 127 at step 231 to construct the local vicinity occupancy grid map of the displaced robot’s surrounding or vicinity.
At step 231a, the path planner 152 selects a vicinity destination point within the local vicinity occupancy grid map. The path planner 152 includes a path planning algorithm to determine a designated path from the vicinity starting point to the vicinity destination point.
At step 231b, the navigator 153 navigates the robot 100 to the vicinity destination point along the designated path. During the navigation, the robot 100 collects more information in its vicinity. The local vicinity occupancy grid map is updated with the collected information as the robot 100 travels along the designated path 'looking' for ever-visited places.
It is noted that the vicinity destination point has to be valid before the path planner 152 can determine the designated path to the vicinity destination point and the navigator 153 can navigate the robot 100 to it. Figures 6a and 6b are images of the local vicinity occupancy grid map with examples of valid random poses 510, 520 of the robot 100, while Figures 6c and 6d are images of the local vicinity occupancy grid map with examples of invalid random poses 530, 540. If the random pose 510, 520 is valid, the robot 100 navigates to the randomly generated pose and simultaneously updates the local vicinity occupancy grid map. Otherwise, another pose is randomly selected until a valid pose is generated.
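The rejection sampling of a valid random pose can be illustrated as follows; the grid cell conventions and the robot-centred local frame are assumptions for this sketch, and the 6 x 6 m bound matches the search area noted below:

    # Sketch: rejection-sample a valid vicinity destination pose inside
    # the bounded search area. A pose is taken as valid when its grid
    # cell is known free space (conventions assumed here: 0 = free,
    # 100 = occupied, -1 = unknown, as in nav_msgs/OccupancyGrid).
    import math
    import random

    def sample_valid_pose(grid, resolution, origin, bound=6.0):
        """grid: 2D array of cell states; origin: (x0, y0) of cell (0, 0),
        both expressed in a local map frame centred on the lost robot."""
        while True:
            x = random.uniform(-bound / 2.0, bound / 2.0)
            y = random.uniform(-bound / 2.0, bound / 2.0)
            col = int((x - origin[0]) / resolution)
            row = int((y - origin[1]) / resolution)
            if 0 <= row < len(grid) and 0 <= col < len(grid[0]) \
                    and grid[row][col] == 0:          # known free cell
                theta = random.uniform(-math.pi, math.pi)
                return x, y, theta                    # valid random pose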
In this embodiment, the search area is restricted to a bounded area 500, i.e. 6 x 6 m² in the present embodiment. However, the search area can be enlarged at the cost of a possibly longer search time. The Rao-Blackwellized particle filter is adopted to build a new occupancy grid map, which incorporates the dynamic window approach (DWA) for path planning and obstacle avoidance during the self-exploration. The purpose is to guarantee that the robot 100 navigates to the target location 226 efficiently and safely.
Referring again to Figure 5, at step 232, the place recognition module 154 checks the 'live' images obtained via the RGB-D sensor 124 to survey the vicinity as the mobile robot 100 moves along the designated path, in order to identify a marker that corresponds to a visual cue defined in the cognitive-occupancy grid map 213.
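Purely as an illustrative stand-in for the RatSLAM-style view matching actually used, a simple feature-based comparison between a 'live' image and a stored visual experience could look like this in OpenCV; the function name and the match threshold are assumptions:

    # Sketch: decide whether a 'live' image matches a stored visual
    # experience, using ORB feature matching as a simple stand-in for
    # the view comparison of the described system. Images are expected
    # as 8-bit grayscale numpy arrays.
    import cv2

    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def matches_experience(live_img, stored_img, min_matches=40):
        _, live_desc = orb.detectAndCompute(live_img, None)
        _, stored_desc = orb.detectAndCompute(stored_img, None)
        if live_desc is None or stored_desc is None:
            return False
        good = matcher.match(live_desc, stored_desc)
        return len(good) >= min_matches   # treat as an ever-visited place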
If the marker is identified, the robot 100 relocates itself using the cognitive map 211. Otherwise, if the mobile robot 100 reaches the vicinity destination point without identifying the marker, a new destination point is generated in the updated local vicinity occupancy grid map. The robot 100 then repeats steps 231a, 231b and 232 until the robot 100 finds an ever-visited-place. It is assumed that every point in the local vicinity occupancy grid map has the same probability of being an ever-visited place. If this is not true, a better or smarter strategy can be adopted for step 231a to select the valid destination point. For example, a cost function can be defined for the lost recovery so as to find an optimal lost recovery solution.

Once an ever-visited place is found, the occupancy grid map 212 is reloaded at step 233. However, the obstacles previously recorded around the robot 100 during self-exploration are still in the robot's system. If the local vicinity occupancy grid map is not cleared, this may result in invalid paths found by the global and local planners of the robot 100, and the robot 100 might collide with obstacles. Therefore, related obstacle information of the local vicinity occupancy grid map should be cleared by resetting the global and local cost maps in the system. The global and local cost maps are used for obstacle avoidance during robot navigation. They maintain information about where the robot 100 should navigate in the form of a grid map, and use both the laser sensor data and information from the local occupancy grid map to store and update information about obstacles in the dynamic office environment. Each cell in the cost maps can have free, occupied or unknown status, and each status has a cost value assigned to it upon projection into the cost maps.
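In a ROS-based implementation such as the one described, resetting the global and local cost maps is exposed by the navigation stack as the standard move_base clear_costmaps service; a minimal assumed sketch is:

    # Sketch: clear stale obstacle information by calling the standard
    # move_base service that resets the global and local cost maps.
    import rospy
    from std_srvs.srv import Empty

    rospy.init_node('clear_costmaps_client')
    rospy.wait_for_service('/move_base/clear_costmaps')
    clear = rospy.ServiceProxy('/move_base/clear_costmaps', Empty)
    clear()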
At step 234, the recovery module 155 recalls a pose of the robot 100 that is associated with the visual cue. Next, the system re-starts the navigator 153. For pose re-initialization, the robot 100 rotates itself 360 degrees in order to converge the laser beams with the occupancy grid map 212. After the pose re-initialization process, the target goal is re-sent and the robot 100, having self-recovered, is able to continue with its unfinished tasks or continue to the target location 226. It should be noted that during the pose recovery process, the laser scans do not need to match the map exactly. Since the robot 100 re-starts navigation, it will be able to localize itself over the rest of the way by itself. Pseudo-code for the proposed lost recovery algorithm is provided below:
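The original pseudo-code is presented as a figure that is not reproduced in this text; the following Python-style sketch reconstructs the algorithm from the steps described above. All helper names (build_local_map_with_safe_rotation, sample_valid_pose, plan_path and so on) are illustrative assumptions rather than the names used in the actual implementation:

    # Hedged reconstruction of the lost recovery algorithm from the
    # steps described above (step numbers refer to Figures 3 and 5).
    def lost_recovery(cognitive_map, global_grid_map):
        local_map = build_local_map_with_safe_rotation()   # step 231
        while True:
            goal = sample_valid_pose(local_map)            # step 231a
            plan = plan_path(local_map, goal)              # path planner 152
            for pose in follow_path(plan):                 # step 231b
                update(local_map)                          # incorporate new scans
                cue = current_visual_input()               # live RGB-D image
                recalled = cognitive_map.recall(cue)       # step 232
                if recalled is not None:
                    reload_map(global_grid_map)            # step 233
                    clear_costmaps()                       # reset cost maps
                    rotate_360_and_reinitialise(recalled)  # step 234
                    resend_target_goal()
                    return                                 # robot recovered
            # destination reached without a match: resample a new
            # destination in the updated local map and try again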
Evaluation
The mobile robot 100 is evaluated to test its operational readiness. For the evaluation, the system architecture is implemented and runs on the Robot Operating System (ROS) under Ubuntu 12.04. Table 1 includes the related parameter settings used in the lost robot recovery method of Figure 3.
Table 1: parameter settings (values as given in the description)

d (laser-map matching threshold): 1.5 m
Nlost (misaligned scan count threshold): 99
tresting (resting time when switching modules): 3 s
search area (bounded self-exploration region): 6 x 6 m²
For the evaluation, the mobile robot 100 is configured to navigate in an office environment as illustrated in Figure 7. During the offline global map building stage 210, the remote controller is used to control the robot 100 to explore the office environment in which the robot 100 is meant to operate. The laser sensor 125 is used to build the occupancy grid map 212 using the ROS gmapping package. The images captured from the RGB-D sensor 124 are used to build the cognitive map 211 using an improved version of RatSLAM. The cognitive map 211 has a topological structure in which the nodes are associated with the corresponding visual cues and locations, denoted as visual experiences. The visual experiences serve as a space memorization mechanism to remember all the ever-visited places that the robot 100 has been to. The cognitive map 211 contains a set of spatial coordinates that the robot 100 has travelled through, which are actual robot poses. In this way, the cognitive-occupancy grid map (or global map) 213 is constructed.
Figure 7 illustrates the constructed cognitive-occupancy grid map 213, although in Figure 7 the reference 600 is used instead. The global map 600 is constructed in an office environment with an area of around 21 x 18 m².
The resulting occupancy grid map 212, built from data collected by the laser sensor 125, represents the map of the office environment. This is readily apparent when compared with the raw odometry 602 received from the robot wheel encoders 123, which shows significant drift from the actual robot poses without any drift compensation mechanism in place; the raw odometry 602 runs outside of the mapped office environment. Visited visual experiences 604 are represented as square boxes in the cognitive map 211. The robot pose trajectory (not shown in Figure 7) is estimated by the Rao-Blackwellized particle filter. In this evaluation, the mapping process took 780.66 seconds and the robot 100 travelled 265.11 meters around the office environment at an average speed of 0.34 m/s. After completing the map building process, the mobile robot 100 is ready to be deployed, and the resulting cognitive-occupancy map 600, together with the self-exploration, lost detection and self-pose recovery modules 150, 128, are used to self-recover the robot 100 if the robot 100 is lost.
Furthermore, when the robot 100 switches from one module to another, a resting time tresting is provided to allow sufficient time for system updating. For example, when the pre-built grid map (i.e. the occupancy grid map 212) is reloaded, the system rests for 3 seconds so that there is sufficient time for reconfiguring the system and for path planning.
During navigation, the AMCL (Adaptive Monte Carlo Localization) technique is used for tracking the pose of the robot 100 against the occupancy grid map 212, together with path planning and obstacle avoidance, and the lost detection module 128 is executed to detect if the robot 100 is lost, as explained in relation to Figure 3. When the robot 100 is detected to be lost, the obstacle information is cleared by resetting the global and local cost maps, and the self-exploration module 150 is started to create the local vicinity occupancy grid map and to look for a marker which corresponds to a visual cue of an ever-visited place, or re-visited place. Thus, the incoming visual inputs captured from the RGB-D sensor 124 are compared with the historical visual experiences 604 stored in the cognitive map 211. If the latest input matches one of the visual experiences 604 stored in the cognitive map 211, it is considered to be a scene which has been seen/visited previously by the robot 100. Figures 8a - 8c depict a scenario of the robot 100 being kidnapped and the actions performed by the mobile robot 100 to self-recover.
In Figure 8a, the robot 100 is well localized during navigation (see the occupancy grid map 212 in Figure 4a) and moves towards a target location 226 along a planned or designated navigation path 810; it should be appreciated that the lost detection module 128 is running. The robot 100 is then kidnapped to another location outside of the planned or designated navigation path 810. After around 2 - 3 seconds, the lost detection module 128 detects that the robot 100 is lost and the self-exploration module 150 is triggered by the processor 127. In Figure 8b, the local map builder 151 creates a local vicinity occupancy grid map 820 of the lost/displaced robot's vicinity or surroundings, identifies a vicinity destination point 822 and maps out a designated path, and the robot 100 searches for all possible ever-visited-places along the designated path. In Figure 8c, the robot 100 finds an ever-visited-place 824 along the designated path, and steps 233 and 234 of Figure 5 are executed to re-initialise the robot's pose by rotating the robot 360°. The occupancy grid map 212 is reloaded and the robot 100 has returned to its planned navigation path. Thereafter, the recovered robot 100 continues on the planned navigation path until the robot 100 reaches its goal, i.e. the target location 226.

Figures 8a-8c illustrate the effectiveness of recovering the lost robot by creating a local vicinity occupancy map of the vicinity in which the robot is lost and trying to identify an ever-visited place (or familiar place) based on the cognitive-occupancy grid map. It should be apparent that the described embodiment is also applicable to other scenarios, such as when the robot 100 is pushed or when the laser sensor 125 is blocked, causing the robot 100 to be displaced.

It is noted that in some regions of the office environment, such as a long corridor, the laser sensor 125 may not provide sufficient information for accurate laser-map matching. Even though an ever-visited-place with an accurate robot pose is found during the self-exploration process, the pose might not be successfully recovered due to a lack of sufficient laser information for laser-map matching. In this case, a subsequent trial is attempted whenever the current self-recovery attempt fails, until the robot 100 is finally properly recovered.
Referring to Figure 7, it is noted that in the circled region 608, the laser sensor 125 did not provide sufficient information for accurate laser-map matching during the pose recovery process. Therefore, several trials were required to finally recover the robot 100. Table 2 shows the number of recovery attempts made by the lost robot 100 in the circled region 608.
To improve the performance, the ever-visited-places which have resulted in pose recovery failures may be excluded from the subsequent self-exploration process. Furthermore, the performance of the pure random exploration can be significantly improved by constraining the search behaviour with some conditions. For example, if the robot 100 cannot find an ever-visited-place along a trajectory from a position A to a position B, the randomly generated poses within a band of this trajectory can be excluded. The efficiency of the self-exploration can thereby be improved.
It should be appreciated that the lost recovery method 400 of the described embodiment seamlessly integrates map creation and updating, dynamic path planning, place recognition and localization of the mobile robot 100 using a cognitive-occupancy grid map.
The cognitive map 211 integrates visual cue-based episodic memory of the kind that endows humans and animals with the capacity to learn and recall experiences in the context of space, and the cognitive map 211 is thus suitable for path planning and navigation. The cognitive map 211 in robotics acquires, stores and maintains information about the environment as spatial knowledge, which is represented by various topological relationships within the cognitive map 211. Thus, the cognitive map 211, which encodes visual cues of its environment, provides an intuitive way to assist the robot 100 to recover by itself when the robot 100 is lost.
Furthermore, the proposed method 400 may not require modeling of the uncertainties of landmarks, compared to traditional probabilistic SLAM methods. It is noted that while the cognitive map 211 is able to perform localization with high accuracy, its ability to map the physical environment is relatively poor when closing a large loop, due to odometry drift when the robot 100 has to travel long distances. The cognitive map 211 also does not perform well when used for path planning and obstacle avoidance during robot navigation. However, these limitations of the cognitive map 211 are complemented by the occupancy grid map 212. Therefore, the cognitive-occupancy grid map 213, which synergizes the cognitive map 211 and the occupancy grid map 212, is designed to be implemented with the proposed method to address the lost recovery problem.
The proposed method is implemented on a mobile robot that performs autonomous mapping, navigation and localization, lost detection, self-exploration and pose recovery. The described embodiment is particularly advantageous since the proposed method 400 includes map creation and updating and dynamic path planning via the self-exploration process. The proposed algorithm is thus applicable to more scenarios.
The described embodiment thus proposes a robot system that is capable of detecting if the robot 100 is displaced from its navigational path, and re-localizes to return to its navigational path. In other words, the robot 100 is able to recover from a deliberate displacement from its known position or location. A cognitive-occupancy grid map technique is adopted, which synergizes the cognitive map technique and the occupancy grid map technique to solve the lost robot self-recovery problem.
In other embodiments, a robot system is proposed that comprises an offline global map builder such as that of the offline global map building stage 210. The global map builder is arranged to construct an occupancy grid map that represents the location of the robot in relation to its surroundings, based on point cloud data captured using a laser source. The map builder further constructs a cognitive map that stores visual cues which are acquired and classified, each visual cue being associated with the position (pose) or location of the robot at which it was captured.
In such other embodiments, the robotic system further comprises an online navigator for path planning and obstacle avoidance. The robotic system further comprises a lost detection and recovery module for ascertaining if the robot has been displaced; this module further comprises a self-exploratory module, which is triggered if the robot is ascertained to be displaced or lost. The module searches for a previously traversed or visited location that is classified by the cognitive-occupancy grid map. Upon triggering or flagging that a visited location has been recognized, a visual cue associated with the pose of the robot is recalled from the cognitive map; a pose re-initialisation and recovery module is initialized; and navigation is resumed. As a result, the robotic system can self-recover from a displaced or lost situation.

It should be clear that although the present disclosure has been described with reference to specific exemplary embodiments, various modifications may be made to the embodiments without departing from the scope of the invention as laid out in the claims. For example, although the robot 100 is described as being deployed in an office environment, it should be clear that the described embodiments can be suitably applied to a wide variety of operating environments. Similarly, although the framework is implemented on the mobile robot 100, the lost robot recovery framework can be implemented on a broad variety of autonomous robotic systems, including, for example, autonomous driving vehicles.
Further, various embodiments as discussed above may be practiced with steps in a different order as disclosed in the description and illustrated in the Figures. Modifications and alternative constructions apparent to the skilled person are understood to be within the scope of the disclosure.

Claims

1. A method of returning a displaced autonomous mobile robot to its planned navigational path, the mobile robot having a sensor and a camera, the method comprising, before autonomous deployment of the robot,
(i) constructing an occupancy grid map based on data received from the sensor of the robot’s vicinity along the planned navigational path, the collected data including robot poses along the planned navigational path; and
(ii) constructing a cognitive map of the planned navigational path based on a set of visual cues captured by the camera of the robot's vicinity along the planned navigational path, the visual cues being associated with the robot poses in the occupancy grid map; and, after autonomous deployment of the robot,
(iii) upon detection that the robot is displaced, constructing a local occupancy grid map of a vicinity of the displaced robot;
(iv) selecting a destination point in the local occupancy grid map;
(v) navigating the mobile robot to the destination point along a designated path;
(vi) as the mobile robot travels along the designated path, searching for a location that corresponds to a visual cue of the visual cue set defined in the cognitive map; and
(vii) upon finding the location, recalling the robot pose associated with the visual cue to return the robot to the planned navigational path.
2. A method according to claim 1, further comprising, when the mobile robot is travelling along its planned navigation path, detecting if the robot is displaced by determining whether or not there is misalignment between the robot's actual location and the planned navigation path as defined by the occupancy grid map.
3. A method according to claim 2, wherein the robot is detected as being displaced only when the misalignment persists for at least 2 seconds.
4. A method according to any preceding claim, wherein the local occupancy grid map is constructed when the mobile robot is stationary.
5. A method according to any preceding claim, further comprising rotating the robot 360° to construct the local occupancy grid map.
6. A method according to any preceding claim, wherein the destination point is randomly selected from the local occupancy grid map.
7. A method according to any preceding claim, further comprising collecting further information of the robot’s vicinity while the robot navigates to the destination point in step (iii); and updating the local occupancy grid map with the further information.
8. A method according to any preceding claim, wherein if no location is found which corresponds to the visual cue, repeating steps (iii) to (v) until such a location is identified.
9. A method according to any preceding claim, wherein recalling the robot pose in step (vi) comprises replacing the local occupancy grid map with the occupancy grid map.
10. A method according to claim 9, further comprising performing a 360° rotation of the robot to converge the robot’s laser with the occupancy grid map.
11. A method of returning a displaced autonomous mobile robot to its planned navigational path based on a cognitive-occupancy grid map, the cognitive-occupancy grid map having a set of visual cues associated with locations along the planned navigational path, the method comprising
(i) upon detection that the robot is displaced, constructing a local occupancy grid map of a vicinity of the displaced robot;
(ii) selecting a destination point in the local occupancy grid map;
(iii) navigating the mobile robot to the destination point along a designated path;
(iv) as the mobile robot travels along the designated path, searching for a location that corresponds to a visual cue of the visual cue set defined in the cognitive-occupancy grid map; and
(v) upon finding the location, recalling a pose of the robot associated with the visual cue to return the robot to the planned navigational path.
12. A system for returning a displaced autonomous mobile robot to its planned navigational path, the mobile robot having a sensor and a camera, the system comprising
a global map builder configured to construct an occupancy grid map based on data received from the sensor of the robot's vicinity along the planned navigational path, the collected data including robot poses along the planned navigational path, and construct a cognitive map of the planned navigational path based on a set of visual cues captured by the camera of the robot's vicinity along the planned navigational path, the visual cues being associated with the robot poses in the occupancy grid map, before autonomous deployment of the robot;
a local map builder configured to construct a local occupancy grid map of a vicinity of the displaced robot, after autonomous deployment of the robot and upon detection that the robot is displaced;
a path planner configured to select a destination point in the local occupancy grid map;
a navigator configured to navigate the mobile robot to the destination point along a designated path;
a place recognition module configured to identify a location along the designated path that corresponds to a visual cue of the visual cue set defined in the cognitive map; and
a recovery module configured to recall the robot pose associated with the visual cue to return the robot to the planned navigational path.
13. A system according to claim 12, further comprising a lost detection module configured to detect if the robot is displaced by determining whether or not there is misalignment between the robot's actual location and the planned navigation path as defined by the stored occupancy grid map.
14. A system according to claim 12 or 13, wherein the recovery module further comprises a pose-reinitialization module configured to replace the local occupancy grid map with the occupancy grid map.
15. An autonomous mobile robot comprising the system of any one of claims 12 to 14.
PCT/SG2019/050163 2018-03-28 2019-03-25 Method and system for returning a displaced autonomous mobile robot to its navigational path WO2019190395A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
SG11202009494YA SG11202009494YA (en) 2018-03-28 2019-03-25 Method and system for returning a displaced autonomous mobile robot to its navigational path

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
SG10201802591S 2018-03-28
SG10201802591S 2018-03-28

Publications (1)

Publication Number Publication Date
WO2019190395A1 true WO2019190395A1 (en) 2019-10-03

Family

ID=68062633

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SG2019/050163 WO2019190395A1 (en) 2018-03-28 2019-03-25 Method and system for returning a displaced autonomous mobile robot to its navigational path

Country Status (2)

Country Link
SG (1) SG11202009494YA (en)
WO (1) WO2019190395A1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2621564A (en) * 2022-08-10 2024-02-21 Dyson Technology Ltd A method and system for mapping a real-world environment


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160008982A1 (en) * 2012-02-08 2016-01-14 RobArt GmbH Method for automatically triggering a self-positioning process
US20180046153A1 (en) * 2016-07-10 2018-02-15 Beijing University Of Technology Method of Constructing Navigation Map by Robot using Mouse Hippocampal Place Cell Model
CN106949896A (en) * 2017-05-14 2017-07-14 北京工业大学 A kind of situation awareness map structuring and air navigation aid based on mouse cerebral hippocampal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MILFORD M. J. ET AL.: "RatSLAM: A Hippocampal Model for Simultaneous Localization and Mapping", IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, 2004. PROCEEDINGS. ICRA '04, 1 May 2004 (2004-05-01), New Orleans, LA, USA, pages 403 - 408, XP010768308 *

Cited By (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112362059A (en) * 2019-10-23 2021-02-12 北京京东乾石科技有限公司 Method, apparatus, computer device and medium for positioning mobile carrier
CN110609559A (en) * 2019-10-25 2019-12-24 江苏恒澄交科信息科技股份有限公司 Improved DWA dynamic window method for unmanned ship path following and obstacle avoidance
CN110716551A (en) * 2019-11-06 2020-01-21 小狗电器互联网科技(北京)股份有限公司 Mobile robot driving strategy determination method and device and mobile robot
CN110926476A (en) * 2019-12-04 2020-03-27 三星电子(中国)研发中心 Accompanying service method and device of intelligent robot
CN110926476B (en) * 2019-12-04 2023-09-01 三星电子(中国)研发中心 Accompanying service method and device for intelligent robot
CN111061266A (en) * 2019-12-12 2020-04-24 湖南大学 Night on-duty robot for real-time scene analysis and space obstacle avoidance
CN111221337A (en) * 2020-01-19 2020-06-02 弗徕威智能机器人科技(上海)有限公司 Construction method and system of robot grid map
CN111813882A (en) * 2020-06-18 2020-10-23 浙江大华技术股份有限公司 Robot map construction method, device and storage medium
CN111813882B (en) * 2020-06-18 2024-05-14 浙江华睿科技股份有限公司 Robot map construction method, device and storage medium
CN111881245A (en) * 2020-08-04 2020-11-03 深圳裹动智驾科技有限公司 Visibility dynamic map generation method and device, computer equipment and storage medium
CN111881245B (en) * 2020-08-04 2023-08-08 深圳安途智行科技有限公司 Method, device, equipment and storage medium for generating visibility dynamic map
CN111947666A (en) * 2020-08-21 2020-11-17 广州高新兴机器人有限公司 Automatic retrieving method for loss of outdoor laser navigation position
CN112344940A (en) * 2020-11-06 2021-02-09 杭州国辰机器人科技有限公司 Positioning method and device integrating reflective columns and grid map
CN112344940B (en) * 2020-11-06 2022-05-17 杭州国辰机器人科技有限公司 Positioning method and device integrating reflective columns and grid map
CN112378408B (en) * 2020-11-26 2023-07-25 重庆大学 Path planning method for realizing real-time obstacle avoidance of wheeled mobile robot
CN112378408A (en) * 2020-11-26 2021-02-19 重庆大学 Path planning method for realizing real-time obstacle avoidance of wheeled mobile robot
WO2022135317A1 (en) * 2020-12-22 2022-06-30 Globe (jiangsu) Co., Ltd. Robotic tool system and control method thereof
CN112928799A (en) * 2021-02-04 2021-06-08 北京工业大学 Automatic butt-joint charging method of mobile robot based on laser measurement
CN112965490A (en) * 2021-02-07 2021-06-15 京东数科海益信息科技有限公司 Method, apparatus and non-transitory computer-readable storage medium for controlling robot
CN113110457B (en) * 2021-04-19 2022-11-15 杭州视熵科技有限公司 Autonomous coverage inspection method for intelligent robot in indoor complex dynamic environment
CN113110457A (en) * 2021-04-19 2021-07-13 杭州视熵科技有限公司 Autonomous coverage inspection method for intelligent robot in indoor complex dynamic environment
CN113156956A (en) * 2021-04-26 2021-07-23 珠海市一微半导体有限公司 Robot navigation method, chip and robot
CN113156956B (en) * 2021-04-26 2023-08-11 珠海一微半导体股份有限公司 Navigation method and chip of robot and robot
CN113378390B (en) * 2021-06-15 2022-06-24 浙江大学 Method and system for analyzing trafficability of extraterrestrial ephemeris based on deep learning
CN113378390A (en) * 2021-06-15 2021-09-10 浙江大学 Extraterrestrial star traffic analysis method and extraterrestrial star traffic analysis system based on deep learning
CN113467455A (en) * 2021-07-06 2021-10-01 河北工业大学 Intelligent trolley path planning method and equipment under multi-working-condition unknown complex environment
CN113568405B (en) * 2021-07-15 2024-01-30 南京林业大学 Network equipment signal lamp visual identification system and method based on inspection robot
CN113568405A (en) * 2021-07-15 2021-10-29 南京林业大学 Network equipment signal lamp visual identification system and method based on inspection robot
CN113467473A (en) * 2021-07-28 2021-10-01 河南中烟工业有限责任公司 Material storage method and device based on autonomous mobile robot
CN113467473B (en) * 2021-07-28 2023-09-15 河南中烟工业有限责任公司 Material storage method and device based on autonomous mobile robot
CN114003035A (en) * 2021-10-28 2022-02-01 山东新一代信息产业技术研究院有限公司 Method, device, equipment and medium for autonomous navigation of robot
CN114237256B (en) * 2021-12-20 2023-07-04 东北大学 Three-dimensional path planning and navigation method suitable for under-actuated robot
CN114237256A (en) * 2021-12-20 2022-03-25 东北大学 Three-dimensional path planning and navigation method suitable for under-actuated robot
CN114355910A (en) * 2021-12-23 2022-04-15 西安建筑科技大学 Indoor robot autonomous map building navigation system and method based on Jetson Nano
CN114442625B (en) * 2022-01-24 2023-06-06 中国地质大学(武汉) Environment map construction method and device based on multi-strategy combined control agent
CN114442625A (en) * 2022-01-24 2022-05-06 中国地质大学(武汉) Environment map construction method and device based on multi-strategy joint control agent
US20240001550A1 (en) * 2022-06-29 2024-01-04 Hong Kong Applied Science And Technology Research Institute Co., Ltd. Method of Controlling Movement of a Mobile Robot in the Event of a Localization Failure
WO2024000672A1 (en) * 2022-06-29 2024-01-04 Hong Kong Applied Science and Technology Research Institute Company Limited Method of Controlling Movement of a Mobile Robot in the Event of a Localization Failure
CN115016510A (en) * 2022-08-08 2022-09-06 武汉工程大学 Robot navigation obstacle avoidance method and device and storage medium
CN116952253A (en) * 2023-09-21 2023-10-27 松灵机器人(深圳)有限公司 Method for adjusting moving path, terminal device and storage medium
CN116952253B (en) * 2023-09-21 2024-02-23 深圳库犸科技有限公司 Method for adjusting moving path, terminal device and storage medium

Also Published As

Publication number Publication date
SG11202009494YA (en) 2020-10-29

Similar Documents

Publication Publication Date Title
WO2019190395A1 (en) Method and system for returning a displaced autonomous mobile robot to its navigational path
CN106796434B (en) Map generation method, self-position estimation method, robot system, and robot
Diosi et al. Interactive SLAM using laser and advanced sonar
US11774247B2 (en) Intermediate waypoint generator
Thrun et al. Autonomous exploration and mapping of abandoned mines
US8744665B2 (en) Control method for localization and navigation of mobile robot and mobile robot using the same
EP3770711A1 (en) Method for repositioning robot
Xie et al. A real-time robust global localization for autonomous mobile robots in large environments
WO2017038012A1 (en) Mapping method, localization method, robot system, and robot
JP5287050B2 (en) Route planning method, route planning device, and autonomous mobile device
CN112652001A (en) Underwater robot multi-sensor fusion positioning system based on extended Kalman filtering
Kunz et al. Automatic mapping of dynamic office environments
Rasmussen et al. Robot navigation using image sequences
Miura et al. Adaptive robot speed control by considering map and motion uncertainty
Glas et al. Simultaneous people tracking and localization for social robots using external laser range finders
US11774983B1 (en) Autonomous platform guidance systems with unknown environment mapping
Yamauchi et al. Magellan: An integrated adaptive architecture for mobile robotics
Yuan et al. Lost robot self-recovery via exploration using hybrid topological-metric maps
AU2021273605B2 (en) Multi-agent map generation
Tamjidi et al. 6-DOF pose estimation of an autonomous car by visual feature correspondence and tracking
CN114474054B (en) Human-shaped robot navigation method
US20240027224A1 (en) Method for recognizing an erroneous map of an environment
Thallas et al. Particle filter—Scan matching SLAM recovery under kinematic model failures
RU2736559C1 (en) Mobile robot service navigation method
Zhang et al. Adaptive Optimal Multiple Object Tracking Based on Global Cameras for a Decentralized Autonomous Transport System

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19774293

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19774293

Country of ref document: EP

Kind code of ref document: A1