WO2021246963A1 - Apparatus and method for indoor guiding - Google Patents

Apparatus and method for indoor guiding

Info

Publication number
WO2021246963A1
Authority
WO
WIPO (PCT)
Prior art keywords
indoor
electronic tag
routes
work zone
robot
Prior art date
Application number
PCT/SG2021/050314
Other languages
French (fr)
Inventor
Hui Leong Edwin HO
Original Assignee
Ngee Ann Polytechnic
Priority date
Filing date
Publication date
Application filed by Ngee Ann Polytechnic filed Critical Ngee Ann Polytechnic
Publication of WO2021246963A1 publication Critical patent/WO2021246963A1/en

Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/0094 - Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot, involving pointing a payload, e.g. camera, weapon, sensor, towards a fixed or moving target
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 - Control of position or course in two dimensions
    • G05D1/021 - Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0268 - Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D1/0274 - Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 - Control of position or course in two dimensions
    • G05D1/021 - Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0276 - Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • G05D1/028 - Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle using a RF signal
    • G05D1/0282 - Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle using a RF signal generated in a local control room

Abstract

An apparatus and a method for indoor guiding, the apparatus comprising: a mobile base to enable the apparatus to move indoors; and a processor for executing instructions in a memory to control the apparatus to: autonomously map an indoor work zone to identify obstacles in the work zone so that the apparatus is able to move along one or more routes to avoid the obstacles; receive position input of an electronic tag tagged to a subject from an indoor positioning system; and move along the one or more routes to the position of the electronic tag based on the position input, wherein the indoor positioning system comprises the electronic tag and a plurality of wireless stations configured to track and obtain the position input of the electronic tag.

Description

Apparatus and Method for Indoor Guiding
Field
The present invention relates to an apparatus and method for indoor guiding.
Background
Conventionally, it is common for guiding robots to rely on cameras (imaging technology), infrared technology and the like to identify a subject to be guided or to follow. Such technologies generally require line of sight with the subject or line of sight with an object attached, tagged or held by the subject. Problems such as loss of guiding capabilities arise when line of sight is lost.
Positioning technologies such as the Global Positioning System (GPS), Wireless Fidelity (WiFi) triangulation, and the like can track the subject or the object without requiring line of sight. However, such technologies are either unsuitable for indoor use or challenging to implement with good accuracy indoors.
Summary
According to an example of the present disclosure, there are provided an apparatus and a method as claimed in the independent claims. Some optional features are defined in the dependent claims.
Brief Description of the Drawings
Fig. 1 shows a front perspective view of an apparatus of an example of the present disclosure.
Fig. 2 shows a rear perspective view of the apparatus of Fig. 1.
Fig. 3 shows a Mock-up Grocery Store.
Fig. 4 shows a LiDAR-generated map of the mock-up grocery store of Fig. 3 with pre-designated waypoints indicated.
Fig. 5 shows an electronic indoor positioning tag.
Fig. 6 shows a prior art implementation for tracking a user from a robot.
Fig. 7 shows a top view of a work zone of an apparatus of an example of the present disclosure.
Detailed Description
An example of the present disclosure comprises a human-robot collaborative robotic mobile system for indoor operation. This system involves an apparatus and a method for indoor guiding. This system is named the Artificial Intelligence (AI) enabled Follow-me Robotic Assistant (AIFO) in the present disclosure. The apparatus of AIFO is configured to have autonomous route mapping/planning and obstacle avoidance features, coupled with the capability to identify and track a human co-worker and comprehend the co-worker's instructions. Indoor refers to anywhere that is sheltered or covered from the sky, for instance, inside a building, under a tent set up outdoors, and the like. Devices such as Google Home, Amazon Alexa, and Apple HomePod are hardware with A.I. enabled Cloud services incorporating a highly interactive speech interface. These devices are designed for home use with hands-free interfaces using only speech. The common technology in these devices is the highly accurate speech processing capability. It is capable of digesting spoken sentences, executing the instructions over the web, and replying to a user through speech with web search results. AIFO is configured to have such Artificial Intelligence elements (in particular, the speech capabilities) together with sensors for interaction with a human co-worker and a robotic mobile base (i.e. the apparatus) that has autonomous control in terms of route mapping/planning, obstacle avoidance, and subject/object identification and tracking capabilities. AIFO is configured to follow in close proximity to a human co-worker and constantly monitor for ad-hoc task requests by the human co-worker.
AIFO can be applied in multiple scenarios in which human-robot collaborative operation is required. Some examples are:
- A Smart Shopping Cart comprising a load-bearing mobile base that transports groceries while following shoppers. It could act as a shopping guide which could guide a shopper to shelves on which an item is displayed. This provides a hands-free shopping experience for shoppers.
- A Robotic Concierge/Tour Guide that can lead visitors to locations within places of interest (e.g. museums, departments within a hospital, and the like) and answer/react to visitors' queries.
- An Intelligent Load Carrying platform that can assist in guiding a human co-worker and transporting loads within a warehouse by trailing its human co-worker. For example, a warehouse logistics helper that works with warehouse staff to improve work efficiency.
- A Medical Care Assistive Robot that can assist medical workers, people with disabilities, and/or patients. For example, a follow-me walking assistant capable of guiding a person with a disability, such as a robotic walking guide for the blind, a personal mobility device with guiding capabilities, and the like.
In the above applications, the apparatus of AIFO is configured to have two operation modes, a collaborative mode in which the apparatus interacts with a human and performs instructed tasks, and an autonomous mode in which the apparatus automatically performs indoor navigation and/or tracking operations, such as route mapping/planning that can involve obstacle avoidance planning, and subject/object identification and tracking functions. Under the autonomous mode, the apparatus may also perform system updating and/or machine learning operations.
Figures 1 and 2 illustrate an example of an apparatus 100 of AIFO configured for application as a smart shopping cart. In this example, AIFO is implemented with a robotic mobile base 100 (i.e. the apparatus) designed as a smart shopping cart with a load-bearing capacity of, for instance, 50 kg. Note that 50 kg is just an example and the load-bearing capacity can be configured to be heavier or lighter. The apparatus 100 is capable of following or guiding a user, in this case a shopper, to one or more specific products within a grocery store. The interaction between the user and the apparatus is enabled by software such as Google Home. Specifically, the processor of the apparatus is programmed with a customized chat-bot to assist shopping. In the present disclosure, the chat-bot is named "Shop Assist".
With reference to Fig. 1, the apparatus 100 has the following:
- one or more processors (not shown) for executing instructions in one or more memories to operate the apparatus 100 to perform functions;
- a storage cart 102 for placing shopping items;
- a compartment 104 to house hardware (e.g. processor, memory), software (e.g. Operating System) and/or firmware that may include Google Home;
- a sensing device 106 that may include LiDAR;
- a wheeled mobile base 108 that can have one or more wheels, in this case, 4 wheels; and
- a light indicator for alerting purposes and/or indicating operating states/modes of the apparatus 100.
LiDAR relates to a method for measuring distances (ranging) by illuminating a target with laser light and measuring the reflection with a sensor. Differences in laser return times and wavelengths can then be used to make digital 3-D representations of the target.
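To illustrate the ranging principle described above, here is a minimal Python sketch (all values hypothetical, not from the disclosure) that converts planar LiDAR range/bearing returns into 2D Cartesian points of the kind used to build maps such as the one in Fig. 4:

```python
import math

def scan_to_points(ranges, angle_min, angle_increment):
    """Convert a planar LiDAR scan (metres, radians) to 2D Cartesian
    points in the sensor frame: x = r*cos(theta), y = r*sin(theta)."""
    points = []
    for i, r in enumerate(ranges):
        if math.isfinite(r) and r > 0.0:  # drop invalid returns
            theta = angle_min + i * angle_increment
            points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# Example: a 3-beam scan sweeping -30, 0 and +30 degrees
print(scan_to_points([1.2, 0.8, 1.5], -math.pi / 6, math.pi / 6))
```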
Although not shown in Figures 1 and 2, the apparatus 100 also has a speaker for communicating with a user through speech, a microphone for taking audio input from the user, and a power supply (battery) to provide electrical power to all electrical components.
With reference to Fig. 2, the apparatus 100 has a rear camera 202 for capturing images in the rear of the apparatus 100, an Indoor Positioning System Tag 204, and a charging dock 206 for charging the power supply.
Key features of the apparatus 100 (also known herein as “robot”) may be as follows:
a. The ability to perform Simultaneous Localization and Mapping (i.e. SLAM) in a pre-mapped environment.
b. Autonomous navigation through grocery shelves and the ability to guide a user (human operator) to the location of a specific product. This is referred to as the guide-me mode in the present disclosure.
c. The navigation pathway of the AIFO system can be, for example, confined to a series of pre-designated waypoints via a shortest-path algorithm. This offers more predictable robot motions, especially in an environment shared with human shoppers. Human shoppers will be able to anticipate the robot’s path, thereby avoiding unnecessary collision with or blockage of the robot 100.
d. Tracking of a unique human operator via an Indoor Positioning tag worn or carried by the human operator; the robot follows the human operator by tracking the tag, i.e. a follow-me mode. A speech command to the robot can activate the follow-me mode. The use of such an indoor positioning tag allows the robot 100 to track its human operator even when human traffic is high. The technique also permits the human operator to be tracked even when there is no line-of-sight (e.g. the human operator is behind high shelves), which is not possible if tracking is conducted purely by one or more vision cameras.
e. Instructional inputs to the robot are accomplished via Google Home.
f. Chatbot-style interaction is developed on the DialogFlow platform (Google based); it permits conversation intent capturing and consolidation of information for scoping of fulfillment actions. When invoked, AIFO will ask a human operator: “What product are you looking for?”. The response from the human operator is then automatically matched against an intent dictionary defined within the chatbot engine. If the human operator responds with “Apple”, the guide-me mode is switched on. The response “Apple” will be mapped to a conversational intent related to “Search-Fruits”, which prompts AIFO to ask a further question: “Do you have a specific brand in mind or say NO to skip?” The next response from the operator will scope down the product in relation to the “Search-Brand” intent.
In this case, if the human operator responds “Tesco” (i.e. a particular brand of apples), AIFO will then fulfill the operation by guiding the human operator to the venue where “Tesco Apple” is located. There, the human operator can take Tesco Apples off a shelf and place them in the storage cart 102 of the robot 100. Specific details of the example of Figs. 1 and 2 may be as follows.
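The intent-scoping dialogue described above can be pictured with a short Python sketch. The intent names (“Search-Fruits”, “Search-Brand”) come from the disclosure; the dictionary contents and function names are hypothetical illustrations, not the actual Shop Assist implementation:

```python
# Minimal sketch of the intent-scoping dialogue described above.
# The intent names come from the disclosure; the dictionary contents
# and the flow compression into one function are hypothetical.
FRUIT_INTENTS = {"apple": "Search-Fruits", "orange": "Search-Fruits"}
KNOWN_BRANDS = {"tesco"}

def shop_assist(product_reply, brand_reply):
    intent = FRUIT_INTENTS.get(product_reply.lower())  # match intent dictionary
    if intent is None:
        return "Sorry, I could not match that product."
    if brand_reply.lower() == "no":            # user skips brand scoping
        return f"Guiding you to {product_reply.title()}."
    if brand_reply.lower() in KNOWN_BRANDS:    # "Search-Brand" intent
        return f"Guiding you to {brand_reply.title()} {product_reply.title()}."
    return "Do you have a specific brand in mind or say NO to skip?"

print(shop_assist("Apple", "Tesco"))  # -> Guiding you to Tesco Apple.
```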
The four-wheeled Robotic Mobile Base (i.e. part of the apparatus or robot 100) for indoor navigation is to be used as a smart shopping cart with, for example, a payload of 50 kg.
Fig. 3 shows a perspective view of a mock-up grocery store 301 set up for a prototype robot 300 of AIFO to demonstrate guide-me and follow-me operations. The robot 300 is the same as the apparatus 100 of Figs. 1 and 2. The robot 300 is shown to be navigating through the mock-up grocery store 301. There is a perpendicular obstruction 304 and a plurality of parallel obstructions 302 provided in the store. The perpendicular obstruction 304 is perpendicular relative to the parallel obstructions 302 and the parallel obstructions 302 are perpendicular relative to a major length of the store 301 in Fig. 3. The perpendicular obstruction 304 and the plurality of parallel obstructions 302 simulate shelves containing products in the grocery store 301.
The robot 300 has Simultaneous Localization and Mapping (SLAM) capability, which can be implemented using the AMCL algorithm on a Robot Operating System (ROS) framework. Robot Operating System (ROS) is a flexible framework for writing robot software. It is a collection of tools, libraries, and conventions that aim to simplify the task of creating complex and robust robot behavior across a wide variety of robotic platforms. The Adaptive Monte Carlo Localization (AMCL) algorithm relates to a probabilistic localization system for a robot moving in 2D (e.g. navigating in a 2D map such as Fig. 4). An AMCL node can be configured to work with laser scans and laser maps provided through LiDAR, but it could also be extended to work with other sensor data, such as sonar or stereo cameras and the like.
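As a concrete illustration, below is a minimal rospy sketch that reads the pose estimate the standard ROS 1 amcl node publishes on the /amcl_pose topic. It assumes a running ROS master with amcl configured against a LiDAR-built map; the node name is an arbitrary example:

```python
# Minimal ROS 1 (rospy) sketch: consume the pose estimate published by
# the standard amcl node on /amcl_pose. Assumes amcl is already running
# with a LiDAR map; the node name "aifo_pose_listener" is hypothetical.
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped

def on_pose(msg):
    p = msg.pose.pose.position
    rospy.loginfo("AMCL estimate: x=%.2f y=%.2f", p.x, p.y)

rospy.init_node("aifo_pose_listener")
rospy.Subscriber("/amcl_pose", PoseWithCovarianceStamped, on_pose)
rospy.spin()  # process pose callbacks until shutdown
```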
Fig. 4 is a 2D top view map 400 of the grocery store 301 of Fig. 3, which comprises the robot 300 at its location in Fig. 3, the perpendicular obstruction 304 and the plurality of parallel obstructions 302. A pre-designated waypoint planning system is implemented based on, for example, the shortest path algorithm (Dijkstra's Algorithm). The 2D map 400 of Fig. 4 is generated and/or updated using LiDAR by the robot 300 as the robot 300 autonomously navigates in the store 301. Pre-designated waypoints 402 are created for the 2D top view map and are known to the robot 300. However, non-permanent or changeable obstructions placed in the store 301, such as the perpendicular obstruction 304 and the plurality of parallel obstructions 302, are detected and mapped out by the robot 300 in the map 400. Unlike the pre-designated waypoints 402, the locations of the non-permanent or changeable obstructions are not pre-determined. The pre-designated waypoints 402 are set to guide the movements of the robot 300 so that the robot's movements are more predictable. That is, the robot 300 only moves along the pre-designated waypoints 402. In the present example, the shortest path algorithm is used by the robot 300 to determine the shortest path along the pre-designated waypoints 402 when the robot 300 has to move around the store 301, for instance, during the follow-me and guide-me operations. In one example, the pre-designated waypoints 402 can be set by the robot 300 after allowing the robot 300 to navigate and/or roam around the store 301. In some examples of the present disclosure, the robot 300 can use suitable techniques other than pre-designated waypoints 402 to navigate in the store 301.
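A minimal sketch of such shortest-path planning over pre-designated waypoints, using Dijkstra's algorithm on a hypothetical waypoint graph (the store layout and distances are illustrative only, not taken from Fig. 4):

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path over a waypoint graph {node: [(neighbour, cost), ...]}."""
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)  # cheapest frontier node
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, step in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + step, nxt, path + [nxt]))
    return float("inf"), []

# Hypothetical waypoint graph: edge costs are aisle distances in metres
waypoints = {
    "entrance": [("aisle1", 3.0), ("aisle2", 5.0)],
    "aisle1": [("fruits", 2.0)],
    "aisle2": [("fruits", 1.0)],
}
print(dijkstra(waypoints, "entrance", "fruits"))  # 5.0 via aisle1
```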
Speech and conversation intent capturing capability can be implemented using the Dialogflow framework (Google based) for AIFO's guide-me and follow-me operations, wherein the guide-me and follow-me operations are activated by speech commands to the robot 100.
Python code can be developed and implemented for Dialogflow fulfillment actions via a webhook, with an Excel sheet used for mapping waypoint coordinates to product labels (i.e. products on shelves). A webhook (user-defined HTTP callback) is a way for an application to provide other applications with real-time information and allows interactions between otherwise independent web applications.
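A minimal sketch of such a fulfillment webhook is given below, using Flask. The JSON field names follow the standard Dialogflow v2 webhook request/response format; the parameter name "product", the route, and the in-memory table standing in for the Excel sheet are hypothetical examples, not the disclosed implementation:

```python
# Sketch of a Dialogflow fulfillment webhook (Flask). Field names follow
# the Dialogflow v2 webhook format; PRODUCT_WAYPOINTS stands in for the
# Excel sheet of product-label-to-waypoint mappings mentioned above.
from flask import Flask, request, jsonify

app = Flask(__name__)

# Hypothetical mapping of product labels to waypoint coordinates (metres)
PRODUCT_WAYPOINTS = {"apple": (4.2, 1.5), "bread": (7.8, 3.0)}

@app.route("/webhook", methods=["POST"])
def webhook():
    body = request.get_json(force=True)
    product = body["queryResult"]["parameters"].get("product", "").lower()
    if product in PRODUCT_WAYPOINTS:
        x, y = PRODUCT_WAYPOINTS[product]
        reply = f"Guiding you to {product} at waypoint ({x}, {y})."
        # here the robot would be commanded toward the waypoint, e.g. via ROS
    else:
        reply = "Sorry, I could not find that product."
    return jsonify({"fulfillmentText": reply})

if __name__ == "__main__":
    app.run(port=5000)
```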
In an example of the present disclosure, Ultra-WideBand (UWB) Indoor Positioning Tags worn or carried by a human target are used to track the human target for the follow-me mode or operation (i.e. the robot follows the human target). Fig. 5 shows a prototype of such a UWB indoor positioning tag. In other examples, electronic tags based on similar and/or suitable non-UWB technology can also be used. In the present example, the implementation involving such tag or tags is different from existing techniques used in similar robot follow-me operations.
Existing techniques rely largely on relative position tracking of a human operator from the robot. Basically, the robot has a wireless tracking system mounted on board to track the human operator wearing or carrying an electronic tag configured for wireless communication with the robot. This is illustrated by Fig. 6.
With reference to Fig. 7, in the present example, a UWB position tag 708 is issued to be worn or carried by a human operator 710. A robot 701 having features of the apparatus 100 of Figs. 1 and 2 is configured to follow the human operator 710 wherever he or she goes within an indoor work zone 700. The work zone 700 in this example is the area of a grocery store and, in a top view, the work zone 700 is rectangular in shape. The UWB position tag 708 is to be tracked indoors by a plurality of stationary receivers or transceivers 702 mounted on the walls of the indoor work zone 700. There can be, for example, four or more of the stationary receivers or transceivers 702. In the present example, four stationary receivers 702 are placed around four respective corners of the work zone 700. The stationary receivers 702 and the UWB position tag 708 together form an indoor positioning system that enables global position tracking of the UWB position tag 708, and correspondingly tracks the human operator 710 wearing or carrying the UWB position tag 708, within the work zone 700. Specifically, the x (relative to the horizontal axis) and y (relative to the vertical axis) coordinates of the UWB position tag 708 relative to the top view of the work zone 700 (the x and y axes are drawn in Fig. 7) are determined by the indoor positioning system. In the present example, the global position (x, y) of the UWB position tag 708 tracked by the stationary receivers 702 is fed back by the indoor positioning system to the robot 701 via Wireless Fidelity (WiFi) or other suitable wireless technology.
In Fig. 7, there is an obstruction 704 between the robot 701 and the human operator 710 but this is not a problem as line-of-sight of the human operator 710 by the robot 701 is not required.
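For illustration, a global (x, y) fix of the kind described above can be computed from the anchor ranges by least-squares multilateration. The sketch below assumes a hypothetical 10 m by 8 m work zone with receivers at the four corners; the disclosure does not specify the UWB system's actual solver:

```python
import numpy as np

def trilaterate_2d(anchors, ranges):
    """Least-squares (x, y) fix from >= 3 fixed anchors and measured
    tag-to-anchor ranges, linearised by subtracting the first anchor's
    circle equation from each of the others."""
    (x0, y0), r0 = anchors[0], ranges[0]
    A, b = [], []
    for (xi, yi), ri in zip(anchors[1:], ranges[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return tuple(sol)

# Hypothetical 10 m x 8 m work zone with receivers at the four corners
anchors = [(0, 0), (10, 0), (10, 8), (0, 8)]
tag = np.array([4.0, 3.0])                                  # true tag position
ranges = [np.linalg.norm(tag - np.array(a)) for a in anchors]
print(trilaterate_2d(anchors, ranges))                      # ~ (4.0, 3.0)
```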
Traffic management algorithms and/or techniques to avoid collision of the robot 300 with other robots and/or human operators wearing or carrying the Indoor Positioning Tags can also be implemented for the examples of the present disclosure.
In summary, an example of the present disclosure may provide the following:
(i) A system with integration of chatbot-style instruction for fulfillment of robotic services. The Dialogflow based chatbot allows capturing of conversation intents and is capable of translating human user inputs to provide the guide-me or follow-me operation (or mode). In guide-me mode, the apparatus (e.g. 100 of Figs. 1 and 2) of the AIFO system breaks down the user's speech inputs into keywords to match pre-programmed conversation intents. If a conversation intent requires more information from the human user interacting with the apparatus, the apparatus will raise one or more questions to the user to gather more information.
(ii) Follow-me operation of the apparatus (e.g. 100 of Fig. 1 and 2) uses electronic wireless tags, for example, Ultra-WideBand (UWB) Indoor Positioning Tags. These tags are configured to communicate wirelessly with a plurality of stationary wireless stations, wherein each station comprises a receiver and/or a transceiver. The plurality of stationary wireless stations and the tags are parts of an indoor positioning system for tracking each tag. Each tag can have a unique identifier (ID) to allow identification of a human (user) wearing or carrying the tag. In this manner, the indoor positioning system is able to track the user’s location even if the apparatus has no line of sight of the user. This is useful especially in an indoor area with dense obstacle setups (e.g. grocery store with a plurality of tall shelves).
Examples of the present disclosure may have the following features:
An apparatus for indoor guiding, the apparatus comprising: a mobile base to enable the apparatus to move indoors; and a processor for executing instructions in a memory to control the apparatus to: autonomously map an indoor work zone to identify obstacles in the work zone so that the apparatus is able to move along one or more routes to avoid the obstacles; receive position input of an electronic tag tagged to a subject from an indoor positioning system; and move along the one or more routes to the position of the electronic tag based on the position input, wherein the indoor positioning system comprises the electronic tag and a plurality of wireless stations configured to track and obtain the position input of the electronic tag.
The apparatus may be further controllable to: move along pre-designated waypoints in the work zone in each of the one or more routes.
The apparatus may be further controllable to: enable chatbot style interaction with a user, wherein user input for the interaction is through speech.
The apparatus may be further controllable to: receive one or more user input; determine a location in the work zone from the one or more user input; and move along the one or more routes to the location.
A method for indoor guiding, the method comprising: autonomously mapping an indoor work zone to identify obstacles in the work zone so that an apparatus is able to move along one or more routes to avoid the obstacles; receiving position input of an electronic tag tagged to a subject from an indoor positioning system; and moving the apparatus along the one or more routes to the position of the electronic tag based on the position input, wherein the indoor positioning system comprises the electronic tag and a plurality of wireless stations configured to track and obtain the position input of the electronic tag.
The method may further comprise: moving the apparatus along pre-designated waypoints in the work zone in each of the one or more routes.
The method may further comprise: enabling chatbot style interaction with a user, wherein user input for the interaction is through speech.
The method may further comprise: receiving one or more user input; determining a location in the work zone from the one or more user input; and moving the apparatus along the one or more routes to the location.
In the specification and claims, unless the context clearly indicates otherwise, the term “comprising” has the non-exclusive meaning of the word, in the sense of “including at least” rather than the exclusive meaning in the sense of “consisting only of”. The same applies with corresponding grammatical changes to other forms of the word such as “comprise”, “comprises” and so on.
While the invention has been described in the present disclosure in connection with a number of examples, embodiments and implementations, the invention is not so limited but covers various obvious modifications and equivalent arrangements, which fall within the purview of the appended claims. Although features of the invention are expressed in certain combinations among the claims, it is contemplated that these features can be arranged in any combination and order.

Claims

Claims
1. An apparatus for indoor guiding, the apparatus comprising: a mobile base to enable the apparatus to move indoors; and a processor for executing instructions in a memory to control the apparatus to: autonomously map an indoor work zone to identify obstacles in the work zone so that the apparatus is able to move along one or more routes to avoid the obstacles; receive position input of an electronic tag tagged to a subject from an indoor positioning system; and move along the one or more routes to the position of the electronic tag based on the position input, wherein the indoor positioning system comprises the electronic tag and a plurality of wireless stations configured to track and obtain the position input of the electronic tag.
2. The apparatus of claim 1, wherein the apparatus is further controllable to: move along pre-designated waypoints in the work zone in each of the one or more routes.
3. The apparatus of claim 1 or 2, wherein the apparatus is further controllable to: enable chatbot style interaction with a user, wherein user input for the interaction is through speech.
4. The apparatus of claim 1, 2 or 3, wherein the apparatus is further controllable to: receive one or more user input; determine a location in the work zone from the one or more user input; and move along the one or more routes to the location.
5. A method for indoor guiding, the method comprising: autonomously mapping an indoor work zone to identify obstacles in the work zone so that an apparatus is able to move along one or more routes to avoid the obstacles; receiving position input of an electronic tag tagged to a subject from an indoor positioning system; and moving the apparatus along the one or more routes to the position of the electronic tag based on the position input, wherein the indoor positioning system comprises the electronic tag and a plurality of wireless stations configured to track and obtain the position input of the electronic tag.
6. The method of claim 5, the method further comprising: moving the apparatus along pre-designated waypoints in the work zone in each of the one or more routes.
7. The method of claim 5 or 6, the method further comprising: enabling chatbot style interaction with a user, wherein user input for the interaction is through speech.
8. The method of claim 5, 6 or 7, the method further comprising: receiving one or more user input; determining a location in the work zone from the one or more user input; and moving the apparatus along the one or more routes to the location.
9. A system for indoor guiding, the system comprising: the apparatus of any one of claims 1 to 4; and the indoor positioning system.
PCT/SG2021/050314 2020-06-04 2021-06-02 Apparatus and method for indoor guiding WO2021246963A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202063034489P 2020-06-04 2020-06-04
US63/034,489 2020-06-04

Publications (1)

Publication Number Publication Date
WO2021246963A1 true WO2021246963A1 (en) 2021-12-09

Family

ID=78831359

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SG2021/050314 WO2021246963A1 (en) 2020-06-04 2021-06-02 Apparatus and method for indoor guiding

Country Status (1)

Country Link
WO (1) WO2021246963A1 (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105563451A (en) * 2016-01-20 2016-05-11 詹雨科 Intelligent following robot
CN205983559U (en) * 2016-07-27 2017-02-22 郭润泽 Intelligence supermarket management system
CN106557791A (en) * 2016-10-20 2017-04-05 徐州赛欧电子科技有限公司 A kind of supermarket shopping management system and its method
CN106647760A (en) * 2016-12-30 2017-05-10 东南大学 Intelligent shopping cart and intelligent shopping method
CN106994993A (en) * 2017-03-17 2017-08-01 浙江大学 Navigate tracking smart supermarket shopping cart and its method based on local positioning system
CN107659918A (en) * 2017-08-11 2018-02-02 东北电力大学 A kind of method and system intelligently followed
CN207637245U (en) * 2017-10-24 2018-07-20 广州鸿灏科技有限公司 Intelligent retail trade system
CN110717003A (en) * 2019-09-27 2020-01-21 四川长虹电器股份有限公司 Intelligent shopping cart autonomous navigation and automatic following method based on path planning


Similar Documents

Publication Publication Date Title
Rubio et al. A review of mobile robots: Concepts, methods, theoretical framework, and applications
CN111511620B (en) Dynamic window method using optimal interaction collision avoidance cost assessment
Mekhalfi et al. Recovering the sight to blind people in indoor environments with smart technologies
ES2903525T3 (en) Matching multi-resolution sweeps with exclusion zones
Sales et al. CompaRob: The shopping cart assistance robot
Culler et al. A prototype smart materials warehouse application implemented using custom mobile robots and open source vision technology developed using emgucv
Kulyukin et al. Robocart: Toward robot-assisted navigation of grocery stores by the visually impaired
US9552056B1 (en) Gesture enabled telepresence robot and system
JP4630146B2 (en) Position management system and position management program
US20160260142A1 (en) Shopping facility assistance systems, devices and methods to support requesting in-person assistance
Chaccour et al. Computer vision guidance system for indoor navigation of visually impaired people
Abu Doush et al. ISAB: integrated indoor navigation system for the blind
US20090148034A1 (en) Mobile robot
KR20200099611A (en) Systems and methods for robot autonomous motion planning and navigation
WO2021109890A1 (en) Autonomous driving system having tracking function
Kayukawa et al. Guiding blind pedestrians in public spaces by understanding walking behavior of nearby pedestrians
Chen et al. Kejia robot–an attractive shopping mall guider
Duarte et al. Information and assisted navigation system for blind people
JP2004042148A (en) Mobile robot
Lu et al. Assistive navigation using deep reinforcement learning guiding robot with UWB/voice beacons and semantic feedbacks for blind and visually impaired people
TW201444543A (en) Self-propelled cart
Ventura et al. Towards optimal robot navigation in domestic spaces
Foresi et al. Improving mobility and autonomy of disabled users via cooperation of assistive robots
US20220291685A1 (en) Method and system to improve autonomous robotic systems responsive behavior
Chaccour et al. Novel indoor navigation system for visually impaired and blind people

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21817347

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21817347

Country of ref document: EP

Kind code of ref document: A1