CN117178165A - Target guided wheelchair navigation method and system for shared control - Google Patents

Target guided wheelchair navigation method and system for shared control

Info

Publication number
CN117178165A
Authority
CN
China
Prior art keywords
shared
user
paths
target
navigation
Prior art date
Legal status: Pending
Application number
CN202280023681.6A
Other languages
Chinese (zh)
Inventor
洪维德
尼哈·普里雅达什尼·加尔格
雷震
陈邦毅
李磊
林仁丰
Current Assignee
Nanyang Technological University
Original Assignee
Nanyang Technological University
Priority date
Filing date
Publication date
Application filed by Nanyang Technological University filed Critical Nanyang Technological University
Publication of CN117178165A publication Critical patent/CN117178165A/en

Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/0055 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots with safety arrangements
    • G05D1/0061 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots with safety arrangements for transition from automatic pilot to manual pilot and vice versa
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 - Instruments for performing navigational calculations
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/16 - Anti-collision systems
    • G08G1/165 - Anti-collision systems for passive traffic, e.g. including static obstacles, trees

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

Shared control methods, systems, and computer-readable media for target guided wheelchair navigation are provided. In one aspect, a shared-control target guided navigation system comprises a user movement control device and a shared navigation controller. The shared navigation controller is coupled to the user movement control device and generates a set of user intents for navigating a plurality of paths to a predetermined target. In response to calculating a probability of each of the plurality of paths reaching the predetermined target, the shared navigation controller predicts one of the set of user intents as a preferred path and uses the preferred path as a guideline for local path planning based on a shared Dynamic Window Approach (DWA), wherein the shared-DWA-based local path planning includes identifying user input received from the user movement control device. The shared navigation controller further determines a final control command based on the user input.

Description

Target guided wheelchair navigation method and system for shared control
Priority claim
The present application claims priority from Singapore patent application number 10202103600T, filed on 8 April 2021.
Technical Field
The present invention relates generally to wheelchair navigation, and more particularly to a target guided wheelchair navigation method and system for shared control.
Background
With the aging population of many countries, the demand for wheelchairs is increasing. Manually controlled wheelchairs require upper-limb strength and fine motion control. Persons with upper-limb impairments may find manual control extremely difficult, and even healthy users may benefit from an assistive wheelchair that reduces effort. Robot-assisted wheelchairs aim to provide navigation assistance through shared control, making use of autonomous navigation functions.
Traditional autonomous navigation is a general framework for navigating from a starting position to a target position and consists of a global planner and a local planner. Given an environment map, the global planner uses a heuristic search algorithm such as A* to calculate a path between the starting location and the target location. A local path planner, such as the Dynamic Window Approach or the Timed Elastic Band, then uses information from various sensors, such as lidar or cameras, to track the path locally while satisfying the kinematic constraints.
In a shared control framework for wheelchair navigation, the local path planner is modified to also receive user input. Some traditional frameworks only provide local obstacle avoidance to the user: in other words, they modify the user's input so that the wheelchair does not collide with obstacles. Such systems may have no concept of the user's final goal, or may assume that the user will follow a fixed path determined by the global planner. Thus, such systems cannot take the person's intent into account.
Without any concept of a final goal, it is difficult to assist with fine motion control tasks such as tight turns or entering narrow doorways. Thus, without intent prediction, a shared control system may leave the user unable to turn into a narrow doorway.
Assuming that the user will follow the path to the final target determined by the global planner and letting the shared control algorithm assist with fine motion control tasks is problematic, because the user may have some other preferred path, so this assumption disadvantageously limits the user's control authority. Various studies have shown that, while being assisted, users still want control at the local level and expect the wheelchair to respond to their commands. However, providing assistance based on human intent prediction presents challenges such as defining the set of possible human intents, tracking the most likely human intent based on the control inputs provided by the user, and giving the user appropriate control.
Most shared control wheelchair systems based on human intent prediction do not fully address this overall challenge. They are developed either for very specific scenarios (e.g., navigating through a doorway to a predetermined target) or for a fixed set of predetermined global targets, and thus do not let the user choose the local path.
Thus, there is a need for a shared-control target guided wheelchair navigation method and system that can assist fine motion tasks while also allowing users to follow their preferred path, addressing the challenges of defining the set of possible human intents, tracking the most likely human intent based on the control inputs provided by the user, and giving the user appropriate control. Furthermore, other desirable features and characteristics will become apparent from the subsequent detailed description and the appended claims, taken in conjunction with the accompanying drawings and this background of the disclosure.
Disclosure of Invention
According to at least one aspect, the present embodiment provides a shared control method for a target guided navigation system. The method comprises the following steps: generating a set of user intents for navigating a plurality of paths to a predetermined target, and predicting one of the set of user intents as a preferred path in response to calculating a probability of each of the plurality of paths reaching the predetermined target. The method further includes using the preferred path as a guideline for local path planning based on a shared Dynamic Window Approach.
According to another aspect, the present embodiment provides a target guided navigation system with shared control. The target guided navigation system includes a user movement control device and a shared navigation controller. The shared navigation controller is coupled to the user movement control device and generates a set of user intents for navigating multiple paths to a predetermined target. The shared navigation controller also predicts one of the set of user intents as a preferred path in response to calculating a probability of each of the plurality of paths reaching the predetermined target, and uses the preferred path as a guideline for local path planning based on a shared Dynamic Window Approach, wherein the shared-DWA-based local path planning includes identifying user input received from the user movement control device. The shared navigation controller further determines a final control command based on the user input.
According to yet another aspect, the present embodiment provides a computer-readable medium. The computer-readable medium includes instructions for a shared navigation controller to perform a method for shared-control target guided navigation. The instructions cause the shared navigation controller to generate a set of user intents for navigating a plurality of paths to a predetermined target and to predict one of the user intents as a preferred path in response to calculating a probability of each of the plurality of paths reaching the predetermined target. The instructions also cause the shared navigation controller to use the preferred path as a guideline for local path planning based on a shared Dynamic Window Approach, wherein the shared-DWA-based local path planning includes identifying received user input. The instructions further cause the shared navigation controller to determine a final control command based on the user input.
Drawings
The accompanying figures, where like reference numerals refer to identical or functionally similar elements throughout the separate views and which together with the detailed description below are incorporated in and form part of the specification, serve to illustrate various embodiments and to explain various principles and advantages all in accordance with the present embodiments.
Fig. 1, including fig. 1A and 1B, depicts the processing of Voronoi paths, which are used as global paths from a given origin to a final target, according to the present embodiment, wherein fig. 1A depicts the Voronoi paths before convergence, and fig. 1B depicts the Voronoi paths after convergence.
Fig. 2, comprising fig. 2A to 2D, depicts progressive convergence and interpolation of paths using trajectory optimization while ensuring that the homotopy class of the path does not change, according to the present embodiment, wherein fig. 2A depicts the calculation of a straight line from a starting point S to a first point C, fig. 2B depicts finding a point S′ on the straight line of fig. 2A, fig. 2C depicts calculating a straight line from the point S′ to a second point C′, and fig. 2D depicts finding a point S″ on the straight line of fig. 2C.
Fig. 3, including fig. 3A, 3B and 3C, depicts schematic diagrams of the effect of exploring new homotopy classes, wherein fig. 3A depicts the initial homotopy paths, fig. 3B depicts the homotopy paths after wheelchair movement without exploration, and fig. 3C depicts the homotopy paths after wheelchair movement with exploration, according to the present embodiment.
Fig. 4, comprising fig. 4A and 4B, depicts a test scenario of a simulation experiment according to an embodiment of the present invention, wherein fig. 4A depicts a hospital scenario, and fig. 4B depicts a scenario of crossing a doorway.
Fig. 5 is a photograph of the simulation experiment setup according to the present embodiment, in which a subject performs an experimental test wearing a weight on the non-dominant hand.
Fig. 6, comprising fig. 6A and 6B, depicts two attempts by a first disabled subject in a hospital setting using a traditional shared dynamic window approach, wherein fig. 6A depicts a first attempt by the first subject and fig. 6B depicts a second attempt by the first subject.
Fig. 7, comprising fig. 7A and 7B, depicts two attempts by a first disabled subject in a hospital setting using a method according to the present embodiment, wherein fig. 7A depicts a first attempt by the first subject and fig. 7B depicts a second attempt by the first subject.
Fig. 8, comprising fig. 8A and 8B, depicts two attempts by a second disabled subject in a hospital setting using a traditional shared dynamic window approach, wherein fig. 8A depicts a first attempt by the second subject and fig. 8B depicts a second attempt by the second subject.
Fig. 9, comprising fig. 9A and 9B, depicts two attempts by a second disabled subject in a hospital setting using a method according to the present embodiment, wherein fig. 9A depicts a first attempt by the second subject and fig. 9B depicts a second attempt by the second subject.
Fig. 10, comprising fig. 10A and 10B, depicts two attempts by a first disabled subject in a pass-through doorway scenario using a conventional shared dynamic window approach, wherein fig. 10A depicts a first attempt by the first subject and fig. 10B depicts a second attempt by the first subject.
Fig. 11, comprising fig. 11A and 11B, depicts two attempts by a first disabled subject in a gate crossing scenario using a method according to the present embodiment, wherein fig. 11A depicts a first attempt by the first subject and fig. 11B depicts a second attempt by the first subject.
Fig. 12, comprising fig. 12A and 12B, depicts two attempts by a second disabled subject in a pass-through doorway scenario using a traditional shared dynamic window approach, wherein fig. 12A depicts a first attempt by the second subject and fig. 12B depicts a second attempt by the second subject.
Fig. 13, comprising fig. 13A and 13B, depicts two attempts by a second disabled subject in a gate crossing scenario using a method according to the present embodiment, wherein fig. 13A depicts a first attempt by the second subject and fig. 13B depicts a second attempt by the second subject.
Fig. 14 depicts a schematic view of a robotic wheelchair for experimentation according to the present embodiment.
Fig. 15 depicts a photograph showing a real world wheelchair testing scenario.
Fig. 16, including fig. 16A and 16B, depicts a schematic of the intent prediction system in accordance with the present embodiment, showing how, due to imprecise user input, obstacle avoidance alone (the shared dynamic window output) steers the robotic wheelchair away from a door, whereas the intent-prediction-based output provides the correct motion to assist the robotic wheelchair in traversing the door.
Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale.
Detailed Description
The following detailed description is merely exemplary in nature and is not intended to limit the invention or the application and uses of the invention. Furthermore, there is no intention to be bound by any theory presented in the preceding background of the invention or the following detailed description. It is an object of the present embodiments to present methods and systems that assist the user in navigating to a user-specified target while giving the user control over local path planning. According to this embodiment, dynamic intermediate targets are generated using generalized Voronoi diagrams, a belief over these targets is maintained based on user input using the MaxEntIRL principle and the homotopy classes of paths to the target, and appropriate local control is provided by a shared dynamic window local path planner that takes into account both the received user input and the most likely intermediate target.
The method and system according to the present embodiments advantageously reach the user-specified goal faster than local obstacle avoidance systems, while respecting the user's control authority better than systems with fixed predetermined goals and no local shared control. The present embodiments provide methods and systems that benefit users with upper-limb impairments, such as users with cerebral palsy. At the same time, the method and system according to the embodiment address the need to model the user's joystick control capability, so that the intent of users with upper-limb impairments is predicted more accurately.
Methods and systems according to the present embodiments provide a shared control system that can take into account the final goal of the wheelchair user to improve assistance while giving the user greater control over wheelchair operation. Furthermore, through simulation experiments involving subjects with induced impairments, the method and system according to the present embodiment have been quantitatively shown to outperform systems that do not consider the user's goal. Furthermore, as described below, since the method and system of the present embodiment take the user's control capability into account to better infer human intent, a cerebral palsy subject using the simulated control according to the present embodiment was assisted significantly better when turning into a doorway.
Conventional systems for shared control of robotic devices and wheelchairs can be broadly divided into two categories depending on whether human intent is predicted. Methods that do not predict human intent may have no concept of a final target, or may assume a fixed global target with a known path that must be followed. For example, the reward function of the Dynamic Window Approach has been modified to prioritize speeds closer to the user input. Similarly, the Vector Field Histogram method has been modified to give some weight to the user input. However, with these methods it is difficult for users to enter narrow doorways or make sharp turns, because the obstacle avoidance control system may attempt to steer them away from the narrow doorway.
Knowing the intent of the user helps to avoid such behavior, but solutions with user intent prediction also have their drawbacks. For example, some solutions assume that the global objective and the best path are known and blend the user input with the output of the local path planner. The wheelchair then attempts to follow an optimal path to a target determined by the robot while allowing the user to make only minor modifications at the local path planning level. While this may assist the user in reaching the goal, it also limits the user's control authority and the ability to choose between multiple solutions that reach the same goal.
Tracking all of the multiple paths to the goal and predicting which one the user prefers may lead to better shared control, and several approaches have been proposed to assist the user by predicting the user's intent. One of the major challenges of these schemes is how to define the user's intent. Typically, intent is defined as the shortest path to a manually defined global goal, or as something very specific, such as an automatically detected doorway. Manual definition requires the set of all possible global targets, limiting the general applicability of such methods.
According to this embodiment, the final target is assumed to be known, but the path the user wants to take to reach the target is unknown. For a given goal, a generalized Voronoi diagram is used to automatically generate the set of user intents. While Voronoi diagrams have been used to calculate various paths to a target, prior art systems and methods only allow the user to choose between these different paths and do not allow the user to continuously control the wheelchair. According to this embodiment, a local path planner based on a shared dynamic window is used to give the user appropriate control.
After defining the intents, the next step according to the present embodiment is to predict the path preferred by the user. Most conventional systems and methods rely on commands previously issued by the user to predict human intent. For example, some methods predict user joystick commands, some methods learn manually defined model parameters for a particular user, and some methods use the concept of MaxEntIRL to calculate target probabilities using a manually defined cost function for each target. The method and system according to the present embodiment use the concept of MaxEntIRL to calculate the probabilities of the various paths, since MaxEntIRL is a principled method of calculating probabilities of user targets based on a cost function that is easy to specify.
After predicting the user intent, the next step in the method and system according to the present embodiment is to calculate the final control command. Conventional methods combine user commands using outer-loop blending or inner-loop blending to calculate the final control command. In outer-loop blending, the user input is blended with the optimal action calculated by the local path planner. While outer-loop blending is easier to implement and works well in practice, one major drawback is that many less-than-optimal actions, including actions that may be better from the user's perspective, are discarded before blending with the user input. Furthermore, outer-loop blending may be unsafe. In inner-loop blending, the user input and the various possible actions are taken into account together while calculating the optimal action for a given state. According to this embodiment, inner-loop blending is used when calculating the reward for each possible velocity in the dynamic window.
Consider a static indoor environment with a known map in which only the final target G of the wheelchair user is given, but no information about the user's preferred path i_u is available. The problem is then: what control command a ∈ A should the robot issue at each time interval t so that the resulting motion stays as close as possible to the user's preferred path i_u, which in turn brings the user to the target G? Although the preferred path i_u is unknown, the joystick commands u ∈ U issued by the user can be observed and used to predict the user's preferred path.
To answer this question, a set of preferred paths I according to the present embodiment is calculated. In theory, the size of this set may be infinite, as any path may be the user's preferred path. Thus, according to a preferred embodiment, the concept of homotopy is used to limit the preferred path set I. For given start and target positions, two paths belong to the same homotopy class if they can be continuously deformed into each other without crossing any obstacle. Thus, according to the definition of homotopy, it is difficult for a user to switch between paths in different homotopy classes. Due to this property of homotopy, the size of the preferred path set I is limited to the number of homotopy classes into which all paths can be grouped.
Thus, according to the method and system of the present embodiment, paths belonging to different homotopy classes are calculated using a generalized Voronoi diagram, and the calculated paths are used to represent their respective classes. The user's preferred path i_u belongs to one of these homotopy classes. A probability distribution over the paths is maintained using the MaxEntIRL principle, and the most likely path is used as a guideline for local path planning based on a shared dynamic window. So that the user has more control while still following the most likely path among the homotopy classes, the local path planner calculates the final control command while taking the user input into account. A measure of the clutter of the local environment is used to adjust the weight between the user input and the action that brings the user towards the target through the most likely homotopy class. This adjustment keeps the executed path close to the user's preferred path i_u, because when less free space is available, paths within a given homotopy class can deviate less from one another.
First, Voronoi paths are processed for global path planning and, after re-planning, each new path is linked to its previous homotopy class. This is shown in fig. 1A and 1B, where diagrams 100, 150 represent paths from a given origin 110 to a final target 120. To generate paths belonging to different homotopy classes, a generalized Voronoi diagram is generated using Fortune's sweep algorithm, with the paths in different classes shown in diagram 100 prior to convergence. K shortest paths from the starting location 110 to the final target 120 are then extracted using Yen's algorithm. The homotopy classes are determined using known algorithms, and at each iteration of Yen's algorithm a homotopy check is performed to ensure that each of the K shortest paths lies in a distinct homotopy class. Diagram 150 (fig. 1B) shows ten shortest paths (i.e., K = 10) in different classes from the given start 110 to the final target 120 after convergence.
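The path-generation step described above can be sketched as follows, assuming the generalized Voronoi diagram has already been turned into a weighted networkx graph with node positions stored in a "pos" attribute; the winding-angle signature is a simplified stand-in for the homotopy check, and all names are illustrative rather than the patented implementation.

```python
# Illustrative sketch (not the patented implementation): extract K shortest
# Voronoi-graph paths that lie in distinct homotopy classes.
import math

import networkx as nx


def winding_signature(path_xy, obstacle_points):
    """Simplified homotopy signature: total angle swept around each obstacle
    reference point while walking the path. Paths sharing start and goal are
    treated as homotopic when their signatures match."""
    sig = []
    for ox, oy in obstacle_points:
        total = 0.0
        for (x0, y0), (x1, y1) in zip(path_xy[:-1], path_xy[1:]):
            d = math.atan2(y1 - oy, x1 - ox) - math.atan2(y0 - oy, x0 - ox)
            # unwrap to (-pi, pi] so the sweep accumulates continuously
            while d > math.pi:
                d -= 2.0 * math.pi
            while d <= -math.pi:
                d += 2.0 * math.pi
            total += d
        sig.append(round(total, 2))
    return tuple(sig)


def k_homotopy_paths(graph, start, goal, obstacle_points, k=10):
    """Yen-style enumeration of shortest paths on a Voronoi graph, keeping one
    representative per homotopy class until k classes are collected."""
    kept, seen = [], set()
    # shortest_simple_paths enumerates paths by increasing length (Yen's algorithm)
    for node_path in nx.shortest_simple_paths(graph, start, goal, weight="weight"):
        xy = [graph.nodes[n]["pos"] for n in node_path]
        sig = winding_signature(xy, obstacle_points)
        if sig not in seen:          # homotopy check at each Yen iteration
            seen.add(sig)
            kept.append(xy)
        if len(kept) == k:
            break
    return kept
```

One representative point per obstacle suffices for such a signature, since two paths with the same endpoints are homotopic exactly when they sweep the same total angle around every obstacle.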
According to this embodiment, the Voronoi paths are very jagged and occasionally have unnecessary turns, so they are processed to obtain smooth trajectories that the local path planner can follow. For example, trajectory optimization may be used to generate trajectories from the Voronoi paths. However, it must be ensured that the homotopy class of a path does not change while the convergence and interpolation process optimizes it. For this reason, the path convergence and interpolation processing is performed while checking that no obstacle is crossed. An example of the path convergence and interpolation process according to the present embodiment is shown step by step in diagrams 200, 220, 240, 260 in figs. 2A to 2D. Referring to diagram 200 (fig. 2A), point A is the closest point on path 210 that cannot be connected by a straight line to the starting point S without colliding with obstacle 214. A straight line 216 connects the start point S with the point just before A (point C). Referring to diagram 220 (fig. 2B), a point S′ is found on line 216 that can be connected to point A without colliding with obstacle 214. Referring next to diagrams 240, 260, the path between S and S′ is kept and the entire process is repeated starting from S′. That is, as shown in diagram 240, point A′ is the closest point on path 210 that cannot be connected by a straight line to the new start point S′ without colliding with obstacle 244, and a straight line 246 connects S′ with the point C′ just before A′. Referring to diagram 260 (fig. 2D), a point S″ is found on line 246 that can be connected to point A′ without colliding with obstacle 244. This yields a taut path around the obstacle, as shown in diagram 260.
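A rough sketch of this convergence-and-interpolation step is given below; segment_is_free is a placeholder collision check against the occupancy map, and the sampling resolution and termination details are assumptions rather than the patented procedure.

```python
# Illustrative string-pulling style smoothing as in figs. 2A-2D: shortcut the
# Voronoi path with straight segments while never crossing an obstacle, so the
# homotopy class of the path is preserved. Edge cases are omitted.

def segment_is_free(p, q, occupancy_map):
    """Placeholder: True when the straight segment p-q crosses no obstacle."""
    raise NotImplementedError


def smooth_path(path, occupancy_map):
    """path: list of (x, y) waypoints from the start S to the goal."""
    smoothed = [path[0]]
    s_idx, s_pt = 0, path[0]
    while s_idx < len(path) - 1:
        # first waypoint A that S cannot reach with a collision-free straight line
        a_idx = next((j for j in range(s_idx + 1, len(path))
                      if not segment_is_free(s_pt, path[j], occupancy_map)), None)
        if a_idx is None:                       # goal directly visible: done
            smoothed.append(path[-1])
            break
        c_pt = path[a_idx - 1]                  # point C just before A
        # walk along the segment S-C (from C back towards S) and take the first
        # point S' that can see A without collision
        s_new = c_pt
        for t in (i / 10.0 for i in range(10, -1, -1)):
            cand = (s_pt[0] + t * (c_pt[0] - s_pt[0]),
                    s_pt[1] + t * (c_pt[1] - s_pt[1]))
            if segment_is_free(cand, path[a_idx], occupancy_map):
                s_new = cand
                break
        smoothed.append(s_new)                  # keep segment S-S', repeat from S'
        s_pt = s_new
        s_idx = max(a_idx - 1, s_idx + 1)       # guarantee forward progress
    return smoothed
```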
According to this embodiment, once the wheelchair moves from its position, the paths in the different classes need to be re-planned. In order to assign probabilities to the new paths based on previous user actions, it must be ensured that each new path can be linked to its homotopy class. To this end, the new position of the wheelchair is connected to each path in the path list retained from the previous time step. The new path from the wheelchair's current position is known to be collision-free, since the wheelchair itself has just traversed the new segment added to each path. Thus, the new path remains connected to its homotopy class. However, this may introduce unnecessary turns into the new path. Therefore, according to the present embodiment, the same convergence and interpolation processing used for path smoothing is applied to smooth out these unnecessary turns.
The initial set of paths contains multiple paths in distinct homotopy classes induced by obstacles near the wheelchair's starting position. As the wheelchair approaches the final target, many of these paths become infeasible because they can no longer avoid the obstacles near the wheelchair's starting position, so new homotopy classes need to be explored. At each re-planning step, K′ paths in distinct homotopy classes are found from the wheelchair's current position. If any of these paths belong to homotopy classes different from those already found, they are added to the homotopy path list. Thereafter, the longest paths, excluding the most likely path, are removed from the list until only K paths remain. Referring to figs. 3A, 3B and 3C, diagrams 300, 330, 360 depict the effect of exploring new homotopy classes according to this embodiment. Diagram 300 depicts the initial homotopy paths, while diagram 330 depicts the homotopy paths after wheelchair movement without exploring new homotopy classes. Diagram 360 depicts the homotopy paths when new homotopy classes are explored after wheelchair movement in accordance with the present embodiment, advantageously providing the additional options that result from the additional homotopy paths added to the homotopy path list.
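The bookkeeping at each re-planning step can be sketched as below; plan_k_paths, path_length and signature are hypothetical helpers (the last standing in for the homotopy signature shown earlier), and the way a newly discovered class is given an initial probability is an assumption, since the text above does not specify it.

```python
# Illustrative maintenance of the homotopy path list at each re-planning step:
# add newly discovered classes, then prune the longest paths, never removing
# the most likely class, until only K remain. Renormalization of the belief
# is left out for brevity.

def update_homotopy_list(paths, belief, plan_k_paths, path_length, signature,
                         K=10, K_prime=5):
    """paths: dict signature -> path; belief: dict signature -> probability."""
    # explore: K' shortest homotopy-distinct paths from the current position
    for new_path in plan_k_paths(k=K_prime):
        sig = signature(new_path)
        if sig not in paths:
            paths[sig] = new_path
            belief[sig] = min(belief.values(), default=1.0)   # assumed prior

    # prune: drop the longest paths, but always keep the most likely class
    most_likely = max(belief, key=belief.get)
    while len(paths) > K:
        longest = max((s for s in paths if s != most_likely),
                      key=lambda s: path_length(paths[s]))
        del paths[longest], belief[longest]
    return paths, belief
```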
After generating the set of possible paths I in distinct homotopy classes, a probability distribution p(i | u_{0...t}, E) is maintained over the paths at each time interval t, representing how likely path i is after observing the user's actions u_{0...t} up to time t. The context E comprises the wheelchair's current position (ω_x, ω_y, θ_z), its current velocity, and the environment map M. The user operation u_t is defined by the joystick input (j_x, j_y). Path i is represented by a set of equidistant waypoints, each waypoint p defined by a location (p_x, p_y). To calculate the probability distribution p(i | u_{0...t}, E), the standard method is to use Bayes' rule and compute p(u_t | i, E) ∝ p(u_t | i, u_{0...t-1}, E), assuming that, given the preferred path i_u and the environment E, the user's operation at time t is independent of the previous operations. The probability distribution is given by formula (1):

p(i | u_{0...t}, E) ∝ p(u_t | i, u_{0...t-1}, E) · p(i | u_{0...t-1}, E) ∝ p(u_t | i, E) · p(i | u_{0...t-1}, E)    (1)
Thereafter, p(u_t | i, E) is calculated using the MaxEntIRL principle, which states that p(u_t | i, E) is proportional to the exponential of the negative cost of reaching the target along path i. The calculation is shown in formulas (2) to (5):

p(u_t | i, E) ∝ exp(-C_i(u_t))    (2)

C_i(u_t) = τ · (α · C_cmd + β · C_ctrl)    (3)
The cost C_i(u_t) of following path i consists of two weighted parts: the command cost C_cmd and the control cost C_ctrl. The command cost C_cmd is computed as the difference between the direction of the user's joystick input and the direction towards a dynamically selected waypoint p along path i (explained below).
The control cost C_ctrl represents the difference between the wheelchair's current orientation θ_z and the bearing θ_p towards the waypoint p. The temperature factor τ allows the rate of belief change to be adjusted. Only angular differences are evaluated, because the waypoint is selected dynamically along each path i ∈ I according to the wheelchair's current linear velocity: the waypoint p selected on a path i is the nearest waypoint at least a distance s = v_linear × t along the path, where t is a time-horizon factor and v_linear is the wheelchair's linear velocity.
This way of calculating the cost function C_i(u_t) follows from observations of how users operate the wheelchair: whenever the user's goal is to move along a preferred path, he or she intuitively pushes the joystick in the direction of that path.
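A minimal sketch of this belief update (formulas (1) to (3)) is given below; formulas (4) and (5) are not reproduced in this text, so the angle-difference forms of C_cmd and C_ctrl, the joystick-to-direction mapping and the parameter values here are assumptions.

```python
# Illustrative sketch of the MaxEntIRL-style belief update over homotopy paths.
import math


def select_waypoint(path, wheelchair_xy, v_linear, t_horizon):
    """Nearest waypoint at least s = v_linear * t_horizon along the path
    ahead of the point closest to the wheelchair."""
    s = v_linear * t_horizon
    start = min(range(len(path)),
                key=lambda k: math.dist(path[k], wheelchair_xy))
    travelled = 0.0
    for k in range(start, len(path) - 1):
        travelled += math.dist(path[k], path[k + 1])
        if travelled >= s:
            return path[k + 1]
    return path[-1]


def angle_diff(a, b):
    return abs(math.atan2(math.sin(a - b), math.cos(a - b)))


def update_belief(belief, paths, joystick, pose, v_linear,
                  alpha=1.0, beta=1.0, tau=1.0, t_horizon=2.0):
    """belief: dict path_id -> probability; joystick: (j_x, j_y);
    pose: (x, y, theta_z)."""
    x, y, theta_z = pose
    u_dir = math.atan2(joystick[1], joystick[0])        # assumed joystick mapping
    new_belief = {}
    for pid, path in paths.items():
        px, py = select_waypoint(path, (x, y), v_linear, t_horizon)
        wp_dir = math.atan2(py - y, px - x)              # bearing to waypoint p
        c_cmd = angle_diff(u_dir, wp_dir)                # command cost (assumed form)
        c_ctrl = angle_diff(theta_z, wp_dir)             # control cost (assumed form)
        cost = tau * (alpha * c_cmd + beta * c_ctrl)     # formula (3)
        likelihood = math.exp(-cost)                     # formula (2)
        new_belief[pid] = likelihood * belief[pid]       # formula (1)
    z = sum(new_belief.values()) or 1.0
    return {pid: p / z for pid, p in new_belief.items()}
```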
After obtaining the probability distribution over the set of preferred paths, the probabilities are used to calculate an action a that moves the user towards the target along the user's preferred path. In theory, this problem can be modelled as a partially observable Markov decision process (POMDP), a principled method of planning under uncertainty, in which the human's preferred path is part of the hidden state. However, solving a POMDP with continuous actions is currently not feasible. In addition, work on shared-control robotic arms has found that information-gathering actions are not preferred during shared control, because they tend to push the robotic arm towards targets the user may not prefer. Thus, the POMDP is solved approximately as a QMDP. QMDP (a Markov decision process approximation, where Q refers to the Q-value) is a fully observable approximation of the POMDP policy and relies on Q-values to determine an action. For a grasping task under target uncertainty, the resulting policy moves the robotic arm towards the centre of the targets. This policy works for the grasping task because all targets share a common approach path.
For navigation tasks, however, steering towards the centre of all possible paths can be confusing to the human, and unlike grasping an object, the user may have to make considerable effort to return to the preferred path. Thus, the QMDP is treated as an MDP solution: the most likely path is used as the user's preferred path to form the system state together with the environment E, and the best action for that state is computed using the Dynamic Window Approach (DWA).
The Dynamic Window Approach (DWA) is a traditional method for local obstacle avoidance. The original DWA defines a target G on the map and, from the action set A generated by the dynamic velocity window, selects the optimal action a that minimizes a cost function C, as expressed by formulas (6) and (7):

C = w_c · Clearance + w_h · Heading + w_v · Velocity    (7)
The cost function C consists of three parts: Clearance, Velocity, and Heading. Clearance measures the distance to the nearest obstacle, representing the spaciousness of the surrounding environment. Velocity evaluates the cost of the linear and angular velocities, with faster velocities, where allowed, ultimately preferred. Heading calculates the difference between the current heading and the direction to the target G, indicating progress towards the target location. w_c, w_h and w_v assign a weight to each part.
Because the original dynamic window method has some drawbacks and is not integrated with shared control in this framework, a modified Shared Dynamic Window Approach is used, in which the cost function C is calculated as expressed by formulas (8) and (9):

C = 1 - Clearance + Clearance · Cost_cmd    (8)

Cost_cmd = w_h · Heading + w_v · Velocity    (9)
Here the distance values in Clearance are normalized to [0, 1], taking into account the dynamic constraints and the linear and angular distances to obstacles; this term addresses the safety of the system. To ensure safety, Clearance therefore always has the highest priority relative to the remaining cost terms. The user-control cost Cost_cmd again comprises Heading and Velocity terms. To fit the shared control framework according to the present embodiment, the Heading function here measures the degree of alignment between the executed heading and the direction obtained from the user's joystick input u, while the Velocity function describes how close the executed linear velocity is to the linear velocity the user expects, also derived from the joystick input u.
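As an illustration of formulas (8) and (9), one candidate velocity could be scored against the joystick input as in the hedged sketch below; the mapping from joystick deflection to a desired heading and speed, and the treatment of Heading and Velocity as costs to be minimized, are assumptions not spelled out in the text above.

```python
# Illustrative scoring of one candidate (v, w) against the joystick input,
# following the structure of formulas (8) and (9).
import math


def shared_dwa_cost(v, w, clearance, joystick, w_h=0.6, w_v=0.4,
                    v_max=1.0, dt=1.0):
    """clearance is assumed already normalized to [0, 1]; joystick = (j_x, j_y)."""
    desired_heading = math.atan2(joystick[1], joystick[0])   # assumed mapping
    desired_speed = v_max * min(1.0, math.hypot(*joystick))  # assumed mapping

    executed_heading = w * dt            # heading change produced by this action
    heading = abs(math.atan2(math.sin(executed_heading - desired_heading),
                             math.cos(executed_heading - desired_heading))) / math.pi
    velocity = abs(v - desired_speed) / v_max

    cost_cmd = w_h * heading + w_v * velocity                # formula (9)
    return 1.0 - clearance + clearance * cost_cmd            # formula (8)
```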
Since the system and method according to the present embodiment must also help users move along their preferred paths and handle complex tasks such as crossing doorways and making tight turns, a shared dynamic window approach that considers only the user's instantaneous input is inadequate, especially for users lacking fine motion control, for whom small erroneous inputs may lead to adverse consequences.
Therefore, the present embodiment proposes a goal-based shared dynamic window method, which not only keeps the user's low-level control safe but also helps the user navigate along the desired path and handle difficult tasks. To achieve this, a waypoint p along the most likely path is first obtained. Then an additional term, Cost_p, is added to the cost function used to compute the best action, calculated as shown in formula (10):

Cost_p = w_distance · Distance + w_direction · Direction    (10)
Distance measures the distance between the waypoint p and the rolled-out trajectory generated by the dynamic window, while Direction measures the alignment with the direction towards waypoint p at the point where the trajectory is closest to p. By adding the term Cost_p, rather than relying on pure obstacle avoidance as in the shared dynamic window approach, the system initiates intervention control to assist navigation along the most likely path. Cost_cmd and Cost_p are then combined into a total cost, as shown in formula (11):

Cost_total = w_cmd · Cost_cmd + w_goal · Cost_p    (11)
Changing the values of w_cmd and w_goal adjusts the degree of intervention of the system.
Finally, using a calculation similar to formula (8), the best action of the goal-based shared dynamic window method according to this embodiment is obtained as shown in formulas (12) and (13):

C = 1 - Clearance + Clearance · Cost_total    (13)
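Putting formulas (10) to (13) together, an end-to-end action-selection loop might look like the sketch below; the unicycle rollout, the clearance function and the cost normalizations are simplified placeholders rather than the patented implementation.

```python
# Illustrative sketch of the goal-based shared DWA action selection
# (formulas (10) to (13)). Everything is a simplified stand-in: the rollout is
# a unicycle forward simulation and clearance comes from a supplied function.
import math


def rollout(pose, v, w, horizon=2.0, dt=0.1):
    x, y, th = pose
    traj = []
    for _ in range(int(horizon / dt)):
        x += v * math.cos(th) * dt
        y += v * math.sin(th) * dt
        th += w * dt
        traj.append((x, y, th))
    return traj


def best_action(pose, window, waypoint, joystick, clearance_fn, cost_cmd_fn,
                w_cmd=0.5, w_goal=0.5, w_distance=0.7, w_direction=0.3):
    """window: iterable of (v, w) samples; clearance_fn(traj) -> [0, 1];
    cost_cmd_fn(v, w, joystick) -> [0, 1] as in formula (9)."""
    best, best_c = None, float("inf")
    for v, w in window:
        traj = rollout(pose, v, w)
        clear = clearance_fn(traj)
        if clear == 0.0:                       # unavoidable collision: skip
            continue
        # formula (10): distance to waypoint p and alignment at closest approach
        cx, cy, cth = min(traj, key=lambda s: math.hypot(s[0] - waypoint[0],
                                                         s[1] - waypoint[1]))
        dist = math.hypot(cx - waypoint[0], cy - waypoint[1])
        bearing = math.atan2(waypoint[1] - cy, waypoint[0] - cx)
        direction = abs(math.atan2(math.sin(cth - bearing),
                                   math.cos(cth - bearing))) / math.pi
        cost_p = w_distance * dist + w_direction * direction
        # formula (11): blend user command cost with goal cost
        cost_total = w_cmd * cost_cmd_fn(v, w, joystick) + w_goal * cost_p
        # formula (13): clearance keeps safety at the highest priority
        c = 1.0 - clear + clear * cost_total
        if c < best_c:
            best, best_c = (v, w), c
    return best
```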
Consider the case where the calculated most probable path is wrong. It is observed that the user gives a sufficient indication of the preferred class before approaching an area with little available space (e.g., a doorway), so the most likely path is usually correct by the time the user reaches a crowded environment. When users are in space with relatively few obstacles, they want to move freely and often change their preferred path. Thus, to prevent mispredictions from interfering unhelpfully with control, Clearance is used to dynamically adjust the weights between Cost_cmd and Cost_p.
As described above, the normalized Clearance is an indicator of safety and spaciousness. A value of 0 indicates that the corresponding action results in an unavoidable collision, while a value of 1 indicates that the action is guaranteed safe. Intermediate values represent a "risk" index, where values near 0 denote more dangerous actions and values near 1 denote less risky ones. Based on this risk index, the sum of the normalized Clearance values over all actions in A reflects the spaciousness of the environment E in the current state. Therefore, when the sum is high, driving is relatively safe and Cost_cmd is assigned a higher weight. When the sum is low, space may be limited, Cost_p becomes the dominant component, and system intervention better assists the user in completing the navigation task. The weights are calculated as shown in formulas (14) and (15):

w_goal = 1 - w_cmd    (15)
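Formula (14) is not reproduced in this text, so the sketch below is only an illustrative assumption of the described behaviour (a larger summed clearance gives the user's command more weight); the only part taken from the description is w_goal = 1 - w_cmd (formula (15)).

```python
# Illustrative assumption only: formula (14) is not reproduced here, so this
# simply maps the summed normalized clearance of the action set A to a user
# weight in [0, 1], giving the user more authority when space is plentiful.

def blend_weights(clearances):
    """clearances: normalized clearance in [0, 1] for every action in A."""
    if not clearances:
        return 1.0, 0.0
    spaciousness = sum(clearances) / len(clearances)   # assumed normalization
    w_cmd = spaciousness                               # stand-in for formula (14)
    w_goal = 1.0 - w_cmd                               # formula (15)
    return w_cmd, w_goal
```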
as described below, empirically observing, gaps (clearence) are used to determine the weight of the reward function in the user command, most of the time, resulting in predicted user behavior. Theoretically, however, the most likely path may still be wrong and result in the wheelchair bringing the user to an unintended direction. Some conventional techniques only address this problem by providing assistance when the entropy of the probability distribution is low or the probability of the most likely path is above a threshold. However, this solution is not suitable for navigation scenarios where the initial segment in a more path follows a similar path segment but then diverges again. In this case, since the probabilities of all paths are almost equal, no assistance is provided to the user to enter the doorway.
To verify the method and system according to this embodiment, a series of experiments were performed in simulation and on a real wheelchair, as described below. The purpose of the experiments was to verify the following hypothesis: compared with the shared dynamic window method, the method and system according to the present embodiment require less operating effort and complete tasks faster, while giving the user control over the wheelchair comparable to that of the shared dynamic window method.
As shown in diagrams 400, 450 of fig. 4A and 4B, simulations were performed using two test scenarios, with diagram 400 depicting an Amazon Web Services hospital scenario and diagram 450 depicting a doorway-crossing scenario. Both scenarios were built in the Robot Operating System (ROS) and Gazebo, and in both scenarios there is a fixed start point 410, 460 and target 420, 470, with a predetermined path 430, 480 created between them, meaning that both a nominal "target" and a "preferred path" are preset for comparison. The preferred path is known only to the user and is unknown to the shared control method. The subjects tested in simulation were 18 healthy subjects (9 men and 9 women) and 2 subjects with cerebral palsy (2 men). All procedures were approved by the Institutional Review Board of Nanyang Technological University.
Before testing, each subject first drove the simulated wheelchair for five minutes to become familiar with its operation, speed, acceleration, and even collisions. To simulate a hand impairment, all subjects were asked to use their non-dominant hand and to wear a 2 kg weight on that hand, as seen in photograph 500 in fig. 5, temporarily introducing stiff palms and fingers that limit fine motion control during the experiments. For reference, subjects were also required to complete another set of experiments using their dominant hand without the weight. For comparison with the traditional shared dynamic window method, the same experiments were performed by all subjects using the shared dynamic window method.
During the tests, the subject was required to follow the path as closely as possible while navigating to the target. Each subject was required to perform the task twice in each scenario (i.e., the hospital scenario and the doorway-crossing scenario) for each method (the method according to this embodiment and the traditional shared dynamic window method) and each hand-weight combination (non-dominant hand with weight, dominant hand without weight), so each subject performed a total of sixteen trials.
As previously described, the experiment also recruited two additional subjects diagnosed with cerebral palsy. The tests for these subjects were similar to those for healthy subjects, except that the two subjects did not wear the 2 kg weight and used only their dominant hand. Thus, each of the two subjects was required to perform the task twice in each scenario (hospital scenario and doorway-crossing scenario) with each method (the method according to the present embodiment and the traditional shared dynamic window method), i.e., eight trials per subject in total.
A real-wheelchair experiment was performed with the cerebral palsy subjects, who were seated in a real wheelchair 1410 as shown in diagram 1400 of fig. 14. Wheelchair 1410 includes a laser radar (LIDAR) range scanner 1420 and a joystick input device 1430. The subject provides user input with the dominant hand via the joystick input device 1430. The real-world test includes a scenario that requires the user to enter and exit a door while making a sharp turn, as shown in photograph 1500 of fig. 15. The starting position, target position, and preferred path are indicated to the user by the dashed line marked with tape 1510. The real-world test scenario is a simplified version of the hospital scenario, in which cardboard boxes 1520 represent doorways and walls. The user must first exit the "door" 1530, make a sharp turn 1540 to enter the next door 1550, and complete the task along the dashed tape line 1510.
To evaluate performance, several parameters were recorded and calculated for each trial, including the trial completion time t_task and the dynamic time warping (DTW) distance d_DTW between the predefined path and the path executed by the subject. t_task reflects how quickly the user completes the task, while d_DTW shows how closely the user was able to follow his or her preferred path.
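For reference, a textbook dynamic time warping distance between two 2-D trajectories can be computed as below; the experiments may use a different DTW variant, so this is only a generic sketch.

```python
# Standard DTW between two 2-D trajectories (lists of (x, y) points);
# the experiments may use a different variant, this is just the textbook form.
import math


def dtw_distance(path_a, path_b):
    n, m = len(path_a), len(path_b)
    inf = float("inf")
    dp = [[inf] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(path_a[i - 1], path_b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # insertion
                                  dp[i][j - 1],      # deletion
                                  dp[i - 1][j - 1])  # match
    return dp[n][m]
```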
After each trial with healthy subjects, feedback was solicited on the control during the trial, method preferences, dissatisfactions, and other factors, along with comments and suggestions for future improvement. After the tests with disabled subjects, semi-structured interviews were also conducted, asking them and their caregivers for suggestions and complaints about using the wheelchair.
The experimental results for the healthy subjects are shown in Table 1, where the data are subdivided by algorithm (the shared dynamic window method and the method according to this embodiment, labelled "proposed method"), scenario (doorway and hospital) and load (dominant hand without weight and non-dominant hand with weight).
Table 1 - Simulation results for 18 healthy subjects
The proposed method helps the user complete tasks faster than the Shared Dynamic Window Approach while maintaining similar control authority.
Figs. 6A and 6B depict two attempts by the first disabled subject in the hospital scenario using the shared dynamic window method, where the dashed line represents the reference path, the rectangles represent doorway areas, and the solid line represents the subject's trajectory. In a similar manner, figs. 7A and 7B depict two attempts by the same subject in the hospital scenario using the simulated method and system according to the present embodiment. Figs. 8A and 8B depict two attempts by the second disabled subject in the hospital scenario using the shared dynamic window method, while figs. 9A and 9B depict two attempts by the second disabled subject in the hospital scenario using the simulated method and system according to the present embodiment. For both subjects, the operational improvement of the method and system according to the present embodiment over the traditional shared dynamic window method is clear.
Similarly, figs. 10A and 10B depict two attempts by the first disabled subject in the doorway-crossing scenario using the shared dynamic window method, with the dashed line representing the reference path, the rectangles representing doorway areas, and the solid line representing the user's trajectory. Figs. 11A and 11B depict two attempts by the same subject in the doorway-crossing scenario using the simulated method and system according to the present embodiment. Figs. 12A and 12B depict two attempts by the second disabled subject in the doorway-crossing scenario using the shared dynamic window method, while figs. 13A and 13B depict two attempts by the second subject in the doorway-crossing scenario using the simulated method and system in accordance with the present embodiment. Also in the doorway-crossing scenario, the improvement of the method and system according to the present embodiment over the traditional shared dynamic window method is clearly seen for both subjects.
Fig. 16A and 16B depict photographs 1600 and illustrations 1650 showing how a method and system according to the present embodiment helps one of the cerebral palsy subjects enter a narrow door in the event that a conventional shared dynamic window method would guide the user away. While both the output 1610 of the shared dynamic window approach and the inaccurate user input 1620 will guide the wheelchair 1410 away from a narrow doorway, the intent prediction based output 1630 according to this embodiment may correctly guide the robotic wheelchair 1410 through a narrow doorway.
The experimental results for the two disabled subjects are summarized in Table 2, where the data are subdivided by algorithm (the shared dynamic window method and the method according to this embodiment, labelled "proposed method") and scenario (doorway, hospital and real-world test).
Table 2 - Simulation and real-world test results for 2 subjects with cerebral palsy
* Subject 1 failed to complete the hospital task because he was unable to enter the doorway twice.
The reported t_task is from an additional successful trial.
Using repeated-measures analysis of variance (ANOVA), healthy subjects were found to complete the task in 34.28 seconds on average using the method according to this embodiment, reliably faster [F(1,17) = 7.52, p < 0.05] than the 37.88 seconds using the traditional shared dynamic window method.
Wearing the 2 kg weight limits fine motion control and increases task completion time: healthy subjects took an average of 37.23 seconds with the weight versus 34.93 seconds without it [F(1,17) = 9.35, p < 0.01].
Moreover, the hospital scenario (47.69 seconds) required a longer completion time [F(1,17) = 75.62, p < 0.0001] than the doorway scenario (24.54 seconds), and both algorithms gave similar completion times (no algorithm × scenario interaction, F(1,17) = 1.21, p > 0.1).
When examining how closely users followed the given path, the two algorithms were found to be comparable [F(1,17) = 0.93, p > 0.1]. This indicates that the method and system according to the present embodiment give the user at least as much control authority to follow his or her preferred path as the shared dynamic window method.
Throughout the experiments, the subjects' operation of the joystick was observed and feedback was collected after each trial, as described above. The relevant points are summarized as follows.
(I) While subjects were able to perform all experiments well, some tended to prefer the shared dynamic window method because they felt more in control of the robot, even though the statistics showed that the method proposed according to this embodiment performed slightly better. However, subjects unable to exert fine motor control rated the proposed method higher, because they could complete the trials faster and more easily.
(II) As subjects approached the narrow doorway in the hospital scenario, system intervention increased due to the limited space. However, because the system took a short path towards the door, most subjects perceived it as potentially dangerous and began to fight the system, trying to pull the wheelchair back to the angle from which they wanted to approach the door.
(III) All subjects acknowledged that the proposed method according to the present embodiment requires less effort to complete the task, so they did not need to concentrate as hard, especially at high speeds.
(IV) One subject was less satisfied with the proposed method because she felt its path planning was too aggressive due to its shortcutting nature; she preferred to accomplish the task in a gentler manner.
(V) Another subject indicated that when the system began to intervene, he could clearly feel that it was not responding to his input and was confused about whether his input was wrong; he would therefore prefer an explicit notification when the system is intervening.
These points indicate that users always want more control over wheelchair navigation, regardless of how good or poor their fine motion control is. They also illustrate that a lack of trust in the system leads users to fight the system when conflicts arise between the system and the user. Thus, a convenient approach is to assist the user with as little perceived intervention as possible. Furthermore, adding explicit feedback to inform and interact with the user, so that the user is aware of system interventions, may improve the operation of the method and system according to the present embodiments.
As can be seen from Table 2, using the proposed method, the task completion time in both simulation scenarios is significantly reduced. In the hospital scenario, subject 1 failed to complete the task in both attempts with the shared dynamic window method, as the wheelchair kept facing the wall rather than the gap of the doorway. With the intent-prediction-based method according to the present embodiment, he was guided correctly to the doorway and completed every task. Thus, the method and system according to the present embodiment not only help cerebral palsy subjects complete tasks faster, but also enable them to perform fine motion control tasks. Similarly, in the real wheelchair experiment with subject 2, when he tried to exit the door he was blocked by the shared dynamic window method, while the method according to the present embodiment helped him exit. This is reflected in the real test scenario, where subject 2's completion time with the shared dynamic window method was about 30% longer. However, as can be seen from the user paths shown in figs. 6A to 13B, the two subjects deviated considerably from the intended path due to their physical impairments. Such deviations can lead to erroneous intent predictions and, occasionally, erroneous assistance. Modeling the user's impairment with a POMDP, personalizing the intent prediction, and planning under uncertainty could help address this issue.
It can be seen that the present embodiment provides a shared control system and method that better assists the user in navigating by using knowledge of the user's final target, while giving the user appropriate control authority. Although the example of a powered wheelchair has been described and presented, the shared control of target guided navigation according to the present embodiment is equally applicable to the navigation control of other robots or vehicles. Quantitative experiments in simulation with eighteen human subjects showed that the system and method according to the present embodiment reduce the time and effort required to complete the task compared to a baseline system that does not consider the user's intent. The deviation from the reference path was comparable, indicating that the control authority granted by the two systems is similar. Subjective evaluation shows that users whose dexterity was reduced by the introduced weight were more inclined to use the system and method according to this embodiment than the baseline system.
Experiments conducted with the cerebral palsy subjects in the hospital environment clearly demonstrate that the system and method according to the present embodiments can help users with upper-limb disabilities, as one subject could only complete the tasks when using the system and method according to the present embodiments.
It is anticipated that the system and method according to the present embodiments can be adapted to the particular disabilities of the user operating the wheelchair and can take dynamic obstacles in the environment into account. Further, when the most likely path differs from the user's intended path, modifications can be made to avoid unnecessary assistance.
The market for electric wheelchairs was $11 billion in 2011 and was estimated to reach $39 billion by 2018. Market growth comes largely from the need for mobility among those who might otherwise be bedridden. The growth of the elderly population, especially those aged 65, 75 and over 85, is a major driver of the wheelchair market, and future growth will be supported by the greater efficiency and functional improvements of electric wheelchairs over manual wheelchairs.
Electric wheelchairs require cognitive and physical skills that not everyone possesses. One survey showed that ten to forty percent of elderly people who need powered mobility cannot be fitted with an electric wheelchair because sensory impairment, poor motor function or cognitive deficits prevent them from driving safely with any existing control device. The trend in the wheelchair market is integration with the latest enabling technologies, such as advanced human-robot interfaces (HRI), obstacle-avoiding navigation, stair-climbing technologies, and adaptive assistance technologies such as those provided by the systems and methods according to the present embodiments.
While exemplary embodiments have been presented in the foregoing detailed description, it should be appreciated that a vast number of variations exist. It should also be appreciated that the exemplary embodiment or embodiments are only examples and are not intended to limit the scope, applicability, operation or configuration of the invention in any way. Rather, the foregoing detailed description provides those skilled in the art with a convenient road map for implementing an exemplary embodiment of the invention, it being understood that various changes may be made in the function and arrangement of elements and in the method of operation described in an exemplary embodiment without departing from the scope of the invention as set forth in the appended claims.

Claims (21)

1. A target guided navigation method with shared control, the method comprising:
generating a user intent set for navigating a plurality of paths to a predetermined target;
predicting one intent of the user intent set as a preferred path in response to calculating a probability of each of the plurality of paths reaching the predetermined target; and
using the preferred path as a guideline for shared dynamic window approach (DWA)-based local path planning.
2. The method of claim 1, wherein generating the user intent set for navigating the plurality of paths to the predetermined target comprises limiting the number of the plurality of paths using homotopy classes.
3. The method of claim 2, wherein generating the user intent set for navigating the plurality of paths to the predetermined target further comprises calculating the paths belonging to each homotopy class among the plurality of paths using a Voronoi graph.
4. The method of claim 2 or claim 3, wherein calculating the probability of each of the plurality of paths reaching the predetermined target comprises maintaining a probability distribution over each of the plurality of paths in the homotopy classes.
5. The method of claim 4, wherein maintaining the probability distribution over each of the plurality of paths in the homotopy classes comprises maintaining the probability distribution using maximum entropy inverse reinforcement learning (MaxEnt IRL).
6. The method of any one of the preceding claims, wherein calculating the probability of each of the plurality of paths reaching the predetermined target comprises calculating the probability based on a cost function.
7. The method of any one of the preceding claims, wherein the shared DWA-based local path planning comprises identifying user input, the method further comprising determining a final control command based on the user input.
8. The method of claim 7, wherein determining the final control command for the target guided navigation comprises determining the final control command for the target guided navigation using inner loop mixing.
9. The method of claim 7 or claim 8, wherein determining the final control command for the target guided navigation comprises determining the final control command for the target guided navigation while calculating a reward for each candidate velocity in the shared DWA-based local path planning.
10. The method of any one of claims 7 to 9, wherein determining the final control command based on the user input comprises determining the final control command based on weighted user input, wherein a weighting factor of the weighted user input is determined responsive to a congestion measure of the environment of the preferred path.
11. The method of claim 10, wherein the weighting factor of the weighted user input is further determined responsive to actions for navigating through possible homotopy classes to reach the predetermined target.
12. A target guided navigation system for shared control, the system comprising:
a user control device; and
A shared navigation controller coupled to the user control device, the shared navigation controller generating a user intent set for navigating a plurality of paths to a predetermined target, predicting one intent of the user intent set as a preferred path in response to calculating a probability of each of the plurality of paths reaching the predetermined target, and using the preferred path as a guideline for shared dynamic window approach (DWA)-based local path planning, wherein the shared DWA-based local path planning comprises identifying user input received from the user control device, the shared navigation controller further determining a final control command based on the user input.
13. The system of claim 12, wherein the shared navigation controller generating the user intent set for navigating the plurality of paths to the predetermined target comprises limiting the number of the plurality of paths using homotopy classes and calculating the paths belonging to each homotopy class among the plurality of paths using a Voronoi graph.
14. The system of claim 13, wherein the shared navigation controller calculates the probability of each of the plurality of paths reaching the predetermined target by maintaining a probability distribution over each of the plurality of paths in the homotopy classes.
15. The system of any one of claims 12 to 14, wherein the shared navigation controller calculates the probability of each of the plurality of paths reaching the predetermined target based on a cost function.
16. The system of any one of claims 12 to 15, wherein the shared navigation controller determines the final control command for the target guided navigation using inner loop mixing.
17. The system of any one of claims 12 to 16, wherein the shared navigation controller determines the final control command for the target guided navigation while calculating a reward for each candidate velocity in the shared DWA-based local path planning.
18. The system of any one of claims 12 to 17, wherein the shared navigation controller determines the final control command based on weighted user input, and wherein the shared navigation controller calculates a weighting factor of the weighted user input responsive to a congestion measure of the environment of the preferred path.
19. The system of claim 18, wherein the shared navigation controller further calculates the weighting factor of the weighted user input responsive to actions for navigating through possible homotopy classes to reach the predetermined target.
20. The system of claim 18, wherein the shared navigation controller is a wheelchair navigation controller.
21. A computer-readable medium comprising instructions for a shared navigation controller to perform a method of shared control for target guided navigation, the instructions causing the shared navigation controller to perform operations comprising:
generating a user intent set for navigating a plurality of paths to a predetermined target;
predicting one of the user intent sets as a preferred path in response to calculating a probability of each of the plurality of paths reaching the predetermined target;
using the preferred path as a guideline for shared dynamic window approach (DWA)-based local path planning, wherein the shared DWA-based local path planning comprises identifying received user input; and
determining a final control command based on the user input.
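For implementers reading claim 1, a minimal end-to-end sketch of the claimed flow is given below (illustration only, not part of the claims): candidate paths form the user intent set, a probability is computed for each, and the most probable path becomes the guideline handed to a shared DWA local planner such as the one sketched earlier in the description. The straight-line cost and the softmax are stand-ins assumed for this sketch.

import math

# Hedged, high-level sketch of the flow recited in claim 1; the straight-line
# cost and the softmax are simplified stand-ins, not the patented formulation.

def path_cost(path):
    """Length of a polyline path; a stand-in for the cost function of claim 6."""
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

def predict_preferred_path(candidate_paths):
    """Softmax over negative costs; the most probable path is the preferred path."""
    weights = [math.exp(-path_cost(p)) for p in candidate_paths]
    probs = [w / sum(weights) for w in weights]
    return candidate_paths[probs.index(max(probs))], probs

# The user intent set: two ways of reaching the same target through different doors.
left_door = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (5.0, 0.0)]
right_door = [(0.0, 0.0), (1.0, -1.0), (4.0, -1.0), (5.0, 0.0)]
guide, probs = predict_preferred_path([left_door, right_door])
print(probs)  # per-path probabilities of reaching the target
print(guide)  # guideline handed to the shared DWA local planner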
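Claims 2 and 3 limit the intent set to one representative path per homotopy class, computed in the embodiment from a Voronoi graph. The sketch below illustrates only the grouping step, using a coarse winding-angle signature around each obstacle as a stand-in for a proper homotopy invariant; the Voronoi-graph generation of the candidate paths themselves is assumed to happen elsewhere.

import math

# Hedged sketch of grouping candidate paths by homotopy class: paths that pass
# on the same side of every obstacle share a signature, and only one
# representative per signature is kept. The rounded winding angle is a
# simplification assumed for this sketch, not the claimed construction.

def winding_signature(path, obstacles):
    sig = []
    for ox, oy in obstacles:
        total = 0.0
        for (x0, y0), (x1, y1) in zip(path, path[1:]):
            a0 = math.atan2(y0 - oy, x0 - ox)
            a1 = math.atan2(y1 - oy, x1 - ox)
            d = a1 - a0
            # wrap the per-segment angle change into (-pi, pi]
            while d > math.pi:
                d -= 2 * math.pi
            while d <= -math.pi:
                d += 2 * math.pi
            total += d
        sig.append(round(total / math.pi))  # coarse bucket per obstacle
    return tuple(sig)

def one_path_per_class(paths, obstacles):
    reps = {}
    for p in paths:
        reps.setdefault(winding_signature(p, obstacles), p)
    return list(reps.values())

# Example: two paths around a single obstacle fall into different classes,
# while a third path on the same side as the first is dropped as redundant.
obstacle = [(2.0, 0.0)]
above = [(0.0, 0.0), (2.0, 1.0), (4.0, 0.0)]
below = [(0.0, 0.0), (2.0, -1.0), (4.0, 0.0)]
above2 = [(0.0, 0.0), (2.0, 1.5), (4.0, 0.0)]
print(len(one_path_per_class([above, below, above2], obstacle)))  # -> 2

Keeping one representative per class is what keeps the user intent set small even in environments with several doorways or corridors.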
CN202280023681.6A 2021-04-08 2022-04-06 Target guided wheelchair navigation method and system for sharing control Pending CN117178165A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
SG10202103600T 2021-04-08
SG10202103600T 2021-04-08
PCT/SG2022/050197 WO2022216232A1 (en) 2021-04-08 2022-04-06 Methods and systems for shared control of goal directed wheelchair navigation

Publications (1)

Publication Number Publication Date
CN117178165A true CN117178165A (en) 2023-12-05

Family

ID=83546652

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202280023681.6A Pending CN117178165A (en) 2021-04-08 2022-04-06 Target guided wheelchair navigation method and system for sharing control

Country Status (2)

Country Link
CN (1) CN117178165A (en)
WO (1) WO2022216232A1 (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112325884B (en) * 2020-10-29 2023-06-27 广西科技大学 DWA-based ROS robot local path planning method
CN112525202A (en) * 2020-12-21 2021-03-19 北京工商大学 SLAM positioning and navigation method and system based on multi-sensor fusion

Also Published As

Publication number Publication date
WO2022216232A1 (en) 2022-10-13

Similar Documents

Publication Publication Date Title
Mead et al. Autonomous human–robot proxemics: socially aware navigation based on interaction potential
Kretzschmar et al. Socially compliant mobile robot navigation via inverse reinforcement learning
Lam et al. Human-centered robot navigation—towards a harmoniously human–robot coexisting environment
Kivrak et al. Social navigation framework for assistive robots in human inhabited unknown environments
Li et al. Dynamic shared control for human-wheelchair cooperation
Fernández et al. Improving collision avoidance for mobile robots in partially known environments: the beam curvature method
Geravand et al. An integrated decision making approach for adaptive shared control of mobility assistance robots
Lopes et al. A new hybrid motion planner: Applied in a brain-actuated robotic wheelchair
Xu et al. Reinforcement learning-based shared control for walking-aid robot and its experimental verification
Morales et al. Passenger discomfort map for autonomous navigation in a robotic wheelchair
US11686583B2 (en) Guidance robot and method for navigation service using the same
WO2020129312A1 (en) Guidance robot control device, guidance system in which same is used, and guidance robot control method
JPWO2020129309A1 (en) Guidance robot control device, guidance system using it, and guidance robot control method
WO2020129311A1 (en) Device for controlling guidance robot, guidance system in which same is used, and method for controlling guidance robot
Jiménez et al. Bringing proxemics to walker-assisted gait: using admittance control with spatial modulation to navigate in confined spaces
Montero et al. Dynamic warning zone and a short-distance goal for autonomous robot navigation using deep reinforcement learning
Lei et al. An intention prediction based shared control system for point-to-point navigation of a robotic wheelchair
Urdiales et al. Efficiency based reactive shared control for collaborative human/robot navigation
Onyango et al. A driving behaviour model of electrical wheelchair users
Ghandour et al. A hybrid collision avoidance system for indoor mobile robots based on human-robot interaction
Qian et al. Socially acceptable pre-collision safety strategies for human-compliant navigation of service robots
Poon et al. Learning from demonstration for locally assistive mobility aids
WO2020129310A1 (en) Guide robot control device, guidance system using same, and guide robot control method
CN117178165A (en) Target guided wheelchair navigation method and system for sharing control
Qian et al. Robotic etiquette: Socially acceptable navigation of service robots with human motion pattern learning and prediction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination