WO2022247325A1 - Walking-aid robot navigation method, walking-aid robot, and computer-readable storage medium - Google Patents

Walking-aid robot navigation method, walking-aid robot, and computer-readable storage medium Download PDF

Info

Publication number
WO2022247325A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
robot
trajectory
walking
specified task
Prior art date
Application number
PCT/CN2022/073209
Other languages
English (en)
French (fr)
Inventor
郭德骏
斯雷吉思阿拉文
董初桥
邵丹
修震
谭欢
Original Assignee
深圳市优必选科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市优必选科技股份有限公司
Priority to CN202280004524.0A (published as CN115698631A)
Publication of WO2022247325A1


Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61HPHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H3/00Appliances for aiding patients or disabled persons to walk about
    • A61H3/04Wheeled walking aids for disabled persons
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/0094Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot involving pointing a payload, e.g. camera, weapon, sensor, towards a fixed or moving target
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/251Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61HPHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H3/00Appliances for aiding patients or disabled persons to walk about
    • A61H3/04Wheeled walking aids for disabled persons
    • A61H2003/043Wheeled walking aids for disabled persons with a drive mechanism
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61HPHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H3/00Appliances for aiding patients or disabled persons to walk about
    • A61H3/04Wheeled walking aids for disabled persons
    • A61H2003/046Wheeled walking aids for disabled persons with braking means
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61HPHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H2201/00Characteristics of apparatus not provided for in the preceding codes
    • A61H2201/16Physical interface with patient
    • A61H2201/1602Physical interface with patient kind of interface, e.g. head rest, knee support or lumbar support
    • A61H2201/1635Hand or arm, e.g. handle
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61HPHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H2201/00Characteristics of apparatus not provided for in the preceding codes
    • A61H2201/16Physical interface with patient
    • A61H2201/1657Movement of interface, i.e. force application means
    • A61H2201/1659Free spatial automatic movement of interface within a working area, e.g. Robot
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61HPHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H2201/00Characteristics of apparatus not provided for in the preceding codes
    • A61H2201/50Control means thereof
    • A61H2201/5007Control means thereof computer controlled
    • A61H2201/501Control means thereof computer controlled connected to external computer devices or networks
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61HPHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H2201/00Characteristics of apparatus not provided for in the preceding codes
    • A61H2201/50Control means thereof
    • A61H2201/5058Sensors or detectors
    • A61H2201/5092Optical sensor
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61HPHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H2230/00Measuring physical parameters of the user
    • A61H2230/62Posture
    • A61H2230/625Posture used as a control parameter for the apparatus

Definitions

  • the present application relates to navigation technology, in particular to a navigation method for a walking-assisted robot and a walking-assisting robot using the method.
  • Mobility aids are devices designed to assist in walking, or otherwise improve the mobility of people with reduced mobility.
  • Common mobility aids such as canes, wheelchairs, and walkers, assist users with impaired walking abilities by providing support and mobility.
  • The research on and functions of existing walker robots mainly focus on navigation, including trajectory planning and motion control, to solve the parking problem at the end of navigation, while ignoring the integration of body detection with the navigation functions, so they cannot provide navigation that adapts to the user's different postures.
  • The present application provides a navigation method for a walking-aid robot, and a walking-aid robot using the method that has a camera and a grip part set toward different directions, so as to navigate the walking-aid robot to a user to perform a specified task for the user.
  • The aim is to solve the problems existing in the navigation function of the walker robots of the aforementioned prior art.
  • Embodiments of the present application provide a navigation method for navigating a walking-aid robot to a user so as to perform a specified task for the user, wherein the robot has a camera and at least one grip part set toward different directions. The method includes:
  • recognizing a posture of the user through the camera;
  • determining a mode of the robot according to a type of the specified task to be performed for the user and the recognized posture of the user;
  • controlling the robot to move according to a planned trajectory corresponding to the determined mode of the robot, wherein the trajectory includes a sequence of poses, the last pose in the sequence of poses being a desired pose; and
  • in response to the determined mode of the robot corresponding to the specified task of an assistance type and the user being in one of a standing posture and a sitting posture, turning the robot upon reaching the desired pose so that the grip part faces the user.
  • Embodiments of the present application also provide a walking-aid robot, including: a camera set toward a forward direction of the robot; at least one grip part set toward a direction different from that of the camera; one or more processors; and
  • one or more memories storing one or more computer programs to be executed by the one or more processors, wherein the one or more computer programs include instructions for:
  • recognizing a posture of a user through the camera; determining a mode of the robot according to a type of a specified task to be performed for the user and the recognized posture of the user;
  • controlling the robot to move according to a planned trajectory corresponding to the determined mode of the robot, wherein the trajectory includes a sequence of poses, the last pose in the sequence of poses being a desired pose; and
  • in response to the determined mode of the robot corresponding to the specified task of an assistance type and the user being in one of a standing posture and a sitting posture, turning the robot upon reaching the desired pose so that the grip part faces the user.
  • Embodiments of the present application also provide a computer-readable storage medium storing one or more computer programs, wherein the one or more computer programs include instructions that, when executed by a walking-aid robot having a camera and at least one grip part set toward different directions, cause the robot to: recognize a posture of a user through the camera; determine a mode of the robot according to a type of a specified task to be performed for the user and the recognized posture of the user;
  • control the robot to move according to a planned trajectory corresponding to the determined mode of the robot, wherein the trajectory includes a sequence of poses, the last pose in the sequence of poses being a desired pose; and, in response to the determined mode of the robot corresponding to the specified task of an assistance type and the user being in one of a standing posture and a sitting posture, turn the robot upon reaching the desired pose so that the grip part faces the user.
  • The walking-aid robot navigation method provided by the present application, and the walking-aid robot using the method, navigate the walking-aid robot to the user so as to perform the specified task for the user by combining the recognized posture of the user with the mode of the walking-aid robot, thereby solving the problem that the navigation function of prior-art walker robots ignores body (posture) detection.
  • Fig. 1A is a schematic diagram of a navigation scene of a walking assistant robot in some embodiments of the present application.
  • FIG. 1B is a schematic diagram of navigating the walking assistant robot to a first desired pose in the scene of FIG. 1A .
  • FIG. 1C is a schematic diagram of navigating the walking-assisted robot to a second desired pose in the scene of FIG. 1A .
  • Figure 2A is a perspective view of a walking assistance robot in some embodiments of the present application.
  • Fig. 2B is a schematic block diagram illustrating the walking assistance robot of Fig. 2A.
  • Fig. 3 is a schematic block diagram of an example of navigation performed by the walking-assisted robot in Fig. 2A.
  • FIG. 4A is a flowchart of an example of determining a mode in the example of navigation of FIG. 3 .
  • Fig. 4B is a schematic diagram illustrating different positions of a desired pose.
  • Fig. 5 is a schematic block diagram of another example of navigation performed by the walking-assisted robot in Fig. 2A.
  • Fig. 6A is a schematic block diagram of an example of navigating according to a planned trajectory in the example of navigating by the walking-assisted robot in Fig. 5 .
  • Fig. 6B is a schematic block diagram of an example of trajectory planning in the example of navigation of Fig. 6A.
  • FIG. 6C is a schematic block diagram of an example of turning and approaching in an example of navigation performed by the walking-assisted robot of FIG. 5 .
  • Fig. 7 is a schematic block diagram of an example of a state machine of the walking assistance robot of Fig. 2A.
  • The terms “first”, “second”, and “third” in this application are for descriptive purposes only, and cannot be understood as indicating or implying relative importance, or implying the number of the technical features referred to. Thus, a feature defined by “first”, “second”, or “third” may explicitly or implicitly include at least one such technical feature.
  • “plurality” means at least two, such as two, three, etc., unless otherwise clearly defined.
  • “One embodiment” or “some embodiments” described in the specification of the present application means that one or more embodiments of the present application may include a particular feature, structure, or characteristic related to the description of that embodiment.
  • Thus, the phrases “in one embodiment,” “in some embodiments,” “in some other embodiments,” “in other embodiments,” etc. appearing in various places in the specification do not mean that the described embodiment must be referenced by all other embodiments, but rather by “one or more but not all other embodiments”, unless otherwise specifically stated.
  • The term “walking-aid robot” refers to a mobile robot having the ability to assist its user in walking, which may have a structure such as that of a walker or a wheelchair.
  • The term “trajectory planning” refers to finding a sequence of valid configurations, parameterized by time, that moves a mobile machine from a source to a destination, where a “trajectory” denotes a time-stamped sequence of poses (for reference, a “path” denotes a sequence of poses or positions without time stamps), and the term “pose” refers to a position (e.g., x and y coordinates on the x and y axes) and an attitude (e.g., a yaw angle about the z axis).
  • the term “navigation” refers to the process of monitoring and controlling the movement of a mobile machine from one place to another, and the term “collision avoidance” refers to preventing collisions or reducing the severity of collisions.
  • the term “sensor” refers to a device, module, machine or subsystem (such as a camera) whose purpose is to detect events or changes in its environment and send relevant information to other electronic devices (such as a processor).
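  • To make the distinction between a pose, a path, and a trajectory concrete, the following is a minimal sketch (not taken from the patent; the names `Pose`, `TrajectoryPoint`, `Path`, and `Trajectory` are illustrative) of how a time-stamped pose sequence might be represented:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Pose:
    """A pose: position (x, y) plus attitude (yaw about the z axis)."""
    x: float      # x coordinate, meters
    y: float      # y coordinate, meters
    yaw: float    # yaw angle, radians

@dataclass
class TrajectoryPoint:
    """One element of a trajectory: a pose together with a time stamp."""
    pose: Pose
    t: float      # time stamp, seconds

# A "path" is just a sequence of poses; a "trajectory" adds the time stamps.
Path = List[Pose]
Trajectory = List[TrajectoryPoint]
```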
  • FIG. 1A is a schematic diagram of a navigation scene of a walking assistant robot 100 in some embodiments of the present application.
  • The walker robot 100 is navigated in its environment (e.g., a living room) to perform assistance tasks such as walking assistance, stopping for pick-up, and stopping to communicate with the user U, while preventing dangerous situations such as collisions and unsafe conditions (e.g., falls, extreme temperatures, radiation, and exposure).
  • the walking-assisting robot 100 navigates from a starting point (e.g., where the walking-assisting robot 100 is initially located) to a destination (e.g., a position of a navigation target indicated by the user U or the navigation/operating system of the walking-assisting robot 100), At the same time, obstacles (such as walls, furniture, people, pets and garbage) can be avoided to prevent the above-mentioned dangerous situations. It is necessary to plan the trajectory (such as trajectory T 1 and trajectory T 2 ) of the walking-assisting robot 100 moving from the starting point to the destination (such as destination 1 and destination 2 ), so as to move the walking-assisting robot 100 according to the trajectory. Each trajectory includes a sequence of poses (eg poses S 0 -S 6 of trajectory T 2 ).
  • The start point and the destination only represent positions of the walking-aid robot 100 in the scene as shown in the figure, rather than the real start and end of the trajectory (the real start and end of the trajectory should each be a pose, for example the initial poses S i1 , S i2 and the desired poses S d1 , S d2 in FIG. 1A).
  • In order to realize the navigation of the walking-aid robot 100, it is necessary to construct an environment map, determine the position of the walking-aid robot 100 in the environment, and plan a trajectory based on the constructed map and the determined position of the walking-aid robot 100.
  • the first initial pose S i1 is the starting point of the trajectory T1
  • the second initial pose S i2 is the starting point of the trajectory T2
  • FIG. 1B is a schematic diagram of navigating the walking-assisting robot 100 to a first desired pose S d1 in the scene of FIG. 1A
  • FIG. 1C is a schematic diagram of navigating the walking-assisting robot 100 to a second desired pose S d2 in the scene of FIG. 1A
  • the first desired pose Sd1 is the last one of the pose sequence S in the trajectory T1 , that is, the end of the trajectory T1
  • the second desired pose Sd2 is the last one of the pose sequence S in the trajectory T2 , that is, the trajectory end of T2 .
  • the trajectory T 1 can be planned according to the shortest path to the user U in the constructed map, and the trajectory T 2 can be planned corresponding to the facing direction D f of the user U, for example.
  • avoidance of collisions with obstacles in the constructed map (such as walls and furniture) or obstacles detected in real time (such as humans and pets) can be considered during planning for more accurate and safe navigation of the walking-assisting robot 100 .
  • The navigation of the walking-aid robot 100 can be started by the walking-aid robot 100 itself (for example, a control interface on the walking-aid robot 100) or by the control device 200 (such as a remote control, a smartphone, a tablet computer, a laptop computer, a desktop computer, or another electronic device), for example by providing a request for assistance for the mobile machine 100.
  • the walking aid robot 100 and the control device 200 may communicate via a network, which may include, for example, the Internet, an intranet, an extranet, a local area network (LAN), a wide area network (WAN), a wired network, a wireless network (such as a Wi-Fi network, Bluetooth network and mobile network) or other suitable networks, or any combination of two or more such networks.
  • FIG. 2A is a perspective view of a walking assistance robot 100 in some embodiments of the present application.
  • the walking aid robot 100 may be a mobile robot (such as a wheeled robot), which may include a walking frame F, a grip part G, wheels E, and a camera C, thus having a structure similar to a walking aid.
  • It should be noted that the walking assistance robot 100 is only one example of a walking assistance robot, and that the walking assistance robot 100 may have more or fewer components than those shown above or below (such as having legs instead of the wheels E), or may have a different configuration or arrangement of components (e.g., a single grip part, such as a grip bar).
  • The grip part G is installed on the upper edge of the walking frame F for the user U to grasp, and the wheels E are installed on the bottom of the walking frame F (such as the chassis) for moving the walking frame F, so that the user U can be supported by the walker robot 100 and can stand and move with the assistance of the walker robot 100.
  • the height of the walking frame F can be adjusted manually or automatically, for example, through the telescopic mechanism in the walking frame F such as a telescopic rod, so that the grip part G reaches a height that is convenient for the user U to grasp.
  • The grip part G may include a pair of handles G 1 arranged in parallel for the user U to grasp with both hands, and brake levers G 2 mounted on the handles G 1 for the user U to brake the walking-aid robot 100 with both hands, and may also include related components such as Bowden cables.
  • In the walking-aid robot 100, the camera C is set toward the forward direction D P in which the walking-aid robot 100 moves linearly, such that, for example, the lens of the camera C directly faces the forward direction D P , and the grip part G is set toward the backward direction D B that is substantially opposite to the forward direction D P , such that, for example, the handles G 1 extend toward the backward direction D B .
  • the walking assistant robot 100 may be another mobile machine such as a vehicle.
  • FIG. 2B is a schematic block diagram illustrating the walking assistance robot 100 of FIG. 2A .
  • the walking aid robot 100 may include a processing unit 110 , a storage unit 120 and a control unit 130 communicating via one or more communication buses or signal lines L.
  • It should be noted that the walker robot 100 is only one example of a walker robot, and that the walker robot 100 may have more or fewer components (such as units, subunits, and modules) than those shown above or below, may combine two or more components, or may have a different configuration or arrangement of components.
  • the processing unit 110 executes various (groups of) instructions stored in the storage unit 120.
  • Storage unit 120 may include one or more memories (such as high-speed random access memory (RAM) and non-transitory memory), one or more memory controllers, and one or more non-transitory computer-readable storage media (such as Solid State Drive (SSD) or Hard Disk).
  • the control unit 130 may include various controllers (such as a camera controller, a display controller, and a physical button controller) and peripheral interfaces for coupling input/output peripherals of the walking-assisted robot 100 to the processing unit 110 and the storage unit 120 , such as external ports (such as USB), wireless communication circuits (such as RF communication circuits), audio circuits (such as speaker circuits), and sensors (such as inertial measurement units (IMUs)).
  • the storage unit 120 may include a navigation module 121 for implementing navigation functions (such as map construction and trajectory planning) related to the navigation (and trajectory planning) of the walking-assisted robot 100, which may be stored in one or more memory (and one or more non-transitory computer-readable storage media).
  • The navigation module 121 in the storage unit 120 of the walking-aid robot 100 may be a software module (of the operating system of the walking-aid robot 100), which has instructions I n (for example, instructions used to drive the motor 1321 of the walking-aid robot 100 so as to move the walking-aid robot 100) for realizing the navigation of the walking-aid robot 100, as well as a map builder 1211 and a path planner 1212.
  • the map builder 1211 may be a software module having an instruction Ib for building a map for the walking-assisted robot 100
  • the path planner 1212 may be a software module having an instruction Ip for planning a trajectory for the walking-assisting robot 100.
  • The path planner 1212 may include a global path planner for planning global trajectories (for example, the trajectory T 1 and the trajectory T 2 ) for the walking-aid robot 100, and a local path planner for planning local trajectories for the walking-aid robot 100 (for example, the part of the trajectory T 2 including the poses S 2 -S 6 in FIG. 1A).
  • the global path planner may be a path planner based on the Dijkstra algorithm, which plans the global trajectory based on the map constructed by the map builder 1211 through simultaneous localization and mapping (SLAM) and other methods.
  • the local path planner may be a path planner based on a TEB (timed elastic band) algorithm, which plans a local trajectory based on the global trajectory Pg and other data collected by the walking assistant robot 100 .
  • images can be collected by the camera C of the walker robot 100, and the collected images can be analyzed to identify obstacles (for example: obstacle O in FIG. 4B ), so that local trajectories can be planned with reference to the identified obstacles, And the walker robot 100 can be moved according to the planned local trajectory to avoid obstacles.
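  • As an illustration of the kind of global planning mentioned above, the following is a minimal sketch (an assumption for illustration, not the patent's implementation) of Dijkstra-style planning over an occupancy grid built from a map; the grid, start, and goal values in the usage line are hypothetical:

```python
import heapq

def dijkstra_plan(grid, start, goal):
    """Plan a shortest sequence of grid cells from start to goal.

    grid: 2D list where 0 = free cell and 1 = occupied cell (obstacle).
    start, goal: (row, col) tuples.  Returns a list of cells or None.
    """
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, cell = heapq.heappop(heap)
        if cell == goal:
            path = [cell]                       # reconstruct by walking back
            while cell in prev:
                cell = prev[cell]
                path.append(cell)
            return path[::-1]
        if d > dist.get(cell, float("inf")):
            continue                            # stale heap entry
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1.0
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(heap, (nd, (nr, nc)))
    return None

# Hypothetical usage: a 3x3 grid with one obstacle in the middle cell.
grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(dijkstra_plan(grid, (0, 0), (2, 2)))
```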
  • the map builder 1211 and the path planner 1212 may be sub-modules separated from the instruction I n for realizing the navigation of the walking-assisted robot 100 or other sub-modules of the navigation module 121, or a part of the instruction I n .
  • Path planner 1212 may also have data related to trajectory planning of walking robot 100 (eg, input/output data and temporary data), which may be stored in one or more memories and accessed by processing unit 110 .
  • each path planner 1212 may be a separate module in the storage unit 120 from the navigation module 121 .
  • the instruction I n may include instructions for implementing collision avoidance of the walking assistant robot 100 (such as obstacle detection and path re-planning).
  • The global path planner can replan the global trajectory (i.e., plan a new global trajectory) in response to, for example, the original global trajectory being blocked (e.g., by one or more unexpected obstacles) or being insufficient to avoid collisions (e.g., when a detected obstacle cannot be avoided).
  • The navigation module 121 can be a navigation unit that communicates with the processing unit 110, the storage unit 120, and the control unit 130 through one or more communication buses or signal lines L, and can also include one or more memories (such as high-speed random access memory (RAM) and non-transitory memory) for storing the instructions I n , the map builder 1211, and the path planner 1212, and one or more processors (such as an MPU or MCU) for executing the stored instructions I n , I b , and I p to realize the navigation of the walking-aid robot 100.
  • the walking assistant robot 100 also includes a communication subunit 131 and an actuation subunit 132 .
  • The communication subunit 131 and the actuating subunit 132 communicate with the control unit 130 through one or more communication buses or signal lines, and these one or more communication buses or signal lines may be the same as, or at least partly different from, the above-mentioned one or more communication buses or signal lines L.
  • The communication subunit 131 is coupled to communication interfaces of the walking-aid robot 100, such as the network interface 1311 for the walking-aid robot 100 to communicate with the control device 200 through, for example, the network, and the I/O interface 1312 (such as a physical button).
  • The actuating subunit 132 is coupled to the components/devices that realize the movement of the walker robot 100, such as the motors 1321 that drive the wheels E and/or joints of the walker robot 100.
  • the communication subunit 131 may include a controller for the above-mentioned communication interface of the walking-aid robot 100
  • the actuation sub-unit 132 may include a controller for the above-mentioned components/devices for realizing the movement of the walking-aid robot 100 .
  • the communication subunit 131 and/or the actuation subunit 132 may be only abstract components, used to represent the logical relationship between the components of the walking assistant robot 100 .
  • the walking aid robot 100 can also include a sensor subunit 133, which can include a set of sensors and related controllers, such as a camera C and an IMU 1331 (or an accelerometer and a gyroscope), for detecting its environment to realize its navigation.
  • the sensor subunit 133 communicates with the control unit 130 through one or more communication buses or signal lines, and the one or more communication buses or signal lines may be the same as the above-mentioned one or more communication buses or signal lines L, or at least partially different.
  • the sensor subunit 133 can communicate with the navigation unit through one or more communication buses or signal lines, and the one or more communication buses or signal lines It may be the same as or at least partly different from one or more communication buses or signal lines L mentioned above.
  • the sensor subunit 133 may be only an abstract component, which is used to represent the logical relationship between the components of the walking assistant robot 100 .
  • map builder 1211, path planner 1212, sensor subunit 133, and motor 1321 together form a (navigation) system that implements map Construction, (global and local) path planning and motor drives to enable navigation of the walking aid robot 100.
  • various components shown in FIG. 2B may be realized in hardware, software, or a combination of both hardware and software. Two or more of the processing unit 110, storage unit 120, control unit 130, navigation module 121 and other units/subunits/modules may be implemented on a single chip or circuit. In other embodiments, at least some of them may be implemented on separate chips or circuits.
  • FIG. 3 is a schematic block diagram of an example of navigation performed by the walking assistance robot 100 in FIG. 2A .
  • the instruction (group) I n corresponding to the navigation method of the walking aid robot 100 is stored as the navigation module 121 in the storage unit 120 and the stored instruction I n is executed by the processing unit 110.
  • the navigation method is implemented in the walker robot 100, and then the walker robot 100 can be navigated to the user U.
  • The navigation method may be performed in response to a request for assistance for the mobile machine 100 from, for example, the walking-aid robot 100 itself or from (the navigation/operating system of) the control device 200, may simultaneously take obstacles (e.g., the obstacle O in FIG. 4B) into account, and may then be re-executed in response to, for example, the detection of an unexpected obstacle.
  • the navigation method may also be executed in response to the user U detected by the camera C of the walking aid robot 100 .
  • the processing unit 110 can recognize the posture P of the user U through the camera C of the walking aid robot 100 (block 310 in FIG. 3 ).
  • the posture P can include standing, sitting and lying down, and the camera C can be an RGB- D camera, which provides a continuous stream of images I (including color images and depth images).
  • The face of the user U can be identified in the image I by searching for and matching facial features against a data set stored in the storage unit 120, using a face matching method (or algorithm) based on a convolutional neural network such as MobileFaceNets, so as to identify the user U.
  • the keypoints Pk on the user U's body are identified, and the 3D (three-dimensional) positions of the keypoints Pk on the estimated skeleton B of the user U are provided based on eg a (well-trained) neural network.
  • the pose P of the user U can be distinguished by using a classifier to analyze and estimate the positional relationship of key points P k on the skeleton B, thereby identifying the pose P.
  • The facing direction D f of the user U can also be identified according to the identified key points P k . In addition, the user detection can be re-executed at predetermined time intervals (for example, 1 second) until the navigation according to the planned trajectory is completed (block 330 of FIG. 3 and block 530 of FIG. 5).
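  • As a rough illustration of posture classification from skeleton keypoints, the sketch below uses simple hand-written rules on keypoint heights instead of the trained classifier mentioned above; the keypoint names, thresholds, and the function itself are assumptions for illustration only:

```python
def classify_posture(keypoints):
    """Classify a posture from 3D body keypoints.

    keypoints: dict mapping names (e.g. 'head', 'hip', 'ankle') to
    (x, y, z) positions in meters, z being height above the floor.
    Returns 'standing', 'sitting' or 'lying'.
    """
    head_z = keypoints["head"][2]
    hip_z = keypoints["hip"][2]
    ankle_z = keypoints["ankle"][2]

    # Lying: the head is roughly at the same height as the ankles.
    if head_z - ankle_z < 0.35:
        return "lying"
    # Sitting: the hip sits low relative to a typical standing hip height.
    if hip_z - ankle_z < 0.55:
        return "sitting"
    return "standing"
```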
  • the processing unit 110 may further determine the mode M of the walking assistance robot 100 according to the task type K of the specified task performed on the user U and the recognized posture P of the user U (block 320 of FIG. 3 ).
  • The specified tasks performed for the user U can be divided into multiple task types K, and the mode M can be the manner (or strategy) in which the walking-aid robot 100 approaches the user U, such as moving to a position in front of or next to the user U. The mode M is determined according to the task type K (such as the assistance type) of the specified task to be performed for the user U (such as walking assistance) and the recognized posture P of the user U, so that the walking-aid robot 100 in the mode M approaches the user U in a way that suits the task type K and the recognized posture P.
  • the mode M may be other operation methods of the walking-assisting robot 100 in the navigation/operating system of the walking-assisting robot 100 (such as the method in which the walking-assisting robot 100 serves the user U).
  • FIG. 4A is a flowchart of an example of mode determination in the navigation example of FIG. 3 (block 320 of FIG. 3).
  • The task type K of the specified task is determined first (step 321). The task type K of the specified task may include an assistance type K a and a greeting type K g .
  • User assistance tasks such as walking assistance and walking training may belong to the assistance type K a
  • user interaction tasks such as greeting, chatting, emotion detection may belong to the greeting type K g .
  • the mode M of the walking assistant robot 100 may include:
  • Mode 1: move to the user U and face the user U;
  • Mode 2: move to the user U and turn;
  • Mode 3: move to the user U, turn, and approach the user U;
  • Mode 4: move to a position in the facing direction D f of the user U, turn, and approach the user U.
  • If the task type K of the specified task is the greeting type K g , the mode M of the walker robot 100 is determined to be mode 1; otherwise, step 322 is executed.
  • The determined mode M can be represented by a variable (for example, a constant) stored in the storage unit 120 of the walking assistance robot 100, which is set when the mode M is determined.
  • Alternatively, the mode M may be just an abstract concept that is not represented in the navigation/operating system of the walking-aid robot 100, but is simply the manner of approaching the user U selected by the steps of the above-described mode determination (for example, if step 321 determines that the task type K of the specified task is the assistance type K a , step 322 determines that the posture P of the user U is the sitting posture P 2 , and step 323 determines that there is no obstacle in the facing direction D f of the user U, the walking-aid robot 100 will be moved to a position in the facing direction D f of the user U, turned, and moved closer to the user U, just as it would be in mode 4). In mode 1, the walking-aid robot 100 will move to the user U and face the user U to perform a user interaction task of the greeting type K g , such as greeting, talking with, or emotion detection of the user U.
  • the posture P of the user U may be identified (step 322 ), as described above, the posture P of the user U may include standing posture P 1 , sitting posture P 2 and lying posture P 3 .
  • When the posture P of the user U is determined to be the lying posture P 3 , the mode M is determined to be mode 1; when the posture P of the user U is determined to be the standing posture P 1 , the mode M is determined to be mode 2.
  • In mode 2, the walking assistance robot 100 will move to the user U and turn so that the grip part G faces the user U, so as to be grasped by the standing user U. If it is determined that the posture P of the user U is the sitting posture P 2 , step 323 is executed.
  • In step 323, it is determined whether there is an obstacle in the facing direction D f of the user U; if there is an obstacle, the mode M is determined to be mode 3, and if there is no obstacle, the mode M is determined to be mode 4.
  • In mode 3, the walking-aid robot 100 will move to the user U and turn so that the grip part G faces the user U, and then move a distance in the backward direction D b to approach the user U, so as to be grasped by the seated user U.
  • In mode 4, the walking-aid robot 100 will move to a position in the facing direction D f of the user U and turn so that the grip part G faces the user U, and then move a certain distance in the backward direction D b to approach the user U, so as to be grasped by the seated user U.
  • the determination of the mode may be re-executed in response to, for example, a change in the task type K, a change in the posture P of the user U, and the appearance of an obstacle.
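  • The decision flow described above can be summarized in a short sketch; the function below is a plausible rendering of the mode-determination logic of FIG. 4A (the string constants and the lying-posture branch are assumptions based on the description, not a verbatim implementation):

```python
ASSISTANCE, GREETING = "assistance", "greeting"              # task types K
STANDING, SITTING, LYING = "standing", "sitting", "lying"    # postures P

def determine_mode(task_type, posture, obstacle_in_facing_direction):
    """Return mode 1-4 following the flow of FIG. 4A (illustrative only)."""
    if task_type == GREETING:
        return 1                  # move to the user U and face the user U
    if posture == STANDING:
        return 2                  # move to the user U and turn
    if posture == SITTING:
        # step 323: check for an obstacle in the user's facing direction Df
        return 3 if obstacle_in_facing_direction else 4
    return 1                      # e.g. lying posture: only face the user
```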
  • Fig. 4B is a schematic diagram illustrating different positions of a desired pose.
  • the walker robot 100 can move to the user U such that the desired pose S d (i.e., the last of a series of poses in the trajectory, For example, the first desired pose Sd 1 ) is located at a distance D 1 from the user U, on a circle C centered on the user U, where the radius of the circle C is the distance D 1 (for example, 0.4 meters), while avoiding the obstacle O .
  • Alternatively, the walking-aid robot 100 can move to the user U such that the desired pose S d (for example, the second desired pose S d2 ) is located at a position N in the facing direction D f of the user U at a distance D 2 (for example, 0.4 meters) from the user U, wherein each of the distance D 1 and the distance D 2 can be a predetermined distance determined according to actual needs such as personal preference.
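  • The two placements of the desired pose shown in FIG. 4B, at distance D 1 on a circle around the user or at distance D 2 along the user's facing direction, can be computed with elementary geometry; the sketch below is illustrative only and the choice of the point on the circle (the side from which the robot approaches) is an assumption:

```python
import math

def desired_pose_on_circle(user_xy, robot_xy, d1=0.4):
    """Desired position at distance d1 from the user, on the circle C,
    chosen here on the side from which the robot approaches."""
    ux, uy = user_xy
    rx, ry = robot_xy
    angle = math.atan2(ry - uy, rx - ux)     # direction user -> robot
    return (ux + d1 * math.cos(angle), uy + d1 * math.sin(angle))

def desired_pose_on_facing_direction(user_xy, facing_yaw, d2=0.4):
    """Desired position N at distance d2 along the user's facing direction Df."""
    ux, uy = user_xy
    return (ux + d2 * math.cos(facing_yaw), uy + d2 * math.sin(facing_yaw))
```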
  • the processing unit 110 may further control the walker robot 100 to move according to the planned trajectory T (see FIG. 6A ) corresponding to the determined mode M of the walker robot 100 (block 330 in FIG. 3 ).
  • Trajectory T includes a sequence of poses (eg, poses S 0 -S 6 of trajectory T 2 ), the last of which is a desired pose S d (eg, second desired pose S d2 of trajectory T2 ).
  • Each pose includes the position (eg, coordinates in the coordinate system) and attitude (eg, Euler angles in the coordinate system) of the walker robot 100 .
  • FIG. 5 is a schematic block diagram of another example of navigation performed by the walking aid robot 100 in FIG. 2A .
  • Fig. 6A is a schematic block diagram of an example of navigating according to a planned trajectory in the example of navigating by the walking-assisted robot in Fig. 5 .
  • the processing unit 110 may plan the trajectory T according to the determined mode M and the detected obstacle O (block 531 of FIG. 6A ).
  • the trajectory T can be planned by the above-mentioned global trajectory planner based on the map constructed by the map builder 1211 .
  • the trajectory T may be replanned when an obstacle O is detected (see block 532 of FIG. 6A ). Furthermore, if the planning of the trajectory T fails (eg, when the walking assistance robot 100 is blocked by an obstacle O), recovery may be performed (block 550 of FIG. 5 ).
  • Fig. 6B is a schematic block diagram of an example of trajectory planning in the example of navigation of Fig. 6A.
  • the processing unit 110 may determine the facing direction D f of the user U (block 5311 of FIG. 6A ). As described above, the facing direction D f may be determined in user detection (block 310 of FIG. 3 and block 510 of FIG. 5 ). The processing unit 110 may further determine whether there is an obstacle O located substantially in the facing direction D f of the user U (block 5312 in FIG. 6A ). Obstacles O in front of the user U will be detected and avoided.
  • In response to the posture P being the sitting posture P 2 and there being essentially no obstacle O in the facing direction D f of the user U (i.e., in mode 4), the processing unit 110 can further plan the trajectory T so that the desired pose S d is substantially located in the facing direction D f of the user U (see the lower part of FIG. 4B).
  • Otherwise, the processing unit 110 plans the trajectory T so that the desired pose S d is located at the distance D 1 from the user U, for example on the circle C in the upper part of FIG. 4B that is centered on the user U and has the radius D 1 (for example, 0.4 meters).
  • the processing unit 110 may further detect an obstacle O (block 532 of FIG. 6A ).
  • the detection of obstacles may be performed after, before or simultaneously with trajectory planning (block 531 of FIG. 6A ). Obstacles O can be detected by the sensor subunit 133 .
  • sensor data may be collected by the sensor subunit 133, and the collected data may be analyzed to identify an obstacle O that appears suddenly or is found suddenly when approaching.
  • Sensor data can be collected from different sensors such as RGB-D cameras, lidar, sonar, and infrared sensors, and fusion algorithms such as Kalman filters can be used to fuse the sensor data received from the sensors, thereby reducing Uncertainty due to noisy data, fusing data of different rates, and merging data of the same object.
  • For example, images can be collected by the camera C in the sensor subunit 133, while other data can be collected by other sensors (such as lidar, sonar, and infrared sensors) in the sensor subunit 133, and then the collected data can be fused and analyzed, thereby identifying the obstacle O.
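  • As an illustration of the kind of fusion mentioned above, the following is a minimal one-dimensional Kalman-filter sketch that fuses noisy range readings of the same obstacle from two sensors; the class name, noise values, and readings are made up for illustration and this is not the fusion pipeline actually used by the robot:

```python
class Kalman1D:
    """Minimal 1D Kalman filter for fusing noisy range measurements."""

    def __init__(self, x0, p0=1.0, process_var=0.01):
        self.x = x0              # estimated range to the obstacle (m)
        self.p = p0              # estimate variance
        self.q = process_var     # process noise added between updates

    def update(self, z, r):
        """Fuse one measurement z having variance r into the estimate."""
        self.p += self.q                  # predict (the obstacle may move)
        k = self.p / (self.p + r)         # Kalman gain
        self.x += k * (z - self.x)        # correct the estimate
        self.p *= (1.0 - k)
        return self.x

# Hypothetical usage: fuse a lidar range with a noisier camera-depth range.
kf = Kalman1D(x0=2.0)
kf.update(z=2.10, r=0.05)    # lidar reading, low variance
kf.update(z=1.85, r=0.20)    # camera depth reading, higher variance
print(round(kf.x, 2))
```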
  • The processing unit 110 can further plan a local trajectory (for example, the part of trajectory T 2 including the poses S 2 -S 6 in FIG. 1A) based on the planned global trajectory, the sensor data collected through the sensor subunit 133 of the walking-aid robot 100, and the current pose of the walking-aid robot 100 (i.e., the pose in the trajectory T at which the walking-aid robot 100 is currently located). Images may be captured by the camera C, and the captured images may be analyzed to identify obstacles (e.g., the obstacle O), so that the local trajectory may be planned with reference to the identified obstacles, and the walker robot 100 may be moved according to the planned local trajectory to avoid the obstacles.
  • the local trajectory may be planned by the trajectory planner described above in such a way that the local trajectory is generated based on the planned global trajectory, while taking into account identified obstacles (eg, avoiding identified obstacles). Furthermore, obstacle detection may be re-performed at predetermined time intervals (eg, 1 second) until navigation according to the planned trajectory T is completed (block 530 of FIG. 5 ).
  • the processing unit 110 may also control the walking assistant robot 100 to move according to the planned trajectory T (block 533 in FIG. 6A ).
  • The above-mentioned planned local trajectory includes multiple poses of the walking-aid robot 100, and the motor 1321 of the walking-aid robot 100 can be driven according to the poses in the local trajectory, so that the walking-aid robot 100 moves according to the local trajectory, thereby realizing navigation of the walking-aid robot 100 according to the planned trajectory T.
  • the tracking controller can also be triggered to ensure that the walking robot 100 moves accurately along the planned trajectory T, and the tracking controller can generate a velocity for driving the motor 1321 .
  • In addition, during obstacle detection (block 532 of FIG. 6A), if an obstacle is detected and there is no time to replan the trajectory to detour, the speed of the walking assistant robot 100 may be reduced, or the robot may even be stopped completely. This can be achieved by a collision avoidance module, which slows down the walking robot 100 so as to achieve a fast response without complex calculations.
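  • A very simplified sketch of a tracking controller of the kind mentioned above is given below: it computes linear and angular velocity commands toward the next pose of the local trajectory and scales them down when an obstacle is close; the gains and distance thresholds are arbitrary, illustrative values rather than the robot's actual parameters:

```python
import math

def track_pose(current, target, obstacle_dist,
               k_lin=0.8, k_ang=1.5, slow_dist=0.8, stop_dist=0.3):
    """Return (v, w) velocity commands toward the target pose.

    current, target: (x, y, yaw) tuples; obstacle_dist: distance in meters
    to the nearest detected obstacle along the path.
    """
    dx, dy = target[0] - current[0], target[1] - current[1]
    heading = math.atan2(dy, dx)
    # Wrap the heading error into (-pi, pi] before applying the gain.
    heading_err = math.atan2(math.sin(heading - current[2]),
                             math.cos(heading - current[2]))
    v = k_lin * math.hypot(dx, dy)
    w = k_ang * heading_err

    # Collision-avoidance slowdown: reduce speed near obstacles, stop if too close.
    if obstacle_dist < stop_dist:
        return 0.0, 0.0
    if obstacle_dist < slow_dist:
        v *= (obstacle_dist - stop_dist) / (slow_dist - stop_dist)
    return v, w
```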
  • The processing unit 110 may further move the walker robot 100 such that the grip part G faces the user U when the desired pose S d is reached (block 540 of FIG. 5).
  • For example, the processing unit 110 moves (for example, turns) the walking-aid robot 100 when the desired pose S d is reached, such that the grip part G faces the user U when the walking-aid robot 100 is in mode 2, mode 3, or mode 4.
  • FIG. 6C is a schematic block diagram of an example of turning and approaching in the example of navigation performed by the walking-aid robot of FIG. 5.
  • the processing unit 110 may turn the walker robot 100 upon reaching the desired pose Sd so that the grip G faces the user U (block 541 of FIG. 6C ).
  • The walker robot 100 can rotate in place (that is, turn around an axis passing through the walker robot 100), for example by turning the two front wheels E (that is, the two wheels E in the forward direction D p ) while braking the two rear wheels E (that is, the two wheels E in the backward direction D b ).
  • The processing unit 110 moves (for example, turns) the walking-aid robot 100 when the desired pose S d is reached, and makes the grip part G face the user U when the walking-aid robot 100 is in mode 2, mode 3, or mode 4, thereby approaching the user U so that the standing or seated user U can reach the grip part G.
  • As described above, the camera C is disposed toward the forward direction D P in which the walking assistance robot 100 moves linearly, and
  • the grip part G is disposed toward the backward direction DB substantially opposite to the forward direction DP .
  • the walking assistant robot 100 can rotate 180 degrees when reaching the desired pose Sd , so that the grip part G faces the user U.
  • the processing unit 110 may further move the walking-assisting robot 100 toward the rear direction Db by a distance D3 (see FIG. 1C ) to approach the user U.
  • distance D3 may be a predetermined distance (eg, 0.3 meters).
  • The processing unit 110 moves the walking-aid robot 100 in the backward direction D b by the distance D 3 to get closer to the user U and make it easier for the seated user U to reach the grip part G, because a seated user U cannot reach the grip part G from as far away as a standing user U can.
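  • The turn-and-approach behaviour of modes 2 to 4 can be summarized in a short sketch; the motion primitives `rotate_in_place` and `move_backward` are hypothetical helpers standing in for the motor commands issued through the actuation subunit 132:

```python
import math

def turn_and_approach(robot, mode, d3=0.3):
    """Turn so the grip part faces the user; back up toward a seated user.

    robot: object assumed to expose rotate_in_place(angle_rad) and
    move_backward(distance_m); mode: value determined as in FIG. 4A.
    """
    if mode in (2, 3, 4):
        # The camera faces forward and the grip faces backward, so a half
        # turn presents the grip part G to the user.
        robot.rotate_in_place(math.pi)
    if mode in (3, 4):
        # Seated users cannot reach as far, so close the remaining gap.
        robot.move_backward(d3)
```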
  • The processing unit 110 may perform recovery (block 550 of FIG. 5) in response to a failure of the user detection during navigation (block 310 of FIG. 3 and block 510 of FIG. 5) or of the navigation according to the planned trajectory T (block 530 of FIG. 5) (for example, a failure of the trajectory planning, block 531 of FIG. 6A).
  • In the recovery, the speaker of the walking-aid robot 100 can be triggered to remind the user U to change his/her position so as to be more easily detected or reached by the planned trajectory T, before user detection or navigation is re-attempted.
  • FIG. 7 is a schematic block diagram of an example of a state machine 200 of the walking assistance robot of FIG. 2A .
  • The state machine 200 is an abstract machine that can be in exactly one of a number of states of the walking robot 100 at any given time, and it is stored in the storage unit 120 of the walking robot 100.
  • The state machine 200 may have a user detection state A 1 in which user detection is performed (block 310 of FIG. 3 and block 510 of FIG. 5); a mode determination state A 2 in which mode determination is performed (block 320 of FIG. 3 and block 520 of FIG. 5); a trajectory planning state A 3 in which trajectory planning is performed (block 531 of FIG. 6A); a movement control state A 4 in which movement control is performed (block 533 of FIG. 6A); an obstacle detection state A 5 in which obstacle detection is performed (block 532 of FIG. 6A); and a recovery state A 6 in which recovery is performed (block 550 of FIG. 5).
  • The state machine 200 changes from one state to another in response to certain conditions. It may change from the user detection state A 1 to the mode determination state A 2 when the user U (the posture P) is detected, and change to the recovery state A 6 when the user U (the posture P) is not detected.
  • Upon determining a changed mode M of the walking-aid robot 100, it can change from the mode determination state A 2 to the trajectory planning state A 3 ; upon determining an unchanged mode M (i.e., the determined mode M has not changed since the last determination), it changes to the movement control state A 4 .
  • Upon planning the trajectory T, it may change from the trajectory planning state A 3 to the movement control state A 4 , and it changes to the recovery state A 6 when planning of the trajectory T fails.
  • For mode 1, the corresponding state machine may have a movement control state A 4 in which the movement control (block 533 of FIG. 6A) and the movement (such as turning) toward the user U are performed.
  • For mode 2, the corresponding state machine may have a movement control state A 4 in which the movement control and the turning (block 541 of FIG. 6C) of the turning and approaching (block 340 of FIG. 3 and block 540 of FIG. 5) are performed.
  • For mode 3, the corresponding state machine may have a movement control state A 4 in which the above-mentioned movement control, and the above-mentioned turning together with the backward movement (block 542 of FIG. 6C) of the turning and approaching, are performed. For mode 4, the corresponding state machine may have a trajectory planning state A 3 in which trajectory planning (block 531 of FIG. 6A) with facing-direction determination (block 5311 of FIG. 6B) is performed, as well as the above-mentioned movement control and the above-mentioned turning together with the backward movement of the turning and approaching.
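  • A compact way to express the state machine of FIG. 7 is a transition table keyed by (state, event); the sketch below is a simplified, illustrative rendering restricted to the transitions described above plus an assumed retry transition out of recovery, with hypothetical event names, and is not the actual control software:

```python
# States A1-A6 of the state machine (FIG. 7).
USER_DETECTION, MODE_DETERMINATION, TRAJECTORY_PLANNING = "A1", "A2", "A3"
MOVEMENT_CONTROL, OBSTACLE_DETECTION, RECOVERY = "A4", "A5", "A6"

TRANSITIONS = {
    (USER_DETECTION, "user_detected"): MODE_DETERMINATION,
    (USER_DETECTION, "user_not_detected"): RECOVERY,
    (MODE_DETERMINATION, "mode_changed"): TRAJECTORY_PLANNING,
    (MODE_DETERMINATION, "mode_unchanged"): MOVEMENT_CONTROL,
    (TRAJECTORY_PLANNING, "trajectory_planned"): MOVEMENT_CONTROL,
    (TRAJECTORY_PLANNING, "planning_failed"): RECOVERY,
    # Assumed: after recovery (e.g. reminding the user), detection retries.
    (RECOVERY, "retry"): USER_DETECTION,
}

def next_state(state, event):
    """Return the next state, staying in place if the event is not handled."""
    return TRANSITIONS.get((state, event), state)

# Hypothetical usage.
state = USER_DETECTION
state = next_state(state, "user_detected")      # -> A2
state = next_state(state, "mode_changed")       # -> A3
print(state)
```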
  • This navigation method navigates a walking-aid robot equipped with a forward-facing camera and approaches the user in a manner that adapts to the user's posture, so that users in different postures can reach the robot (its grip part); it thus helps the user more effectively and safely and moves more smoothly, thereby improving the effectiveness, safety, and user experience of the robot.
  • The navigation method is easy to implement on various platforms and robots. Applications of the navigation method may include approaching a user for walking assistance, greeting, talking, consulting, guiding, screening, body temperature monitoring, and the like.
  • Nonvolatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, solid state drive (SSD), and the like.
  • Volatile memory can include random access memory (RAM), external cache memory, and the like.
  • The processing unit 110 may include a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
  • a general-purpose processor can be a microprocessor, or any conventional processor.
  • the storage unit 120 (and the aforementioned memory) may include an internal storage unit such as a hard disk and an internal memory.
  • the storage unit 120 may also include external storage devices such as a plug-in hard disk, a smart media card (SMC for short), a secure digital (SD for short) card, and a flash memory card.
  • the exemplary units/modules and methods/steps described in the embodiments can be implemented by software, hardware or a combination of software and hardware. Whether these functions are implemented by software or hardware depends on the specific application and design constraints of the technical solution.
  • the above-mentioned navigation method and walking assistant robot can be realized in other ways.
  • the division of units/modules is only a logical division of functions, and other divisions can also be used in actual implementation, that is, multiple units/modules can be combined or integrated into another system, or some features can be Ignored or not executed.
  • the above-mentioned mutual coupling/connection may be direct coupling/connection or communication connection, may also be indirect coupling/connection or communication connection through some interfaces/devices, and may also be in electrical, mechanical or other forms.

Abstract

A navigation method for a walking-aid robot and a walking-aid robot (100) using the method, wherein the robot (100) has a camera (C) and a grip part (G) set toward different directions. The walking-aid robot (100) is navigated to approach a user by: recognizing the posture of the user through the camera (C); determining a mode of the robot (100) according to the type of a specified task to be performed for the user and the recognized posture of the user; controlling the robot (100) to move according to a planned trajectory corresponding to the determined mode of the robot (100); and, in response to the determined mode of the robot (100) corresponding to the specified task of an assistance type and the user being in one of a standing posture and a sitting posture, turning the robot (100) upon reaching a desired pose so that the grip part (G) faces the user.

Description

Walking-aid robot navigation method, walking-aid robot, and computer-readable storage medium
Technical Field
The present application relates to navigation technology, and in particular to a navigation method for a walking-aid robot and a walking-aid robot using the method.
Background
Mobility aids are devices designed to assist walking or otherwise improve the mobility of people with reduced mobility. Common mobility aids, such as canes, wheelchairs, and walkers, assist users with impaired walking ability by providing support and mobility.
With the popularization of mobile robots (such as sweeping robots and self-driving cars) that are realized with the help of artificial intelligence (AI) technology and provide various services such as housework, transportation, and health care in all kinds of scenarios of daily life, walker robots that have the functions of a walker and further have automatic navigation capabilities have been developed, so as to assist users in a more automated and convenient way.
The research on and functions of existing walking-aid robots mainly focus on their navigation functions, including trajectory planning and motion control, to solve the parking problem at the end of navigation, while ignoring body detection and its integration with the navigation functions; they therefore cannot provide navigation that adapts to the user's different postures.
Summary
The present application provides a navigation method for a walking-aid robot, and a walking-aid robot using the method that has a camera and a grip part set toward different directions, for navigating the walking-aid robot to a user so as to perform a specified task for the user, thereby solving the aforementioned problems of the navigation function of prior-art walking-aid robots.
Embodiments of the present application provide a navigation method for navigating a walking-aid robot to a user so as to perform a specified task for the user, wherein the robot has one camera and at least one grip part set toward different directions, the method including:
recognizing a posture of the user through the camera;
determining a mode of the robot according to a type of the specified task to be performed for the user and the recognized posture of the user;
controlling the robot to move according to a planned trajectory corresponding to the determined mode of the robot, wherein the trajectory includes a sequence of poses, the last pose in the sequence of poses being a desired pose; and
in response to the determined mode of the robot corresponding to the specified task of an assistance type and the user being in one of a standing posture and a sitting posture, turning the robot upon reaching the desired pose so that the grip part faces the user.
Embodiments of the present application also provide a walking-aid robot, including:
a camera set toward a forward direction of the robot;
at least one grip part set toward a direction different from that of the camera;
one or more processors; and
one or more memories storing one or more computer programs to be executed by the one or more processors, wherein the one or more computer programs include instructions for:
recognizing a posture of a user through the camera;
determining a mode of the robot according to a type of a specified task to be performed for the user and the recognized posture of the user;
controlling the robot to move according to a planned trajectory corresponding to the determined mode of the robot, wherein the trajectory includes a sequence of poses, the last pose in the sequence of poses being a desired pose; and
in response to the determined mode of the robot corresponding to the specified task of an assistance type and the user being in one of a standing posture and a sitting posture, turning the robot upon reaching the desired pose so that the grip part faces the user.
Embodiments of the present application also provide a computer-readable storage medium storing one or more computer programs, wherein the one or more computer programs include instructions that, when executed by a walking-aid robot having one camera and at least one grip part set toward different directions, cause the robot to:
recognize a posture of a user through the camera;
determine a mode of the robot according to a type of a specified task to be performed for the user and the recognized posture of the user;
control the robot to move according to a planned trajectory corresponding to the determined mode of the robot, wherein the trajectory includes a sequence of poses, the last pose in the sequence of poses being a desired pose; and
in response to the determined mode of the robot corresponding to the specified task of an assistance type and the user being in one of a standing posture and a sitting posture, turn the robot upon reaching the desired pose so that the grip part faces the user.
As can be seen from the above embodiments, the walking-aid robot navigation method provided by the present application, and the walking-aid robot using the method, navigate the walking-aid robot to the user so as to perform the specified task for the user by combining the recognized posture of the user with the mode of the walking-aid robot, thereby solving the problem that the navigation function of prior-art walking-aid robots ignores body (posture) detection.
Brief Description of the Drawings
In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings used in the description of the embodiments or of the prior art are briefly introduced below. In the following drawings, the same reference numerals denote corresponding parts throughout. It should be understood that the drawings in the following description are merely examples of the present application. For those skilled in the art, other drawings can be obtained based on these drawings without creative work.
FIG. 1A is a schematic diagram of a navigation scenario of a walking-aid robot in some embodiments of the present application.
FIG. 1B is a schematic diagram of navigating the walking-aid robot to a first desired pose in the scenario of FIG. 1A.
FIG. 1C is a schematic diagram of navigating the walking-aid robot to a second desired pose in the scenario of FIG. 1A.
FIG. 2A is a perspective view of a walking-aid robot in some embodiments of the present application.
FIG. 2B is a schematic block diagram illustrating the walking-aid robot of FIG. 2A.
FIG. 3 is a schematic block diagram of an example of navigation performed by the walking-aid robot of FIG. 2A.
FIG. 4A is a flowchart of an example of mode determination in the navigation example of FIG. 3.
FIG. 4B is a schematic diagram illustrating different positions of a desired pose.
FIG. 5 is a schematic block diagram of another example of navigation performed by the walking-aid robot of FIG. 2A.
FIG. 6A is a schematic block diagram of an example of navigating according to a planned trajectory in the navigation example of the walking-aid robot of FIG. 5.
FIG. 6B is a schematic block diagram of an example of trajectory planning in the navigation example of FIG. 6A.
FIG. 6C is a schematic block diagram of an example of turning and approaching in the navigation example of the walking-aid robot of FIG. 5.
FIG. 7 is a schematic block diagram of an example of a state machine of the walking-aid robot of FIG. 2A.
具体实施方式
为使本申请的目的、特征和优点更加明显易懂,下面将结合附图对本申请的实施例中的技术方案进行清楚、完整的描述。显然,所描述的实施例仅仅是本申请的一部分实施例,而非全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动的前提下所获得的所有其他实施例,都在本申请所保护的范围内。
应当理解的是,当在本申请的说明书和所附的权利要求中使用时,术语“包括”、“包含”、“具有”及其变体表示所述特征、整体、步骤、操作、元素和/或组件的存在,但不排除可以存在或添加一或多个其他特征、整体、步骤、操作、元素、组件和/或其集合。
还应当理解的是,在本申请的说明书中所使用的术语仅仅是出于描述特定实施例的目的,而非限制本申请的范围。如同在本申请的说明书和所附的权利要求中所使用的那样,除非上下文清楚地指明了其他情况,否则单数形式的 “一”、“一个”以及“该”意在包括复数形式。
还应当进一步理解,在本申请说明书和所附的权利要求中所使用的术语“和/或”是指相关列出的项中的一或多个的任何组合以及所有可能组合,并且包括这些组合。
本申请中的术语“第一”、“第二”、“第三”仅仅是出于描述的目的,而不能理解为指示或暗示了相对重要性、或是暗示了所指的技术特征的数量。由此,由“第一”、“第二”、“第三”所限定的特征可以显式或隐式地包括其中的至少一个技术特征。在本申请的描述中,“多个”的含义是至少两个,例如两个、三个等,除非另有明确的定义。
在本申请的说明书中叙述的“一个实施例”或“一些实施例”等意味着在本申请的一或多个实施例中可以包括与该实施例的描述内容相关的特定特征、结构或特点。由此,在说明书的不同地方出现的语句“在一个实施例中”、“在一些实施例中”、“在其他一些实施例中”、“在另外一些实施例中”等不是意味着所描述的实施例应该被所有其他实施例所引用,而是被“一或多个但不是所有其他实施例所引用”,除非另有特别强调。
本申请涉及了助行机器人的导航。这里所使用的术语“助行机器人”是指具有帮助其用户行走的能力的移动机器人,其可具有例如助行器或轮椅的结构。术语“轨迹规划”是指寻找一系列将移动机器从来源移动到目的地的、以时间为参数的有效配置,其中“轨迹”表示带有时间戳的位姿序列(作为参考,“路径”表示的是不带时间戳的一系列位姿或位置),术语“位姿”是指位置(例如x和y轴上的x和y坐标)和姿态(例如沿z轴的偏航角)。术语“导航”是指监视和控制移动机器从一个地方到另一个地方的运动的过程,术语“防撞”是指防止碰撞或降低碰撞的严重程度。术语“传感器”是指目的在于检测其环境中的事件或变化并将相关信息发送到其他电子设备(例如处理器)的设备、模块、机器或子系统(例如相机)。
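为便于理解上文所定义的“位姿”(位置与偏航角)和“轨迹”(带时间戳的位姿序列)等术语,下面给出一个极简的Python数据结构草图。该草图仅为说明性的假设示例,并非本申请实施例的实际实现,其中的类名与字段名均为示意。

    # 假设性示例:用简单的数据类表示“位姿”与“轨迹”
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Pose:
        x: float    # x坐标(米)
        y: float    # y坐标(米)
        yaw: float  # 沿z轴的偏航角(弧度)

    @dataclass
    class TimedPose:
        pose: Pose
        t: float    # 时间戳(秒)

    @dataclass
    class Trajectory:
        poses: List[TimedPose]  # 位姿序列,最后一个位姿即期望位姿

        def desired_pose(self) -> Pose:
            return self.poses[-1].pose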
图1A是本申请的一些实施例中助行机器人100的导航场景的示意图。助行机器人100在其环境(例如客厅)中被导航而执行诸如助行(即行走辅助)、停车接送和停车以与用户U交流的辅助任务,同时可以防止危险情况,例如碰撞和不安全状态(例如坠落、极端温度、辐射和暴露)。在该室内导航中,助行机器人100从起点(例如助行机器人100最初所在的位置)导航到目的地(例如由用户U或助行机器人100的导航/操作系统指示的导航目标的位置),同时可以避开障碍物(例如墙壁、家具、人、宠物和垃圾)以防止上述危险情况。需要规划助行机器人100从起点移动到目的地(例如目的地1和目的地2)的轨迹(例如轨迹T 1和轨迹T 2),以根据轨迹移动助行机器人100。每个轨迹包括一系列位姿(例如轨迹T 2的位姿S 0-S 6)。需要说明的是,起点和终点仅代表如图所示的场景中助行机器人100的位置,而不是轨迹的真正开始和结束(轨迹的真正开始和结束应该分别是一个位姿,例如图1A中的初始位姿S i1、S i2和期望位姿S d1、S d2)。在一些实施例中,为了实现助行机器人100的导航,需要构建环境地图,确定助行机器人100在环境中的位置,并基于所构建的地图和助行机器人100的确定位置来规划轨迹。
第一初始位姿S i1是轨迹T 1的起点,第二初始位姿S i2是轨迹T 2的起点。图1B是在图1A场景中将助行机器人100导航至第一期望位姿S d1的示意图,图1C是在图1A场景中将助行机器人100导航至第二期望位姿S d2的示意图。第一个期望姿态S d1是轨迹T 1中位姿序列S的最后一个,也就是轨迹T 1的末端,第二个期望姿态S d2是轨迹T 2中姿态序列S的最后一个,也就是轨迹T 2的末端。例如可以根据所构建的地图中到用户U的最短路径来规划轨迹T 1,而轨迹T 2可以是对应于例如用户U的面向方向D f而规划的。此外,在规划时可以考虑避免与所构建的地图中的障碍物(例如墙壁和家具)或实时检测到的障碍物(例如人类和宠物)的碰撞,以便更准确且安全地导航助行机器人100。
在一些实施例中,助行机器人100的导航可以由助行机器人100本身(例 如助行机器人100上的控制界面)或控制装置200(例如遥控器、智能手机、平板电脑、笔记本电脑、台式电脑或其他电子设备)通过例如提供对移动机器100的协助的请求来启动。助行机器人100和控制装置200可以通过网络进行通信,该网络可以包括例如因特网、内联网、外联网、局域网(LAN)、广域网(WAN)、有线网络、无线网络(例如、Wi-Fi网络、蓝牙网络和移动网络)或其他合适的网络,或两或多个此类网络的任意组合。
图2A是本申请的一些实施例中的助行机器人100的透视图。在一些实施例中,助行机器人100可以是移动机器人(例如轮式机器人),其可以包括行走架F、抓握部G、轮子E和相机C,因此具有类似助行器的结构。需要注意的是,助行机器人100只是助行机器人的一个例子,而且助行机器人100可以具有比上面或下面所示的更多或更少的部件(例如具有腿而不是轮子E),或者可以具有不同的部件配置或布置(例如具有单个抓握部,比如抓握杆)。抓握部G安装在行走架F的上缘,供使用者U抓握,而轮子E安装在行走架F的底部(例如底盘),用于移动行走架F,因此使用者U可以被助行器机器人100支撑,在助行器机器人100的协助下站立和移动。行走架F的高度可以手动或自动调整,例如通过以伸缩杆为例的行走架F中的伸缩机构,使得抓握部G达到便于使用者U抓握的高度。抓握部G可以包括平行设置、以供用户U通过两只手抓握的一对把手G 1,以及安装在把手G 1上的刹车杆G 2、以供用户U通过两只手制动助行机器人100,并且还可以包括诸如鲍登电缆的相关部件。在助行机器人100中,相机C朝向助行机器人100直线移动时的前进方向D P设置,使得例如相机C的镜头直接朝向前进方向D P;抓握部G朝向基本上与前进方向D P相反的后退方向D B设置,使得例如把手G 1朝后退方向D B延伸。在其他实施例中,助行机器人100可以是诸如车辆的另一种移动机器。
图2B是说明图2A的助行机器人100的示意框图。助行机器人100可以包括通过一条或多条通信总线或信号线L进行通信的处理单元110、存储单元120 和控制单元130。应该注意的是,助行机器人100只是助行机器人的一个例子,而且助行机器人100可以具有比上面或下面所示更多或更少的组件(例如单元、子单元和模块),可以组合两个或更多个组件,或者具有不同的组件配置或布置。处理单元110执行存储在存储单元120中的各种(各组)指令,这些指令可以是以软件程序的形式执行助行机器100的各种功能、并且处理相关数据,其可以包括一或多个处理器(例如中央处理器)。存储单元120可以包括一或多个存储器(例如高速随机存取存储器(RAM)和非暂时性存储器)、一或多个存储器控制器、以及一或多个非暂时性计算机可读存储介质(例如固态驱动器(SSD)或硬盘)。控制单元130可以包括各种控制器(例如相机控制器、显示器控制器和物理按钮控制器)和用于耦合助行机器人100的输入/输出外围设备到处理单元110和存储单元120的外围设备接口,例如外部端口(如USB)、无线通信电路(如RF通信电路)、音频电路(如扬声器电路)和传感器(如惯性测量单元(IMU))。在一些实施例中,存储单元120可以包括用于实现与助行机器人100的导航(和轨迹规划)相关的导航功能(例如地图构建和轨迹规划)的导航模块121,可以存储在一或多个存储器(以及一或多个非暂时性计算机可读存储介质)中。
助行机器人100的存储单元120中的导航模块121可以是(助行机器人100的操作系统的)软件模块,其具有指令I n(例如用来驱动助行机器人100的电机1321以移动助行机器人100的指令),以实现助行机器人100的导航、地图构建器1211和路径规划器1212。地图构建器1211可以是具有用于为助行机器人100构建地图的指令I b的软件模块,路径规划器1212可以是具有用于为助行机器人100规划轨迹的指令I p的软件模块。路径规划器1212可以包括用于为助行机器人100规划全局轨迹(例如:轨迹T 1和轨迹T 2)的全局路径规划器和用于规划助行机器人100的局部轨迹(例如:包括图1A中的姿态S 2-S 6的轨迹T 2的一部分)的局部路径规划器。全局路径规划器可以是基于Dijkstra算法的 路径规划器,其基于由地图构建器1211通过同时定位与地图构建(simultaneous localization and mapping,SLAM)等方式所构建的地图来规划全局轨迹。局部路径规划器可以是基于TEB(timed elastic band)算法的路径规划器,其基于全局轨迹Pg和助行机器人100收集的其他数据来规划局部轨迹。例如可以通过助行机器人100的相机C来采集图像,并分析所采集的图像以识别障碍物(例如:图4B中的障碍物O),从而可以参考所识别出的障碍物来规划局部轨迹,并且可以根据所规划的局部轨迹移动助行机器人100、以避开障碍物。
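作为对上述全局轨迹规划思路(例如基于Dijkstra算法、在地图构建器1211所构建的地图上搜索路径)的一个示意,下面给出在二维占据栅格上运行Dijkstra搜索的简化Python草图。该草图仅为假设性示例,未涉及TEB局部规划与时间参数化等细节,函数名与数据格式均为示意。

    # 假设性示例:在二维占据栅格上用Dijkstra搜索一条从起点到终点的路径
    import heapq

    def dijkstra(grid, start, goal):
        # grid[r][c] == 1 表示障碍物;start、goal为(行, 列)坐标
        rows, cols = len(grid), len(grid[0])
        dist = {start: 0.0}
        prev = {}
        heap = [(0.0, start)]
        while heap:
            d, cur = heapq.heappop(heap)
            if cur == goal:
                break
            if d > dist.get(cur, float("inf")):
                continue
            r, c = cur
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                    nd = d + 1.0
                    if nd < dist.get((nr, nc), float("inf")):
                        dist[(nr, nc)] = nd
                        prev[(nr, nc)] = cur
                        heapq.heappush(heap, (nd, (nr, nc)))
        if goal != start and goal not in prev:
            return None  # 规划失败(例如被障碍物完全阻挡),可触发恢复
        path, node = [goal], goal
        while node != start:
            node = prev[node]
            path.append(node)
        return path[::-1]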
地图构建器1211和路径规划器1212可以是与用于实现助行机器人100的导航的指令I n分离的子模块或是导航模块121的其他子模块,或是指令I n的一部分。路径规划器1212还可具有与助行机器人100的轨迹规划相关的数据(例如输入/输出数据和临时数据),其可存储在一或多个存储器中并由处理单元110访问。在一些实施例中,每个路径规划器1212可以是存储单元120中与导航模块121分离的模块。
在一些实施例中,指令I n可以包括用于实现助行机器人100的碰撞避免的指令(例如障碍物检测和路径重新规划)。此外,全局路径规划器可以重新规划全局轨迹(即规划新的全局路径),以响应例如原始全局轨迹被阻挡(例如被一或多个意外障碍物阻挡)或不足以避免碰撞(例如在采用时无法避开所检测到的障碍物)。在其他实施例中,导航模块121可以是通过一条或多条通信总线或信号线L与处理单元110、存储单元120和控制单元130进行通信的导航单元,还可以包括一或多个存储器(例如高速随机存取存储器(RAM)和非暂时性存储器),用于存储指令I n、地图构建器1211和路径规划器1212,以及用于执行所存储的指令I n、I b和I p的一或多个处理器(例如MPU和MCU),以实现助行机器人100的导航。
助行机器人100还包括通信子单元131和致动子单元132。通信子单元131和致动子单元132通过一条或多条相同的通信总线或信号线与控制单元130进 行通信,该一条或多条通信总线或信号线可以与上述一条或多条通信总线或信号线L相同、或至少部分不同。通讯子单元131耦接助行机器人100的通讯接口(例如网络接口1311),以供助行机器人100通过诸如网络和I/O接口1312(例如物理按钮)与控制设备200进行通信。致动子单元132耦合到用于实现助行器机器人100的移动的组件/设备,以驱动助行器机器人100的车轮E和/或关节的马达1321。通信子单元131可以包括用于助行机器人100的上述通信接口的控制器,致动子单元132可以包括用于实现助行器机器人100的移动的上述组件/设备的控制器。在其他实施例中,通信子单元131和/或致动子单元132可以只是抽象组件,用于表示助行机器人100的组件之间的逻辑关系。
助行机器人100还可以包括传感器子单元133,其可以包括一组传感器和相关控制器,例如相机C和IMU 1331(或加速度计和陀螺仪),用于检测其所处的环境、以实现其导航。传感器子单元133通过一条或多条通信总线或信号线与控制单元130进行通信,该一条或多条通信总线或信号线可以与上述的一条或多条通信总线或信号线L相同、或至少部分不同。在其他实施例中,在导航模块121是上述的导航单元的情况下,传感器子单元133可以通过一或多条通信总线或信号线与导航单元进行通信,该一或多条通信总线或信号线可以与上述的一或多条通信总线或信号线L相同、或至少部分不同。另外,传感器子单元133可以只是抽象组件,用于表示助行机器人100的组件之间的逻辑关系。
在一些实施例中,地图构建器1211、路径规划器1212、传感器子单元133和马达1321(以及与助行机器人100耦合的轮E和/或关节)共同组成一个(导航)系统,其实现地图构建、(全局和局部)路径规划和电机驱动,以便实现助行机器人100的导航。此外,图2B中所示的各种组件可以以硬件、软件或硬件和软件两者的组合来实现。处理单元110、存储单元120、控制单元130、导航模块121和其他单元/子单元/模块中的两或多个可以在单个芯片或电路上实 现。在其他实施例中,它们的至少一部分可以在单独的芯片或电路上实现。
图3是图2A的助行机器人100进行导航的例子的示意框图。在一些实施例中,以例如将对应于助行机器人100的导航方法的指令(组)I n存储为存储单元120中的导航模块121并通过处理单元110执行所存储的指令I n的方式在助行机器人100中实现该导航方法,而后助行机器人100可以被导航到用户U处。可以响应于来自例如助行机器人100本身或控制装置200(的导航/操作系统)的对移动机器100的协助的请求来执行该导航方法,并且可以同时考虑通过助行机器人100的摄像机C检测到的障碍物(例如图4B中的障碍物O),而后可以响应于例如检测到意外的障碍物来重新执行。在其他实施例中,该导航方法还可以响应于通过助行机器人100的相机C检测到的用户U来执行该导航方法。
根据该导航方法,处理单元110可以通过助行机器人100的相机C来识别用户U的姿势P(图3的框310),姿势P可以包括站立、坐下和躺下,相机C可以是RGB-D相机,它提供图像I(包括彩色图像和深度图像)的连续流。在一些实施例中,可以先通过使用基于诸如MobileFaceNets等卷积神经网络的人脸匹配方法(或算法)以从存储单元120中存储的数据集中搜索和匹配人脸的特征的方式来识别图像I中用户U的人脸、从而识别用户U。然后,识别用户U的身体上的关键点P k,且基于例如(训练良好的)神经网络而提供用户U的估计骨架B上的关键点P k的3D(三维)位置。最后,可以通过例如使用分类器来分析估计骨架B上的关键点P k的位置关系的方式来区分用户U的姿势P、从而识别姿势P。同时,也可以根据识别出的关键点Pk来识别用户U的面向方向Df,例如可以每隔预定时间间隔(例如1秒)重新执行用户检测、直到完成根据所规划的轨迹的导航(图3的框330和图5的框530)。
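对于上述“通过分析估计骨架B上关键点的位置关系来区分姿势P”的做法,下面给出一个基于简单几何规则的Python草图。该草图仅为假设性示例:实际实施例中可以使用分类器,此处的关键点名称与高度阈值均为示意。

    # 假设性示例:根据3D关键点的高度关系粗略区分站姿、坐姿与卧姿
    def classify_posture(keypoints):
        # keypoints:关键点名称到(x, y, z)三维坐标的字典,z为离地高度(米)
        head_z = keypoints["head"][2]
        hip_z = (keypoints["left_hip"][2] + keypoints["right_hip"][2]) / 2
        ankle_z = (keypoints["left_ankle"][2] + keypoints["right_ankle"][2]) / 2
        if head_z - ankle_z < 0.4:      # 头部与脚踝几乎等高:卧姿
            return "lying"
        if hip_z - ankle_z < 0.6:       # 髋部离地较低:坐姿
            return "sitting"
        return "standing"               # 其余情况视为站姿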
在该导航方法中,处理单元110可根据对用户U执行的指定任务的任务类型K和所识别的用户U的姿势P来进一步确定助行机器人100的模式M(图3 的框320)。对用户U执行的指定任务可以划分为多个任务类型K,模式M可以是助行机器人100接近用户U的方法(或方式、策略),例如移动到用户U前面或旁边的位置,其根据任务类型K(例如辅助类型)和所识别的用户U的姿势P来确定将要对用户U执行的指定任务(例如行走辅助),使得在模式M中的助行机器人100可以以适合于任务类型K和所识别出的姿势P的方式来接近用户U。在其他实施例中,模式M可以是在助行机器人100的导航/操作系统中的助行机器人100的其他操作方法(例如助行机器人100为用户U服务的方法)。图4A是在图3的导航的例子中确定模式的例子的流程图(图3的框320),在一些实施例中,在模式的确定中,可以先确定要对用户U执行的指定任务的任务类型K(步骤321),指定任务的任务类型K可以包括协助类型K a和问候类型K g。诸如行走辅助和行走训练的用户辅助任务可能属于辅助类型K a,而诸如问候、交谈、情绪检测的用户交互任务可能属于问候类型K g。助行机器人100的模式M可以包括:
模式1:移动到用户U处并面向用户U;
模式2:移动到用户U处并转动;
模式3:移动到用户U处、转动、并靠近用户;以及
  模式4:移动到用户U的面向方向D f上的位置、转动、并靠近用户。
在指定任务的任务类型K被确定为问候类型K g的情况下,助行机器人100的模式M被确定为模式1,在指定任务的任务类型K被确定为辅助类型K a的情况下,将执行步骤322。可以通过存储在助行机器人100的存储单元120中的变量(例如常数)来表示所确定的模式M,其在确定模式M时被设置。或者,模式M可以只是一个抽象概念,在助行机器人100的导航/操作系统中没有表现出来,而只是通过上述模式确定中的步骤所选择的方法,使助行机器人100接近用户U(例如:如果步骤321判断指定任务的任务类型K为辅助类型K a,步骤322判断用户U的姿势P为坐姿P 2,而步骤323判断没有障碍物在用户U的面向方向D f上,助行机器人100将被移动到用户U的面向方向D f上的位置、转动、并移动靠近用户U,如同在模式4中所做的那样)。在模式1中,助行机器人100将移动到用户U处、并面向用户U以执行问候类型K g的用户交互任务,例如对用户U的问候、交谈或情绪检测。
在模式的确定中,其次,可以识别用户U的姿势P(步骤322),如上所述,用户U的姿势P可以包括站姿P 1、坐姿P 2和卧姿P 3。在确定用户U的姿势P为卧姿P 3的情况下,模式M被确定为模式1;在用户U的姿势P被确定为站姿P 1的情况下,模式M被确定为模式2。在模式2中,助行机器人100将移动到用户U处并进行转动,使得抓握部G面向用户U,从而被站立的用户U抓握。在判断出用户U的姿势P为坐姿P 2的情况下,执行步骤323。最后,在模式的判断中,可以检测用户U前方是否有障碍物(步骤323),在用户U的面向方向D f(即前方)存在障碍物的情况下,确定模式M为模式3,否则确定模式M为模式4。在模式3中,助行机器人100将移动到用户U处并进行转动,以使抓握部G面向用户U,然后朝向后方向D b移动一段距离以靠近用户U、以便被坐着的用户U抓握。在模式4中,助行机器人100将移动到用户U的面向方向D f上的位置并进行转动使得抓握部G面对用户U,然后朝着向后方向D b移动一段距离以靠近用户U、以便被坐着的用户U抓握。可以响应于例如任务类型K的改变、用户U的姿势P的改变以及障碍物的出现来重新执行模式的确定。
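与图4A的模式确定流程(步骤321至323)相对应,上述决策逻辑可以示意为如下的Python函数草图。该草图仅为假设性示例,其中的字符串常量与返回值仅用于说明。

    # 假设性示例:根据任务类型、用户姿势与前方障碍物情况确定模式1~4
    def determine_mode(task_type, posture, obstacle_in_facing_direction):
        if task_type == "greeting":    # 问候类型Kg:模式1
            return 1
        # 以下为辅助类型Ka(对应步骤322、323)
        if posture == "lying":         # 卧姿:模式1
            return 1
        if posture == "standing":      # 站姿:模式2
            return 2
        if posture == "sitting":       # 坐姿:前方有障碍物为模式3,否则为模式4
            return 3 if obstacle_in_facing_direction else 4
        raise ValueError("unknown posture: " + str(posture))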
图4B是说明期望位姿的不同位置的示意图。如图4B的上半部分所示,在模式1、2和3中,助行机器人100可以移动到用户U处,使得期望位姿S d(即轨迹中的一系列位姿中的最后一个,例如第一期望位姿Sd 1)位于距用户U距离D 1处,位于以用户U为中心的圆圈C上,其中圆圈C的半径为距离D 1(例如0.4米),同时避开障碍物O。如图4B的下半部分所示,在模式4中,助行机器人100可以移动到用户U处,使得期望位姿S d(例如第二期望位姿S d2) 位于距用户U距离D 2处,位于具有距离D 2(例如0.4米)的用户U的面向方向D f上的位置N上,其中距离D 1和距离D 2中的每一个可以是根据个人喜好等实际需要而确定的预定距离。
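对于图4B所示的期望位姿的两种放置方式(位于以用户U为圆心、半径为距离D 1的圆圈C上,或位于用户U的面向方向D f上距离D 2处的位置N),可以用如下的Python几何计算草图来说明。该草图仅为假设性示例,其中在圆上取靠近机器人当前位置的一侧、以及期望朝向取为指向用户,均为示意性的假设。

    # 假设性示例:计算期望位姿的位置与朝向(朝向取为指向用户)
    import math

    def desired_pose_on_circle(user_xy, robot_xy, d1=0.4):
        # 模式1/2/3:期望位姿位于以用户为圆心、半径d1的圆上
        ux, uy = user_xy
        rx, ry = robot_xy
        ang = math.atan2(ry - uy, rx - ux)          # 取靠近机器人当前位置的一侧
        x, y = ux + d1 * math.cos(ang), uy + d1 * math.sin(ang)
        return x, y, math.atan2(uy - y, ux - x)     # 偏航角指向用户

    def desired_pose_in_facing_direction(user_xy, facing_yaw, d2=0.4):
        # 模式4:期望位姿位于用户面向方向上、距用户d2的位置N
        ux, uy = user_xy
        x, y = ux + d2 * math.cos(facing_yaw), uy + d2 * math.sin(facing_yaw)
        return x, y, math.atan2(uy - y, ux - x)     # 偏航角指向用户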
在该导航方法中,处理单元110可以进一步控制助行器机器人100根据与所确定的助行器机器人100的模式M相对应的所规划的轨迹T(见图6A)移动(图3的框330)。轨迹T包括位姿序列(例如轨迹T 2的位姿S 0-S 6),该位姿序列的最后一个是期望位姿S d(例如轨迹T2的第二期望位姿S d2)。每个位姿包括助行器机器人100的位置(例如坐标系中的坐标)和姿态(例如坐标系中的欧拉角)。图5是图2A的助行机器人100进行导航的另一个例子的示意框图。在一些实施例中,可以在根据规划轨迹T的导航(图3的框330和图5的框530)失败的情况下执行恢复(图5的框550)。图6A是图5的助行机器人进行导航的例子中根据规划的轨迹进行导航的例子的示意框图。在一些实施例中,为实现根据规划轨迹T的导航(图5的框530),处理单元110可以根据所确定的模式M和检测到的障碍物O来规划轨迹T(图6A的框531)。轨迹T可以由上述全局轨迹规划器基于由地图构建器1211构建的地图来规划。例如可以在检测到障碍物O时重新规划轨迹T(参见图6A的框532)。此外,如果轨迹T的规划失败(例如当助行机器人100被障碍物O阻挡时),则可以执行恢复(图5的框550)。
图6B是图6A的导航的例子中的轨迹规划的例子的示意框图。在一些实施例中,为了规划轨迹T(图6A的框531),处理单元110可以确定用户U的面向方向D f(图6A的框5311)。如上所述,可以在用户检测中确定面向方向D f(图3的框310和图5的框510)。处理单元110可以进一步判断是否有障碍物O基本上位于用户U的面向方向D f上(图6A的框5312)。用户U前方的障碍物O将被检测、从而避开。处理单元110可以进一步规划轨迹T,使得期望位姿S d基本上位于用户U的面向方向D f上(参见图4B的下半部分),在助行 机器人100处于与指定的辅助类型K a的任务对应的模式M的情况下,姿势P为坐姿P 2,且在用户U的面向方向D f上基本没有障碍物O(在模式4中);处理单元110可进一步规划轨迹T,使得期望位姿S d位于距用户U距离D 1处(参见图4B的上半部分)。在助行机器人100处于与指定任务K a对应的模式M的情况下,姿势P为坐姿P 2,且存在基本上在用户U的面向方向D f上的障碍物O(在模式3中),助行机器人100处于对应于辅助类型K a的指定任务的模式M且姿势P为躺姿P 3(在模式1中),或者助行机器人100处于对应于问候类型K g的指定任务的模式M中(在模式1中)(图6A的框5313)。在一些实施例中,当助行机器人100处于模式4(前方没有障碍物O)时,处理单元110规划轨迹T使得期望姿势S d基本上位于用户U的面向方向D f(即在其前面)上,例如在用户U的面向方向D f上、而具有距离D 2(例如0.4米)的图4B的下半部分的位置N上。当助行机器人100处于模式3(前方有障碍物O)或模式1时,处理单元110规划轨迹T使得期望位姿S d位于距用户U距离D 1处,例如在图4B上半部分的以用户U为中心、半径为D1(例如0.4米)的圆圈C上。
为了实现根据规划轨迹T的导航(图5的框530),处理单元110可以进一步检测障碍物O(图6A的框532)。障碍物的检测可以在轨迹计划之后、之前或同时执行(图6A的框531)。障碍物O可以通过传感器子单元133来检测。在一些实施例中,可以通过传感器子单元133收集传感器数据,并对所收集的数据进行分析,以识别突然出现或在接近时突然发现的障碍物O。传感器数据可以从不同的传感器(例如RGB-D相机、激光雷达、声纳和红外线传感器)收集,并且可以使用诸如卡尔曼滤波器之类的融合算法来融合从传感器接收到的传感器数据,从而减少由于噪声数据引起的不确定性、融合不同速率的数据、以及合并相同对象的数据。例如可以通过传感器子单元133中的相机C来采集图像,而其他数据可以通过传感器子单元133中的其他传感器(例如激 光雷达、声纳和红外线传感器)收集,然后可以对收集到的数据进行融合分析,从而识别出障碍物O。处理单元110可以进一步基于所规划的全局轨迹、通过助行机器人100的传感器子单元133收集传感器数据,以及助行机器人100的当前姿态(即助行机器人100当前所处的轨迹T中的姿态)来规划局部轨迹(例如包括图1A中的位姿S 2-S 6的轨迹T 2的一部分)。可以通过相机C来采集图像,并且可以分析所采集的图像以识别障碍物(例如障碍物O),从而可以参考识别出的障碍物来规划局部轨迹,并且根据所规划的局部轨迹移动助行机器人100来避开障碍物。在一些实施例中,可以通过上述轨迹规划器以基于所规划的全局轨迹生成局部轨迹的方式来规划局部轨迹,同时考虑已识别的障碍(例如避开已识别的障碍)。此外,障碍物检测可以每隔预定时间间隔(例如1秒)重新执行,直到完成据所规划的轨迹T的导航(图5的框530)。
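作为上述多传感器数据融合(例如使用卡尔曼滤波器融合不同传感器的测量)思路的一个简化示意,下面的Python草图用一维卡尔曼更新融合两路带噪声的障碍物距离测量。该草图仅为假设性示例,实际实施例可以针对RGB-D相机、激光雷达、声纳等采用更完整的融合与检测算法。

    # 假设性示例:用一维卡尔曼更新融合两个传感器对同一障碍物距离的测量
    def kalman_update(est, var, meas, meas_var):
        k = var / (var + meas_var)                  # 卡尔曼增益
        return est + k * (meas - est), (1 - k) * var

    def fuse_obstacle_distance(lidar_d, lidar_var, sonar_d, sonar_var):
        est, var = lidar_d, lidar_var               # 以激光雷达测量作为初始估计
        return kalman_update(est, var, sonar_d, sonar_var)

    # 用法示例:两路测量分别为1.02米与0.95米,融合结果介于两者之间
    d, v = fuse_obstacle_distance(1.02, 0.01, 0.95, 0.04)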
为了实现根据规划轨迹T的导航(图5的框530),处理单元110还可以控制助行机器人100按照所规划的轨迹T移动(图6A的框533)。在一些实施例中,上述所规划的局部轨迹包括助行机器人100的多个姿态,助行机器人100的电机1321可以根据局部轨迹中的姿态被驱动,使得助行机器人100根据局部轨迹移动、以实现助行机器人100按照所规划的轨迹T进行导航。还可以触发跟踪控制器、以确保助行机器人100沿着所计划的轨迹T精确地移动,而且跟踪控制器可以产生用于驱动马达1321的速度。此外,障碍物检测(图6A的框532)可以重新被执行,例如在每个预定的移动距离(例如10毫米)中或在对应于每个位姿的移动之后,直到完成据所规划的轨迹T的导航(图5的框530)。此外,在移动控制(图6A的框533)中,在检测到的障碍物O是突然出现的移动障碍物或突然发现的难以检测的障碍物的情况下(由于缺乏感知意识,一些障碍物在远距离很难被检测到),助行机器人100的速度可能会降低、甚至完全停止,以防没有时间重新规划轨迹来绕道。这可以通过碰撞避免模块来实现,其通过降低助行机器人100的速度来实现快速响应,而无需复杂的计算。
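对于上述“跟踪控制器产生用于驱动马达1321的速度、并在突现障碍物时降低速度甚至完全停止”的做法,下面给出一个简化的Python速度指令草图。该草图仅为假设性示例,其中的增益与距离阈值均为示意。

    # 假设性示例:朝轨迹中的下一个位姿生成速度指令,并按障碍物距离限速
    import math

    def velocity_command(cur_x, cur_y, cur_yaw, goal_x, goal_y,
                         obstacle_dist, v_max=0.5, w_gain=1.5):
        heading = math.atan2(goal_y - cur_y, goal_x - cur_x)
        yaw_err = math.atan2(math.sin(heading - cur_yaw),
                             math.cos(heading - cur_yaw))
        if obstacle_dist < 0.3:                    # 障碍物过近:完全停止
            v = 0.0
        elif obstacle_dist < 1.0:                  # 障碍物较近:按比例降速
            v = v_max * (obstacle_dist - 0.3) / 0.7
        else:
            v = v_max
        return v, w_gain * yaw_err                 # 线速度与角速度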
在该导航方法中,在助行机器人100处于与辅助类型K a的指定任务和站姿P 1与坐姿P 2其中之一的姿势P相对应的模式M的情况下,处理单元110可以进一步在助行机器人100到达期望位姿S d时移动,使得抓握部G面向用户U(图5的框540)。在一些实施例中,处理单元110在达到期望姿势S d时移动(例如转动)助行机器人100,当助行机器人100处于模式2、模式3或模式4时,使得抓握部G面对用户U。图6C是图5的助行机器人进行导航的例子中的转动和接近的例子的示意框图。在一些实施例中,在助行机器人100处于与辅助类型K a的指定任务和站姿P 1与坐姿P 2其中之一的姿势P相对应的模式M的情况下,为了使抓握部G面向用户U(图5的框540),处理单元110可以在达到期望位姿S d时转动助行机器人100,使得抓握部G面向用户U(图6C的框541)。助行机器人100可以原位转动(即围绕穿过助行机器人100的轴而移动),例如通过转动两个前轮E(即前进方向D P上的两个轮子E),同时制动两个后轮E(即向后方向D B上的两个轮子E)。在一些实施例中,处理单元110在达到期望姿势S d时移动(例如转动)助行机器人100,当助行机器人100处于模式2、模式3或模式4时使抓握部G面向用户U,从而接近用户U以便于站着或坐着的用户U接触抓握部G。在助行机器人100中,相机C朝向助行机器人100直线移动时的前进方向D P设置,抓握部G朝向基本上与前进方向D P相反的向后方向D B设置。助行机器人100可以在达到期望位姿S d时旋转180度,以使抓握部G面向用户U。
为了使抓握部G面向用户U(图5的框540),处理单元110可以进一步将助行机器人100朝向后方向D b移动距离D 3(见图1C)、以靠近用户U,在助行机器人100处于与辅助类型K a的指定任务和坐姿P 2对应的模式M的情况下(图6C的框542),距离D 3可以是预定距离(例如0.3米)。在一些实施例中,当助行机器人100处于模式4时,处理单元110将助行机器人100朝向后方向D b移动距离D 3、以靠近用户U,从而进一步接近用户U、并且便于坐着的用户 U触及抓握部G,因为坐着的用户U不能像站着的用户U那样远的距离触及抓握部G。
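与上述在到达期望位姿后原位旋转180度、再朝向后方向移动距离D 3以靠近坐姿用户的过程相对应,可以给出如下的Python控制流程草图。该草图仅为假设性示例,其中rotate_in_place与move_straight为假设的底层运动接口,并非本申请实施例中实际存在的函数。

    # 假设性示例:到达期望位姿后,使抓握部面向用户并靠近坐姿用户
    import math

    def turn_and_approach(robot, mode, d3=0.3):
        # robot.rotate_in_place / robot.move_straight 为假设的底层接口
        if mode in (2, 3, 4):
            robot.rotate_in_place(math.pi)   # 原位旋转180度,使抓握部面向用户
        if mode in (3, 4):
            robot.move_straight(-d3)         # 朝向后方向移动距离D3,靠近坐姿用户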
在该导航方法中,处理单元110可以响应于导航中的用户检测(图3的框310和图5的框510)的失败、或根据所规划的轨迹T的导航(图5的框530)中的轨迹规划(图6A的框531)的失败而执行恢复(图5的框550)。例如,该恢复可以通过在重新尝试用户检测或根据所规划的轨迹T的导航之前,触发助行机器人100的扬声器来提醒用户U改变他/她的位置,以便更容易接近用户U。
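上述恢复操作可以示意为如下的Python草图。该草图仅为假设性示例,其中speak为假设的语音接口,重试间隔亦为示意。

    # 假设性示例:用户检测或轨迹规划失败时的恢复操作
    import time

    def recover(robot, retry_interval=10.0):
        robot.speak("请换一个更容易被接近的位置")  # 通过扬声器提醒用户(假设的接口)
        time.sleep(retry_interval)                  # 等待预定时间后再重试检测或规划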
为了保证助行机器人100的成功导航,可以使状态机根据状态机中所指定的状态顺序、通过例如调用相应的模块(例如路径规划器1212)来执行相关操作(例如轨迹规划)。图7是图2A的助行机器人的状态机200的例子的示意框图。状态机200是可以在任何给定时间仅处于助行机器人100的多个状态中的一个的抽象机器,其存储在助行机器人100的存储单元120中。在一些实施例中,状态机200可以具有:执行用户检测(图3的框310和图5的框510)的用户检测状态A 1;执行模式确定(图3的框320和图5的框520)的模式确定状态A 2;执行轨迹规划(图6A的框531)的轨迹规划状态A 3;执行移动控制(图6A的框533)的移动控制状态A 4;执行障碍物检测(图6A的框532)的障碍物检测状态A 5;以及执行恢复(图5的框550)的恢复状态A 6。
状态机200响应某些条件而从一种状态变为另一种状态。在检测到用户U(的姿势P)时,它可以从用户检测状态A 1变为模式确定状态A 2,并且在未检测到用户U(的姿势P)时改变为恢复状态A 6。在确定出助行机器人100的新检测到的或改变的模式M时,它可以从模式确定状态A 2变为轨迹规划状态A 3,或在确定出助行机器人100的模式M未改变(即所确定的模式M自上次确定以来未改变)时变为移动控制状态A 4。在规划出轨迹T时,它可以从轨迹规划状态A 3变为移动控制状态A 4,而在规划轨迹T失败时变为恢复状态A 6。它也可以从移动控制状态A 4变为障碍物检测状态A 5(例如在比如1秒的预定时间间隔之后),以及变为用户检测状态A 1(例如在比如2秒的另一个预定时间间隔之后)。它可以在检测到障碍物O时从障碍物检测状态A 5变为轨迹规划状态A 3,或在未检测到障碍物O时变为移动控制状态A 4。它可以从恢复状态A 6变为用户检测状态A 1,例如在比如10秒的又一个预定时间间隔之后。
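与图7的状态机200及其上述状态转移相对应,下面给出一个简化的Python状态机草图。该草图仅为假设性示例,仅列出部分状态与转移条件,事件名称均为示意。

    # 假设性示例:助行机器人导航状态机的状态与若干转移
    from enum import Enum, auto

    class State(Enum):
        USER_DETECTION = auto()       # A1:用户检测
        MODE_DETERMINATION = auto()   # A2:模式确定
        TRAJECTORY_PLANNING = auto()  # A3:轨迹规划
        MOTION_CONTROL = auto()       # A4:移动控制
        OBSTACLE_DETECTION = auto()   # A5:障碍物检测
        RECOVERY = auto()             # A6:恢复

    TRANSITIONS = {
        (State.USER_DETECTION, "user_detected"): State.MODE_DETERMINATION,
        (State.USER_DETECTION, "user_not_detected"): State.RECOVERY,
        (State.MODE_DETERMINATION, "mode_new_or_changed"): State.TRAJECTORY_PLANNING,
        (State.MODE_DETERMINATION, "mode_unchanged"): State.MOTION_CONTROL,
        (State.TRAJECTORY_PLANNING, "planned"): State.MOTION_CONTROL,
        (State.TRAJECTORY_PLANNING, "planning_failed"): State.RECOVERY,
        (State.MOTION_CONTROL, "obstacle_check_interval"): State.OBSTACLE_DETECTION,
        (State.MOTION_CONTROL, "user_check_interval"): State.USER_DETECTION,
        (State.OBSTACLE_DETECTION, "obstacle"): State.TRAJECTORY_PLANNING,
        (State.OBSTACLE_DETECTION, "no_obstacle"): State.MOTION_CONTROL,
        (State.RECOVERY, "retry_timeout"): State.USER_DETECTION,
    }

    def next_state(state, event):
        return TRANSITIONS.get((state, event), state)  # 未定义的事件保持原状态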
在一些实施例中,可以存在多个状态机,其中每个状态机对应于一个模式M,并且可以在确定出每个模式M时使用对应的状态机(图3的框320和图5的框520)。例如对于模式1,对应的状态机可以具有移动控制状态A 4,在该状态下进行移动控制(图6A的框533)和面向用户U的移动(例如转动);对于模式2,对应的状态机可以具有移动控制状态A 4,在该状态下进行移动控制、以及带有转动(图6C的框541)的转动和接近(参见图3的框340和图5的框540);对于模式3,对应的状态机可以具有移动控制状态A 4,在该状态下进行上述移动控制、以及带有上述转动和向后移动(图6C的框542)的上述转动和接近;对于模式4,对应的状态机可以具有轨迹规划状态A 3,在该状态下进行带有面向方向确定(图6B的框5311)的轨迹规划(图6A的框531)、上述的移动控制、以及带有上述转动和上述向后移动的上述转动和接近。
该导航方法对配备有前置相机的助行机器人进行导航,使其以适应用户姿势的方式接近用户,便于处于不同姿势的用户触及机器人(的抓握部),以帮助用户更有效、更安全、更顺畅地移动,从而提高机器人的有效性、安全性和用户体验。由于所需要的相机(例如RGBD相机)易于获得、价格低廉、体积小、重量轻、易于集成到机器人中,且该导航方法的执行只需要很少的计算资源,因此该导航方法易于在各种平台和机器人中实现。该导航方法的应用可以包括接近用户以进行行走辅助、问候、交谈、咨询、引导、筛查、体温监测等。
本领域技术人员可以理解,上述实施例中的全部或部分方法可以通过一个或多个计算机程序来实现指示相关硬件。此外,一个或多个程序可以存储在非易失性计算机可读存储介质中。当执行一个或多个程序时,执行上述实施例中 的相应方法的全部或部分。对存储(storage)、存储器、数据库或其他介质的任何引用均可包括非易失性和/或易失性存储器。非易失性存储器可以包括只读存储器(ROM)、可编程ROM(PROM)、电可编程ROM(EPROM)、电可擦除可编程ROM(EEPROM)、闪存、固态驱动器(SSD)等。易失性存储器可以包括随机存取存储器(RAM)、外部高速缓冲存储器等。
处理单元110(和上述处理器)可以包括中央处理单元(central processing unit,简称CPU),或者是其他通用处理器、数字信号处理器(digital signal processor,简称DSP)、专用集成电路(application specific integrated circuit,简称ASIC)、现场可编程门阵列(field-programmable gate array,简称FPGA)或者其他可编程逻辑器件、分立门、晶体管逻辑器件和分立硬件组件。通用处理器可以是微处理器,也可以是任何常规的处理器。存储单元120(和上述存储器)可以包括诸如硬盘和内部存储器的内部存储单元。存储单元120还可以包括例如插入式硬盘、智能媒体卡(smart media card,简称SMC)、安全数字(secure digital,简称SD)卡和闪存卡的外部存储设备。
实施例中所描述的示例性单元/模块和方法/步骤可以通过软件、硬件或者软件和硬件的结合来实现。这些功能究竟是通过软件还是硬件来实现,取决于技术方案的具体应用和设计约束。上述导航方法和助行机器人可以通过其他方式实现。例如,单元/模块的划分仅仅是一种逻辑上的功能划分,在实际实现中还可以采用其他划分方式,即多个单元/模块可以组合或集成到另一个系统中,或者某些特征可以被忽略或不执行。另外,上述的相互耦合/连接可以是直接耦合/连接或通信连接,也可以是通过一些接口/设备的间接耦合/连接或通信连接,还可以是电性、机械或其他的形式。
上述实施例仅用于说明本申请的技术方案,而非用于限制本申请的技术方案。虽然本申请已经结合上述实施例进行了详细说明,但是上述各个实施例中的技术方案仍然可以进行修改,或者对其中部分技术特征可以进行等同替换, 使得这些修改或替换并不使得相应技术方案的本质脱离本申请的各个实施例的技术方案的精神和范围,而均应包含在本申请的保护范围内。

Claims (20)

  1. 一种导航方法,用于将助行机器人导航到用户处、以对该用户执行指定任务,其中该机器人具有朝向不同方向设置的一个相机和至少一个抓握部,所述方法包括:
    通过该相机识别该用户的姿势;
    根据对该用户执行的该指定任务的类型和识别出的该用户的该姿势,确定出该机器人的模式;
    控制该机器人根据对应于该机器人的该确定出的模式的规划轨迹来移动,其中该轨迹包括位姿序列,该位姿序列中的最后一个位姿为期望位姿;以及
    响应于该机器人的该确定出的模式与辅助类型的该指定任务相对应、且该用户处于站姿和坐姿中的一个,在到达该期望位姿时转动该机器人、使得该抓握部面向该用户。
  2. 根据权利要求1所述的方法,在所述控制该机器人根据对应于该机器人的该确定出的模式的该规划轨迹来移动之前,所述方法还包括:
    确定该用户的面向方向;
    判断是否有障碍物基本上在该用户的该面向方向上;以及
    响应于该机器人的该确定出的模式与该辅助类型的该指定任务相对应、该用户处于该坐姿、而且基本上没有障碍物位于该用户的该面向方向上,规划该轨迹使得该期望位姿基本上位于该用户的该面向方向上。
  3. 如权利要求2所述的方法,其中在所述判断是否有障碍物基本上在该用户的该面向方向之后,所述方法还包括:
    响应于该机器人的该确定出的模式与该辅助类型的该指定任务相对应、该用户处于该坐姿、而且基本上没有障碍物位于该用户的该面向方向上,规划该轨迹使得该期望位姿位于距该用户第一距离处。
  4. 如权利要求1所述的方法,其中该相机朝向该机器人的前进方向设置, 而该抓握部朝向基本上与该前进方向相反的向后方向设置;所述在到达该期望位姿时转动该机器人、使得该抓握部面向该用户包括:
    在到达该期望位姿后将该机器人旋转180度、使该抓握部面向该用户。
  5. 如权利要求4所述的方法,其中在所述到达该期望位姿后将该机器人旋转180度、使该抓握部面向该用户之后,所述方法还包括:
    响应于该机器人的该确定出的模式与该辅助类型的该指定任务相对应、而且该用户处于该坐姿,在转动该机器人后将该机器人朝该向后方向移动第二距离以靠近该用户。
  6. 如权利要求1所述的方法,其中进一步通过以下方式规划该轨迹:
    响应于该机器人的该确定出的模式与该辅助类型的该指定任务相对应、而且该用户处于躺姿,或该确定出的模式与问候类型的该指定任务相对应,规划该轨迹、使该期望位姿位于距该用户第三距离处。
  7. 如权利要求1所述的方法,其中所述通过该相机识别该用户的该姿势包括:
    识别该用户身体上的多个关键点、以提供该些关键点在该用户的估计骨架上的三维位置;以及
    通过分析该用户的该估计骨架上的该些关键点的该三维位置来识别该用户的姿势。
  8. 一种助行机器人,包括:
    朝向该机器人的前进方向设置的相机;
    朝向与该相机不同的方向设置的至少一个抓握部;
    一或多个处理器;以及
    一或多个存储器,存储有一或多个计算机程序,该一或多个计算机程序由该一或多个处理器执行,其中该一或多个计算机程序包括多个指令用于:
    通过该相机识别该用户的姿势;
    根据对该用户执行的该指定任务的类型和识别出的该用户的该姿势,确定出该机器人的模式;
    控制该机器人根据对应于该机器人的该确定出的模式的规划轨迹来移动,其中该轨迹包括位姿序列,该位姿序列中的最后一个位姿为期望位姿;以及
    响应于该机器人的该确定出的模式与辅助类型的该指定任务相对应、且该用户处于站姿和坐姿中的一个,在到达该期望位姿时转动该机器人、使得该抓握部面向该用户。
  9. 如权利要求8所述的机器人,其中所述一或多个程序还包括以下指令:
    确定该用户的面向方向;
    判断是否有障碍物基本上在该用户的该面向方向上;以及
    响应于该机器人的该确定出的模式与该辅助类型的该指定任务相对应、该用户处于该坐姿、而且基本上没有障碍物位于该用户的该面向方向上,规划该轨迹使得该期望位姿基本上位于该用户的该面向方向上。
  10. 如权利要求9所述的机器人,其中所述一或多个程序还包括以下指令:
    响应于该机器人的该确定出的模式与该辅助类型的该指定任务相对应、该用户处于该坐姿、而且基本上没有障碍物位于该用户的该面向方向上,规划该轨迹使得该期望位姿位于距该用户第一距离处。
  11. 根据权利要求8所述的机器人,其中该抓握部朝向基本上与该前进方向相反的向后方向设置;所述在到达该期望位姿时转动该机器人、使得该抓握部面向该用户包括:
    在到达该期望位姿后将该机器人旋转180度、使该抓握部面向该用户。
  12. 如权利要求11所述的机器人,其中所述一或多个程序还包括以下指令:
    响应于该机器人的该确定出的模式与该辅助类型的该指定任务相对应、而且该用户处于该坐姿,在转动该机器人后将该机器人朝该向后方向移动第二距离以靠近该用户。
  13. 如权利要求8所述的机器人,其中所述一或多个程序还包括以下指令:
    响应于该机器人的该确定出的模式与该辅助类型的该指定任务相对应、而且该用户处于躺姿,或该确定出的模式与问候类型的该指定任务相对应,规划该轨迹、使该期望位姿位于距该用户第三距离处。
  14. 如权利要求8所述的机器人,其中所述通过该相机识别该用户的该姿势包括:
    识别该用户身体上的多个关键点、以提供该些关键点在该用户的估计骨架上的三维位置;以及
    通过分析该用户的该估计骨架上的该些关键点的该三维位置来识别该用户的姿势。
  15. 一种计算机可读存储介质,存储有一或多个计算机程序,其中该一或多个计算机程序包括多个指令,当该多个指令由具有朝向不同方向设置的一个相机和至少一个抓握部的助行机器人执行时,使该机器人:
    通过该相机识别该用户的姿势;
    根据对该用户执行的该指定任务的类型和识别出的该用户的该姿势,确定出该机器人的模式;
    控制该机器人根据对应于该机器人的该确定出的模式的规划轨迹来移动,其中该轨迹包括位姿序列,该位姿序列中的最后一个位姿为期望位姿;以及
    响应于该机器人的该确定出的模式与辅助类型的该指定任务相对应、且该用户处于站姿和坐姿中的一个,在到达该期望位姿时转动该机器人、使得该抓握部面向该用户。
  16. 如权利要求15所述的存储介质,其中该一或多个程序还包括多个指令使该机器人:
    确定该用户的面向方向;
    判断是否有障碍物基本上在该用户的该面向方向上;以及
    响应于该机器人的该确定出的模式与该辅助类型的该指定任务相对应、该用户处于该坐姿、而且基本上没有障碍物位于该用户的该面向方向上,规划该轨迹使得该期望位姿基本上位于该用户的该面向方向上。
  17. 如权利要求16所述的存储介质,其中该一或多个程序还包括多个指令使该机器人:
    响应于该机器人的该确定出的模式与该辅助类型的该指定任务相对应、该用户处于该坐姿、而且基本上没有障碍物位于该用户的该面向方向上,规划该轨迹使得该期望位姿位于距该用户第一距离处。
  18. 如权利要求15所述的存储介质,其中该相机朝向该机器人的前进方向设置,而该抓握部朝向基本上与该前进方向相反的向后方向设置;所述在到达该期望位姿时转动该机器人、使得该抓握部面向该用户包括:
    在到达该期望位姿后将该机器人旋转180度、使该抓握部面向该用户。
  19. 如权利要求18所述的存储介质,其中该一或多个程序还包括多个指令使该机器人:
    响应于该机器人的该确定出的模式与该辅助类型的该指定任务相对应、而且该用户处于该坐姿,在转动该机器人后将该机器人朝该向后方向移动第二距离以靠近该用户。
  20. 如权利要求15所述的存储介质,其中该一或多个程序还包括多个指令使该机器人:
    响应于该机器人的该确定出的模式与该辅助类型的该指定任务相对应、而且该用户处于躺姿,或该确定出的模式与问候类型的该指定任务相对应,规划该轨迹、使该期望位姿位于距该用户第三距离处。
PCT/CN2022/073209 2021-05-25 2022-01-21 助行机器人导航方法、助行机器人及计算机可读存储介质 WO2022247325A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202280004524.0A CN115698631A (zh) 2021-05-25 2022-01-21 助行机器人导航方法、助行机器人及计算机可读存储介质

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US17/329,197 US20220382282A1 (en) 2021-05-25 2021-05-25 Mobility aid robot navigating method and mobility aid robot using the same
US17/329,197 2021-05-25

Publications (1)

Publication Number Publication Date
WO2022247325A1 true WO2022247325A1 (zh) 2022-12-01

Family

ID=84195106

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/073209 WO2022247325A1 (zh) 2021-05-25 2022-01-21 助行机器人导航方法、助行机器人及计算机可读存储介质

Country Status (3)

Country Link
US (1) US20220382282A1 (zh)
CN (1) CN115698631A (zh)
WO (1) WO2022247325A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116659518A (zh) * 2023-07-31 2023-08-29 小舟科技有限公司 智能轮椅自主导航方法、装置、终端及介质

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20240025682A (ko) * 2022-01-26 2024-02-27 저지앙 이헝위에 메디컬 테크놀로지 컴퍼니 리미티드 보행 보조기의 조타 보조력 제어 방법, 조타 보조력 제어 장치 및 메모리
TWI833646B (zh) * 2023-05-12 2024-02-21 緯創資通股份有限公司 助行器、助行器輔助系統及其運作方法

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8909370B2 (en) * 2007-05-08 2014-12-09 Massachusetts Institute Of Technology Interactive systems employing robotic companions
JP2013022705A (ja) * 2011-07-25 2013-02-04 Sony Corp ロボット装置及びロボット装置の制御方法、コンピューター・プログラム、並びにロボット・システム
JP2013111737A (ja) * 2011-12-01 2013-06-10 Sony Corp ロボット装置及びその制御方法、並びにコンピューター・プログラム
US9146560B2 (en) * 2012-03-30 2015-09-29 Irobot Corporation System and method for implementing force field deterrent for robot
EP2954883B1 (en) * 2013-02-07 2024-03-06 FUJI Corporation Movement assistance robot
AU2014236686B2 (en) * 2013-03-15 2017-06-15 Ntt Disruption Us, Inc. Apparatus and methods for providing a persistent companion device
US9530299B2 (en) * 2015-03-03 2016-12-27 Toyota Motor Engineering & Manufacturing North America, Inc. Methods and apparatuses for assisting a visually-impaired user
US10667697B2 (en) * 2015-06-14 2020-06-02 Facense Ltd. Identification of posture-related syncope using head-mounted sensors
US20180143634A1 (en) * 2016-11-22 2018-05-24 Left Hand Robotics, Inc. Autonomous path treatment systems and methods
KR102012550B1 (ko) * 2017-02-20 2019-08-20 엘지전자 주식회사 돌발 장애물을 식별하는 방법 및 이를 구현하는 로봇
JP2020527266A (ja) * 2017-07-10 2020-09-03 トラベルメイト ロボティクス, インク.Travelmate Robotics, Inc. 自律ロボットシステム
CN107315414B (zh) * 2017-07-14 2021-04-27 灵动科技(北京)有限公司 一种控制机器人行走的方法、装置及机器人
TWI657812B (zh) * 2017-11-14 2019-05-01 緯創資通股份有限公司 助行裝置
TWI665068B (zh) * 2018-02-06 2019-07-11 世擘股份有限公司 自動清潔裝置以及自動充電方法
WO2019236627A1 (en) * 2018-06-04 2019-12-12 John Mark Norton Mobility assistance device
US11175147B1 (en) * 2018-07-20 2021-11-16 Digital Dream Labs, Llc Encouraging and implementing user assistance to simultaneous localization and mapping
WO2020230734A1 (ja) * 2019-05-15 2020-11-19 株式会社ジェイテクト 歩行支援装置
WO2021025715A1 (en) * 2019-08-07 2021-02-11 Boston Dynamics, Inc. Navigating a mobile robot
US11030465B1 (en) * 2019-12-01 2021-06-08 Automotive Research & Testing Center Method for analyzing number of people and system thereof
CN113116224B (zh) * 2020-01-15 2022-07-05 科沃斯机器人股份有限公司 机器人及其控制方法
JP2022048017A (ja) * 2020-09-14 2022-03-25 株式会社東芝 作業推定装置、方法およびプログラム

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105629971A (zh) * 2014-11-03 2016-06-01 贵州亿丰升华科技机器人有限公司 一种机器人自动充电系统及其控制方法
CN105769482A (zh) * 2015-01-09 2016-07-20 松下电器产业株式会社 生活辅助系统和生活辅助方法
US20190220020A1 (en) * 2019-03-26 2019-07-18 Intel Corporation Methods and apparatus for dynamically routing robots based on exploratory on-board mapping
CN110815240A (zh) * 2019-11-11 2020-02-21 中国地质大学(武汉) 一种灌篮与图形认知助长机器人

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116659518A (zh) * 2023-07-31 2023-08-29 小舟科技有限公司 智能轮椅自主导航方法、装置、终端及介质
CN116659518B (zh) * 2023-07-31 2023-09-29 小舟科技有限公司 智能轮椅自主导航方法、装置、终端及介质

Also Published As

Publication number Publication date
CN115698631A (zh) 2023-02-03
US20220382282A1 (en) 2022-12-01

Similar Documents

Publication Publication Date Title
WO2022247325A1 (zh) 助行机器人导航方法、助行机器人及计算机可读存储介质
Bauer et al. The autonomous city explorer: Towards natural human-robot interaction in urban environments
EP2980670B1 (en) Robot cleaning system and method of controlling robot cleaner
US20190184569A1 (en) Robot based on artificial intelligence, and control method thereof
EP3094452B1 (en) Remotely operating a mobile robot
US7383107B2 (en) Computer-controlled power wheelchair navigation system
US20080300777A1 (en) Computer-controlled power wheelchair navigation system
WO2004052597A1 (ja) ロボット制御装置、ロボット制御方法、及びロボット制御プログラム
JP5764795B2 (ja) 移動ロボット、移動ロボット用の学習システムおよび移動ロボットの行動学習方法
US11712802B2 (en) Construction constrained motion primitives from robot maps
JP6150429B2 (ja) ロボット制御システム、ロボット、出力制御プログラムおよび出力制御方法
JP6134894B2 (ja) ロボット制御システムおよびロボット
WO2020114214A1 (zh) 导盲方法和装置,存储介质和电子设备
Silva et al. Navigation and obstacle avoidance: A case study using Pepper robot
JP6142307B2 (ja) 注目対象推定システムならびにロボットおよび制御プログラム
US20180329424A1 (en) Portable mobile robot and operation thereof
JP5732633B2 (ja) コミュニケーションロボット
JP7317436B2 (ja) ロボット、ロボット制御プログラムおよびロボット制御方法
JP5115886B2 (ja) 道案内ロボット
Trahanias et al. Navigational support for robotic wheelchair platforms: an approach that combines vision and range sensors
US20220339786A1 (en) Image-based trajectory planning method and movement control method and mobile machine using the same
JP2020004182A (ja) ロボット、ロボット制御プログラムおよびロボット制御方法
WO2022252722A1 (zh) 地毯检测方法、运动控制方法以及使用该些方法的移动机器
US20230404823A1 (en) Control system and control method for controlling electric walking aid device
JP7258438B2 (ja) ロボット、ロボット制御プログラムおよびロボット制御方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22810062

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE